|
Submit Paper / Call for Papers
The journal receives papers in continuous flow, and we consider articles from a wide range of Information Technology disciplines, from basic research to the most innovative technologies. Please submit your papers electronically to our submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a compatible format so that they may be evaluated for publication in the upcoming issue. This journal uses a blinded review process; please remember to include all your personal identifiable information in the manuscript before submitting it for review, and we will edit out the necessary information at our side. Submissions to JATIT should be full research / review papers (properly indicated below the main title).
|
Journal of
Theoretical and Applied Information Technology
July 2025 | Vol. 103 No.13 |
Title: |
OPTIMIZING EMERGENCY RESPONSES VIA DISASTER LSTM ALGORITHM FOR SOCIAL MEDIA
ANALYSIS |
Author: |
S. BABY SUDHA, Dr. S. DHANALAKSHMI |
Abstract: |
The increasing frequency of natural disasters demands efficient and accurate
classification systems for real-time information dissemination. This study
develops and evaluates classification systems for tweets related to six disaster
types—hurricane, forest fire, tornado, drought, flood, and fire—aiming to
enhance disaster response capabilities through improved classification accuracy
and sentiment analysis. The study uses a dataset of labeled tweets,
incorporating both disaster type and sentiment labels (positive, negative,
neutral) to assess public reactions. By integrating sentiment analysis into the
classification process, emergency responders can identify disaster events and
gauge the emotional tone of the population in real time, providing crucial
insights for better decision-making. Two models were implemented: a K-Nearest
Neighbors (KNN) algorithm and a Disaster Long Short-Term Memory (DLSTM) network.
The KNN algorithm, used as a baseline, showed adequate performance but struggled
with the complexities of sequential text data and sentiment analysis. In
contrast, the Disaster LSTM model, which utilized both disaster classification
and sentiment analysis, significantly outperformed KNN. It achieved an overall
accuracy of 84.0%, with 87.2% for hurricanes and 81.9% for droughts, while also
demonstrating strong performance in sentiment classification. The novelty of
this study lies in its dual-focus model, which simultaneously classifies
disaster types and associated sentiments using a fine-tuned LSTM network. This
integrated approach offers a more comprehensive understanding of real-time
public discourse during disasters—unlike prior studies that addressed these
aspects in isolation. These improvements enable faster disaster identification,
more efficient resource allocation, and a better understanding of public
emotions during crises. This research contributes to the fields of natural
language processing and disaster management by constructing a robust framework
for leveraging social media data for real-time situational awareness and
emergency response. |
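The dual-focus architecture summarized above is, in essence, a single recurrent text encoder feeding two softmax heads. The Python/Keras sketch below illustrates that structure; it is not the authors' code, and the vocabulary size, sequence length, and layer widths are illustrative assumptions.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    VOCAB_SIZE = 20000   # assumed tokenizer vocabulary
    MAX_LEN = 50         # assumed maximum tweet length in tokens

    # Shared text encoder: embedding followed by an LSTM layer.
    inp = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, 128, mask_zero=True)(inp)
    x = layers.LSTM(64)(x)
    x = layers.Dropout(0.3)(x)

    # Two task-specific heads trained jointly.
    disaster_out = layers.Dense(6, activation="softmax", name="disaster")(x)    # 6 disaster types
    sentiment_out = layers.Dense(3, activation="softmax", name="sentiment")(x)  # pos/neg/neutral

    model = Model(inp, [disaster_out, sentiment_out])
    model.compile(
        optimizer="adam",
        loss={"disaster": "sparse_categorical_crossentropy",
              "sentiment": "sparse_categorical_crossentropy"},
        metrics=["accuracy"],
    )
    model.summary()

Training both heads jointly lets the sentiment signal and the disaster-type signal share one learned representation, which is one way to realize the integrated approach the abstract describes.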
Keywords: |
Disaster Response, Disaster Long Short Term Memory, Tweet Classification,
Real-Time Information, Natural Language Processing, Deep Learning, Social Media
Data. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
A NOVEL METHOD TO ESTIMATE HIDDEN NEURONS IN ENSEMBLE FLOOD FORECASTING MODEL |
Author: |
NAZLI MOHD KHAIRUDIN, NORWATI MUSTAPHA, TEH NORANIS MOHD ARIS, MASLINA
ZOLKEPLI |
Abstract: |
Machine learning models such as neural networks have been widely adopted to provide flood forecasts. The self-adaptability of neural networks enables them to learn patterns on their own and adjust the connections between neurons, yet forecast errors persist. In many neural network adaptations for flood forecasting, the number of hidden neurons is selected at random, which can cause overfitting in the network. In this study, a novel method for estimating the number of hidden neurons is proposed to overcome this problem. The method integrates the evaluation of various convergence theorem criteria with grid search to estimate the number of hidden neurons; through this integration, an optimal, fixed number of hidden neurons can be determined. The method is used in an ensemble model based on neural networks that forecasts water level from rainfall data. Based on performance measured using Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Nash-Sutcliffe Efficiency (NSE), it is found that the integration of the convergence theorem criteria and grid search can be used to fix the number of hidden neurons and reduces the error that leads to overfitting in the forecast. |
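The grid-search half of the proposed estimation method can be sketched as a search over candidate hidden-neuron counts scored by RMSE. The snippet below is a minimal illustration on synthetic data; the convergence-criterion screening that the paper adds is not shown, and the data, ranges, and hyperparameters are assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import GridSearchCV

    # Toy stand-in for rainfall -> water level data (the paper uses real gauge data).
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))                  # e.g. lagged rainfall readings
    y = X.sum(axis=1) + rng.normal(0, 0.1, 200)

    # Grid-search candidate hidden-neuron counts; RMSE is the selection score.
    grid = GridSearchCV(
        MLPRegressor(max_iter=2000, random_state=0),
        param_grid={"hidden_layer_sizes": [(h,) for h in range(2, 21, 2)]},
        scoring="neg_root_mean_squared_error",
        cv=5,
    )
    grid.fit(X, y)
    print("Selected hidden neurons:", grid.best_params_["hidden_layer_sizes"][0])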
Keywords: |
Neural network, Hidden neuron estimation, Flood forecasting, Grid search,
Convergence Theorem |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
ENHANCED ENERGY EFFICIENCY AND POWER ANALYSIS IN SOFTWARE-DEFINED NETWORKS
THROUGH MALICIOUS SWITCH DETECTION |
Author: |
Dr. THANGARAJ E, Dr. MOHAMED MALICK, Dr. AROCKIA JAYADHAS S, Dr. BARKATHULLA,
Dr. HARIHARASUDHAN S, R MATHU SUDHANAN |
Abstract: |
Software-Defined Networking (SDN) has progressed as a network model that serves the differing requirements of real-time traffic flows across large-scale switches. A basic function of SDN, and a vital one, is the partition of the data plane from the controller plane while accepting the needed changes as requirements dictate. SDN provides broad programmability of switches and architectural concepts for services and applications. In our proposed system, a deep learning environment is built using TensorFlow and Keras, and a Kaggle dataset is used for trusted and malicious flow detection. In the analysis stage, power analysis is added to the controller build. Overall SDN performance depends on energy consumption, load balancing, and traffic management. This proposed approach presents the energy efficiency and power budget of the Software-Defined Network. We focus on the energy efficiency of the network by improving a previously proposed route-selection algorithm, and we propose an analysis method to determine power-management functionality across various domains. The simulation results show that power analysis feature classification helps identify trusted switch flows. We also propose an empirical algorithm and a mathematical model of energy consumption for the controller plane to maximize network performance and scalability. The overall performance study shows that the proposed system outperforms existing algorithms in drop rate, constant throughput, load balancing, and energy efficiency of SDN switches, and the comparative study indicates that the proposed algorithm saves approximately 30% of energy. |
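As a rough sketch of the TensorFlow/Keras detection stage described above, the following minimal example trains a binary trusted/malicious flow classifier on synthetic flow features; the feature columns, layer sizes, and labels are illustrative assumptions, not the paper's configuration.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Toy stand-in for flow records (the paper uses a Kaggle SDN dataset);
    # columns might be packet count, byte count, duration, etc.
    rng = np.random.default_rng(1)
    X = rng.random((1000, 8)).astype("float32")
    y = (X[:, 0] + X[:, 1] > 1.0).astype("float32")  # synthetic trusted/malicious label

    model = models.Sequential([
        layers.Input(shape=(8,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # 1 = malicious flow
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print("training accuracy:", model.evaluate(X, y, verbose=0)[1])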
Keywords: |
Software Defined Networking, Kaggle, Convolutional Neural Networks, Keras, Power Analysis, Load Balancing, Energy Consumption, TensorFlow. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
ENHANCING BRAIN TUMOR CLASSIFICATION THROUGH ADVANCED IMAGE PROCESSING, HYBRID
FEATURE EXTRACTION, MRMR FEATURE SELECTION, AND MULTI-CLASS CLASSIFIERS |
Author: |
GOURI SANKAR NAYAK, DR. PRADEEP KUMAR MALLICK, DR. BHUSHAN MARUTIRAO NANCHE, DR.
NEELMADHAB PRADHI, DR. DILLIP RANJAN NAYAK |
Abstract: |
The purpose of this research is to create a holistic methodology for detecting and classifying brain tumors using up-to-date image processing, feature extraction, and machine learning methods. To improve diagnostic accuracy, this research evaluated the performance of various classifiers while expanding the feature representation, in order to select the best available model for brain tumor classification. Techniques: the process includes image enhancement using the TV-L1 norm together with MRI segmentation algorithms. Preparation: using sophisticated MRI segmentation techniques, tumor regions were precisely identified, and image quality was improved by applying the TV-L1 norm for efficient image enhancement. For robust and rich feature extraction, Multi-Scale Local Binary Pattern (MSLBP) texture analysis was integrated with the multi-dimensional feature representation of the Quaternion Wavelet Transform (QWT), and the Histogram of Oriented Gradients (HOG) was leveraged to capture edge and shape data quickly. Selecting features effectively: dimensionality was reduced, and classification effectiveness thereby increased, by selecting and prioritizing the most useful features via the Minimum Redundancy Maximum Relevance (MRMR) technique. Classification techniques: a number of classifiers were constructed; Random Forest offers great accuracy and robustness, Support Vector Machines (SVMs) prove effective at modeling complex data distributions, and K-Nearest Neighbors (KNN) is utilized for local patterns because it is simple and effective. Results: the techniques were evaluated on the dataset from China's Nanfang Hospital and General Hospital, which is available on Kaggle. With a low error rate (0.0147), high sensitivity (98.53%), and specificity of 99.51%, Random Forest (RF) had the highest accuracy, at 98.53%. Naive Bayes (NB) performed the worst at 96.67%, while SVM and KNN produced slightly lower accuracies of 97.22% and 97.50%, respectively. The more sophisticated processing, feature extraction, and selection procedures in the preprocessing suite contributed significantly to the increased classification accuracy. RF showed great promise for clinical use in identifying brain tumors, achieving high sensitivity and specificity while decreasing false positives at an accuracy of 98.53%. Although RF consistently outperformed SVM and KNN on all measures, both performed well in their own right. |
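A minimal sketch of the MRMR-then-Random-Forest stage, assuming a greedy score of mutual-information relevance minus mean correlation redundancy (one common mRMR formulation, not necessarily the exact variant used here); the synthetic data and feature budget are assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

    # Greedy mRMR-style selection: relevance = mutual information with the label,
    # redundancy = mean absolute correlation with already-selected features.
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < 10:
        rest = [j for j in range(X.shape[1]) if j not in selected]
        score = [relevance[j] - corr[j, selected].mean() for j in rest]
        selected.append(rest[int(np.argmax(score))])

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy on selected features:",
          cross_val_score(rf, X[:, selected], y, cv=5).mean())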
Keywords: |
Histogram of Oriented Gradients, K-Nearest Neighbors, Multi-Scale Local Binary
Pattern, Naïve Bayes Classifier, Quaternion Wavelet Transform, Random Forest
Classifier, Support Vector Machine, TV-L1 norm |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
A FUTURE TREND ON 5G NETWORK SUB-SLICING TECHNIQUES FOR MACHINE LEARNING
ALGORITHMS |
Author: |
A. KARTHIKA, J. YAMUNA BEE, S. ARUN KUMAR, Dr. A. ANNA LAKSHMI, Dr. G. UMA
MAHESWARI, Dr. NAGARAJAN GURUSAMY, N. RAGAVENDRAN, Dr. M. VARGHEESE |
Abstract: |
One of the primary objectives of 5G networks is to meet the need for vertical
services. Augmented reality/virtual reality, electronic health records, live
video streaming, robotics, driverless vehicles, and many other applications
might benefit from 5G networks. Optimal 5G Network Sub-Slicing Automation
(ONSSA) is a novel machine learning framework introduced in this paper for
autonomous and dynamic 5G network slice processing. After analyzing historical
data and the present state of the network, the framework uses the LazyPredict
module to automatically choose the most effective unsupervised learning
algorithms. For systems that run concurrently at numerous infrastructure layer
levels, network slicing poses very difficult and critical security challenges.
Among the security challenges that need attention is the implementation of intra-slice, inter-slice, and multi-domain security. The data is vulnerable to hacking when a conventional machine-learning-based network is used. Accordingly, Data Communication Network as a Service (DCNaaS) is a whole new service model for Mobile Network Organizations (MNOs). To
evaluate the framework, we used Python for dynamic testing and Anaconda Spyder
for machine learning implementation. To address the slice allocation problem,
this research makes use of datasets and machine learning techniques. We have
evaluated the slice allocation approach using many ML models. We demonstrated in
the simulation that the proposed method correctly allocates the best slice for
service. |
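The automatic model-selection step can be approximated with the LazyPredict package mentioned in the abstract, which fits and ranks many scikit-learn models in one call. The sketch below frames slice selection as supervised classification on synthetic features; the data, labels, and three-class slice scheme are illustrative assumptions.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from lazypredict.Supervised import LazyClassifier   # pip install lazypredict

    # Toy stand-in for per-flow network features and slice labels
    # (the paper derives these from historical 5G network data).
    rng = np.random.default_rng(0)
    X = rng.random((500, 6))
    y = rng.integers(0, 3, 500)   # e.g. three assumed slice classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # LazyPredict fits many candidate models at once and ranks them,
    # mirroring the automatic algorithm-selection step described above.
    clf = LazyClassifier(verbose=0, ignore_warnings=True)
    models, _ = clf.fit(X_tr, X_te, y_tr, y_te)
    print(models.head())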
Keywords: |
5G Networks, Optimal 5G Network Sub-Slicing Automation, LazyPredict, Mobile Network Organizations, Data Communication Network as a Service, Machine Learning |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
EARLY PREDICTION OF HEART DISEASE USING DEEP BELIEF-ASSISTED NEURAL PREDICTOR
(DBANP) MODEL |
Author: |
M. RANJANI, Dr. P. R. TAMILSELVI |
Abstract: |
Heart disease is considered one of the major diseases, and nowadays it causes the loss of numerous lives in our country; early prediction is therefore needed. The Deep Belief-Assisted Neural Predictor (DBANP) model is proposed in
this study to increase the accuracy of heart disease prediction. The model
overcomes the drawbacks of previous models like Recurrent Neural Networks (RNN)
and Long Short-Term Memory (LSTM), which frequently suffer from vanishing
gradients and lengthy training times, by combining an ANN for initial feature
extraction with a DBN for capturing intricate hierarchical dependencies. Models
were developed and evaluated using the Cleveland Heart Disease Dataset and the
Cardiovascular Disease Dataset. To guarantee high-quality input data, extensive
pre-processing was carried out, including managing missing values and feature
selection. Using important measures including Accuracy, Precision, Recall,
Specificity, F1 Score, AUC-ROC, and AUC-PR, the DBANP model was contrasted with
RNN and LSTM models. According to experimental data, the suggested model
performs noticeably better than current models, providing increased predictive
power and resilience. By offering a hybrid deep learning system for early
disease identification that is both scalable and flexible, this study advances
predictive healthcare. |
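As an orientation sketch of the deep-belief side of such a hybrid, scikit-learn's BernoulliRBM (the building block of a deep belief network) can feed learned features to a simple classifier. This stands in loosely for the DBANP pipeline and is not the proposed model; the data, the single-RBM depth, and the logistic-regression head are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    # Toy stand-in for tabular heart-disease records (e.g. Cleveland-style features).
    X, y = make_classification(n_samples=400, n_features=13, random_state=0)

    # One RBM learns hierarchical features that a simple classifier consumes.
    pipe = Pipeline([
        ("scale", MinMaxScaler()),                       # RBM expects [0, 1] inputs
        ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                             n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())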
Keywords: |
Heart disease prediction, DBANP model, artificial neural network, deep belief
network, Cleveland Heart Disease Dataset, Cardiovascular Disease Dataset, deep
learning, early diagnosis, predictive healthcare. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
POSTQUANTUM MERKLE SIGNATURE BASED ON MODIFIED LAMPORT ALGORITHM |
Author: |
LARISA CHERCKESOVA, ELENA REVYAKINA, NIKITA LYASHENKO |
Abstract: |
With the advancement of quantum computing, conventional digital signature
algorithms such as RSA and El-Gamal are increasingly vulnerable to quantum
attacks. This presents a significant challenge for ensuring the security and
integrity of electronic signatures. As a result, there is a need to develop
post-quantum signature algorithms that can withstand attacks from quantum
computers while maintaining computational efficiency. This study proposes a
novel modification of the Merkle post-quantum signature scheme, integrating an
optimized version of Lamport’s one-time signature algorithm. The core
contribution of this work is the design and implementation of a new algorithm
that significantly reduces signature verification time without compromising
cryptographic strength. A software implementation of the modified scheme was
developed using Python and the PyQt 6 library, allowing for practical testing
and analysis. The performance of the modified algorithm was compared against the
standard Lamport algorithm using execution time measurements and statistical
analysis. The experimental results demonstrate that the modified algorithm
significantly improves signature verification speed, reducing the time required
by up to 44.81% compared to the standard algorithm. The study presents a more
efficient post-quantum Merkle signature scheme with a modified Lamport algorithm
that enhances signature verification speed while maintaining strong
cryptographic security. The results suggest that the proposed scheme is
particularly well-suited for environments where fast authentication of multiple
digital signatures is required. Experimental results confirm the advantage of
the proposed approach, offering a more efficient solution for secure, high-speed
digital authentication in post-quantum environments. |
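For orientation, the unmodified Lamport one-time signature that the scheme optimizes can be written compactly with standard hashing. This baseline sketch (SHA-256, 256 message bits; not the paper's modified algorithm) shows why verification cost is dominated by hash evaluations, which is what the proposed modification targets.

    import hashlib
    import secrets

    def keygen(bits: int = 256):
        # Private key: two random preimages per message bit; public key: their hashes.
        sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(bits)]
        pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
        return sk, pk

    def sign(message: bytes, sk):
        digest = hashlib.sha256(message).digest()
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(sk))]
        return [sk[i][b] for i, b in enumerate(bits)]   # reveal one preimage per bit

    def verify(message: bytes, sig, pk) -> bool:
        digest = hashlib.sha256(message).digest()
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(pk))]
        return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
                   for i, b in enumerate(bits))

    sk, pk = keygen()
    sig = sign(b"post-quantum test", sk)
    print(verify(b"post-quantum test", sig, pk))   # True
    print(verify(b"tampered message", sig, pk))    # False

In a Merkle scheme, many such one-time public keys are aggregated under a single hash-tree root, so each verification additionally checks an authentication path to that root.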
Keywords: |
Post-Quantum Algorithm, Digital Signature, Merkle Signature Scheme, Lamport
Signature. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
DRAGONFLY-INSPIRED OLSR PROTOCOL FOR INTERFERENCE-AWARE SEAMLESS CONNECTIVITY IN
FANET |
Author: |
GUNAVATHY H, Dr. RAVINDRANATH P.V |
Abstract: |
Flying Ad Hoc Networks (FANETs) are vital for various applications, relying on
unmanned aerial vehicles (UAVs) for communication in dynamic and challenging
environments. Seamless connectivity within FANETs is crucial for uninterrupted
data exchange and mission success. However, interference from terrain obstacles,
weather conditions, and other UAVs poses significant challenges to routing
efficiency. The proposed “Dragonfly-Inspired OLSR Protocol (DO-OLSR)” introduces
a novel approach by integrating dragonfly-inspired optimization techniques into
the OLSR protocol. This integration minimizes control overhead and route
discovery latency, optimizing network performance. The protocol incorporates
interference-aware mechanisms that dynamically adapt routing decisions based on
real-time environmental conditions, mitigating interference effects. Through
simulation-based evaluations, the protocol demonstrates improved network
performance, reduced packet loss, and enhanced throughput compared to
traditional routing protocols. By dynamically adapting to real-time
environmental conditions, the DO-OLSR maintains seamless connectivity while
mitigating interference effects, showcasing its potential to enhance overall
network reliability and performance in FANETs. |
Keywords: |
FANET, UAV, Seamless Connectivity, Interference Mitigation, Dragonfly
Optimization, OLSR Protocol |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
ENHANCE FAKE REVIEW DETECTION: A HYBRID APPROACH OF IMPLICIT ABSA AND IMBALANCED
DATASET HANDLING |
Author: |
LEENA ARDINI ABDUL RAHIM, KHYRINA AIRIN FARIZA ABU SAMAH, NOOR HASIMAH IBRAHIM
TEO, ANIS AMILAH SHARI |
Abstract: |
Online shopping's convenience has increased reliance on reviews, but fake reviews undermine trust in e-commerce. Many detection models analyze the full review text while overlooking subtle implicit cues such as lack of specificity, repetitive wording, and exaggerated sentiment. Additionally, because genuine reviews far outnumber fake ones, dataset imbalance leads to biased machine learning models with reduced reliability. To address these challenges, this study
introduces a hybrid approach integrating Implicit Aspect-Based Sentiment
Analysis (ABSA) using Bidirectional Encoder Representations from Transformers
(BERT) for implicit aspect extraction and Sentiment Analysis with the Synthetic
Minority Over-sampling Technique (SMOTE) for handling imbalanced data.
Rule-based indicators identify fake reviews, while a Support Vector Machine
(SVM) with k-fold cross-validation evaluates performance. The dataset, sourced
from Kaggle’s Amazon Reviews, contains 2,852 reviews across four product
categories: foods, home care, personal care, and refreshments. Before SMOTE, the
average k-fold recall was 78%. After SMOTE, it rose to 95%, enhancing the
approach’s ability to detect most fake reviews despite some false positives. The
final result of the constructed hybrid approach achieved 96% accuracy, 60%
precision, 100% recall, and a 75% F1 score. We evaluated performance against two comparative approaches: (i) an SVM baseline and (ii) BERT + rule-based + SVM without SMOTE. High recall ensures effective fake review detection, though the lower precision results in some false positives. The study concludes that this approach enhances trust in online reviews and supports informed purchasing decisions. Future research should expand labelled datasets and
explore alternative techniques like Edited Nearest Neighbors to refine the
precision-recall trade-off. |
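A minimal sketch of the SMOTE-plus-SVM evaluation, using the imbalanced-learn pipeline so that oversampling happens inside each cross-validation fold rather than before splitting; the synthetic features stand in for the BERT-derived ABSA features, and all parameters are assumptions.

    from imblearn.over_sampling import SMOTE            # pip install imbalanced-learn
    from imblearn.pipeline import Pipeline
    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    # Toy imbalanced data standing in for review feature vectors.
    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

    # Putting SMOTE inside the pipeline ensures synthetic minority samples are
    # generated only from each training fold, never from the held-out fold.
    pipe = Pipeline([
        ("smote", SMOTE(random_state=0)),
        ("svm", SVC(kernel="rbf")),
    ])
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    print("k-fold recall:", cross_val_score(pipe, X, y, cv=cv, scoring="recall").mean())

Keeping SMOTE inside the fold matters: oversampling before splitting leaks synthetic copies of test-fold minority samples into training and inflates recall.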
Keywords: |
Fake Review Detection, Sentiment Analysis, Aspect-Based Sentiment Analysis,
Implicit ABSA, Imbalanced Dataset Handling, BERT, SMOTE |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
AI-POWERED SPECULAR IMAGE ANALYSIS FOR CORNEAL ENDOTHELIUM DYSTROPHY PROFILING |
Author: |
KAMIREDDY VIJAY CHANDRA, E.K.MOUNIKA, SWATHI SAMBANGI, D.MANJU, POORNAIAH
BILLA, SYEDA SADIA FATIMA, N. SYAMALA |
Abstract: |
The human cornea consists of five layers; the thickest is the stroma and the densest is the endothelium, which is occupied by hexagonally structured cells. These cells become disturbed when dystrophies such as Fuchs' dystrophy (FD), advanced Fuchs' dystrophy (AFD), posterior polymorphous corneal dystrophy, iridocorneal dystrophy (ICD), mild polymegathism, and corneal guttata (CG) are encountered. A total of 13 specular microscope images of different dystrophies were acquired from various patients and processed with the artificial intelligent convolution filter (AICF) algorithm, which extracts the mean cell area, elongation, Heywood circularity, compactness, and hexagonality of the endothelial cells, together with the standard deviation and coefficient of variation of the endothelium layer (layer 5). Across images I1 to I13, the mean cell area varies from 79.5 µm² to 3485 µm², the elongation of endothelial cells from 3.11 to 3.98, the compactness factor from 0.62 to 0.93, the Heywood circularity factor from 0.8 to 1.98, the coefficient of variation from 10 to 99, and the hexagonality of endothelial cells from 46.6 to 65.2; all are meticulously calculated. These endothelial statistical parameters represent the health of the endothelial layer. |
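The morphometric quantities named above are standard shape descriptors computable from a segmented cell mask; the sketch below uses scikit-image and the usual definitions (e.g., Heywood circularity = perimeter / (2·sqrt(π·area))). It is not the AICF algorithm itself, and the toy mask and pixel units (real measurements need µm-per-pixel calibration) are assumptions.

    import numpy as np
    from skimage import measure   # pip install scikit-image

    # Toy binary mask standing in for segmented endothelial cells
    # (real masks would come from specular microscope images).
    mask = np.zeros((64, 64), dtype=int)
    mask[5:20, 5:25] = 1          # one rectangular "cell"
    mask[30:50, 30:45] = 1        # another

    labels = measure.label(mask)
    areas = []
    for region in measure.regionprops(labels):
        area, perim = region.area, region.perimeter
        # Heywood circularity: perimeter relative to a circle of equal area (1.0 = circle).
        heywood = perim / (2 * np.sqrt(np.pi * area))
        elongation = region.major_axis_length / region.minor_axis_length
        areas.append(area)
        print(f"area={area}, heywood={heywood:.2f}, elongation={elongation:.2f}")

    mca = np.mean(areas)                      # mean cell area (pixels^2 here)
    cv = 100 * np.std(areas) / mca            # coefficient of variation (%)
    print(f"MCA={mca:.1f} px^2, CV={cv:.1f}%")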
Keywords: |
Fuchs Dystrophy, Corneal Guttata, Heywood Circularity Factor, Endothelium Layer,
Compactness Factor |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
IMPACT OF MINIMUM SPANNING TREE ALGORITHMS ON EXTRACTIVE ARABIC TEXT
SUMMARIZATION APPROACH |
Author: |
AKRAM EL KHATIB, GAMAL BEHERY, REDA ELBAROUGY |
Abstract: |
The purpose of this research is to investigate the impact of using Minimum Spanning Tree (MST) algorithms to enhance the performance of the graph-based Arabic Text Summarization (ATS) approach. Previous studies of extractive ATS relying on a graph-based approach are very limited, and their performance is still low. This low performance is attributed to the characteristics of the Arabic language, which is morphologically complex; moreover, there is a lack of ATS research using graph-based techniques. The final results of the graph-based technique rely mainly on the weights between sentences as the major features, and these are poorly calculated. To address these limitations, this study applies and evaluates three MST algorithms (Prim's, Kruskal's, and Borůvka's) within a single-document extractive ATS system. The proposed system
converts text into a graph where sentences are nodes and similarity-based
weights are used as edges. The MST algorithm is then applied to extract the most
representative sentences. To ensure objective comparison, the Essex Arabic
Summaries Corpus (EASC) was used as a benchmark dataset. Experimental results
show that Kruskal’s MST algorithm achieves the best performance, demonstrating a
significant improvement of 15.2% in recall and 14.3% in F-measure over previous
single-document extractive ATS methods. This confirms the effectiveness of
MST-based graph algorithms in improving Arabic text summarization quality. |
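The graph construction and MST step can be sketched with networkx, which implements all three algorithms named above. In this illustration (English sentences for readability; the paper targets Arabic), edge weights are cosine distances so that a minimum spanning tree retains the most similar sentence pairs; the degree-based extraction rule is an assumption of this sketch, not necessarily the paper's selection criterion.

    import networkx as nx
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    sentences = [
        "The flood damaged several villages along the river.",
        "Rescue teams evacuated residents from the flooded villages.",
        "Officials announced new funding for flood defences.",
        "The river burst its banks after days of heavy rain.",
    ]

    # Sentences become nodes; edge weights are cosine distances (1 - similarity),
    # so a *minimum* spanning tree keeps the most similar sentence pairs connected.
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
    G = nx.Graph()
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            G.add_edge(i, j, weight=1.0 - sim[i, j])

    mst = nx.minimum_spanning_tree(G, algorithm="kruskal")  # also: "prim", "boruvka"
    # Rank sentences by MST degree as a crude centrality and keep the top two.
    ranked = sorted(mst.degree, key=lambda d: d[1], reverse=True)
    summary = [sentences[i] for i, _ in ranked[:2]]
    print(summary)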
Keywords: |
Extractive Arabic Text Summarization, Arabic NLP, Graph Model, Minimum Spanning
Tree Algorithm. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
REAL-TIME RESPIRATORY SOUND CLASSIFICATION FOR REMOTE DIAGNOSTIC SYSTEMS
UTILIZING DEEP LEARNING AND SPECTRUM ANALYSIS |
Author: |
MADHAVI LATHA PANDALA, B. VISHNU VARSHAN, K. SNEHITH, Y SUMANTH, KANCHARLA
PRAVEEN KUMAR, G N SOWJANYA |
Abstract: |
The detection of abnormal lung sounds is a critical issue that falls under the
diagnosis of respiratory conditions and holds promising developments in deep
learning to address such issues. This study proposes a new methodology for classifying adventitious respiratory sounds (RS) using a remote stethoscope vest coat equipped with deep CNNs. The process begins by preprocessing the raw audio into standard waveforms and transforming those waveforms into spectrograms for further processing. A Fourier transform is then applied to extract frequency features that aid in identifying discriminative patterns in lung sounds. Horizontal flipping is among the techniques used to augment the data and avoid overfitting. Classifiers such as VGG, AlexNet,
ResNet, Inception Net, and LeNet were tested for their classification
performance in respiratory sound spectrograms. From all the models built with
VGG, VGG-B1 proved to have the highest precision, recall, and accuracy values
(96%). There are four types of aberrant lung sounds in the dataset: wheeze,
rhonchi, stridor, and crackles, which are obtained from R.A.L.E. Lung Sounds and
Easy Auscultation. The proposed system, therefore, provides an efficient and
robust solution in real-time detection and classification of abnormal lung
sounds as a step toward remote monitoring for early diagnosis of respiratory
disorders. |
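A minimal sketch of the waveform-to-spectrogram front end, using librosa on a synthetic signal; the sampling rate, mel-band count, and synthetic audio are assumptions, not the study's exact preprocessing.

    import numpy as np
    import librosa   # pip install librosa

    # Synthetic 1-second signal standing in for a stethoscope recording.
    sr = 22050
    t = np.linspace(0, 1, sr, endpoint=False)
    audio = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(sr)

    # Standardize the waveform, then convert to a log-scaled mel spectrogram,
    # the 2-D input a CNN such as VGG expects.
    audio = librosa.util.normalize(audio)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    print("spectrogram shape (mels x frames):", log_mel.shape)

    # Horizontal flip along the time axis: the augmentation mentioned above.
    augmented = log_mel[:, ::-1]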
Keywords: |
Abnormal Lung Sounds, Deep Convolutional Neural Networks (CNNs), Remote
Stethoscope Vest Coat, Respiratory Sound Classification, Spectrogram, Fourier
Transform, Data Augmentation. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
MULTI-AGENT PATH PLANNING BASED ON IMPROVED ASYNCHRONOUS ACTOR-CRITIC AGENT
ALGORITHM |
Author: |
TEH NORANIS BINTI MOHD ARIS, CHEN NINGNING, NORWATI MUSTAPHA, MASLINA
ZOLKEPLI |
Abstract: |
Existing approaches often rely on centralized or reactive planning methods,
which can become inefficient or cause deadlocks in complex environments. There
is a clear need for a process that handles these challenges while maintaining
real-time performance and robustness. We propose a multi-agent path planning
method based on the improved Asynchronous Actor-Critic Agent (A3C) algorithm.
First, we enhance the Actor-Critic neural network within the A3C framework by
integrating it with the VGG network to develop a fully decentralized strategy.
Additionally, we improve the reinforcement learning rewards and penalties, allowing agents to perform real-time reactive path planning with partially visible information and exhibit implicit coordination. Second, we exploit the sequence-input characteristics and long-term memory capability of Long Short-Term Memory (LSTM) neural networks, giving the network "long-term memory" and improving its learning speed. The LSTM network replaces the deep neural network (DNN) in the original A3C architecture. We
tested this approach in various environments with different sizes, agent
quantities, and obstacle densities. Finally, we quantified the results based on
average planning length, average planning time, and success rates over 100
tests. The experimental results show that the proposed method significantly
improves the success rate and efficiency of multi-agent path planning in noisy
and uncertain environments. |
Keywords: |
Multi-Agent Path planning, A3C, VGGnet, LSTM |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
DESIGN AND EVALUATION OF HYBRID EXPLAINABLE AI INTERFACES FOR VISION RESTORATION
IN NEXT-GENERATION RETINAL PROSTHETICS |
Author: |
SHANMUGA SUNDARI M, VIJAYA CHANDRA JADALA, SIREESHA VIKKURTY |
Abstract: |
The development of retinal prosthetics has advanced significantly in recent
years, yet challenges remain in achieving both high-quality vision restoration
and interpretability of prosthetic function. This paper presents a novel
framework—Bio-Optical Explainable Interfaces (BOEI)—which integrates biological
signal modeling, optical encoding, and explainable artificial intelligence (XAI)
to enhance both the efficacy and transparency of retinal prosthetic systems.
BOEI employs a hybrid AI approach combining physics-informed neural networks
with interpretable deep learning modules to translate visual stimuli into neural
signals tailored for the damaged retina. The system models retinal ganglion cell
responses while incorporating feedback loops that visualize and explain the
decision-making processes of the AI components. Benchmarked against existing
retinal interface models, BOEI demonstrates improved reconstruction accuracy (up
to 27% over baseline models) and offers interpretable visual heatmaps
correlating prosthetic output with retinal anatomy and cognitive perception
metrics. The proposed framework represents a critical step toward clinically
viable, trustworthy, and adaptive retinal prosthetics that align with both
biological plausibility and patient-specific needs. |
Keywords: |
Retinal Prosthetics, Explainable Artificial Intelligence (XAI), Bio-Optical
Signal Modeling, Eye Floaters Impact Analysis, Neural Signal Prediction |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
ADVANCED IMAGE ANALYSIS OF CORNEAL ENDOTHELIAL DYSTROPHIES USING ARTIFICIAL
INTELLIGENT CONVOLUTION FILTERS |
Author: |
KAMIREDDY VIJAY CHANDRA, A. JOY PRANAHITHA, D. MANJU, SWATHI SAMBANGI, POORNAIAH BILLA, P. SAMPURNA LAKSHMI, N. SYAMALA |
Abstract: |
The human cornea is a complex structure composed of five distinct layers, each playing a crucial role in maintaining vision. Among these layers, the stroma stands out as the thickest, while the endothelium exhibits the highest cell density. The endothelium layer is characterized by hexagonally structured cells, which normally maintain the cornea's health and function. Major diseases such as Fuchs' dystrophy (FD), advanced Fuchs' dystrophy (AFD), posterior polymorphous corneal dystrophy, iridocorneal dystrophy (ICD), mild polymegathism, and corneal guttata (CG) play a crucial role in the reduction of endothelial cells in the cornea. To assess the condition of the endothelial cells, we conducted a study using a specular microscope, capturing a total of 13 images from different patients, each exhibiting FD, AFD, ICD, CG, or related conditions. To analyze these images, we utilized the Artificial Intelligent Convolution Filter (AICF) algorithm, which enabled us to extract key parameters of the endothelium layer. These parameters are instrumental in assessing the health of the corneal endothelium and understanding how different dystrophies affect cell morphology. The parameters extracted by the AICF algorithm include Mean Cell Area (MCA), which quantifies the average cell size, with values ranging from 79.5 µm² to 3485 µm²; Elongation of Endothelial Cells (EEC), which measures cell shape and varies from 3.11 to 3.98; Compactness Factor (CF), which reflects the closeness of endothelial cells, with values spanning 0.62 to 0.93; Heywood Circularity Factor (HCF), which assesses the roundness of endothelial cells and ranges from 0.8 to 1.98; Coefficient of Variation (CV), which provides insight into the variability of cell sizes, with values from 10 to 99; and Hexagonality of Endothelial Cells (HEC), which indicates the regularity of cell shapes and varies from 46.6 to 65.2. These statistical parameters serve as valuable indicators of the overall health of the endothelial layer and offer significant insights into how different corneal dystrophies affect the morphology of these crucial cells. By studying these parameters, we can better understand the progression and effects of various corneal conditions on the corneal endothelium. The AICF algorithm efficiently extracts these clinical features with diagnostic-quality results in 1000 milliseconds, improving clinical interpretation time by over 90%, and offers a scalable, reliable solution for automated corneal image analysis. |
Keywords: |
Fuchs Dystrophy, Advanced Fuch's Dystrophy, Corneal Guttata, Artificial
Intelligent Convolution Filter, Elongation Of Endothelial Cells, Compactness
Factor. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
REFINEMENT OF A SOFT-CORE PROCESSOR IMPLEMENTATION ON FPGA |
Author: |
GYOO SOO CHAE |
Abstract: |
CPUs with soft cores represent a category of microprocessors whose architecture
and functionality can be delineated entirely utilizing languages used to
describe hardware, such as Verilog or VHDL. These processors offer a high degree
of customization tailored to specific applications and are deployable on
platforms for reconfigurable hardware, including Field Programmable Gate Arrays
(FPGAs). After the design is successfully implemented, the potential exists for
developing Application Specific Integrated Circuits (ASICs) for large-scale
production. This study entails designing, simulating, and validating an 8-bit
processor utilizing VHDL. The envisaged processors are intended for application
in control systems characterized by modest to moderate complexity. Moreover, the
8-bit soft processor design lays the groundwork for potential ASIC development.
The simulation outcomes, facilitated by commercial software (ModelSim), are
comprehensively illustrated via timing diagrams. Additionally, the practical
outcomes of this design's execution on the VIRTEX-5 are meticulously presented
through laboratory experimentation. The primary contribution of this study is
the development of a compact, application-focused soft-core processor
architecture that emphasizes predictable execution, minimal response time, and
efficient use of hardware resources—features that are crucial for embedded
control system applications. The simulation and practical implementation results
offer insights into the performance and feasibility of deploying such processors
in real-world scenarios, paving the way for potential ASIC development and
broader industrial application. |
Keywords: |
Soft-core processor, FPGA, VHDL, Verilog, Hardware description language, ASIC, Customization, Reconfigurable hardware, Control applications, Simulation software (ModelSim), Implementation, Xilinx Spartan-3, Laboratory experimentation |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
A HYBRID LEARNING APPROACH COMBINING GRAPH AND TRANSFORMER MODELS FOR
COMMUNICATION EFFICIENT DISTRIBUTED CONTROL AND ESTIMATION IN NETWORKED SYSTEMS |
Author: |
GUDURU JAHNAVI , S PRIYANKA |
Abstract: |
The operation of Networked Control Systems underpins modern automated industries, including cyber-physical systems, large-scale infrastructure, and process applications. However, classical methods of Distributed Model Predictive Control (DMPC) and Event-Triggered Distributed Estimation (ETDE) suffer from inefficiencies such as communication overhead, poor state estimation, slow adaptation to system variations, and short prediction horizons. The methods currently in use fail to capture the spatial and temporal dependencies within the networked subsystems, resulting in suboptimal control decisions and excessive computational cost.
address this, the proposed Integrated Model for DMPC and ETDE includes the
following five enhanced methodologies: (1) Graph Neural Network-Based Predictive
Control; (2) Attention-driven Event-Triggered Estimation; (3) Transformer-Based
Predictive Observer; (4) Meta-Learning-Based Adaptive Control; and (5)
Variational Autoencoder-Based Communication-Efficient Control (VAE-CEC). GNN-PC
allows for an efficient approach to modeling interdependencies between
subsystems, thus strengthening decentralized control decisions. AET-E employs
attention mechanisms to focus on updates from relevant subsystems, therefore
preventing unnecessary transmissions. TPO utilizes transformers for accurate
predictions of long-range states to adopt resilience against data losses. MLAC
guarantees the required robustness under non-stationary conditions by promoting
quick adaptation to changing environments through meta-learning. VAE-CEC
realizes effective communication by compressing high-dimensional state
information at a low cost, which does not affect control performance. The
integrated model reported communication overhead saving of 50%, better control
adaptive capacity by 40%, improvement in accuracy of state prediction by 35%,
and control error reduction by 30%. This proves that the proposed work
significantly enhances the efficiency and reliability of DMPC and ETDE methods,
thereby making real-time distributed control technology more scalable, adaptive,
and resource efficient. |
Keywords: |
Graph Neural Networks, Event-Triggered Estimation, Distributed Model Predictive
Control, Transformer-Based Observer, Communication-Efficient Control, Process
control. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
SYNERGIZING MACHINE LEARNING AND QUANTUM ANNEALING IN FRAUD PREVENTION SYSTEMS |
Author: |
S V N SREENIVASU, RAVI UYYALA, PRANEETH CHERAKU, GARAPATI SATYANARAYANA MURTHY,
M. L. M. PRASAD, MANI MOHAN DUPATY, RAMA KRISHNA PALADUGU |
Abstract: |
With the increasing prevalence of online transactions, the risk of online fraud
has become a major concern for individuals, businesses, and financial
institutions. Traditional methods of fraud detection often fall short in
addressing the dynamic and evolving nature of fraudulent activities. The
escalating threat of online fraud necessitates innovative approaches to enhance
the efficacy of fraud detection systems. Using a quantum machine learning (QML)
strategy that incorporates Support Vector Machine (SVM) supplemented with
quantum annealing solvers, this study has developed and implemented a detection
framework. Our evaluation of its detection performance was based on a comparison
of the QML application's performance with twelve different machine learning
algorithms. This research investigates the fusion of classical machine learning
algorithms with quantum annealing solvers as a novel strategy for fortifying
online fraud detection. With traditional methods struggling to keep pace with
the dynamic nature of fraudulent activities, this paper explores the potential
synergy between machine learning and quantum computing to address the evolving
challenges in online transactions. Our study aims to demonstrate the feasibility
and effectiveness of integrating these technologies, leveraging quantum
annealing to optimize the complex decision-making processes inherent in fraud
detection. Through an in-depth analysis, we present findings on the performance,
speed, and adaptability of the integrated model, showcasing its potential to
revolutionize the landscape of online fraud detection and bolster cyber security
measures. |
Keywords: |
Cyber security, Fraud detection, Machine learning, Quantum computing,
Support Vector Machine |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
PERFORMANCE CHARACTERIZATION OF CACHE REPLACEMENT STRATEGIES FOR MANAGING SHARED
LLC OVER HETEROGENEOUS INTEGRATED CPU-GPU ARCHITECTURE USING MULTI-THREADED
WORKLOAD |
Author: |
PRATAP KESHARI PANDA, BANCHHANIDHI DASH, PRASANT KUMAR PATTNAIK |
Abstract: |
The CPU is designed mostly for sequential and complex decision-making. It proceeds through instruction processing in a few pipeline stages, including steps for fetching data and instructions from memory. Over time, the GPU, with its parallel processing capabilities, came into existence to assist the CPU, mainly for graphics processing. Because of parallel processing and a huge number of thread executions, its data access pattern is quite different from that of CPUs. In its rendering pipelines, the GPU may access varied data streams, and the accesses may vary in semantics across different access patterns. Due to commercial requirements, and to minimize data-sharing latency between CPU and GPU, integrated processor designs are emerging. These designs accommodate both CPU and GPU on a single chip, sharing common resources. On many occasions, sharing
these resources becomes a bottleneck for the whole system, deteriorating overall
performance. Last Level Cache sharing among CPU and GPU is a bigger challenge in
this design approach. A suitable cache eviction strategy is highly essential to
utilize the shared LLC space effectively. Optimized cache with better clock
speed can also be considered along with a higher configuration for CPU and GPU
to improve the overall performance. Here in our work, we have designed an
integrated heterogeneous CPU-GPU model with upgraded configurations to boost
performance. Further, we have compared the performance with Alder Lake and
Raptor Lake processors, taking them as baselines. We have achieved a speedup
improvement of 59.8% and 22.12% compared to Alder Lake and Raptor Lake
processors, respectively, taking the geometric mean of eight different sets of
workloads. Eight different cache replacement schemes have been configured in
MacSim simulator, and the workload from different benchmark suites has been
given. Through simulation results, we found an average read miss count
improvement of 29.41% and 15.38% over Alder Lake and Raptor Lake, respectively.
This may help IT infrastructure in areas such as actuarial science and real-time systems. |
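For orientation, the kind of cache replacement scheme configured in the simulator can be illustrated with a small pure-Python model of one policy, LRU, over a set-associative cache; the geometry and toy address trace here are assumptions, not MacSim's configuration.

    from collections import OrderedDict

    class LRUSet:
        """One set of a set-associative cache with LRU eviction."""
        def __init__(self, ways):
            self.ways = ways
            self.lines = OrderedDict()   # tag -> None, ordered oldest -> newest

        def access(self, tag):
            if tag in self.lines:              # hit: promote to most recently used
                self.lines.move_to_end(tag)
                return True
            if len(self.lines) >= self.ways:   # miss: evict least recently used
                self.lines.popitem(last=False)
            self.lines[tag] = None
            return False

    def simulate(addresses, num_sets=64, ways=8, line=64):
        sets = [LRUSet(ways) for _ in range(num_sets)]
        hits = 0
        for addr in addresses:
            block = addr // line
            hits += sets[block % num_sets].access(block // num_sets)
        return hits / len(addresses)

    # A GPU-like streaming pattern interleaved with CPU-like reuse.
    trace = list(range(0, 64 * 64 * 8, 64)) * 2 + [0, 64, 128] * 100
    print(f"LRU hit rate: {simulate(trace):.2%}")

Streaming GPU traffic tends to evict the CPU's reusable lines under LRU, which is exactly the shared-LLC contention the paper's comparison of eight replacement schemes addresses.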
Keywords: |
Multi-Core, Graphics Processing Unit, Multi-Threading, Benchmark, Cache
Replacement Policy, Shared Last Level Cache, Heterogeneous Architecture |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
MALICIOUS DOMAIN DETECTION USING INTEGRATED SUPERVISED AND UNSUPERVISED
MACHINE LEARNING APPROACHES |
Author: |
SRI LAXMI KUNA, PULLURI SRINIVAS RAO, A. LAKSHMANARAO, SOWNDARYA LAHARI
CHINTALAPUDI, NALANAGULA HARINI, HARI KRISHNA H |
Abstract: |
Detecting Domain Generation Algorithms (DGA) is crucial in cybersecurity to
identify malicious domain names. While existing studies focus individually on
either supervised or unsupervised learning, limited work has explored their
integrated use for DGA detection. This paper addresses that gap by combining
clustering-derived features with traditional classifiers to enhance detection
accuracy. This paper explores an innovative approach for DGA detection utilizing
supervised classification and unsupervised clustering techniques. The
methodology begins with preprocessing the dataset and extracting relevant
features, such as domain names, host information, and subclass labels. Later,
feature hashing is utilized for dimensionality reduction, transforming
categorical features like domain names, hosts, and subclasses into feature
vectors. Advanced clustering methods, including KMeans, Hierarchical Clustering
(Agglomerative), and Density-Based Clustering (DBSCAN), are employed to uncover
underlying patterns in the data. These techniques aid in identifying distinct
groups or clusters within the dataset, potentially assisting in differentiating
DGA from legitimate domain names. Later, the cluster labels were added as features to the final dataset. Subsequently, multiple ML classifiers, including Random
Forest, Decision Tree, KNN, SVM, and Logistic Regression, are trained to
classify domain names as DGA or non-DGA based on the extracted features.
Rigorous experimentation and evaluation assess the performance of each
classifier in terms of accuracy and other relevant metrics. This hybrid approach
contributes new knowledge on how feature enrichment through clustering can
improve model generalization in real-world cyber threat scenarios. The results
offer insights into the effectiveness of the proposed methodologies for DGA
detection. |
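A condensed sketch of the pipeline described above: feature hashing of domain-name character n-grams, KMeans cluster labels appended as an extra feature column, then a Random Forest classifier. The toy domain list, hash width, and cluster count are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.model_selection import cross_val_score

    # Tiny illustrative domain list; 1 = algorithmically generated (DGA).
    domains = ["google.com", "wikipedia.org", "openai.com", "github.com",
               "xkqjzv3f9.net", "qpwoeirut.biz", "zzxqvjh2a.info", "a1b2c3d4e5.org"]
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    # Feature hashing of character n-grams keeps dimensionality fixed.
    X = HashingVectorizer(analyzer="char", ngram_range=(2, 3),
                          n_features=256).transform(domains).toarray()

    # Unsupervised step: cluster assignments become an extra feature.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    X_enriched = np.hstack([X, clusters.reshape(-1, 1)])

    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("CV accuracy:", cross_val_score(rf, X_enriched, y, cv=2).mean())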
Keywords: |
DGA Detection, Supervised learning, Unsupervised Learning, Machine Learning,
Random Forest. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
PREDICTION OF THE DEPTH OF COLOR OF CATIONISED COTTON FABRIC DYED WITH REACTIVE DYES USING FUZZY LOGIC |
Author: |
M. ELKHAOUDII, M. EL BAKKALI, R. MESSNAOUI, A. SOULHI, O. CHERKAOUI |
Abstract: |
Cotton fabrics are commonly dyed using reactive dyes. These dyes typically
require substantial amounts of salt or sulphates, which serve to reduce the
surface tension between the anionic dyes and the anionic cellulose fiber. This
interaction significantly enhances dye bath exhaustion and facilitates the
diffusion of the dye onto the cotton fibers. However, reactive dyeing of cotton
has a considerable environmental impact due to the large volumes of dye effluent
containing unfixed dyes and salts. As a result, reactive dyeing is considered
one of the most environmentally polluting dyeing methods. Among the alternative
approaches gaining increasing attention from researchers is the chemical
cationization of cotton, which improves dye affinity without the need for salt.
In this study, cotton was cationized using the commercial product CRW, supplied
by Impocolor, to achieve an adequate dye uptake for uniform reactive dyeing on
the modified cotton substrate. To effectively monitor the dyeing process of
cationized cotton with the reactive dye Triactive Red S3B, a Datacolor SF 450
spectrocolorimeter was employed to measure the K/S values of various samples.
The effects of temperature, dye concentration, sodium carbonate concentration,
and processing time on the dyeing performance of cationized cotton were
evaluated. |
Keywords: |
Cationization, Exhaustion, Reactive dye, Cotton, Fuzzy logic, Color strength |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
SECURING CYBER-PHYSICAL SYSTEMS: FLAMINGO SEARCH ALGORITHM OPTIMIZED DEEP
LEARNING FOR THREAT DETECTION |
Author: |
K. SRI VIJAYA, S. SUDESHNA, NARESH KUMAR BHAGAVATHAM, PARASA KONDALA RAO, N.
SRIJA, PRAVEENA MANDAPATI, RAMESH ELURI |
Abstract: |
Threat detection in Cyber-Physical Systems (CPS) is essential to safeguarding
the reliability and security of these integrated systems, which interface
digital components with the physical world. CPS platforms, common in healthcare,
industrial automation, smart cities, and transportation, face vulnerability to
various cyber threats. Effective threat detection in CPS involves identifying
and mitigating cybersecurity risks, which can otherwise disrupt physical
operations, compromise data integrity, and jeopardize safety. Machine Learning
(ML) and Deep Learning (DL) techniques are increasingly leveraged for detecting
anomalies by modeling the CPS’s normal behaviour and recognizing deviations.
This study presents an Automated Threat Detection using the Flamingo Search
Algorithm with Optimal Deep Learning (ATD-FSAODL) in CPS environments.
Initially, the ATD-FSAODL technique applies Flamingo Search Algorithm
(FSA)-based feature subset selection to identify optimal feature sets. The
ATD-FSAODL approach utilizes a modified Elman Spike Neural Network (MESNN) for
threat recognition and classification, with the Slime Mold Algorithm (SMA)
optimizing the MESNN parameters to enhance detection accuracy. Simulation
experiments on benchmark databases demonstrate the effectiveness of the
ATD-FSAODL technique, achieving a maximum accuracy of 99.58%, precision of
99.58%, recall of 99.58%, F-score of 99.58%, and MCC of 99.16%. |
Keywords: |
Cyber-physical system, Threat analysis, Industry 4.0, Deep learning, Feature
selection |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
INTELLIGENT STEGANOGRAPHY: CNN-BASED DATA HIDING IN EDGE REGIONS AND
NON-OVERLAPPING BLOCKS |
Author: |
ANUSHA REDDY NARA, SAYED SALMA SULTHANA, DASARI ANUSHA, V. CHANDRA KUMAR, KARI
VENKATA SUMANTH, PINJARI MASOOM BASHA, NARENDRA BABU PAMULA |
Abstract: |
Deep learning methods added to steganography have greatly simplified the hiding of data in digital media without anyone noticing. This article discusses an intelligent steganography technique that hides information in the edge regions of non-overlapping picture blocks using Convolutional Neural Networks (CNNs). Carefully selected edge sections feature varied textures, which helps to conceal the embedded data and reduce visual distortion. By means of local pixel variation, the proposed CNN-based approach automatically locates the optimal embedding sites, guaranteeing high payload capacity without compromising undetectability. Unlike other techniques that depend on manual feature engineering, ours employs deep learning to modulate the embedding strength depending on the image content, hence enhancing safety and resilience. Non-overlapping blocks save further computing power: they allow real-time processing and reduce errors that could expose the steganography. We evaluated the proposed approach on several standard datasets and found that, in terms of embedding capacity, visual quality (as assessed by PSNR and SSIM), and resistance to statistical attacks, it performs better than conventional LSB- and DCT-based approaches. The technique also performs effectively across various kinds of photos, which implies it can be utilized for digital watermarking and secure communication. By providing a secure, flexible, high-capacity approach to concealing data, this work bridges the gap between machine learning and steganography. Current CNN-based steganography methods often suffer from low payload capacity, especially when data hiding is limited to edge regions that constitute a very small percentage of the image. Moreover, overlapping-block methods compromise robustness and invisibility, as they do not adapt to local image complexity and incur redundancy and computational overhead. Furthermore, many existing methods are vulnerable to modern steganalysis tools and do not transfer to different types of images due to limited use of deep features and static embedding strategies. Future researchers could investigate how adversarial training could reduce susceptibility to sophisticated steganalysis systems. |
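To make the edge-region idea concrete, the sketch below uses a plain gradient threshold to pick embedding sites and flips least-significant bits there. This classical stand-in replaces the paper's CNN-based site selection, and the synthetic image, threshold, and payload are assumptions.

    import numpy as np

    def embed_in_edges(img, bits, threshold=30.0):
        """Hide bits in the LSB of pixels that lie on strong edges."""
        gy, gx = np.gradient(img.astype(float))
        edge_strength = np.hypot(gx, gy)
        ys, xs = np.where(edge_strength > threshold)   # candidate embedding sites
        assert len(bits) <= len(ys), "payload exceeds edge capacity"
        stego = img.copy()
        for bit, y, x in zip(bits, ys, xs):
            stego[y, x] = (stego[y, x] & 0xFE) | bit   # overwrite least significant bit
        return stego, list(zip(ys[:len(bits)], xs[:len(bits)]))

    def extract(stego, sites):
        return [int(stego[y, x] & 1) for y, x in sites]

    # Synthetic image with a sharp edge to provide embedding capacity.
    img = np.zeros((32, 32), dtype=np.uint8)
    img[:, 16:] = 200

    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    stego, sites = embed_in_edges(img, payload)
    print(extract(stego, sites) == payload)   # True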
Keywords: |
Steganography, Convolutional Neural Networks (CNN), Edge-Based Embedding,
Non-Overlapping Blocks, Deep Learning, Data Hiding, Image Security, Steganalysis
Resistance |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
VGG-DAGSVM FOR MULTI-CLASS APPLE LEAF DISEASE DETECTION: A TRANSFER LEARNING
APPROACH |
Author: |
JOICE SHAKILA.A, Dr. SARAVANAN.S |
Abstract: |
Apple cultivation is essential in world agriculture because of its high
nutritional benefits and economic importance. However, increasing pressures from
foliar diseases such as apple scab, cedar apple rust, and black rot seriously
threaten yield and quality within UK apple orchards. One strategy is to
capitalize on advanced technologies such as computer-aided diagnosis systems for
improved detection and classification of diseases. To address this challenge,
this paper proposes a generic and automatic solution to recognizing and
classifying diseases of apple fruit leaves leveraging the VGG-DAGSVM
architecture. The workflow begins with bilateral filtering to mitigate image
noise without losing core edge information, and Contrast Limited Adaptive
Histogram Equalization (CLAHE) for contrast enhancement. SegNet is used to accurately segment diseased regions, and the VGG-19 deep learning model performs feature extraction. Finally, a Directed Acyclic Graph Support Vector Machine (DAGSVM) is
used as the classifier for accurate disease classification. Experimental
evaluation uses a publicly available dataset comprising 13,124 apple leaf images
across four categories. The proposed model achieves a classification accuracy of
96.50%, precision of 96.05%, sensitivity of 95.92%, specificity of 96.43%, and
F-score of 96.45%, outperforming several existing models. These results demonstrate the robustness and potential of integrating image processing, deep feature extraction, and intelligent classification systems to support early disease detection and promote sustainable apple farming
practices. |
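A compact sketch of the transfer-learning core: a pretrained VGG-19 backbone as a fixed feature extractor followed by an SVM. scikit-learn's one-vs-one SVC stands in here for the DAGSVM stage (which arranges pairwise binary SVMs in a decision graph); the toy images and labels are assumptions.

    import numpy as np
    from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
    from sklearn.svm import SVC

    # VGG-19 pretrained on ImageNet, truncated before the classifier head,
    # acts as a fixed feature extractor (the transfer-learning step above).
    backbone = VGG19(weights="imagenet", include_top=False, pooling="avg")

    # Toy batch standing in for preprocessed 224x224 apple-leaf images.
    images = np.random.rand(8, 224, 224, 3).astype("float32") * 255
    labels = np.array([0, 1, 2, 3, 0, 1, 2, 3])   # four leaf classes

    features = backbone.predict(preprocess_input(images), verbose=0)  # shape (8, 512)

    # One-vs-one SVC as a rough stand-in for the DAGSVM classifier stage.
    svm = SVC(kernel="rbf", decision_function_shape="ovo").fit(features, labels)
    print("predictions:", svm.predict(features))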
Keywords: |
Apple leaf, Diseases, Deep Learning, Transfer Learning, Support Vector Machine. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
LEARNED DICOM IMAGE COMPRESSION VIA SUPER-RESOLUTION WITH DILATED RESIDUAL
BLOCKS AND PIXEL ATTENTION |
Author: |
OBADA OTHMAN AGHA, YAHIA FAREED, LOUAY CHACHATI |
Abstract: |
The digital imaging and communications in medicine (DICOM) standard leads global
efforts to advance medical imaging. With the growing interest in this field,
especially given the rapid developments in telemedicine applications, achieving
efficient compression without losing diagnostic accuracy is critical. Therefore,
there is an imperative need to employ an advanced hybrid deep-learning model,
specifically designed for these medical images, to outperform current methods.
The proposed method tests the development of a non-autoregressive model for
parallel pixel prediction, avoiding sequential processing. This scheme uses
dilated residual blocks (DRBs) to capture long-range dependencies using fewer
blocks. It combines depthwise separable convolution (DSC) layers with the pixel
attention mechanism to reduce computational complexity while preserving the
diagnostic details in the reconstruction task. The method also leverages a
discretized mean and scale Gaussian distribution mixture model to achieve
efficiency across various DICOM image types. The results demonstrate that the
method achieves higher compression ratios, and improves the bpsp metric by
19.26%, 22.89%, and 23.94%, compared to the best competing methods, for each of
magnetic resonance imaging (MRI), computed radiography (CR), and computed
tomography (CT) images, respectively. Compared to the leading learning-based
methods, the system reduces the mean compression time by 16.96% to 19.02%. At
the same time, the high values of PSNR and SSIM metrics demonstrate the ability
to ensure high quality. This approach balances compression efficiency, speed,
and diagnostic reliability, enhancing DICOM image processing for telemedicine. |
Keywords: |
DICOM images, Deep learning, Super-Resolution, Image Compression, DSC layers. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
EXPLORING ENHANCEMENT IN SENTIMENT ANALYSIS USING DESCRIPTIVE-SEMANTIC
TECHNIQUES: A SYSTEMATIC LITERATURE REVIEW |
Author: |
NURAIN BATRISYIA BIDIN, KHYRINA AIRIN FARIZA ABU SAMAH, NOR AFIRDAUS BINTI
ZAINAL ABIDIN, LALA SEPTEM RIZA |
Abstract: |
Sentiment analysis (SA) is a critical component of natural language processing
(NLP) that enables the extraction of subjective information from large volumes
of textual data. However, achieving high accuracy and contextual understanding
remains challenging due to the unstructured nature of text and the complexity of
linguistic patterns. To address this issue, this study investigates applying
descriptive and semantic analysis (DSA) techniques to improve sentiment
reliability and contextual accuracy. This study conducts a systematic literature
review (SLR) of 28 publications from 2019 to 2024 using the PRISMA methodology,
focusing on works indexed in Electrical and Electronics Engineers (IEEE) and
Science Direct (SD). The review identifies Exploratory Data Analysis, Term
Frequency-Inverse Document Frequency, and Latent Dirichlet Allocation as the
most prevalent DSA techniques to enhance SA. Four thematic areas emerged from
the analysis: sector, purpose, algorithm, and method employed. The study
concludes that while DSA techniques contribute significantly to addressing
contextual and semantic limitations in SA, there remains a need for more
integrated approaches to handle complex sentiment structures. The findings
highlight key challenges, recent advancements, and directions for future
research in this evolving field. |
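For readers unfamiliar with the techniques the review identifies as most
prevalent, the short scikit-learn sketch below illustrates TF-IDF weighting and
LDA topic modelling on a toy corpus; the corpus and parameter choices are
invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the battery life is great", "screen quality is poor",
        "great screen but poor battery"]

# TF-IDF turns documents into weighted term vectors usable as SA features.
tfidf = TfidfVectorizer().fit_transform(docs)

# LDA expects raw term counts; it groups co-occurring terms into topics.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(tfidf.shape, lda.transform(counts).shape)  # (3, vocab) and (3, 2)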
Keywords: |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
HYBRID CNN WITH ATTENTION-BASED FEATURE FUSION FOR LUNG DISEASE DETECTION FROM
RESPIRATORY SOUNDS USING CLASS-BALANCED OPTIMIZATION |
Author: |
Dr. P. GNANASUNDARI, Dr. PRADIP RAM SELOKAR, Dr. S. GOPINATH, V. KIRUTHIKA,
B. SUGANTHI, EPHIN M |
Abstract: |
Lung disease detection using respiratory sounds involves analyzing audio
recordings of breath sounds to identify abnormalities associated with conditions
such as Chronic Obstructive Pulmonary Disease (COPD), pneumonia, and asthma.
However, accurate detection remains challenging due to class imbalance in
datasets and the difficulty of extracting discriminative features from complex
audio signals. This study proposes a novel Hybrid Convolutional Neural Network
with Attention-Based Feature Fusion (HCNN-AFF) to enhance detection accuracy.
The model integrates spectrogram-based inputs with Mel-Frequency Cepstral
Coefficients (MFCCs), effectively capturing both time-frequency and cepstral
information for richer feature representation. To address class imbalance, a
combination of data augmentation and a class-balanced loss function is employed,
enabling improved learning from minority class samples. Furthermore, the model
is optimized using the AdamW optimizer enhanced with a LookAhead mechanism
(AWLA), promoting better convergence and generalization. Experimental results
demonstrate that the proposed HCNN-AFF method outperforms existing approaches in
respiratory sound analysis, achieving superior accuracy and robustness in lung
disease detection. |
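As a rough illustration of two ingredients the abstract names, the Python
sketch below extracts MFCC features with librosa and builds a class-balanced
cross-entropy weighting using the effective-number-of-samples scheme; the class
counts and the beta value are assumed, and this is not the authors' HCNN-AFF
code.

import numpy as np
import librosa
import torch
import torch.nn as nn

# Stand-in audio clip; a real pipeline would load a breath-sound recording.
y, sr = librosa.load(librosa.ex("trumpet"))
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, frames) features

# Class-balanced weights: rare classes get larger weights so the loss does
# not collapse onto the majority class.
counts = np.array([800, 120, 60])                # e.g. COPD, pneumonia, asthma
beta = 0.999
weights = (1.0 - beta) / (1.0 - beta ** counts)  # effective number of samples
weights = weights / weights.sum() * len(counts)  # normalise around 1.0
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))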
Keywords: |
Lung Disease, Respiratory Sounds, Machine Learning, Feature Extraction,
Mel-Frequency Cepstral Coefficients, Early Diagnosis. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
UTILIZING ARTIFICIAL INTELLIGENCE PREDICTIVE MAINTENANCE IN LEAN MANUFACTURING
TO BOOST INDUSTRIAL SUSTAINABILITY AND ENERGY EFFICIENCY |
Author: |
WAHYU SARDJONO, DENY ADITYA PRATAMA |
Abstract: |
The implementation of Artificial Intelligence (AI)-driven Predictive Maintenance
(AI-PdM) within Lean Manufacturing has emerged as a transformative strategy to
improve energy efficiency and promote industrial sustainability. AI-PdM supports
continuous condition monitoring, advanced predictive analytics, and optimized
maintenance planning, which help mitigate unexpected downtimes, enhance
equipment longevity, and maximize energy utilization. This research adopts a
Systematic Literature Review (SLR) approach to evaluate AI-PdM’s impact on
manufacturing efficiency and its correlation with Sustainable Development Goals
(SDG) 7 (Affordable and Clean Energy) and SDG 9 (Industry, Innovation, and
Infrastructure). The study’s findings highlight AI-PdM’s significant
contribution to lowering carbon emissions, increasing operational dependability,
and fostering sustainable production methods. Nevertheless, obstacles such as
substantial initial capital requirements, a shortage of specialized workforce,
and difficulties in integrating with existing systems continue to hinder its
widespread adoption. Therefore, this paper advocates for the establishment of
standardized guidelines and stronger collaboration among academia, industry
stakeholders, and policymakers to accelerate AI-PdM adoption in sustainable
manufacturing. |
Keywords: |
Artificial Intelligence, Predictive Maintenance, Lean Manufacturing, Energy Efficiency,
Industrial Sustainability, Industry 4.0. |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
EXPLORING KEY INFORMATION REQUIREMENT FOR EFFECTIVE MONITORING OF COURSES AND
STUDENT PERFORMANCE: A QUALITATIVE THEMATIC ANALYSIS |
Author: |
MOHD HAFIZAN MUSA, SAZILAH SALAM, MOHD ADILI NORASIKIN, NORAFFANDY YAHAYA,
RUJIANTO EKO SAPUTRO |
Abstract: |
Universities serve multiple stakeholders, including students, lecturers,
parents, alumni, and regulatory bodies. Among these, students stand out as a
primary focus, aligning with the university's core mission to educate. Achieving
stakeholder satisfaction necessitates enhanced education quality, encompassing
teaching methods, program delivery, and the relevance of course offerings.
Students' success in gaining knowledge, skills, and competencies is critical to
university effectiveness. In today’s dynamic educational landscape, digital
technologies offer new avenues to enrich learning, with Learning Analytics (LA)
emerging as a transformative field. The growth of big data has increased the
volume and complexity of available educational data, requiring LA systems to
evolve accordingly. With universities maintaining more than one e-learning
platform, LA faces a new challenge: integrating this heterogeneous e-learning
data. To overcome this problem, graph databases and Resource Description
Framework (RDF) ontologies have proven valuable for managing and analysing
such extensive data. However, to build Web Ontology Language (OWL) frameworks
that effectively support LA, researchers must first identify the specific data
needs, or key information, of faculty administrators and lecturers, ensuring
that only essential information is integrated into LA. To gather the key
information, interview sessions were conducted with six participants from two
institutions, including an e-learning coordinator, lecturers, and faculty
administrators, as part of the case study. The interview results are crucial
for identifying the essential information needed in LA pertaining to course
performance and student performance. |
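To illustrate the kind of RDF-based integration the abstract refers to, the
rdflib sketch below merges course and student-performance facts that could
originate from different e-learning platforms; the namespace and property names
are invented for illustration.

from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/la#")
g = Graph()

# Triples as they might arrive from two separate platforms.
g.add((EX.student42, EX.enrolledIn, EX.courseCS101))     # from platform A
g.add((EX.student42, EX.quizScore, Literal(78)))         # from platform B
g.add((EX.courseCS101, EX.avgAttendance, Literal(0.91)))

# One query surface over the merged, heterogeneous data.
for s, p, o in g.triples((EX.student42, None, None)):
    print(s, p, o)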
Keywords: |
Learning Analytics, Student Performance, Course Performance, Information Data
Retrieval, e-learning, Heterogeneous Data Retrieval |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
HYBRID DEEP LEARNING INTEGRATION ON ADVERSARIAL NETWORK FOR ACCURATE
AGRICULTURAL SEQUENCE DATA PREDICTION |
Author: |
ARUMAI RUBAN J, SUNDAR SANTHOSHKUMAR, A. SUMATHI, J. JEGATHESH AMALRAJ, R.
BHAGAVATHI LAKSHMI |
Abstract: |
Water scarcity and inefficient irrigation practices are major challenges in
smart-agriculture drip irrigation systems. Many Deep Learning (DL) methods
have been applied to prediction, but their evaluations remain inefficient and
inaccurate, while modern agriculture requires effective prediction. To attain
higher accuracy and more robust prediction, this work presents a Sequence
Generative Adversarial Network (SeqGAN) that generates and predicts
agricultural data, specifically to optimize irrigation practices and manage
water resources effectively. The proposed SeqGAN architecture consists of a
generator and a discriminator. The generator uses Long Short-Term Memory
(LSTM) networks to create realistic agricultural text sequences by learning
from historical data, with controlled variability introduced through noise
injection. The discriminator combines a Gated Recurrent Unit (GRU) with a
CatBoost classifier to differentiate between real and generated sequences.
The CatBoost integration lets the model handle categorical data efficiently,
improving the accuracy and robustness of sequence classification. The
proposed method is particularly beneficial for augmenting agricultural
datasets and producing effective predictive sequences. It not only improves
data availability but also supports innovative solutions in agricultural
research, ultimately contributing to more sustainable and efficient farming
than conventional methods. |
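A minimal PyTorch sketch of the generator/discriminator pairing described above
follows: an LSTM that emits per-step token logits (with optional noise
injection) and a GRU that scores a sequence as real or generated. All
dimensions are assumptions, and the CatBoost fusion step is omitted.

import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    def __init__(self, vocab=500, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.head = nn.Linear(hid, vocab)

    def forward(self, tokens, noise=None):
        h, _ = self.lstm(self.embed(tokens))
        if noise is not None:      # controlled variability via noise injection
            h = h + noise
        return self.head(h)        # next-token logits at every step

class GRUDiscriminator(nn.Module):
    def __init__(self, vocab=500, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hid, batch_first=True)
        self.head = nn.Linear(hid, 1)

    def forward(self, tokens):
        _, h = self.gru(self.embed(tokens))
        return torch.sigmoid(self.head(h[-1]))  # P(sequence is real)

seq = torch.randint(0, 500, (4, 20))   # batch of 4 sequences, length 20
print(LSTMGenerator()(seq).shape, GRUDiscriminator()(seq).shape)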
Keywords: |
Agricultural data, SeqGAN, LSTM Generator, GRU Discriminator, CatBoost
classifier, Error Rate Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
EFFICIENT LLM INFERENCE ON MCP SERVERS: A SCALABLE ARCHITECTURE FOR EDGE-CLOUD
AI DEPLOYMENT |
Author: |
SWAPNA DONEPUDI, U POORNA LAKSHMI, NVS PAVAN KUMAR, S LALITHA, RUHISULTHANA
SHAIK, DESHINTA ARRORA DEVI |
Abstract: |
Organizations need deployable LLMs built on open frameworks with
privacy-preserving standards. Cloud-based inference remains popular, yet it
introduces delays, wastes resources, and exposes security threats. Edge
computing sits close to data sources but is constrained by limited processing
capacity. This research presents a combined edge-cloud system built on MCP
servers that efficiently offloads LLM inference workloads. The system was
evaluated on latency, resource consumption, throughput capacity, and energy
efficiency. Through the accompanying analysis tool, users can examine
performance, resource utilization, and throughput simultaneously. The system
preserves prediction accuracy while meeting industry standards for latency
and throughput. Within the proposed framework, edge-cloud LLM orchestration
capabilities optimize the deployment of AI systems in real-world scenarios. |
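As a hypothetical illustration of the offloading decision such a framework must
make, the Python sketch below routes each request to the edge or the cloud
using a simple latency model; the thresholds and per-token costs are invented
and do not reflect the paper's MCP-based system.

from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int
    latency_budget_ms: float

EDGE_TOKEN_LIMIT = 512     # assumed context capacity of the edge model
EDGE_MS_PER_TOKEN = 45.0   # assumed edge decode latency per token
CLOUD_MS_PER_TOKEN = 12.0  # assumed cloud decode latency per token
CLOUD_RTT_MS = 180.0       # assumed network round trip to the cloud

def route(req: Request, new_tokens: int = 64) -> str:
    """Pick the target that satisfies the latency budget, preferring edge."""
    edge_ms = new_tokens * EDGE_MS_PER_TOKEN
    cloud_ms = CLOUD_RTT_MS + new_tokens * CLOUD_MS_PER_TOKEN
    if req.prompt_tokens > EDGE_TOKEN_LIMIT:
        return "cloud"                   # context does not fit on the edge
    if edge_ms <= req.latency_budget_ms:
        return "edge"                    # local inference avoids the round trip
    return "cloud" if cloud_ms <= req.latency_budget_ms else "edge"

print(route(Request(prompt_tokens=200, latency_budget_ms=3000)))  # edge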
Keywords: |
Resource Efficiency, Wireless, Large Language Models, Cloud |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
OPTIMIZED RELIABILITY FRAMEWORK FOR SERIES-PARALLEL SYSTEMS: STRATEGIC
REDUNDANCY ALLOCATION USING HAM AND IPM SOLUTIONS |
Author: |
SRIDHAR AKIRI, RAMADEVI SURAPATI, SRINIVASA RAO VELAMPUDI, ARUN KUMAR SARIPALLI,
BHAVANI KAPU, PHANI BUSHAN RAO PEDDI |
Abstract: |
The central goal of reliability engineering is to ensure that systems and
their components perform as intended for a specified period and under stated
conditions. Within reliability theory, system robustness is improved by
strategically incorporating redundancy while simultaneously accounting for
constraints such as cost, mass, and spatial requirements in series-parallel
architectures. This research examines how these constraints (specifically
weight, volume, dimensional, and spatial factors) influence system
reliability enhancement. The investigation is
centered on spare parts utilized in a representative Automation Forum setup,
where factors like cost, weight, and size are vital to sustaining effective
functionality. Unlike electronic systems that may not emphasize these
constraints, the Automation Forum includes crucial components such as
compressors, condensers, and absorption towers, all of which require thorough
reliability evaluation. To address this, the Lagrange multiplier method is
employed to design and analyze an integrated redundant reliability model
configured in a series-parallel structure. This approach yields
continuous-valued outputs for essential variables including component count,
individual and stage reliabilities, and total system reliability. To derive
feasible and practical integer-based solutions, the study further implements
heuristic strategies and integer programming. These techniques significantly
enhance the precision and relevance of the reliability analysis conducted. |
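A toy integer version of the allocation problem makes the series-parallel model
concrete: maximise system reliability subject to a cost budget, where a stage
works if at least one of its parallel components works and the series system
works only if every stage does. The component reliabilities, costs, and budget
below are assumed numbers, not data from the study.

from itertools import product
from math import prod

r = [0.80, 0.85, 0.90]  # stage component reliabilities (assumed)
c = [4.0, 5.0, 6.0]     # per-component costs (assumed)
budget = 40.0

def system_reliability(n):
    # Stage i fails only if all n[i] redundant components fail.
    return prod(1.0 - (1.0 - ri) ** ni for ri, ni in zip(r, n))

# Exhaustive search over 1..4 components per stage stands in for the
# heuristic and integer-programming solvers used in the paper.
feasible = (n for n in product(range(1, 5), repeat=3)
            if sum(ci * ni for ci, ni in zip(c, n)) <= budget)
best = max(feasible, key=system_reliability)
print(best, round(system_reliability(best), 4))  # e.g. (3, 3, 2) 0.9788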
Keywords: |
IRR Model, Series-Parallel Configuration, System Reliability, LAM Approach, HAM
Approach, IP Approach |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
Title: |
AI AND SCRIPTWRITING: A NEW THREAT? |
Author: |
LAURENSIUS VICKY CRISTANTO, EKKY IMANJAYA |
Abstract: |
As Artificial Intelligence (AI) becomes increasingly inevitable in the film
industry, a critical controversy arises: Can AI replace human scriptwriters, or
does it merely serve as a tool to assist them? This study addresses the debate
through an academic investigation of Sunspring (2016), the first AI-generated
film, to map the discourse and offer practical solutions. Using a desk review
methodology focused on 2021–2024 literature and critical assessments of
Sunspring, our findings highlight an urgent need for scriptwriters to master AI
as an assistive tool. This approach positions human writers as creative
masterminds who harness technological efficiency while avoiding artistic
compromises. The paper proposes a collaborative approach: although adopting
AI is necessary, it should be done carefully so as not to stifle creativity or
diminish the role of human writers. These insights contribute
to the AI-creativity debate and provide pragmatic guidance for industry
professionals navigating this transformative era. The study concludes that while
AI cannot replace human scriptwriters, it can act as a valuable assistant by
enhancing ideation; however, it requires human oversight to maintain coherence,
emotional depth, and thematic integrity. |
Keywords: |
Artificial Intelligence, Scriptwriting, Sunspring, Human scriptwriters,
Filmmaking |
Source: |
Journal of Theoretical and Applied Information Technology
15th July 2025 -- Vol. 103. No. 13-- 2025 |
Full
Text |
|
|
|