Submit Paper / Call for Papers
The journal receives papers in continuous flow and considers articles
from a wide range of Information Technology disciplines, from the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will edit out the necessary
information at our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
Journal of Theoretical and Applied Information Technology
December 2024 | Vol. 102 No. 23 |
Title: |
STRATIFICATION OF BREAST CANCER IMAGES BY UTILIZING SPATIAL INFORMATION AND DEEP
LEARNING MODEL |
Author: |
RATHLAVATH KALAVATHI, Dr M SWAMY DAS |
Abstract: |
In healthcare imaging, accurate breast cancer (BC) identification and
classification is a critical task because breast cancer tissue is highly
complex. BC is the leading cause of cancer death among women. Motivated by the
strong potential of deep learning for extracting dominant features, this paper
proposes a hybrid model for automated identification of BC, namely EDNet-SVM,
i.e., an Encoder-Decoder Network with a Support Vector Machine (SVM). Deep
discriminative characteristics are extracted by EDNet, and breast cancer
classification is then performed using the SVM. EDNet is a composition of an
encoder and a decoder, and its features are derived using long short-term
memory (LSTM). Segmented images are constructed using histogram equalization
and morphological operations and fed to EDNet to derive optimal features,
including local features, from the breast cancer images. The proposed
EDNet-SVM model achieved a high accuracy of 98.44% on the freely available
BreakHis dataset and 96.34% on the MIAS dataset. This paper reports the
performance of three SVM kernels (linear, cubic, and Gaussian). The test
outcomes show that the proposed model is superior to existing
state-of-the-art models. |
Keywords: |
Identification, Breast cancer, Support vector machine, Long short-term memory,
Local features. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
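The preprocessing step this abstract relies on, histogram equalization before feature extraction, can be sketched generically in plain Python; this is a textbook illustration, not the authors' code, and the low-contrast test patch is made up:

```python
def histogram_equalize(pixels, levels=256):
    """Classic CDF-based histogram equalization over a flat list of grey levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0           # cumulative distribution of grey levels
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    span = (cdf[-1] - cdf_min) or 1
    # Map each grey level through the normalised CDF to spread the histogram
    lut = [round((c - cdf_min) / span * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]

# A low-contrast patch whose grey values sit in the narrow band 100-103
flat = [100, 100, 101, 101, 101, 102, 102, 102,
        102, 103, 103, 103, 100, 101, 102, 103]
eq = histogram_equalize(flat)
print(min(eq), max(eq))  # 0 255 -- the values now span the full range
```

In practice this stretch of contrast is what makes subsequent morphological operations and feature extraction more discriminative.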
|
Title: |
DYNAMIC PROGRAMMING-ENHANCED ENERGY-EFFICIENT TASK SCHEDULING IN EDGE-CLOUD
ENVIRONMENTS |
Author: |
DR.SARAVANAN.M.S, MRS. MADHAVI KARUMUDI |
Abstract: |
In the wake of modern Internet of Things (IoT) use cases and workflow
applications, edge computing plays a crucial role in the faster execution of
specific tasks. Therefore, the edge-cloud environment offers computing resources
required by modern applications. Using cloud resources for every task may lead
to violation of Service Level Agreements (SLAs), as well as causing impediments
to the execution of tasks with deadlines. In this research, we propose a
system concept and architecture for effective task scheduling in an edge-cloud
setting to address this issue. An algorithm called Dynamic Programming based
Energy Efficient Task Scheduling (DPEETS) is introduced for effective task
scheduling in edge-cloud environments. Our algorithm exploits dynamic
programming, Hamming-distance termination, and randomization to optimize
task-scheduling decisions made in the edge cloud. The suggested algorithm
uses a dynamic programming table to keep track of iterations and eliminate
unnecessary computations. DPEETS considers multiple objectives such as deadline,
energy efficiency, and latency of task execution. The algorithm heuristically
converges in a few iterations, leading to energy-efficient scheduling of tasks
in the edge cloud. A simulation study has revealed that DPEETS outperforms many
existing heuristic algorithms. |
Keywords: |
Task Scheduling, Cloud Computing, Edge Computing, Dynamic Programming,
Energy-Efficient Task Scheduling |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
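The dynamic-programming table the abstract describes, which memoizes scheduling sub-problems so each is computed once, can be illustrated with a toy edge-versus-cloud assignment; the task costs and deadline below are hypothetical, not taken from DPEETS:

```python
from functools import lru_cache

# Hypothetical per-task costs: (edge_energy, edge_latency, cloud_energy, cloud_latency)
TASKS = [
    (2, 5, 5, 2),
    (3, 4, 6, 1),
    (1, 6, 4, 3),
]
DEADLINE = 12  # total latency budget across all tasks

@lru_cache(maxsize=None)  # the dynamic-programming table: each state solved once
def best_energy(i, budget):
    """Minimum total energy for tasks i.. given the remaining latency budget."""
    if i == len(TASKS):
        return 0
    ee, el, ce, cl = TASKS[i]
    options = []
    if el <= budget:  # run this task on the edge
        options.append(ee + best_energy(i + 1, budget - el))
    if cl <= budget:  # offload this task to the cloud
        options.append(ce + best_energy(i + 1, budget - cl))
    return min(options) if options else float("inf")

print(best_energy(0, DEADLINE))  # 9 -- cheapest schedule that meets the deadline
```

Tightening the budget forces more cloud offloading (faster but costlier here), and an infeasible budget surfaces as infinity, mirroring a deadline violation.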
|
Title: |
A DISTRIBUTED ADAPTIVE ALGORITHM FOR EFFICIENT LOCALIZATION OF SENSOR NODES IN
AD HOC NETWORKS |
Author: |
PINAMALA SRUTHI, DR. Y AMBICA, DR. B.N.V. UMA SHANKAR, PRATHAP ABBAREDDY, VASAVI
OLETI |
Abstract: |
Node localization in a wireless ad hoc network of sensor nodes is essential for
making informed decisions and maximizing the network's effectiveness and
efficiency. Localization is often based on different measures, with both
centralized and distributed approaches. In an empirical study, we found that
the distributed approach to node localization is more robust than its
centralized counterpart. Therefore, in this paper we propose an algorithm for
sensor node localization in a distributed approach. Our algorithm, Distributed
Optimized and Adaptive Node Localization (DOANL), exploits a collaborative
method that can achieve efficient localization even with a small number of
beacons. The foundation of our methodology is the Angle of Arrival (AOA) and
Time of Arrival (TOA). Node and beacon deployment locations also have an
impact on the localization process, and this paper investigates that
proposition by analyzing node deployment and network connectivity. We found
that error propagation across the network limits the accuracy of the proposed
algorithm; to address this problem, we further optimized DOANL by minimizing
error propagation across the network. According to an empirical study
conducted with an NS-3 simulation, the suggested approach outperforms
state-of-the-art methods in wireless ad hoc network localization. |
Keywords: |
Wireless Ad Hoc Network, Wireless Sensor Network, Node Localization, Distributed
Adaptive Localization, Error Propagation. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
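The TOA measurements underpinning the approach admit a classic textbook treatment: with ranges to three beacons, the squared-distance equations linearize into a 2x2 system. This is generic noise-free trilateration, not the DOANL algorithm itself, and the node and beacon positions are invented:

```python
import math

def trilaterate(beacons, dists):
    """Locate a node from noise-free TOA ranges to three beacons in 2-D.

    Subtracting the first range equation from the other two turns
    (x - xi)^2 + (y - yi)^2 = di^2 into a 2x2 linear system.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # beacons must not be collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Node at (3, 4); ranges to three known beacons computed exactly
beacons = [(0, 0), (10, 0), (0, 10)]
node = (3, 4)
dists = [math.dist(node, b) for b in beacons]
print(trilaterate(beacons, dists))  # approximately (3.0, 4.0)
```

With noisy ranges or more beacons this becomes an over-determined least-squares problem, which is where distributed refinement and error-propagation control matter.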
|
Title: |
ADVANCED INSAR TECHNIQUES FOR LANDSLIDE DETECTION AND RISK ASSESSMENT: A CASE
STUDY OF THE TAZA-AL HOCEIMA EXPRESSWAY IN MOROCCO |
Author: |
ISHAK HBIAK , OMAR BACHIR ALAMI , ANOUAR KESMAT , NIA MAJDA, SAID RHOUZLANE ,
MERIEM HABIBELLAH |
Abstract: |
The Taza-Al Hoceima expressway is a vital infrastructure that facilitates the
integration of the region into Morocco's economic dynamics. Several challenges
related to the difficult geotechnical context of the northern region arose
during the construction period, and additional challenges continue to emerge
concerning the sustainability of this infrastructure. Indeed, the complex
geology and topography of the Taza province make it prone to land movements,
posing risks to critical infrastructure. The objective of this study is to
employ advanced Interferometric Synthetic Aperture Radar (InSAR) techniques to
detect, monitor, and measure land movements, particularly landslides, along the
infrastructure, using Sentinel-1 radar data covering the period from 2015 to
2020. Our work also focuses on explaining the instability movements measured
during the study period by identifying their main causative factors. The study's
findings also enable the identification of certain high-risk instability zones
within our geologically complex study area, which require long-term monitoring. |
Keywords: |
Geotechnics, Remote sensing, Landslide, Synthetic Aperture Radar, PCA |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
|
Title: |
PREDICTING DAILY RIVER FLOW USING LONG SHORT-TERM MEMORY (LSTM): A DEEP LEARNING
APPROACH |
Author: |
ABDEL KARIM BAAREH, MONTHER TARAWNEH, FAISAL ALZYOUD, IBRAHIM ALTARAWNI,
MOHAMMED AMIN ALMAIAH, ROMMEL ALALI, TAYSEER ALKHDOUR, MAHMAOD ALRAWAD,
THEYAZN H. H. ALDHYANI, RAMI SHEHAB |
Abstract: |
Deep learning is a branch of machine learning that uses large-scale multi-layer
neural networks, which mimic the complex structure of the human brain.
Combinations of layers and neurons in artificial neural networks create
excellent practical applications. This paper explores the use of Long Short-Term
Memory (LSTM), a subset of recurrent neural networks, known for its complexity
and versatility in deep learning. The primary goal involves using LSTM to
address the challenge of predicting daily river flow for two prominent rivers in
the USA. Real datasets related to the daily flow of the Black and Gila Rivers
were used and divided into different sets for training and testing. A
comparative analysis was performed between the training and test sets, and error
metrics were evaluated to confirm the effectiveness of the LSTM model. The
experimental results collected from this study using LSTM were remarkably good,
and showed significantly low error values, demonstrating its effectiveness in
predicting river flow. |
Keywords: |
Forecasting, Deep Learning, Long Short-Term Memory, Artificial Neural Network,
and Recurrent Neural Networks |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
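Feeding a daily-flow series to an LSTM presupposes a windowing step that turns the series into supervised (past window, next day) pairs. A generic sketch, with made-up flow values rather than the Black or Gila River data:

```python
def make_windows(series, lookback):
    """Turn a 1-D series into supervised (past window, next value) pairs."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])  # the past `lookback` days of flow
        y.append(series[i + lookback])    # the day the model must predict
    return X, y

flow = [10, 12, 11, 13, 15, 14, 16]       # hypothetical daily flow readings
X, y = make_windows(flow, lookback=3)
print(len(X), X[0], y[0])  # 4 [10, 12, 11] 13
```

Splitting the resulting pairs chronologically (earlier pairs for training, later for testing) is what enables the train/test comparison the abstract describes.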
|
Title: |
A DWT AND PATTERN RECOGNITION APPROACH FOR FAULT DETECTION AND CLASSIFICATION IN
TRANSMISSION NETWORKS |
Author: |
G PRABHAKAR RAJU, V SRINIVASA RAO, G ASHOK KUMAR, RAMU BHUKYA, PRADEEP JANGIR,
SARIHADDU KAVITHA, B VARAPRASAD RAO |
Abstract: |
The reliable and efficient operation of high-voltage transmission lines is
essential for ensuring the stability and quality of electrical power
distribution. To address this concern, this research paper presents an in-depth
study on the application of wavelet transform for detecting and classifying
faults in high-voltage transmission lines. Fault detection and classification
are crucial tasks in the maintenance and operation of power systems to minimize
downtime and ensure the safety of personnel and equipment. The wavelet transform
has proven to be a powerful tool for analyzing transient signals in electrical
systems, making it a valuable technique for fault detection and classification.
This paper provides a comprehensive review of wavelet transform theory, its
application to fault detection, and classification algorithms. Additionally, it
discusses various case studies and practical implementations, highlighting the
advantages and limitations of wavelet-based techniques. The results demonstrate
the effectiveness of wavelet transform in enhancing the reliability and
efficiency of high-voltage transmission line monitoring and maintenance. |
Keywords: |
High-Voltage Transmission Lines, Wavelet Transform, Faults, Power Systems |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
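The core wavelet operation the paper reviews, splitting a signal into approximation and detail coefficients so that fault transients stand out in the detail band, can be sketched with a one-level Haar transform; the waveform below is synthetic, not a recorded line signal:

```python
def haar_dwt(signal):
    """One-level Haar DWT: pairwise averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# A steady waveform with one abrupt transient, a crude stand-in for a fault
sig = [1.0, 1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0]
approx, detail = haar_dwt(sig)
fault_pair = max(range(len(detail)), key=lambda i: abs(detail[i]))
print(detail, fault_pair)  # [0.0, 0.0, 4.0, 0.0] 2 -- the detail band flags the transient
```

Production schemes use deeper decompositions and smoother wavelets (e.g. Daubechies), but the localization principle is the same: large detail coefficients mark the fault instant.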
|
Title: |
OPTIMIZING CROP RECOMMENDATION SYSTEMS USING ADVANCED DEEP LEARNING TECHNIQUES |
Author: |
PAPPALA MOHAN RAO, KUNJAM NAGESWAR RAO |
Abstract: |
The agricultural sector plays a significant role in economic development,
especially for developing nations, but farmers experience hurdles such as
climate change interventions, soil management, and crop yield choices. To this
end, this study turns to Deep Learning (DL) to solve these problems and develop
a holistic approach that employs a range of DL approaches for crop selection
based on soil condition, climate, and past farming records. The models and
methods employed are Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM),
and Transformers, applied to precision agriculture through crop prediction,
data-driven decision-making on resource allocation to boost farm revenue, and
soil management through prediction of soil pH and recommended crops. This
plan of action is designed to help farmers make the most beneficial decision to
enhance the farming results. |
Keywords: |
Agricultural, Deep Learning (DL), Long Short-Term Memory (LSTM), Bidirectional
LSTM (BiLSTM), Transformers, crop recommendation. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
|
Title: |
THE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PROCESSES |
Author: |
ZHENGYAO FENG, JONGCHANG AHN |
Abstract: |
This article explores the impact of Artificial Intelligence (AI) technology on
Human Resource Management (HRM) processes, aiming to reveal the role of AI in
improving HRM efficiency and enhancing employee experience and engagement.
Firstly, a literature review indicates that AI technology can significantly
improve the efficiency of recruitment, training, performance management, and
employee support. Additionally, AI technology can enhance employee work
experience and engagement by utilizing personalized training and intelligent
employee care systems. Subsequently, a questionnaire survey was conducted among
206 employees engaged in HRM or other management roles, and quantitative
research methods were used to analyze the survey data statistically. The analysis revealed
that respondents generally believe AI positively enhances HRM efficiency and
improves employee experience and engagement. They also showed a high acceptance
and adaptability to AI technology, although there were some concerns about its
potential risks. Multiple linear regression analysis showed that perceived risk
has a significant positive impact on job performance, while the influence of
other factors is not significant. The article suggests that companies should
focus on managing perceived risks when applying AI technology to ensure it has a
positive impact on business outcomes, and it also emphasizes the importance of
improving employee experience and engagement. This research provides theoretical
support and practical guidance for the application of AI technology in the HRM
field, and it also points out possible directions for future research. |
Keywords: |
Artificial Intelligence (AI), Human Resource Management (HRM), Efficiency,
Employee Experience, Perceived Risk |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
|
Title: |
USING ARTIFICIAL INTELLIGENCE TO IMPROVE TAX SECURITY AND CONTROL OVER TAX
AVOIDANCE SCHEMES |
Author: |
YEVHEN TSIKALO, OLEKSANDR ZINEVYCH, DENYS OSIPENKO, VIKTORIYA KULYK, OLENA
LAGOVSKA |
Abstract: |
Tax security is a critical area in modern states in view of its importance for
ensuring the state budgets’ replenishment and reducing the shadow economy size.
At the same time, a serious obstacle to tax security is the development of tax
evasion schemes using modern technologies. The aim of the research is to analyse
the impact of artificial intelligence (AI) on the volume of losses from tax
abuse in European countries. The work uses the methods of statistical analysis,
case study, regression and correlation analysis. The conducted research
confirmed the initial assumption about the relationship between AI development
and the amount of losses arising from tax abuse. This was achieved by identifying a
statistically significant impact of the Intercept, Talent, Infrastructure and
Commercial variables on the Annual tax loss: Corporate tax abuse (% of GDP). The
relationship is inverse with Intercept and Talent, and positive with
Infrastructure and Commercial. It follows that the development of infrastructure
and the active commercial use of AI is accompanied by an increased volume of tax
losses from corporate tax abuse. Increased level of human capital development,
expressed through the Talent indicator, is accompanied by a decreased amount of
tax losses. Accordingly, the human capital development can become one of the
main success factors along with the establishment of legal restrictions and the
use of technology-based countermeasures. The research findings can be useful in
the process of updating the legal framework regarding the limitations of the AI
use and the development of an appropriate ethical framework. |
Keywords: |
Artificial Intelligence, Corporate Tax Abuse, Tax Evasion, Tax Fraud, Tax
Security. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
|
Title: |
IMPROVING USABILITY THROUGH A GAMIFICATION-BASED E-LEARNING SYSTEM WITH MDA
FRAMEWORK |
Author: |
SALOMO H TIMOTHY PARDEDE , TANTY OKTAVIA |
Abstract: |
This research aims to develop a gamified e-learning platform for PT Adhi Karya,
a construction engineering company in Indonesia, using the mechanics, dynamics,
and aesthetics (MDA) framework. The effect of gamification on the e-learning
system was examined using a questionnaire that measured usability, which
consisted of efficiency, errors, learnability, memorability, and satisfaction.
Purposive sampling technique was employed to recruit the participants, and
validity and reliability tests were conducted before the questionnaire was
distributed to the participants. Our results show the successful implementation
of gamification into PT Adhi Karya's e-learning system, where the majority of
the respondents considered that the system had good usability, demonstrating the
suitability of the framework for gamification-based e-learning systems. |
Keywords: |
E-Learning System, Gamification, MDA Framework, Usability |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
|
Title: |
FEATURE REDUCTION AND STROKE PREDICTION USING SPARSE SUBSPACE CLUSTERING
AUTOENCODER ON CLINICAL DATA |
Author: |
DURGA DEVI. P, K. AKILA |
Abstract: |
The use of deep learning techniques for feature reduction presents a novel way
to improve stroke prediction. The volume and complexity of clinical data can be
challenging for traditional approaches to process, and may result in unsatisfactory prediction
results. However, this work offers a novel method that sorts through a lot of
clinical data to find and improve the most relevant parts using deep learning
algorithms. By employing state-of-the-art neural network topologies, the
proposed approach effectively identifies significant stroke-related predictors,
enabling more accurate and quick risk assessments. Early stroke prediction is
essential for emergency treatment and consequence avoidance. Clinical data used
to predict stroke is often high-dimensional, making evaluation and
interpretation challenging. This study proposes a novel feature reduction
technique for clinical data utilizing the sparse subspace clustering autoencoder
(SSC-AE) to predict strokes. The SSC-AE approach is contrasted with other deep
learning algorithms, such as CNNs, RNNs, and LSTMs. The SSC-AE technique works
better than other algorithms in terms of reconstruction error, clustering
performance, and stroke prediction accuracy, according to experimental results.
The proposed approach might improve stroke prediction models' efficiency and
accuracy. |
Keywords: |
Neural Network Algorithms, Deep Learning, Stroke Prediction, Feature Reduction,
Clinical Data Analysis. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
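The quantity the model comparison rests on, reconstruction error (how well the encode-decode round trip reproduces the input), can be illustrated with a hand-picked linear encoder and decoder; nothing here is learned or taken from the SSC-AE model:

```python
def reconstruction_error(x, encode, decode):
    """Mean squared error between an input and its encode -> decode round trip."""
    x_hat = decode(encode(x))
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

# Hand-picked linear "autoencoder": 2-D input compressed to 1 latent on the diagonal
encode = lambda x: (x[0] + x[1]) / 2   # project onto the x = y direction
decode = lambda z: (z, z)              # lift the latent back to 2-D

on_axis = (3.0, 3.0)    # lies on the latent direction: perfect reconstruction
off_axis = (3.0, 1.0)   # off the direction: information is lost
print(reconstruction_error(on_axis, encode, decode))   # 0.0
print(reconstruction_error(off_axis, encode, decode))  # 1.0
```

A trained autoencoder learns the projection instead of fixing it, choosing the low-dimensional code that keeps reconstruction error small over the whole dataset.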
|
Title: |
EMPIRICAL INVESTIGATIONS TO DETECTION OF LIVER CANCER USING MOBILEUNET |
Author: |
K. RAMANJANEYULU, P. SAROJA, P. VENU MADHAV, SRINU PYLA, VENKATA BHUJANGA RAO
MADAMANCHI, VENKATESWARARAO CHEEKATI |
Abstract: |
Identification of cancer tissue via biopsy remains a delicate and complex
process. Segmenting cancer CT images can support interventional planning and
clinical response assessment of liver lesions. Mobile U-Net, shaped and
implemented as a reliable tool, is used for liver tumour segmentation to
address the existing liver cancer problem. Liver lesions in computed tomography
can be used to assess the volume and location of tumours, plan their treatment,
and evaluate the patient's condition. Mobile U-Net is a modification of the
U-Net model architecture designed specifically for use on mobile platforms. The
deep learning structure aids interpretation both by shedding light on which
characteristics are involved in internal-layer analysis and prediction, and by
illuminating the workings of the trained deep neural network. |
Keywords: |
Deep Learning, U-Net, Mobile Optimization, Liver Tumor Segmentation, Computed
Tomography |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
|
Title: |
LEVERAGING RESNET-152 AND WEB TECHNOLOGY FOR RAPID COVID-19 DIAGNOSIS FROM X-RAY
IMAGE |
Author: |
JENNY LING SIAW NIE, SOO SEE CHAI, KOK LUONG GOH, KIM ON CHIN |
Abstract: |
In December 2019, the SARS-CoV-2 virus gave rise to COVID-19, which was first
detected in Wuhan, China. The virus has infected over 700 million individuals
worldwide. It can spread through direct and indirect contact, making humans
vulnerable even in small places or through food consumption. The pandemic
highlighted challenges, including a shortage of radiologists and the
time-intensive interpretation of X-ray images, leading to discrepancies and
delays. To address this, a classification model based on X-ray images became
crucial for COVID-19 identification. A web-based system integrating
convolutional neural network (CNN) models, particularly the ResNet-152 model,
is proposed to enhance precision in monitoring and diagnosing COVID-19. After
fine-tuning a pre-trained ResNet-152 model using transfer learning on a COVID-19
dataset and adding a classification head, a COVID-19-specific classification
model is created. In this project, the pre-trained COVID-19 ResNet-152 model
achieved 86.84% accuracy, 89.95% sensitivity and 77.27% specificity. The model
is then integrated into the system, which enables healthcare professionals to
upload and receive a clear visualisation of the COVID-19 classification results
via Application Programming Interface (API) endpoints. This platform enables
healthcare professionals to login, upload, search, and classify COVID-19
diagnoses based on the uploaded X-ray pictures, providing an intuitive interface
and a user-friendly system. Leveraging advanced image processing and deep
learning, the system has the potential to expedite accurate diagnoses and
alleviate the workload on healthcare professionals, ensuring swift and accurate
detection of COVID-19 cases. |
Keywords: |
COVID-19, Classification, X-ray, ResNet-152 Model, Deep Learning |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
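The three figures the abstract reports (accuracy, sensitivity, specificity) all derive from a confusion matrix. A generic computation with hypothetical counts, not the paper's results:

```python
def diagnostics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on positives), specificity (recall on negatives)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts, not the paper's confusion matrix
acc, sens, spec = diagnostics(tp=90, fp=10, tn=85, fn=15)
print(f"{acc:.2%} {sens:.2%} {spec:.2%}")  # 87.50% 85.71% 89.47%
```

High sensitivity with lower specificity, as the paper reports, means the model misses few COVID-19 cases at the cost of more false positives.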
|
Title: |
IDENTIFICATION AND CLASSIFICATION OF MANGO LEAVES DISEASE ON GREY WOLF
OPTIMIZATION OF MULTIVARIATE GATED RECURRENT NETWORK |
Author: |
AMIRTHA PREEYA V , DR.S.PRAVINTH RAJA |
Abstract: |
Plants provide essential nutrients and energy for daily life, reducing
sickness, stress, and anxiety through their air-cleaning processes, and offer
many medicinal and health benefits to society for treating various degenerative
diseases. Mango leaves in particular are a potential source of minerals such as
nitrogen, potassium, calcium, and magnesium, as well as vitamins. However,
these beneficial plants are exposed to various diseases in their growing
regions due to environmental changes, so it becomes mandatory to protect them
against disease through precautionary measures. Although many solutions exist
to combat these issues, a strong solution is still required to examine the
diseases and categorize them into various forms so that effective
countermeasures can be established. Much research using machine learning and
deep learning has been initiated to detect diseases and their types, but those
approaches fail to discriminate diseases effectively in the early stages. In
this work, a new deep learning model, the Multivariate Gated Recurrent Network,
composed of an input layer, a hidden layer, an abstraction layer, and an output
layer, with hidden, forget, update, and reset states, is employed to analyze
the diseased region of the image in detail for effective early-stage
discrimination. In particular, the abstraction layer is combined with a
graph-cut segmentation algorithm to improve the efficiency of the network by
searching for and grouping seed points that represent the infected region of
the leaf and share common attributes; those common attributes generate the best
features for classification. Further, the gated recurrent network is optimized
using grey wolf optimization to eliminate overfitting while increasing the
speed and accuracy with which the network identifies and classifies infected
regions across different diseases. Experimental analysis of the proposed
architecture, using cross-fold validation on the Plant Village dataset, shows
the model's accuracy and efficiency in terms of training accuracy and
validation accuracy for disease identification. GWO and the multivariate GRU
work together to automate disease diagnosis with low computing overhead,
advancing accurate, real-time agricultural disease management. This research
advances our understanding of plant disease detection while providing a
scalable, reliable, and optimal approach that may be adapted for comparable
horticultural and agricultural uses. |
Keywords: |
Mango Leaf Disease, Deep learning, Recurrent Neural Network, Multivariate Gated
Recurrent Network, Grey Wolf Optimization algorithm. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
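Grey wolf optimization, used above to tune the recurrent network, follows a simple scheme: the three best candidates (the alpha, beta, and delta wolves) pull the rest of the pack toward them while an exploration coefficient decays to zero. A minimal one-dimensional sketch minimizing x^2; the population size, iteration count, and best-so-far tracking are illustrative choices, not the paper's configuration:

```python
import random

def gwo(objective, n_wolves=10, iters=50, lo=-10.0, hi=10.0, seed=0):
    """Minimal 1-D grey wolf optimizer, with best-so-far tracking for safety."""
    rng = random.Random(seed)
    wolves = [rng.uniform(lo, hi) for _ in range(n_wolves)]
    best = min(wolves, key=objective)
    for t in range(iters):
        a = 2 - 2 * t / iters                   # exploration coefficient decays 2 -> 0
        alpha, beta, delta = sorted(wolves, key=objective)[:3]
        best = min(best, alpha, key=objective)  # remember the best position seen
        new_pack = []
        for x in wolves:
            pulls = []
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random() - 1)  # step scale drawn from [-a, a]
                C = 2 * rng.random()            # leader-weighting coefficient
                pulls.append(leader - A * abs(C * leader - x))
            new_pack.append(max(lo, min(hi, sum(pulls) / 3)))
        wolves = new_pack
    return min(wolves + [best], key=objective)

best_x = gwo(lambda x: x * x)  # minimise the sphere function
print(best_x)                  # best x found
```

For hyperparameter tuning, the objective would instead be validation loss as a function of the network's hyperparameters.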
|
Title: |
AUGMENTED CLUTCH REALITY MOBILE: INNOVATIVE EDUCATIONAL MEDIA FOR MODERN
ENGINEERING |
Author: |
MUSLIM, AMBIYAR, ARWIZET KARUDIN, MUHAMMAD SYAFIQ HAZWAN RUSLAN, HSU-CHAN KUO,
NUZUL HIDAYAT, RIDO PUTRA |
Abstract: |
One of the interactive technologies that can be applied to improve the quality
of education in the era of the 4.0 revolution is augmented reality (AR). The
purpose of this study is to develop mobile-based augmented clutch reality
interactive media (ACRMobi) for the powertrain course. In addition, it analyzes the
responses of lecturers and students to the implementation of ACRMobi as a modern
vocational education media in the 21st century. The 4D model is used as a media
product development model while the instruments used are validation and response
questionnaires to the media developed. It uses descriptive qualitative and
quantitative analysis techniques to calculate the average score by looking at
the criteria for the results. The findings show that the ACRMobi media
developed is valid based on expert assessments and practical based on
lecturer and student responses after being implemented at one of the
vocational education institutions. Experts who act as validators report that
ACRMobi is an interactive media that can be applied to vocational education.
ACRMobi contains elements that can increase student motivation because it is
technology-based. Integrated learning ACRMobi is more motivating and more
flexible in its application. |
Keywords: |
Augmented Reality, Powertrain, 4D Model, ACRMobi, Quality Education |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
|
Title: |
SOLVING PROBLEMS OF BIG DATA INFRASTRUCTURE BY USING BLOCKCHAIN |
Author: |
MOHAMED ESMAIL, MOHAMED A. EL-DOSUKY, TAHER T. HAMZA |
Abstract: |
Big data faces challenges like infrastructure security, data confidentiality, and
data administration. With the inception of blockchain, the secure circulation of
massive data has become possible. The paper surveys previous work, focusing on
security challenges in big data models, a summary of blockchain services in a
big data environment, and the challenges of research in big data working
together with blockchain. The proposed methodology begins with the mathematical
foundation of big data-blockchain mapping; the big data blockchain
infrastructure is then presented. The infrastructure allows many operations,
such as separating and storing data, querying the separated stored data, block
validation, and a gossip protocol. This work established a prototype system in
the Python programming language to confirm that the suggested concept of
isolating and storing information is applicable to big data management systems
built on blockchain technologies. Finally, the forged block attack is scrutinized. The
methodology integrates big data with blockchain through a proposed
infrastructure, validated via a Python prototype, enabling data management and
addressing forged block attacks. |
Keywords: |
Big data, Blockchain, Security |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
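The block-validation operation the infrastructure supports can be sketched with a minimal hash chain; this is standard blockchain mechanics rather than the paper's full protocol, and the shard names are invented:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over a block's contents (sorted keys for stability)."""
    payload = json.dumps({k: block[k] for k in ("index", "data", "prev")}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_chain(records):
    """Store each record in a block whose hash covers the previous block's hash."""
    chain, prev = [], "0" * 64
    for i, data in enumerate(records):
        block = {"index": i, "data": data, "prev": prev}
        block["hash"] = block_hash(block)
        prev = block["hash"]
        chain.append(block)
    return chain

def validate(chain):
    """A forged block breaks its own recomputed hash and every later `prev` link."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block):
            return False
        prev = block["hash"]
    return True

chain = make_chain(["shard-A", "shard-B", "shard-C"])
print(validate(chain))         # True
chain[1]["data"] = "forged"    # tamper with a stored shard
print(validate(chain))         # False -- the forged block is detected
```

This is why a forged-block attack on stored data is detectable: altering any stored shard invalidates its block's hash and, through the `prev` links, every block after it.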
|
Title: |
OPTIMIZING DIABETES DIAGNOSIS: ADGB WITH HYPERBAND FOR ENHANCED PREDICTIVE
ACCURACY |
Author: |
SWAPNA DONEPUDI, G.N.V.G. SIRISHA, PAPPULA MADHAVI, S PHANI PRAVEEN, DESHINTA
ARROVA DEWI, MUSTAFA JABER, Massila Kamalrudin |
Abstract: |
This study introduces an innovative machine-learning framework to enhance
diabetes prediction accuracy and model interpretability. The methodology begins
with multiple imputations by chained equations (MICE) to address missing data
and ensure a complete dataset for analysis. To tackle class imbalance, the
Synthetic Minority Over-sampling Technique (SMOTE) is employed. Z-score outlier
detection is utilized to remove outliers, further improving model robustness. A
hybrid feature selection method hybrid GWAN combining Grey Wolf Optimizer (GWO)
and ANOVA optimizes selecting relevant features, balancing predictive power with
model simplicity. The core of the framework is the Adaptive Boosted Gradient
Boosting Machine (ADGB), an ensemble learning model that merges the strengths of
AdaBoost and Gradient Boosting Machines (GBM). Hyperparameter optimization
through the Hyperband algorithm fine-tunes the model, achieving a high
prediction accuracy of 97.84%. This comprehensive approach not only improves
accuracy but also enhances the precision, recall, and F1 score of the predictive
model. By integrating these advanced techniques, the framework demonstrates
significant potential in early diabetes diagnosis, emphasizing the importance of
ensemble methods in healthcare data analysis and the necessity of accurate,
interpretable models for developing reliable diagnostic tools. |
Keywords: |
Grey Wolf Optimizer, Gradient Boosting Machines, Synthetic Minority, Public
Health |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full Text |
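One concrete step of the pipeline, Z-score outlier removal, is easy to illustrate; the glucose readings are made up for the example, and 3 is the conventional threshold:

```python
def remove_outliers(values, threshold=3.0):
    """Drop points whose Z-score (distance from the mean in std-dev units) exceeds the threshold."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) / std <= threshold]

# Made-up glucose readings with one gross data-entry error
glucose = [95, 102, 99, 104, 98, 101, 97, 103,
           96, 100, 105, 99, 98, 102, 101, 500]
clean = remove_outliers(glucose)
print(len(clean), 500 in clean)  # 15 False -- only the erroneous reading is dropped
```

In the full pipeline this runs after MICE imputation, so the mean and standard deviation are computed over a complete dataset.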
|
Title: |
ENHANCING CYBER ATTACK DETECTION IN NETWORK TRAFFIC USING ADAPTIVE REGRESSION
TECHNIQUES |
Author: |
Dr.TALLURI.SUNIL KUMAR, P.JYOTHI, Dr.RAJESH KUMAR VERMA, PADMINI DEBBARMA,
Dr.N.V.S.PAVAN KUMAR, I.NAGA PADMAJA, HYMAVATHI THOTTATHYL, Dr.M.SATHISH KUMAR |
Abstract: |
Cyber attack detection is pivotal for preempting threats, securing data, and
safeguarding critical systems against breaches in our digitally reliant world,
ensuring uninterrupted operations and user privacy. Timely detection of cyber
attacks is paramount to prevent potential damages, financial losses, and
reputational harm inflicted upon individuals, organizations, and critical
infrastructure. The proposed algorithm, "AdaptoReg," introduces a novel approach
to cyber attack detection within network traffic using the NSL-KDD dataset. By
integrating adaptive regression techniques inspired by Lasso Regression and
Ridge Regression, this algorithm aims to dynamically adapt to diverse attack
patterns while maintaining robustness against evolving cyber threats. Through
feature engineering and an ensemble strategy reminiscent of Random Forest
Regression, "AdaptoReg" identifies anomalies in network behavior, offering a
comprehensive solution for detecting and flagging potential cyber attacks. The
algorithm undergoes rigorous evaluation, demonstrating its effectiveness in
accurately identifying malicious activities and highlighting its potential as a
valuable tool in enhancing network security and mitigating cyber risks. |
Keywords: |
Cybersecurity, Network Intrusion Detection, Adaptive Regression, Anomaly
Detection, NSL-KDD Dataset |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
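"AdaptoReg" itself is not published as code, so no implementation is attempted here; the sketch below only illustrates the ridge-style shrinkage the abstract says inspired it, for a single feature with no intercept (the toy data are ours):

```python
def ridge_weight(xs, ys, lam):
    """Closed-form ridge solution for one feature, no intercept:
    w = sum(x*y) / (sum(x*x) + lam); lam = 0 recovers ordinary least squares."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

# Toy feature (e.g. a scaled traffic statistic) against an anomaly score.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w_ols = ridge_weight(xs, ys, 0.0)
w_ridge = ridge_weight(xs, ys, 1.0)  # the penalty shrinks the weight toward zero
```

Lasso differs only in penalising the absolute value of the weight, which has no closed form and is usually solved by coordinate descent.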
Title: |
A HYBRID LION PRIDE AND BAT ALGORITHM (HLPBA) FOR OPTIMAL SPOT AND SIZE OF EV
CHARGING STATIONS IN DISTRIBUTION NETWORKS |
Author: |
RAGALEELA DALAPATI RAO, PADMANABHA RAJU CHINDA, KUMAR CHERUKUPALLI |
Abstract: |
A dramatic increase in the number of EVs on city streets has created a pressing
need for well-placed EV charging stations inside distribution
networks. Electric vehicle charging station (EVCS) installation is crucial to
avoid power losses, voltage instability, and network overload. In response to
these issues, this study presents a new algorithm called Hybrid Lion Pride and
Bat Algorithm (HLPBA) for determining the best size and location of electric
vehicle charging stations (EVCS) in radial distribution networks. By integrating
the strengths of the Bat Algorithm (BA) for global exploration and the Lion
Pride Optimization Algorithm (LPOA) for local exploitation, the HLPBA is able to
strike a good balance in its search for optimal solutions. The suggested
algorithm's goal is to keep the voltage stable across the network while
minimizing total active power losses. By implementing the HLPBA into the IEEE
33-bus radial distribution system, power losses are reduced by 72.5% and the
minimum bus voltage is improved to 0.98 p.u. The results show that the HLPBA
outperforms more conventional optimization methods like Genetic Algorithm (GA)
and Particle Swarm Optimization (PSO), therefore making it a great choice for
distribution systems' EVCS placement. |
Keywords: |
Electric Vehicle Charging Stations (EVCS); Distribution Network Optimization;
Power Loss Minimization; Voltage Stability; Hybrid Optimization Algorithm; Lion
Pride Optimization Algorithm (LPOA); Bat Algorithm (BA); Hybrid Lion Pride and
Bat Algorithm (HLPBA). |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
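The exact HLPBA update rules are in the paper, not reproduced here; as a loose, hypothetical sketch of the explore/exploit balance the abstract describes (the objective, bounds, and schedule below are invented for illustration), global random jumps alternate with small perturbations around the incumbent:

```python
import random

def hybrid_search(loss, low, high, iters=200, seed=42):
    """Toy hybrid loop: global random jumps (Bat-style exploration) alternate
    with small local perturbations (Lion-Pride-style exploitation)."""
    rng = random.Random(seed)
    best = rng.uniform(low, high)
    best_loss = loss(best)
    for i in range(iters):
        if i % 2 == 0:            # explore: sample anywhere in the range
            cand = rng.uniform(low, high)
        else:                     # exploit: nudge the best solution so far
            cand = best + rng.gauss(0, (high - low) * 0.01)
        if loss(cand) < best_loss:
            best, best_loss = cand, loss(cand)
    return best

# Hypothetical scalar stand-in for EVCS sizing: loss minimised at capacity 5.
best = hybrid_search(lambda x: (x - 5.0) ** 2, 0.0, 10.0)
```

A real EVCS study would evaluate a power-flow-based loss over candidate bus locations and sizes instead of this scalar toy objective.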
Title: |
A HYBRID APPROACH COMBINING LONG SHORT-TERM MEMORY AND RANDOM FOREST FEATURE
SELECTION FOR NETWORK INTRUSION DETECTION |
Author: |
HANDRIZAL, FAUZAN NURAHMADI, ALBERTMAN PUTRA BARASA |
Abstract: |
Network intrusion detection is essential for identifying suspicious network
activity. However, with technological advancements and the increasing need for
data, data processing becomes a significant challenge in intrusion detection.
This research explores a detection system using Long Short-Term Memory (LSTM)
combined with Random Forest-based feature selection (RFUTE) to enhance
performance. RFUTE reduced data dimensions by selecting 29 features for binary
and 30 for multiclass classification from 43 total features. Tested on the
NF-UQ-NIDS-v2 dataset, the system showed significant improvements, achieving 99%
accuracy in binary classification and 95% in multiclass classification.
Additionally, the AUC-ROC curve of the model with RFUTE showed better
performance than the model without RFUTE, with an increase in AUC to 0.999 for
binary classification and 0.9987 for multiclass classification. These findings
demonstrate that integrating Long Short-Term Memory networks with Random
Forest-based feature selection significantly improves the accuracy and
predictive performance of network intrusion detection systems, particularly in
large-scale and complex environments, and yields a more accurate and efficient
model that identifies network threats quickly and responsively. |
Keywords: |
Network Intrusion Detection, Binary Classification, Multiclass Classification,
LSTM, Random Forest Feature Selection (RFUTE), Feature Selection. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
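RFUTE's internals are not detailed in the abstract; the generic step it relies on, keeping the k features with the highest importance scores, can be sketched as follows (the scores here are made up, standing in for Random Forest importances):

```python
def select_top_k(importances, k):
    """Return indices of the k highest-importance features, best first."""
    order = sorted(range(len(importances)),
                   key=lambda i: importances[i], reverse=True)
    return order[:k]

# Hypothetical importance scores for five flow features; keep the best three.
scores = [0.05, 0.40, 0.10, 0.30, 0.15]
kept = select_top_k(scores, 3)
```

In the paper's setting, k would be 29 (binary) or 30 (multiclass) out of the 43 NF-UQ-NIDS-v2 features, and the kept columns would then feed the LSTM.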
Title: |
DETECTION OF BREAST MASSES USING WATERSHED AND GLCM TEXTURE FEATURES |
Author: |
MOHAMMED RMILI, MOUSTAPHA M. SALECK, ABDELMAJID EL MOUTAOUAKKIL, ABDELLATIF
SIWANE |
Abstract: |
Nowadays, breast cancer is the primary cause of rising death rates among women
in both developed and developing countries. Examining
mammograms to find signs of breast lesions is a challenging task that
radiologists have to perform often. Thus, it is crucial to implement image
analysis techniques to detect and outline breast lesions, as they provide
essential morphological data that can assist in ensuring an accurate diagnosis.
In this paper, we propose an efficient split-and-merge approach for tumor
segmentation in digital mammogram images, using a watershed algorithm and
texture features. First, we adopt the median filter technique to reduce noise
and enhance the quality of the mammograms. Second, we apply Contrast-Limited
Adaptive Histogram Equalization (CLAHE) to improve the interpretability of
texture analysis results. Third, we exploit the watershed algorithm to split the
image into homogenous subregions. Finally, we use the Gray Level Co-occurrence
Matrices (GLCM) technique to localize the suspected subregions and merge these
subregions to detect and outline breast lesions. The mini-MIAS database is used
to evaluate the efficiency of the proposed approach. The experimental results
demonstrate that the proposed hybrid method achieves a highly satisfactory
success rate compared to state-of-the-art breast lesion segmentation methods in
mammogram images. |
Keywords: |
Digital mammogram, tumor segmentation, Watershed algorithm, texture features,
Split-and-Merge |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
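The GLCM step the abstract applies to watershed subregions can be illustrated on a tiny integer image (the offset and level count are chosen for the example; a real pipeline would use a library such as scikit-image):

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix for a single offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def contrast(m):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * count."""
    return sum((i - j) ** 2 * v
               for i, row in enumerate(m) for j, v in enumerate(row))

img = [[0, 0, 1],
       [0, 1, 2],
       [1, 2, 3]]
g = glcm(img)
```

Texture statistics like contrast, computed per subregion, are what allow suspicious subregions to be localized and merged.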
Title: |
EXPLORING ENHANCEMENT IN CONFIDENCE-BASED ASSESSMENT: A SYSTEMATIC LITERATURE
REVIEW |
Author: |
NUR MAISARAH NOR AZHARLUDIN, KHYRINA AIRIN FARIZA ABU SAMAH, MOHAMAD FAIZ BIN
DZULKALNINE, RASEEDA HAMZAH |
Abstract: |
Confidence-based assessment (CBA) is a technique designed to evaluate an
individual’s degree of confidence or expectation regarding their response to
discern their true level of knowledge. In this methodology, individuals assign
confidence scores to their answers, indicating their level of certainty about
the correctness of their choices. This approach enhances understanding of an
individual’s abilities or comprehension by distinguishing between correct
responses with high confidence and those with low confidence. Consequently,
evaluators gain a more comprehensive understanding of an individual’s competence
by examining their cognitive processes and self-awareness. Despite its
potential, there is a lack of systematic reviews focusing on enhancing CBA. This
study addresses this gap by conducting a systematic literature review (SLR) on
improving CBA methodologies. The present study follows the Preferred Reporting
Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines,
systematically analysing 11 articles published between 2019 and 2024. These
articles were selected from three primary databases—Scopus, Web of Science, and
ScienceDirect—and one supplementary database, Google Scholar. The review of
these studies identified four major themes: sector, purpose, algorithm, and
methods used. The findings of this SLR provide valuable insights into the
current state of CBA research and suggest directions for future studies. In
conclusion, this research offers significant benefits for scholars in the CBA
field, providing a reference for enhancing the application and understanding of
CBA. |
Keywords: |
Confidence, Confidence-Based Assessment, Confidence-Based Learning,
Enhancement, Systematic Literature Review |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
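The reviewed papers use varying CBA scoring rules; one simple, hypothetical scheme (not taken from any of the 11 surveyed articles) rewards confident correct answers and penalises confident wrong ones:

```python
def cba_score(correct, confidence):
    """Toy confidence-based mark: +confidence if correct, -confidence if not."""
    return confidence if correct else -confidence

# Three answers: confident-correct, hesitant-correct, confident-wrong.
answers = [(True, 0.9), (True, 0.2), (False, 0.8)]
total = sum(cba_score(c, conf) for c, conf in answers)
```

Under such a rule, two test-takers with identical right/wrong tallies can receive different totals, which is exactly the self-awareness signal CBA aims to capture.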
Title: |
DUAL Q-LEARNING WITH GRAPH NEURAL NETWORKS: A NOVEL APPROACH TO ANIMAL DETECTION
IN CHALLENGING ECOSYSTEMS |
Author: |
JOHNWESILY CHAPPIDI, DIVYA MEENA SUNDARAM |
Abstract: |
Detecting wild animals is crucial to prevent road accidents caused by their
crossings and mitigate intrusions into residential areas. Existing methods often
struggle with complex spatial contexts and environmental variability. This study
introduces an integrated approach using Graph Neural Networks (GNNs), advanced Q
Learning, Multi-Attribute Utility Theory (MAHP) with deep learning, and
Generative Adversarial Networks (GANs) for data augmentation. The model enhances
spatial awareness with GraphSAGE and Graph Attention Networks (GAT), employs
Deep Q-Networks (DQN) for adaptive decision-making, integrates MAHP with custom
CNNs for nuanced attribute evaluation, and utilizes Conditional GANs for
synthetic data generation. Comparative evaluations show substantial enhancements
in accuracy, precision, recall, speed, AUC, and specificity, establishing new
benchmarks for wildlife detection in challenging conditions. This research
advances automated wildlife monitoring, which is crucial for biodiversity
conservation and addressing ecological challenges through integrated
computational techniques. |
Keywords: |
Graph Neural Networks, Advanced Q Learning, Multi-Attribute Utility Theory,
Generative Adversarial Networks |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
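The study's DQN uses neural function approximation, which is beyond a short sketch; the underlying tabular Q-learning update it generalises is, however, compact (states, actions, and the reward below are toy values):

```python
def q_update(q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[s_next])
    q[s][a] += alpha * (r + gamma * best_next - q[s][a])
    return q

# Two states, two actions, zero-initialised; one rewarding transition.
q = [[0.0, 0.0], [0.0, 0.0]]
q = q_update(q, s=0, a=1, r=1.0, s_next=1)
```

A DQN replaces the table `q` with a network predicting Q-values from image features, trained on the same temporal-difference target.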
Title: |
PREDICTING THE WEAVABILITY OF A NEW WOOLLEN FABRIC USING FUZZY LOGIC |
Author: |
M. EL BAKKALI, R. MESSNAOUI, M. ELKHAOUDI, O. CHERKAOUI, Pr. ATIKA RIVENQ
MENHAJ, Pr. AZIZ SOULHI |
Abstract: |
Weaving saturation can lead to several undesirable problems, such as degraded
loom performance, premature wear of mechanical parts and loss of expensive
raw materials. Therefore, when designing and creating new woollen fabrics, it is
crucial to adjust yarn densities and qualities according to the weaves to pass
weavability tests. This study focuses on the development of a practical fuzzy
logic model to predict the saturation of new 100% Wool fabrics. To validate this
fuzzy model, an experimental part was carried out. The fabric samples used in
this study came from three different weave types (plain, twill and satin) and
included five weft counts (Nm) and nine different densities. The results
obtained using the fuzzy logic model developed were compared with the
experimental values. The predictions generated by the fuzzy logic model were
found to be satisfactory and accurate, demonstrating its effectiveness for
predicting the saturation of a new 100% wool fabric. |
Keywords: |
Saturation Index, Weaving, Fuzzy Logic, Modelling, Weavability. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
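The paper's fuzzy model is built from membership functions over yarn and density variables; a triangular membership function, the most common building block, looks like this (the "medium density" range below is hypothetical, not taken from the paper):

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical 'medium density' fuzzy set peaking at 20 threads/cm.
mu = tri_membership(18.0, 10.0, 20.0, 30.0)
```

A full Mamdani-style model would combine such memberships through rules (e.g. "IF density is high AND count is fine THEN saturation is high") and defuzzify the result into a saturation index.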
Title: |
ENHANCED LDA MODEL FOR SENTIMENT ANALYSIS (ELDASA) |
Author: |
ALURI SUGUNA RATNA PRIYA, A. MARY SOWJANYA |
Abstract: |
In the contemporary era, participative Internet communication or Social media
platforms are widely used by people from all walks of life. Sharing opinions on
various products and services has become common over social media. Unlike the
conventional approach, opinions freely expressed over social media are goldmines
to businesses. Analysing public sentiments has the potential to leverage
business intelligence. Many researchers exploited Natural Language Processing
(NLP) and Machine Learning (ML) to mine and ascertain opinions in online
user-generated reviews. However, processing large text corpora is still
challenging and prone to deteriorated performance. In this paper, we proposed a
Generative Framework using an enhanced version of Latent Dirichlet Allocation
Model considering sentiment polarities and latent aspects and we developed an
algorithm named Enhanced Dirichlet Allocation Model for Sentiment Analysis
(ELDASA) to realize the framework. This model is supported by a learning-based
approach with ML toward the identification of sentiments and classifying them.
Our empirical study using three social media datasets, consisting of reviews on
hotels, music, and games, revealed that the proposed algorithm supports
effective sentiment analysis. |
Keywords: |
Latent Dirichlet Allocation, Machine Learning, Sentiment Classification,
Sentiment Analysis, Natural Language Processing. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
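ELDASA itself extends LDA, which is too large to sketch here; the sentiment-polarity signal it folds into the topic model can be approximated in its crudest form by lexicon counting (the word lists are invented for illustration):

```python
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "noisy"}

def polarity(review):
    """Crude lexicon polarity: positive minus negative word counts."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

score = polarity("Great hotel but terrible noisy rooms")
```

In a sentiment-aware LDA, such polarity labels bias the word-topic assignments during sampling rather than being used directly as the final classification.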
Title: |
NEUTROSOPHICAL MULTIPLE REGRESSION ENRICHED CHAOS DEEP BELIEF NETWORK FOR
DYSLEXIA PREDICTION |
Author: |
VINODH M R , P.J. ARUL LEENA ROSE |
Abstract: |
Children with dyslexia can use appropriate resources and specialised software
to enhance their skills when the problem is diagnosed early. Deep learning and
machine learning techniques analyze dyslexia-related datasets from healthcare
and educational sources, yet conventional models struggle with the inherent
vagueness of dyslexia data, often represented in intervals. In this paper, a
generalized model of fuzzy and intuitionistic fuzzy sets, known as the
neutrosophic multiple regression model, is used to determine the degree of
dependency between the independent and dependent variables of a dyslexia
dataset. In the neutrosophic concept, each attribute is defined by truth,
falsity and indeterminacy memberships, which are independent of each other. The
correlation among the attributes is determined using the neutrosophic least
squares error method. Existing deep neural networks that use gradient-based
optimization end up in local minima, resulting in early convergence. In the
proposed work, the deep belief network's hyperparameters are tuned with chaos
synchronization for classification. Two datasets, the Dyslexic 12-4 dataset
from the Keel software repository and a real-time dataset collected from
dyslexia schools in various districts, are used for performance comparison. The
results show that the proposed neutrosophic regression model produced the
highest detection rate in dyslexia prediction compared with existing models.
This study leverages advancements in AI to address the complex task of early
dyslexia diagnosis. By employing neutrosophic multiple regression models
alongside chaos deep belief networks, it achieves high prediction accuracy and
handles data uncertainties effectively. This work significantly contributes to
improving educational and healthcare outcomes for dyslexic children,
underscoring the role of IT in learning disability interventions. |
Keywords: |
Dyslexia, Uncertainty, Chaos Theory, Neutrosophic Multiple Regression, Deep
Belief Network |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
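The key representational idea, that each attribute carries independent truth, indeterminacy, and falsity memberships, can be stated in a few lines (this validity check reflects the standard single-valued neutrosophic constraint, not code from the paper):

```python
def is_neutrosophic(t, i, f):
    """Single-valued neutrosophic element: truth, indeterminacy and falsity
    memberships each lie in [0, 1] independently, so t + i + f may reach 3."""
    return all(0.0 <= v <= 1.0 for v in (t, i, f))

# Unlike a fuzzy membership pair, the components need not sum to 1.
ok = is_neutrosophic(0.8, 0.6, 0.3)
```

It is this extra indeterminacy component that lets interval-valued dyslexia attributes be represented without forcing a single crisp degree of belief.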
Title: |
IMPROVING TEXT CLASSIFICATION IN FEDERATED LEARNING THROUGH TRANSFER LEARNING: A
COMBINED APPROACH FOR ENHANCED CONVERGENCE AND PERSONALIZATION |
Author: |
Dr. MOHAMMED ABDUL WAJEED, Dr. ANNAVARAPU CHANDRA SEKHARA RAO, Dr. T.K. SHAIK
SHAVALI, Dr. ABDUL RASOOL MD |
Abstract: |
Federated learning (FL) has become a prominent decentralized approach to machine
learning, allowing collaborative model training across distributed clients
without compromising sensitive data. However, the diversity of data among
clients, especially in text classification tasks, often results in slower
convergence and subpar performance. This study introduces a novel framework that
combines transfer learning with federated learning to tackle the slow
convergence that arises when learning begins from scratch with randomly
initialized model parameters. By utilizing pre-trained language models, we
propose a two-phase approach where clients first fine-tune a pre-trained model
on their local text data, taking into account the parameters already learned,
before engaging in a federated averaging
process. This integration facilitates quicker convergence, better generalization
across varied client data, and improved model customization for individual
clients. We assess our methodology on several text classification datasets with
differing levels of data heterogeneity and demonstrate that our approach
significantly enhances both overall accuracy and communication efficiency
compared to conventional federated learning techniques. The findings highlight
the potential of transfer learning in boosting the effectiveness and scalability
of federated learning for real-world text classification applications. |
Keywords: |
Federated Learning, Transfer Learning, Text Classification, Federated
Averaging, Heterogeneous Data |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
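The federated averaging step in the second phase can be sketched directly; this is the standard FedAvg weighted mean over client parameter vectors, shown on toy numbers (real clients would exchange model tensors, not short lists):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: average client parameter vectors weighted by local data size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients; the one holding more data pulls the global weights toward it.
global_w = fed_avg([[1.0, 0.0], [3.0, 2.0]], [1, 3])
```

With transfer learning, the vectors being averaged are fine-tuned offsets from a shared pre-trained initialization rather than weights learned from scratch, which is what speeds up convergence.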
Title: |
PERFORMANCE ANALYSIS OF CONVENTIONAL AND DEEP LEARNING IMAGE FUSION METHODS |
Author: |
SURYA PRASADA RAO BORRA, RAJESH K PANAKALA, T.V. HYMA LAKSHMI, RAJA SEKHAR
PITTALA, D.N.V. SATYA NARAYANA, GUNTI SURENDRA, A. GEETHA DEVI |
Abstract: |
This paper presents a comparative study of three image fusion techniques: a
hybrid method designed from conventional fusion techniques, fusion of an
enhanced low-dose Computerized Tomography (CT) image with a Magnetic Resonance
Imaging (MRI) image, and a Deep Learning (DL)-based fusion technique. Firstly,
it is observed that the hybrid method outperforms single-mode fusion such as
Discrete Wavelet Transform (DWT)- or Principal Component Analysis (PCA)-based
image fusion. Secondly, the performance of fusion applied to enhanced low-dose
CT and MRI images is on par with fused normal-dose CT and MRI images, based on
results obtained with the standard fusion quality metrics. Finally, the
performance of DL-based image fusion built on a Convolutional Neural Network
(CNN) is evaluated. The Peak Signal to Noise Ratio (PSNR) of the DL-based
fusion is more than 25 dB higher than that of the conventional fusion methods.
It can therefore be concluded that DL-based image fusion will be the advanced
form of medical image fusion for obtaining a proper diagnosis of different
diseases. |
Keywords: |
Discrete Wavelet Transform; Principal Component Analysis; Image Fusion; Deep
Learning; Convolutional Neural Network; Multi-Modal Images |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
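The PSNR metric quoted in the comparison is defined as 10·log10(MAX²/MSE); a direct implementation over flat pixel sequences (toy values, 8-bit range assumed):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

value = psnr([100, 120, 140, 160], [101, 119, 141, 159])
```

Because the scale is logarithmic, a 25 dB gap corresponds to a roughly 300-fold reduction in mean squared error, which is why it is considered a large margin.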
Title: |
ENHANCING SMART HOME SECURITY: A BLOCKCHAIN- INTEGRATED IOT APPLICATION WITH NFT
AUTOMATION CONTRACTS |
Author: |
Mrs. SUDHA KAPUGANTI, Mr. SEETARAMANATH M. N., Mr. V. KAMAKSHI PRASAD |
Abstract: |
With the ongoing growth of smart home technology, securing and managing IoT
devices has become increasingly critical. This study introduces an innovative
approach to smart home security by integrating blockchain technology with IoT
devices, utilizing NFT-automated smart contracts to enhance security and
management. The research aims to develop a solution that automates and
strengthens the security protocols of smart home IoT devices while establishing
a reliable framework for ownership management. By deploying a private
blockchain, the system ensures data integrity, restricts access to authorized
individuals, and reinforces device protection. NFT automation contracts further
advance the system by creating a verifiable ownership structure, promoting
secure and efficient device operations. Communication among IoT devices is
optimized with the MQTT protocol, enabling efficient data transmission and
consistent, rapid connectivity. Performance evaluations on real-time networks
confirm the system’s usability, security, and operational efficiency, showcasing
the value of blockchain and NFT technologies in enhancing the scalability and
security of modern smart home systems. This research expands the potential of
smart home technology, delivering a more secure, efficient, and scalable
automation framework. |
Keywords: |
Message Queue Telemetry Transport (MQTT) Protocol, Blockchain Technology,
Internet of Things, Smart Contracts, Home Automation, NFT-Automation. |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
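The paper's private blockchain is a full system; the core integrity property it relies on, each block's hash committing to its payload and its predecessor, fits in a short sketch (the block fields and payload format here are ours, not the paper's):

```python
import hashlib

def make_block(index, data, prev_hash):
    """Minimal block: its hash commits to the index, payload and previous hash."""
    payload = f"{index}|{data}|{prev_hash}".encode()
    return {"index": index, "data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload).hexdigest()}

def valid_chain(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        expected = make_block(block["index"], block["data"], block["prev"])["hash"]
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(0, "device registered", "0" * 64)
chain = [genesis, make_block(1, "door sensor reading", genesis["hash"])]
```

Tampering with any recorded IoT event invalidates that block's hash and every link after it, which is the guarantee the NFT ownership records build on.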
Title: |
AI BASED CLUSTER HEAD BASED MOBILE ADHOC NETWORK FOR PERFORMANCE IMPROVEMENT |
Author: |
S. HEMALATHA, S. SHALINI, U. SATHYA, P. KUMARAVEL, S. VAMSEE KRISHNA,
PRAMODKUMAR H KULKARNI |
Abstract: |
The transmission of packets in Mobile Ad Hoc Networks (MANETs) often experiences
delays due to changes in internal parameter values caused by intruders or
attackers. Additionally, power consumption in wireless networks remains a
challenging issue, as the depletion of a node's internal battery can disrupt the
entire communication system. To address these challenges, various methods have
been proposed to enhance transmission time and optimize MANET battery
management, enabling better evaluation of performance metrics. This article
introduces novel Artificial Intelligence (AI)-based techniques for improving
transmission time and power efficiency in MANETs using cluster heads. Clustering
nodes are dynamically formed within a region based on factors such as high
forward time, low delay, residual power, mobility, and node connectivity. These
cluster nodes play a pivotal role in selecting optimal transmission paths,
thereby enhancing communication efficiency and power optimization. The proposed
Forward Parameter-Based Wireless Routing Protocol (FPWP) was integrated into the
on-demand AODV routing protocol, resulting in the FPWP-AODV protocol.
Comparative simulations with existing AODV protocols demonstrated that FPWP-AODV
significantly outperforms its counterparts, achieving superior results in terms
of transmission time and power management, making it an effective solution for
MANET performance improvement. |
Keywords: |
MANET, Cluster Head, Transmission Time, Route Management, Lifetime, Forward
Time, Forwarded Packet |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
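The abstract lists the factors used to form cluster heads (forward time, delay, residual power, mobility, connectivity) without giving weights; one hypothetical weighted-score selection, with invented weights and node data, might look like:

```python
def pick_cluster_head(nodes):
    """Choose the node with the best weighted score of residual power,
    connectivity, and (negated) delay -- weights are illustrative only."""
    def score(n):
        return 0.5 * n["power"] + 0.3 * n["links"] - 0.2 * n["delay"]
    return max(nodes, key=score)["id"]

nodes = [
    {"id": "a", "power": 0.9, "links": 4, "delay": 0.2},
    {"id": "b", "power": 0.6, "links": 6, "delay": 0.1},
    {"id": "c", "power": 0.3, "links": 2, "delay": 0.5},
]
head = pick_cluster_head(nodes)
```

In FPWP the scoring would be recomputed as nodes move and batteries drain, so cluster headship rotates with network conditions.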
Title: |
DYNAMIC TABLE ELIMINATION MODEL FOR SECURE NETWORK CONNECTIVITY IN WIRELESS
SENSOR NETWORK ENVIRONMENT |
Author: |
DR. MYLA THYAGARAJU, DR. VENKATESWARLU CHANDU, A. CHARAN TEJA, DR. CH. SAHYAJA,
ANKAM DHILLI BABU, NOURIEN MOHAMMAD, DR. CH. V. RAMA KRISHNA RAO |
Abstract: |
Wireless Sensor Networks (WSNs) consist of spatially distributed autonomous
sensors that monitor physical or environmental conditions, such as temperature,
humidity, or pressure, and cooperatively pass their data through the network to
a main location. As WSNs are widely deployed in critical applications, including
military, healthcare, and industrial automation, network security becomes
paramount. Ensuring the confidentiality, integrity, and availability of data in
WSNs is challenging due to their inherent constraints, such as limited
processing power, memory, and energy resources. Key security issues include
protecting against unauthorized access, data tampering, eavesdropping, and
denial of service attacks. This paper proposes a novel Dynamic Address Table
Removal (DATR) model. DATR performs Weighted Dynamic Routing (WDR) to compute
the routing path in the network. Once the routing path is estimated, the table
removal method alters the address of each node for data transmission in the
network scenario. DATR combines a cryptographic process with table removal to
secure efficient data transmission and reception in the network. Simulation
results demonstrate that the proposed DATR model achieves a significant
throughput of 92.56% with an attack detection rate of 93%. Through extensive
simulations, DATR demonstrates significant performance gains over traditional
protocols such as AODV and DSR: it achieves up to 20% higher throughput,
maintains a packet delivery ratio exceeding 97%, and reduces energy consumption
by approximately 10-15% across varying network sizes. Overall, the proposed
DATR model achieves a higher attack detection rate and higher network
throughput than the conventional techniques. |
Keywords: |
Networking, Security, Confidentiality, Dynamic Routing, Table Removal, Attack
Detection |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
Title: |
AN APPLICATION OF CLUSTERING TECHNIQUE FOR SELECTING SNP MOLECULAR MARKERS IN
RICE GENOME |
Author: |
LUXSANAN PLOYWATTANAWONG , SUCHA SMANCHAT , SISSADES TONGSIMA |
Abstract: |
A selection of Single Nucleotide Polymorphism (SNP) molecular markers whose
unique genotypic combinations represent individual rice breeds is a critical
consideration in rice breeding programs. Due to the complexity of SNP data with
unknown target phenotypes, identifying trait-associated markers presents a
significant challenge. Existing research has applied both supervised and
unsupervised techniques to genetic and protein datasets; however, little effort
has been directed toward SNP analysis. To mitigate the time and difficulties
involved in exploring and identifying important markers for biologists, we
employ a clustering technique for selecting significant SNPs from the rice
genome. The experimental dataset comprises genome-wide SNPs from 88 rice breeds,
each containing 50,172 SNPs. We propose an iterative application of the K-means
clustering method to cluster these rice breeds into an increasing number of
clusters. To identify potentially important SNP markers, the frequency with
which each SNP is closest to the centroid of its group is counted. The SNPs are
then ranked based on this frequency. The results demonstrate that the proposed
method can distinguish certain SNPs that are more frequently closest to the
centroids, potentially indicating their importance as biomarkers. Among
thousands of SNPs, these can be recommended to biologists for further
investigation in wet-lab experiments. |
Keywords: |
Rice Genome; K-means Clustering; Molecular Markers; Single Nucleotide
Polymorphism; Bioinformatics |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
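The counting step at the heart of the method, finding which member of each cluster lies closest to its centroid so its frequency can be tallied across clusterings, can be sketched in one dimension (the encoded genotype values and assignments are toy data):

```python
def nearest_to_centroid(points, assignments, k):
    """For each cluster, return the index of the point closest to its mean."""
    winners = []
    for c in range(k):
        members = [i for i, a in enumerate(assignments) if a == c]
        mean = sum(points[i] for i in members) / len(members)
        winners.append(min(members, key=lambda i: abs(points[i] - mean)))
    return winners

# Toy 1-D encoding of SNP genotypes for six breeds in two clusters.
points = [0.0, 0.1, 0.4, 1.0, 1.1, 1.5]
winners = nearest_to_centroid(points, [0, 0, 0, 1, 1, 1], k=2)
```

Repeating this over K-means runs with increasing k, and counting how often each SNP wins, produces the frequency ranking the paper uses to nominate candidate markers.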
Title: |
MOBILE ADHOC NETWORK INTRUDER NODE DETECTION AND PREVENTION FOR EFFICIENT
PACKET TRANSFERRING |
Author: |
S. HEMALATHA, TAVANAM VENKATA RAO, S. SHALINI, SHRUTHI S NAIR, SURYA LAKSHMI
KANTHAM VINTI, DR. G. KRISHNA MOHAN |
Abstract: |
Wireless networks, particularly those without infrastructure, are vulnerable to
security threats. Mobile Adhoc Networks (MANETs) are especially vulnerable to
security breaches, with intruders constituting a substantial risk. These
intruders seek to degrade network performance by sending duplicate packets to
surrounding nodes, increasing the burden on these nodes and reducing overall
network performance. Numerous research efforts are focused on detecting and
avoiding such invasions. This article focuses on intruders in MANET
communication, outlining their strategies and the negative impact on network
performance. This paper describes a study on identifying intruders in MANET
routing traffic using the Watch Dog Algorithm and a threshold-based
categorization technique. The study's goal is to verify whether the identified
nodes are invaders by running simulations with NS2.34 and evaluating the
outcomes using important parameters including attack rate, detection time,
packet delivery ratio (PDR), and end-to-end delay (END). The suggested WDBIC
model outperforms the standard AODV protocol
in a variety of MANET performance metrics. Specifically, the WDBIC model has a
greater attack rate, a slightly smaller percentage of normal nodes across
different node counts, detects attackers faster, and consistently gives superior
packet delivery ratios across various transmission parameters. Additionally, the
WDBIC model lowers end-to-end latency by 6.2% to 43.4% when compared to the AODV
protocol. These findings show that the WDBIC model outperforms the classic AODV
protocol in MANETs in terms of both efficiency and attack detection. |
Keywords: |
MANET, Intruder Node, Packet, Intruder Detection |
Source: |
Journal of Theoretical and Applied Information Technology
15th December 2024 -- Vol. 102. No. 23-- 2024 |
Full
Text |
|
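The Watch Dog idea with threshold-based categorization can be sketched as a forwarding-ratio check (the threshold, node names and packet counts are invented; the paper's NS2.34 simulation tracks far more state):

```python
def flag_intruders(stats, threshold=0.5):
    """Flag nodes whose forwarded/received ratio falls below the threshold,
    i.e. nodes overheard dropping most packets they should relay."""
    flagged = []
    for node, (received, forwarded) in stats.items():
        ratio = forwarded / received if received else 1.0
        if ratio < threshold:
            flagged.append(node)
    return sorted(flagged)

# Node 'n3' drops most packets it should forward and is flagged.
stats = {"n1": (100, 95), "n2": (80, 70), "n3": (60, 10)}
suspects = flag_intruders(stats)
```

In the full scheme, flagged nodes would then be excluded from AODV route selection, which is what drives the reported gains in PDR and end-to-end delay.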