|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, from the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at /submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review; we will edit out the necessary
information at our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of Theoretical and Applied Information Technology
March 2025 | Vol. 103 No. 6 |
Title: |
DEEP LEARNING-DRIVEN FORECASTING MODELS FOR IOT DATA IN CLOUD COMPUTING
ENVIRONMENTS: LEVERAGING TEMPORAL CONVOLUTIONAL NETWORKS |
Author: |
OTHMAN EMRAN ABOULQASSIM, FARHAT EMBARAK, JAYASHREE S, ABDULHAMID ELTHEEB |
Abstract: |
Data-driven techniques for machine tool wear detection and forecasting have
gained prominence in the past several years. This study investigates how well
Temporal Convolutional Networks (TCNs) perform in cloud computing contexts for IoT data
prediction. Because TCNs are good at capturing temporal patterns and long-term
relationships, they are useful for time-series forecasting problems. Utilizing
convolutional layers, TCNs differ from conventional Recurrent Neural Networks in
that they analyse data in parallel, enhancing adaptability and decreasing
training time. Dilated convolutions are included in TCNs to further improve
their capacity to identify trends over long periods without adding to the
computational complexity, which makes them appropriate for long-lasting
dependencies and recurring trends in IoT data. The study shows that TCNs perform better than
existing models such as RNN, LSTM, GRU-LSTM, and CNN-LSTM on metrics such as
R² and Mean Absolute Percentage Error. The study was conducted on a
Python platform running Windows 11. TCNs attained an MAE of 98.7%, RMSE of
97.6%, MAPE of 98.0%, and R² of 97.7%, according to the results. Although the
error metrics are higher, the significant R² value suggests a strong model
fit. The study draws attention to several problems with TCNs, such as the
requirement for large labelled datasets, interpretability, data quality, and
high computational demands. The study also highlights how the
scalability and flexibility offered by cloud platforms enable effective
management of massive IoT data streams and real-time analysis. The results
indicate that TCNs may greatly improve resource utilization and forecasting accuracy in
IoT-cloud environments, but more development and study are required to
fully realize their capabilities. |
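A minimal sketch of the dilated causal convolution idea the abstract describes, written in PyTorch; layer widths, dilation schedule, and input sizes are illustrative assumptions, not the paper's configuration.

```python
# Dilated causal convolutions: each layer only sees past samples, and stacking
# dilations 1, 2, 4, ... grows the receptive field exponentially.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution padded on the left so outputs never see future samples."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation           # left padding only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                                 # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))           # pad the past, not the future
        return self.conv(x)

tcn = nn.Sequential(
    CausalConv1d(1, 16, dilation=1), nn.ReLU(),
    CausalConv1d(16, 16, dilation=2), nn.ReLU(),
    CausalConv1d(16, 16, dilation=4), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=1),                      # point-wise forecast head
)
forecast = tcn(torch.randn(8, 1, 128))                    # 8 IoT series, 128 time steps each
print(forecast.shape)
```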
Keywords: |
IoT Data Forecasting, Cloud Computing in IoT, Temporal Convolutional Networks
(TCNs), Deep Learning, Dilated Convolutions, Machine Tool Wear Detection |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
IMPROVED DEEP LEARNING WITH SELF-ADAPTIVE ALGORITHMS FOR ACCURATE STRESS
DETECTION: CASCADED CNN_BILSTM_GRU METHOD |
Author: |
FATEH BAHADUR KUNWAR, RAKESH KUMAR YADAV, HITENDRA SINGH |
Abstract: |
Stress detection is crucial in various fields due to its significant negative
effects on individuals and groups. Eradicating stress is difficult, thus the
need to manage its physical and mental consequences. Current stress detection
methods are ineffective and require enhancement. Traditional approaches struggle
to accurately detect stress, particularly with complex and diverse data. The
paper introduces a new model, Cascaded CNN_BiLSTM_GRU with a self-adaptive Walrus
Optimization Algorithm (SA-WaOA), to address stress detection inefficiency by
using adaptive techniques. The aim is to develop a dependable deep-learning
model for early identification and support in mental stress intervention and
resource allocation to enhance individual well-being. This newly proposed
structure involves a cascaded CNN_BiLSTM_GRU in which the dilation rate is adapted
via the Weibull distribution function, the dropout is adapted using the Cumulative
Distribution Function (CDF), and an adaptive loss function is established by
exploiting the Bernoulli distribution. The Gated Recurrent Unit (GRU), in a
cascaded CNN_BiLSTM_GRU, has been utilized for learning the spectral and
temporal features. The model, implemented on a Python platform, shows improved stress
detection accuracy. The research contributes significantly to the field of IT by
introducing an innovative deep-learning model that leverages adaptive mechanisms
for enhanced stress detection, setting a new benchmark in mental health
monitoring and intervention. The proposed approach offers a robust framework for
handling complex and diverse stress-related data, thereby improving the accuracy
and efficiency of stress detection systems. The model's performance was evaluated
using various metrics, and the results show high accuracy across different datasets and learning
rates. EEG Feature dataset: 95.05% accuracy (70/30), 96.33% accuracy (80/20).
Emotion dataset: 95.51% accuracy (70/30), 96.28% accuracy (80/20). Stress
Detection dataset: 95.95% accuracy (70/30), 96.65% accuracy (80/20). DASPS
dataset: 96.71% accuracy (70/30), 97.71% accuracy (80/20). |
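A minimal Keras sketch of a cascaded CNN -> BiLSTM -> GRU classifier of the kind the abstract describes; the layer widths, the adaptive dilation/dropout mechanisms, and the SA-WaOA tuning step from the paper are not reproduced, and the input shape is an assumption.

```python
# Cascade: convolutional front-end, bidirectional LSTM for temporal context,
# GRU for a compact spectral/temporal summary, then a softmax classifier.
from tensorflow.keras import layers, models

def cascaded_cnn_bilstm_gru(timesteps=128, channels=14, n_classes=2):
    inp = layers.Input(shape=(timesteps, channels))            # e.g. windowed EEG signals
    x = layers.Conv1D(32, kernel_size=5, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.GRU(32)(x)
    x = layers.Dropout(0.3)(x)                                  # fixed dropout; the paper adapts it
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = cascaded_cnn_bilstm_gru()
model.summary()
```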
Keywords: |
Cascaded CNN_BiLSTM_GRU; Self-Adaptive Walrus Optimization Algorithm; Weibull
Distribution Function; Cumulative Distribution Function; Bernoulli Distribution
Function. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
ROLES AND CHALLENGES OF BLOCKCHAIN TECHNOLOGY ADOPTION IN ACCOUNTING AND
AUDITING |
Author: |
MARCELLINE AUDI LEO , MARSHA MARGARETHA , LUSIANAH |
Abstract: |
The transformation of digital technology has had a significant impact on various
industries, including accounting and auditing practices, where blockchain
technology offers potential for increasing transparency, security, process
efficiency, and data reliability. Through a literature review, this study
evaluates the benefits of and the constraints on adopting blockchain technology
in accounting and auditing. Blockchain, as a decentralized technology that offers high
transparency and security, has the potential to develop accounting and auditing
practices by improving efficiency, effectiveness, security, accuracy, and
reliability of data. However, there are several significant challenges that
still hinder the widespread adoption of this technology, such as regulatory
constraints and cyber and hacker threats. This study contributes to providing a
comprehensive understanding of how blockchain technology can be integrated into
accounting and auditing processes. The findings of this study are expected to
help practitioners and researchers develop more detailed, in-depth, and
effective strategies by utilizing the potential of blockchain technology to
increase transparency, efficiency, security, and trust in financial information
management. |
Keywords: |
Blockchain, Accounting, Auditing, Technology, Literature |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
ACCESSIBILITY GAPS IN OMANI HOSPITAL WEBSITES: A WCAG 2.1 COMPLIANCE STUDY |
Author: |
MOHAMMAD ABU KAUSAR , MOHAMMAD NASAR , SALLAM OSMAN FAGEERI |
Abstract: |
In the digital age, ensuring web accessibility in the healthcare sector is
critical for inclusivity, especially for individuals with disabilities. This
study evaluates the accessibility compliance of top-ranked hospital websites in
Oman against the Web Content Accessibility Guidelines (WCAG) 2.1 standards.
Through the use of automated tools (e.g., WAVE, TAW, and EIII), accessibility
metrics were analyzed to identify common accessibility problems such as missing
alternative text, low contrast ratios between background and foreground, and
operability issues. Public and private sector websites of the top six hospitals
formed the sample. Results show significant differences, with private hospitals
being more prone to accessibility violations than public hospitals. Common
categories of accessibility mistakes included information and relationships,
non-text content, labels or instructions, headings and labels and keyboard
accessibility. The study concludes that most hospital websites in Oman generally
fail to meet web accessibility standards, restricting the functionality
available to many users. This study highlights the need for an organized web
accessibility criteria framework for the healthcare sector in Oman and
contributes valuable perspectives toward a more digitally inclusive future as a
part of the Vision 2040 goal. |
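An illustrative sketch of one of the accessibility checks mentioned above (missing alternative text); the actual audits used WAVE, TAW, and EIII, and the URL below is a placeholder, not one of the evaluated hospital sites.

```python
# Flag <img> elements whose alt attribute is absent or empty, a common WCAG 2.1 failure.
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src") for img in soup.find_all("img")
            if not (img.get("alt") or "").strip()]        # empty or missing alt text

missing = images_missing_alt("https://example-hospital.example/")   # placeholder URL
print(f"{len(missing)} images lack alt text:", missing[:5])
```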
Keywords: |
Web Accessibility, WCAG 2.1, Healthcare Websites, Digital Inclusivity,
Accessibility Compliance, Vision 2040 |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
USING SENTENCE TRANSFORMERS FOR SELF-ASSESSMENT IN DIGITAL TRANSFORMATION |
Author: |
Bachira Abou El Karam , Tarik Fissaa , Rabia Marghoubi |
Abstract: |
Digital transformation requires rethinking a company’s organization to identify
the necessary changes for implementing digital initiatives. It goes beyond
technology, encompassing corporate strategy and impacting organizational
culture, employee involvement, customer orientation, and business models. To
embark on a digital transformation project, companies must first assess their
current state regarding strategy, digital maturity, and organizational culture.
Existing evaluation methods rely either on consulting services, which are
effective but costly for SMEs, or closed-response questionnaires, which are
quicker and standardized but limit expression, potentially introducing biases.
To address these challenges and find a compromise between affordability and
precision, this paper proposes an automated self-assessment approach based
on open-ended responses, leveraging Sentence Transformers to evaluate and score
SMEs' current state. Since this evaluation must be highly precise, given the
strategic decisions and investments that follow from it, and since unrestricted
responses are highly diverse, particularly in a francophone context where
cultural and linguistic nuances can significantly influence results, the
approach must be tested against and compared with the manual method, which is
often considered the reference. To achieve this, a case study was conducted on
a Moroccan SME that had previously been audited manually by a consulting firm,
and the open-ended responses from this audit were subsequently analyzed
automatically using a Sentence Transformer-based method as well as using
statistical techniques: TF-IDF and LSA. The results demonstrated a strong
alignment between the proposed approach and expert evaluations, confirming its
effectiveness as a cost-efficient and scalable solution for SMEs, while
outperforming other evaluated methods. |
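A minimal sketch of scoring an open-ended answer against maturity-level rubric sentences with Sentence Transformers, in the spirit of the approach above; the model name, rubric sentences, and answer text are illustrative assumptions, not the paper's setup.

```python
# Embed the answer and each rubric level, then pick the level with the highest cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Multilingual model chosen here only because the responses are francophone (assumption).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

rubric = {
    1: "Aucune stratégie digitale n'est définie.",
    3: "Une stratégie digitale existe mais n'est pas pilotée par des indicateurs.",
    5: "La stratégie digitale est pilotée par des indicateurs et revue régulièrement.",
}
answer = "Nous avons une feuille de route digitale, mais sans indicateurs de suivi."

answer_emb = model.encode(answer, convert_to_tensor=True)
scores = {level: float(util.cos_sim(answer_emb, model.encode(text, convert_to_tensor=True)))
          for level, text in rubric.items()}
best_level = max(scores, key=scores.get)        # rubric level most similar to the answer
print(scores, "->", best_level)
```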
Keywords: |
Digital Transformation, Self-Assessment, Neural network transformers, NLP,
Semantic Similarity, Decision Making. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
ENHANCING POWER SYSTEM SECURITY WITH A HYBRID SATS ALGORITHM FOR OPTIMAL POWER
FLOW |
Author: |
KUMAR CHERUKUPALLI , BADDU NAIK BHUKYA , PADMANABHA RAJU CHINDA |
Abstract: |
The security and performance of electricity systems are the primary concerns in
their planning and operation. It is imperative to establish appropriate
procedures for the maintenance and enhancement of security within the power
system. This research introduces a hybrid simulated annealing and tabu search
(Hybrid SATS) approach to address the security-constrained optimal power flow
problem. The main aim of the research is to improve power system security and
reduce generator fuel expenses. Contingency ranking is employed to determine
line outages. The Hybrid SATS approach effectively alleviates line flow limit
breaches during various single line outages, maintaining power flows within
secure parameters. To evaluate the efficacy of the proposed Hybrid SATS
approach, simulation experiments are conducted on the standard IEEE 30-bus system, and
the results are compared with those of the SA and TS methods. |
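A generic sketch of the hybrid idea named above: simulated annealing whose candidate moves are screened by a tabu list. The cost function, neighbourhood move, and parameters below are placeholders, not the paper's security-constrained optimal power flow formulation.

```python
# Hybrid SA/TS: SA's probabilistic acceptance drives the search while a short
# tabu list of recently visited solutions discourages cycling.
import math, random

def hybrid_sa_ts(cost, neighbour, x0, t0=1.0, alpha=0.95, tabu_len=20, iters=2000):
    x, best = x0, x0
    tabu = []                                    # recently visited solutions
    t = t0
    for _ in range(iters):
        cand = neighbour(x)
        if cand in tabu:                         # tabu rule: skip recently seen moves
            continue
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):   # SA acceptance rule
            x = cand
            tabu.append(cand)
            tabu = tabu[-tabu_len:]
            if cost(x) < cost(best):
                best = x
        t *= alpha                               # geometric cooling schedule
    return best

# Toy usage: minimise a 1-D quadratic as a stand-in for generator fuel cost.
sol = hybrid_sa_ts(cost=lambda v: (v - 3.2) ** 2,
                   neighbour=lambda v: round(v + random.uniform(-0.5, 0.5), 3),
                   x0=0.0)
print(sol)
```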
Keywords: |
Security Constrained Optimal Power Flow, Simulated Annealing, Tabu Search,
Contingency. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
A STRUCTURED TRACEABILITY APPROACH FOR TRANSFORMING REQUIREMENTS INTO CLASS
DIAGRAMS |
Author: |
KELETSO J. LETSHOLO |
Abstract: |
In software engineering, requirement traceability is a critical factor in
ensuring that software systems comply with user requirements, enabling effective
management and verification throughout the development lifecycle. However, due
to inherent ambiguities and inconsistencies in user-specified requirements,
typically expressed in natural language, establishing accurate traceability
remains a significant challenge. This study introduces a novel, automated
approach to requirement traceability, addressing this challenge by transforming
natural language requirement specifications (NLRs) into class diagrams while
simultaneously generating traceability links. The proposed approach leverages
Natural Language Processing (NLP) techniques and Semantic Object Models (SOMs),
a structured and reusable pattern-based method that enhances the accuracy and
consistency of traceability links. To validate this approach, the TRAM
application was developed and evaluated against manually produced class diagrams
from requirements engineering experts. Precision and recall metrics were used to
assess the accuracy and completeness of the generated traceability links.
Results indicate that TRAM achieves high recall (0.60–0.96), demonstrating its
effectiveness in capturing relevant elements, although precision (0.25–0.51)
remains a challenge due to the integration of predefined elements through SOM
patterns. These findings highlight the contribution of this research in
advancing automated traceability solutions, reducing manual effort, and
improving consistency in requirements engineering. Additionally, the structured
use of SOMs suggests that this methodology can be extended beyond class diagrams
to other software artifacts, broadening its applicability in software
engineering. |
Keywords: |
Class Diagram; Natural Language Processing; Requirements Engineering;
Requirements Traceability; Semantic Object Model. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
IMPLEMENTING NEUROMORPHIC COMPUTING USING NEURAL NETWORK TECHNOLOGY FOR DYNAMIC
OBJECT DETECTION AND OCCLUSION HANDLING IN AR IMAGES |
Author: |
ARUNA THETHALI, MANDAVA KRANTHI KIRAN |
Abstract: |
In recent times, people, objects, and other real-time elements have played a
major role in real-time videos. Many scenes are captured behind natural objects
such as trees, grid wires, and wireframes. Identifying, detecting, and tracking
objects in real-time videos is inaccurate because the objects overlap one
another. Some earlier research has focused on occlusion detection in face
recognition to identify human faces and obtained only 80% accuracy. In real-time
industries such as virtual and augmented reality, video occlusion is high, and
objects must be identified accurately. This paper focuses on occlusion removal
and object detection to improve visualization accuracy. To increase object
detection accuracy despite occlusion, this paper proposes an SNN-based
neuromorphic system to detect objects from occluded images. The Mask R-CNN model
is applied to segment the input samples before detection so that objects are
detected accurately. The overall workflow of the proposed
SNN-based model uses four steps: data pre-processing, feature extraction, data
segmentation, and detection. The entire model is implemented and experimented
with AR video images, and the results are verified. The output shows that the SNN
and Mask R-CNN increased overall efficiency in object detection, visualization,
and occlusion removal while reducing time and computational complexity. |
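An illustrative sketch of the segmentation step named above, using an off-the-shelf Mask R-CNN from torchvision to produce instance masks; the SNN detection stage and the AR video input are not reproduced, and the file name is a placeholder.

```python
# Run a pretrained Mask R-CNN on one frame and keep confident instance masks
# that a downstream detector could consume.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
frame = convert_image_dtype(read_image("ar_frame.png"), torch.float)   # placeholder frame

with torch.no_grad():
    out = model([frame])[0]                       # dict with boxes, labels, scores, masks

keep = out["scores"] > 0.7                        # confident detections only
masks = out["masks"][keep] > 0.5                  # binary instance masks for later stages
print(f"{int(keep.sum())} objects segmented; mask tensor shape {tuple(masks.shape)}")
```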
Keywords: |
Neuromorphic Computing, Spiking Neural Networks, MR-CNN, Occlusion Removal,
Object Detection. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
OPTIMIZING UNDERWATER IMAGE QUALITY: ADVANCED TECHNIQUES FOR HAZE REMOVAL,
CONTRAST ENHANCEMENT, AND COLOR CORRECTION |
Author: |
PRANJIT LAHON , DR. RANJAN SARMAH |
Abstract: |
The enhancement of underwater imagery is essential due to the numerous
challenges such as turbidity and light scattering that cause poor visibility,
reduced contrast, and color distortion. This study introduces a methodology
aimed at improving underwater images through techniques like contrast
enhancement, haze removal, and color correction. Core methods employed include
median filtering for noise reduction, gamma correction for brightness
adjustment, the Dark Channel Prior (DCP) for haze removal, and Contrast Limited
Adaptive Histogram Equalization (CLAHE) for contrast enhancement. We conducted
experimental evaluations on the "SHARK" test image using several algorithms,
including Histogram Equalization (HE), Spectral Information Divergence (SID),
Fusion (FU), and Image Analysis (IA). These algorithms were assessed based on
the Underwater Image Quality Measure (UIQM) and Processing Time (PT). The
proposed method achieved a leading UIQM score of 1.751 with the shortest
processing time of 0.452 seconds, surpassing other techniques in both image
quality and efficiency. This high-quality enhancement coupled with fast
processing renders the proposed method particularly suitable for real-time and
resource-limited scenarios. The proposed methodology notably enhances the
clarity, contrast, and color accuracy of underwater images, making them more
effective for applications in marine research and underwater exploration. The
significant improvements in underwater image enhancement techniques demonstrated
in this study highlight the potential for further advancements in various
fields, addressing the unique challenges posed by underwater imaging. |
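An illustrative OpenCV sketch of three of the enhancement steps named above (median filtering, gamma correction, and CLAHE on the luminance channel); the Dark Channel Prior haze-removal stage and the paper's exact parameters are not reproduced, and the file name is a placeholder.

```python
# Median filter -> gamma LUT -> CLAHE on the L channel of LAB colour space.
import cv2
import numpy as np

def enhance_underwater(path: str, gamma: float = 1.4) -> np.ndarray:
    img = cv2.imread(path)                                    # BGR uint8 image
    img = cv2.medianBlur(img, 3)                              # noise reduction
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    img = cv2.LUT(img, lut)                                   # gamma brightness adjustment
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))                   # contrast-limited equalization
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

enhanced = enhance_underwater("shark.png")                    # placeholder test image path
cv2.imwrite("shark_enhanced.png", enhanced)
```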
Keywords: |
Contrast enhancement, Haze removal, Color correction, Dark Channel Prior (DCP),
Contrast Limited Adaptive Histogram Equalization (CLAHE), Underwater Image
Quality Measure (UIQM), Processing time (PT). |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
GEOPOLITICAL CONSEQUENCES OF ARTIFICIAL INTELLIGENCE GOVERNANCE |
Author: |
MARINA SHULGA , BOHDAN KACHMAR , ANDREY SHARYPIN ,SERHII STAVROYANY, ANDRII
TIMCHENKO |
Abstract: |
The article critically examines the geopolitical implications of artificial
intelligence (AI) governance, highlighting its role in reshaping international
power dynamics. This study addresses the gap in the literature regarding AI’s
impact on national security, economic dominance, and political control. By
analyzing policy frameworks and global AI leadership strategies, the study
provides a novel perspective on AI governance as a geopolitical tool. Findings
indicate that AI enhances technological sovereignty, strengthens defense
capabilities, and contributes to cyber and economic conflicts. The conclusions
emphasize the necessity of a comprehensive global regulatory framework to
mitigate AI-related risks and foster international cooperation. This study
contributes new knowledge by demonstrating how AI governance influences
strategic geopolitical stability and security. |
Keywords: |
AI Governance, Geopolitical Impact, AI Regulation, National Security, Economic
Power, AI Ethics |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
SOFTWARE PROJECT MONITORING USING MILESTONES AND PERT-AN EMPIRICAL ANALYSIS |
Author: |
DR. GADALA SUHASINI, DR. S. SAI KUMAR, MONICA KALBANDE, DR. P. PUNITHA, DR. HARI
KISHAN CHAPALA, CHINNAM SABITHA, CHETLA CHANDRA MOHAN |
Abstract: |
This paper concentrates on monitoring projects through PERT and milestones.
Complex, multilayered, and distributed projects require a series of activities,
some of which must be performed sequentially and others in parallel. This
collection of series and parallel tasks can be modeled as a network, and PERT is
a statistical technique applied to such networks. In this paper we attempt to
simulate PERT networks and graphically represent the tasks along with their
interdependencies. The project identifies the critical and non-critical tasks
and evaluates the critical path to determine which tasks have an impact on the
schedule. Once a project has advanced to the performance phase, the focus
shifts from discovery to tracking and reviewing it. Milestones are used to
track the progress of the project at different stages, and the PERT chart on a
continual basis. This paper integrates both techniques for efficient and easier
monitoring. In order to reap the results of the project sooner, we provide a
means to reduce the scheduled completion time with minimum cost burden. This can
be achieved by assigning more labor and resources to the various activities. |
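A small sketch of the two standard PERT calculations discussed above: the expected task time from optimistic/most-likely/pessimistic estimates, and the critical path of the activity network. The task data and dependencies are invented for illustration, not from the paper.

```python
# Expected time te = (o + 4m + p) / 6, then the critical path as the longest
# weighted path through the activity DAG.
import networkx as nx

def expected_time(o, m, p):
    return (o + 4 * m + p) / 6.0               # classic PERT beta-distribution estimate

tasks = {                                       # task: (optimistic, most likely, pessimistic)
    "A": (2, 4, 6), "B": (3, 5, 9), "C": (1, 2, 3), "D": (2, 3, 10),
}
deps = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]      # A precedes B and C, etc.

g = nx.DiGraph()
for t, (o, m, p) in tasks.items():
    g.add_node(t, duration=expected_time(o, m, p))
g.add_edges_from(deps)

# Put each predecessor's duration on its outgoing edges so the longest path by
# total weight approximates the critical path of this small network.
for u, v in g.edges:
    g[u][v]["w"] = g.nodes[u]["duration"]
critical = nx.dag_longest_path(g, weight="w")
total = sum(g.nodes[t]["duration"] for t in critical)
print("Critical path:", critical, "expected duration:", round(total, 2))
```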
Keywords: |
PERT, MILESTONES, Project, Resources, Schedule. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
BRIDGING THE IT SKILL GAP WITH INDUSTRY DEMANDS: AN AI-DRIVEN TEXT MINING
APPROACH TO JOB MARKET TRENDS USING LARGE LANGUAGE MODEL |
Author: |
TODSANAI CHUMWATANA , AYE KHIN KHIN HPONE |
Abstract: |
The growing influence of technological advancements and the complexity of
business requirements have revolutionized the IT industry, underscoring the
critical need for a comprehensive understanding of the skills and tools most
valued and prioritized by employers. However, identifying these in-demand
competencies remains a challenge given the dynamic nature of the job market.
While previous studies have explored IT skill demands, very few have utilized
AI-driven text-mining techniques to systematically analyze large numbers of job
advertisements; this research therefore aims to address the ongoing gap between
individual competencies and industry requirements. Leveraging text mining
techniques, natural language processing (NLP), and large language models (LLMs)
such as GPT-based models to enhance keyword extraction, this study provides the
workflow process for generating knowledge-based insights using Term
Frequency-Inverse Document Frequency (TF-IDF) scoring from a collection of IT
job advertisements. The study identified vocabulary related to key skills and
tools by effectively summarizing their occurrences in job postings and revealing
key terms that characterize specific roles and industry demands across IT
sectors such as Software Development, DevOps, Cloud, Database Administration,
Business Analysis, and Business Intelligence. Key findings reveal the demand for
programming languages like VB.NET, Bash, and Python, alongside specialized
languages including Dart, Kotlin, and Ruby, reflecting the growing importance of
niche expertise. IT professionals are expected to be proficient in tools such as
MS Visual Studio and Crystal Reports, as well as emerging frameworks like
Flutter, while skills in NoSQL and tools like MS Excel, ERP platforms, and
Vertica are vital for supporting data-driven decision-making and business
intelligence. By offering actionable insights into evolving industry
expectations, this research serves as a valuable resource for educators in
curriculum development, for recruiters in shaping talent acquisition strategies,
and for job seekers in acquiring in-demand skills, ultimately ensuring a more
responsive and competitive IT labor market. |
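A minimal sketch of the TF-IDF scoring step described above, applied to a few invented job-ad snippets; the study's corpus, preprocessing, and GPT-assisted keyword extraction are not reproduced here.

```python
# Rank the highest-weighted terms per job advertisement with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer

job_ads = [
    "Backend developer with Python, Bash scripting and NoSQL experience",
    "DevOps engineer: Kubernetes, Bash, cloud monitoring, Python automation",
    "BI analyst skilled in MS Excel, ERP platforms and data visualization",
]
vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vec.fit_transform(job_ads)                    # documents x terms sparse matrix

terms = vec.get_feature_names_out()
for i, ad in enumerate(job_ads):
    row = tfidf[i].toarray().ravel()
    top = row.argsort()[::-1][:3]                     # three highest-weighted terms per ad
    print(i, [(terms[j], round(row[j], 2)) for j in top])
```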
Keywords: |
Generative AI, Keyword Extraction, IT Employment, Large Language Models (LLMs),
Natural Language Processing (NLP), Data Visualization, Data Analytics, Text
Mining |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
DERMXNET: A HYBRID DEEP LEARNING AND GRADIENT BOOSTING APPROACH FOR EFFICIENT
SKIN DISEASE DETECTION |
Author: |
N ANNALAKSHMI , S UMARANI |
Abstract: |
Skin diseases, including melanoma and non-melanoma, are among the most prevalent
health concerns worldwide. Early and accurate detection of these conditions is
critical for effective treatment and improved patient outcomes. This paper
introduces DermXNet, a hybrid model combining Artificial Neural Networks (ANN)
and eXtreme Gradient Boosting (XGBoost) for accurate classification of skin
conditions, including melanoma, non-melanoma, and normal skin. The methodology
involves systematic data pre-processing techniques such as resizing,
normalization, and artifact removal. The ANN module extracts high-dimensional
features, which are then classified by XGBoost, leveraging the strengths of both
deep learning and gradient boosting. The proposed model was evaluated against nine
existing deep learning models, including ResNet50, VGG16, and EfficientNetB0.
DermXNet achieved the highest accuracy of 98.38%, surpassing other models in
metrics such as precision, recall, F1-score, and AUC-ROC. Additionally, DermXNet
demonstrated computational efficiency with a training time of 95 seconds and a
model complexity of just 4 million parameters, making it suitable for real-world
deployment. The results underscore the effectiveness of hybrid architectures in
medical diagnostics. Future research can extend DermXNet to multi-class
classification and integrate advanced domain-specific features to enhance its
applicability. |
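A compact sketch of the hybrid pattern described above: a small neural network produces feature vectors which an XGBoost classifier then labels. The architecture and the random placeholder data are illustrative assumptions, not DermXNet's actual design or dataset.

```python
# ANN feature extractor feeding a gradient-boosted classifier head.
import numpy as np
from tensorflow.keras import layers, models
from xgboost import XGBClassifier

X = np.random.rand(200, 64, 64, 3).astype("float32")    # placeholder image batch
y = np.random.randint(0, 3, 200)                         # melanoma / non-melanoma / normal

extractor = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),                 # 64-dim feature vector per image
])
features = extractor.predict(X, verbose=0)

clf = XGBClassifier(n_estimators=100, max_depth=4)       # boosting classifier on the features
clf.fit(features, y)
print("training accuracy:", clf.score(features, y))
```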
Keywords: |
Skin Disease Detection, Deep Learning, Artificial Neural Network, XGBoost,
Hybrid Model, Medical Imaging, Melanoma Classification, Feature Extraction |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
UGECGAD: EVALUATING THE EFFICACY OF UNSUPERVISED GAN ARCHITECTURES FOR ECG
ANOMALY DETECTION |
Author: |
SHAIK JANBHASHA, NARASIMHA RAO THOTA, PRASHANT ATMAKURI, G. ASHOK KUMAR, ATTRU
HANUMANTHARAO, NEERUKATTU MALLIKHARJUNA RAO |
Abstract: |
Anomaly detection in Electrocardiogram (ECG) signals is critical for reliable
diagnosis and continuous patient monitoring. Because traditional approaches
cannot capture the rich nonlinear structure in ECG data, they are ineffective at
detecting these abnormalities. Although Generative Adversarial Networks (GANs)
have recently attracted increasing interest for ECG anomaly detection, the
research currently available does not provide a systematic evaluation of their
advantages and disadvantages for practical situations. This paper addresses this
gap by critically reviewing unsupervised GAN-based approaches, evaluating their
ability to reconstruct normal ECG signals and accurately detect deviations. We
introduce a rigorous empirical comparison of different GAN architectures and
their adversarial variants, highlighting key challenges in their implementation.
Our results show that the ECG-Adversarial Autoencoder (ECG-AAE) is superior to
the other GAN-based approaches in terms of training stability and provides the
best performance in anomaly detection. This study
contributes new insights into the robustness of ECG-AAE, establishing its
potential for precise and reliable ECG anomaly detection in practical healthcare
applications. |
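A minimal sketch of the reconstruction-error principle behind the unsupervised approaches evaluated above: a model fitted only on normal beats reconstructs them well, so a large reconstruction error flags an anomaly. This plain autoencoder stands in for the adversarial variants (such as ECG-AAE) compared in the paper, and the data are synthetic placeholders.

```python
# Train an autoencoder on "normal" beats, then score new beats by reconstruction MSE.
import numpy as np
import torch
import torch.nn as nn

normal = torch.tensor(np.sin(np.linspace(0, 8 * np.pi, 140 * 64)).reshape(64, 140),
                      dtype=torch.float32)              # 64 synthetic normal beats

ae = nn.Sequential(nn.Linear(140, 32), nn.ReLU(), nn.Linear(32, 140))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                                    # fit the normal morphology only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

def anomaly_score(beat: torch.Tensor) -> float:
    with torch.no_grad():
        return float(nn.functional.mse_loss(ae(beat), beat))

threshold = anomaly_score(normal) * 3                   # crude threshold from training error
abnormal = normal[0] + 0.8 * torch.randn(140)           # distorted beat
print(anomaly_score(normal[0]), anomaly_score(abnormal), "threshold:", threshold)
```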
Keywords: |
Generative Adversarial Networks, Deep Learning, Electrocardiogram, Anomaly
Detection, Unsupervised Learning |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
USING ARTIFICIAL INTELLIGENCE SOFTWARE FOR CORRECTING INCORRECT INTONATION IN
SINGING |
Author: |
XIAO HUIWEN |
Abstract: |
The purity of intonation has an impact on the expressiveness of vocal
performance. The aim of this study is to determine the features of using
artificial intelligence (AI) software for correcting incorrect intonation in
singing. The research employed the following methods: analysis, comparison,
calculations of two-factor analysis of variance (ANOVA), Cognitive Change Index
(CCI). Training techniques included the use of the iZotope RX 10 application for
the purpose of flexibility of performance based on the development of singing
breathing. The Clip.audio application was used to work on ways of reproducing
sounds; Adobe Audition – for creating a musical style. It was established that
the advantages of digital applications are related to the possibility of
automatic error detection and the development of individual vocal approaches
that are important for students. The results showed that the learning process
primarily contributed to the preservation of correct intonation at different
pitches and the development of correct pronunciation. The students developed a high
level of vocal intonation during training, which affected the expressiveness of
vocal performance. The practical significance of the article is aimed at the use
of AI technologies for the development of vocal intonation. Artificial
intelligence applications helped respondents achieve high vocal results, which
also had a positive impact on the formation of vocal skills. Further research
may focus on determining the ways of developing vocal intonation depending on
the features of the musical genre. |
Keywords: |
Vocal Performance, Voice Mobility, Musical Articulation, Sound Image,
Interactive Technologies |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
DEGREE OF IMBALANCE FOR TASK SCHEDULING IN CLOUD COMPUTING |
Author: |
NORA OMRAN ALKAAM, ABU BAKAR MD. SULTAN, MASNIDA HUSSIN, KHAIRONI YATIM SHARIF |
Abstract: |
The increasing reliance on cloud computing platforms for data-intensive
applications has made efficient task scheduling a critical factor for maximizing
resource utilization and minimizing the degree of imbalance. Inefficient task
scheduling in these complex, distributed environments can lead to significant
performance bottlenecks, including workload imbalance, which negatively impacts
overall system efficiency and scalability. Task scheduling in cloud environments
is a well-known NP-complete problem, further compounding the challenge of
achieving optimal solutions. While existing metaheuristic approaches offer some
mitigation, they often struggle to effectively balance exploration and
exploitation, leading to suboptimal solutions and slow convergence. This
study addresses this critical need by refining the previously proposed Henry
Gas-Harris Hawks Modified Opposition (HGHHM) algorithm to explicitly minimize
the degree of imbalance in task scheduling. We introduce a novel integration of
Henry Gas Solubility Optimization (HGSO) and Harris Hawks Optimization (HHO)
enhanced with a Modified Comprehensive Opposition-Based Learning (MOBL)
strategy. This unique combination allows HGHHM to effectively explore the
solution space while exploiting promising regions, leading to a more balanced
workload distribution. Simulations using the CloudSim toolkit demonstrate that
the improved HGHHM algorithm significantly reduces the degree of imbalance
compared to the Cuckoo-based Discrete Symbiotic Organism Search (CDSOS)
technique, achieving superior performance in terms of convergence speed and
solution quality while avoiding local optima. A t-test confirms the statistical
significance of these improvements, highlighting the potential of hybrid
metaheuristic methods for optimizing task scheduling in large-scale cloud
computing environments. |
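A short sketch of the metric being minimised above. A common definition in the task-scheduling literature (which may differ in detail from the authors' exact formulation) uses the maximum, minimum, and average completion times across virtual machines.

```python
# Degree of imbalance DI = (T_max - T_min) / T_avg over per-VM completion times.
def degree_of_imbalance(vm_completion_times):
    t_max, t_min = max(vm_completion_times), min(vm_completion_times)
    t_avg = sum(vm_completion_times) / len(vm_completion_times)
    return (t_max - t_min) / t_avg

balanced   = [98, 102, 100, 101]          # similar load on every VM -> small DI
imbalanced = [40, 180, 60, 120]           # one overloaded VM -> large DI
print(degree_of_imbalance(balanced), degree_of_imbalance(imbalanced))
```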
Keywords: |
HGHHM; Cloud computing; Meta-heuristic; Scheduling; Optimization; Degree of
imbalance. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
AUTOMATIC SEGMENTATION OF SPONDYLOLISTHESIS AND SCOLIOSIS OF X-RAY IMAGES USING
LIGHT-WEIGHT RESUNET ARCHITECTURE |
Author: |
SHIVA PRASAD K , Dr. JANA S |
Abstract: |
Scoliosis is the term for a curvature of the lumbar or thoracic spine in the
coronal plane. Young children may develop spondylolisthesis and scoliosis,
which, if left untreated, can progress into severe pain. Lung and heart problems
can also be caused by severe scoliosis. Early diagnosis can therefore help to
arrest the disease's progression and make it easier to apply therapies or
interventions. Recent research has focused on the use of convolutional neural
networks for the diagnosis of scoliosis and spondylolisthesis on X-ray images.
Unfortunately, the majority of the current approaches ignore the larger-scale
image contextual feature information in favour of gathering feature information
for prediction from localised parts of images. Furthermore, important features
for classification—such as co-occurrence connections between labels and
anatomical segmentation knowledge—are not completely utilised. This research
suggests a Light-weight ResUNet (LW-RUnet) architecture that leverages the
Xception backbone for feature extraction in order to segment scoliosis and
spondylolisthesis using X-ray images. While the ResUNet decoder uses high-level
features, the middle decoder uses mid-level features to obtain spatial information. To
fine-tune the disease area, the suggested architecture integrates the ResUNet
decoder characteristics with the proposed middle decoder features. The outcomes
of the suggested segmentation technique are verified using a number of measures,
including accuracy, sensitivity, IOU, and dice similarity coefficient. With
regard to the normal, scoliosis, and spondylolisthesis classes, the suggested
method's segmentation accuracy is 99.28%, 98.25%, and 98.34%, respectively. |
Keywords: |
Deep Learning, Scoliosis, Spondylolisthesis, Radiology, and X-ray images |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
THE EFFECTIVENESS OF DIGITAL COMMUNICATIONS FOR PROMOTING BRANDS ON SOCIAL MEDIA
PLATFORMS |
Author: |
DIANA FAYVISHENKO, SVITLANA KOVALCHUK, DANYLO SIDIELNIKOV, OKSANA HOLIK, IVAN
KYIANYTSIA |
Abstract: |
The effectiveness of digital communications is a key factor in the successful
promotion of brands on social media platforms. This issue is especially relevant
in the context of the rapid development of digital technologies, which create
new opportunities for interaction with the audience and strengthening brand
positions. However, despite extensive research on social media marketing, there
remains a knowledge gap regarding the comparative effectiveness of different
platforms and their content strategies in driving audience engagement and
conversions. Existing studies often focus on individual platforms or specific
digital marketing techniques without a comprehensive cross-platform analysis of
digital communication effectiveness. This study addresses this gap by
analyzing the effectiveness of digital strategies in promoting brands on
Facebook, Instagram, TikTok, Twitter, and LinkedIn. Special attention is paid to
key performance indicators such as the Engagement Rate (ER), Click-Through Rate
(CTR), and conversions, as well as their dependence on content strategies and
platform specifics. The aim of the study is to assess the impact of digital
communications on the effectiveness of brand promotion in social media and
identify key factors that contribute to increasing the rate of interaction with
the audience. The study employs quantitative analysis, sociological surveys,
comparative analysis, and correlation analysis to provide a data-driven
evaluation of digital marketing strategies. The results demonstrate that
TikTok achieves the highest ER due to interactive content and active engagement
from a younger audience, while Instagram remains dominant in the premium segment
due to its visually appealing content. Facebook maintains a stable engagement
rate due to its broad user base but lacks innovative content formats, and
LinkedIn proves effective for professional communication. Conversely, Twitter
faces challenges due to declining user activity and engagement. The academic
novelty of this research lies in its comprehensive cross-platform analysis of
digital communication effectiveness, considering algorithmic operations,
audience behavior, and content strategies. This study contributes to the
existing body of knowledge by providing insights into the optimal digital
marketing strategies across different platforms and industries. Prospects for
future research include analyzing the adaptation of emerging content formats to
platform-specific requirements, assessing the role of micro-influencers in
audience engagement, and forecasting the evolution of digital communications
until 2028. |
Keywords: |
Digital Communications, Social Media, Branded Content, TikTok, Instagram,
Interaction, KPI, Conversions. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
USING AI TO COUNTER CRIMINAL OFFENCES AGAINST THE WILL, HONOUR, AND DIGNITY OF A
PERSON |
Author: |
VIACHESLAV SHEVCHENKO , OLEH ZVONAROV , LEONID SHCHERBYNA , YANA LUTSENKO,
VOLODYMYR PYVOVAROV |
Abstract: |
In the era of the digitalization of society, crimes against the will, honour,
and dignity of a person, in particular cyberbullying, blackmail, and threats,
are becoming increasingly common due to the anonymity of online communication
and the rapid dissemination of information. This creates new challenges for law
enforcement agencies, requiring the implementation of innovative technologies,
in particular artificial intelligence (AI), for their detection, prevention, and
investigation. AI opens up opportunities for the automated analysis of
threatening messages and the prediction of risks of aggressive behaviour. The
research focuses on assessing the potential of AI in combating such crimes,
analysing its effectiveness, and identifying the limitations of technology use
in law enforcement practice. The aim of the study is to assess the effectiveness
of AI algorithms for the automated detection of threatening messages and the
prediction of risks of criminal behaviour, compared with traditional methods.
The study employed the following methods: text data analysis using natural
language processing (NLP), behavioural pattern modelling to predict risks, and
surveys of human rights defenders and lawyers to study their attitudes towards
AI in combating crime. The results showed that AI algorithms demonstrate high
accuracy in detecting cyber threats, outperforming traditional methods in terms
of speed and scalability: NLP algorithms achieved 85% accuracy compared to 75%
for manual moderator analysis, confirming their effectiveness. At the same time,
the survey of specialists revealed a number of ethical and legal limitations to
implementing these technologies; in particular, 60% of respondents indicated the
need for strict regulation of AI, and 35% emphasized the risk of false
accusations. The academic novelty of the study lies in its interdisciplinary
approach, which integrates technological analysis with legal aspects to assess
the effectiveness of AI and to identify obstacles to its practical use in the
fight against cyber threats. Further research should focus on adapting
algorithms to the changing conditions of the digital environment, developing
regulatory mechanisms for their implementation in compliance with human rights,
improving AI algorithms, and reducing the false positive rate. |
Keywords: |
Artificial Intelligence, Criminal Offences, Cyber Threats, Legal Ethics, Text
Analysis. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
BLOCK CHAIN-INTEGRATED MULTI-FACTOR AUTHENTICATION SYSTEM FOR ENHANCED SECURITY
AND DECENTRALIZED INTEGRITY IN ONLINE EXAMINATION PLATFORMS |
Author: |
VALLEM RANADHEER REDDY, SHANKAR LINGAM GOURISHETTY, AMGOTH ASHOK KUMAR, PEESALA
ILANNA |
Abstract: |
With the increasing dependence on online examination systems, providing
security, transparency, and scalability has become a significant challenge.
While existing methods with AI and deep learning-based multi-factor
authentication provide robustness, they may not address data integrity and
traceability effectively. In this research, we propose a novel framework that
integrates Blockchain technology with a multi-factor authentication system to
enhance security and decentralization in online examinations. By combining
Blockchain methods with advanced facial recognition and behavioral biometrics,
the system ensures sound data management and user authentication. The system
uses a Hybrid Deep Learning (HDL) model for biometric verification and
Blockchain smart contracts for secure transaction processing and audit trails.
Experimental results demonstrate that the proposed system achieves 99.1%
accuracy in user authentication while providing end-to-end security against
unauthorized access and data manipulation. |
Keywords: |
Blockchain, Multi Factor Authentication, Deep Learning, Online Examination. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
HOT TOPICS DETECTION AND PERFORMANCE OPTIMIZATION FROM MICROBLOGS USING HYBRID
HADOOP FRAMEWORK |
Author: |
Dr. D. DAVID NEELS PONKUMAR, Dr. S. RAMESH, Dr. R. KALPANA, K. MANIKANDAN, N.
RAGHAVENDRAN, Y. HAROLD ROBINSON, Dr. G. UMA MAHESWARI, Dr. SANDRA JOHNSON, Dr.
B. PRATHUSHA LAXMI, R. SARAVANAKUMAR |
Abstract: |
The Hadoop framework's adoption is on the rise. We try to improve Hadoop's
performance by incorporating a more sophisticated framework into the MapReduce
paradigm, all while keeping the native Hadoop Framework's characteristics
intact. The improved Hadoop framework sorts a large collection of microblogs
according to the amount of attention they received from the social media site
during a certain period 't'. There is a 3% decrease in the execution time of
microblogs that have attained the attention level compared to those that have
not. By distributing the load evenly, the EHF speeds up the processing of
microblogs that have garnered a lot of interest. Global social media users
dynamically generate enormous volumes of information, and social media networks
employ big data techniques to manage their massive data. A Hadoop-based cloud
platform provides fault tolerance and dependability for large data, and Hadoop
is the foundation of big data analytics. Its main drawback is the processing of
massive configuration metrics.
This paper proposes the Hybrid Hadoop Framework to improve big data processing
by balancing workload, response time, network bandwidth, and hot topic detection
for microblogs using cloud-based Apache Spark. To accurately find hot topics in
large datasets, we purposely build MapReduce tasks. Experimental findings show
that the suggested system is more accurate than comparable systems. |
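A minimal PySpark sketch of the hot-topic counting idea above: hashtag frequency over a batch of microblogs, expressed as a MapReduce-style job on the cloud-based Apache Spark stack the abstract mentions. The input file and the attention threshold are placeholders, not the paper's configuration.

```python
# Map: tokenize and keep hashtags; Reduce: count per topic; keep topics above a threshold.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hot-topics").getOrCreate()
posts = spark.sparkContext.textFile("microblogs.txt")         # one microblog per line

hot = (posts.flatMap(lambda line: line.lower().split())
             .filter(lambda w: w.startswith("#"))             # hashtags only
             .map(lambda tag: (tag, 1))
             .reduceByKey(lambda a, b: a + b)                 # count per topic
             .filter(lambda kv: kv[1] >= 100)                 # attention-level threshold
             .sortBy(lambda kv: kv[1], ascending=False))

print(hot.take(10))                                           # top hot topics
spark.stop()
```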
Keywords: |
Hadoop Framework, Social Media, Mapreduce, Big Data Analytics, Apache Spark,
Cloud, Workload, Response Time, Network Bandwidth, Hot Topic Detection |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
MAXIMIZING THE EFFICIENCY OF AI BASED IOT DEVICES WITH UPGRADED 802.11 AND
802.16 BASED NETWORKS TO OPTIMIZE NETWORK CONGESTIONS |
Author: |
KOUSHIK REDDY CHAGANTI, DR.Y. RAMAMOHAN REDDY, DR.Y. ALEKYA RANI, D. SANDHYA
RANI, PALLABOTHULA RAMESH, NANNAPARAJU VASUDHA |
Abstract: |
The proliferation of Internet of Things (IoT) devices has led to unprecedented
data generation and transmission, demanding efficient networking solutions. This
paper proposes a novel approach to maximize the efficiency of AI-based IoT
devices by leveraging upgraded 802.11 and 802.16 networks for effective
congestion management. Traditional networking protocols often struggle to cope
with the dynamic and data-intensive nature of IoT applications, leading to
network congestion and performance degradation. This study provides an in-depth
exploration of interference in wireless applications, with a particular focus on
the distinctive interference challenges posed by 5G and IoT. It elucidates
various optimization techniques aimed at mitigating these challenges.
Emphasizing the criticality of managing interference and optimizing network
performance in 5G environments, the paper underscores their pivotal role in
facilitating dependable and efficient connectivity for IoT devices. Such
connectivity is indispensable for ensuring the smooth operation of business
processes. To address this challenge, we integrate advanced artificial
intelligence (AI) algorithms with enhanced 802.11 (Wi-Fi) and 802.16 (WiMAX)
standards to optimize network resource utilization and mitigate congestion. Our
proposed solution dynamically adapts to changing network conditions,
intelligently prioritizes traffic, and optimally allocates resources to ensure
seamless connectivity and superior performance for AI-driven IoT devices. hrough
extensive simulations and practical validations, we demonstrate the efficacy of
our approach in significantly reducing latency, improving throughput, and
maximizing overall network efficiency, thus empowering AI-based IoT ecosystems
with enhanced connectivity and performance. |
Keywords: |
AI-Driven IoT Devices, Efficiency Enhancement, Advanced Networking, Congestion
Alleviation, Traffic Prioritization, Latency Reduction |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
EXPLORING THE ADVANCEMENTS AND CHALLENGES OF OBJECT DETECTION IN VIDEO
SURVEILLANCE THROUGH DEEP LEARNING: A SYSTEMATIC LITERATURE REVIEW AND OUTLOOK |
Author: |
M.KOTESWARA RAO , P.M. ASHOK KUMAR |
Abstract: |
Video surveillance has become increasingly important in recent years, leading to
a growing need for effective and accurate surveillance systems that can detect
objects and understand scenes. To address these challenges, methods based on
deep learning have emerged as the state-of-the-art approach and have shown
remarkable results in various applications. In this systematic literature review
(SLR) paper, we investigate the recent advancements and challenges of object
detection in video surveillance through deep learning. Our primary aim is to
present a comprehensive overview of the recent research developments in this
field and to emphasize the challenges that should be addressed in future
research. In this study, we analyzed various deep learning-based object
detection methods and evaluated their performances based on several performance
metrics. Our findings indicate that deep learning-based methods have
demonstrated promising results in terms of accuracy and real-time performance
in video surveillance for object detection. However, the study also highlights
several challenges such as scalability, robustness, and interpretability that
require further research. Finally, this SLR paper concludes with a discussion of
future research directions in this field and offers a roadmap for future work.
Our study results can serve as a useful reference for researchers and
practitioners working in the field of video surveillance and deep learning. |
Keywords: |
Object detection, Video surveillance, Deep learning, Object tracking, Scene
understanding, Performance analysis |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
CYBER FORTIFICATIONS: ENSEMBLE SECURITY THROUGH DEEP LEARNING INNOVATIONS |
Author: |
Y. SUDHEER KUMAR , DR. A MARY SOWJANYA |
Abstract: |
Cyber threats are complex to detect in their early stages using traditional
approaches. Cybersecurity is the domain that provides appropriate defense
against modern attacks. Cyber threats, or cyber-attacks, are attempts to damage,
steal, alter, disable, or destroy data or applications through unauthorized
access to a network, computer system, or digital device. Detection and
classification of cyber attacks is a tedious task for state-of-the-art
algorithms because of their high computation time and limited understanding of attack
patterns. Deep Learning (DL) is a domain used in many cyber threat detection
systems, such as distributed denial-of-service (DDoS) attacks, phishing,
ransomware, and anomalies. The pre-trained model Attack Pattern Convolutional
Neural Networks (AP-CNNs) is introduced to detect attack patterns from the cyber
attacks dataset. This paper introduces an Ensemble Security Model (ESM) to
detect and classify cyber attacks from benchmark datasets. ESM combines Deep
Neural Network (DNN) and Adaptive Support Vector Machine (ASVM). The DNN serves
as a feature extractor that captures significant attack patterns, and the ASVM
classifies the different types of attacks found in the datasets. This
paper uses two benchmark datasets to measure the strength of the ESM. The
pre-trained model Transfer Learning with a BERT-based Model is used to train on
given datasets. Finally, the results show that the proposed approach obtained
better results than existing models. |
Keywords: |
Distributed Denial-of-Service (DDOS) Attacks, Phishing, Ransomware, and
Anomalies, Ensemble Security Model (ESM), Deep Neural Network (DNN). |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
EFFICIENT SUPERVISED MACHINE LEARNING FOR CYBERSECURITY APPLICATIONS USING
ADAPTIVE FEATURE SELECTION AND EXPLAINABLE AI SCENARIOS |
Author: |
DR. NAGA ANURADHA, M. SAILAJA, DR. PRABHAKAR MARRY, DR. DASARI MANENDRA SAI,
PALLABOTHULA RAMESH, DR. SIVANANDA LAHARI REDDY |
Abstract: |
Cyber threats are changing continuously, so the need for robust, efficient, and
adaptive machine learning solutions is of utmost importance for the protection
of critical infrastructures. The current supervised machine learning models face
issues in dealing with high-dimensional data, real-time adaptability, and
privacy in data while delivering accurate and interpretable results. We present
a multi-faceted framework that combines state-of-the-art techniques to optimize
cybersecurity applications. We begin with a method called Adaptive Feature
Selection Using Reinforcement Learning (AFSRL), which dynamically identifies the
optimal feature subsets for the best classification, computational efficiency,
and detection latency. This reduces dimensionality by 40–60% and improves model
accuracy by 10–15%. We propose a Hybrid Ensemble Learning with Dynamic Weight
Adjustment, which dynamically integrates these diverse algorithms: Random
Forest, SVM, and Gradient Boosting. It obtains an 8-12% accuracy improvement
together with a reduction of 15% in false positives. For complex attack
patterns, CAGN exploits temporal and spatial relationships in network graphs for
20% improved precision while maintaining detection latency below 100ms. Our PPFL
framework preserves privacy by protecting sensitive data while retaining model
performance parity with centralized approaches. Finally, XAI-TADM brings trust
and interpretability with SHAP and LIME, providing actionable insights that
improve response time by 25%. Such an all-round accurate, efficient, adaptive,
privacy-sensitive, and transparent framework would work very well with real-time
and high-stake environments in applications such as health care, finance, and
national security systems. |
Keywords: |
Cybersecurity, Adaptive Feature Selection, Explainable AI, Federated Learning,
Graph Neural Networks |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
EFFICIENT MRI SEGMENTATION OF SPINE HEMANGIOMAS: A NOVEL MODIFIED U-NET APPROACH
TO ENHANCE TUMOR BOUNDARY DETECTION |
Author: |
ABUJALAMBO MAHMOUD I M , NOR AZLINAH BINTI MD LAZAM , MALIK JAWARNEH , SHADI M S
HILLES , ABDALLAH ALTRAD |
Abstract: |
Magnetic Resonance Imaging (MRI) is a widely used, non-invasive method for
medical imaging, particularly effective in visualizing soft tissues and
identifying abnormalities like spine hemangiomas. One of the main challenges
remains the low segmentation accuracy of skeletal MRI images. Spine hemangioma
segmentation involves algorithmically identifying and localizing these tumors
within MRI scans, a process crucial for accurate diagnosis and treatment
planning. Although several segmentation methods exist, this paper introduces a
U-Net-based approach, implemented in PyTorch and optimized with the Adam
optimizer. This setup refines model weight adjustments and harnesses the full
capabilities of a fully connected convolutional neural network (CNN) for precise
semantic segmentation, including pixel-wise classification through an
encoder-decoder structure. This U-Net architecture is versatile and adaptable to
various analytical tasks across diverse applications. The model was trained on a
substantial dataset spanning the three primary anatomical planes used in medical
imaging (Axial, Coronal, and Sagittal) without additional data augmentation. It
achieved real-time segmentation with a remarkable accuracy of 94.13% and
demonstrated strong performance metrics, including a Dice coefficient of 0.634
and Precision of 0.711, underscoring its robustness and potential clinical
utility. This work highlights U-Net’s effectiveness in spine hemangioma
segmentation and explores its matching capabilities, indicating promising
potential for advancements in automated MRI analysis. |
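A short sketch of the Dice coefficient reported above, computed on binary segmentation masks; the smoothing constant and the placeholder tensors are illustrative, not the paper's evaluation code.

```python
# Dice = 2|P ∩ T| / (|P| + |T|) for binary prediction and ground-truth masks.
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    pred, target = pred.float().flatten(), target.float().flatten()
    intersection = (pred * target).sum()
    return float((2 * intersection + eps) / (pred.sum() + target.sum() + eps))

pred   = (torch.rand(1, 256, 256) > 0.5).int()        # placeholder predicted mask
target = (torch.rand(1, 256, 256) > 0.5).int()        # placeholder ground-truth mask
print(dice_coefficient(pred, target))
```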
Keywords: |
MRI Spine Hemangioma Segmentation, U-Net Model, Convolutional Neural
Network (CNN), Semantic Segmentation, Precision, Dice coefficient, accuracy. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
|
Title: |
BUILDING ROBUST IOT NETWORKS WITH DYNAMIC LAYER PRIORITIZATION AND PREDICTIVE
FAULT MANAGEMENT PROCESS |
Author: |
DASARI ANUSHA, M. GAYATHRI, B V CHOWDARY, DR. NARASIMHA CHARY CH, DR. ATHMAKURI
SATISH KUMAR, DR. C NAGESH |
Abstract: |
The rapid proliferation of IoT networks places strict demands on the
communication frameworks to be efficient, scalable, and robust for dealing with
varied resource constraints. Existing IoT architectures often run into issues of
high energy consumption, poor reliability, and interaction complexity, primarily
in dynamic large-scale deployments. This hampers the real-time performance and
overall reliability needed in applications such as healthcare systems,
industrial automation, and smart grids. To address these challenges, this work
proposes five novel techniques to optimize machine-to-machine communication:
Dynamic Layer Prioritization (DLP), Context-Aware Energy Optimization (CAEO),
Adaptive Protocol Selection (APS), Predictive Fault Management (PFM), and
Hierarchical Data Aggregation (HDA). Dynamic layer reallocation with
reinforcement-learning-based resource optimization brings a 30% to 40% reduction
in packet loss, with latency improved by 20% to 25%. CAEO relies
on edge intelligence and context-aware sleep-wake cycles to boost battery life
by 50% while retaining 95% data accuracy. APS uses machine learning to select
the best protocols for communication, thereby increasing throughput by 20–30%.
PFM makes use of predictive analytics and blockchain integration to prevent
faults before they happen, thus enhancing network reliability by 25%. HDA
reduces redundancy in data, which in turn reduces the overhead of data
transmission by 40% and increases processing speed by 30%. This multi-layered
approach ensures resource efficiency, real-time performance, scalability,
reliability, security, and interoperability with diverse IoT ecosystems. The
proposed model shows great impacts, including reduced operational costs,
enhanced energy efficiency, and robust fault tolerance, making it a
transformative solution for next-generation IoT networks. |
Keywords: |
IoT Networks, Dynamic Layer Prioritization, Predictive Fault Management, Energy
Optimization, Adaptive Protocols |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
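The abstract above does not publish code, so the epsilon-greedy bandit below is only a hypothetical illustration of the kind of Adaptive Protocol Selection it describes; the protocol names and the reward signal are invented.

```python
# Hypothetical sketch of machine-learning-driven protocol selection (APS):
# an epsilon-greedy bandit picks the protocol with the best observed reward
# (e.g., normalized throughput) while still exploring occasionally.
import random

class AdaptiveProtocolSelector:
    def __init__(self, protocols, epsilon=0.1):
        self.protocols = protocols
        self.epsilon = epsilon
        self.counts = {p: 0 for p in protocols}
        self.avg_reward = {p: 0.0 for p in protocols}

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best protocol so far.
        if random.random() < self.epsilon:
            return random.choice(self.protocols)
        return max(self.protocols, key=lambda p: self.avg_reward[p])

    def update(self, protocol, reward):
        # Incremental mean of the observed reward for this protocol.
        self.counts[protocol] += 1
        self.avg_reward[protocol] += (reward - self.avg_reward[protocol]) / self.counts[protocol]

selector = AdaptiveProtocolSelector(["MQTT", "CoAP", "HTTP"])
for _ in range(100):
    protocol = selector.select()
    observed_throughput = random.random()   # placeholder for a real measurement
    selector.update(protocol, observed_throughput)
print(selector.avg_reward)
```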
|
Title: |
MORPHEME MASTERY FOR TAMIL QA: HARNESSING THE WISDOM OF NANNŪL FOR ACCURATE
ANSWERS |
Author: |
NIVEDITHA S, PAAVAI ANAND G |
Abstract: |
Morpheme Mastery represents a groundbreaking advancement in Tamil Question
Answering, introducing a pioneering answer alignment technique that leverages
Finite State Transducer (FST) technology to expertly handle complex
morphological structures, particularly nouns. This research contributes to the
advancement of IT in natural language processing by developing a novel approach
that ensures precise analysis and interpretation, enabling efficient and
accurate processing. The significance of this research lies in its potential to
revolutionize computational linguistics and Tamil Question Answering, setting a
new benchmark for accuracy and efficiency. By incorporating novel steps such as
stop words removal, Numeric Information Extraction (NIE), and Temporal Entity
Recognition (TER), Morpheme Mastery addresses a critical gap in existing
question answering systems. Our deliberate choice of a rule-based approach,
rather than machine learning alternatives, guarantees consistently high
accuracy, a critical prerequisite for building a robust computational language
framework. Empirical results demonstrate the efficacy of our approach, yielding
an 11.87% efficiency improvement on the CHAII dataset and 9.58% on SQuAD.
This significant enhancement underscores Morpheme Mastery's potential to
transform the landscape of Tamil Question Answering and computational
linguistics, enabling more accurate and efficient human-machine interaction. |
Keywords: |
Answer Alignment, Tamil, Question Answering, Morpheme Analysis, Nannūl |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
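The paper's FST-based analyzer is not reproduced here; the toy rule-based segmenter below merely illustrates the kind of suffix-stripping morpheme analysis the abstract describes, using a handful of romanized, hypothetical Tamil case suffixes.

```python
# Illustrative sketch only: a toy morpheme segmenter that strips a few
# invented, romanized Tamil noun case suffixes to recover a stem. A real
# finite state transducer would encode far richer sandhi and suffix rules.
CASE_SUFFIXES = ["ukku", "udan", "aal", "il", "ai"]  # illustrative, not exhaustive

def segment_noun(word: str):
    """Return (stem, suffix), trying longer suffixes first; suffix is '' when no rule matches."""
    for suffix in sorted(CASE_SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[:-len(suffix)], suffix
    return word, ""

print(segment_noun("maramai"))   # hypothetical inflected form -> ('maram', 'ai')
```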
|
Title: |
DESIGN AND DEVELOPMENT OF MICROSERVICES-BASED CRM SYSTEM |
Author: |
ANDRII PRYHODA, ROSTYSLAV SIKORA, VOLODYMYR MOSKALENKO, ANDRII ROSKLADKA |
Abstract: |
The aim of the article is to analyse the technical and organizational aspects of
the implementation of microservice architecture in CRM systems. The research
employed analytical and logical methods, technical analysis of microservice
architecture comparing it with monolithic systems, as well as Docker and
Kubernetes applied technologies. The use of Docker and Kubernetes
containerization tools is shown to be a key factor in successfully managing a
microservices-based CRM infrastructure. The analysis of implementation results
shows how the microservice architecture reduces operational costs and increases
the efficiency of system support. The role of automation of information systems
(IS) in business was analysed. It was established that the modern market creates
a situation where it is necessary to constantly increase production efficiency.
General requirements for the CRM system were formulated, including security and
protection, reliability and availability, a user-friendly interface, ease of
implementation and maintenance, and availability of basic functionality. Current web
application development technologies are explored, and the technologies used in
the project (Django, Django Rest Framework, React, MobX, and Ant Design) are
described. |
Keywords: |
CRM system, Docker, Kubernetes, Microservice architecture, Monolithic
architecture. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
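As a hypothetical illustration of how one CRM microservice might expose its resource with Django REST Framework, the stack named in the article above, the sketch below assumes a configured Django project and an invented Customer model; it is not the article's code.

```python
# Hypothetical sketch: a single customer-management microservice exposing CRUD
# endpoints via Django REST Framework. Assumes an existing Django project and
# an invented `customers` app with a Customer model (name, email fields).
from rest_framework import routers, serializers, viewsets
from customers.models import Customer   # hypothetical app and model

class CustomerSerializer(serializers.ModelSerializer):
    class Meta:
        model = Customer
        fields = ["id", "name", "email"]

class CustomerViewSet(viewsets.ModelViewSet):
    """CRUD endpoints owned entirely by this microservice's bounded context."""
    queryset = Customer.objects.all()
    serializer_class = CustomerSerializer

router = routers.DefaultRouter()
router.register(r"customers", CustomerViewSet)
# urlpatterns = router.urls   # wired into this service's urls.py and deployed in its own container
```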
|
Title: |
A HYBRID FRAMEWORK FOR OBJECT DETECTION AND SEGMENTATION IN AUTONOMOUS VEHICLES
USING YOLO NAS AND MASK R-CNN |
Author: |
DARTHY RABECKA V, BRITTO PARI J |
Abstract: |
A perception pipeline that can precisely detect and segment objects in complex
environments is essential for the rapid development of self-driving vehicles.
The hybrid framework presented in this work improves object detection and
segmentation performance by combining Mask R-CNN with You Only Look Once Neural
Architecture Search (YOLO-NAS). With the neck and head of YOLO-NAS retained,
this study boosts the performance of YOLO-NAS by substituting a combination of
ResNet and a Feature Pyramid Network (FPN) for the default RepNeXt backbone.
Additionally, to increase segmentation
capabilities, the study integrates Mask R-CNN. Our methodology leverages the
efficiency of YOLO-NAS for fast object detection and the precision of Mask R-CNN
for complex segmentation tasks using the KITTI dataset, a leading benchmark in
autonomous driving research. This approach resolves issues such as disparate
object sizes, obstructions, and complex backgrounds that are frequently
encountered when driving in urban areas. Our cloud-based approach outperforms
previous approaches in terms of precision, recall, and F1 scores. The results of
our experiments show that this combination approach could greatly contribute to
the development of more reliable and safer autonomous driving systems, paving
the way for advancements in real-time perception technology. |
Keywords: |
Convolutional Neural Network, Object Detection, Autonomous Vehicles, You Only Look
Once |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
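A minimal sketch of the two-stage idea described in the abstract above: a fast detector proposes boxes and an instance-segmentation model supplies masks. torchvision's Mask R-CNN is used for the second stage, while yolo_nas_detect is a hypothetical stand-in for the YOLO-NAS detector rather than an actual library call; box coordinates and the IoU threshold are illustrative.

```python
# Sketch of a detect-then-segment pipeline: fast detector proposals are used to
# filter the masks produced by an instance-segmentation model.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.ops import box_iou

def yolo_nas_detect(image: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for a fast YOLO-NAS detector; returns (x1, y1, x2, y2) boxes."""
    return torch.tensor([[30.0, 40.0, 200.0, 180.0]])

# Pretrained COCO weights are downloaded on first use.
segmenter = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 375, 1242)        # KITTI-sized image tensor (C, H, W)
proposals = yolo_nas_detect(image)      # stage 1: fast detection
with torch.no_grad():
    out = segmenter([image])[0]         # stage 2: dict with boxes, labels, scores, masks

# Keep only segmentation masks whose boxes overlap a fast-detector proposal.
if out["boxes"].numel() > 0:
    keep = box_iou(proposals, out["boxes"]).max(dim=0).values > 0.5
    masks = out["masks"][keep]
    print("masks kept:", masks.shape[0])
```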
|
Title: |
AI BASED MODIFIED APPROACH IN HIGH-RESOLUTION IMAGE RESTORATION TECHNIQUES FOR
SMART HEALTHCARE SYSTEM: A TECHNICAL REVIEW AND ANALYSIS |
Author: |
PRASANTA KUMAR SAHOO, DEBASIS GOUNTIA, RANJAN KUMAR DASH, MANAS KUMAR NANDA |
Abstract: |
Medical image analysis is one of the most prominent research areas in digital
image processing, where images are especially sensitive and information-rich.
Data loss in medical image processing harms patients and the
healthcare system. Therefore, several medical image analysis techniques play a
significant role in ensuring lossless information during disease diagnosis. The
visual clarity of an image depends upon its high resolution. Therefore, high
resolution is crucial in medical image analysis. The high-resolution problem
occurs due to spatial resolution limitations. These limitations can be
attributed to hardware limitations, low radioactivity doses, or the acquisition
time of a specific image. To avoid these limitations, several high-resolution
image restoration (or reformation) techniques are available. These techniques
include image compression, histogram equalization, edge detection, feature
extraction, image synthesis, and noise reduction, among others. Recently,
different AI-based techniques, like machine learning (ML) and deep learning
(DL), have been considered booming technologies for high-resolution image
reformation. Therefore, we felt it was necessary to review various AI
learning-based techniques and their applications in the reformation of
high-resolution digital images. The digital images may include various types,
such as medical images or those from the Indian Historical Society. In this
article, we first shed light on different modalities of medical image analysis
techniques with their image acquisition properties. We then apply machine
learning and deep learning AI approaches to the image acquisition properties to
address the issue of poor visual quality in the images. This serves as evidence
that ML and DL models must modify the component configuration of medical image
modalities to reform high-resolution images. To improve the quality of
low-resolution images, we are building new models for histogram equalization,
compression, decompression, and noise reduction based on this comprehensive
review. |
Keywords: |
High-Resolution, Image Enhancement, Medical Image Analysis, Image Compression,
Machine Learning, Deep Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
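As a concrete example of one classical technique surveyed in the review above, the NumPy sketch below performs histogram equalization, remapping pixel intensities through the normalized cumulative histogram; the synthetic low-contrast image is an illustrative stand-in.

```python
# Histogram equalization for an 8-bit grayscale image: intensities are remapped
# through the normalized cumulative histogram so they spread over 0-255.
import numpy as np

def equalize_histogram(image: np.ndarray) -> np.ndarray:
    """Equalize an 8-bit grayscale image (values 0-255)."""
    hist, _ = np.histogram(image.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)            # ignore empty bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 / (cdf_masked.max() - cdf_masked.min())
    lookup = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
    return lookup[image]

low_contrast = np.clip(np.random.normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)
enhanced = equalize_histogram(low_contrast)
print("std before:", low_contrast.std(), "std after:", enhanced.std())
```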
|
Title: |
DEEP LEARNING-BASED CLASSIFICATION AND SEGMENTATION OF CHEST PATHOLOGIES |
Author: |
JAKKULA SRAVANTHI, BANDA VENKATA RAMANA, JUTHUKA ARUNA DEVI, SURAGALI CHANTI,
SRADHANJALI PATTANAIK, PALTHIYA ANANTHA RAO, MARADA SRINIVASA RAO |
Abstract: |
Chest diseases such as COVID-19, viral pneumonia, and lung opacity call, in
their most severe cases, for quick diagnosis and accurate treatment. Applying
deep learning methods, mainly convolutional neural networks (CNNs), has become
attractive among machine learning techniques for automated image diagnostics.
This paper reports a new ensemble approach that uses CNN-1, CNN-3, and VGG-16
architectures to classify disease and U-Net to segment chest pathologies in
X-ray images obtained from the Kaggle repository. Data augmentation is applied
to the original samples to increase the dataset size and improve model
performance. The segmentation procedure shows a high capability to delineate
regions of interest in the lungs, which contributes to higher accuracy; a
comparison was performed between the proposed segmentation method and existing
methods. The model achieves 96% accuracy in segmentation and 87.7% in
classification, reflecting the preferred method's high accuracy. The proposed model was
evaluated on performance metrics like F1-score, Precision, and Recall.
Therefore, this result may lead the way for enhanced diagnostic accuracy and
treatment decisions in clinical settings. |
Keywords: |
Viral Pneumonia, COVID-19, Lung Opacity, Deep Learning, Ensemble, CNN-1, CNN-3,
VGG 16, Segmentation, U-Net. |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
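A hypothetical sketch (not the paper's code) of the soft-voting ensemble idea described above: class probabilities from several CNN classifiers are averaged before taking the argmax. The tiny networks stand in for CNN-1, CNN-3, and VGG-16, and the input batch is random.

```python
# Soft-voting ensemble: average the softmax outputs of several classifiers,
# then predict the class with the highest mean probability.
import torch
import torch.nn as nn

def make_toy_cnn(num_classes: int = 4) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, num_classes),
    )

models = [make_toy_cnn() for _ in range(3)]   # stand-ins for CNN-1, CNN-3, VGG-16
xray_batch = torch.randn(8, 1, 224, 224)      # stand-in chest X-ray batch

with torch.no_grad():
    probs = torch.stack([torch.softmax(m(xray_batch), dim=1) for m in models])
ensemble_probs = probs.mean(dim=0)            # soft voting across the ensemble
predictions = ensemble_probs.argmax(dim=1)    # e.g. COVID-19 / pneumonia / opacity / normal
print(predictions)
```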
|
Title: |
IMPLEMENTATION OF HYBRID DEEP LEARNING CNN MODEL FOR MULTISPECTRAL SATELLITE
IMAGE CLASSIFICATION IN LAND CHANGE DETECTION |
Author: |
YERIK AFRIANTO SINGGALEN, STEPHEN APRIUS SUTRESNO, MUHAMAD NUR AGUS DASRA, RUBEN
WILLIAM SETIAWAN |
Abstract: |
This study investigates implementing a Hybrid Deep Learning Convolutional Neural
Network (CNN) model for classifying multispectral satellite imagery to detect
land cover changes in Untung Jawa Island, Indonesia. The research addresses
critical limitations in conventional image classification methods that struggle
with capturing subtle terrain modifications and complex land cover transitions
at accelerated rates. Our key contribution is the development of an innovative
CNN architecture that integrates multiple deep-learning approaches optimized for
different spectral bands, significantly enhancing feature extraction and
classification capabilities. By systematically applying the Normalized
Difference Vegetation Index (NDVI) and Normalized Difference Built-up Index
(NDBI), the study demonstrates substantial improvements in classification
accuracy, achieving rates exceeding 85% across multiple temporal datasets—a
significant advancement over traditional methods that typically achieve 65-75%
accuracy in similar contexts. The hybrid CNN model successfully processes over
1,000 image patches while maintaining consistent accuracy levels above 82% in
feature extraction tasks. Quantitative analysis reveals a 28% increase in
urbanized areas between 2013 and 2024 and a 19% decrease in vegetated surfaces,
providing crucial evidence for environmental planning. Implementing U-Net
architecture for image segmentation further enhances the model's capability to
detect subtle environmental modifications, particularly in coastal regions where
rapid urbanization intersects with sensitive ecological systems. This research
advances remote sensing technology by establishing new methodological benchmarks
for automated environmental monitoring systems and providing actionable insights
for sustainable urban development planning in vulnerable small island
ecosystems. |
Keywords: |
Hybrid Deep Learning, Convolutional Neural Network, Multispectral Satellite
Image, Land Change Detection, NDVI, NDBI, U-Net Architecture, Environmental
Monitoring |
Source: |
Journal of Theoretical and Applied Information Technology
31st March 2025 -- Vol. 103. No. 6-- 2025 |
Full
Text |
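A minimal NumPy sketch of the spectral indices applied in the study above, NDVI = (NIR - Red) / (NIR + Red) and NDBI = (SWIR - NIR) / (SWIR + NIR); the band arrays here are random stand-ins for multispectral imagery.

```python
# Band-wise normalized-difference indices computed with NumPy.
import numpy as np

def normalized_difference(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """(a - b) / (a + b), returning zero where the denominator vanishes."""
    denom = band_a + band_b
    return np.divide(band_a - band_b, denom, out=np.zeros_like(denom), where=denom != 0)

nir = np.random.rand(256, 256).astype(np.float32)    # near-infrared band (stand-in)
red = np.random.rand(256, 256).astype(np.float32)
swir = np.random.rand(256, 256).astype(np.float32)   # shortwave-infrared band (stand-in)

ndvi = normalized_difference(nir, red)    # higher values indicate denser vegetation
ndbi = normalized_difference(swir, nir)   # higher values indicate built-up surfaces

print("mean NDVI:", ndvi.mean(), "mean NDBI:", ndbi.mean())
```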
|
|
|