Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, from the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please include all personally identifiable information in the
manuscript when submitting it for review, and we will remove the necessary
information on our side. Submissions to JATIT should be full research / review
papers (with the type properly indicated below the main title).
|
|
|
Journal of Theoretical and Applied Information Technology
October 2025 | Vol. 103 No. 20 |
|
Title: |
INTEGRATING ZERO TRUST PRINCIPLES WITH MACHINE LEARNING FOR ANOMALY DETECTION
AND ACCESS CONTROL IN DIGITAL EDUCATIONAL SYSTEMS |
|
Author: |
MANAL SOBHY ALI ELBELKASY, HEBA M. SABRI, ALIAA K. ABDELLA, MAI A. ELNADY, AND
KAREEM KAMAL A. GHANY |
|
Abstract: |
Considering that educational institutions are now integrated into the digital
world, the protection of information systems has gained great significance. This
research uses a machine learning-based anomaly detection approach incorporated
with a Zero Trust framework to explore and secure system log datasets from
various sources such as Android, Apache, Hadoop, Linux, and Windows. Using the
unsupervised learning algorithm Isolation Forest, anomalies were identified
based on engineered features including EventId_encoded and TemplateLength.
Anomaly detection performance was uniform across platforms, with an average
anomaly rate of 10% across all datasets. Using the Zero Trust access control
model, 500 anomalies were successfully blocked out of more than 10,000 log
entries. Linux showed the most variability among the platforms, with the highest
anomaly count of 200. The confusion matrix confirmed strong model performance,
with a low false positive rate of approximately 2.5%. As expected, the Zero
Trust framework was effective in blocking events that should not be accessible
while keeping normal events reachable. This study emphasizes the practicality
and effectiveness of combining Zero Trust architecture principles with machine
learning for the protection of data in educational systems. |
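For illustration, a minimal sketch of the kind of pipeline this abstract describes: Isolation Forest scoring of log events feeding a deny-by-default access gate. The feature names follow the abstract (EventId_encoded, TemplateLength), but the sample data, the 10% contamination setting, and the allow/block gate are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch only: Isolation Forest anomaly scoring feeding a
# Zero Trust-style deny-by-default access gate. Data are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

logs = pd.DataFrame({
    "EventId_encoded": [3, 7, 3, 42, 7, 3],
    "TemplateLength":  [18, 25, 18, 240, 25, 18],
})

# Contamination mirrors the ~10% average anomaly rate reported above.
model = IsolationForest(contamination=0.10, random_state=0)
logs["anomaly"] = model.fit_predict(logs)  # -1 = anomaly, 1 = normal

def zero_trust_gate(row) -> str:
    # Deny-by-default: only events scored as normal are granted access.
    return "ALLOW" if row["anomaly"] == 1 else "BLOCK"

logs["decision"] = logs.apply(zero_trust_gate, axis=1)
print(logs)
```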
|
Keywords: |
Security, Zero-Trust, Machine Learning, Anomaly Detection, Access Control |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
DEVELOPING INTERCULTURAL COMMUNICATIVE COMPETENCE THROUGH VIRTUAL REALITY |
|
Author: |
VIRA CHORNOBAI, NATALY ZATSEPINA, NORIK GEVORKIAN, VIKTORIIA ROMANIUK, IHOR
BEREST |
|
Abstract: |
The development of intercultural communicative competence has a positive effect
on a person's adaptation in society and the formation of a new worldview. The
aim of the study is to determine the advantages of developing intercultural
communicative competence using virtual reality (VR). The study employed the
methods of analysis, observation, comparison, the mathematical method of
parallel forms, calculations of the J. Phillips criterion, Cronbach’s α. The
process of developing communicative skills in foreign language learning involved
the analysis of cultural stereotypes, expanding verbal and non-verbal
techniques, solving intercultural situations, and student interaction. The
training was conducted using ClassVR (Group 1) and ENGAGE (Group 2). The results
showed that the ClassVR application had a greater impact on the development of
language skills (M=45.3), ENGAGE — social skills (M=47.2). It was found that the
learning process had a positive significance for the development of language
competence (94%), critical thinking (92%) for the students of Group 1. The
students of Group 2 developed the skills of clarity of development of cultural
strategies (95%), flexibility of communication (93%). It was determined that the
students of Group 1 achieved a high level of communication during interaction
with other students from the group (M=47.2), Group 2 – with students from other
countries (M=48.2). The obtained results reveal the advantages of VR for
developing students' intercultural communicative competence. The practical
significance of the article lies in the possibility of adapting VR to foreign
language learning. Future research aims to support the development of
intercultural communication for students from Ukraine and Great Britain,
analysing the possibility of achieving intercultural competence in the study of
English and Ukrainian and achieving high-quality interaction between students.
The results will likewise be put into practice through the use of ClassVR and
ENGAGE. |
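Since the study's reliability check relies on Cronbach's α, here is a small, self-contained sketch of how that coefficient is computed from a respondents-by-items score matrix; the sample data are invented for illustration and do not come from the study.

```python
# Illustrative computation of Cronbach's alpha; data are hypothetical.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = test items."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1).sum() # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
sample = rng.integers(1, 6, size=(30, 10))  # 30 respondents, 10 items
print(f"alpha = {cronbach_alpha(sample):.3f}")
```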
|
Keywords: |
Social Environment, Interactive Opportunities, Cultural Stereotypes, Verbal
Techniques, Intercultural Situations, Virtual Reality, Educational Technologies. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
DESIGN AND IMPLEMENTATION OF A SMARTPHONE-BASED IOT CONTROL SYSTEM FOR ELECTRIC
HOSPITAL BEDS IN HOME HEALTHCARE APPLICATIONS |
|
Author: |
NUTJIRED KHEOWSAKUL, KORRAPAT CHALERMWONG, WORAPONG BOONCHOUYTAN, SUPAWADEE
MAK-ON, CHATREE HOMKHIEW |
|
Abstract: |
This study presents a novel, smartphone-based Internet of Things (IoT) control
system for electric hospital beds, providing a proactive physical intervention
solution to enhance home healthcare for bedridden patients. The system is based
on a three-tiered conceptual model, with key concepts like low latency and
usability rigorously operationalized. The architecture integrates a mobile
application (Thunkable), real-time cloud communication (Firebase), and an
embedded microcontroller (ESP32). Its dual-mode control logic supports both
automated repositioning at predefined intervals to mitigate risks like pressure
ulcers and manual control via a user-friendly interface. Empirical evaluation
confirmed the system's robust performance, with the automatic mode achieving a
mean repositioning interval of 2.003 minutes (SD = 0.045) and an average
response time across varying network conditions consistently under 0.6 seconds.
A user satisfaction study with 20 nursing students yielded a high mean score of
4.32 out of 5.0. The study also identified a key limitation: a peak electrical
current of 1.13 A in one actuator, which informs a detailed discussion of the
need for enhanced safety mechanisms in future iterations. This work demonstrates
a scalable and cost-effective solution with high user acceptance, providing a
validated framework for next-generation assistive care technologies. |
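A simplified sketch of the dual-mode control logic the abstract describes: automatic repositioning on a fixed interval (the paper reports a mean of roughly 2 minutes) with manual commands taking priority. The event loop, command source, and actuator stub below are hypothetical; the real system runs on an ESP32 with Firebase messaging.

```python
# Hypothetical sketch of dual-mode bed control: timed auto-repositioning
# with manual override. Hardware and cloud I/O are stubbed out.
import time

AUTO_INTERVAL_S = 120  # ~2-minute repositioning cycle, per the abstract

def drive_actuator(position: str) -> None:
    print(f"moving bed to: {position}")  # stand-in for ESP32 motor control

def poll_manual_command() -> str | None:
    return None  # stand-in for reading a Firebase command queue

def control_loop() -> None:
    positions = ["left_tilt", "flat", "right_tilt"]
    idx, last_auto = 0, time.monotonic()
    while True:
        manual = poll_manual_command()
        if manual:                                 # manual mode has priority
            drive_actuator(manual)
        elif time.monotonic() - last_auto >= AUTO_INTERVAL_S:
            idx = (idx + 1) % len(positions)       # automatic repositioning
            drive_actuator(positions[idx])
            last_auto = time.monotonic()
        time.sleep(0.5)
```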
|
Keywords: |
Smart Bed, IoT, Mobile Interface, Healthcare Application, Elderly Support |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
UNRAVELING SCHIZOPHRENIA: A PRECISION DIAGNOSTIC APPROACH WITH SEQUENTIAL
CONVOLUTIONAL NEURAL NETWORKS |
|
Author: |
B. PRABHA, N. MUTHURASU, V. AUXILIA OSVIN NANCY, G. SARAVANA GOKUL, E. SABITHA |
|
Abstract: |
Schizophrenia, an intricate mental health disorder, poses challenges for early
detection, emphasizing the imperative of timely intervention to optimize
treatment outcomes. It tends to estrange individuals from social life,
highlighting the importance of early diagnosis, particularly for fostering
healthy brain development in affected young individuals. Early intervention not
only holds the potential to enhance treatment outcomes but also to significantly
mitigate disability in young patients grappling with schizophrenia. The proposed
work focuses on diagnostic methodology by utilizing a Sequential Convolutional
Neural Network (CNN) applied to Electroencephalogram (EEG) signals. Leveraging
EEG data sourced from the CHB-MIT dataset, the research sheds light on brain
activity patterns associated with schizophrenia. The preprocessing stage
incorporates wavelet transformation, while statistical feature extraction aids
in discerning normal and schizophrenic classes. The CNN architecture integrates
convolutional layers, dropout mechanisms for regularization, and max pooling for
downsizing, resulting in a robust classification model. Experimental results
illustrate the model's efficacy in comparison to alternative approaches,
demonstrating promising outcomes in terms of accuracy and loss rates. The
research lays the groundwork for future enhancements, suggesting potential
applications of the model across diverse diagnostic criteria for schizophrenia.
The proposed CNN model exhibits a high accuracy of 92% in distinguishing
individuals with and without schizophrenia, underscoring its effectiveness as a
diagnostic tool. |
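A minimal Keras sketch matching the architecture outline in the abstract (convolutional layers, dropout for regularization, max pooling for downsizing, and a binary normal/schizophrenic output); the filter counts and the EEG input shape are assumptions, not the authors' exact configuration.

```python
# Illustrative Sequential CNN for binary EEG classification.
# Filter counts and the (samples, channels) input shape are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(256, 23)),          # EEG segment: samples x channels
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(2),                  # downsizing, as in the abstract
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Dropout(0.5),                     # regularization
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # normal vs. schizophrenic
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```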
|
Keywords: |
Electroencephalogram (EEG), Convolutional Neural Network (CNN), Sequential CNN,
Classification. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
THE ROLE OF DIGITAL TECHNOLOGIES IN INCREASING THE INFORMATION OPENNESS OF
SELF-GOVERNMENT BODIES IN UKRAINE |
|
Author: |
PAVLO TROFIMCHYK, MAKSYM KHOZHYLO, SERGII PROKOPENKO, OLHA ANTONOVA, LIUDMYLA
IVASHOVA |
|
Abstract: |
Relevance of the research: The relevance of this study is determined by the
need to increase the information openness of local government through the
implementation of sustainable, interoperable, and cognitively oriented digital
technologies. Aim of the research: The aim of the study is to formalize an
integrated digital framework for local government aimed at ensuring information
openness, taking into account the parameters of cyber resilience, administrative
efficiency, as well as ethical and technological coherence. Research methods:
The study employed the following methods: comparative cross-analysis,
decomposition analysis, architectural modelling, sequential modelling,
requirements and class modelling, Kanban modelling, and cognitive modelling of
user interaction. Results: The study confirmed the key role of digital
technologies in increasing the information openness of self-government bodies,
in particular through the use of e-Government portals, Open Government Data
(OGD), Customer Relationship Management (CRM), and e-Participation as balanced
tools of transparency and accessibility. The proposed framework is based on
centralized Application Programming Interface (API) orchestration, traceable
transaction routing, and a cyber-resilient monitoring module (CyberMonitor),
which ensures regulatory verifiability of service interaction. Institutional
scalability, techno-ethical compliance, and cognitive accessibility are achieved
through modular architecture, class typification, and a Kanban strategy for
implementing digital openness. Academic novelty of the research: The academic
novelty of the research is the formalization of a multi-level framework for
digital governance with an interoperable architecture centralized through the
e-Government Portal. A metamodel for evaluating digital technologies based on
the criteria of openness, cyber resilience, and institutional resilience is
proposed. Prospects for further research: Further research should be directed
towards launching a controlled pilot project to empirically test the
functioning of the framework in local e-government settings. The testing results
should become the basis for its adaptive implementation, taking into account the
parameters of institutional stability, cyber resilience, and regulatory
integration. |
|
Keywords: |
Public Governance, Digitalisation, Digital Transformation, Administrative
Services, Communications, Local Self-Government, Transparency, Transformation,
European Integration, Public-Private Partnership, Corruption Risks, Regional
Partnership, Development Programmes, Local Budgets, Local Taxes, PPP Projects |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
TRANSFER LEARNING ARCHITECTURES FOR BREAST CANCER DETECTION IN MAMMOGRAPHIC
IMAGE DATA |
|
Author: |
HELMI IMADUDDIN, WIDI WIDAYAT, MUHAMMAD SYAHRIANDI ADHANTORO |
|
Abstract: |
Breast cancer remains a leading cause of mortality among women globally, with
early and accurate detection being critical for effective treatment. This study
aims to evaluate the effectiveness of transfer learning architectures in
detecting breast cancer using mammographic image data, addressing challenges
posed by dense breast tissues and limited annotated datasets. The research
utilizes the INbreast dataset, comprising high-resolution mammograms, and
applies pre-trained convolutional neural network models such as DenseNet-201 and
MobileNetV2 to classify benign and malignant classes. The methodology includes
data preprocessing (resizing, normalization, and augmentation), followed by
model training with hyperparameter optimization, and validation. Performance is
assessed through accuracy, precision, sensitivity, and specificity. DenseNet-201
outperformed MobileNetV2 across all metrics, achieving 0.99 accuracy and
sensitivity, 0.98 specificity and 0.97 precision, demonstrating its robustness
in capturing intricate features critical for cancer detection. MobileNetV2,
while computationally efficient, showed lower diagnostic reliability, making it
more suitable for resource-constrained scenarios. The findings confirm that
DenseNet-201 is a highly effective model for breast cancer classification in
mammographic images, offering strong potential for integration into clinical
diagnostic systems. This study contributes to the growing literature on
AI-assisted diagnostics and supports the deployment of deep learning for
improving public health outcomes. Future work should explore ensemble models and
broader dataset generalization to ensure scalable and equitable implementation. |
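A hedged sketch of the transfer-learning setup the abstract describes: a pre-trained DenseNet-201 backbone with a new binary head for benign/malignant mammograms. The frozen base, head layers, and 224x224 input are illustrative choices, not necessarily the paper's hyperparameters.

```python
# Illustrative DenseNet-201 transfer-learning classifier; the head and
# training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet features for initial training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```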
|
Keywords: |
Breast Cancer, Deep Learning, DenseNet-201, MobileNetV2, Transfer Learning |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
ENRICHED DOLPHIN OPTIMIZATION-BASED SECURED ROUTING PROTOCOL FOR
MOBILITY-ENABLED WIRELESS SENSOR NETWORK |
|
Author: |
S. A. GUNASEKARAN, DR. M. SENTHIL KUMAR |
|
Abstract: |
In a wireless sensor network (WSN), many sensors with limited processing power
and data storage capacity exchange information with a central base station over
a wireless connection. These sensors can be utilized to create a self-sufficient
distributed network. An essential characteristic of the Mobility-enabled WSN
(MWSN) is the ability of nodes to move around while communicating over multiple
hops. Since there is no hard and fast rule on how close or far apart nodes must
be to communicate, nodes are free to roam. MWSNs face more security issues in
routing than traditional WSNs. To address routing issues in MWSNs, this paper
proposes a reactive protocol, the Enriched Dolphin Optimization based Secured
Routing Protocol (EDOSRP), inspired by dolphins’ natural behaviour. EDOSRP
evaluates nodes in a route by analyzing the service each has done for the
network; if that value is less than a threshold, the node is avoided in routing.
Fitness and node-movement synchronization assure sound node evaluation and
enhance the packet delivery ratio. EDOSRP is simulated in NS3 to measure its
performance. The results highlight that EDOSRP performs better than current
routing protocols across all considered metrics. |
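The core EDOSRP rule stated in the abstract, excluding a node from routing when its service value falls below a threshold, can be sketched as follows; the fitness formula (a forwarding ratio) and the threshold value are hypothetical placeholders, since the paper defines its own.

```python
# Hypothetical sketch of threshold-based node filtering in the spirit of
# EDOSRP: nodes whose service score falls below a threshold are avoided.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    packets_forwarded: int
    packets_received: int

def service_fitness(n: Node) -> float:
    # Placeholder fitness: forwarding ratio as a proxy for service done.
    return n.packets_forwarded / max(n.packets_received, 1)

def eligible_route(nodes: list[Node], threshold: float = 0.6) -> list[int]:
    return [n.node_id for n in nodes if service_fitness(n) >= threshold]

nodes = [Node(1, 90, 100), Node(2, 30, 100), Node(3, 75, 100)]
print(eligible_route(nodes))  # node 2 is excluded from routing
```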
|
Keywords: |
Routing, Optimization, Dolphin, Fitness, Sensor, WSN |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
CLOUD-BASED HEALTH CARE DATA MANAGEMENT SYSTEM USING TRANSFORMER ARCHITECTURE
FOR ENHANCED PREDICTIVE ANALYTICS |
|
Author: |
NAGA SAI RAM NARNE, GANGADHARA RAO KANCHARLA |
|
Abstract: |
Healthcare prediction models encounter significant challenges in managing
temporal medical data while ensuring privacy across decentralized institutions.
Current transformer architectures lack specific features for handling irregular
medical time intervals and do not allow secure collaboration between multiple
institutions without centralizing sensitive patient data. This research presents
the first cloud-based transformer architecture designed specifically for
healthcare prediction, incorporating
privacy-preserving federated learning functionalities. It attains 94.2%
accuracy, reflecting a 5.1% enhancement over the leading BEHRT model, while
identifying disease progression 4.3 months sooner than traditional approaches
and ensuring total patient data sovereignty. |
|
Keywords: |
Cloud Computing, Transformer Architecture, Healthcare Prediction, Temporal
Medical Data, Federated Learning, Disease Progression Forecasting |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
FEW-SHOT AND ZERO-SHOT SATELLITE IMAGE CAPTIONING WITH BLIP VISION-LANGUAGE
MODELS |
|
Author: |
M BALAKRISHNA MALLAPU, DEEPTHI GODAVARTHI |
|
Abstract: |
Satellite image captioning has emerged as a critical task in remote sensing and
geospatial analysis, enabling automated understanding and interpretation of
complex aerial imagery through natural language descriptions. This paper
proposes a novel architecture that integrates a Swin Transformer for
hierarchical visual feature extraction with Bootstrapped Language-Image
Pretraining (BLIP) for vision-language alignment. The dataset comprises 10,000
satellite images annotated with corresponding captions, divided into training,
validation, and testing subsets. The images were preprocessed to a resolution of
224×224×3, and captions were tokenized with a fixed maximum length of 50 tokens.
The proposed framework outperforms several existing models by effectively
capturing both global and local contextual information from diverse land-cover
scenes. Experimental results demonstrate a BLEU-4 score of 6.5, which exceeds
prior benchmark models such as Convolutional Neural Network (CNN) and Long
Short-Term Memory (LSTM), Transformer-EfficientNet, and ViT-SemanticTagging. The
architecture's design ensures both semantic richness and structural flexibility,
making it highly adaptable to a wide variety of satellite image domains.
Comparative analysis, qualitative outputs, and quantitative evaluation confirm
the robustness and efficiency of the model. This work contributes a scalable,
interpretable, and high-performing solution for remote sensing applications in
environmental monitoring, urban planning, and disaster assessment. From an IT
research perspective, this work demonstrates how advances in multimodal deep
learning can be effectively transferred to specialized domains such as satellite
image analysis. By bridging computer vision and natural language processing, the
framework contributes to the broader IT goal of making unstructured visual data
more interpretable, searchable, and actionable across multiple application
areas. |
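For readers who want to experiment, the Hugging Face transformers library ships a BLIP captioning checkpoint; the sketch below shows zero-shot captioning of a single image. The base checkpoint and the image path are illustrative, and this is not the paper's Swin-augmented model.

```python
# Zero-shot captioning with an off-the-shelf BLIP checkpoint; this is
# not the paper's Swin+BLIP architecture, just a starting point.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained(
    "Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

image = Image.open("satellite_tile.png").convert("RGB")  # hypothetical file
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)  # 50-token cap, as above
print(processor.decode(out[0], skip_special_tokens=True))
```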
|
Keywords: |
BLIP, Vision-Language Models, CNN, LSTM, Transformer-EfficientNet, ViT |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
DIGITAL TECHNOLOGIES IN DOCUMENTING WAR CRIMES: LEGAL AND ETHICAL ASPECTS |
|
Author: |
IVO SVOBODA, ANDRII HUSAK, YURII KOLLER, OLHA KOSYTSIA, VALENTYN KARASOV |
|
Abstract: |
Relevance of the research: The relevance of the research is determined by
the need to create a standardized, legally verified, and ethically acceptable
digital infrastructure for documenting war crimes in the context of growing
volumes of digital evidence, interjurisdictional fragmentation, and the needs of
post-conflict society for mechanisms of reparation and ensuring human rights.
Aim of the research: The aim of the research is to formalize and model the
framework of a global digital platform for documenting war crimes, taking into
account multi-source aggregation, forensic validation, legal stratification, and
institutional interoperability in accordance with international standards.
Methods of the research: The research methodology consists of comparative
analysis of digital technologies, synthesized framework modelling, ontological
modelling, functional and procedural modelling, and scenario sequence modelling.
Obtained results: A comparative analysis of digital technologies for
documenting war crimes revealed differentiation of evidentiary capacity
according to the parameters of chain of custody, forensic reliability, metadata
control, and normative congruence, which is critically important in the context
of protecting human rights and implementing procedures for compensation for
damage caused by military actions. Digital repositories and mobile evidence
applications demonstrated the highest evidentiary stability, while artificial
intelligence (AI)/machine learning (ML) modules and automated attribution
systems carry risks of bias, opacity of inference, and limited explainability.
In response to the identified challenges, a hybrid framework was developed that
synthesizes Open Source Intelligence (OSINT), Geospatial Intelligence (GEOINT),
blockchain, and AI/ML, ensuring traceological integrity, legal validity, and
cross-jurisdictional interoperability of digital evidence in the context of
post-conflict justice. The framework for digital documentation of war crimes is
focused not only on optimizing the evidentiary process but also on integration
into mechanisms for protecting human rights and transitional justice.
Academic novelty of the research: The academic novelty of the research is the
formalization of an integrated forensic legal framework of a digital platform
for documenting jus in bello crimes, which combines AI/ML discrimination,
blockchain anchoring, OSINT/GEOINT aggregation, metadata control, and legal
stratification. The article is the first to develop an ontology of
inter-component interaction focused on preserving chain of custody,
tamper-resistance, explainability, and transjurisdictional admissibility.
Prospects for further research: Further research may focus on the development
of a pilot project with phased validation of the framework in simulated criminal
proceedings. The testing should cover the criteria of procedural relevance,
evidentiary integrity, and transjurisdictional consistency. |
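One concrete mechanism behind the blockchain-anchoring and chain-of-custody goals named above is a tamper-evident hash chain over evidence records; the sketch below illustrates the idea with Python's standard library and is a deliberate simplification of what a real forensic platform would need.

```python
# Simplified tamper-evident hash chain for evidence records; a real
# platform would anchor these digests to a public blockchain.
import hashlib, json, time

def add_record(chain: list[dict], payload: dict) -> None:
    prev = chain[-1]["digest"] if chain else "0" * 64
    record = {"payload": payload, "prev": prev, "ts": time.time()}
    body = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(body).hexdigest()
    chain.append(record)

def verify(chain: list[dict]) -> bool:
    for i, rec in enumerate(chain):
        body = {k: rec[k] for k in ("payload", "prev", "ts")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["digest"]:
            return False                      # record was altered
        if i and rec["prev"] != chain[i - 1]["digest"]:
            return False                      # chain link broken
    return True

chain: list[dict] = []
add_record(chain, {"source": "mobile_app", "sha256_of_file": "..."})
print(verify(chain))  # True until any record is modified
```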
|
Keywords: |
Human Rights, Reparation Of Damages, Reparation For War-Related Damages,
Post-Conflict Society, Transitional Justice, Digital Platform, Evidentiary
Framework |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
MYELOMANET: ADVANCES IN DEEP LEARNING FOR THE SEGMENTATION OF MULTIPLE MYELOMA |
|
Author: |
REVANTH BOKKA, MINAKHI ROUT, SAGIRAJU SRINADHRAJU, SUDHIRVARMA SAGIRAJU,
PITCHIKA P N G PHANIKUMAR |
|
Abstract: |
Multiple myeloma (MM) is a complex hematological malignancy that requires
individualized therapy plans to enhance treatment efficacy and patients' quality
of life. Although AI-based healthcare has been developing, existing models pay
little attention to spatial cell organization and its potential in
person-tailored treatment optimization. This paper presents a new framework
involving deep learning-based segmentation of plasma cells, Minimum Spanning
Tree (MST)-based graph analysis, and wavelet-based feature selection to
facilitate accurate therapy decision-making for MM patients. We isolate plasma
cells with a U-Net-based model trained on the SegPC-2021 dataset, achieving an
average Intersection over Union (IoU) of 0.8658 and an F1 score of 0.9265. After
segmentation, we extract morphological, intensity, and texture descriptors and
construct an MST over the cell centroids to capture spatial relationships. Along
with wavelet-transformed clinical and genetic data, these structural features,
e.g. tree diameter, edge entropy, and clustering density, are used to select the
most predictive biomarkers. AI models such as Random Forest and Support Vector
Machines are then trained on the selected feature set, reaching an accuracy of
94% and an AUC-ROC of 97%. Such an integrated model not only enhances the level
of personalization in treatment but also helps optimize quality of life, as it
reduces overtreatment, minimizes drug toxicity, and improves patient
stratification. The findings are significant and reliable, as revealed by
statistical tests (paired t-tests and chi-square tests). The suggested method
proves that incorporating spatial topology and multiscale analysis in AI-powered
myeloma treatment can deliver clinically meaningful and understandable results.
This work forms a blueprint for precision medicine in MM with a concrete
orientation toward improving patient well-being. |
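A compact sketch of the MST construction over cell centroids and two of the structural descriptors named above (tree diameter and edge-weight entropy); the centroid data are invented, and SciPy's csgraph routines do the graph work.

```python
# Illustrative MST over plasma-cell centroids with two structural
# descriptors from the abstract; centroids are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 512, size=(20, 2))   # 20 cells in a 512px image

dist = squareform(pdist(centroids))             # pairwise Euclidean dists
mst = minimum_spanning_tree(dist)               # sparse MST (19 edges)

weights = mst.data
probs = weights / weights.sum()
edge_entropy = -(probs * np.log(probs)).sum()   # entropy of edge weights

# Tree diameter: longest shortest path within the (undirected) MST.
paths = shortest_path(mst, directed=False)
diameter = paths.max()

print(f"edges={len(weights)}, entropy={edge_entropy:.3f}, "
      f"diameter={diameter:.1f}px")
```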
|
Keywords: |
Minimum Spanning Tree, Optimization, Machine Learning, Segmentation,
Feature Selection, Deep Learning |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
HYBRID FEATURE FUSION FRAMEWORK FOR CLASSIFICATION OF TOCKLAI VEGETATIVE TEA
CLONE SERIES OF ASSAM USING HANDCRAFTED FEATURES AND DEEP FEATURES EXTRACTED
FROM INCEPTIONV3 TRANSFER MODEL INTEGRATED WITH SQUEEZE-EXCITATION BLOCK |
|
Author: |
JASMINE ARA BEGUM, RANJAN SARMAH |
|
Abstract: |
To address the growing demand for tea, Tocklai, Assam, developed various tea
clones that are highly tolerant of soil and climatic conditions. However, tea
farmers often struggle to differentiate the various Tocklai Vegetative (TV) tea
clones, making them dependent on domain experts. An accurate automated method of
identifying tea clones would therefore not only assist farmers in identifying
the proper clone but also reduce reliance on domain experts. Recent advances in
computer vision and deep neural networks have reportedly improved identification
performance, with extensive focus on handcrafted and deep features. In this
paper, a hybrid feature fusion is proposed to extract the discriminative
features required for identifying tea clone leaves. The proposed pipeline
initially extracts handcrafted features, followed by a combined technique of
Harris Hawk Optimization and Mutual Information that optimizes the set and
eliminates less-contributing handcrafted features. The resulting feature set,
named the HandcraftedFeature class, is then fused with the InceptionV3Feature
class, comprising deep features extracted from a modified InceptionV3 transfer
model integrated with a Squeeze-Excitation block. The integration of InceptionV3
and the Squeeze-Excitation block aims to capture both the spatial and the
channel-wise discriminative features of tea clones. Further, to ensure that no
pair across the two feature classes is redundant, the Pearson correlation
coefficient combined with Mutual Information is applied to eliminate redundant
features. The final feature set is then used to train the modified InceptionV3
model with dropout and L2 regularization, which achieved an accuracy of 98.37%
under 5-fold cross-validation. The experimental analysis outperformed
traditional classifiers and demonstrated the effectiveness of the proposed
pipeline in extracting the discriminative features required for identification
of TV tea clones, thereby reducing reliance on manual clone identification
methods. |
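The Squeeze-Excitation block grafted onto InceptionV3 can be sketched in a few lines of Keras; the reduction ratio of 16 is the value from the original SE paper, and the paper's exact integration points are not reproduced here.

```python
# Illustrative Squeeze-Excitation block (channel recalibration); the
# reduction ratio follows the original SE paper, not necessarily this one.
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x: tf.Tensor, ratio: int = 16) -> tf.Tensor:
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)               # squeeze
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)  # excitation
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                     # reweight channels

# Example: attach to an InceptionV3 feature map.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
features = se_block(base.output)
head = layers.GlobalAveragePooling2D()(features)
model = tf.keras.Model(base.input, head)
```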
|
Keywords: |
Handcrafted Feature, Deep Feature, Transfer Learning, Feature Fusion |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
CAT SWARM OPTIMIZATION FOR OPTIMIZING NEURAL NETWORK WEIGHTS IN LUNG
DISEASE CLASSIFICATION |
|
Author: |
LISHA KURIAN, P AMUDHA |
|
Abstract: |
Over the past few years, COVID-19 has had a significant and wide-ranging impact
on many facets of society, the economy, and daily life worldwide. The accurate
and early classification of multiple lung diseases is vital for timely diagnosis
and treatment, particularly in the case of conditions such as COVID-19,
pneumonia, emphysema, pleural effusion, and tuberculosis. Medical imaging,
particularly chest X-ray (CXI), has emerged as a valuable tool for early
detection and triaging of suspected COVID-19 cases. To enhance diagnostic
efficiency, there is growing interest in leveraging artificial intelligence (AI)
techniques, particularly deep learning (DL) algorithms, for automated
interpretation of medical images. In this paper, we propose a convolutional
neural network (CNN) architecture based on EfficientNet-B0 for the
classification of lung diseases using CXI images. We integrate Cat Swarm
Optimization (CSO), a bio-inspired optimization technique, into the training
pipeline to optimize the model's performance and convergence. Our approach
addresses class imbalance in the dataset and provides a comprehensive evaluation
comparing the proposed model with state-of-the-art DL models. Experimental
results demonstrate the effectiveness of our model in accurately classifying
various respiratory diseases, including COVID-19, pneumonia, emphysema,
effusion, tuberculosis, and normal cases, with superior performance in terms of
accuracy (95.8%), precision (84.2%), recall (81.9%), F1 score (83.1%), and
Matthews Correlation Coefficient (MCC) (79.7%). This framework highlights the
potential of integrating biologically inspired optimization algorithms with
state-of-the-art deep learning models for medical image analysis, offering a
promising tool for automated and scalable diagnostic systems. |
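Cat Swarm Optimization alternates a broad "seeking" (local mutation) mode with a "tracing" (velocity-driven) mode; the toy sketch below minimizes a test function to show the mechanics. All parameter values and the fitness function are illustrative; in the paper's setting, fitness would be the model's loss and the positions would encode network weights.

```python
# Toy Cat Swarm Optimization on a sphere function; parameters are
# illustrative. In the paper, fitness would be the CNN's validation loss.
import numpy as np

rng = np.random.default_rng(0)
DIM, CATS, ITERS, MR = 5, 20, 100, 0.3   # MR = fraction in tracing mode

def fitness(x):                           # stand-in objective (minimize)
    return np.sum(x ** 2, axis=-1)

pos = rng.uniform(-5, 5, (CATS, DIM))
vel = np.zeros((CATS, DIM))
best = pos[np.argmin(fitness(pos))].copy()

for _ in range(ITERS):
    tracing = rng.random(CATS) < MR
    # Tracing mode: move toward the global best with inertia.
    vel[tracing] += 2.0 * rng.random((tracing.sum(), DIM)) \
        * (best - pos[tracing])
    pos[tracing] += vel[tracing]
    # Seeking mode: try a mutated copy, keep it if fitness improves.
    idx = np.where(~tracing)[0]
    cand = pos[idx] + rng.normal(0, 0.2, (len(idx), DIM))
    better = fitness(cand) < fitness(pos[idx])
    pos[idx[better]] = cand[better]
    best = pos[np.argmin(fitness(pos))].copy()

print(f"best fitness: {fitness(best):.6f}")
```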
|
Keywords: |
Cat Swarm Optimization, Neural Network Optimization, Lung Disease
Classification, Convolutional Neural Networks |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
ADDRESSING MALWARE RESILIENCE IN CLOUD ENVIRONMENTS THROUGH PATTERN ANALYSIS AND
AUTO-ACTIVATION SELF-REPAIR MODEL |
|
Author: |
SANABOYINA. MADHUSUDHANA RAO, ARPIT JAIN |
|
Abstract: |
As a type of Service Oriented Architecture (SoA), the cloud aspires to
adaptability. The services outlined in the SoA cover a wide range of
governmental, private, and commercial uses. These services are crucial to the
cloud and should be protected. For cloud security and resilience, it's not
enough to be able to recognize existing risks; it must also be prepared to deal
with emerging threats to the cloud's underlying architecture. Organizations and
users alike rely largely on the efficacy of antivirus and other security
software. While this type of software is necessary, the methods it employs are
insufficient to identify and prevent most harmful activity, and they also place
a heavy burden on the host machine's resources. In recent years, there has been
a meteoric rise in the number of Cloud Service Providers (CSPs) and the breadth
of services and features they offer. Taking advantage of such services has paved
the way for businesses' infrastructure to shift towards the cloud, which in turn
has helped them provide services to their clients in a more convenient and
adaptable manner. The infrastructure supporting cloud computing must be strong,
scalable, and fast. For a cloud computing platform to be trusted, it must be
capable of self-diagnosis and self-healing in the event of malfunctions or
downgrades. This research addresses the self-healing function, a difficult
problem in modern cloud computing systems, and examines the value of autonomic
computing in the cloud from the perspective of its consequences. An approach to
malware pattern analysis in cloud environments is proposed that increases
quality-of-service levels. Malicious actors frequently use malware to attack
cloud services with the goal of stealing sensitive data or disrupting their
operation. This research therefore proposes a Dynamic Malware Pattern Analysis
with Auto Activation Module (DMPA-AAM) for accurate analysis and removal of
malware in the cloud environment, enhancing quality-of-service levels. When
contrasted with the traditional model, the proposed model performs better in
pattern analysis and malware removal. |
|
Keywords: |
Malware Detection, Cloud Environment, Pattern Analysis, Cloud Service Provider,
Malicious Actions, Auto Triggering, Quality of Service. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
QUANTIFYING THE EFFECT OF TEST-DRIVEN DEVELOPMENT ON SOFTWARE QUALITY |
|
Author: |
U.S.B.K.MAHALAXMI, R.PRASANTHI, T. KRISHNA MOHANA, RAGHUNATH MANDIPUDI, M MADHU
MANIKYA KUMAR, K SANGEET KUMAR |
|
Abstract: |
Test-Driven Development (TDD) has been widely studied, yet its benefits remain
unclear because existing research often struggles to balance internal and
ecological validity. This research paper introduces a hybrid empirical
methodology that integrates large-scale repository mining, a subject crossover
architecture, a Randomised Controlled Trial (RCT), and mixed-methods analysis.
The aim is to evaluate the impact of TDD on software quality while maintaining
both experimental integrity and practicality. Individual differences were
controlled using crossover architecture, in which both advanced students and
professional developers performed structured programming tasks (BSK, MRA and SSH
katas) under randomised TDD and non-TDD conditions. Simultaneously, extensive
GitHub datasets and industry repositories were mined to uncover varied and
long-term development trends. Metrics were collected using static analysis,
continuous integration pipelines, and surveys, and were subsequently examined
using regression models, hypothesis testing, and effect size calculations.
Additionally, the hybrid model achieved greater ecological validity and more
robust causal assertions than traditional single-method studies. RCT results
showed significant improvements in code coverage and maintainability for TDD
groups, with cross-sectional analysis confirming robustness across individuals.
Repository mining revealed consistent but modest quality improvements in
real-world projects. By uniting the controlled and naturalistic studies, this
approach offers a consistent framework for future software engineering studies.
These results provide organisations with immediate, evidence-based guidance for
TDD adoption strategies. |
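Among the statistics the methodology names, the effect size calculation is easy to make concrete; the sketch below computes Cohen's d for two independent groups (e.g., code-coverage scores under TDD vs. non-TDD conditions). The sample numbers are invented, not the study's data.

```python
# Illustrative Cohen's d for TDD vs. non-TDD code-coverage scores;
# the numbers are invented, not the study's data.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) \
        / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

tdd     = np.array([82, 78, 88, 91, 75, 84], dtype=float)  # % coverage
non_tdd = np.array([70, 65, 74, 72, 68, 71], dtype=float)
print(f"d = {cohens_d(tdd, non_tdd):.2f}")
```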
|
Keywords: |
Test-Driven Development, Software Quality, Code Coverage, Professional
Developers, Productivity. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
DEEP LEARNING-DRIVEN FORECASTING MODELS FOR IOT DATA IN CLOUD COMPUTING
ENVIRONMENTS: LEVERAGING TEMPORAL CONVOLUTIONAL NETWORKS |
|
Author: |
OTHMAN EMRAN ABOULQASSIM, FARHAT EMBARAK, JAYASHREE S, ABDULHAMID ELTHEEB |
|
Abstract: |
Data-driven techniques for machine tool wear detection and forecasting have
gained prominence in the past several years. This study investigates how well
Temporal Convolutional Networks (TCNs) perform in cloud computing contexts for
IoT data prediction. Because TCNs are good at capturing temporal patterns and
long-term relationships, they are useful for time-series forecasting problems.
By utilizing convolutional layers, TCNs differ from conventional Recurrent
Neural Networks in that they analyse data in parallel, enhancing adaptability
and decreasing training time. Dilated convolutions are included in TCNs to
further improve their capacity to identify trends over long periods without
adding computational complexity, which makes them appropriate for long-lasting
dependencies and recurrent trends in IoT data. The study shows that TCNs perform
better than existing models such as RNN, LSTM, GRU-LSTM, and CNN-LSTM in terms
of metrics such as R² and Mean Absolute Percentage Error (MAPE). The study was
conducted on a Python platform running Windows 11. TCNs attained an MAE of
98.7%, RMSE of 97.6%, MAPE of 98.0%, and R² of 97.7%, according to the results.
Although the error metrics are high, the high R² value suggests a strong model
fit. The study draws attention to several problems with TCNs, such as the
requirement for large labelled datasets, interpretability, data quality, and
computational demands. The study also highlights how the scalability and
flexibility offered by cloud platforms enable effective management of massive
IoT data streams and real-time analysis. The results indicate that TCNs may
greatly improve resource utilization and forecasting accuracy in IoT-cloud
environments, but more development and study are required to fully realize their
capabilities. |
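The dilated causal convolutions that give TCNs their long memory can be sketched as a small Keras stack; dilation rates double per layer so the receptive field grows exponentially. The filter counts and input shape below are assumptions, not the paper's configuration.

```python
# Illustrative TCN-style stack of dilated causal convolutions for
# one-step-ahead forecasting; sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(64, 1))])  # 64 time steps
for rate in (1, 2, 4, 8):          # receptive field grows exponentially
    model.add(layers.Conv1D(32, kernel_size=3, dilation_rate=rate,
                            padding="causal", activation="relu"))
model.add(layers.GlobalAveragePooling1D())
model.add(layers.Dense(1))         # next-value forecast
model.compile(optimizer="adam", loss="mae")
model.summary()
```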
|
Keywords: |
IoT Data Forecasting, Cloud Computing in IoT, Temporal Convolutional Networks
(TCNs), Deep Learning, Dilated Convolutions, Machine Tool Wear Detection |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
EMPOWERING OR OVERBURDENING? THE PARADOX OF AI CONTRIBUTION, USEFULNESS, AND
USABILITY IN SHAPING AUDIT QUALITY |
|
Author: |
DHIMAS RAYENDRA HIMAWAN, RINDANG WIDURI |
|
Abstract: |
This research aims to explore how perceived contribution, perceived usefulness,
and perceived ease of use influence perceived audit quality, while also
examining the moderating effect of auditor competence on these relationships. A
quantitative method was employed, with data collected through questionnaires
distributed to auditors who have experience using technology in the audit
process. The findings reveal that perceived contribution, usefulness, and ease
of use each have a positive and significant effect on perceived audit quality,
thereby supporting hypotheses H1, H2, and H3. Regarding the moderating role,
auditor competence significantly enhances the relationship between perceived
contribution and audit quality (supporting H4), but does not significantly
affect the links between perceived usefulness or ease of use and audit quality
(H5 and H6 not supported). These results highlight that auditor competence is
particularly critical in strengthening audit quality when technology is seen as
directly contributing to audit tasks, but it does not significantly reinforce
the perceived usefulness or ease of use of the technology. This study
underscores the importance of auditor competence in the successful
implementation of technology-driven audit systems. |
|
Keywords: |
Perceived Contribution, Perceived Usefulness, Perceived Ease of Use, Auditor
Competence, Audit Quality |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
DEEP LEARNING BASED AUTONOMOUS VEHICLES ENHANCING SAFETY AND NAVIGATION IN
COMPLEX ENVIRONMENTS |
|
Author: |
SUBHASHINI PALLIKONDA, JAYAMMA RODDA, AVULA CHITTY, DURGA DEVI BODDANI,
TATHIREDDY MALLESWARI, PRAVEENA MANDAPATI, DR. VALANTINA STEPHEN |
|
Abstract: |
A critical question in the paper concerns the enhancement of safety, efficiency,
and navigation for autonomous vehicles (AVs) in complex environments by
combining deep learning techniques, namely Convolutional Neural Networks (CNNs)
and Reinforcement Learning (RL). The proposed hybrid framework advances object
detection, sensor fusion, and path planning, using multi-modal sensors (cameras,
LiDAR, radar) to identify objects more precisely. The framework achieved 96.5
percent object detection accuracy, 29.9 percent less travel time with RL-based
path planning, 78.4 percent fewer collisions in normal weather, and a 55.4
percent drop in collisions in bad weather. The findings reaffirm the
complementarity of RL with multi-sensor integration, which is a stronger
solution than traditional ones and has led to a considerable enhancement in AV
performance in real-world situations. |
|
Keywords: |
Autonomous Vehicles, Deep Learning, Reinforcement Learning, Sensor Fusion,
Object Detection, Collision Avoidance |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
UNVEILING DISEASE ETIOLOGY: EXPLAINABLE CHROMOSOME ONTOLOGY-DRIVEN
CLASSIFICATION WITH DEEP NEURAL NETWORKS |
|
Author: |
SREERAM DEVIKA, DR. B. BHASKARA RAO |
|
Abstract: |
This paper proposes advanced techniques to classify chromosomes and detect
genetic syndromes. The dataset of chromosome images is initially preprocessed by
normalizing and diversifying it to improve quality and variability. The
chromosomes are classified by a hybrid deep learning model composed of a
Stack-RNN and the SupEx-enhanced AlexNet. The Stack-RNN, which handles
sequential data, captures temporal patterns, while complex features from images
or genomic data are extracted through the SupEx activation function, enhancing
the learning process. The two models’ outputs are then combined to obtain a
complete feature set for classification. After classification, chromosomal
variations are mapped into an ontology, with contextual information such as
symptoms and diagnostic criteria added to refine genetic syndrome detection. To
ensure precise chromosomal analysis and present possible genetic disorders in
detail, similarity measures between patient data and known syndromes are
computed to rank probable diagnoses. |
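The final ranking step the abstract describes, scoring patient feature vectors against known syndrome profiles, can be illustrated with cosine similarity; the feature vectors and syndrome names here are entirely hypothetical.

```python
# Hypothetical similarity-based ranking of candidate syndromes against
# a patient's fused feature vector.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

syndromes = {                       # invented reference profiles
    "syndrome_A": np.array([0.9, 0.1, 0.4, 0.7]),
    "syndrome_B": np.array([0.2, 0.8, 0.6, 0.1]),
    "syndrome_C": np.array([0.5, 0.5, 0.5, 0.5]),
}
patient = np.array([0.85, 0.15, 0.35, 0.65])

ranked = sorted(syndromes.items(),
                key=lambda kv: cosine(patient, kv[1]), reverse=True)
for name, vec in ranked:
    print(f"{name}: {cosine(patient, vec):.3f}")
```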
|
Keywords: |
Syndrome Classification, Chromosome Ontology, Feature Extraction, Stack-RNN |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
DWMECLUST: A HYBRID UNSUPERVISED FRAMEWORK FOR CONCEPT DRIFT DETECTION IN
HIGH-DIMENSIONAL MEDICAL DATA STREAMS |
|
Author: |
PRIYANKA RAJAMANI, DR. J. SAVITHA |
|
Abstract: |
Real-world biomedical data streams are dynamic, high-dimensional, and often
unlabeled, making concept drift detection extremely challenging. This study
introduces DWMEClust, a hybrid unsupervised framework that integrates Deep
Embedded Clustering (DEC) for robust representation learning with Dynamic
Weighted Majority (DWM) for adaptive ensemble decision-making. This paper
evaluates DWMEClust on three benchmark datasets—MIMIC-III/IV (ICU time-series),
UK Biobank (genomic/clinical data), and MedMNIST (medical images)—against
state-of-the-art unsupervised ensemble approaches (DWM, Learn++, and EDDC).
Results demonstrate that DWMEClust achieves Precision up to 0.89, Recall up to
0.86, F1-score of 0.87, and Accuracy of 0.91, while also reducing memory usage
by 40% and execution time by up to 45%. These findings confirm that DWMEClust
provides a scalable, noise-resistant, and real-time solution for drift detection
in high-dimensional biomedical streams, enabling more reliable patient
monitoring and decision support. |
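The Dynamic Weighted Majority half of DWMEClust follows a well-known update rule: each expert's weight is multiplied by beta when it errs, weights are normalized, and experts below a threshold are dropped. A minimal sketch under those standard assumptions (the paper's own beta and theta settings are not given here):

```python
# Minimal Dynamic Weighted Majority update; beta and theta follow the
# standard DWM formulation, not necessarily this paper's settings.
def dwm_update(weights: list[float], errors: list[bool],
               beta: float = 0.5, theta: float = 0.01) -> list[float]:
    # Penalize experts that erred on the latest example.
    w = [wi * beta if err else wi for wi, err in zip(weights, errors)]
    total = sum(w)
    w = [wi / total for wi in w]                     # normalize
    return [wi if wi >= theta else 0.0 for wi in w]  # prune weak experts

print(dwm_update([0.4, 0.4, 0.2], [False, True, False]))
```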
|
Keywords: |
Unsupervised Concept Drift Detection, High-Dimensional Data Streams, Dynamic
Weighted Majority, Deep Embedded Clustering, Biomedical Streaming Data, Ensemble
Learning, Drift Adaptation, MIMIC-III, UK Biobank and MedMNIST |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
STACKED LIGHT GBM AND DEEP LEARNING APPROACH FOR ACCURATE MONKEYPOX
DIAGNOSIS USING CLINICAL DATA |
|
Author: |
DR. KARTHIK ELANGOVAN, KAVURI SANTOSH KUMAR, DR. HARIHARASUDHAN, MARGARET FLORA
B, DR. KALIDOSS RAJENDRAN, DR. PARTHIBAN K |
|
Abstract: |
This research addresses the challenge of inaccurate early monkeypox diagnosis by
devising a new system that ensembles the Light GBM algorithm with a deep neural
network. This combination accelerates the prediction process while improving
accuracy. The system is trained on real-time patient datasets that include
clinical information—such as age, symptoms, lab results, and medical history—as
well as images of the affected skin regions. We carefully cleaned this data,
converting categories into numbers and scaling values to ensure everything was
consistent and fair for the models. We chose Light GBM as the first base model
because it is fast and effective at highlighting which symptoms and factors are
the most important predictors. The DNN processes complex patient data in a
non-linear way where other models often
struggle. To leverage the unique advantages of both models, we integrated them
using a stacking ensemble method. In this approach, the predictions from the
Light GBM and DNN models serve as inputs to a meta-learner, which in our case
was a Logistic Regression model, to generate a final, refined prediction. We put
our unified system to the test against all the current leading models. We didn't
just look at accuracy; we carefully measured precision, recall, and F1-score,
with a special focus on minimizing false positives. Misdiagnosis is more than a
clinical error—it can deeply frighten patients and drain precious medical
resources. Solving this problem was our most important goal. The outcome was
undeniable: our unified model proved to be more effective than any single
method, demonstrating a superior ability to catch the disease early. This
strategy can speed up decision-making and lead to better care for every patient.
Our results significantly outperforms the current leading models like AdaBoost,
XGBoost, and standard Light GBM. It consistently delivered better accuracy
across every test we ran. It excelled at spotting Monkeypox in its early stages
while almost never raising a false alarm. We believe this makes it a truly
practical tool, giving doctors a reliable way to make faster, more confident
diagnoses and start treatment sooner. |
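The stacking design described above maps directly onto scikit-learn's StackingClassifier; the sketch below uses LightGBM and an MLP as base learners with a logistic-regression meta-learner. The hyperparameters and the synthetic data are placeholders for the paper's clinical dataset.

```python
# Illustrative stacking ensemble mirroring the abstract's design:
# LightGBM + neural-net base learners, logistic-regression meta-learner.
# Hyperparameters are placeholders; X, y stand in for the clinical data.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier(n_estimators=200)),
        ("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold predictions feed the meta-learner
)
print(cross_val_score(stack, X, y, scoring="f1").mean())
```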
|
Keywords: |
Monkeypox, Clinical Features, Light GBM, Deep Neural Network, Machine learning,
Ensemble Learning, Stacking Classifier |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
ASSESSING SUICIDE RISK THROUGH SOCIAL MEDIA POSTS USING TRANSFORMER-BASED MODELS |
|
Author: |
ALI ALSHAHRANI, XIUWEN LIU, FALLON RINGER, RAED ALHARBI, KHALID ALHARTHI, SAAD
ALSHAHRANI, AHMED ABDELGHANY |
|
Abstract: |
Suicide remains a serious public health concern, with its prevalence exacerbated
by the recent global pandemic. Social media platforms, where individuals often
share their thoughts in an unfiltered and natural manner, have emerged as
valuable resources for assessing suicide risk. However, analyzing such content
requires models capable of capturing long-range dependencies and handling
complex linguistic structures. This study presents a novel approach to suicide
risk assessment using textual data from two Reddit communities focused on mental
health: SuicideWatch and Depression. Utilizing the transformer-based BERT
architecture and its self-attention mechanisms, our method effectively captures
contextual nuances critical for identifying suicide risk. Evaluated on a
high-quality dataset of Reddit posts, the proposed model demonstrates a
significant performance advantage over existing approaches, achieving an AUC of
0.92 compared to 0.61 from a CNN baseline. Additionally, our attention-based
analysis of pessimistic and optimistic expressions reveals that the model
effectively identifies key features closely associated with indicators of
suicide risk. |
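A minimal Hugging Face sketch of the classification setup: BERT with a two-class head scoring a post for risk. The checkpoint, labels, and example text are illustrative; the paper fine-tunes on Reddit data that is not reproduced here, so the untrained head's outputs are meaningless until fine-tuning.

```python
# Illustrative BERT risk-classification forward pass; the classification
# head is untrained here, so outputs are meaningless until fine-tuned.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # at-risk vs. not-at-risk

post = "example post text"  # placeholder; real data comes from Reddit
inputs = tokenizer(post, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # fine-tune on labeled posts before trusting these scores
```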
|
Keywords: |
Suicide Risk Assessment; Social Media Analysis; Transformer Architecture; BERT |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING AUTOMATED KIDNEY SEGMENTATION IN CT IMAGES |
|
Author: |
SURYA PRASADA RAO BORRA, DR. SATTI SUDHA MOHAN REDDY, B.V. VASANTHA RAO, PRAVEEN
TUMULURU, CH. V RAVI SANKAR, SRAVANTHI KANTAMANENI, BHAVITHA BORRA |
|
Abstract: |
Renal disease diagnosis, treatment planning, and monitoring all depend on
accurate and effective kidney segmentation from computed tomography (CT) images.
Robust automated segmentation techniques are necessary since traditional manual
kidney delineation is labour-intensive, time-consuming, and susceptible to
inter-observer variability. Deep learning and sophisticated image processing
techniques have become effective tools for medical image analysis in recent
years; yet, segmentation accuracy is still limited by issues including uneven
kidney shapes, low contrast between nearby organs, and the presence of diseased
abnormalities. By utilizing hybrid deep learning frameworks in conjunction with
pre-processing and post processing techniques, this study aims to improve
automatic kidney segmentation in CT images. In order to enhance image quality
and draw attention to kidney borders, noise reduction and contrast enhancement
techniques are first used. The baseline model is a U-Net architecture based on a
convolutional neural network (CNN), with changes including residual connections
and attention mechanisms to capture both local and global contextual data. In
order to lower prediction variance and increase resilience across a variety of
datasets, ensemble techniques that integrate different segmentation models are
also investigated. To further improve kidney outlines and get rid of false
positives, post-processing techniques including morphological filtering and
conditional random fields are used. When tested on publicly accessible
kidney CT datasets, the suggested methodology shows notable gains in
segmentation accuracy over traditional deep learning models. To ensure a
thorough assessment of both border accuracy and volumetric overlap, performance
is evaluated using the Dice similarity coefficient, Jaccard index, precision,
recall, and Hausdorff distance. According to experimental results, the improved
framework offers more dependable segmentation in difficult circumstances, such
as those involving tumors, cysts, or low contrast. By tackling the shortcomings
of existing segmentation techniques, this work advances computer-aided diagnosis
systems. Potential uses for the enhanced automated framework include surgical
planning, tumour monitoring, renal disease identification, and therapy response
evaluation. Real-time deployment in clinical settings, domain adaptation
strategies for cross-dataset generalization, and the integration of multi-modal
imaging data could all be future developments of this research. In the end, the
suggested improvement of automated kidney segmentation can help physicians and
radiologists provide more precise and individualized treatment. |
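Of the evaluation metrics listed above, the Dice similarity coefficient is the workhorse for segmentation overlap; a small sketch of its computation on binary masks follows, with synthetic masks standing in for real CT annotations.

```python
# Dice similarity coefficient on binary masks; masks here are synthetic.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

truth = np.zeros((64, 64), dtype=np.uint8)
truth[16:48, 16:48] = 1                   # "ground-truth" kidney region
pred = np.zeros_like(truth)
pred[20:52, 16:48] = 1                    # prediction shifted by 4 px
print(f"Dice = {dice(pred, truth):.3f}")
```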
|
Keywords: |
CT Images, Automated, Kidney, Segmentation, U-Net |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
ASSESSMENT OF THE RISKS OF USING SMART CONTRACTS IN CRYPTOCURRENCY TRANSACTIONS
ON DOMESTIC CAPITAL MARKETS |
|
Author: |
OLEKSII MOSTOVENKO, VOLODYMYR TSAP, VOLODYMYR BORKOVYCH, NATALIIA RUDYK,
VIACHESLAV NYKONENKO |
|
Abstract: |
The relevance of the study is determined by the critical dependence of
cryptocurrency market stability on the technical reliability of smart contracts
and the increasing risks of financial losses due to their defects. Aim: The aim of the
study is to formalize the ranking of technical vulnerabilities of smart
contracts by their impact on the economic stability of domestic capital markets
through systematization, simulation modelling, and quantitative assessment of
financial indicators. Methods: The research used the following techniques:
vulnerability typing, simulation modelling, financial analytics, and comparative
analysis. Obtained results: The study confirmed the critical impact of smart
contract technical vulnerabilities on the financial stability of the markets,
with peak VaR of up to -68.5% and liquidity deterioration of over -80% for
reentrancy attack, delegatecall injection, and oracle manipulation. The risks
were reduced by more than half after implementing multi-level optimisations,
demonstrating the effectiveness of comprehensive mitigation to stabilise key
financial indicators. Academic novelty of the study: The academic novelty of the
study is the formalized classification of technical vulnerabilities of smart
contracts and the first empirical assessment of their impact on the economic
stability of capital markets based on comprehensive financial and economic
metrics, which extends the theory of DeFi structural risks. Prospects for future
research: Prospects for further research include the development of a pilot
project for technical optimization of smart contracts with a focus on increasing
resilience to logical and synchronization defects. |
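The Value-at-Risk figures quoted above can be made concrete with a standard historical-simulation VaR; the return series below is simulated, and the calculation is the generic method rather than the authors' simulation model.

```python
# Historical-simulation VaR on simulated daily returns; generic method,
# not the paper's simulation model.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0, scale=0.05, size=1000)  # synthetic returns

def historical_var(returns: np.ndarray, level: float = 0.99) -> float:
    # Loss threshold exceeded on only (1 - level) of historical days.
    return -np.quantile(returns, 1.0 - level)

print(f"99% one-day VaR: {historical_var(returns):.1%} of position value")
```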
|
Keywords: |
Economic Growth, Reentrancy, Delegatecall Injection, Oracle Manipulation,
Integer Overflow, Front-Running, DoS Attacks |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
DEEP LEARNING-BASED SEGMENTATION AND CLASSIFICATION OF FUNDUS IMAGES FOR
DIABETIC RETINOPATHY DIAGNOSIS |
|
Author: |
K SHIVA KUMAR, B SANDYA |
|
Abstract: |
One of the most common causes of vision loss in people of working age is
diabetic retinopathy (DR), a serious side effect of diabetes mellitus.
Preventing irreparable blindness requires early DR stage detection and
categorization. Conventional diagnostic methods depend on ophthalmologists'
manual screening, which is laborious, arbitrary, and error-prone. Automated deep
learning-based techniques have been created for effective and precise DR
detection in order to overcome these constraints. The internal structure of the
eye is captured by fundus imaging, which is essential for DR detection.
Discriminative feature extraction from these intricate images is still quite
difficult, though. Known for their exceptional image processing skills,
Convolutional Neural Networks (CNNs) have demonstrated encouraging outcomes in
DR classification. However, using CNNs directly on unprocessed fundus pictures
could miss important areas and minute lesion patterns. This study incorporates
deep learning-based segmentation and classification algorithms to improve DR
detection performance and ensure accurate localization of diseased
characteristics, hence increasing diagnostic accuracy. Additionally,
segmentation methods like Attention U-Net are used to precisely identify
impacted areas, improving feature extraction for classification. By lowering
false positives and increasing sensitivity, the combination of segmentation and
classification guarantees a more reliable and understandable diagnosis. This
study shows how well deep learning-based methods work to automate DR detection,
which helps with early diagnosis and improves patient outcomes. Unlike previous
works, this study uniquely integrates Attention U-Net based segmentation with
multiple pre-trained feature extractors (ResNet50, VGG16, EfficientNetB0) and
diverse classifiers to assess diabetic retinopathy detection. The systematic
comparison between segmented and non-segmented images provides new insights into
the impact of segmentation on classification accuracy, which has not been
comprehensively addressed in prior literature. |
|
Keywords: |
Deep Learning, Convolutional Neural Networks, Attention U-Net, Image
Classification, Automated Diagnosis, Feature Extraction, Lesion Detection |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
CAUSE OF RESIDUAL NODE IN WSN AND NETWORK OPTIMIZATION |
|
Author: |
REJINA PARVIN, VANITHA, VASANTHANAYAKI, AJAY ROY, RAJKUMAR |
|
Abstract: |
Wireless Sensor Networks (WSNs) owe their popularity to their favourable
characteristics: sensor nodes are flexible, easy to design, user-friendly, and
economical, which makes WSNs suitable for a wide range of applications,
including healthcare, home automation, agricultural monitoring, border
surveillance, and animal habitat monitoring. Despite these merits, the energy
consumption of sensor nodes and the enhancement of overall network lifetime
remain major challenges. Clustering mechanisms group the nodes, and the node
reachable by almost all nodes in a given group is elected as the Cluster Head
(CH). During cluster formation, a few nodes may fail to join any cluster; these
are termed residual nodes. Various clustering mechanisms for WSNs are surveyed
in the related work. This paper focuses on analyzing network lifespan and on
effectively utilizing such residual nodes to enhance the overall network
lifetime. The proposed mechanism, termed the Network Lifetime Enhancing
Mechanism (NEM), is implemented using network simulator software. |
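To make the residual-node notion concrete, the toy sketch below forms clusters
greedily over random node positions and reports the nodes left outside every
cluster; the radio range, node count, and election rule are invented for
illustration and do not reproduce NEM itself:

    # Toy residual-node identification during cluster formation (illustrative).
    import math
    import random

    random.seed(1)
    nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
    CH_RANGE = 20.0                                  # assumed CH radio range

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    uncovered, heads = set(range(len(nodes))), []
    while uncovered:
        # Elect as CH the node that reaches the most still-uncovered nodes.
        best = max(uncovered, key=lambda i: sum(
            dist(nodes[i], nodes[j]) <= CH_RANGE for j in uncovered))
        members = {j for j in uncovered if dist(nodes[best], nodes[j]) <= CH_RANGE}
        if len(members) <= 1 and heads:              # lone nodes join no cluster
            break
        heads.append(best)
        uncovered -= members

    residual = sorted(uncovered)                     # nodes outside every cluster
    print("cluster heads:", heads, "residual nodes:", residual)

NEM's contribution, per the abstract, is to put such residual nodes to work
instead of leaving them idle.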
|
Keywords: |
Wireless Sensor Networks, Clustering, Residual Node, Optimization |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
INNOVATIVE STRATEGIES FOR DATA SECURITY IN THE CONTEMPORARY IT LANDSCAPE |
|
Author: |
Z. ENNEFFAH, A. AL KARKOURI, Y. FAKHRI, S. BOULAKNADEL, S. KHOULJI, S. BOUREKKADI |
|
Abstract: |
In today's rapidly evolving digital landscape, ensuring robust data security
measures is paramount for organizations to safeguard sensitive information. This
paper presents innovative strategies derived from a comprehensive analysis
utilizing R, a statistical computing tool, to fortify data security in the
contemporary IT environment. Leveraging advanced data analytics techniques,
including machine learning algorithms and statistical modeling, our research
identifies key vulnerabilities and proposes proactive measures to mitigate risks
effectively. By integrating real-time monitoring systems and adaptive security
frameworks, our approach offers a dynamic defense mechanism against emerging
threats. Furthermore, through the utilization of R's versatility in data
visualization, we provide insights into complex security patterns, enabling
organizations to make informed decisions for enhancing their cybersecurity
posture. Our findings underscore the importance of adopting data-driven
methodologies to address the evolving challenges of data security in the modern
IT landscape. |
|
Keywords: |
Data Security, Innovative Strategies, IT Landscape, Artificial Intelligence,
Advanced Encryption, Regulatory Compliance |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
ADDRESSING NOISE, VARIANCE SHIFTS, AND TRAINING INSTABILITY IN TIME SERIES
FORECASTING: A HYBRID WAVELET–VARIANCE EMBEDDING LSTM APPROACH |
|
Author: |
SIRISHA KOLLATI, VUDI SRILAKSHMI, KIRAN VARMA GADIRAJU, DONTAMSETTI SATYA PRASAD,
DIVYASRI NARPINA, KANUMURU DURGA BHAVANI |
|
Abstract: |
Time series forecasting plays a critical role in various domains. Traditional
statistical models, such as ARIMA and VAR, often fail to capture the complex
temporal dependencies and non-stationary characteristics inherent in real-world
time series data. Deep learning models, particularly Long Short-Term Memory
(LSTM) networks, have demonstrated superior performance by leveraging their
ability to learn long-term dependencies. However, standard LSTM architectures
struggle with high-frequency noise, variance shifts, and non-stationary
behaviour, limiting their forecasting accuracy. To overcome these challenges,
we propose an advanced hybrid model: the Principal Component Analysis-based
Wavelet Transform-Temporal Variance Embedded LSTM with Layer Normalization
(TWL-Net). The model integrates the Wavelet Transform (WT) to decompose the
time series into multi-frequency components, Temporal Variance Embedding (TVE)
to capture dynamic variance fluctuations, and Layer Normalization (LN) to
stabilize training and improve generalization. The wavelet decomposition
isolates significant patterns while filtering out high-frequency noise, TVE
enhances variance adaptability, and LN mitigates covariate shift, yielding a
more robust forecasting framework. Extensive experiments on real-world datasets
demonstrate the superiority of TWL-Net over traditional and state-of-the-art
deep learning models. The proposed approach improves model stability and
interpretability while substantially lowering prediction errors. Its efficacy
in high-precision time series forecasting is validated using metrics such as
Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute
Percentage Error (MAPE). |
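The exact TWL-Net architecture is not reproduced here, but the core combination
the abstract describes, wavelet decomposition feeding an LSTM whose hidden
states are layer-normalized, can be sketched as follows; the wavelet choice,
depth, and layer sizes are illustrative:

    # Hedged sketch: per-band wavelet reconstructions as input channels to an
    # LSTM with LayerNorm (illustrative stand-in for the TWL-Net pipeline).
    import numpy as np
    import pywt
    import torch
    import torch.nn as nn

    def wavelet_channels(series, wavelet="db4", level=2):
        """Decompose a 1-D series; reconstruct each band as one channel."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        chans = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            chans.append(pywt.waverec(kept, wavelet)[: len(series)])
        return np.stack(chans, axis=-1)              # shape (T, level + 1)

    class NormLSTMForecaster(nn.Module):
        def __init__(self, n_chan, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_chan, hidden, batch_first=True)
            self.norm = nn.LayerNorm(hidden)         # stabilizes training
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                        # x: (batch, T, n_chan)
            out, _ = self.lstm(x)
            return self.head(self.norm(out[:, -1]))  # one-step-ahead forecast

    x = torch.tensor(wavelet_channels(np.sin(np.linspace(0, 20, 256))),
                     dtype=torch.float32).unsqueeze(0)
    print(NormLSTMForecaster(x.shape[-1])(x).shape)  # torch.Size([1, 1])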
|
Keywords: |
Principal Component Analysis, Wavelet Transform, Temporal Variance Embedding,
Time series forecasting, LSTM with Layer Normalization. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
MALWARE CLASSIFICATION MODEL BASED ON THE AUGMENTED TRANSFER LEARNING USING
RESNET 50 CLASSIFIER |
|
Author: |
E. VANI, K. AKILA |
|
Abstract: |
Malicious software has the potential to cause harm to computer networks and
systems. The field of computer security has experienced a significant increase
in the number of malware attacks, posing a threat to the integrity and safety of
computer systems. The level of complexity exhibited by malware variants has
experienced a proportional increase in tandem with the overall magnitude and
extent of malware. The identification of malware is a critical task in the field
of cybersecurity. Signature-based malware detection is a commonly employed
method by multiple antivirus engines. Domain specialists typically develop
signatures for malware detection by analyzing malware occurrences, focusing on
specific byte-code patterns and text sequences. Once the signatures have been
established, pattern matching against them can identify malware. Nevertheless,
since signature matching can only identify malware that does not vary
considerably, signature-based solutions cannot keep up with the rapid evolution
of these malware families. Deep learning has grown rapidly and achieved
significant breakthroughs in recent years. Traditional learning approaches such
as CNNs, RNNs, SVMs, autoencoders, and TC Droid show satisfactory performance,
but a major drawback is that they may not be the best fit across different
datasets. In this study, pre-trained ResNet-50 architectures are improved using
augmentation and transfer learning. This transfer-learning approach allows
ResNet-50 to be fine-tuned to increase performance and shorten training time.
It is accomplished by resizing images to 128x128, 224x224, and finally 229x229
pixels, followed by network fine-tuning. On the Malimg, MaleVis, and Microsoft
BIG2015 datasets, we obtained a 99.23% accuracy rate with augmentation and
transfer learning. The majority of the testing was done in a Python-based
environment. Compared with the other traditional approaches tested in this
study, the suggested strategy achieves superior accuracy in identifying
malware. |
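A minimal transfer-learning sketch in the spirit of the approach; the
augmentation pipeline, the 25-family head, and the freezing policy are
placeholders, not the authors' exact settings:

    # Hedged sketch: augmentation plus ResNet-50 fine-tuning for malware images.
    import torch.nn as nn
    from torchvision import models, transforms

    augment = transforms.Compose([                   # simple augmentation pipeline
        transforms.Resize((224, 224)),               # one of the sizes studied
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(10),
        transforms.ToTensor(),
    ])

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in model.parameters():                     # freeze ImageNet features
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 25)   # e.g. 25 malware families
    # Fine-tune model.fc (and optionally the later blocks) on the image dataset.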
|
Keywords: |
Malware, Cybersecurity, Augmented Transfer Learning, ResNet-50 |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
DESIGN AND ANALYSIS OF NOVEL QUANTUM ALGORITHMS FOR OPTIMIZATION IN LOGISTICS,
NETWORK ROUTING AND SCHEDULING |
|
Author: |
K. YASUDHA, M. SRIVENKATESH |
|
Abstract: |
Quantum computing has rapidly evolved as a groundbreaking computational model
capable of addressing complex problems that are currently intractable for
classical systems. Among its promising areas of application are combinatorial
optimization problems, which frequently arise in domains such as logistics,
network routing, and scheduling—areas characterized by their high
dimensionality, dynamic constraints, and NP-hard nature. This research
investigates the design and analysis of novel quantum algorithms, with a primary
focus on Grover’s search variants and the Quantum Approximate Optimization
Algorithm (QAOA), to tackle such real-world challenges. Grover’s algorithm,
known for providing a quadratic speedup in unstructured search, is extended and
adapted for structured problem spaces commonly encountered in logistics and
scheduling. Meanwhile, QAOA, a variational hybrid quantum-classical algorithm, is
leveraged to solve complex optimization tasks by encoding them into cost
Hamiltonians, allowing iterative refinement through classical feedback loops.
The proposed work involves the theoretical modeling of optimization problems as
quantum circuits, transformation into Quadratic Unconstrained Binary Optimization
(QUBO) form, simulation on near-term quantum hardware, and comparative
performance analysis with classical heuristics such as Genetic Algorithms and
Simulated Annealing. Furthermore, the study evaluates the practical implications
of implementing these algorithms on Noisy Intermediate-Scale Quantum (NISQ)
devices, exploring error mitigation strategies and parameter tuning challenges.
By examining case studies including vehicle routing, network path selection, and
multi-resource job scheduling, this research highlights the potential of quantum
algorithms to deliver meaningful performance benefits in select scenarios, while
also acknowledging current hardware limitations. The outcomes aim to contribute
foundational insights toward the scalable application of quantum computing in
critical real-world operations. |
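To illustrate the QUBO step, the toy below encodes a 4-node MaxCut instance (a
stand-in for a routing-style problem) as a QUBO matrix and solves it by brute
force; this shows the objective a QAOA circuit would optimize, with the graph
and sizes invented for illustration:

    # Toy QUBO encoding of MaxCut, solved exhaustively (illustrative only).
    import itertools
    import numpy as np

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]     # hypothetical graph
    n = 4
    Q = np.zeros((n, n))
    for i, j in edges:                               # cut(i,j) = x_i + x_j - 2*x_i*x_j
        Q[i, i] -= 1                                 # minimize x^T Q x = -cut size
        Q[j, j] -= 1
        Q[i, j] += 2

    assignments = (np.array(b) for b in itertools.product([0, 1], repeat=n))
    best = min(((x @ Q @ x, x) for x in assignments), key=lambda t: t[0])
    print("cut value:", -int(best[0]), "assignment:", best[1])

On NISQ hardware, the same Q would be mapped to an Ising cost Hamiltonian and
the assignment searched variationally rather than exhaustively.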
|
Keywords: |
Quantum Computing, Combinatorial Optimization, Grover’s Algorithm, Quantum
Approximate Optimization Algorithm (QAOA), Hybrid Quantum-Classical Algorithms,
NISQ Devices |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE: CAN IT OPTIMIZE THE MARKETING OF
ISLAMIC FINANCIAL SERVICES? |
|
Author: |
HASSAN ALI AL-ABABNEH, IBRAHIM RADWAN ALNSOUR |
|
Abstract: |
The modern development of Islamic financial services is accompanied by both
accelerated digitalization and the need for strict compliance with Shariah
standards. However, the existing literature reveals a significant gap: studies
do not sufficiently cover the specifics of applying artificial intelligence
(AI) tools in the marketing strategies of the Islamic financial sector.
Traditional marketing approaches do not provide highly accurate segmentation or
adaptation of services to individual customer needs, which
reduces their effectiveness in the face of increasing competition. The purpose
of this study is to analyze the potential of using AI to optimize marketing in
Islamic financial organizations, including personalization of customer
experience, forecasting consumer behavior and improving the effectiveness of
communication strategies. The methodological framework is based on comparative
analysis, machine learning methods and predictive analytics. The empirical basis
is data on the behavior of clients of Islamic banks and investment funds, as
well as open sources on global trends in digital marketing. The results
demonstrate that implementing AI can reduce customer acquisition costs by
25-30%, raise the accuracy of personalized offers to 80%, and improve
interaction with the audience through intelligent chatbots and automated
recommendations. The scientific novelty of the study lies in the development of
a model for integrating AI into the marketing strategies of Islamic financial
institutions, taking into account both the requirements of Sharia and the
challenges of global digitalization. The practical significance of the work is
that the proposed model contributes to increased competitiveness, expansion of
the customer base and the formation of sustainable trust in the brand. |
|
Keywords: |
Artificial Intelligence, Islamic Finance, Marketing |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
EVALUATING SUPPORT VECTOR MACHINE CLASSIFICATION ACCURACY ON MICRO-DOPPLER
SIGNALS OF DRONES AND BIRDS |
|
Author: |
GYOO SOO CHAE |
|
Abstract: |
The popular use of small unmanned aerial vehicles (UAVs) brings considerable
technological advantages but also raises serious security challenges,
particularly in situations where distinguishing drones from birds is critical.
This study introduces a systematic approach for drone–bird discrimination
through the analysis of simulated radar micro-Doppler signatures using a Support
Vector Machine (SVM) classifier. Synthetic radar signals were generated by
modeling the characteristic motion patterns of drones—namely rotor blade
rotation—and the wing flapping dynamics of birds, thereby capturing their
distinct micro-Doppler features. From these signals, spectrogram-based
time–frequency representations were extracted and condensed into average
spectral profiles, which served as inputs for classification. The SVM model was
trained on spectrogram-derived features while incorporating variability in both
drone and bird motion parameters to improve robustness. Experimental results
indicate that, with training data based on drone rotor speeds of 3,000 rpm
(+16.7% noise) and body speeds of 10 m/s (+40% noise), the classifier maintained
near-perfect accuracy even when tested on cases with rotor speeds as low as
1,200 rpm and body speeds reduced to 6 m/s. These findings highlight the strong
generalization ability of the proposed method under realistic variations in
flight dynamics. Moreover, the SVM-based technique not only achieves highly
reliable airborne object discrimination but also offers computational efficiency
suitable for real-time surveillance applications, with strong potential for
adaptation to real-world radar datasets. |
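A hedged sketch of the pipeline on synthetic signals: frequency-modulated
returns crudely mimic blade rotation versus wing flapping, time-averaged
spectrogram profiles serve as features, and an SVM separates the two classes;
all rates, noise levels, and parameters are illustrative, not the paper's:

    # Synthetic micro-Doppler-like signals -> spectral profiles -> SVM.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.svm import SVC

    fs = 10_000
    t = np.arange(0, 0.5, 1 / fs)

    def synth(mod_hz, rng):
        """Sinusoidally frequency-modulated return plus noise."""
        phase = 2 * np.pi * (1_000 * t + 50 * np.sin(2 * np.pi * mod_hz * t))
        return np.cos(phase) + 0.2 * rng.standard_normal(t.size)

    def profile(sig):
        f, _, Sxx = spectrogram(sig, fs=fs, nperseg=256)
        return Sxx.mean(axis=1)                      # time-averaged spectral profile

    rng = np.random.default_rng(0)
    X = [profile(synth(rng.uniform(40, 60), rng)) for _ in range(40)]   # "rotor"
    X += [profile(synth(rng.uniform(3, 8), rng)) for _ in range(40)]    # "wing flap"
    y = [1] * 40 + [0] * 40
    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))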
|
Keywords: |
Unmanned Aerial Vehicles (UAVs), Drone, Classification, Support Vector Machine
(SVM), Micro-Doppler, Spectrogram, Accuracy |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
ADVANCED MACHINE LEARNING FRAMEWORKS FOR AUTOMATED IDENTIFICATION AND ANALYSIS
OF CRITICAL PARAMETERS IN VLSI CIRCUIT PERFORMANCE AND RELIABILITY |
|
Author: |
SINDHU NALLA, G. NAGARAJAN |
|
Abstract: |
VLSI circuits play a vital role in the functionality of electronic devices;
however, the characteristics of circuits need to be analyzed and optimized using
developed methods to ensure their performance and reliability. This work
outlines an integrated framework that can autonomously recognize and model the
dominant parameters influencing VLSI circuits across chips, design, and logic
level. By utilizing supervised and unsupervised learning methods, this framework
enables analyzing large-scale datasets to extract fundamental features and
assess their relevance to circuit behavior. First, it collects a diverse dataset
comprising simulation, experimental, and design data with varying process,
voltage, temperature, and design parameters. Supervised learning models
are used to infer predictions of circuit performance based on design parameters
while unsupervised learning methods identify latent significance patterns and
irregularities that impact circuit reliability. The framework also applies feature
selection and dimensionality reduction techniques to improve model accuracy and
interpretability. Using large-scale case studies from practical VLSI designs, we
show that the framework can identify key parameters that, in turn, impact delay,
power, signal integrity, and fault rate. The results show that the developed ML
framework predicts accurately which design parameters greatly influence the
circuit performance, so that circuit designers can target these as design
parameters to be optimized. The work also investigates the integration of ML
models into design automation workflows, with the goal of informing predictive
analysis and decision-making throughout the design process. The results
highlight the ability of machine learning to transform VLSI designs, providing
data-driven approaches to improve circuit performance and reliability. Future
work can extend the framework to emerging technologies, such as post-silicon
validation and advanced packaging solutions, ensuring that it accommodates
future generations of electronic systems. |
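As a concrete, if simplified, instance of the supervised branch, the sketch
below learns a hypothetical delay model from synthetic process, voltage,
temperature, and design knobs and ranks parameter importance; the
data-generating relation is invented for illustration:

    # Hedged sketch: random-forest ranking of circuit-parameter importance.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    n = 2_000
    X = np.column_stack([
        rng.uniform(0.7, 1.1, n),                    # supply voltage (V)
        rng.uniform(-40, 125, n),                    # temperature (C)
        rng.uniform(10, 60, n),                      # channel-length proxy (nm)
        rng.uniform(1, 8, n),                        # fan-out
    ])
    # Hypothetical noisy delay: slower at low Vdd, long channels, high fan-out.
    delay = 1 / X[:, 0] + 0.002 * X[:, 1] + 0.01 * X[:, 2] + 0.05 * X[:, 3]
    delay += 0.02 * rng.standard_normal(n)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, delay)
    for name, imp in zip(["Vdd", "Temp", "Lch", "Fanout"],
                         model.feature_importances_):
        print(f"{name:>7}: {imp:.3f}")               # data-driven parameter ranking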
|
Keywords: |
Machine Learning, VLSI, Circuit Performance, Reliability, Critical Parameters,
Design Automation |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
ANALYZING THE INFLUENCE OF SOCIAL MEDIA INFLUENCERS ON CONSUMER PURCHASE
DECISION |
|
Author: |
NATHANIEL RIFAN NG, RUDY TJAHYADI |
|
Abstract: |
The emergence of social media has profoundly altered the dynamics of consumer
habits and marketing approaches. A notable trend is the engagement of social
media influencers—those with a significant online presence and a dedicated
following—as a way to influence consumer buying choices. However, there remains
a lack of understanding regarding how social media influencers affect consumer
purchase decisions through factors such as system quality, information quality,
and interaction on digital platforms. This examination seeks to explore how
social media personalities impact the buying habits of consumers. Using
quantitative techniques, data was gathered by distributing Likert-scale surveys
to active social media users. The findings indicate that elements like
reliability, appeal, knowledge, and regular interaction with followers
significantly impact buying choices. This study reveals important insights for
marketers aiming to refine their influencer campaigns and boost brand
interaction on social media platforms. |
|
Keywords: |
Social Commerce, Influencer Marketing, Technology Acceptance Model, System
Quality, Purchase Decision. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
BREAST CANCER DETECTION USING CNN AND RESNET |
|
Author: |
JOELRY KEEGAN TARIGAN, SHAWN TRISTAN VIERY, AMALIA ZAHRA |
|
Abstract: |
Breast cancer is one of the most common cancers worldwide and remains a major
cause of mortality among women. Accurate diagnosis from histopathological images
is often challenging due to subtle visual differences between benign and
malignant tissues. Manual diagnosis is time-consuming and prone to subjectivity,
which highlights the need for automated approaches. This study compares a
standard Convolutional Neural Network (CNN) with a ResNet-34 model using
transfer learning to improve classification performance. We hypothesize that
ResNet, with its deeper architecture and pre-trained features, will achieve
higher accuracy than the CNN. Experiments were conducted on the BreakHis dataset
consisting of 7,909 images, where data augmentation and balancing techniques
were applied to address class imbalance. The results confirmed our hypothesis,
with ResNet achieving 95.49% accuracy compared to CNN’s 81.58%. These findings
indicate that transfer learning offers a more reliable solution for breast
cancer image classification. This work contributes by showing the effectiveness
of ResNet for histopathology images, demonstrating how preprocessing strategies
improve classification, and suggesting a potential pathway for supporting early
diagnosis in clinical practice. |
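A minimal fine-tuning sketch mirroring the ResNet-34 transfer-learning side of
the comparison; BreakHis loading, augmentation, and training are assumed to be
handled elsewhere, and the freezing policy is illustrative:

    # Hedged sketch: ResNet-34 transfer learning for a binary histopathology task.
    import torch.nn as nn
    from torchvision import models

    resnet = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    resnet.fc = nn.Linear(resnet.fc.in_features, 2)  # benign vs malignant
    # Freeze early layers; train only the last block and the new head.
    for name, p in resnet.named_parameters():
        p.requires_grad = name.startswith(("layer4", "fc"))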
|
Keywords: |
Breast Cancer, Transfer Learning, Histopathological Images, Classification |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
CURSIVE HANDWRITING SEGMENTATION FROM OVERLAPPING STROKES TO ISOLATED CHARACTERS |
|
Author: |
M. SARAVANAKUMAR, Dr. S. KANNAN |
|
Abstract: |
Cursive handwriting recognition presents significant challenges due to the
inherent continuity and overlap of character strokes, which create ambiguities
in character boundaries. Effective segmentation of cursive words into isolated
characters is therefore a critical preprocessing step in offline handwriting
recognition systems. This paper proposes a robust hybrid segmentation approach
that combines vertical projection profile analysis with skeleton-based
structural analysis to accurately segment characters from cursive handwriting,
even in the presence of overlapping strokes and connected loops. The method
begins with image preprocessing, including grayscale conversion, binarization,
denoising, and thinning. Vertical projection profiles are then used to identify
low-ink valleys between characters, while skeleton analysis detects potential
segmentation points in complex or touching regions, even where strokes are
tightly coupled or overlapped. This hybrid strategy reduces both
over-segmentation and under-segmentation. For recognition, a graph-based
approach models each character's skeleton as nodes and edges, capturing
topological relationships and enabling robust recognition despite stylistic
variations; the isolated characters can also be recognized using machine
learning or template matching techniques. Experimental results on real-world
cursive samples demonstrate high segmentation accuracy and effective character
recognition. The proposed method lays a strong foundation for full-page cursive
handwriting recognition systems, particularly for cursive English scripts,
offering an explainable, resource-efficient, and adaptable solution for diverse
writing styles. |
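The vertical-projection step can be sketched as follows: low-ink valleys in the
column-wise ink histogram become candidate segmentation columns; the thresholds
and the toy image are invented for illustration:

    # Hedged sketch: candidate cut columns from a vertical projection profile.
    import numpy as np

    def candidate_cuts(binary_img, min_gap=3):
        """binary_img: 2-D array with 1 = ink; returns valley-center columns."""
        profile = binary_img.sum(axis=0)             # ink per column
        valleys = profile <= profile.mean() * 0.25   # 'low-ink' threshold (assumed)
        cuts, start = [], None
        for col, low in enumerate(valleys):
            if low and start is None:
                start = col
            elif not low and start is not None:
                if col - start >= min_gap:           # ignore 1-2 px noise gaps
                    cuts.append((start + col) // 2)  # centre of the valley run
                start = None
        return cuts

    img = np.zeros((20, 40), dtype=int)
    img[5:15, 2:12] = 1                              # two fake 'characters'
    img[5:15, 20:30] = 1
    print(candidate_cuts(img))                       # one cut near column 16

In the proposed hybrid, such candidates would then be confirmed or refined by
the skeleton-intersection analysis in touching regions.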
|
Keywords: |
Cursive Handwriting, Character Segmentation, Overlapping Strokes,
Skeletonization, Vertical Projection Profile, Handwriting Recognition
Pre-Processing, Connected Components, Stroke Analysis, Shi-Tomasi Corner
Detection, Adjacency Matrix, Depth-First Search (DFS), Graph-Based Character
Recognition, Times New Roman & Courier New Fonts, Graph Edit Distance,
Graph-Based Matching (GBM) |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
EMOJI_SENTI_WORD: AN ENHANCED METHOD FOR EMOJI SENTIMENT CLASSIFICATION AND
POLARITY SHIFT HANDLING USING EMOJIS |
|
Author: |
A. JAPHNE, Dr. R. MURUGESWARI |
|
Abstract: |
The increased use of emojis in online textual communication highlights the
critical need for their effective classification in sentiment analysis to
extract valuable information. Since most social media posts are very short, a
careful understanding of emoji usage is necessary to effectively handle the
polarity shift, i.e., a change in the overall sentiment. Additionally, the voluminous
social media data requires a more efficient sentiment classification system with
greater accuracy and speed for modern real-time applications. Therefore, this
research aims to analyse the impact of emojis on sentiment classification,
especially when they act as catalysts for polarity shifts in sentences, and
develop an enhanced method for emoji classification that effectively handles
polarity shifts with better accuracy and speed compared to existing methods.
Nine Machine Learning classifiers and the Long Short-Term Memory
(LSTM) Deep Learning model were trained on a dataset of corona-related tweets.
The inclusion of emojis in the text resulted in a notable increase in accuracy
from 67% (text only) to 84%. For tweets where emojis caused polarity shifts,
accuracy increased to 97%, even when emojis alone were used, and the processing
time was reduced by half as well. Among the classifiers, LSTM demonstrated high
accuracy. To further enhance emoji sentiment classification, a new method
named ‘EMOJI_SENTI_WORD’ was proposed in this research, which improved accuracy
up to 10% across various classifiers. In contrast to the traditional method,
‘EMOJI_SENTI_WORD’ replaced emojis with sentiment words based on their sentiment
polarity intensity. This unique approach enables EMOJI_SENTI_WORD to achieve
improved accuracy and performance. Apart from the corona-related dataset, the
EMOJI_SENTI_WORD algorithm was also tested on two additional tweet datasets:
‘MovieReview’ and ‘OnlineClass’, to validate its scalability and adaptability,
achieving a notable 8% increase in accuracy. |
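A toy rendering of the EMOJI_SENTI_WORD idea as the abstract describes it: each
emoji is replaced by a sentiment word matched to its polarity intensity; the
lexicon and threshold below are invented for illustration and are not the
paper's actual mapping:

    # Hedged sketch: replace emojis with polarity-matched sentiment words.
    EMOJI_LEXICON = {
        "\U0001F600": ("happy", 0.8),                # grinning face
        "\U0001F622": ("sad", -0.6),                 # crying face
        "\U0001F621": ("furious", -0.9),             # pouting face
    }

    def emoji_senti_word(text):
        for emoji, (word, score) in EMOJI_LEXICON.items():
            # Stronger polarity -> intensified word; weaker -> plain word.
            repl = f"very {word}" if abs(score) >= 0.75 else word
            text = text.replace(emoji, f" {repl} ")
        return " ".join(text.split())

    print(emoji_senti_word("Online classes again \U0001F622"))
    # -> 'Online classes again sad'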
|
Keywords: |
Sentiment Analysis, Emojis, Emoticons, Polarity Shift, Opinion Mining |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
Title: |
MED-RESATTXNET: A HYBRID DEEP LEARNING FRAMEWORK FOR DETECTION AND
CLASSIFICATION OF LIVER DISEASES USING BIOMEDICAL IMAGE ANALYSIS |
|
Author: |
K. VENKATA LAKSHMI, JAMES STEPHEN M, PRASAD REDDY P.V.G.D |
|
Abstract: |
Liver diseases remain one of the most critical public health challenges
worldwide, contributing significantly to mortality and long-term disability.
Early and accurate diagnosis is central to successful treatment, but traditional
methods involving biopsy and manual analyses of images present many problems,
including invasiveness, inconsistency, and delay. This paper presents
Med-ResAttXNet, a hybrid framework that incorporates three complementary
modules: an attention-augmented U-Net++ to segment the liver and lesions, a
Residual Attention Network to focus feature learning, and a Vision Transformer
to capture global context. These modules are integrated into a single pipeline
that classifies liver conditions as healthy, benign, or malignant. The main
idea behind the presented work is that a properly architected hybrid scheme can
surpass traditional CNN-based or standalone models by taking the best of both
worlds, offering local accuracy together with broader context. Compared with
existing methods on a publicly available liver tumor segmentation dataset
(LiTS), the proposed model achieved the highest segmentation accuracy and Dice
coefficient, as well as the best classification measures, validating its
effectiveness and potential in clinical applications. These results indicate
that Med-ResAttXNet can serve as a reliable decision-support tool to help
radiologists make early diagnoses and plan treatment. |
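A speculative sketch of the hybrid idea, fusing local CNN features with global
ViT context for the three-way decision; this is not the authors' exact design,
and the segmentation branch is omitted for brevity:

    # Hedged sketch: CNN (local) + Vision Transformer (global) feature fusion.
    import torch
    import torch.nn as nn
    from torchvision import models

    class HybridLiverClassifier(nn.Module):
        def __init__(self, num_classes=3):           # healthy / benign / malignant
            super().__init__()
            res = models.resnet50(weights=None)
            self.local = nn.Sequential(*list(res.children())[:-1])  # 2048-d
            self.global_branch = models.vit_b_16(weights=None)
            self.global_branch.heads = nn.Identity()                # 768-d
            self.fuse = nn.Linear(2048 + 768, num_classes)

        def forward(self, x):                        # x: (B, 3, 224, 224)
            f_local = self.local(x).flatten(1)
            f_global = self.global_branch(x)
            return self.fuse(torch.cat([f_local, f_global], dim=1))

    print(HybridLiverClassifier()(torch.randn(1, 3, 224, 224)).shape)  # (1, 3)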
|
Keywords: |
Residual Attention Network, Vision Transformer, Medical Image Segmentation,
U-Net++, Hybrid Deep Learning Architecture, CT Scan Classification,
Med-ResAttXNet. |
|
Source: |
Journal of Theoretical and Applied Information Technology
31st October 2025 -- Vol. 103. No. 20-- 2025 |
|
Full
Text |
|
|
|