Submit Paper / Call for Papers
The journal receives papers in continuous flow and considers articles from a wide
range of Information Technology disciplines, from basic research to the most
innovative technologies. Please submit your papers electronically through our
submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a
compatible format so that they may be evaluated for publication in an upcoming
issue. This journal uses a blinded review process; please include all personally
identifiable information in the manuscript when submitting it for review, and we
will remove the necessary information on our side. Submissions to JATIT should be
full research or review papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
September 2025 | Vol. 103 No. 18 |
|
Title: |
PANTUNDUDA: EXPLAINABLE AI TO LANDMARK FACE RECOGNITION USING CANONICAL VARIATE
ANALYSIS |
|
Author: |
AZREE NAZRI, OLALEKAN AGBOLADE, FAISAL AZIZ, ISZUANIE SYAFIDZA CHE ILIAS |
|
Abstract: |
The explainable face recognition (XFR) paradigm is concerned with the difficult
challenge of deciphering the findings obtained by a facial matching system in
order to shed light on the underlying rationale behind the selection of a
specific identity in response to a given probe. The usefulness of biologically
significant face cues in the context of gender classification is investigated in
this study. The suggested statistical methodology involves detecting
interpretable geometric features near anatomical markers, known as PANTUNDUDA.
One classifier, consisting of 68 landmarks (68-LM), was meticulously examined.
An empirical investigation was carried out using a dataset of 100 facial images
that were evenly divided among male and female subjects. The procedural
framework seamlessly incorporates PCA, TPS, Procrustes superimposition, and
Canonical Variate Analysis (CVA), with each component contributing independently
to the extraction of comparable shape data from facial profiles. This method
makes use of the designated landmark classifier. The findings highlight the
importance of 20 facial landmarks as biologically interpretable markers for the
classifier. These revelations have far-reaching consequences for XFR,
particularly for landmark classifiers designed for gender-specific face
comparisons. As a result, facial recognition algorithms have become more
transparent and understandable. |
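For readers unfamiliar with the pipeline the abstract names, the following minimal Python sketch shows how Procrustes-superimposed landmarks can be fed to a canonical variate analysis for a two-class (gender) problem; the toy landmark data, the choice of the first shape as reference, and the use of scikit-learn's LinearDiscriminantAnalysis as a stand-in for CVA are illustrative assumptions, not the authors' PANTUNDUDA implementation.

```python
# Illustrative sketch only: Procrustes-aligned 68-point landmarks classified with an
# LDA used as a stand-in for canonical variate analysis (toy data, not the study's).
import numpy as np
from scipy.spatial import procrustes
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def align_landmarks(shapes):
    """Superimpose every (68, 2) landmark set onto the first shape (assumed reference)."""
    reference = shapes[0]
    aligned = []
    for shape in shapes:
        _, aligned_shape, _ = procrustes(reference, shape)
        aligned.append(aligned_shape.ravel())        # flatten to a 136-D feature vector
    return np.vstack(aligned)

# Toy stand-in for the 100-image dataset: 50 "male" and 50 "female" samples.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(100, 68, 2))
labels = np.array([0] * 50 + [1] * 50)

X = align_landmarks(shapes)
cva = LinearDiscriminantAnalysis(n_components=1)     # two classes -> one canonical variate
print("cross-validated accuracy:", cross_val_score(cva, X, labels, cv=5).mean())
```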
|
Keywords: |
PANTUNDUDA, AI, Variate Analysis, Facial Recognition |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
APPLICATION OF KANSEI ENGINEERING METHOD FOR PRODUCT FORM DESIGN |
|
Author: |
FENGTAO ZHANG, ABDUL AZIZ BIN ZALAY, HAIXIA JIANG |
|
Abstract: |
With the development of society and increasingly fierce market competition, users'
perceptual needs have become a factor that cannot be ignored in product design, and
people's perceptual cognition of products plays an increasingly important role in
purchasing decisions. Kansei engineering (KE), as a design method that can quantify
users' perceptual needs, transform them into specific design elements, and improve
users' satisfaction with a design, has attracted considerable attention and research.
Taking 2020 to 2024 as the research time span, this paper summarizes the working
principles, basic processes, and methods of KE through a systematic review of 221
publications and the mining of 67 core works. It analyzes the current research status
of KE from four aspects: the construction of user perceptual image lexicons, the
deconstruction of product modeling, the construction methods of KE models, and the
applications of KE, and it forecasts future research directions. This research
summarizes the research framework of Kansei Engineering in the field of product
styling design, provides a data basis for future theoretical research on Kansei
Engineering, and also offers methodological assistance to designers. |
|
Keywords: |
Kansei Engineering, Perceptual Image, Form Design, KE Model, Review |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
THE IMPACT OF DIGITAL TECHNOLOGIES ON ACCOUNTING TRANSFORMATION |
|
Author: |
IRYNA SHCHYRBA, NATALIIA LOKHANOVA, NATALIІA HOLIACHUK, VALENTYNA SHEVCHENKO,
YEVHENIIA KARPENKO |
|
Abstract: |
Digital technologies are significantly transforming accounting, changing the
methods of collecting, processing, and analysing financial information on a
global scale. The aim of the study is to assess the impact of digital
technologies on accounting transformation in seven countries with different
levels of digitalization. The research employed the following methods:
quantitative approach, analysis of panel data for 2020-2024, and econometric
modelling. The assessment was carried out using a fixed-effects model that takes
into account institutional and technological factors of transformation. The
results showed that the United Kingdom (UK) and Germany have the highest
accounting transformation indicators: within 43.5-47.2 points and from 41.0 to
44.8 points, respectively. This is explained by a high level of digitalization
(DigitalTechIndex over 80) and the full application of International Financial
Reporting Standards (IFRS). At the same time, Ukraine, despite limited digital
resources (Digital Technology Index within 36.2-49.7), demonstrates positive
dynamics of transformation — from 28.0 to 33.2 points. This is determined by the
implementation of IFRS and existing regulatory support. It was found that
digital transformation significantly improves accounting practices in
combination with the adoption of IFRS and the development of human capital. The
results emphasize the need for strategic implementation of digital technologies
in accounting, which has implications for policymakers and financial
institutions. The study can be useful for shaping digital development policies
and modernizing financial reporting. Further studies should focus on industry
analysis and studying the impact of artificial intelligence (AI) in accounting
decisions. |
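As background to the fixed-effects modelling the abstract describes, the sketch below fits a two-way fixed-effects regression of a transformation score on a digitalization index and an IFRS indicator; the synthetic panel, variable names, and coefficients are hypothetical and do not reproduce the study's data or estimates.

```python
# Hedged sketch of a two-way fixed-effects panel regression on a synthetic panel
# (country and year dummies absorb institutional and time effects).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
countries = ["UK", "Germany", "France", "Poland", "Ukraine", "Romania", "Turkey"]
years = list(range(2020, 2025))

rows = []
for c in countries:
    base = rng.uniform(25, 45)                       # country-specific level (fixed effect)
    for y in years:
        digital = rng.uniform(35, 90)                # hypothetical digitalization index
        ifrs = int(rng.random() < 0.8)               # hypothetical IFRS-adoption indicator
        score = base + 0.15 * digital + 2.0 * ifrs + rng.normal(0, 1)
        rows.append((c, y, score, digital, ifrs))

panel = pd.DataFrame(rows, columns=["country", "year", "score", "digital_index", "ifrs"])

fe_model = smf.ols("score ~ digital_index + ifrs + C(country) + C(year)", data=panel).fit()
print(fe_model.params[["digital_index", "ifrs"]])    # estimated transformation drivers
```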
|
Keywords: |
Accounting, Digital Technologies, International Standards, Financial Reporting,
Econometric Modelling, Transformation, Digitalization, Human Capital. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
MULTI-CORE VIRTUAL MACHINE PROCESSING IN DATA CENTRE WITH IMPROVED DATA SHARING
AND SECURITY OF THE DATA ENVIRONMENT |
|
Author: |
J. MANIKANDAN, DR.UPPALAPATI SRILAKSHMI |
|
Abstract: |
The efficient placement of virtual machines (VMs) in cloud environments is
crucial for optimizing resource utilization and achieving balanced workloads.
This paper proposes a Multi-Core Virtual Data Centre Resource Sharing Security
(MCVDC-RSS) scheme. The proposed MCVDC-RSS comprises data centers that consider the
multi-core nature of physical machines and leverage vector heuristics to achieve
balanced and efficient VM placement. MCVDC-RSS considers various vector
heuristics, including L1-norm, L2-norm, cosine angle, and dot product, to
evaluate the distance and similarity between VMs and physical machine resources.
By incorporating these heuristics, the algorithm aims to minimize resource
wastage and distribute VMs across physical machines in a balanced manner. To
evaluate the effectiveness of MCVDC-RSS, we conducted simulations using both
Amazon EC2 instances and the GoCJ dataset. The results demonstrate that
MCVDC-RSS outperforms other algorithms in terms of resource wastage, overhead,
active and overloaded pCPUs (physical CPUs), and overall system performance.
Furthermore, MCVDC-RSS effectively manages compilation and execution plan times,
CPU and memory utilization. The algorithm ensures efficient utilization of
computational resources, leading to improved performance and workload
management. The MCVDC-RSS provides a robust and efficient solution for VM
placement in cloud environments. The proposed MCVDC-RSS model exhibits
significant performance in securing data centers, with optimized resource
utilization, reduced resource wastage, and increased efficiency. |
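The vector heuristics named above can be illustrated with a short sketch that scores candidate hosts by L1-norm, L2-norm, cosine angle, and dot product between a VM's demand vector and a host's remaining capacity; the placement rule and the two-dimensional resource vectors are assumptions for illustration, not the MCVDC-RSS algorithm itself.

```python
# Illustrative scoring of VM placement with the four vector heuristics (assumed rule).
import numpy as np

def heuristics(demand, remaining):
    """demand and remaining are normalized resource vectors such as [vCPU, memory]."""
    diff = remaining - demand
    return {
        "l1":     np.sum(np.abs(diff)),                                  # total leftover
        "l2":     np.linalg.norm(diff),                                  # Euclidean leftover
        "cosine": float(np.dot(demand, remaining) /
                        (np.linalg.norm(demand) * np.linalg.norm(remaining))),
        "dot":    float(np.dot(demand, remaining)),
    }

def place_vm(demand, hosts):
    """Pick the feasible host whose remaining capacity is best aligned with the demand."""
    feasible = [(i, r) for i, r in enumerate(hosts) if np.all(r >= demand)]
    if not feasible:
        return None
    return max(feasible, key=lambda item: heuristics(demand, item[1])["cosine"])[0]

hosts = [np.array([0.6, 0.2]), np.array([0.4, 0.5]), np.array([0.1, 0.9])]  # free CPU, RAM
vm = np.array([0.3, 0.4])
print("chosen host:", place_vm(vm, hosts))
```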
|
Keywords: |
Data Centers, Multi-Core Environment, Virtual Machine, Security, Resource
Sharing |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
STREAM PROCESSING ALGORITHMS FOR UNSTRUCTURED DATA ANALYSIS |
|
Author: |
VIKTORIIA SHUTENKO, MYKOLA MEDVEDIEV, YURII DOROSHENKO, VALERIA DROMENKO,
NATALIIA OMETSYNSKA |
|
Abstract: |
Relevance of the research: The rapid growth of unstructured streaming data and the
need for real-time decision-making necessitate the development of effective
algorithms tailored for dynamic processing, semantic enrichment, and adaptive
analytics. Purpose: The purpose of the study is to develop an optimized mathematical
model for the processing of unstructured data streams to support real-time decisions.
Methods: The research employed decomposition analysis, mathematical formalism,
optimization modeling, reinforcement learning, and simulation modeling. Results: This
study presents an optimized algorithm for real-time processing of unstructured data
streams, derived from the decomposition of tested models and the formalization of a
generalized mathematical framework. By integrating Kafka with Zero-Copy I/O,
CEEMDAN-based preprocessing, ONNX-inference models, knowledge graph enrichment, and
QoS-driven orchestration (Kubernetes, KEDA), the proposed solution achieved a 3.8×
reduction in latency (from 250.0 to 65.0 ms), increased accuracy to 91.3%, and
doubled throughput (1100 to 2200 events/sec). Additionally, CPU load was reduced by
27% and memory usage by 45%, while the Adaptability Score and Semantic Alignment
Score improved from 0.52 to 0.88 and from 0.64 to 0.91, respectively. These results
confirm the algorithm's efficiency, scalability, and applicability in intelligent
Decision Support Systems under real-time constraints. Scientific novelty: The
scientific novelty of the research lies in the development of a mathematical model
for stream processing, characterized by dynamic ingestion, semantic enrichment, and
optimization through reinforcement learning, all aimed at enhancing real-time
productivity. Prospects for future research: Further research involves validating the
optimized algorithm on high-entropy streams, scaling to multimodal datasets, and
enhancement through adaptive retraining and dynamic orchestration. |
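One stage of such a pipeline, consuming events from Kafka and scoring them with an ONNX model, can be sketched as follows; the topic name, broker address, model file, and event layout are hypothetical, and CEEMDAN preprocessing, knowledge-graph enrichment, and KEDA-driven orchestration are outside the snippet's scope.

```python
# Hypothetical single stage: read JSON events from a Kafka topic and run ONNX inference.
import json
import numpy as np
import onnxruntime as ort
from kafka import KafkaConsumer

session = ort.InferenceSession("stream_classifier.onnx")        # assumed model file
input_name = session.get_inputs()[0].name

consumer = KafkaConsumer(
    "unstructured-events",                                      # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    features = np.asarray(message.value["features"], dtype=np.float32).reshape(1, -1)
    score = session.run(None, {input_name: features})[0]
    print("event scored:", score.tolist())
```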
|
Keywords: |
Kafka, Flink, Semantic Enrichment, Reinforcement Learning, RocksDB, CEEMDAN
Decomposition, ONNX Inference. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
ENHANCING CLOUD DATA SECURITY WITH POLYNOMIAL ELLIPTIC CURVE ZERO KNOWLEDGE
PROOF (POLYECC-ZKP) |
|
Author: |
E.JANSIRANI, DR.N.KOWSALYA |
|
Abstract: |
Cloud computing has revolutionized data storage and access, but it remains
vulnerable to various security threats. Cryptographic approaches like Zero
Knowledge Proof (ZKP) and Elliptic Curve Cryptography (ECC) have been widely
used to address these issues. In order to improve cloud data security, this
study presents a novel Polynomial Elliptic Curve Zero Knowledge Proof
(PolyECC-ZKP) algorithm. By incorporating polynomial functions into the ECC
architecture, the suggested technique provides secure data authentication and
strong encryption. We present a thorough analysis of the PolyECC-ZKP algorithm
and evaluate its performance in comparison to other methods that have already
been developed, such as Lattice-Based Zero Knowledge Proof (LZKP), Multi-Party
Computation and Zero Knowledge Proof (MPC-ZKP), Hybrid Elliptic Curve
Cryptography and Zero Knowledge Proof (HECCZKP), Hybrid Zero Knowledge Proof
with ECC and ECDSA (Hybrid ZKP with ECDSA), and ECC. Scalability, quantum
resistance, computation overhead, and security are the basis for the comparison.
According to experimental findings, PolyECC-ZKP improves cloud security while
requiring little computing power and is resistant to both conventional and
quantum attacks. The results position PolyECC-ZKP as a strong candidate for
secure cloud environments. |
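As background on the zero-knowledge-proof building block, the toy sketch below runs a Schnorr-style sigma protocol proving knowledge of a secret exponent; it uses a multiplicative group modulo a Mersenne prime rather than an elliptic curve and omits the polynomial construction, so it illustrates only the general ZKP pattern, not the PolyECC-ZKP algorithm.

```python
# Toy Schnorr-style proof of knowledge of x with g^x = y (mod p); NOT PolyECC-ZKP.
import secrets

p = 2**127 - 1                 # Mersenne prime used as a small stand-in group modulus
g = 5
q = p - 1                      # exponent arithmetic is done modulo p - 1

x = secrets.randbelow(q)       # prover's secret
y = pow(g, x, p)               # public value

r = secrets.randbelow(q)
t = pow(g, r, p)               # 1) prover commits
c = secrets.randbelow(q)       # 2) verifier sends a random challenge
s = (r + c * x) % q            # 3) prover responds without revealing x

# Verifier accepts iff g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```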
|
Keywords: |
Cloud Data Security, Polynomial Elliptic Curve Cryptography, Zero Knowledge
Proof, Hybrid Cryptography, Quantum-Resistant Algorithms, Multi-Party Computation,
Lattice-Based Cryptography |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
DETECTION OF BRAIN TUMOR AND PREDICTION OF SEVERITY ANALYSIS USING OPTIMIZATION
DRIVEN 3D CONVOLUTION NEURAL NETWORK |
|
Author: |
Mr.BHASKAR MEKALA, Dr.NEELAMADHAB PADHY, Dr. P.KIRAN KUMAR REDDY |
|
Abstract: |
Accurate assessment of brain tumor severity plays a pivotal role in effective
diagnosis and treatment planning. In this research, a comprehensive framework
for brain tumor severity analysis is proposed, leveraging advanced
preprocessing, segmentation, feature extraction and deep learning techniques.
The preprocessing phase employs an adaptive bilateral filter to mitigate noise
and enhance image quality. Subsequently, a segmentation approach utilizing salp
swarm boosted rough fuzzy c-means optimally partitions the brain images,
facilitating precise tumor localization. Feature extraction is performed using
Mayfly optimization driven 3D Convolution Neural Network (MO-3DCNN), effectively
capturing discriminative information from the segmented regions. The core
innovation of this research lies in the development of a novel MO-3DCNN model,
driven by the Mayfly optimization-derived features. This model enables a more
comprehensive analysis by considering the spatial context of the brain tumor.
The proposed MO-3DCNN model is trained to recognize intricate patterns within
the segmented regions, enabling automated severity classification. Extensive
experiments on a diverse dataset demonstrate the superiority of the proposed
methodology over existing approaches. Moreover, the proposed MO-3DCNN model
excels at accurately stratifying brain tumor severity, highlighting its
potential as a valuable clinical tool. |
|
Keywords: |
Convolution Neural Network, Rough Fuzzy C-Means, Salp Swarm Optimization,
Mayfly Optimization, Adaptive Bilateral Filter. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
IMPROVING USER EXPERIENCE WITH DIGITAL TRANSFORMATION PRACTICES IN ACADEMIA |
|
Author: |
NAJWA H SAMRGANDI |
|
Abstract: |
Digital transformation in higher education fundamentally reshapes how
universities function and interact with students, faculty, and external
stakeholders. Beyond technological upgrades, this transformation has direct
implications for user experience (UX), influencing satisfaction, engagement, and
retention. This study conducts a comparative analysis of digital transformation
practices and their impact on UX at Carnegie Mellon University’s Digital
Transformation and Innovation Center (CMU DTIC) and the University of
California, Irvine’s Center for Digital Transformation (UCI CDT). A comparative
case study approach was employed, examining website content, publications, and
digital initiatives from both centers. Key dimensions included research focus,
projects, faculty structure, collaborations, funding sources, and the
integration of UX principles. The findings indicate that CMU DTIC adopts a
predominantly technology-driven and solution-oriented approach, emphasizing
responsible artificial intelligence (AI), machine learning (ML), and
rapid-response initiatives. While this strategy produces technically advanced
outcomes, it sometimes limits the depth of user-centered design integration. By
contrast, UCI CDT follows a strategic and long-term perspective, prioritizing
strategic digital transformation, leadership, and societal impact, though its
initiatives occasionally lack strong practical UX application. Both institutions
demonstrate strong collaborations and diverse funding models, with CMU
leveraging a combination of industry collaborations and external funding
sources, while UCI emphasizes internal stability. Overall, the analysis
demonstrates that successful academic digital transformation is less dependent
on the level of technological sophistication than on the adoption of inclusive,
user-centered practices that enhance engagement, accessibility, and adoption. A
balanced framework that integrates CMU’s technical responsiveness with UCI’s
strategic, interdisciplinary orientation can optimize both research outcomes and
practical impact within higher education. |
|
Keywords: |
Academia, Comparative Study, Digital Transformation, Interdisciplinary
Collaboration, User Experience. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
THE SECURE DYNAMIC WIRELESS SENSOR NETWORK |
|
Author: |
ASO AHMED MAJEED, YALMAZ NAJM ALDEEN TAHER, HOGER K. OMAR, KAWA M KAKY |
|
Abstract: |
Wireless Sensor Networks (WSNs) vary in size by application and are often
deployed in uncontrolled areas, making them vulnerable to attacks, especially on
routing protocols. Due to weak transmission security, messages can be easily
intercepted or altered. Thus, efficient key management is essential to reduce
these risks. There are many challenges to securing key management, such as key
distribution, routing algorithms, overhead, scalability, efficiency, and time
consumption for encryption and decryption. Thus, it is challenging to create
efficient security protocols while decreasing costs. The proposed scheme is more
adaptive and secure because it works like a one-way function, saving storage
capacity and execution time compared with other schemes, and it achieves the
security goals (integrity, authentication, confidentiality, data freshness, and
scalability). Finally, unlike other schemes, it prevents message replay, keeping
an adversary from guessing the keys and penetrating the network. |
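The one-way-function idea the abstract refers to can be illustrated with a short sketch in which a link key is rolled forward by hashing and messages carry a counter-bound MAC; the message format, counter check, and initial key are assumptions, not the authors' exact scheme.

```python
# Illustrative hash-chain key refresh with replay rejection (assumed message layout).
import hashlib
import hmac

def next_key(current_key: bytes) -> bytes:
    """One-way key update: new key = SHA-256(old key); old keys stay unrecoverable."""
    return hashlib.sha256(current_key).digest()

def protect(key: bytes, counter: int, payload: bytes) -> bytes:
    """MAC over counter and payload gives integrity and authentication."""
    return hmac.new(key, counter.to_bytes(4, "big") + payload, hashlib.sha256).digest()

key = b"initial-network-key"          # assumed pre-loaded key material
last_counter = 0

counter, payload = 1, b"temperature=21.4"
tag = protect(key, counter, payload)

# Receiver: reject replays (non-increasing counter) and bad MACs.
fresh = counter > last_counter
valid = hmac.compare_digest(tag, protect(key, counter, payload))
print("accepted" if fresh and valid else "rejected")

key = next_key(key)                   # both sides roll the key forward after the epoch
```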
|
Keywords: |
Cryptography, Key Management, Security, Wireless Sensor Network, Data Sequence. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
FRAMEWORK FOR DATA INTEGRATION IN CROSS-DOMAIN PERVASIVE ENVIRONMENTS:
LIGHTWEIGHT AND SECURE RESTFUL API |
|
Author: |
AJAI GOPAL BHARTARIYA, S. K. SINGH, AJAY KUMAR BHARTI |
|
Abstract: |
The digital revolution has compelled organizations to operate within
increasingly complex digital environments. In such contexts, effective
coordination is essential to ensure seamless interaction among diverse systems
and stakeholders. This paper presents a structured framework for API adoption in
business settings, developed through an extensive review of existing literature
on API adoption strategies. Emphasizing the growing significance of RESTful APIs
in today’s interconnected world, the study illustrates how APIs serve as a
critical enabler for integrating heterogeneous technologies, systems, and
platforms. By leveraging APIs, organizations can unlock new business
opportunities, optimize operational efficiency, and deliver innovative services
to customers. The central theme of this research is the development of a
framework for data integration using RESTful APIs, with a particular focus on
security. The paper highlights the pivotal role of security in API-based
integration, examining authentication and authorization mechanisms alongside
best practices for safeguarding data privacy and integrity. In addition, it
explores monitoring and logging practices, as well as developer testing
approaches, underscoring the importance of proactive issue detection and
resolution to ensure the reliability and performance of APIs. Because digital
organization structures differ in their degree of maturity, the framework is
flexible enough to let organizations focus on the specific areas that need their
attention. To address security, usability, real-time capabilities, and
monitoring, this research offers a thorough overview of a Framework for Data
Integration in Cross-Domain Pervasive Environments using a Lightweight and
Secure RESTful API. |
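A minimal sketch of a lightweight, secured RESTful integration endpoint of the kind the framework targets is shown below, combining bearer-token authentication with request logging; the route, token handling, and payload fields are hypothetical placeholders rather than the paper's reference implementation.

```python
# Hypothetical lightweight integration endpoint: bearer-token auth + request logging.
import logging
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

API_TOKEN = "replace-with-a-secret-from-a-vault"      # assumption: issued out of band

@app.before_request
def authenticate():
    """Reject any request that does not carry the expected bearer token."""
    if request.headers.get("Authorization", "") != f"Bearer {API_TOKEN}":
        abort(401)

@app.route("/api/v1/records", methods=["POST"])
def ingest_record():
    """Accept a JSON record from another domain and acknowledge it."""
    record = request.get_json(silent=True)
    if not record or "id" not in record:
        abort(400)
    app.logger.info("integrated record %s", record["id"])   # monitoring / logging hook
    return jsonify({"status": "stored", "id": record["id"]}), 201

if __name__ == "__main__":
    app.run(port=8080)
```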
|
Keywords: |
RESTful, API, Security, Integration, Lightweight, Domain. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
A UNIFIED APPROACH COMBINING FUZZY LOGIC AND KERNEL-BASED TWIN PROXIMAL SVM FOR
DETECTING LUNG CANCER |
|
Author: |
VIRINCHI PUVVADA , PRABAKERAN SARAVANAN |
|
Abstract: |
Lung cancer is the leading cause of cancer-related fatalities globally,
emphasizing the importance of early detection to improve survival rates.
Traditional diagnostic methods often struggle with issues of sensitivity and
specificity, which calls for the development of more advanced techniques. This
study introduces an improved method for detecting lung cancer using CT images,
combining deep learning, fuzzy logic, and Twin Proximal Support Vector Machine
(TPSVM). The proposed hybrid model applies fuzzy logic for feature
fuzzification, uses the VGG19 model for transfer learning to extract features,
and implements TPSVM for classification, further optimized with the Dolphin
Echolocation Optimizer (DEO). When tested against conventional classifiers such
as decision trees, Naive Bayes, and k-nearest neighbors (KNN), the model showed
exceptional performance across various evaluation metrics. This approach offers
a blend of high accuracy and computational efficiency, providing an effective
tool for precise and early lung cancer diagnosis, while also contributing to the
advancement of AI-based medical imaging technologies. |
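The transfer-learning step can be sketched as follows, extracting VGG19 features and classifying them with an ordinary kernel SVM standing in for the Twin Proximal SVM; the random image arrays and labels are hypothetical, and fuzzification and the Dolphin Echolocation Optimizer are not reproduced.

```python
# Hedged sketch: VGG19 feature extraction + a plain SVM standing in for TPSVM.
import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input

extractor = VGG19(weights="imagenet", include_top=False, pooling="avg")   # 512-D features

# Random arrays stand in for preprocessed CT slices (4 images, 224x224 RGB); 1 = malignant.
rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(4, 224, 224, 3)).astype("float32")
labels = np.array([1, 0, 1, 0])

features = extractor.predict(preprocess_input(images.copy()), verbose=0)  # shape (4, 512)
clf = SVC(kernel="rbf", C=1.0).fit(features, labels)
print("predicted classes:", clf.predict(features))
```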
|
Keywords: |
Deep Learning, Artificial Intelligence, Medical Image Diagnosis, Fuzzy Logic,
Computer Vision. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
EFFICIENT MULTI-CLASS ATTACKS CLASSIFICATION FOR INTRUSION DETECTION USING
WRAPPER-BASED FEATURE SELECTION |
|
Author: |
KURRA UPENDRA CHOWDARY, BHAVNA BAJPAI, V ANJANI KRANTHI, B MADHAV RAO, B
VEERAMALLU, G SATYANARAYANA |
|
Abstract: |
Cloud computing's massive uptake across industries has resulted in huge
advancements, but it has also introduced massive security holes. Intrusion
detection systems are crucial for identifying and reducing risks in cloud
environments since these weaknesses make them accessible to a wide range of
harmful actions. Nevertheless, classifier performance could be negatively
impacted by the abundance of irrelevant and superfluous features included in the
massive amounts of modern network traffic data stored in the cloud. Moreover,
processing complexity rises with such massive information, thus reducing the
efficacy of IDS. This study presents a hybrid feature selection method that
integrates two bio-inspired algorithms, Particle Swarm Optimization (PSO) and
Genetic Algorithm (GA), to address these challenges. This hybrid approach
optimizes feature selection, ensuring a more efficient search for relevant
features, which can subsequently be used to train a Random Forest (RF)
classifier. Furthermore, to address class imbalance, the study incorporates a
hybrid resampling procedure based on the Synthetic Minority Over-sampling
Technique (SMOTE) for the minority class. This technique improves the true
positive rate (TPR) while simultaneously reducing the false positive rate (FPR),
hence enhancing IDS performance. The proposed approach was evaluated using the
CICDDoS2019 dataset, showing superior overall performance in multiclass
classification. The hybrid IDS outperformed several other classifiers, including
Stacking, Extra Trees, Multi-Layer Perceptron, XGBoost, K-Nearest Neighbors,
Logistic Regression, Naïve Bayes, Support Vector Machine, and Decision Tree. The
method also showed consistent performance across multiple evaluation metrics,
surpassing the best existing methods. |
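A much-simplified stand-in for the PSO/GA wrapper described above is sketched below: SMOTE rebalancing followed by a greedy bit-flip search over feature masks scored with a Random Forest; the toy dataset and the absence of real swarm or crossover mechanics are deliberate simplifications.

```python
# Simplified wrapper-style feature search (not real PSO/GA): SMOTE + greedy mask flips.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)      # imbalanced toy data
X, y = SMOTE(random_state=0).fit_resample(X, y)                     # rebalance minority class

def score(mask: np.ndarray) -> float:
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

rng = np.random.default_rng(0)
mask = rng.random(X.shape[1]) < 0.5
best = score(mask)
for _ in range(30):                                                  # greedy wrapper search
    candidate = mask.copy()
    candidate[rng.integers(X.shape[1])] ^= True                      # flip one feature
    if (cand_score := score(candidate)) > best:
        mask, best = candidate, cand_score
print(f"{mask.sum()} features kept, CV accuracy ~ {best:.3f}")
```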
|
Keywords: |
Cloud Computing, Intrusion Detection System, Particle Swarm Optimization,
Genetic Algorithm (GA), Synthetic Minority Oversampling Technique, Class
Imbalance, CICDDoS2019 Dataset |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
SNOMEDTM: A NOVEL TRANSFORMER-BASED ARCHITECTURE FOR ADVERSE DRUG EVENT
EXTRACTION FROM CLINICAL TEXT |
|
Author: |
SALISU MODI, KHAIRUL AZHAR KASMIRAN, NURFADHLINA MOHD SHAREF, MOHD YUNUS SHARUM |
|
Abstract: |
Extraction of adverse drug event (ADE) mentions and their attributes and
relations within electronic health records is crucial for adequate
pharmacovigilance studies and drug safety surveillance. Transformer-based large
language models (LLMs) have recently shown promising results in this research area.
However, clinical LLMs are few and have a limited number of parameters. General
LLMs are domain-agnostic models developed for varying NLP tasks. However, due to
the domain-specific nature of ADE extraction with ambiguous, polysemous and
infrequent entities, general LLMs lacking prior medical knowledge have been
observed to perform sub-optimally in handling these complex situations within
clinical narrative documents. Consequently, researchers further pre-train the
general models on domain-related knowledge before finetuning them for the
downstream clinical tasks. Nevertheless, this approach is associated with
several risks. The model may overfit when the domain-specific data is small. In
addition, catastrophic forgetting may occur. This study therefore proposes a new
architecture tailored to ADE extraction, the SNOMED Transformer Model (SNOMEDTM),
pre-trained on globally standard medical knowledge bases. The process has two
phases: a new transformer architecture was designed and pre-trained on the
medical-terminology resources SNOMED CT and MedDRA, and the model was then tuned
using fine-tuning and soft prompt tuning for multi-task ADE concept and relation
extraction. This study experimented with two tuning strategies, frozen and
unfrozen model parameters. The model's performance was evaluated using the TAC
2017 and n2c2 2018 clinical challenge datasets. On TAC 2017, the proposed model
outperformed the five compared transformer-based models and the top five systems
contributing to the TAC 2017 challenge under fine-tuning. On n2c2 2018, the model
outperformed GatorTron-base under soft prompting with unfrozen parameters, as well
as the JNRF system. This research demonstrates the potential of incorporating
prior medical knowledge into LLMs tailored for clinical research. |
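The soft prompt tuning strategy mentioned above can be illustrated with a small, self-contained PyTorch sketch in which only a bank of prompt vectors and a classification head are trainable while the backbone stays frozen; the vocabulary size, dimensions, and tiny encoder are toy assumptions and do not reproduce the SNOMEDTM architecture.

```python
# Conceptual soft-prompt tuning sketch with frozen backbone parameters (toy encoder).
import torch
import torch.nn as nn

class SoftPromptClassifier(nn.Module):
    def __init__(self, vocab_size=5000, dim=128, prompt_len=10, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)  # trainable prompts
        self.head = nn.Linear(dim, num_labels)
        for p in list(self.embed.parameters()) + list(self.encoder.parameters()):
            p.requires_grad = False                   # the "frozen parameters" setting

    def forward(self, token_ids):
        tokens = self.embed(token_ids)                               # (B, T, dim)
        prompts = self.prompt.unsqueeze(0).expand(token_ids.size(0), -1, -1)
        hidden = self.encoder(torch.cat([prompts, tokens], dim=1))   # prepend prompts
        return self.head(hidden[:, 0])                               # read out 1st prompt slot

model = SoftPromptClassifier()
logits = model(torch.randint(0, 5000, (2, 32)))       # batch of 2 toy token sequences
print(logits.shape)                                    # torch.Size([2, 3])
```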
|
Keywords: |
Adverse Drug Event, Fine-tuning, Soft Prompt Tuning, Prior Knowledge,
Pretraining.
|
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
FUSION OF GLOBAL OPTIMIZED CONVOLUTIONAL NEURAL NETWORK WITH DEEP FLEXIBLE
NETWORK FOR DIGIT RECOGNITION |
|
Author: |
PRIYANGA K.K, S. SABEEN |
|
Abstract: |
Digit recognition refers to the process of identifying and classifying digits,
especially from handwritten or printed sources. It involves analyzing the
features of each digit to match it to the correct number. Combining a Global
Optimised Convolutional Neural Network (GOCNN) with Deep Flexible Network (DFEN)
has resulted in a unique and powerful digit recognition system. It can
manage scalability, accuracy, and flexibility. This hybrid model outperforms the
others by combining the high dynamic flexibility of DFEN for handling complex
patterns and shifting structures with the robust feature extraction
capabilities of CNNs. Global Optimized Convolutional Neural Networks provide
consistent feature extraction across several datasets. The Deep Flexible Network
acts as a counterweight by continuously learning and adapting to new input,
including distorted handwriting. The convolutional layers of the proposed
systems employ two complex optimization approaches, including gradient-based
corrections and adaptive learning rates. This strategy enhances the model's
dependability by enabling the extraction of consistently high-quality features.
This fusion strategy allows the model to interpret both coarse-grained and
abstract data by combining the information at several levels. As it is more
precise and efficient in digit recognition, its design is appropriate for
real-time use. The combination of technologies, including digital forensics,
computerised handwriting analysis, secure authentication methods, and word
recognition in scanned documents, benefits several disciplines. The model is
resilient and flexible enough to handle complex data and adapt to various input
patterns while maintaining strong recognition performance. This study
provides a foundation for future research on hybrid architecture, focusing on
the balance between accuracy and flexibility in pattern recognition. |
|
Keywords: |
Accuracy, Convolutional Neural Network (CNN), Deep Flexible Networks (DFEN),
Gradient, Secure Authentication. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
DEVELOPING AN ENSEMBLE OF NEURAL NETWORKS TO ASSESS OPERATIONAL RISKS TRIGGERED
BY EMPLOYEE ERRORS OR MISCONDUCT |
|
Author: |
EKATERINA CHUMAKOVA , DMITRY KORNEEV , MIKHAIL GASPARIAN , ANDREY PONOMAREV ,
ILIA MAKHOV |
|
Abstract: |
The article addresses operational risk management in credit institutions,
specifically risks arising from employee actions. It examines personnel impact
on the continuity of business processes as one of the four sources of
operational risk events. For example, insufficient employee qualifications can
lead to various losses for a credit organization. Consequently, one potential
approach to minimizing operational risks related to personnel activities is the
use of artificial intelligence technologies in developing tools for assessing
the criticality level of operational risks triggered by employee errors or
misconduct. The study aims at developing an intelligent system for preventive
monitoring of critical conditions in business processes caused by staff actions
or inaction to prevent operational risks. To achieve this objective, the
study analyzes professional and personal criteria for employee evaluation, their
impact on business processes, and accumulated statistical indicators. A general
structure for a business process status indication system is proposed, organized
according to a modular principle. Feedforward neural networks (NNs) are
suggested as modular system components. The article outlines the main data
streams feeding into the inputs of NNs and compares different NN models used
within each system module. The results can be used by credit institutions to
enhance operational efficiency by adopting new approaches to assessing the
criticality of operational risks associated with employee behavior. |
|
Keywords: |
Operational Risk, Personnel Actions, Artificial Neural Network, Machine
Learning, Feedforward Neural Network. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
UTILIZING FUZZY-IDENTITY-BASED ENCRYPTION WITH PROXY-RE-ENCRYPTION FOR DATA
SHARING |
|
Author: |
JIBIN JOY , DR. S. DEVARAJU , DR. J. RAMKUMAR |
|
Abstract: |
Deduplication of information ensures storage cost efficiency, since only a single
copy of the information is kept across its different variations. This has become
even more necessary because the world's information has been growing at an ever
greater pace. This research assesses data sharing in the cloud using
Fuzzy-Identity-Based Encryption with Proxy Re-Encryption (FuzzyIBE-PRE). The
researchers aim to identify an effective sharing mechanism for cloud users by
embracing the benefits of FuzzyIBE-PRE. FuzzyIBE-PRE shows that an open
access-control scheme can be used while maintaining the privacy of the
information. To that end, the paper provides a critical review of the available
literature on attribute-based encryption and proxy re-encryption, specifically
with regard to the fuzzy identity description. This is followed by running
FuzzyIBE-PRE on real-life assessments to determine how effective it is in an
actual cloud data-sharing application. The objective of the study is to gain an
understanding of secure, privacy-based data exchange in the cloud. |
|
Keywords: |
Fuzzy-Identity-Based-Encryption with Proxy-Re-Encryption (FuzzyIBE-PRE),
Fuzzy-Identity-Based-Encryption (FIBE), Media-Access-Control (MAC),
Information-Management-Table (IMT), Non-Volatile memory (NVM) |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
AN OPTIMIZED ENSEMBLE APPROACH FOR ALZHEIMER’S DISEASE DETECTION: INTEGRATING
GRADIENT BOOSTING TECHNIQUES WITH FEATURE SELECTION AND HYPERPARAMETER TUNING |
|
Author: |
T MAHA LAKSHMI, G SUDHAVANI, K RAMANJANEYULU, G V PRASANNA ANJANEYULU, LAVANYA
GOTTEMUKKALA, RUHISULTHANA SHAIK |
|
Abstract: |
Early and correct diagnosis of Alzheimer's disease (AD) is crucial for timely
intervention. The present research proposes E3-Boost, a new ensemble learning
approach that is effective in improving the early diagnosis of AD. The novelty of
the research lies in the unified integration of Gradient Boosting Machine (GBM),
Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine
(LightGBM), which together leverage the strengths of combined boosting algorithms
to enhance predictive performance. One of the main novelties is L1L2-FS, a hybrid
L1 (Lasso) and L2 (Ridge) regularization feature selection approach that, compared
with conventional practice, enables more efficient identification of relevant
features and removal of redundant ones. This improves both the interpretability
and the generalization capability of the model. Additionally, the methodology
leverages SMOTE for handling imbalanced classes and Z-score-based outlier
detection to guarantee excellent data quality. Another milestone is the
implementation of Optuna-based hyperparameter tuning through Bayesian Optimization
and Tree-structured Parzen Estimators (TPE), which iteratively optimizes model
performance to achieve a striking accuracy rate of 98.82%. The final model
realizes a substantial boost in accuracy (97.85%), recall (96.92%), and F1-score
(96.88%), and a lowered RMSE (0.1023), outperforming conventional ML and deep
learning models. In contrast to computationally intensive deep learning methods,
E3-Boost presents a computationally light solution with high predictive power,
well suited for real-world clinical applications. The most important innovation
rests on three pillars: L1L2-FS, a hybrid feature selector that merges Lasso's
sparsity and Ridge's resilience to collinearity; E3-Boost, the first ensemble that
brings together GBM, XGBoost, and LightGBM through dynamically weighted
meta-learning; and Optuna-TPE optimization, which automates hyperparameter tuning
with Bayesian-adaptive sampling. This triad fills crucial gaps in
interpretability, scalability, and reproducibility for clinical AI. |
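The components the abstract names can be assembled in a hedged sketch: elastic-net (L1+L2) feature selection, a stacked GBM/XGBoost/LightGBM ensemble, SMOTE rebalancing, and Optuna (TPE) tuning of one hyperparameter; the dataset and search space are illustrative, and this is not the authors' E3-Boost implementation.

```python
# Illustrative assembly of the named pieces (toy data, not the E3-Boost pipeline).
import optuna
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=40, weights=[0.8, 0.2], random_state=0)
X, y = SMOTE(random_state=0).fit_resample(X, y)

# L1 + L2 (elastic-net) regularized model used purely as a feature selector.
selector = SelectFromModel(
    LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, C=0.5, max_iter=5000)
)
X_sel = selector.fit_transform(X, y)

def objective(trial):
    lr = trial.suggest_float("learning_rate", 0.01, 0.3, log=True)
    ensemble = StackingClassifier(
        estimators=[
            ("gbm", GradientBoostingClassifier(learning_rate=lr, random_state=0)),
            ("xgb", XGBClassifier(learning_rate=lr, eval_metric="logloss", random_state=0)),
            ("lgbm", LGBMClassifier(learning_rate=lr, random_state=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    )
    return cross_val_score(ensemble, X_sel, y, cv=3).mean()

study = optuna.create_study(direction="maximize")      # TPE sampler is Optuna's default
study.optimize(objective, n_trials=10)
print("best accuracy:", study.best_value, "with", study.best_params)
```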
|
Keywords: |
E3-Boost, Alzheimer’s Disease Prediction, Ensemble Learning, Feature Selection,
SMOTE, Hyperparameter Tuning |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
EXTENDING THE THEORY OF PLANNED BEHAVIOR: THE ROLE OF INTRINSIC AND EXTRINSIC
MOTIVATIONS IN PREDICTING DIGITAL GAMING BEHAVIOR AMONG COLLEGE STUDENTS |
|
Author: |
WANG XIAODAN, AINI AZEQA MA’ROF, HASLINDA ABDULLAH, ZEINAB ZAREMOHZZABIEH, YAN
TANG |
|
Abstract: |
The purpose of this study was to enhance the Theory of Planned Behavior (TPB) by
integrating constructs from Self-Determination Theory (SDT), specifically
intrinsic and extrinsic motivation, to better predict digital gaming intentions
and behaviors among college students. A cross-sectional design was employed, and
data were collected from a sample of 850 Chinese college students using a
multi-stage cluster random sampling method. Structural equation modeling was
used to test the extended model and assess the predictive relationships between
motivational and cognitive factors influencing digital gaming behavior. The
findings indicated that attitudes toward gaming, subjective norms, and perceived
behavioral control significantly predicted the intention to engage in gaming,
which, in turn, predicted actual behavior. Intrinsic motivation had a stronger
impact on attitudes and perceived behavioral control than extrinsic motivation,
and it influenced behavior indirectly through serial mediation pathways. The
results also showed gender did not significantly affect gaming behavior. The
findings suggest that game designers and educators should prioritize intrinsic
motivational elements—such as autonomy, challenge, and relatedness—when
developing game-based learning tools. Tailored interventions based on students’
academic levels and geographic backgrounds are also recommended to enhance
engagement. This study extends the TPB by integrating motivational dimensions
from SDT, offering a more comprehensive framework for understanding
technology-related behaviors. It also demonstrates the value of multilevel
serial mediation in explaining digital engagement. By addressing both
psychological and contextual factors, the extended model provides new insights
into how youth adopt and engage with digital gaming technologies. |
|
Keywords: |
Theory of Planned Behavior, Intrinsic Motivation, Extrinsic Motivation, Digital
Gaming Behavior, Self-Determination Theory |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
ADAPTATION OF LOSS RECOVERY MECHANISMS FOR IMPROVING SCALABILITY AND QUALITY OF
SERVICES IN IOT NETWORKS |
|
Author: |
ABOURRICHE SAMIRA , ZYANE ABDELLAH , GHAMMAZ ABDELILLAH |
|
Abstract: |
Over the past decade, the Internet has undergone significant transformations in
various sectors such as healthcare, transportation, and the environment. These
developments are built upon the TCP/IP model protocols, a four-layer
architecture where the transport layer plays a crucial role in enabling Internet
communication and managing Quality of Service (QoS). This evolution spans from
the original ARPANET design to the emergence of Web 3.0, including Web 1.0 and
Web 2.0. It has enabled the development of new communication paradigms such as
the Internet of Things (IoT), which seeks to connect physical objects and
devices across diverse domains within a unified infrastructure, supporting
real-time monitoring and control [1]. As a result, IoT has become a leading
technology in multiple sectors, with connected devices projected to increase
from 30.7 billion in 2020 to 75.4 billion by 2025 [2]. This massive number of
connected things introduces numerous challenges, particularly in terms of QoS
and scalability, which present obstacles for IoT and, more specifically,
Machine-to-Machine (M2M) networks. The main objective of this paper is to provide
a comprehensive contribution toward implementing mechanisms for autonomous
scalability management and Quality of Service (QoS). The approach integrates
Transport Layer loss recovery protocols from the TCP/IP model within the
middleware layer of IoT networks. The outcome of this work is a new architecture
for the existing IoTScal approach, incorporating additional components to
simulate loss recovery mechanisms. This enhancement boosts the success rate of
e-Health traffic from 95% to 99.99%, delivering a clear and significant
improvement over the existing reference approach. |
|
Keywords: |
Quality of Service, Internet of Things, Scalability, M2M Networks, Loss Recovery
Mechanisms |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
OPTIMIZING SUSTAINABILITY IN SMART MICRO GRIDS: NATURE-INSPIRED MULTI-OBJECTIVE
ECONOMIC EMISSION DISPATCH MODELS |
|
Author: |
V. SAI GEETHA LAKSHMI, M. DEVIKA RANI , R BALUSULARAO MALUKURT, RAMAVATH
SRINIVAS, SRIKANTH KILARU |
|
Abstract: |
In order to prevent transmission losses and keep the electricity on
continuously, the micro grid systems generate and distribute power on a smaller
scale to a limited geographic area. With the goal of reducing air pollution
caused by burning fossil fuels, it has been standard practice to use renewable
energy sources (RES). Distributed energy resource (DER) sizing with fuel cost
minimization is the focus of economic load dispatch (ELD). Optimal sizing of DER
sources is accomplished via emission dispatch, which minimizes atmospheric
pollutants. For the best results in reducing fuel costs and pollution, a
multi-objective Combined Economic-Emission Dispatch (CEED) is used to size
distributed energy resources (DERs). Using a newly developed enhanced golden
jackal optimization (EGJO) algorithm, this paper executes all three phases,
namely ELD (economic load dispatch), ED (emission dispatch), and CEED. In this
study, four different situations using DER load sharing are examined on an
hourly basis. The suggested method's efficacy is confirmed by comparing the
findings to other newly created bio-inspired algorithms. Additional statistical
analysis, including a two-way ANOVA test, is carried out to demonstrate that the
suggested method outperforms the other optimization strategies. |
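The weighted-sum CEED objective underlying such studies can be illustrated numerically as below, using quadratic fuel-cost and emission curves for a toy three-unit system and a crude random search in place of EGJO; all coefficients and bounds are assumptions for illustration.

```python
# Toy combined economic-emission dispatch objective (assumed coefficients, not EGJO).
import numpy as np

# cost = a*P^2 + b*P + c ($/h), emission = d*P^2 + e*P + f (kg/h), one row per unit
cost_coef     = np.array([[0.004, 5.3, 500], [0.006, 5.5, 400], [0.009, 5.8, 200]])
emission_coef = np.array([[0.001, 2.0,  30], [0.002, 2.2,  25], [0.003, 2.4,  20]])
p_min, p_max = np.array([100.0, 80.0, 50.0]), np.array([450.0, 350.0, 225.0])
demand, w = 850.0, 0.5                       # MW load and cost/emission trade-off weight

def quad(coef, p):
    return coef[:, 0] * p**2 + coef[:, 1] * p + coef[:, 2]

def ceed(p):
    """Weighted-sum CEED objective for a dispatch vector p (one entry per unit)."""
    return w * quad(cost_coef, p).sum() + (1 - w) * quad(emission_coef, p).sum()

# Crude random search over dispatches that meet the demand (power balance constraint).
rng = np.random.default_rng(0)
best_p, best_val = None, np.inf
for _ in range(20000):
    p = rng.uniform(p_min, p_max)
    p *= demand / p.sum()                    # rescale to satisfy the total demand
    if np.all(p >= p_min) and np.all(p <= p_max) and ceed(p) < best_val:
        best_p, best_val = p, ceed(p)
print("dispatch (MW):", np.round(best_p, 1), "objective:", round(best_val, 1))
```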
|
Keywords: |
Bio-Inspired Algorithms, Micro Grid, Multi-Objective Combined
Economic-Emission Dispatch, Renewable Energy Resources. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
QUANTITATIVE BENCHMARKING AND CROSS-MODAL ANALYSIS OF DEEP LEARNING, MACHINE
LEARNING, AND BIOSENSOR FRAMEWORKS FOR EARLY COLORECTAL CANCER DIAGNOSIS AND
PROGNOSIS |
|
Author: |
DIVYA MIDHUNCHAKKARAVARTHY, G MUNI NAGAMANI, V LAKSHMAN NARAYANA |
|
Abstract: |
This study discusses colorectal cancer (CRC), a formidable malignancy in most
countries of the world, for which early detection is a boon for improving survival
rates. It is an empirical review of recent works that apply machine learning (ML),
deep learning (DL), and multi-omics-based biosensor systems toward CRC diagnosis
and prognosis. Within the field of Information Technology (IT), this research
addresses a fundamental requirement for the development of deployable healthtech
systems by providing a standardized, computationally grounded framework for
performance benchmarking. This framework allows for fair comparisons and
reproducibility. Unlike previous reviews, this review applies a six-metric
evaluation framework (accuracy, precision, recall, RMSE, inference delay, and
computational complexity) to benchmark models systematically across imaging and
non-imaging modalities. Hybrid models such as CMNV2, DeepCPD, and MACGAN achieved
classification accuracies exceeding 99%, with CMNV2 proving most effective at
99.95% and perfect recall on histopathological datasets. Furthermore,
transformer-based designs such as MLPFormer and MSNet outperformed baseline models
in segmentation tasks, improving Dice scores by 3-5%. By contrast, genomic and
survival models such as DeepSEA further enhance prediction with good
interpretability but show moderate performance (C-index ~0.71). Visual analytics,
including violin plots, heat maps, and correlation analyses, reveal performance
trends and the trade-offs between accuracy and model complexity. The paper
therefore establishes a high-resolution benchmarking map that informs model
selection according to application needs, ranging from polyp detection to survival
prediction. Future research directions are identified toward explainable,
lightweight multi-modal architectures and validation in multi-center prospective
clinical trials. |
|
Keywords: |
Colorectal Cancer, Deep Learning, Machine Learning, Diagnostic Modeling,
Survival Prediction, Scenarios. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
INTELLIGENT NETWORK INTRUSION DETECTION USING ENSEMBLE CLASSIFIER AND RELIEFF
WITH PROBABILISTIC OPTIMIZATION OF SIMULATED ANNEALING |
|
Author: |
SVSV PRASAD SANABOINA, Dr.M CHANDRA NAIK, Dr.K RAJIV |
|
Abstract: |
An Intrusion Detection System (IDS) analyzes Internet of Things (IoT) network
packets and reports security violations to the authorized system administrator.
These reports become unmanageable in larger networks, and existing systems suffer
from a high false-alarm rate. In order to address this
concern, an efficient intelligent automated IDS is proposed based on the ReliefF
with Probabilistic Optimization of Simulated Annealing (RPOSA) algorithm and
ensemble classification model. In the initial phase, the label encoder and
Z-score technique are applied on the NSL-KDD and UNSW-NB15 databases to
normalize feature categories and eliminate outliers. Furthermore, the optimal
intrusion attributes are selected from the pre-processed data using RPOSA
algorithm. In this algorithm, the ReliefF method is initially employed for
ranking the attributes based on their relevance to the target variable, and
further, the simulated annealing optimization algorithm is applied for searching
optimal attributes utilizing the ReliefF rankings. These selected optimal
intrusion attributes are fed into the Stacked Ensemble Classification (SEC)
model for efficient and accurate classification of intrusion attacks, and it
comprises three models: Deep Neural Network (DNN), logistic regression, and
Gradient Boost Decision Tree (GBDT). The RPOSA-SEC model obtained an impressive
classification accuracy of 99.37% and 99.52% on the NSL-KDD and UNSW-NB15
databases, respectively, for both multi-class and binary classification. |
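The search idea can be sketched as follows: attributes are ranked once by a relevance score (mutual information is used here as an available stand-in for ReliefF), and simulated annealing then flips attributes in and out of the subset, accepting worse moves with a cooling probability; the dataset and classifier are toy stand-ins, not the RPOSA-SEC pipeline.

```python
# Illustrative relevance-guided simulated annealing over feature subsets (toy stand-ins).
import math
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, n_informative=6, random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)       # stand-in for ReliefF ranking
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=2000), X[:, mask], y, cv=3).mean()
    return acc + 0.01 * relevance[mask].mean()               # bias toward highly ranked features

mask = relevance > np.median(relevance)                      # start from the top-ranked half
current = fitness(mask)
temperature = 1.0
for step in range(60):
    candidate = mask.copy()
    candidate[rng.integers(len(mask))] ^= True               # flip one attribute in or out
    cand_fit = fitness(candidate)
    if cand_fit > current or rng.random() < math.exp((cand_fit - current) / temperature):
        mask, current = candidate, cand_fit
    temperature *= 0.95                                      # cooling schedule
print(f"{mask.sum()} attributes selected, fitness ~ {current:.3f}")
```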
|
Keywords: |
Ensemble Classification Model, Internet of Things, Intrusion Classification,
Label Encoder, ReliefF, Simulated Annealing, Z-score Technique |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
ENSEMBLE MACHINE LEARNING AND SWARM INTELLIGENCE FOR ROBUST HEART DISEASE
PREDICTION: LEVERAGING THE POWER OF CUCKOO HARMONY OPTIMIZATION |
|
Author: |
G. V. S. N. R. V. PRASAD, CH. DEVI CHAITANYA |
|
Abstract: |
Heart disease is still a major worldwide health issue that demands the creation
of precise and effective predictive models. This study seeks to provide an
innovative approach for predicting cardiac disease through ensemble machine
learning techniques using swarm intelligence. In this research the risk of heart
disease was split into four separate groups using K-Means clustering applied on
the Cleveland Cardiovascular Disease dataset from the University of California,
Irvine: no risk, low risk, medium risk, and high risk. We examine several
machine learning techniques including Logistic Regression, Naive Bayes, Decision
Tree, Multilayer Perceptron, Support Vector Machine, K-Nearest Neighbors, Random
Forest, and CNN, each evaluated both with and without the Cuckoo Harmony
approach. The CNN model attains almost ideal performance (97% accuracy, 100%
precision, and 100% ROC-AUC), showing a notable increase in prediction accuracy
and resilience by means of swarm intelligence. Our results show that this
improvement is significant. The results of this work show that incorporating
advanced machine learning and statistical inference into heart disease risk
prediction is effective and opens the path for further developments in the
discipline of healthcare data analysis. |
|
Keywords: |
Heart Disease Prediction, Machine Learning, Swarm Intelligence, Cuckoo Harmony
Algorithm, Ensemble Learning. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
ADVANCING BIOMETRIC IMAGE GENERATION AND VERIFICATION WITH DEEP LEARNING-BASED
GENERATIVE AI METHODS |
|
Author: |
SHANTHI PANNALA, Dr B SATEESH KUMAR |
|
Abstract: |
This study introduces a novel biometric authentication framework integrating
GenBio-Net, BioSynth-VerifyNet, SecureGen-ID, and Ethical-BioGuard into a
unified system capable of achieving an accuracy of up to 98.5% on synthetic and
real fingerprint datasets. The proposed approach addresses critical challenges
in generative biometric security, privacy-preserving verification, and ethical
usage policies. Our framework synthesizes biometric data using advanced
generative modeling, enhances verification robustness with multimodal learning,
and incorporates federated learning for secure deployment. Experimental results
on benchmark and synthetically generated fingerprint datasets demonstrate
superior performance over recent state-of-the-art methods in terms of accuracy,
false acceptance rate (FAR), and false rejection rate (FRR). Comparative
analysis shows consistent improvements of 3–6% across metrics, highlighting the
system’s applicability to high-security authentication environments. |
|
Keywords: |
Generative AI, Deep Learning, Biometric Image Synthesis, Biometric Validation,
Generative Adversarial Networks (GANs), Diffusion Models, Identity Verification,
Anomaly Detection |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
FedDNN-LDP: PRIVACY-PRESERVING FEDERATED RECOMMENDER SYSTEM USING DEEP
NEURAL NETWORK AND LOCAL DIFFERENTIAL PRIVACY |
|
Author: |
THENMOZHI GANESAN , PALANISAMY VELLAIYAN |
|
Abstract: |
A federated recommender system utilizes federated learning to keep user data local
rather than in a centralized system, by training on intermediate parameters
instead of raw user data. Deep neural networks are currently gaining significant
attention in recommender systems due to their efficiency in processing massive
training samples and capturing intricate user-item interactions. However, global
sharing of the entire user-item interaction matrix in a centralized network is
prevented and limited by privacy concerns. To address this gap, several integrated
techniques use pseudo-interactions of users and items in the neural network to
compensate for the missing values of each user and item, which adds random noise
to the model and raises privacy threats in the network. This research proposes
FedDNN-LDP, a novel federated deep neural recommendation system in which a
centralized deep neural network is formed and maintained on the server side to
model intricate user-item interactions. On the client side, to keep the server
from learning users' local data, pseudo-interaction is applied to the user data
for data obfuscation. Furthermore, to protect local user-item interactions and
obscure intermediate gradient parameters, an integration of pseudo-labeling and
local differential privacy is exploited. Extensive experiments were performed on
two real-world benchmark datasets, MovieLens 100K and MovieLens 1M, and the
experimental results show that the proposed model outperforms existing competitive
approaches while ensuring privacy preservation, with the following improvements:
NDCG (15.47%), precision (30.06%), and recall (28.66%) for the 100K dataset, and
NDCG (32.54%), precision (58.38%), and recall (40.86%) for the 1M dataset.
Further, root mean squared error, mean absolute error, training and validation
loss, and hit ratio are measured to demonstrate the effectiveness of the proposed
model in terms of performance and privacy preservation. |
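The client-side local differential privacy step can be illustrated with a minimal sketch in which each client clips its gradient update and adds Laplace noise before transmission; the clipping bound and privacy budget are assumptions, and the recommender network and pseudo-labeling are not shown.

```python
# Minimal client-side LDP perturbation of a gradient update (assumed clip and epsilon).
import numpy as np

def ldp_perturb(gradient, clip_norm=1.0, epsilon=1.0, rng=np.random.default_rng(0)):
    """Clip to bound sensitivity, then add Laplace noise whose scale grows as epsilon shrinks."""
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.laplace(loc=0.0, scale=clip_norm / epsilon, size=gradient.shape)
    return clipped + noise

# A client's true gradient for one round (toy values), and what the server receives.
true_grad = np.array([0.20, -0.45, 0.10, 0.05])
print("sent to server:", np.round(ldp_perturb(true_grad), 3))
```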
|
Keywords: |
Federated Recommender System, Deep Neural Network, Pseudo Labeling, Local
Differential Privacy, Recommender System and Privacy Preservation. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
BAND SELECTION USING COEFFICIENT OF VARIATION AND BAND DENSITY RANKING IN
HYPERSPECTRAL IMAGE |
|
Author: |
O. SUBHASH CHANDER GOUD, T. HITENDRA SARMA, C. SHOBA BINDU |
|
Abstract: |
In Hyperspectral Image (HSI) processing, one of the critical challenges is
addressing Dimensionality Reduction through Feature Selection, especially given
the high volume of spectral bands and often limited labeled data. This study
introduces an innovative Band Subset Selection (BSS) technique that employs a
Ranking-Based Approach to tackle this problem efficiently. The proposed approach
is distinguished by its unsupervised nature, leveraging the fusion of two
essential statistical measures: Coefficient of Variation (CV) and Band Density
(BD). By synergistically combining these metrics, each band in the HSI dataset
is analyzed, ranked, and subsequently filtered, allowing the model to identify
an optimal subset of bands with the most relevant spectral information. This
curated Band Subset (BS) method, termed CV-BDS-BS, is meticulously compared
against an existing ranking procedure called SAM-SC (Spectral Angle Mapper with
Spatial Coherence). Both methods undergo rigorous evaluation using
state-of-the-art machine learning algorithms to ensure the efficiency,
robustness, and reliability of the dimensionality reduction process. This
integrated CV-BDS-BS methodology streamlines HSI data by reducing dimensionality
and preserving essential spectral and spatial information. |
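The ranking idea can be sketched as follows: each band is scored by its coefficient of variation and by a simple density measure, and the two rankings are fused to pick a top-k subset; the histogram-peak density proxy and the fusion rule are assumptions rather than the paper's exact CV-BDS-BS definition.

```python
# Illustrative unsupervised band ranking by coefficient of variation plus a density proxy.
import numpy as np

def band_scores(cube):
    """cube has shape (rows, cols, bands); returns one fused rank score per band."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    mean = pixels.mean(axis=0)
    cv = pixels.std(axis=0) / (np.abs(mean) + 1e-12)              # coefficient of variation
    density = np.array([                                           # crude band-density proxy
        np.histogram(pixels[:, b], bins=32, density=True)[0].max()
        for b in range(pixels.shape[1])
    ])
    rank = lambda v: v.argsort().argsort()                         # higher value -> higher rank
    return rank(cv) + rank(density)                                # fuse the two rankings

def select_bands(cube, k):
    return np.argsort(band_scores(cube))[::-1][:k]                 # top-k fused-rank bands

rng = np.random.default_rng(0)
hsi = rng.normal(size=(50, 50, 120))                               # toy hyperspectral cube
print("selected bands:", select_bands(hsi, k=10))
```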
|
Keywords: |
Dimensionality Reduction, Feature Selection, Band Subset Selection, Coefficient
of Variation, Band Ranking. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
LEARNING PATHWAYS OF HIGH-LEVEL ARGUMENTATION SKILLS IN ASYNCHRONOUS ONLINE
DISCUSSION FORUM VIA EDUCATIONAL DATA MINING |
|
Author: |
SITI NAZLEEN ABDUL RABU , SITI KHADIJAH MOHAMAD , SITI NUR BAHIYAH KHALID |
|
Abstract: |
This study explores the learning pathways of high-level argumentation skills
among undergraduate students participating in an asynchronous collaborative
online discussion forum. Toulmin’s Model of Argumentation, with its six elements
is employed to identify dominant types of argument contributions, while Peter
and Wilson’s triads framework is used to assess the levels of argumentation. To
uncover the pathways of argumentation skills, this study applies the data mining
tool WEKA. Using a qualitative case study approach, the research combines
content analysis and educational data mining to derive meaningful insights. A
sample of 45 undergraduate students enrolled in an instructional multimedia
course participated, with students grouped into nine small groups of five. The
results show that while students were expected to explain their ideas with
strong reasoning, most of their arguments remained basic. Evidence (36.5%) and
Warrant (34.6%) were the most common contributions, as students often shared
facts to support their opinions and attempted to connect those facts to their
main points. However, more advanced argumentation skills, such as Rebuttal (i.e.,
questioning or challenging others’ views), were rarely observed. Overall, 91.6% of
student responses were classified as low-level, consisting mainly of simple
opinions or explanations rather than deep, critical reasoning. Interestingly,
not all groups performed equally. One group, G8_Skinner, demonstrated stronger,
higher-level arguments. Students in this group participated more actively,
contributed detailed explanations, and included supporting elements like Backing
and Qualifiers, suggesting that active engagement and deeper reasoning go hand
in hand. To explore how students can achieve stronger argumentation, a decision
tree classifier was applied, identifying five key predictors of high-level
performance: Add Post, Warrant, Backing, Qualifier, and Claim. The results show
that students who posted more frequently, justified their reasoning, provided
extra support, and used careful language were more likely to produce convincing
arguments. However, the overall lack of rebuttals and limited critical
engagement across most groups indicate a need for better instructional support
to help students feel confident questioning ideas and building more complex
arguments. These findings offer practical implications for educators, course
designers, and educational technologists aiming to enhance argumentation,
critical thinking, and student engagement in online learning environments. |
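The decision-tree step described above can be illustrated with a small sketch. The study applies WEKA, so the scikit-learn tree below is only an analogous example over the five named predictors, trained on hypothetical counts rather than the study's data.

# Illustrative only: a decision tree predicting high- vs low-level argumentation
# from the five predictors named in the abstract, on a toy, made-up data frame.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["AddPost", "Warrant", "Backing", "Qualifier", "Claim"]

# Toy counts per student: posts made and Toulmin elements used; last column is
# a hypothetical high-level label.
df = pd.DataFrame(
    [[12, 5, 3, 2, 6, 1], [3, 1, 0, 0, 2, 0], [9, 4, 2, 1, 5, 1], [2, 0, 0, 0, 1, 0]],
    columns=features + ["HighLevel"],
)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(df[features], df["HighLevel"])
print(export_text(clf, feature_names=features))  # inspect the learned rules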
|
Keywords: |
Argumentation, Online Discussion Forum, Decision Tree Data Mining,
Self-regulation, Reflection, Learning Analytics |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
DIGITALIZATION OF HIGHER EDUCATION AS A TOOL FOR IMPROVING HUMANITARIAN SECURITY |
|
Author: |
OLEKSANDR HORBAN, MYKOLA STADNYK, LARYSA TARASIUK, RUSLANA MARTYCH, SVITLANA
VINTONIV-BAKHARIEVA, LEONID PANASIUK |
|
Abstract: |
Digitalization of the educational process provides access to quality knowledge
and affects the development of thinking. Combining the foundations of humanitarian security with digital technologies makes it possible to take into account the characteristics of modern society. The aim of the study is to determine the
effectiveness of digitalization of higher education as a tool for strengthening
humanitarian security. The research employed the following methods: analytical
comparison, survey, observation, Likert scale, SWOT analysis, Student’s t-test.
Digitalization of the educational process involved the Open edX application for developing theoretical knowledge, the Zoom platform for expanding students’ pedagogical skills, and the Graasp platform for developing students’ creative abilities. It was established that the use of digital technologies in the educational process contributed most to such humanitarian security components as the ability to acquire academic knowledge (98%) and the development of creative skills (97%). The results
showed that after the study, students achieved gains in critical thinking (98%), digital literacy (99%), and emotional and psychological resilience (97%). The established indicators contribute to the development of
humanitarian security. A SWOT analysis showed the predominance of strengths in involving digital technologies in the educational process to maintain humanitarian security. The practical significance of the article lies in the
identified effective tools for implementing interactive learning for future
teachers to improve humanitarian security. The research prospects are aimed at
comparing traditional and digital approaches to education in order to determine
their impact on the development of humanitarian security. |
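As a brief illustration of the Student's t-test listed among the methods, the sketch below runs a paired comparison of pre- and post-intervention Likert scores on synthetic data; the abstract does not specify which comparison the authors performed, so this is only an example of the technique.

# Hedged illustration of a Student's t-test on Likert-scale data; the scores
# here are synthetic stand-ins, not the study's survey responses.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
pre = rng.integers(2, 5, size=40)                        # 1-5 Likert responses before the course
post = np.clip(pre + rng.integers(0, 2, size=40), 1, 5)  # responses after the course

t_stat, p_value = ttest_rel(post, pre)                   # paired comparison
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")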
|
Keywords: |
Digital Applications, Creative Opportunities, Self-Control, Educational Goals,
Social Responsibility, Digitalization, Humanitarian Security. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
BLOCKCHAIN FOR SECURE AND TRANSPARENT VOTING SYSTEMS IN DEMOCRATIC ELECTIONS |
|
Author: |
D. LEELA DHARANI, MEDARAMETLA ANUSHA RANI, DR PRITI M BIHADE, DR O. RAMA DEVI, D
VENKATA RAVI KUMAR, SIVAGANGA BADIPATI, DR S. SATHISH KUMAR, CHETLA CHANDRA
MOHAN |
|
Abstract: |
For democratic processes to be fair, the mechanisms for voting must be kept honest and transparent. Conventional voting methods, which rely on physical presence, are plagued by threats such as vote tampering and fraud that undermine voters’ confidence. This work introduces a novel blockchain voting
scheme which provides a decentralized, immutable, and secure environment to
alleviate these problems. The solution combines biometric authentication for
voters to confirm their identity, smart contracts for automatically counting
votes, and a blockchain ledger to ensure that the voting records cannot be accessed or modified by unauthorized parties.
A detailed simulation using synthetic data ranging from 1,000 to 100,000 voters shows that the system efficiently provides vote integrity, authentication strength, scalability, and low latency. Experimental results have demonstrated that the proposed
design can achieve 100% vote integrity, 99.8% authentication accuracy and much
lower polling time than traditional e-voting systems. It also maintains
real-time transparency using a blockchain ledger that can be used to audit voted
ballots. This system provides a more secure, scalable, and transparent alternative to trust-based voting methods. It can be used in future democratic elections to
increase trust in the electoral process and decrease the possibility of fraud. |
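The hash-chained ledger idea underlying the proposed scheme can be sketched as follows. This minimal Python example shows an append-only, auditable chain of ballots; the biometric authentication and smart-contract tallying described in the abstract are represented only by placeholders.

# Minimal sketch of an append-only, hash-chained ballot ledger. The voter_token
# field stands in for a verified biometric credential; tallying is omitted.
import hashlib, json, time

class VoteLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "ballot": None,
                       "ts": time.time(), "hash": "genesis"}]

    def _hash(self, block: dict) -> str:
        payload = json.dumps({k: block[k] for k in ("index", "prev", "ballot", "ts")},
                             sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def cast(self, voter_token: str, candidate: str) -> dict:
        block = {"index": len(self.chain), "prev": self.chain[-1]["hash"],
                 "ballot": {"voter": voter_token, "candidate": candidate},
                 "ts": time.time()}
        block["hash"] = self._hash(block)
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Audit: every block must hash correctly and link to its predecessor."""
        return all(b["prev"] == self.chain[i - 1]["hash"] and b["hash"] == self._hash(b)
                   for i, b in enumerate(self.chain) if i > 0)

ledger = VoteLedger()
ledger.cast("token-001", "candidate-A")
print(ledger.verify())   # True while the ledger is untampered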
|
Keywords: |
Blockchain, Voting System, Biometric Authentication, Smart Contracts,
Transparency, Scalability |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
MQTT-ENABLED IOT FRAMEWORK FOR EFFICIENT HEALTHCARE MONITORING AND PREDICTION
SYSTEM |
|
Author: |
AFROZ PASHA, P S PRASAD, RIYAZULLA RAHMAN J, IMPA B H, SONIA MARIA D'SOUZA,
BHAVANA A |
|
Abstract: |
The Internet of Things (IoT) offers clinically ill subjects improved care and new opportunities across a range of applications. Owing to limited healthcare infrastructure, rural residents have restricted access to diagnostic and nursing services; as a result, when heart failure develops, people frequently fail to seek help or use these services. IoT therefore provides an essential benefit in managing cardiac issues. This paper presents a novel stacked ensemble-based machine learning algorithm combined with a cardiovascular (CV) monitoring model based on the electrocardiogram (ECG) and integrated with IoT. The proposed framework involves three core steps: data acquisition, data transmission via the IoT-based ThingSpeak cloud platform, and CV prediction. The ECG signal properties P, Q, R, S, and T are extracted, the data are pre-processed using the Pan-Tompkins algorithm, and the result is supplied to the CV prediction stage. For future health management, the patient's age level is also forecast. Hypertext Transfer Protocol (HTTP) and Message Queuing Telemetry Transport (MQTT) servers store the ECG data in the cloud. The error rate and the impact of ECG characteristics are determined using the Stacked Ensemble-based Machine Learning (SEML) algorithm. The ECG monitoring device exploits PQRST regularity and achieves acceptable outcomes. For CV patients, the proposed methodology is cost-effective and achieves 95% prediction accuracy. |
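A simplified sketch of the Pan-Tompkins pre-processing step mentioned above is given below, assuming a single-lead ECG sampled at a known rate; the paper's exact parameters and the MQTT/ThingSpeak transmission layer are not reproduced.

# Simplified Pan-Tompkins-style R-peak detection: band-pass, derivative,
# squaring, moving-window integration, then peak picking. Parameters are
# typical textbook values, not the authors' settings.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg: np.ndarray, fs: int = 360) -> np.ndarray:
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)                      # band-pass 5-15 Hz
    deriv = np.diff(filtered, prepend=filtered[0])      # emphasize QRS slope
    squared = deriv ** 2                                # rectify and amplify
    window = int(0.150 * fs)                            # 150 ms integration window
    integrated = np.convolve(squared, np.ones(window) / window, mode="same")
    peaks, _ = find_peaks(integrated,
                          height=integrated.mean() + integrated.std(),
                          distance=int(0.25 * fs))      # ~250 ms refractory period
    return peaks

if __name__ == "__main__":
    fs = 360
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # synthetic signal only
    print(detect_r_peaks(ecg, fs)[:5])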
|
Keywords: |
Heart Disease, Stacked Ensemble Machine Learning (SEML), Cardio Vascular, ECG
signal and Thingspeak cloud platform. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
THE USE OF DIGITAL TECHNOLOGIES IN THE PROCESSES OF SOCIAL GROUPS CONSOLIDATION
AS A TOOL FOR ENSURING STATE SECURITY |
|
Author: |
EDUARD PRYS |
|
Abstract: |
Digital technologies can serve as a pivotal element in strengthening social
consolidation, which entails the cohesion of diverse social groups united by a
single goal, particularly the enhancement of state security. The purpose of this
study was to evaluate the synergistic influence of social consolidation and
social integration within the digital environment, stemming from the
interrelated actions of society and government concerning the state security
level. The methods of correlation, variance and regression analysis are employed
in the work. The study findings illuminated the correlation between indicators
of social integration in the digital environment, social consolidation,
institutional prerequisites, and state security. The regression analysis
demonstrated, firstly, that the degree of social integration in the digital
sphere is significantly contingent upon institutional prerequisites, with the
model’s explanatory power reaching 58.57%. Secondly, the analysis revealed that
the synergistic effect of social integration and social consolidation partially
elucidates the level of state security, with an explanatory capacity of 31.92%.
The results obtained may be instrumental in establishing a foundation for the
formulation of policies aimed at harmonizing digital transformation with
strengthening social stability. |
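As an illustration of the regression logic reported above, the sketch below fits an ordinary least squares model on synthetic stand-in variables and reads off R-squared, the analogue of the quoted explanatory power; the variables and data here are placeholders, not the study's indicators.

# Hedged OLS illustration: R-squared plays the role of the "explanatory power"
# figures quoted in the abstract (58.57% and 31.92%). Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
institutional = rng.normal(size=200)                      # stand-in: institutional prerequisites index
integration = 0.8 * institutional + rng.normal(scale=0.7, size=200)

X = sm.add_constant(institutional)
model = sm.OLS(integration, X).fit()
print(f"R^2 = {model.rsquared:.4f}")                      # analogue of explanatory power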
|
Keywords: |
Digital Technologies, Consolidation of Social Groups, State Security, Social
Integration, E-Governance, Social Trust, Budget Transparency, Independence of
Courts. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
ADDRESSING KEY MANAGEMENT AND DATA INTEGRITY IN IIOT BASED HEALTHCARE: A
BLOCKCHAIN APPROACH USING FERNET ENCRYPTION AND MERKLE ROOT PROOF OF WORK |
|
Author: |
KUMAR M P, AKILA A |
|
Abstract: |
A blockchain based healthcare system in the Industrial Internet of Things (IIoT)
improves transparency and interoperability by decentralizing patient data access
and storage. It ensures tamper-proof records by enabling secure and seamless
sharing of healthcare information among stakeholders and connected devices.
However, IIoT devices are vulnerable to security issues such as unauthorized access arising from improper key management, which makes it easy for attackers to steal data. To overcome this issue, the Fernet Encryption Algorithm with Merkle Root Proof of Work (FEA-MRPoW) is proposed for blockchain-based healthcare systems in the IIoT to increase security through strong data encryption, which prevents unauthorized access and preserves data confidentiality. The Merkle root structure provides the integrity of healthcare records, whereas PoW protects block validation against tampering. This integration therefore delivers robust performance and ensures the safety of healthcare data in IIoT environments. The proposed FEA-MRPoW achieves a lower encryption time of 0.298 s for 10 attributes compared with existing methods such as the Diagonal Digital Signature Algorithm (DDSA) with Merkle Patricia Hash Trie (MPHT). |
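The two building blocks named in the title can be sketched briefly: Fernet encryption of records and a SHA-256 Merkle root with a toy proof-of-work nonce search. The difficulty level, block layout, and key handling below are illustrative assumptions, not the authors' FEA-MRPoW design.

# Sketch of Fernet-encrypted records committed to a Merkle root, plus a toy PoW.
import hashlib
from cryptography.fernet import Fernet

def merkle_root(leaves: list[bytes]) -> str:
    level = [hashlib.sha256(x).hexdigest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [hashlib.sha256((level[i] + level[i + 1]).encode()).hexdigest()
                 for i in range(0, len(level), 2)]
    return level[0]

def mine(root: str, difficulty: int = 4) -> int:
    """Find a nonce so that sha256(root + nonce) starts with `difficulty` zeros."""
    nonce = 0
    while not hashlib.sha256(f"{root}{nonce}".encode()).hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce

key = Fernet.generate_key()                 # key distribution is out of scope here
f = Fernet(key)
records = [f.encrypt(b"patient-42: HR=78"), f.encrypt(b"patient-42: SpO2=97")]
root = merkle_root(records)
print(root, mine(root))                     # ledger commitment + a valid nonce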
|
Keywords: |
Blockchain, Fernet Encryption Algorithm with Merkle Root Proof of Work,
Healthcare, Industrial Internet of Things, Unauthorized Access. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
NEURAL NETWORK-BASED ENSEMBLE APPROACH TO DETECT INFORMED ATTACKS FOR MULTIPLE
TARGETS IN RECOMMENDER SYSTEMS |
|
Author: |
ASHISH KUMAR, YUDHVIR SINGH |
|
Abstract: |
Recommender systems (RSs) based on collaborative filtering (CF) are frequently
used to deliver personalized services. However, profile injection attacks can
easily affect them. As a result, attackers can easily manipulate the outcomes of
these RSs. Informed attacks, which are also a type of profile injection attack,
are very challenging to identify due to their high resemblance to genuine user
profiles. Very limited research has been carried out to identify informed attack profiles, so there is a significant research gap for a technique that can identify these profiles with good accuracy. In our experiment, we proposed a new
data partition scheme for better training and testing of machine learning
models. We injected informed attacks to promote and demote a specific item and a
set of 10 items in the MovieLens dataset, and we proposed a novel neural
network-based ensemble approach. Its performance is evaluated based on accuracy,
precision, and recall by comparing it with that of a classical voting-based
ensemble model (CVBEM), along with other supervised machine learning models in
the detection of informed attacks. Robustness of performance is ensured by using
k-fold cross-validation. We conducted our experiment in 24 attack scenarios with
varying types of attacks, intentions, target item sizes, and attack sizes. Our
study found that the proposed model's accuracy outperforms the other models'
accuracy by a good margin of nearly 4% in most of the test scenarios. The CVBEM
comes out as the second-best performer among all. The proposed model not only performs better in predicting biased users but is also more stable than traditional machine learning models. |
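The neural ensemble idea can be illustrated with a small sketch in which several multilayer perceptrons are soft-vote averaged; the features and data below are synthetic placeholders rather than the MovieLens-derived attack profiles used in the study.

# Illustrative neural ensemble: three small MLPs whose predicted probabilities
# are averaged (soft voting) to label profiles as attack vs genuine.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                      # stand-in rating-profile statistics
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

members = [MLPClassifier(hidden_layer_sizes=h, max_iter=1000, random_state=i)
           for i, h in enumerate([(16,), (32, 16), (8, 8)])]
for m in members:
    m.fit(X_tr, y_tr)

proba = np.mean([m.predict_proba(X_te)[:, 1] for m in members], axis=0)
pred = (proba >= 0.5).astype(int)
print("ensemble accuracy:", (pred == y_te).mean())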
|
Keywords: |
Informed Attacks, Ensemble Model, Recommender System, Probe Attacks, Power User
Attacks, Neural Network |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
BIG DATA AND ARTIFICIAL INTELLIGENCE REVOLUTIONIZING FINANCIAL FRAUD DETECTION
SYSTEMS |
|
Author: |
CHALLAPALLI SUJANA, A. SASI HIMABINDU, DR. DIVVELA SRINIVASA RAO, SASIKALA
RASAMSETTY, ARUNKUMAR M S, P. MUTHUKUMAR, SATHISH KUMAR SHANMUGAM |
|
Abstract: |
Detecting financial fraud is a significant challenge, as fraudulent activities occur at high frequency. The model
of Big Data analytics hybrid with Artificial Intelligence, which comprises a
convolutional neural network (CNN) and a long short-term memory (LSTM) network,
is presented in the current paper. The aim is to enhance the detection rate and
minimize the rates of false positives and false negatives, especially when
fraudulent transactions are identified. The proposed system uses CNNs as feature extractors together with LSTMs that model temporal dependence in the transaction data. The synthetic dataset on which it was tested was designed to emulate real-life financial transactions, and the model outperformed conventional machine-learning algorithms such as Random Forest, SVM, and Gradient Boosting (accuracy 96.2%, precision 95.2%, recall 92.6%). The findings indicate that
our hybrid CNN-LSTM solution is feasible for carrying out fraud detection with a
relatively low false positive rate, which is quite significant in preventing
customer inconvenience. The implications of this model are powerful, as it
provides the financial industry with a real-time, scalable, and efficient
solution to prevent fraud, streamline business procedures, and foster customer
trust in the industry. |
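A minimal sketch of the hybrid architecture is shown below, with a Conv1D feature extractor feeding an LSTM over fixed-length transaction windows; the layer sizes, window length, and feature count are assumptions for illustration, not the paper's configuration.

# Sketch of a hybrid CNN-LSTM classifier over windows of 30 transactions with
# 8 features each; trained briefly on random data only to confirm the shapes.
import numpy as np
from tensorflow.keras import layers, models

def build_cnn_lstm(timesteps: int = 30, n_features: int = 8) -> models.Model:
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.Conv1D(32, kernel_size=3, activation="relu"),  # local feature extraction
        layers.MaxPooling1D(2),
        layers.LSTM(64),                                      # temporal dependence
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),                # fraud probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_cnn_lstm()
X = np.random.rand(64, 30, 8).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:2], verbose=0))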
|
Keywords: |
Financial Fraud Detection, Big Data Analytics, Artificial Intelligence,
Convolutional Neural Network, Long Short-Term Memory, Fraud Classification |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
Title: |
IMPROVED ATTENTION-BASED RCNN SEGMENTATION AND ENSEMBLE CLASSIFIER FOR LUNG
CANCER CLASSIFICATION AND SEVERITY LEVEL ASSESSMENT USING CT IMAGE |
|
Author: |
A NAGA KALYANI, VIJAYA KUMAR V |
|
Abstract: |
Lung cancer (LC) is a devastating disease that affects many countries across the world, and early identification remains difficult. Oncologists analyze the cancer using blood specimens and CT images, which is time-consuming and requires additional human effort. To minimize mortality, an automated approach for
identifying and assessing lung cancers should be developed. To address this, a
new approach is proposed in this paper for LC classification (LCC), which
initially employs an Averaged Gaussian-Median Filtering (AG-MF) model for
pre-processing the CT image. In the next phase, a segmentation is performed with
an Improved Attention Layer-Based Mask RCNN (IAL-MRCNN). Subsequently, features such as deep, entropy, statistical, and color features are derived. Detection and severity classification are then performed using an ensemble classifier combining Improved AlexNet, Bi-LSTM, and SqueezeNet to accurately classify lung tumors and assess their severity. The analysis confirms the classification performance, with the improved AlexNet achieving a precision of 0.956 at 90% training data. |
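The pre-processing stage can be sketched as follows, interpreting the averaged Gaussian-median filter as the pixel-wise mean of a Gaussian-smoothed and a median-filtered CT slice; since the abstract does not specify the combination rule, this weighting is an assumption.

# Sketch of an "averaged Gaussian-median" pre-processing filter for a CT slice.
# The 0.5/0.5 weighting is an assumed combination rule for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def averaged_gaussian_median(ct_slice: np.ndarray,
                             sigma: float = 1.0, size: int = 3) -> np.ndarray:
    g = gaussian_filter(ct_slice.astype(np.float32), sigma=sigma)   # suppress Gaussian noise
    m = median_filter(ct_slice.astype(np.float32), size=size)       # suppress impulse noise
    return 0.5 * (g + m)                                            # assumed averaging rule

if __name__ == "__main__":
    noisy = np.random.rand(128, 128).astype(np.float32)             # synthetic noisy slice
    print(averaged_gaussian_median(noisy).shape)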
|
Keywords: |
Lung cancer; Averaged Gaussian-Median Filtering; IAL-MRCNN; SqueezeNet; Improved
AlexNet. |
|
Source: |
Journal of Theoretical and Applied Information Technology
30th September 2025 -- Vol. 103. No. 18-- 2025 |
|
Full
Text |
|
|
|