|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, from basic research to
the most innovative technologies. Please submit your papers electronically
through our submission system at http://jatit.org/submit_paper.php in MS Word,
PDF, or a compatible format so that they may be evaluated for publication in an
upcoming issue. This journal uses a blinded review process; please include all
personally identifiable information in the manuscript when submitting it for
review, and we will remove the necessary information on our side. Submissions to
JATIT should be full research / review papers (properly indicated below the main
title).
|
|
|
Journal of
Theoretical and Applied Information Technology
February 2024 | Vol. 102 No. 3 |
Title: |
UNVEILING TRUST DYNAMICS: A NOVEL EXAMINATION INTO INFLUENTIAL FACTORS OF
INDONESIAN C2C SOCIAL MARKETPLACES |
Author: |
JUAN DANIEL WIJAYA, NILO LEGOWO |
Abstract: |
Facebook Marketplace is one of the biggest social-media-based marketplaces in
the world, including in Indonesia. With many transactions occurring in
Indonesia, users weigh several aspects when deciding whether to trust Facebook
Marketplace and the sellers within it. This study therefore aims to analyze and
determine the factors that influence a person's trust when transacting on
Facebook Marketplace. The study uses a quantitative method, collecting data by
distributing questionnaires to 151 respondents via Google Forms; the data are
then processed and analyzed using the Partial Least Squares Structural Equation
Modeling (PLS-SEM) method. The findings reveal that Credibility, Payment
Options, Price, and Ease of Use positively impact Satisfaction. Moreover,
Satisfaction, along with Privacy and Word of Mouth, positively influences Trust.
However, the research shows that Security and Brand Awareness do not
significantly affect Trust and may not be key determinants of user trust.
Overall, this research provides valuable insights into the factors influencing
users' trust in Facebook Marketplace in Indonesia. The results highlight the
significance of Credibility, Payment Options, Price, Ease of Use, Satisfaction,
Privacy, and Word of Mouth in shaping users' trust. The results can guide
Facebook Marketplace and other social marketplace platforms in improving
security measures and brand awareness in order to strengthen users' trust. |
Keywords: |
C2C, Social Marketplace, Social Commerce, Satisfaction, Trust, Indonesia |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
ENHANCING ANDROID MALWARE DETECTION THROUGH ENSEMBLE STACKING CLASSIFIERS AND
REGULARIZATION-BASED FEATURE SELECTION |
Author: |
ELISA BAHARA SORITUA, DITDIT NUGERAHA UTAMA |
Abstract: |
With the persistent evolution of Android malware, advanced detection techniques
have become paramount. This paper introduces a novel approach to Android malware
detection, harnessing the power of ensemble stacking classifiers combined with
regularization-based feature selection techniques, specifically Lasso, Ridge,
and Elastic Net, applied to the Drebin-215 dataset. Using base classifiers
including Random Forest, K-Nearest Neighbors, Support Vector Machine, Logistic
Regression, and Bernoulli Naive Bayes, with Logistic Regression as the meta
classifier, our methodology aims to capture the collective strengths of diverse
algorithms. Initial results from the individual classifiers laid the foundation,
upon which the ensemble model improved detection rates. Implementing the
regularization-based feature selection significantly enhanced the model's
efficiency, leading to improved classification accuracy. Compared to traditional
methods, our proposed system offers a notable enhancement in malware detection
capabilities, providing a resilient solution to the prevailing challenges in
Android security. This study underscores the potential of integrating machine
learning-driven ensembles with advanced feature selection techniques for
bolstered security measures. |
Keywords: |
Android Malware Detection, Machine Learning, Ensemble Stacking, Genetic
Algorithm. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
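For readers who want a concrete picture of the pipeline this abstract describes, the following Python sketch (not the authors' code) wires an L1-regularised feature selector into a stacking ensemble with scikit-learn; the synthetic 215-feature dataset merely stands in for Drebin-215, and the L1 logistic selector stands in for the Lasso/Elastic Net step.

```python
# Hedged sketch: regularization-based feature selection feeding a stacking ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the Drebin-215 feature matrix (benign vs. malware).
X, y = make_classification(n_samples=2000, n_features=215, n_informative=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# L1-penalised selector standing in for the Lasso / Elastic Net selection step.
selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("svm", SVC()),
        ("nb", BernoulliNB()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta classifier
)

model = make_pipeline(selector, stack)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```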
|
Title: |
THE PERFORMANCE OF THE 3DES AND FERNET ENCRYPTION IN SECURING DATA FILES |
Author: |
SITI MUNIRAH MOHD, SHAFINAH KAMARUDIN, NORHANIZAH YAHYA, SAZLINAH HASAN,
MUHAMMAD LUQMAN MAHAMAD ZAKARIA, SAHIMEL AZWAL SULAIMAN, DJOKO BUDIYANTO
SETYOHADI |
Abstract: |
The number of cyber attacks launched has increased five-fold since the advent of
the Coronavirus Disease (COVID-19) pandemic in 2019. Ransomware is currently one
of the highest digital risks, as it enables cybercriminals to use persistent
threat tools and techniques to gain access to targeted networks by way of third
parties. Therefore, this study aims to implement the symmetric encryption
algorithms known as 3DES and Fernet as a means of securing files. In addition,
this study evaluates the performance of the 3DES and Fernet encryption methods
in protecting confidential files. The study contributes to the understanding of
the Fernet and 3DES methods for securing confidential files, identifies the more
efficient symmetric cryptographic algorithm for file security, and provides
comparative results between Fernet and 3DES. Both 3DES and Fernet are applied in
a client-server architecture to perform the encryption and decryption processes.
The results show that both encryption methods were implemented successfully for
encryption and decryption. In addition, this study evaluates the temporal
efficiency of the encryption process. Five text file sizes ranging from 10 KB to
50 KB were used in the experimental trials to evaluate the performance of both
encryption methods. The outcome reveals that the Fernet encryption method
performs better than 3DES. |
Keywords: |
Fernet Algorithm, 3DES Algorithm, Encryption, Decryption, File Security |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
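As a hedged illustration of the kind of measurement described above (not the authors' implementation), the sketch below encrypts and decrypts an in-memory 50 KB buffer with Fernet (from the `cryptography` package) and with 3DES in CBC mode (from `pycryptodome`), timing each round trip.

```python
# Hedged sketch: timing Fernet vs. 3DES on a 50 KB stand-in for a text file.
import os
import time

from cryptography.fernet import Fernet
from Crypto.Cipher import DES3
from Crypto.Util.Padding import pad, unpad

data = os.urandom(50 * 1024)  # illustrative 50 KB payload

# Fernet (AES-128 in CBC mode with an HMAC under the hood)
f = Fernet(Fernet.generate_key())
t0 = time.perf_counter()
token = f.encrypt(data)
plain = f.decrypt(token)
print("Fernet round trip:", time.perf_counter() - t0, "s, ok =", plain == data)

# 3DES in CBC mode with PKCS#7 padding
key = DES3.adjust_key_parity(os.urandom(24))
cipher = DES3.new(key, DES3.MODE_CBC)
t0 = time.perf_counter()
ct = cipher.encrypt(pad(data, DES3.block_size))
pt = unpad(DES3.new(key, DES3.MODE_CBC, cipher.iv).decrypt(ct), DES3.block_size)
print("3DES round trip:  ", time.perf_counter() - t0, "s, ok =", pt == data)
```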
|
Title: |
SECURE AND ISOLATED COMPUTING IN VIRTUALIZATION AND CLOUD ENVIRONMENTS: A
SYSTEMATIC REVIEW OF EMERGING TRENDS AND TECHNIQUES |
Author: |
MOHAMMAD SHKOUKANI, SAMER MURRAR, RAWAN ABU LAIL, KHALIL YAGHI |
Abstract: |
In an era defined by the omnipresence of virtualization and the increasing
dependence on cloud computing, ensuring efficient and secure virtual machine
management is of paramount importance. This paper presents a detailed review of
some innovative studies, each addressing distinct challenges in the realm of
cybersecurity and virtual machine performance. The first study examines the use
of Lightweight Kernels and Trusted Execution Environments to optimize security
isolation capabilities, demonstrating promising results for high-performance
computing platforms. The second study explores the application of machine
learning techniques for detecting anomalies in cloud-based virtual machine
resource usage, contributing to a proactive security approach. The third study
presents SecFortress, an approach that enhances hypervisor security using
cross-layer isolation techniques. Together, these studies underscore the
significance of continual research in secure, efficient computing and offer
promising avenues for future development. |
Keywords: |
Virtual Machine, High-Performance Computing, Cybersecurity, Machine Learning,
Hypervisor Security |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
DEVELOPMENT OF PATHFINDING USING A-STAR AND D-STAR LITE ALGORITHMS IN VIDEO GAME |
Author: |
RISKA NURTANTYO SARBINI, IRDAM AHMAD, ROMIE OKTOVIANUS BURA, LUHUT SIMBOLON |
Abstract: |
Artificial Intelligence (AI) is now a core component of video games, used to
make the player's experience more interesting and interactive, particularly in
role-playing games (RPGs). Pathfinding is essential to many computer games,
especially role-playing games, and is required of the AI in the majority of
games. The A* algorithm is one of the pathfinding methods used in video games to
find the shortest path along a track while avoiding static or dynamic obstacles,
and the D* Lite algorithm is highly successful at eliminating several of its
difficulties and producing more ideal outcomes. The purpose of our research is
to build and evaluate the performance of an AI pathfinding system that searches
for the fastest route using the A-star and D-star Lite algorithms. This work
collects data on how NPC movement behaves under pathfinding and studies the
resulting performance. The pathfinding methods studied are the A* algorithm and
the D* Lite algorithm. The contribution of this research is to the pathfinding
problems commonly faced by NPCs in future game technology, especially when
using the A* and D* Lite algorithms; the pathfinding methods are not limited to
games and can also be applied in other fields. In the experiment at five points,
the pathfinding mechanism using the D-star Lite algorithm was faster in finding
the closest route, with a record time of 00:01.847, while the A-star algorithm
obtained 00:05.231. |
Keywords: |
Video Game, Artificial Intelligence, Pathfinding, A-Star Algorithm, D-Star Lite
Algorithm |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
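To make the shortest-path idea concrete, here is a minimal A* grid search in Python with a Manhattan heuristic; the grid, start, and goal are illustrative, and D* Lite's incremental replanning is not reproduced.

```python
# Hedged sketch: minimal A* pathfinding on a small grid (0 = free, 1 = obstacle).
import heapq
from itertools import count

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = count()                                              # heap tie-breaker
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:                                  # already expanded
            continue
        came_from[node] = parent
        if node == goal:                                       # rebuild the path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, node))
    return None                                                # no route exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```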
|
Title: |
DEEP LEARNING AND STATISTICAL OPERATIONS BASED FEATURES EXTRACTION FOR LUNG
CANCER DETECTION SYSTEM |
Author: |
SULEYMAN A. AlSHOWARAH |
Abstract: |
Lung cancer is the most common cause of cancer death. This kind of cancer can
grow in the human body without prior symptoms, so systems that diagnose patients
at an early stage are essential, and studies that achieve good accuracy in this
field are required. This research examines the possibility of using Deep
Learning techniques for lung cancer classification from images, based on VGG-19.
Layers 6 and 7 of VGG-19 were used, and new datasets were created from these two
layers using statistical operations: average, minimum, maximum, and the
combination of the two layers. The datasets were then classified using different
ML classifiers: KNN, Random Forest, Naïve Bayes, and Decision Tree. Three
scenarios based on the training dataset size were used when classifying the
data. In the results, KNN scored the best accuracy (98.40%), precision (0.98),
recall (0.98), and F-measure (0.98). The results were nearly similar across all
layers and scenarios, which means the extracted features can provide high
accuracy when applied in classification research. This shows that lung cancer
can be detected with high accuracy even when the training set is small. The
second-best accuracy after the KNN algorithm was obtained by Random Forest in
all layers and scenarios. |
Keywords: |
Lung Cancer Detection Using DL, Vgg-19, Lung Tumor, Benign Or Malignant Of Lung
Cancer. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
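The "statistical operations" step can be sketched as follows in Python: given two pre-extracted VGG-19 feature matrices (stand-ins generated randomly here, since the medical images are not available), it builds the average, minimum, maximum, and combined datasets and scores each with a KNN classifier.

```python
# Hedged sketch: building the statistical-operation datasets and classifying with KNN.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_images = 300
layer6 = rng.normal(size=(n_images, 4096))   # stand-in for VGG-19 layer-6 activations
layer7 = rng.normal(size=(n_images, 4096))   # stand-in for VGG-19 layer-7 activations
labels = rng.integers(0, 2, size=n_images)   # illustrative benign / malignant labels

datasets = {
    "average":  (layer6 + layer7) / 2,
    "minimum":  np.minimum(layer6, layer7),
    "maximum":  np.maximum(layer6, layer7),
    "combined": np.hstack([layer6, layer7]),
}

knn = KNeighborsClassifier(n_neighbors=5)
for name, X in datasets.items():
    acc = cross_val_score(knn, X, labels, cv=3).mean()
    print(f"{name:8s} accuracy: {acc:.3f}")
```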
|
Title: |
A METHOD FOR ENHANCING WEB SEARCH RESULTS USING CONTEXT-BASED INDEXING |
Author: |
MENNATOLLAH MAMDOUH MAHMOUD, DOAA SAAD ELZANFALY, AHMED EL-SAYED YAKOUB |
Abstract: |
While the current search engines are deemed sufficient, they often overwhelm the
user with irrelevant results. This is primarily because most indexing structures
fail to incorporate the context of the query. This omission significantly
impacts the effectiveness of the search results. Although extensive research has
been carried out to enhance search engine indexing outcomes, the problem of
retrieving the most relevant results by context still exists. This study
attempts to bridge this gap and contributes a new approach to context-aware
indexing, aiming to enhance the relevancy of the retrieved documents to the
user's query. Unlike traditional methods that rely solely on keywords, the
proposed approach leverages document context. The effectiveness of the proposed
method is evaluated based on three criteria: index size, indexing construction
complexity, and the relevancy of the results. Furthermore, the proposed method
is compared to the Boolean retrieval model employed by the traditional
information retrieval systems. The experimental results show that the proposed
method outperforms the traditional information retrieval systems in terms of the
indexing size and complexity, as well as the relevancy of the results. |
Keywords: |
Context-based Indexing, Information Retrieval (IR), Modifying Index, Search
Engine, Web Document Indexing. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
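For contrast with the proposed context-aware index, the Boolean baseline mentioned above reduces to a classic inverted index; a toy Python version (illustrative documents and queries) looks like this.

```python
# Hedged sketch: a Boolean inverted index, the traditional IR baseline.
from collections import defaultdict

docs = {
    1: "apache spark streaming for web search logs",
    2: "context aware indexing improves web search relevance",
    3: "boolean retrieval with an inverted index",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)          # posting list: term -> document ids

def boolean_and(*terms):
    """Return the IDs of documents containing every query term."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

print(boolean_and("web", "search"))      # {1, 2}
print(boolean_and("inverted", "index"))  # {3}
```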
|
Title: |
DEPTH ESTIMATION METHOD BASED ON RESIDUAL NETWORKS AND SE-NET MODEL |
Author: |
MOHAMMAD ARABIAT, SUHAILA ABUOWAIDA, ADAI AL-MOMANI, NAWAF ALSHDAIFAT, HUAH YONG
CHAN |
Abstract: |
The objective of this study is to examine the problem of monocular depth
estimation, which is essential for understanding a particular scene. The
application of deep neural networks in generative models has resulted in
significant progress in the precision and efficiency of depth estimations
derived from a solitary image. However, most previous approaches have shown
shortcomings in accurately calculating the depth barrier, resulting in less than
optimal outcomes. Image restoration refers to the procedure of enhancing the
quality of an image that has been degraded or has reduced clarity. This study
introduces a novel and direct method that utilises the attention channel of the
depth-to-depth network. This network has encoded elements that are useful for
guiding the process of creating depth. The attention channel network consists
exclusively of convolution layers and the Squeeze-And-Excitation Network
(SENet). The deconvolution technique has the ability to produce images of
excellent quality and can be efficiently taught with single-depth data. To
enhance the acquisition of information and abilities in analysing the
relationship between a colour value and its associated depth value in an image,
we propose a dedicated training method. The suggested approach entails the
utilisation of a color-to-depth network, enhanced by a loss function that is
explicitly defined within the system and combined with features derived from the
hidden space of our attention channel network. One notable advantage of the
suggested methodology is its capacity to enhance local features, so that precise
and thorough information can be attained even in scenes with intricate
surroundings. The outcomes on the NYU Depth v2 benchmark dataset showcase the
effectiveness and durability of the proposed methodology in contrast to current
cutting-edge approaches. |
Keywords: |
Deep Learning, Depth Estimation, Resnet-101, Features Map, NYU Depth v2. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
ENHANCING COLLABORATIVE FILTERING: ADDRESSING SPARSITY AND GRAY SHEEP WITH
OPPOSITE USER INFERENCE |
Author: |
ABDELLAH EL FAZZIKI, YASSER EL MADANI EL ALAMI, JALIL ELHASSOUNI, MOHAMMED
BENBRAHIM |
Abstract: |
Collaborative filtering (CF) is a popular recommendation approach which seeks to
find similar users to predict what an active user might like. However, CF
suffers from two main challenges: sparsity and gray sheep. In both cases,
recommending useful items is a difficult task. In this paper, we propose a new
approach to address these challenges. It consists of combining Singular Value
Decomposition and Association Rule methods with an enriched rating matrix. In
addition to actual users, this matrix incorporates virtual users inferred from
the opposite of the ratings provided by real users. Our approach increases the
density of similar users and makes it easier to generate useful recommendations. We
conducted a comparative study showing that our method outperformed traditional
CF approaches in terms of accuracy. |
Keywords: |
Recommendation System; Collaborative Filtering; Opposite Preferences;
Model-Based CF; SVD; Association Rules; Sparsity Problem. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
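A rough numerical sketch of the enrichment idea (assuming a 1-5 rating scale and a deliberately tiny matrix, not the authors' data) is shown below: each real user spawns a virtual user with opposite ratings, and a truncated SVD of the enriched matrix yields the predictions.

```python
# Hedged sketch: opposite virtual users + truncated SVD as a model-based CF predictor.
import numpy as np

R = np.array([[5, 4, 0, 1],      # 0 = unrated
              [4, 0, 0, 2],
              [1, 1, 5, 4]], dtype=float)
r_min, r_max = 1, 5

# Virtual users with inverted tastes: rating r becomes (max + min - r).
opposite = np.where(R > 0, r_max + r_min - R, 0)
enriched = np.vstack([R, opposite])

# Rank-2 truncated SVD reconstruction fills in the missing ratings.
U, s, Vt = np.linalg.svd(enriched, full_matrices=False)
k = 2
pred = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.round(pred[: R.shape[0]], 2))   # predictions for the real users only
```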
|
Title: |
DIABETIC MELLITUS PREDICTION WITH BRFSS DATA SETS |
Author: |
MARWA HUSSEIN MOHAMED, MOHAMED HELMY KHAFAGY, NESMA MOHAMED MAHMOUD KAMEL, AND
WAEL SAID |
Abstract: |
One of the chronic diseases that affect many people worldwide is diabetic
mellitus. If the disease is predicted at an early stage, the risk and severity
can both be significantly decreased. In this research, we need to predict the
type 2 diabetic patients at an early stage to reduce the cost of treatment for
countries because this is a long time disease we use many machine learning
algorithms to find the accuracy for these diseases applied to BRFSS datasets for
two years 2014 and 2015 with a different selection of features to predict the
disease as decision tree, logistic regression, ADA Boost Classifier, extreme
gradient boosting, Linear Discriminant Analysis, Light Gradient Boosting
Machine, and catboost classifiers. While applying our experiments with the 2014
BRFSS data sets Neural network has the highest accuracy with 82%and with the
2015 BRFSS datasets the best accuracy model was 86% for CatBoost Classifier and
Extreme Gradient Boosting where the lowest model was Linear Discriminant
Analysis. Also, in our research we compare our results with others using the
same datasets with different features selection and get high accuracy. |
Keywords: |
Chronic Diseases; Diabetic Mellitus; Machine Learning; Artificial Intelligence;
Classification Models. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
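A compact scikit-learn benchmark in the spirit of the comparison above might look as follows; a synthetic imbalanced dataset stands in for the BRFSS survey data, and scikit-learn's GradientBoostingClassifier stands in for the boosting libraries (XGBoost, LightGBM, CatBoost) named in the abstract.

```python
# Hedged sketch: cross-validated comparison of several of the classifiers named above.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the BRFSS features (roughly 85:15 class imbalance).
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.85], random_state=1)

models = {
    "decision tree":       DecisionTreeClassifier(max_depth=6),
    "logistic regression": LogisticRegression(max_iter=1000),
    "AdaBoost":            AdaBoostClassifier(),
    "gradient boosting":   GradientBoostingClassifier(),
    "LDA":                 LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:20s} accuracy: {acc:.3f}")
```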
|
Title: |
PRUNING-BASED HYBRID NEURAL NETWORK FOR AUTOMATIC MODULATION CLASSIFICATION IN
COGNITIVE RADIO NETWORKS |
Author: |
NADIA KASSRI, ABDESLAM ENNOUAARY, SLIMANE BAH, IMAD HAJJAJI, HAMZA DAHMOUNI |
Abstract: |
Today, there is a greater need than ever to focus on realistic solutions to
efficiently manage the available frequency spectrum, due to the scarcity of this
limited resource. Thanks to its opportunistic communication paradigm allowing
fair access for all users, Cognitive Radio (CR) has been commonly recognized as
an enabling technology to mitigate the spectrum scarcity problem and ensure
optimal use of the available spectrum. Automatic Modulation Classification (AMC)
is an indispensable function for carrying out various CR tasks, including link
adaptation and dynamic spectrum access. Recently, Deep Learning (DL) networks
have shown promising results in dealing with AMC compared to traditional
methods. However, most DL-based approaches proposed so far have high
computational and memory requirements, which make them impractical for
resource-constrained IoT applications. To address this challenge, we introduce
in this work a novel lightweight hybrid Neural Network for AMC, consisting of a
combination of Gated Recurrent Unit (GRU) and Convolutional Neural Network (CNN)
structures, in which the fully connected layers are omitted. Our proposed design
outperforms the baseline models in terms of recognition accuracy across various
Signal-to-Noise Ratios (SNRs), while maintaining a lightweight structure and low
computational complexity. To further enhance model compression and resource
utilization, we have introduced an Iterative Magnitude-Based Pruning approach
combined with Quantization Aware Training (QAT). Simulation results using the
RadioML 2018.01A dataset validate the efficacy of our proposed model. With a
sparsity of 0.7, the pruned variant, comprising 80 GRU cells, attains an average
accuracy of 60.88% across all SNRs. Remarkably, it achieves a highest accuracy
of 95.95% at 20 dB, while utilizing only 8 210 parameters. |
Keywords: |
Automatic Modulation Classification, Convolutional Neural Network, Gated
Recurrent Unit, Cognitive Radio Networks, Spectrum Sensing,
Resources-Constrained Objects, Pruning, Quantization Aware Training. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
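The iterative magnitude-based pruning step can be illustrated with PyTorch's pruning utilities; the tiny GRU-plus-convolution model below is only a stand-in for the paper's architecture, and quantization-aware training and the RadioML data pipeline are omitted.

```python
# Hedged sketch: iterative L1 magnitude pruning of a small GRU+Conv model (~0.7 sparsity).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class TinyAMC(nn.Module):
    def __init__(self, n_classes=24):
        super().__init__()
        self.conv = nn.Conv1d(2, 32, kernel_size=7, padding=3)   # I/Q channels in
        self.gru = nn.GRU(32, 80, batch_first=True)               # 80 GRU cells
        self.out = nn.Linear(80, n_classes)

    def forward(self, x):                      # x: (batch, 2, samples)
        h = torch.relu(self.conv(x)).transpose(1, 2)
        _, h_n = self.gru(h)
        return self.out(h_n.squeeze(0))

model = TinyAMC()
targets = [(model.conv, "weight"), (model.gru, "weight_ih_l0"),
           (model.gru, "weight_hh_l0"), (model.out, "weight")]

# Three pruning rounds of 33% of the remaining weights each: ~70% overall sparsity.
for step in range(3):
    for module, name in targets:
        prune.l1_unstructured(module, name=name, amount=0.33)

zeros = sum(int((getattr(m, n) == 0).sum()) for m, n in targets)
total = sum(getattr(m, n).numel() for m, n in targets)
print(f"overall sparsity: {zeros / total:.2f}")
```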
|
Title: |
ENHANCED INSIDER THREAT DETECTION THROUGH MACHINE LEARNING APPROACH WITH
IMBALANCED DATA RESOLUTION |
Author: |
PENNADA SIVA SATYA PRASAD, SASMITA KUMARI NAYAK, Dr. M. VAMSI KRISHNA |
Abstract: |
An insider threat is the risk that a person inside an organization may pose to the
company's security, data, or resources. Insider threat detection is a crucial
component of a comprehensive cybersecurity strategy. By identifying and
mitigating risks from within the organization, businesses can better protect
their assets, maintain trust, and ensure compliance with legal and regulatory
requirements. This paper addresses the detection of insider threats using
machine learning algorithms. A famous CERT dataset was used for the experiments.
The collected dataset is largely imbalanced, and ML algorithms cannot perform
well with imbalanced datasets. The data imbalance is therefore resolved using
three oversampling techniques, namely Random Oversampling, SMOTE, and ADASYN,
and three undersampling techniques, namely Random Undersampling, Cluster
Centroids, and Edited Nearest Neighbours. Five ML algorithms, namely Logistic
Regression, AdaBoost, Decision Tree, Random Forest, and Naïve Bayes, are then
applied to the datasets generated through the oversampling and undersampling
techniques. To further increase the performance of the model, an ensemble
learning approach is proposed along
with principal component analysis. The experimental results demonstrated that
the proposed model surpassed the performance of existing models for insider
threat detection. |
Keywords: |
Insider Threat Detection, Data Imbalance, Over Sampling, Under Sampling, Machine
Learning, Ensemble Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
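The resampling stage can be sketched with the imbalanced-learn package as below; the synthetic 97:3 dataset stands in for the CERT logs, and a complete experiment would resample only the training folds before fitting the classifiers.

```python
# Hedged sketch: the six resampling techniques named above, applied with imbalanced-learn.
from collections import Counter

from imblearn.over_sampling import ADASYN, SMOTE, RandomOverSampler
from imblearn.under_sampling import ClusterCentroids, EditedNearestNeighbours, RandomUnderSampler
from sklearn.datasets import make_classification

# Synthetic stand-in for the highly imbalanced CERT insider-threat data.
X, y = make_classification(n_samples=4000, n_features=15, weights=[0.97], random_state=0)
print("original distribution:", dict(Counter(y)))

samplers = {
    "RandomOverSampler":       RandomOverSampler(random_state=0),
    "SMOTE":                   SMOTE(random_state=0),
    "ADASYN":                  ADASYN(random_state=0),
    "RandomUnderSampler":      RandomUnderSampler(random_state=0),
    "ClusterCentroids":        ClusterCentroids(random_state=0),
    "EditedNearestNeighbours": EditedNearestNeighbours(),
}
for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X, y)          # rebalance the classes
    print(f"{name:24s} -> {dict(Counter(y_res))}")
```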
|
Title: |
PROVISIONING AN UNCERTAINTY MODEL FOR ELEPHANT MOVEMENT PREDICTION USING
LEARNING APPROACHES |
Author: |
R. VASANTH, A. PANDIAN |
Abstract: |
This work concentrates on resolving static/dynamic body movement estimation and
rigid body orientation of animals. The algorithm is modeled with a complementary
structural model that exploits measurements from the magnetometer,
accelerometer, and gyroscope. Attitude information is usually essential in
evaluating animal movement estimation to determine which active sensor nodes
should gather information. The risk factor is measured with Weighted Uncertainty
Priority (WUP), an IVF framework. WUP considers the relative weights of complex
risk factors by examining their degree of ambiguity/uncertainty. The uncertainty
measure is based on evidence theory and is used to generate an exponential
weight for every risk factor. This ambiguity measure provides a subjective
assessment of the internal coordination between sensors and animal movements.
The proposed algorithm reduces network overhead and effectively classifies
samples with a pre-trained YOLOv5 network model to estimate the arrival of
animals and to determine the correlation between normal and uncertain data.
Feature-aware pattern modeling is performed to schedule elephant movement
against risk factors. The theoretical analysis validates that the proposed model
outperforms other prevailing models. Simulation has been carried out in a MATLAB
2020a environment, where detection accuracy is estimated at 99.5%. |
Keywords: |
A Priority-Based Weighted Scheduling, Weighted Uncertainty Priority, Non-Linear
Filter, Uncertainty, Kalman Filter, Static/Dynamic Body Movement Estimation,
Rigid Body Orientation |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
MINING DEVIATION WITH MACHINE LEARNING TECHNIQUES IN EVENT LOGS WITH AN ENCODING
ALGORITHM |
Author: |
DR. V.V. JAYA RAMA KRISHNAIAH, DR BANDLA SRINIVASA RAO, DUGGINENI VEERAIAH, MR.
S. SUBBURAJ, DR. MOHAMMED SALEH AL ANSARI, DR. CHAMANDEEP KAUR |
Abstract: |
Business process deviance analysis is a field of study that tries to identify
how a business process deviates from its anticipated outcome. Approaches in this
field identify the characteristics of a collection of process executions that
may be related to shifts in process performance, exposing the traits of process
behaviour that generate undesirable outcomes as well as insights into which
process behaviour contributes the most to performance. Performance in this
setting can relate to any domain-dependent measure, including cost, time, and
resource factors. Present-day deviance mining methodologies are based on
discovering patterns from event logs using various pattern mining approaches.
Such extraction methods currently provide only a limited degree of flexibility
because they are unable to represent the intricate relationships that may be
present in highly variable processes. In this paper, we offer an innovative
encoding strategy for vector-based representations of process cases and then
utilise machine learning in a new approach to deviance mining, to pinpoint the
aspects of a process that most significantly affect its performance. The
outcomes demonstrate that machine learning, combined with the suggested
Declare-based encoding, delivers pertinent and expressive conclusions on the
event logs, constituting an effective tool for process analysis. |
Keywords: |
Machine Learning, Mining Deviation, Encoding, Logs |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
ANALYZING LARGE SCALE BUSINESS DATA TO PREDICT COMPANY'S GROWTH USING AN
INTEGRATED HYBRID APPROACH OF DATA REDUCTION UNIT AND CONVOLUTIONAL NEURAL
NETWORKS |
Author: |
KATHARI SANTOSH, DR. G. SUNDAR, DR.P. PATCHAIAMMAL, SWETHA V P, BADUGU SURESH,
DR. SANTHOSH BODDUPALLI |
Abstract: |
In today's rapidly changing business world, large-scale business data analysis
has become critical for predicting a company's development and making effective
decisions. The paper proposes an integrated hybrid strategy
that leverages the benefits of data reduction units and convolutional neural
networks (CNNs) to reliably estimate the growth potential of a company. The
strategy's initial component is data reduction units, which minimize the
complexity of data while keeping informative quality. The strategy's second
component makes use of the ability of convolutional neural networks, a
deep-learning framework suitable for processing both organized and unstructured
input. CNNs excel at collecting geographical and time-related trends in data,
making them excellent for finding complicated links in large-scale corporate
information. The system is capable of learning key characteristics and
hierarchies by utilizing many layers of convolutional filters, allowing it to
generate accurate growth predictions. The work proposes a unique architecture
for integrating these components, which mixes the outputs of the data reduction
unit with the input data and passes them into the CNN. The combined strategy
allows the CNN to concentrate on the most important information, improving
prediction accuracy while lowering computational overhead. The study conducted
tests with real-world company datasets to assess the efficiency of the strategy.
The findings show that the integrated hybrid strategy exceeds standard
methodologies in terms of accuracy in predicting company development.
Furthermore, the study demonstrates that the technique is adaptive and scalable,
with the ability to handle large-scale datasets. |
Keywords: |
Big Data, Convolutional Neural Network (CNN), Data Reduction, Deep Learning |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
BUILDING A MODEL FOR AUTOMOTIVE SOFTWARE DEFECTS PREDICTION USING MACHINE
LEARNING AND FEATURES SELECTION TECHNIQUES |
Author: |
RAMZ TSOULI FATHI, MAROI TSOULI FATHI, MOHAMMED AMMARI, LAÏLA BEN ALLAL |
Abstract: |
Over the past few decades, the integration of software into vehicles has
experienced exponential growth. This expansion affects nearly all aspects of
recent automotive engineering, encompassing various components. Simultaneously,
testing becomes increasingly crucial. Significant efforts are required for
software verification and validation to meet security, quality, and reliability
standards. These essential tests become more complex and costly with the
proliferation of functionalities, risking delays in market deployment. The
integration of new technologies, such as predictive analysis based on machine
learning algorithms, could anticipate the expected number of anomalies,
improving resource management, reducing deployment time, and mitigating the risk
of potentially disastrous software errors. This paper aims to assess the
applicability of software defect prediction methods in the automotive domain. To
achieve this, we will construct a dataset from real industrial software
projects, anonymized and confidential. This effort will culminate in the
creation of a new and unique database that will serve as the foundation for our
study. Through a series of experiments, we will evaluate the relevance of
various machine learning algorithms, aiming to surpass classical approaches in
constructing our predictive analysis model. This paper introduces a novel
solution for predicting software faults in the automotive domain. The innovation
lies in the realistic approach, relying on the application of complex domain
knowledge to a unique database derived from real industrial automotive software
projects. Systematic efforts were exerted not only to optimize the usability of
this dataset but also to achieve superior performance. |
Keywords: |
Machine Learning, Features Selection, Predictive Analysis, Automotive Software,
Software Metrics, Automotive Software Projects, Software Defect Prediction |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
DYNAMIC ACCESS CONTROL AT THE NETWORK EDGE USING AN ADAPTIVE RISK-BASED ACCESS
CONTROL SYSTEM (ad-RACs) |
Author: |
MUHAMMAD BELLO ALIYU, DR. MUHAMMAD GARBA, DR. DANLAMI GABI, DR. HASSAN U. SURU,
DR. MUSA S. ARGUNGU |
Abstract: |
The widespread adoption of edge computing models owes much to their
cost-effectiveness and performance advantages for both users and service
providers. However, the expanding user base and application scope raise security
concerns, including potential malicious attacks due to unrestricted system
resource access. Hence, this study focuses on implementing an Adaptive
Risk-based Access Control System (ad-RACs) at the network edge. The ad-RACs
utilizes four key inputs—user context, resource sensitivity, action severity,
and risk history—to enable the CatBoost risk estimation module to evaluate
security risks associated with access requests. Upon meeting the acceptable risk
threshold, the Chinese wall access control policy determines access decisions.
This model adapts to user behavior and patterns, updating risk history to
dynamically adjust access requests. Evaluation results showed that the ad-RACs
exhibited satisfactory recall and F1 score values of 100% and 98%, respectively,
and a precision value of 95%, outperforming the existing system's recall of 98%,
F1 score of 96%, and precision of 97%. Conclusively, the ad-RACs excelled in
recall and F1 score values compared to the existing system, indicating its
potential to enhance access control. Its adoption is recommended for
governmental and private organizations seeking to bolster user access to
sensitive resources. |
Keywords: |
Access Control, Adaptive Access, Adaptive Security, Network Edge, Risk-based
Access |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
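A hedged sketch of the risk-estimation step is given below: a CatBoost classifier is trained on the four inputs named in the abstract (generated synthetically here, with an assumed 0.5 acceptable-risk threshold) and then scores an incoming access request.

```python
# Hedged sketch: CatBoost-based risk estimation for an access request.
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
# Features (in order): user context, resource sensitivity, action severity, risk history.
X = rng.uniform(0, 1, size=(500, 4))
y = (X @ np.array([0.3, 0.3, 0.2, 0.2]) + rng.normal(0, 0.05, 500) > 0.55).astype(int)

model = CatBoostClassifier(iterations=200, depth=4, verbose=False)
model.fit(X, y)

request = np.array([[0.2, 0.9, 0.7, 0.4]])   # one incoming access request
risk = model.predict_proba(request)[0, 1]
THRESHOLD = 0.5                               # assumed acceptable-risk threshold
print("estimated risk:", round(float(risk), 3),
      "->", "deny" if risk > THRESHOLD else "allow")
```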
|
Title: |
DECODING ADVERSARIAL MACHINE LEARNING: A BIBLIOMETRIC PERSPECTIVE |
Author: |
JEENA JOSEPH, JOBIN JOSE, DIVYALAKSHMI S, RAJIMOL A, GREETY TOMS, ANAT SUMAN
JOSE, GILU G ETTANIYIL |
Abstract: |
The rapid improvement of machine learning techniques has led to an extraordinary
rise in the prominence of adversarial attacks and their accompanying defenses.
This study performs a comprehensive bibliometric analysis of adversarial machine
learning, providing insight into the field's evolution from its foundation to
the present. We identify the primary themes, foundational works, and influential
figures in this field using state-of-the-art bibliometric technologies and
databases. Our findings demonstrate the impressive growth of adversarial machine
learning research and emphasize its transdisciplinary nature. We also highlight
the collaborative networks and important hubs that have fueled advancements in
this discipline. This report offers a thorough perspective on adversarial
machine learning, its major turning points, and valuable insights for
researchers, educators, and practitioners. |
Keywords: |
Adversarial Machine Learning, Bibliometric Analysis, VOSviewer, Biblioshiny. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
BLOCKCHAIN IN LAND REGISTRY FOR TRANSFORMING LAND ADMINISTRATION |
Author: |
VISHNU SHUKLA, Dr. ABHIJEET R RAIPURKAR, Dr. MANOJ B CHANDAK, VEDANT BARAI |
Abstract: |
Land registry systems play a critical role in recording and verifying property
ownership, ensuring transparency, and facilitating secure transactions. However,
traditional land registry systems often face challenges related to
inefficiencies, fraudulent activities, and lack of trust. The emergence of
blockchain technology has the potential to address these issues and
revolutionize land registries. This paper presents a blockchain-based land
registry system that leverages the unique features of blockchain technology to
provide a secure, transparent, and tamper-resistant platform for recording and managing
land ownership information. By utilizing distributed ledger technology, the
proposed system aims to eliminate the need for intermediaries, reduce the risk
of fraud, enhance data integrity, and streamline the land registration process.
The blockchain-based land registry system would involve the creation of a
decentralized network where each participating node stores a copy of the
registry, ensuring redundancy and immutability. The use of cryptographic
techniques would provide secure and verifiable transactions, allowing for a
trustworthy and auditable history of land ownership transfers. A
blockchain-based land registry system has the potential to transform traditional
land registry practices by providing a secure, transparent, and efficient
platform for recording and managing land ownership information. By leveraging
the decentralized and immutable nature of blockchain technology, this system can
significantly enhance trust, streamline processes, and reduce fraud in land
transactions, leading to more reliable and effective property markets. |
Keywords: |
Blockchain, Land Registration, Smart Contracts, Solidity, Ownership-Transfer,
Migration, Map, DAPP. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
NUMBER PLATE AND LOGO IDENTIFICATION USING MACHINE LEARNING APPROACHES |
Author: |
SYED ARFATH AHMED, PROF (DR) SARVOTTAM DIXIT |
Abstract: |
The ability to recognize license plates is affected by a number of issues, such
as the presence of extraneous information, variations in the size and type of
the text, blur, skew, and other environmental factors. It is not always easy to
recognize the numbers on license plates because there are so many different
types of licence plates and so many different situations in which they can be
displayed. As a result of the relatively poor segmentation results produced by
region-based approaches, better segmentation algorithms are required for the
process of separating out individual license plates. The license plate may
appear in any part of the image. Because the license plate can be identified
based on its attributes, the algorithm processes only the pixels that share
analogous qualities; processing every pixel in the image would increase the
processing time. In this study, two additional methods that prove effective in
segmenting a vehicle's license plate are investigated. The first technique
segments the license plate by utilizing edge information and demonstrates a high
level of effectiveness, providing an average segmentation outcome of 85%. The
second approach utilizes information about number plate shape and color,
applying visualization techniques to segment number plates using this hybrid
feature; it achieves an average segmentation rate of 93%. Although the
edge-based and hybrid approaches execute at a speed similar to the region-based
approach, they provide better segmentation outcomes. The fundamental objective
of this paper is to devise a method capable of producing an automatic license
plate recognition system that is durable, accurate, and reliable. |
Keywords: |
Automatic Number Plate Recognition (ANPR), Convolution Neural Network (CNN),
Character Segmentation Image Segmentation, Optical Character Recognition, Edge &
Pixel Detection. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
DEVELOPMENT AND EVALUATION OF PNEUMFC NET: A NOVEL AUTOMATED LIGHTWEIGHT FULLY
CONVOLUTIONAL NEURAL NETWORK MODEL FOR PNEUMONIA DETECTION |
Author: |
SHUBHRA PRAKASH, B RAMAMURTHY |
Abstract: |
The aim of this study is to address the challenges of pneumonia diagnosis under
constrained resources and the need for quick decision-making. We present
PneumFC Net, a novel architectural solution whose approach focuses on minimizing
the number of trainable parameters by incorporating transition blocks that
efficiently manage channel dimensions and reduce the number of channels. In
contrast to fully connected layers, which disregard the spatial structure of
feature maps and substantially increase parameter counts, we employ an
exclusively convolutional approach. In this study, a chest X-ray image dataset
is used to train and evaluate the proposed Convolutional Neural Network model.
By carefully designing the architecture, the model achieves a balance between
parameters and accuracy while maintaining performance comparable to pre-trained
models. The results demonstrate the model's effectiveness in detecting pneumonia
reliably. In addition, the study examines the decision-making process of the
model using Grad-CAM, which helps to identify the important aspects of
radiographic images that contribute to a positive pneumonia prediction.
Furthermore, the study shows that the proposed PneumFC Net not only achieves the
highest accuracy of 98%, but its total number of trainable parameters is only
0.02% of that of the next best model, VGG-16, establishing the potential of this
new, robust Deep Learning model. This research primarily addresses concerns
related to mitigating significant computational requirements, with a specific
focus on implementing lightweight networks. The contribution of this work is the
development of a resource-efficient and scalable solution for pneumonia detection. |
Keywords: |
PneumFC Net, Fully Convolutional Neural Network, Pneumonia Detection, Computer
Aided Diagnosis |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
OPTIMIZING SENTIMENT ANALYSIS IN ONLINE SHOPPING: UNVEILING THE PRECISION OF
SIMPSON RULE-OPTIMIZED SUPPORT VECTOR MACHINE |
Author: |
R. ANITHA, D. VIMAL KUMAR |
Abstract: |
In the domain of online shopping, customer reviews serve as invaluable
repositories of insights, encapsulating the sentiments and experiences
associated with a wide array of products. However, the extraction of nuanced
sentiments from this extensive pool of reviews faces a formidable challenge due
to the inherent complexities of language and context. Sentiment analysis, a
pivotal tool for distilling sentiments from textual data, grapples with
accurately deciphering nuanced expressions, sentiment subtleties, and contextual
intricacies. To address these challenges, this study introduces the Simpson
Rule-Optimized Support Vector Machine (SR-SVM) for sentiment analysis in online
shopping. Built on Support Vector Machines (SVM) principles, SR-SVM leverages
Simpson's rule to optimize sentiment pattern identification within the expansive
landscape of online shopping reviews. Through the application of mathematical
optimization techniques, SR-SVM refines sentiment analysis outcomes, promising a
more nuanced understanding of customer sentiments. Preliminary results indicate
a noteworthy enhancement in sentiment classification accuracy, underscoring the
transformative potential of SR-SVM in optimizing sentiment analysis for online
shopping platforms. This study opens a promising avenue for refining and
advancing sentiment analysis methodologies in the dynamic and complex context of
online retail. |
Keywords: |
Sentiment, Classification, Amazon, Simpson Rule, SVM, Optimization |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
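The two ingredients named in the title can at least be shown in isolation; the sketch below evaluates composite Simpson's rule with SciPy and fits a plain linear-SVM sentiment baseline on a few toy reviews. How the paper couples the quadrature with the SVM objective is not reproduced here.

```python
# Hedged sketch: Simpson's rule quadrature plus a plain SVM sentiment baseline.
import numpy as np
from scipy.integrate import simpson
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Composite Simpson's rule: integrate sin(x) over [0, pi]; the exact value is 2.
x = np.linspace(0, np.pi, 101)
print("Simpson estimate:", simpson(np.sin(x), x=x))

# Baseline sentiment classifier on a handful of illustrative reviews.
reviews = ["great product, fast delivery", "terrible quality, waste of money",
           "works as described, very happy", "broke after one day, awful"]
labels = [1, 0, 1, 0]
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(reviews, labels)
print(clf.predict(["happy with this great purchase"]))
```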
|
Title: |
LEARNING OF ENERGY EFFICIENT AND NETWORK TRAFFIC DELAY IN WIRELESS NETWORKS
USING CHANNEL AWARE ROUTING |
Author: |
K. BALASUBRAMANIAN, R. MANIVANNAN, S. MANIKANDAN, A. HAJA ALAUDEEN, MISHMALA
SUSHITH, KIRUTHIGA BALASUBRAMANIYAN |
Abstract: |
This paper studies different analysis metrics of wireless networks using a
channel-aware routing protocol. During communication, network traffic delay and
energy efficiency are important performance metrics. Energy efficiency is an
important concern given the limited buffer capacity and battery lifetime in
wireless environments. The status of each node can be identified using a
co-channel forwarding mechanism. Traffic delays are measured using a Q-Learning
random early method, and routing is optimized by channel-aware routing. The
transmission changes from one node to another are monitored by a router
database, and the energy level is verified by additional changes in the wireless
network nodes. In this paper, we formulate a co-channel stochastic optimization
procedure to analyse energy performance and reduce traffic delay. The co-channel
method is used to check various analytical bounds and to verify that all nodes
share the energy load uniformly. The mathematical analysis provides uniform
distributions of energy values and traffic delays. The simulation shows
efficient energy utilization and reduced network traffic delays in wireless
networks. |
Keywords: |
Wireless Networks, Energy Efficient, Network Traffic Delays, Channel Aware
Routing, Routing Protocol. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
CHOOSING THE RIGHT CHAOTIC MAP FOR IMAGE ENCRYPTION: A DETAILED EXAMINATION |
Author: |
SAAD A. ABDULAMEER |
Abstract: |
This article investigates how an appropriate chaotic map (Logistic, Tent, Henon,
Sine, ...) should be selected for image encryption, taking into consideration
its advantages and disadvantages. Does the selection of an appropriate map
depend on the properties of the image? The proposed system shows how relevant
properties of the image influence the evaluation of the selected chaotic map.
The first part discusses the main principles of chaos theory and its
applicability to image encryption, including various sorts of chaotic maps and
their mathematics. The research also explores the factors that determine the
security and efficiency of such a map. The approach thus offers a practical
standpoint on the extent to which particular chaotic maps are suitable for
implementing an image encryption system, helping practitioners select the best
chaotic map for image encryption and ensure secure digital data. |
Keywords: |
Image Encryption, Chaotic Map, Chaos-based, Image Security, Cipher |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
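As a concrete example of the simplest map in that list, the sketch below XORs image bytes with a keystream generated from the logistic map; the key parameters (x0, r) and the random stand-in image are illustrative.

```python
# Hedged sketch: logistic-map keystream XOR cipher for an 8-bit grayscale image.
import numpy as np

def logistic_keystream(length, x0=0.61234, r=3.99):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and quantise each state to a byte."""
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
ks = logistic_keystream(image.size).reshape(image.shape)

cipher = image ^ ks          # encryption: XOR with the chaotic keystream
recovered = cipher ^ ks      # decryption with the same key (x0, r)
print(bool(np.array_equal(recovered, image)))   # True
```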
|
Title: |
DYNAMIC THRESHOLD GENERALIZED ENTROPIES-BASED DDOS DETECTION AGAINST
SOFTWARE-DEFINED NETWORK CONTROLLER |
Author: |
HUSSEIN A. AL-OFEISHAT |
Abstract: |
This study presents a new strategy that combines dynamic thresholding with
generalized entropy approaches to identify Distributed Denial-of-Service (DDoS)
attacks in Software-Defined Networking (SDN) environments. This research intends
to address the limitations of traditional detection approaches, such as high
false-positive rates and restricted adaptability. We conducted thorough testing
in various simulated SDN scenarios to assess the efficacy of our approach
compared to existing methodologies. The findings exhibited a substantial
enhancement in both the precision of detection and the decrease in false
positive occurrences, signifying a remarkable progression compared to existing
techniques. This research not only fills an important void in the realm of SDN
security but also lays the foundation for more adaptable and efficient ways for
detecting DDoS attacks. The results have practical implications for improving
network security by providing a strong answer to the changing danger of DDoS
attacks in intricate network environments. In summary, this study offers a fresh
viewpoint to the field of SDN security research, proposing a possible change in
DDoS detection methods towards more flexible and entropy-driven methodologies. |
Keywords: |
Network; SDN Controller; Telecommunications Infrastructures; Renyi; Networking;
SDN. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
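The core detection signal can be sketched as follows (illustrative traffic windows, alpha = 2): the generalized (Renyi) entropy of destination-address frequencies is compared against a dynamic threshold derived from a sliding window of recent entropy values.

```python
# Hedged sketch: generalized (Renyi) entropy with a dynamic threshold for DDoS detection.
import numpy as np

def renyi_entropy(counts, alpha=2.0):
    """H_alpha = log2(sum(p^alpha)) / (1 - alpha) over the non-zero probabilities."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return np.log2((p ** alpha).sum()) / (1.0 - alpha)

rng = np.random.default_rng(1)
history = []
for window in range(30):
    if window < 25:
        counts = rng.multinomial(1000, np.ones(50) / 50)          # benign: spread traffic
    else:
        counts = rng.multinomial(1000, [0.9] + [0.1 / 49] * 49)   # attack: one victim address
    h = renyi_entropy(counts)
    if len(history) >= 10:
        mean, std = np.mean(history[-10:]), np.std(history[-10:])
        if h < mean - 3 * std:                                    # dynamic threshold
            print(f"window {window}: entropy {h:.2f} -> possible DDoS")
    history.append(h)
```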
|
Title: |
OPTIMAL APPROACH FOR MINIMIZING DELAYS IN IOT-BASED QUANTUM WIRELESS SENSOR
NETWORKS USING NM-LEACH ROUTING PROTOCOL |
Author: |
J. RAMKUMAR, A. SENTHILKUMAR, M. LINGARAJ, R. KARTHIKEYAN, L. SANTHI |
Abstract: |
IoT-based Quantum Wireless Sensor Networks (IoT Q-WSN) merge Quantum principles
into the realm of Wireless Sensor Networks, introducing intricate routing
challenges that demand innovative solutions for efficient data transmission.
This study introduces NM-LEACH, an inventive routing protocol inspired by the
leadership principles of Shri Narendra Modi. A groundbreaking feature of
NM-LEACH is its distinction as the inaugural optimization protocol inspired by a
human personality. NM-LEACH operates through adaptive strategies, clean coding
practices, and continuous feedback loops, embodying a comprehensive and
disciplined approach to network development. Through simulation in NS3, the
protocol undergoes meticulous evaluation against existing counterparts. Results
demonstrate NM-LEACH's superior performance in minimizing delays and optimizing
data transmission within IoT Q-WSNs. This research advances Quantum IoT and
underscores the potential of drawing inspiration from human leadership qualities
to innovate and enhance wireless sensor network functionality by minimizing
delay and energy consumption. |
Keywords: |
IoT, WSN, Quantum, LEACH, Narendra Modi, Energy Consumption |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
PROVING THE CAR SECURITY SYSTEM MODEL USING CTL AND LTL |
Author: |
RAFAT ALSHORMAN |
Abstract: |
One of the most important legal issues that troubles the courts is car theft.
Therefore, car companies race to build security systems to reduce this problem.
In this research, we propose a system based on remote control and a keypad to
decrease the risk of car theft. To prove the system, firstly, we described it
using a finite-state machine. Secondly, a set of correctness conditions are
introduced. These correctness conditions are encoded in temporal logic CTL and
LTL. Lastly, the NuSMV model checker is used to verify the correctness
conditions. The importance of the proposed system verification is to ensure that
the system is correct under these conditions. Additionally, this research also
found that CTL and LTL can be used to specify and verify systems of this kind.
The main contribution of this research is to demonstrate that CTL and LTL model
checking can be used to specify and verify the proposed system. Moreover, this
can be a stepping-stone to proving similar systems whose behaviors generate
complex finite-state machines and for which it is difficult to trace all
possible states of the system. |
Keywords: |
CTL, model checking, NuSMV, LTL, Car Security. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
OPTIMIZING WATER DESALINATION: A NOVEL FUSION OF EXTREME LEARNING MACHINE AND
GAME THEORY FOR ENHANCED PH PREDICTION - UNVEILING REVOLUTIONARY INSIGHTS |
Author: |
DR. MOHAMMED SALEH AL ANSARI |
Abstract: |
The essential step in guaranteeing an adequate supply of water is water
desalination. Nevertheless, since many water quality measures interact in a
complex way, it is difficult to estimate pH levels accurately. To further
comprehend and regulate pH levels in water from desalination, this study
proposes a synergistic structure that blends the predictive ability of Extreme
Learning Machines (ELM) with the strategic insights offered by game theory. The
precision needed for effective desalination is frequently lacking in current pH
prediction techniques. Traditional models could find it difficult to represent
the complex interactions between changes in pH and input factors. Furthermore,
the lack of a strategic decision-making component in dynamic operational
contexts exposes processes to less-than-ideal results. The merging of game
theory and ELM is innovative. Because of its quick training and ability to
generalize, ELM is a good pH predictor. Concurrently, Game Theory is utilized to
simulate strategic exchanges between stakeholders, taking into account how pH
forecasts affect decision-making procedures. High-quality data on water quality
is used to train an ELM model as part of the suggested methodology. Next, using
the concepts of game theory, stakeholders' strategic behaviors that are impacted
by the anticipated pH levels are modelled. This research enhances pH prediction
accuracy using Extreme Learning Machine and Game Theory, optimizes desalination
resource allocation for sustainability, addresses real-world challenges, and
provides a versatile framework for widespread application in diverse
desalination scenarios. The suggested method's efficacy is determined by a
thorough performance review. The integrated strategy is thought to be superior
to traditional approaches in terms of metrics like accuracy of predictions,
satisfaction with stakeholders, and operational effectiveness. These metrics
present a potential path forward for the advancement of water desalination
procedures. |
Keywords: |
Extreme Learning Machine, Water Desalination, Game Theory, pH prediction |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
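For readers unfamiliar with Extreme Learning Machines, the sketch below implements the bare mechanism in NumPy: a random hidden layer followed by a closed-form least-squares output layer, fitted to a synthetic pH-style target; the desalination data and the game-theoretic layer are not reproduced.

```python
# Hedged sketch: a bare-bones Extreme Learning Machine regressor.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                              # hidden activations
        self.beta = np.linalg.pinv(H) @ y                             # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(size=(400, 5))   # illustrative water-quality inputs
y = 6.5 + X @ np.array([0.8, -0.5, 0.3, 0.2, -0.1]) + rng.normal(0, 0.05, 400)  # pH-like target

model = ELMRegressor().fit(X[:300], y[:300])
pred = model.predict(X[300:])
print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[300:]) ** 2))), 4))
```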
|
Title: |
DEEP NEURAL NETWORKS FOR EFFICIENT CLASSIFICATION OF SINGLE AND MIXED DEFECT
PATTERNS IN SILICON WAFER MANUFACTURING |
Author: |
HUSSEIN YOUNIS, AMJAD RATTROUT, MOHAMMED YOUNIS |
Abstract: |
Semiconductor wafer manufacturing is a complex and costly process with inherent
defect risks that can significantly impact the industry. Utilizing Deep Learning
(DL) for wafer defect classification offers benefits such as improved
performance, reduced human error, and time savings. This paper presents an
advanced DL-based approach for wafer defect classification, based on a modified
GoogLeNet model and data augmentation technique. The approach achieves
state-of-the-art results on the WM-300K+ wafer map dataset, demonstrating
robustness to image noise and variation. Our research introduces a pioneering
approach that outperforms previous methodologies, integrating auto-cast and CUDA
to enhance efficiency, addressing dataset imbalance through innovative data
augmentation, and creating a new "WM-300K+ wafer map [Single & Mixed]" dataset.
The methodology yields exceptional results, with an average classification
accuracy of 99.9% for both single and mixed defect types, surpassing previous
studies. Hyperparameter tuning with Optuna and a patience-based early-stopping
mechanism further fortify the robustness and reliability of the approach. |
Keywords: |
Semiconductor Wafer Manufacturing, Deep Learning, Convolutional Neural Network,
Wafer Defect Patterns, GoogLeNet |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
|
Title: |
STRENGTHENING QA SYSTEM ROBUSTNESS: BERT AND CHARACTER EMBEDDING SYNERGY |
Author: |
RACHID KARRA, ABDELALI LASFAR |
Abstract: |
The inherent nonlinearity of neural networks renders them vulnerable to
adversarial attacks. As artificial intelligence-based question-answering (QA)
systems continue to proliferate across sectors like education and health,
ensuring their security and predictable behavior becomes imperative for
widespread adoption among the general public. Robustness measures the resilience
of Natural Language Processing (NLP) models against adversarial attacks. A
robust QA system exhibits resilient behavior when encountering maliciously
crafted questions. Previous experiences have indicated that utilizing character
embeddings enhances resistance against contradictory misspelled questions. To
validate this, we fine-tuned BERT with character embeddings on the SQuAD dataset for
question-answering tasks and assessed both models using {question, context,
answer} tuples. The results unequivocally demonstrate that BERT with character
embeddings yields superior performance. Finally, we propose a comprehensive
framework aimed at safeguarding question-answer type dialogue systems. |
Keywords: |
Adversarial Attacks, BERT, Character Embedding, QA Systems, SQuAD |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 102. No. 3 -- 2024 |
Full
Text |
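A small robustness probe in the spirit of this abstract can be run with the Hugging Face transformers pipeline, comparing a clean question with a misspelled variant; the specific checkpoint named below is an assumption, not the authors' fine-tuned model.

```python
# Hedged sketch: probing an extractive QA model with a clean and a misspelled question.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")  # assumed SQuAD checkpoint

context = ("BERT is a transformer-based language model introduced by Google "
           "in 2018 and widely used for question answering.")

for question in ["Who introduced BERT?", "Who introducd BRT?"]:  # clean vs. misspelled
    result = qa(question=question, context=context)
    print(f"{question!r} -> {result['answer']!r} (score {result['score']:.2f})")
```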
|
Title: |
DETERMINANTS OF QRIS USAGE AS A DIGITAL PAYMENT TOOL FOR MSMEs |
Author: |
KHALISA MARSAHANDA RAFIANI, ROCHANIA AYU YUNANDA, TOTO RUSMANTO |
Abstract: |
The development of technology and information in the globalization era
contributes significantly to the growth of the digital economy in Indonesia. It
has significant impacts on all types of industries including MSMEs (Micro,
Small, and Medium Enterprises). The development of MSMEs is currently increasing
in various regions throughout Indonesia, and the utilization of technology by
MSMEs is on the rise. Various digital technologies are being employed by MSMEs,
and electronic money and non-cash payments are gaining momentum with the
introduction of QR-code payment systems. Using the Technology Acceptance Model,
this study aims to examine the determinants of QRIS as a digital payment tool
used by MSMEs. The study analyzes responses from 162 MSME players. Out of the six
hypotheses tested, four were found to be significant or accepted. Perceived
Usefulness, Perceived Ease of Use, Revenue, and Perceived Risk have significant
influences on the Interest in using QRIS. |
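Sketch: |
A minimal illustrative sketch, on synthetic Likert-style data, of how the significance of determinants such as those named in the abstract could be tested with an ordinary regression; the abstract does not state the authors' estimation method, so this is only an assumed stand-in.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 162  # sample size stated in the abstract; the responses below are synthetic
df = pd.DataFrame({
    "perceived_usefulness": rng.integers(1, 6, n),
    "perceived_ease_of_use": rng.integers(1, 6, n),
    "revenue": rng.integers(1, 6, n),
    "perceived_risk": rng.integers(1, 6, n),
})
# Synthetic outcome standing in for "interest in using QRIS".
df["interest"] = (0.5 * df["perceived_usefulness"]
                  + 0.3 * df["perceived_ease_of_use"]
                  + rng.normal(0, 1, n))

X = sm.add_constant(df.drop(columns="interest"))
print(sm.OLS(df["interest"], X).fit().summary())   # p-values flag the significant determinants
A hypothesis is retained when its coefficient's p-value falls below the chosen significance level, which is how a subset of the tested hypotheses ends up accepted in a study of this kind. |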
Keywords: |
Technology Acceptance Model, Financial Literacy, QR Payment |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
A NETWORK ANALYTICS APPROACH TO EXAMINE THE RELATIONSHIP BETWEEN LEARNING
REFLECTION AND SELF-REGULATED LEARNING SKILLS |
Author: |
SAHAR ALQAHTANI |
Abstract: |
Previous studies have demonstrated the effectiveness of Epistemic Network
Analysis (ENA) to explain students' epistemic interaction with specific learning
activities or tasks. However, the potential of ENA has not been widely explored
in investigating the relationship between students' self-regulated learning
skills and their reflective behaviors in a new learning experience. This paper
demonstrates how ENA and cluster analysis can reveal and analyze differences in
the reflective behaviors of groups of students with varying self-regulated
learning constructs. The results of this study show that students with a high
level of self-regulation most prominently reflect by expressing positive
feelings about their good experiences and by trying to overcome the obstructive
feelings that hinder their learning process. The following are the learning
constructs: intrinsic/extrinsic goal orientation, task value, expectancy
beliefs, self-efficacy, test anxiety, metacognitive awareness and metacognitive
writing strategies. By contrast, students with low self-regulation in these
learning constructs more frequently reflected by recollecting their negative
feelings and examining the knowledge obtained from the course. The analytical
approaches proposed in this study reveal that the reflective behaviors among
students with both high and low motivation to learn through “intrinsic goal
orientation”, “expectancy beliefs” and “self-efficacy” contain no negative
feelings towards their learning experience. |
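Sketch: |
A minimal illustrative sketch of the clustering stage described in the abstract: grouping students by their scores on the listed self-regulated learning constructs before comparing each group's reflections. The synthetic scores and the choice of k-means are assumptions; the epistemic network modelling itself is done with dedicated ENA tooling.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic questionnaire scores, one column per construct named in the abstract
# (goal orientation, task value, expectancy beliefs, self-efficacy, test anxiety,
#  metacognitive awareness, metacognitive writing strategies).
scores = rng.uniform(1, 7, size=(120, 7))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(scores))
print(np.bincount(labels))   # sizes of the two groups (high vs. low self-regulation)
The resulting group labels would then be carried into the epistemic network comparison of the students' coded reflections. |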
Keywords: |
Epistemic Network Analysis, Model Graph-Based Analysis, Self-Regulated Learning,
Reflection, Reflective Writing, Reflective Practice. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
CSF AND INFLUENCE OF ITS ON 4D EDUCATION: SYSTEMATIC LITERATURE REVIEW |
Author: |
JAKA SUWITA, HARJANTO PRABOWO, HARCO LESLIE HENDRIC SPITS WARNARS, MEYLIANA |
Abstract: |
Computer literacy has evolved to become more integrated with AI approaches as
technology advances, enabling the development of an adaptive education system.
Such a system is known as an Intelligent Tutoring System (ITS): a tutoring
system that uses technology to deliver assignments without requiring an educator
to be present. A systematic literature review (SLR) was conducted in this study
to identify the factors of an intelligent tutoring system and the factors that
determine the success of ITS implementation in the 4D-education framework, and
the results are presented. The method used in this literature review is PRISMA,
which applies the SLR protocol to search for and select literature, summarize
the substantive results, and disseminate them. The SLR protocol consists of
determining a research question, searching scientific reference sources with the
keyword "intelligent tutoring system", sorting out the appropriate articles, and
drawing conclusions. This article provides a detailed overview of how the
critical success factors and the influence of ITS relate to 4D education. Our
contribution is complete information on the ITS components that influence 4D
education, related to the factors that determine the successful use of IT in
higher education to improve learning and the practical use of this
technology. |
Keywords: |
Components, SLR, ITS, Intelligent Tutoring System. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
CREDIT CARD APPROVAL PREDICTION: A SYSTEMATIC LITERATURE REVIEW |
Author: |
INDRAJANI SUTEDJA, JACKY LIM, ERICK SETIAWAN, FRANS REXY ADIPUTRA |
Abstract: |
Credit cards have become an everyday payment choice, leading to a surge in
applications and posing challenges for swift approval processes. This study aims
to find the best machine learning algorithms for credit card approval by
searching 208 papers from various sources, which are then narrowed down through
inclusion and exclusion criteria to the 23 most relevant papers. The highlighted issues include
time-consuming manual approval, high use of human resources, and possible human
errors in the process. The manual method's inefficiencies can lead to wrong
approvals, which could cause financial losses and harm the banks' reputation,
especially if it happens at a large scale. In this situation, machine learning
can offer a solution for faster approval procedures and fewer errors. To address
these concerns, this research evaluates different machine learning algorithms,
including decision trees, random forests, logistic regression, support vector
machines, and artificial neural networks. The evaluation considers algorithm
ranking, statistical measures, and the nature of the algorithms to understand
their effectiveness and potential for overfitting. The author's findings
emphasize that adopting machine learning algorithms like random forests,
logistic regression, and support vector machines can significantly enhance credit
card approval processes. These algorithms exhibit higher accuracy and a reduced
risk of overfitting, contributing to overall bank performance improvement.
Furthermore, recommendations are provided for a more effective model development
process, including choosing suitable methodologies, exploring data, reducing
complexity, tuning parameters, and validating results. By following these
suggestions, banks can enhance model performance while minimizing overfitting
concerns. This systematic review not only underscores the importance of embracing
machine learning for credit card approval but also offers practical insights
into selecting algorithms and refining model development strategies. By
embracing advanced algorithms and improved model building techniques, banks can
navigate challenges posed by increasing credit card applications and establish
more efficient, accurate, and dependable credit approval systems. |
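Sketch: |
A minimal illustrative sketch of the kind of cross-validated comparison the review recommends, run here on synthetic applicant data rather than a real credit dataset; the feature count and scoring choice are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic features standing in for applicant attributes (income, history, etc.).
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean {scores.mean():.3f}, std {scores.std():.3f}")
Comparing mean accuracy together with its spread across folds is one simple way to weigh predictive power against the overfitting risk the review highlights. |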
Keywords: |
Credit Card, Credit Card Approval, Manual, Machine Learning |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
REVOLUTIONIZING HISTORICAL DOCUMENT ANALYSIS: HOW DOES DEEP LEARNING UNVEIL NEW
INSIGHTS IN ANCIENT TEXTS? |
Author: |
HASSAN HAZIMZE, SALMA GAOU, KHALID AKHLIL |
Abstract: |
Exploring the depths of history through its documents, the field of historical
document analysis is critical in reconstructing our past. Traditionally, this
field has relied on detailed manual examination by specialists. However, it is
currently undergoing a significant transformation with the integration of deep
learning methodologies, a shift that our study rigorously investigates.
Emphasizing the transition from traditional techniques to deep learning
approaches, particularly convolutional neural networks (CNNs), we highlight how
these advanced algorithms substantially automate and enhance the recognition and
transcription of ancient scripts. This digital transformation not only expedites
text processing but also achieves a level of precision surpassing that of manual
methods. The core of our research is the innovative application of deep
learning in character recognition, an essential step for accurately digitizing
centuries-old manuscripts. We demonstrate the effectiveness of deep learning,
especially CNNs, in identifying and converting diverse styles of handwritten
script. This critical advancement is pivotal for preserving and thoroughly
studying historical documents. Our findings reveal the profound impact of CNNs
in enhancing both the accuracy and speed of character identification, marking a
turning point in historical document analysis. By integrating proven
historical methodologies with advanced deep learning technologies, our study
makes a substantial and explicit contribution to the field of historical
document analysis. This fusion offers new perspectives for the detailed study of
ancient texts and aids in their digital preservation. Our approach not only
enriches our understanding of historical documents but also significantly
enhances their analysis with the precision and accessibility afforded by
advancements in deep learning. Conclusively, this research establishes a new
paradigm in historical document analysis, addressing the research problem of
efficient and accurate character recognition and contributing novel insights
into the application of CNNs in this domain. |
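Sketch: |
A minimal illustrative sketch of a convolutional classifier for segmented character crops, of the general kind the study discusses; the 32x32 grayscale input, the 26-class alphabet, and the layer sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class GlyphCNN(nn.Module):
    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

crops = torch.randn(4, 1, 32, 32)        # placeholder segmented character images
print(GlyphCNN()(crops).shape)           # torch.Size([4, 26]) -> one score per character class
In a full pipeline the crops would come from page segmentation of the digitized manuscript, and the class set would match the historical script being transcribed. |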
Keywords: |
Character Recognition, Historical Documents, Deep Learning, Convolutional Neural
Networks, Document Analysis, Information Extraction. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
AN END-TO-END DEEP LEARNING APPROACH FOR AN INDIAN ENGLISH REPOSITORY |
Author: |
PRATHIBHA SUDHAKARAN, ASHWANI KUMAR YADAV, SUNIL KARAMCHANDANI |
Abstract: |
A voice recognition system that immediately translates raw audio waveforms into
text without the need for separate components for language and acoustic
modelling or manually constructed feature engineering is known as end-to-end
deep learning for continuous speech recognition. It learns the whole audio to
text mapping using a single deep neural network model. Conventional speech
systems rely on intricate processing pipelines, but this method is far simpler.
An end-to-end model in voice recognition is a simple single model that operates
directly on words, subwords, or characters and may be trained from the ground
up. This simplifies decoding by doing away with the requirement for both
explicit phone modelling and a pronunciation lexicon. DeepSpeech was used to
construct and test the model, which was designed for Indian English.
Additionally, a comparison is made between the results of the bi-directional
RNN-based system and the traditional HMM model. With our method, we can quickly
get a large amount of heterogeneous data for training due to a number of unique
data synthesis techniques and an extremely effective RNN development system that
utilises several GPUs. The connectionist temporal classification (CTC) objective
function is used to infer the alignments between speech and label sequences,
obviating the requirement for pre-generated frame labels. Experiments
demonstrate that the RNN-based model achieves comparable word error rates
(WERs) while also significantly speeding up the decoding process when compared
to a traditional Kaldi-based hybrid HMM system. |
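Sketch: |
A minimal illustrative sketch of the alignment-free CTC objective on top of a bidirectional RNN, in the spirit of the DeepSpeech-style setup described; the filterbank dimension, label set, and random tensors are assumptions.
import torch
import torch.nn as nn

num_labels = 29                       # e.g. 26 letters + space + apostrophe, with index 0 as the CTC blank
rnn = nn.GRU(input_size=80, hidden_size=256, num_layers=2,
             bidirectional=True, batch_first=True)
proj = nn.Linear(2 * 256, num_labels)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(4, 100, 80)       # (batch, time, acoustic features) placeholder
targets = torch.randint(1, num_labels, (4, 20))          # character labels, never the blank
input_lengths = torch.full((4,), 100, dtype=torch.long)
target_lengths = torch.full((4,), 20, dtype=torch.long)

log_probs = proj(rnn(feats)[0]).log_softmax(-1).transpose(0, 1)   # (time, batch, labels)
loss = ctc(log_probs, targets, input_lengths, target_lengths)     # no frame-level alignment needed
loss.backward()
print(loss.item())
Because CTC marginalizes over all alignments between the frame sequence and the label sequence, no pre-generated frame labels are required, exactly as the abstract notes. |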
Keywords: |
End-To-End, HMM, CTC, RNN, CSR, Indian English, Deep Speech |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
RESEARCH TRENDS IN CONFIDENCE ASSESSMENT: A SYSTEMATIC LITERATURE REVIEW |
Author: |
HANA FAKHIRA ALMARZUKI, KHYRINA AIRIN FARIZA ABU SAMAH, LALA SEPTEM RIZA,
SHARIFALILLAH NORDIN |
Abstract: |
Confidence assessment is vital for validating decisions, managing risks, and
improving the quality of knowledge. However, there is a noticeable gap in
comprehensive research aimed at investigating, understanding, and interpreting
the evolving trends in confidence assessment. In light of this, the study
involves conducting a systematic literature review to evaluate the nuances and
key findings of previous works, shedding light on the current state of knowledge
in this field. Emphasizing the significance of confidence, particularly in
learning, is essential for accurately determining a student's level of
knowledge. Our systematic review, following Preferred Reporting Items for
Systematic Reviews and Meta-Analyses (PRISMA) guidelines, analyzes 39 studies
(2018-2023) from Scopus, Web of Science, and Science Direct. Four primary
themes (aims, methods, approaches, and Likert scale types) are unfolded into 27
sub-themes, offering a comprehensive view of confidence assessment research
trends. Notably, in fields like medicine, confidence assessment is pivotal, with
pre- and post-surveys using a 5-point Likert scale being predominant. By
synthesizing these findings, this review informs future research, enhances methods, and
contributes to advancing our collective understanding of confidence assessment
in knowledge creation. |
Keywords: |
Confidence Assessment, Systematic Literature Review, Knowledge, Confidence,
Research Trend |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
BLOCKCHAIN BASED GENUINE AND TRANSPARENT CHARITY APPLICATION |
Author: |
ABHIJEET R. RAIPURKAR, MANOJ B. CHANDAK, AASHIA SORATHIA, ISHA MANKAR, PRASANNA
TAPKIRE, PUSHKAR POPHALI |
Abstract: |
In an era where online philanthropy has gained momentum, ensuring the
authenticity of participants and maintaining transparency in transactions
becomes imperative. Addressing these concerns, this research presents a two-fold
solution. At the core of this system is a Decentralized Identity Verification
feature, which ensures the authenticity of both donors and charity organizers.
Participants validate their identities using established documents, which are
then verified by respective issuing authorities. This platform enables users to
initiate charity campaigns. Potential donors can view these campaigns,
contribute as desired, and are further incentivized with a unique NFT
(Non-Fungible Token) reward system upon their contributions. These NFTs, serving
as digital memorabilia, commemorate the donor's charitable actions, adding an
element of appreciation and potential digital value. Moreover, donors witness
real-time transparent transactions, ensuring their contributions reach intended
beneficiaries. Together, these systems, combined with the allure of NFTs,
instill trust in digital charitable endeavors, paving the way for more secure,
rewarding, and transparent online giving. |
Keywords: |
Blockchain, Decentralization, Smart Contracts, Rewards, NFT. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
NOVEL DESIGN OF VOLUMETRIC ANTENNA FOR WIDEBAND APPLICATIONS |
Author: |
SURYA PRASADA RAO BORRA, A. GEETHA DEVI, , K. VIDYA SAGAR, SUMALATHA MADUGULA,
MORSA CHAITANA, VUTUKURI SARVANI DUTI REKHA |
Abstract: |
Wideband antennas are significant for ground-penetrating radars and for medical
microwave imaging to detect breast cancer. This paper focuses on designing a
volumetric wideband antenna that achieves high directivity, high gain, and high
bandwidth to address the challenges of both radar and medical imaging
applications. The width of the antenna, the effective dielectric constant, the
effective length, the extension length, and the microstrip patch length are
meticulously tuned to achieve high directivity, high gain, and bandwidth. The
proposed antenna is designed until the return loss, directivity, VSWR, gain, and
bandwidth values meet the requirements of volumetric wideband applications. The
top layer is a hexagonal patch; an octagon is subtracted from the hexagonal
patch, and a circle is then subtracted from the square. FR4 is used as the
dielectric substrate. The return loss (S11) parameter is analysed at each stage
of the antenna. The antenna resonates at 14.22 GHz and 15.33 GHz, the voltage
standing wave ratio (VSWR) is estimated, and a maximum peak gain (G) of 4.59 dB
is achieved. The bandwidth achieved is 1.88. The input impedance is 50 Ω, and
the current distribution and electric field distribution are evaluated. The
VSWR, gain, return loss, and bandwidth values are analysed at each stage of the
design. The results achieved with the proposed antenna suit a wide range of
wireless sensor applications, including the Ku band and, specifically, the
medical field. |
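Sketch: |
For reference, the standard transmission-line design relations behind the quantities the abstract tunes (patch width, effective dielectric constant, length extension, and patch length) are, for a rectangular patch on a substrate of permittivity \varepsilon_r and height h resonating at f_r (the hexagonal geometry in the paper modifies this rectangular starting point):
\begin{aligned}
W &= \frac{c}{2 f_r}\sqrt{\frac{2}{\varepsilon_r + 1}},\\
\varepsilon_{\mathrm{eff}} &= \frac{\varepsilon_r + 1}{2} + \frac{\varepsilon_r - 1}{2}\left(1 + 12\,\frac{h}{W}\right)^{-1/2},\\
\Delta L &= 0.412\,h\,\frac{\left(\varepsilon_{\mathrm{eff}} + 0.3\right)\left(\frac{W}{h} + 0.264\right)}{\left(\varepsilon_{\mathrm{eff}} - 0.258\right)\left(\frac{W}{h} + 0.8\right)},\\
L &= \frac{c}{2 f_r \sqrt{\varepsilon_{\mathrm{eff}}}} - 2\,\Delta L.
\end{aligned}
The stated tuning of width, effective dielectric constant, effective length, extension length, and patch length corresponds to iterating relations of this kind until the simulated S11, VSWR, gain, and bandwidth targets are met. |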
Keywords: |
Bandwidth, Gain, Voltage Standing Wave Ratio (VSWR), Volumetric Antenna,
Return Losses. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
BUILDING TRUST IN IOT: LEVERAGING CONSORTIUM BLOCKCHAIN FOR SECURE
COMMUNICATIONS |
Author: |
MOHAMMED AMIN ALMAIAH, AITIZAZ ALI, RIMA SHISHAKLY, TAYSEER ALKHDOUR, ABDALWALI
LUTFI, MAHMAOD ALRAWAD |
Abstract: |
As the Internet of Things (IoT) proliferates, ensuring the security and
trustworthiness of communications becomes paramount. This paper introduces a
novel approach to address these concerns by leveraging Consortium Blockchain
technology. The proposed system focuses on building trust in IoT environments
through a decentralized and transparent framework. We explore the integration of
Consortium Blockchain as a foundational layer for secure communication within
IoT ecosystems. The consortium model, involving a group of trusted entities,
facilitates consensus mechanisms and smart contracts to establish and maintain a
reliable reputation system. This approach mitigates traditional vulnerabilities
associated with centralized systems and enhances the overall security posture of
IoT networks. Key components of the proposed system include a consensus
algorithm for agreement among consortium members, a transparent and immutable
ledger for recording interactions, and smart contracts governing trust and
reputation protocols. By utilizing blockchain technology, the system not only
ensures data integrity and confidentiality but also instills confidence in the
reliability of IoT devices and the information exchanged. Through simulation and
analysis, we demonstrate the effectiveness of our Consortium Blockchain-based
solution in enhancing the security and trustworthiness of IoT communications.
The results indicate improved resistance to malicious attacks and a resilient
foundation for building trust in the dynamic and interconnected world of IoT.
This research contributes to the ongoing discourse on securing IoT ecosystems,
offering a practical and scalable solution for building trust through Consortium
Blockchain technology. |
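Sketch: |
A toy conceptual sketch (not the paper's system and not a real blockchain) of two ingredients the abstract lists: an immutable, hash-chained ledger of interactions and a simple majority agreement among consortium members before a record is committed. The member names and the voting rule are assumptions.
import hashlib
import json
import time

class ConsortiumLedger:
    def __init__(self, members):
        self.members = members
        self.chain = [{"index": 0, "data": "genesis", "prev": "0", "hash": "0"}]

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def propose(self, data, votes):
        # Commit only if a strict majority of consortium members approve the record.
        if sum(votes.values()) <= len(self.members) / 2:
            return False
        block = {"index": len(self.chain), "time": time.time(),
                 "data": data, "prev": self.chain[-1]["hash"]}
        block["hash"] = self._hash(block)   # each block seals the previous one
        self.chain.append(block)
        return True

ledger = ConsortiumLedger(["gateway-A", "gateway-B", "gateway-C"])
ok = ledger.propose({"device": "sensor-42", "reading": 21.5},
                    votes={"gateway-A": True, "gateway-B": True, "gateway-C": False})
print(ok, len(ledger.chain))
In the proposed framework these roles are played by the consortium's consensus algorithm and smart contracts, with device reputation updated as part of the committed records. |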
Keywords: |
Internet of Things (IoT); Consortium Blockchain; Cyber-attacks; Trust in IoT. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
OBJECT-ORIENTED ONLINE COURSE RECOMMENDATION SYSTEMS BASED ON DEEP NEURAL
NETWORKS |
Author: |
HAO LUO, NOR AZURA HUSIN, SINA ABDIPOOR, TEH NORANIS MOHD ARIS, MOHD YUNUS
SHARUM, MASLINA ZOLKEPLI |
Abstract: |
In the era of widespread online learning platforms, students commonly face the
challenge of navigating an extensive array of available courses. Identifying
relevant and fitting options aligned with students' educational objectives and
interests is highly complex. The impact of system maintainability and
scalability on escalated development costs is often neglected in the literature.
To tackle these issues, this paper introduces a comprehensive analysis and
design of an object-oriented online course recommendation system. Employing a
deep neural network algorithm for course recommendation, our system adeptly
captures user preferences, course attributes, and intricate relationships
between them. This methodology facilitates the delivery of personalized course
recommendations precisely tailored to individual needs and preferences. The
incorporation of object-oriented design principles such as encapsulation,
inheritance, and polymorphism ensures modularity, maintainability, and
extensibility, thereby easing future system enhancements and adaptations. The
main contribution of this paper is to propose a new idea of an adaptive learning
system that combines deep learning for personalized recommendations with
object-oriented design for scalability and continuous improvement. This
practical solution demonstrably enhances online learning experiences by
tailoring recommendations to individual needs and evolving trends. Evaluation of
the proposed system's performance utilizes real-world online course datasets,
demonstrating its efficacy in furnishing accurate and personalized course
recommendations, ultimately enhancing the overall learning experience for
students. |
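Sketch: |
A minimal illustrative sketch of how object-oriented structure and a neural scoring model can be combined in a course recommender: a base class fixes the recommendation interface, and a subclass encapsulates an embedding-plus-MLP scorer. The class names, sizes, and interaction-free scoring are assumptions, not the paper's design.
import torch
import torch.nn as nn

class Recommender(nn.Module):
    """Base class: subclasses must provide score(users, courses)."""
    def recommend(self, user_id, course_ids, k=5):
        with torch.no_grad():
            scores = self.score(torch.tensor([user_id] * len(course_ids)),
                                torch.tensor(course_ids))
        return [course_ids[int(i)] for i in scores.argsort(descending=True)[:k]]

class DeepCourseRecommender(Recommender):
    """Inherits the interface, overrides scoring with user/course embeddings and an MLP."""
    def __init__(self, n_users=1000, n_courses=500, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.course_emb = nn.Embedding(n_courses, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def score(self, users, courses):
        x = torch.cat([self.user_emb(users), self.course_emb(courses)], dim=-1)
        return self.mlp(x).squeeze(-1)

model = DeepCourseRecommender()
print(model.recommend(user_id=7, course_ids=list(range(20)), k=3))
Because the ranking logic lives in the base class and the scoring model is encapsulated in the subclass, the network can be retrained or swapped without touching the rest of the system, which is the maintainability argument the abstract makes. |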
Keywords: |
Object-oriented, Deep Neural Network, Online Course Recommendation, System
Design |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
DEVELOPMENT AND SIMULATION OF MICROSTRIP PATCH ANTENNAS FOR 5G WIRELESS
CONNECTIVITY |
Author: |
NEERU MALIK, SHRUTI VASHIST, AJAY N. PAITHANE, MUKIL. ALGIRISAMY |
Abstract: |
This study introduces a microstrip patch antenna designed to support 5G
communication technology. It operates effectively at central frequencies of
38GHz and 54GHz, offering respective bandwidths of 1.94GHz and 2GHz. The antenna
design prioritizes compactness, affordability, and suitability for miniature
devices. It consists of an FR4 epoxy substrate, patch, and ground. The substrate
boasts a dielectric constant of 3.8, a minimal loss tangent of 0.02, and adheres
to a standard thickness of 1.57mm. The substrate measures 6mm x 6.25mm, and the
patch's dimensions are 2mm x 2mm, employing the microstrip-line feeding
technique. For mobile applications within the millimeter-wave spectrum, including
frequencies of 38.6GHz, 47.7GHz, and 54.3GHz, accompanied by bandwidths of
3.5GHz, 2.5GHz, and 1.3GHz, this research also proposes an antenna array
comprising four components, spaced at 4mm intervals. The overall antenna size is
6mm x 6.25mm x 0.578mm. The proposed antenna design undergoes rigorous
simulation using HFSS software for performance validation. |
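Sketch: |
As a rough consistency check, the half-wavelength approximation for a patch of length L = 2 mm on a substrate with \varepsilon_r = 3.8 (neglecting fringing and using the substrate permittivity rather than the effective one) gives a resonance close to the stated 38 GHz band:
f_r \approx \frac{c}{2L\sqrt{\varepsilon_r}} = \frac{3\times 10^{8}\,\mathrm{m/s}}{2 \times 0.002\,\mathrm{m} \times \sqrt{3.8}} \approx 38.5\ \mathrm{GHz}.
This matches the lower of the reported operating frequencies; the full behaviour, including the higher resonances and the array configuration, is what the HFSS simulation validates. |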
Keywords: |
5G, Microstripline, Antenna Array, Taperedline Feeding, HFSS Software. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
Title: |
A NOVEL APPROACH TO UNIVERSAL SYSTEM WITH LOSSES THAT FOCUS ON THE ECONOMIC
DISPATCH FOR THE ENERGY INTEGRATION IN MICRO GRID AND ACCENTUATE THE HEURISTIC
OPTIMIZATION |
Author: |
V. SAI GEETHA LAKSHMI, M. VANITHASRI, M. VENU GOPAL RAO |
Abstract: |
Rapid development of renewable infrastructure and popular support for green
energy have led to the emergence of hybrid generating systems in power networks,
that is, microgrids. The effective scheduling of all power producing facilities
to meet the increasing power demand is one of the most significant challenges in
the design and operation of an electric power generation system. Scheduling
power generating units to minimize costs and meet system restrictions is an
example of economic load dispatch (ELD), a generic operation in the electrical
power system. Due to their greater global solution capacity, flexibility, and
derivative-free construction, metaheuristic algorithms are gaining favor for
addressing ELD problems. This study develops a novel hybrid optimization-based
solution model for the ELD problem of integrating renewable resources. In
addition, this work takes into account multiple objectives, including the full
cost of wind generation, the full cost function of thermal units, and a penalty
cost function. The optimal output of thermal power plants is maximized using the
hybrid optimization model, subject to both upper and lower limits. In addition, the hybrid
optimization model selects the best turbines to maximize wind energy production
in response to specific needs. The efficiency and viability of the suggested
algorithm were demonstrated using a test system with 10 units. Heuristic
optimization techniques are applied to obtain numerical results for the static
and dynamic ELD problems; these results show that the suggested elephant herding
optimization (EHO) algorithm outperforms state-of-the-art methods in most of the test situations,
proving its superiority and practicality. |
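Sketch: |
A minimal illustrative sketch of the optimization problem at the heart of ELD: minimizing total quadratic fuel cost subject to the power balance and unit limits. The three-unit data below are made-up placeholders (the paper uses a ten-unit system and solves it with the EHO metaheuristic rather than the gradient-based solver shown here).
import numpy as np
from scipy.optimize import minimize

a = np.array([0.008, 0.010, 0.012])      # quadratic fuel-cost coefficients
b = np.array([7.0, 8.5, 9.0])            # linear coefficients
c = np.array([200.0, 180.0, 140.0])      # constant costs
p_min = np.array([50.0, 40.0, 30.0])     # lower generation limits (MW)
p_max = np.array([300.0, 250.0, 200.0])  # upper generation limits (MW)
demand = 450.0                            # total load to be met (MW), losses neglected

def total_cost(p):
    return np.sum(a * p**2 + b * p + c)

res = minimize(total_cost, x0=(p_min + p_max) / 2,
               bounds=list(zip(p_min, p_max)),
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - demand}])
print(res.x, total_cost(res.x))           # per-unit dispatch (MW) and total fuel cost
A metaheuristic such as EHO searches the same feasible region but, being derivative-free, can also handle the non-smooth wind-cost and penalty terms that the hybrid model adds. |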
Keywords: |
Green Energy, Hybrid Generating Systems, Economic Load Dispatch, Heuristic
Optimization, Elephant Herding Optimization. |
Source: |
Journal of Theoretical and Applied Information Technology
15th February 2024 -- Vol. 101. No. 3-- 2024 |
Full
Text |
|
|
|