|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles from a
wide range of Information Technology disciplines, from the most basic research
to the most innovative technologies. Please submit your papers electronically to
our submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a
compatible format so that they may be evaluated for publication in an upcoming
issue. This journal uses a blinded review process; please include all personally
identifiable information in the manuscript when submitting it for review, and we
will remove the necessary information on our side. Submissions to JATIT should
be full research / review papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
May 2024 | Vol. 102 No. 9 |
Title: |
ENERGY EFFICIENT ROUTING IN QUANTUM FLYING AD HOC NETWORK (Q-FANET) USING
MAMDANI FUZZY INFERENCE ENHANCED DIJKSTRA'S ALGORITHM (MFI-EDA) |
Author: |
S. P. GEETHA, N. MOHANA SORUBHA SUNDARI, J. RAMKUMAR, R. KARTHIKEYAN |
Abstract: |
Quantum Flying Ad Hoc Networks (Q-FANETs) present a unique paradigm for
communication, leveraging quantum principles to enable secure and efficient data
transmission. However, routing in Q-FANETs poses significant challenges due to
dynamic topology changes and limited communication resources. This paper
proposes a novel routing approach utilizing the Mamdani Fuzzy Inference Enhanced
Dijkstra's Algorithm (MFI-EDA) tailored for Q-FANET environments. The working
mechanism of MFI-EDA involves integrating fuzzy logic with Dijkstra's algorithm
to intelligently adapt routing decisions based on environmental conditions, such
as node mobility and energy levels, and network dynamics, such as link quality
and traffic congestion. This hybrid approach enhances traditional routing
algorithms by incorporating fuzzy logic to provide robustness and adaptability
in Q-FANETs. The essential contribution lies in the seamless integration of
fuzzy inference, which enables MFI-EDA to dynamically adjust routing paths based
on real-time environmental feedback, resulting in improved energy efficiency and
reliability. The performance of MFI-EDA in Q-FANET scenarios is evaluated
through extensive simulation experiments, demonstrating its effectiveness in
achieving energy-efficient and reliable routing. Results indicate that MFI-EDA
outperforms traditional routing approaches, offering promising prospects for
efficient communication in quantum-enabled ad hoc networks. |
Keywords: |
Quantum Network, Q-FANETs, Mamdani Fuzzy Inference, Dijkstra's Algorithm,
Routing |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
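The MFI-EDA entry above combines Mamdani-style fuzzy inference with Dijkstra's algorithm so that link costs reflect conditions such as residual energy and link quality. The paper's actual rule base and topology are not given here, so the following is only an illustrative sketch under assumptions: a hand-rolled Dijkstra over a toy graph whose edge weights come from a simple two-input, two-rule fuzzy-style cost with triangular memberships.

```python
import heapq

def tri(x, a, b, c):
    """Triangular membership value of x for the fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_link_cost(energy, quality):
    """Toy Mamdani-style cost: low energy OR low quality -> high cost.
    Inputs are normalized to [0, 1]; output is a crisp cost in [1, 10]."""
    low_e, high_e = tri(energy, -0.5, 0.0, 0.6), tri(energy, 0.4, 1.0, 1.5)
    low_q, high_q = tri(quality, -0.5, 0.0, 0.6), tri(quality, 0.4, 1.0, 1.5)
    high_cost = max(low_e, low_q)    # IF energy low OR quality low THEN cost high
    low_cost = min(high_e, high_q)   # IF energy high AND quality high THEN cost low
    total = high_cost + low_cost or 1.0
    return (high_cost * 10.0 + low_cost * 1.0) / total  # centroid-style defuzzification

def dijkstra(adj, src, dst):
    """Classic Dijkstra over an adjacency dict {node: [(neighbor, cost), ...]}."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node in prev:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Hypothetical links: (u, v, residual_energy, link_quality); all values assumed.
links = [("A", "B", 0.9, 0.8), ("B", "D", 0.3, 0.6),
         ("A", "C", 0.7, 0.9), ("C", "D", 0.8, 0.7)]
adj = {}
for u, v, e, q in links:
    w = fuzzy_link_cost(e, q)
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))
print(dijkstra(adj, "A", "D"))
```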
|
Title: |
NEW METHOD FOR FINDING THE WEIGHT DISTRIBUTION AND SPECTRUM BY TESTING THE
OPTIMAL SOLUTION OF THE POINTS IN PG(3,2) |
Author: |
HAMID MOHAMMED KHALAF , ASMAA SALAH ALDDIN SULAIMAN |
Abstract: |
In this paper, the basic solution was found using Vogel's approximation method
for the projective space PG(3,2) when mi = 5, 7, 15, by converting the points
and lines of PG(3,2) into a transportation problem with five rows and seven
columns. The problem was balanced and degeneracy was treated. We then tested the
optimality of the basic solution using the modified distribution method. We then
constructed new linear codes of dimension p = 4 and smallest length mi for which
an [mi, p, d] code exists, and by a geometric method we found the weight
distribution and the spectrum of a linear code [mi, p, d] over the field F2. |
Keywords: |
Vogel's Approximation, Projective Space, Linear Codes, Weight Distribution,
Spectrum. |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
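The entry above converts PG(3,2) into a balanced 5x7 transportation problem, finds a basic feasible solution with Vogel's approximation method, and checks optimality with the modified distribution method. The paper's actual cost matrix is not reproduced here; the sketch below only shows, for an entirely hypothetical balanced 5x7 instance, how the optimal transportation cost that such a test converges to can be computed as a linear program with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical balanced 5x7 transportation instance (costs, supply, demand are assumptions).
cost = np.array([[4, 6, 9, 2, 7, 5, 8],
                 [3, 5, 7, 6, 4, 9, 2],
                 [8, 2, 6, 5, 3, 7, 4],
                 [5, 9, 3, 8, 6, 2, 7],
                 [6, 4, 8, 7, 2, 5, 3]], dtype=float)
supply = np.array([15., 25., 20., 30., 15.])              # row totals
demand = np.array([10., 20., 15., 20., 15., 15., 10.])    # column totals (sum matches supply)

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                 # each supply row must be shipped exactly
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                 # each demand column must be met exactly
    col = np.zeros(m * n)
    col[j::n] = 1.0
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (m * n), method="highs")
print("optimal cost:", res.fun)
print(res.x.reshape(m, n).round(2))   # optimal shipment plan
```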
|
Title: |
EXPLORING ARO-LOGISENT: DELVING INTO DATA COLLECTION FOR ADVANCED SENTIMENT
ANALYSIS WITH AMAMI RABBIT OPTIMIZATION-BASED LOGISTIC REGRESSION |
Author: |
D. J. ANITHA MERLIN , D. VIMAL KUMAR |
Abstract: |
Sentiment analysis, a crucial component of natural language processing, aims to
discern the underlying sentiment conveyed in textual data. This research
explores the fusion of Amami Rabbit Optimization (ARO) with Logistic Regression
(LR) to enhance sentiment analysis performance. ARO, inspired by the foraging
behavior of Amami rabbits, offers a novel metaheuristic approach for optimizing
model parameters, while LR provides a robust framework for sentiment
classification. The proposed integration leverages the strengths of both
methodologies to overcome challenges inherent in sentiment analysis, including
feature selection, model training, and accuracy optimization. This study
investigates the effectiveness of the ARO-LR hybrid approach through empirical
experiments conducted on diverse datasets sourced from social media platforms
and product reviews. Evaluation metrics such as precision, recall, F1-score, and
accuracy are employed to assess the performance of the integrated model. Results
indicate significant improvements in sentiment classification accuracy and
robustness compared to traditional LR models. The findings highlight the
efficacy of integrating metaheuristic optimization techniques with conventional
machine learning algorithms for advancing sentiment analysis capabilities in
real-world applications. |
Keywords: |
Sentiment, Reviews, Online Shopping, Classification, Amazon, Neural Network,
Rabbit Optimization, Logistic Regression |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
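The ARO-LR entry couples a rabbit-foraging metaheuristic with Logistic Regression for sentiment classification. Since the Amami Rabbit Optimization update rules are specific to that paper, the sketch below substitutes a generic population-based random search as a stand-in for the metaheuristic and simply tunes the regularization strength C of a TF-IDF + Logistic Regression pipeline by cross-validation; the tiny corpus and search ranges are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus standing in for social-media / product-review data.
texts = ["great product, loved it", "terrible quality, waste of money",
         "works fine, happy with the purchase", "awful support, very disappointed",
         "excellent value", "broke after one day"] * 5
labels = np.array([1, 0, 1, 0, 1, 0] * 5)

rng = np.random.default_rng(0)

def fitness(log_c):
    """Cross-validated accuracy of TF-IDF + LR for a candidate C = 10**log_c."""
    model = make_pipeline(TfidfVectorizer(),
                          LogisticRegression(C=10.0 ** log_c, max_iter=1000))
    return cross_val_score(model, texts, labels, cv=3).mean()

# Generic population-based search (stand-in for the ARO metaheuristic).
population = rng.uniform(-2, 2, size=8)            # candidate log10(C) values
for generation in range(10):
    scores = np.array([fitness(c) for c in population])
    best = population[scores.argmax()]
    step = 1.0 / (generation + 1)                  # "forage" near the best with shrinking steps
    population = best + rng.normal(0.0, step, size=population.size)

print("best C:", 10.0 ** best, "CV accuracy:", scores.max())
```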
|
Title: |
SKIN LESION CLASSIFICATION FOR MELANOMA USING DEEP LEARNING |
Author: |
K SANDHYARANI KUNDRA , I V S VENUGOPAL , CH SITA KUMARI , ANUSHA GUTTI |
Abstract: |
Deep learning is a branch of machine learning and Artificial Intelligence that
imitates the way people learn specific types of information. Image
classification is an area in Computer Vision where a computer can analyze an
image and identify or estimate the probability of the class or category the
image falls under. Melanoma has become more common over the past 30 years, and
early detection has a big effect on lowering death rates from this type of skin
cancer. The existing system consists of several physical laboratory test reports
that are analyzed by a doctor or a cancer expert to detect the presence of
melanoma. This alone is insufficient in countries with huge populations like
India, where cancer hospitals and labs are few. The proposed system consists of
an image classifier that takes a dermatoscopic image as input and predicts
whether a person has melanoma or not. A reliable automated system that can tell
whether melanoma is present in a dermatoscopic image of lesions is a very
helpful tool for medical diagnosis. This system uses an ISIC dataset specially
curated for melanoma analysis. Thus, the proposed system serves as an automated,
expedient, and practical method for detecting melanoma from the image of a skin
lesion. |
Keywords: |
Computer Vision, Deep Learning, Image Classification, Machine Learning, Transfer
Learning |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
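The melanoma entry describes a dermatoscopic image classifier built on an ISIC dataset with deep / transfer learning. The exact backbone and training regime are not stated in the abstract, so the snippet below is only an illustrative transfer-learning setup in Keras (a frozen MobileNetV2 feature extractor with a binary head); the directory layout and hyperparameters are assumptions.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)

# Hypothetical folder layout: isic_data/{train,val}/{melanoma,benign}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "isic_data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "isic_data/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False                      # freeze the pretrained feature extractor

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # melanoma vs. benign
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```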
|
Title: |
HYBRID FINGERPRINTING AND PEDESTRIAN DEAD RECKONING USING MACHINE LEARNING FOR
INDOOR POSITIONING SYSTEM |
Author: |
ALFITO HARLIM, LIM , GEDE PUTRA KUSUMA |
Abstract: |
Indoor positioning has been one of the most active research topics in recent
years because of its potential use in many business platforms. BLE technology is
used for indoor positioning because it reduces material and energy costs over
time compared with other, more expensive technologies. Many studies have sought
to increase positioning accuracy, and one of the latest improvements is a hybrid
approach that combines the results of two methods. In this study, we propose a
positioning algorithm for an indoor positioning system using a fingerprinting
approach with two machine learning models, Artificial Neural Network (ANN) and
Support Vector Regression (SVR), and a hybrid of fingerprinting and Pedestrian
Dead Reckoning (PDR) using ANN and SVR. A hybrid of fingerprinting using
Weighted K-Nearest Neighbor (W-KNN) and PDR is used as the benchmark of this
experiment. ANN and SVR are the machine learning models used for both
fingerprinting and the hybrid method combined with PDR. For the benchmark, the
hybrid of W-KNN fingerprinting and PDR achieved a positioning root-mean-squared
error of 147.94 cm. The proposed hybrid fingerprinting method achieved 103.29 cm
using ANN and 160.86 cm using SVR. |
Keywords: |
Indoor Positioning System, Fingerprinting Methods, Pedestrian Dead Reckoning,
Hybrid Method, Machine Learning |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
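The indoor-positioning entry trains ANN and SVR fingerprinting models on BLE RSSI vectors and then fuses their position estimates with Pedestrian Dead Reckoning. The fusion weights, dataset, and PDR mechanics below are assumptions; this sketch only illustrates a regression-based fingerprint estimate combined with a PDR estimate through a simple weighted average.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(1)
# Hypothetical training data: RSSI from 5 BLE beacons -> (x, y) position in cm.
rssi_train = rng.uniform(-90, -40, size=(200, 5))
xy_train = rng.uniform(0, 1000, size=(200, 2))

ann = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                   random_state=1).fit(rssi_train, xy_train)
svr = MultiOutputRegressor(SVR(C=10.0)).fit(rssi_train, xy_train)  # SVR is single-output, so wrap it

def hybrid_estimate(rssi_now, pdr_xy, model, w_fp=0.6):
    """Fuse a fingerprinting fix with a PDR dead-reckoned position (weights assumed)."""
    fp_xy = model.predict(rssi_now.reshape(1, -1))[0]
    return w_fp * fp_xy + (1.0 - w_fp) * np.asarray(pdr_xy)

rssi_now = rng.uniform(-90, -40, size=5)
pdr_xy = np.array([420.0, 310.0])        # position propagated from step counts + heading
print("ANN hybrid:", hybrid_estimate(rssi_now, pdr_xy, ann))
print("SVR hybrid:", hybrid_estimate(rssi_now, pdr_xy, svr))
```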
|
Title: |
IMPLEMENTATION OF ATTACK DETECTION AND MITIGATION FOR SECURING CLOUD BASED IOT
NETWORK |
Author: |
ARCHANA D. WANKHADE, KISHOR P. WAGH |
Abstract: |
Nowadays, the Internet of Things (IoT) has become one of the fastest-growing
computing technologies, making human life easier and more comfortable. IoT is
important for smart systems such as homes, transportation, and farming. Using
this robust technology improves efficiency and mobility and reduces cost.
However, due to the heterogeneous nature of IoT networks, they have many
security issues. The devices in an IoT network are frequently attacked by
intruders, so the security risk in an IoT network is higher than in other
computing paradigms, which is why traditional security techniques are not
effective in these networks. A holistic solution is required to fulfil the
security requirements of IoT networks. Existing measures for issues such as
authentication, access control, and network security do not meet the challenges
of large IoT systems with many smart devices. In a cloud-based IoT network,
sensor data is transmitted from the IoT network to the cloud, and this path is
vulnerable. The objective of this work is to secure this path through the
proposed Attack Detection and Mitigation approach using machine learning (ML)
for cloud-based IoT networks. For this work, the licensed software NetSim
Standard v13.3 is used to identify malicious nodes in the cloud-based IoT
network. The work was tested with several ML approaches for attack detection and
mitigation, and their efficiency was evaluated to improve performance. The
proposed model performs best with a Decision Tree (train score 100%, test score
99.39%), compared with K-Nearest Neighbors (train score 98.52%, test score
98.24%) and Logistic Regression (train score 92.87%, test score 92.31%). |
Keywords: |
Attack Detection and Mitigation, IoT, KNN, Lightweight Cryptography, Machine
Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
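The entry above compares Decision Tree, K-Nearest Neighbors, and Logistic Regression for detecting malicious nodes in a cloud-based IoT network using traffic generated with NetSim. The NetSim trace format is not shown, so the sketch below only illustrates the train/test-score comparison on a synthetic feature table standing in for flow features; every feature and label here is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for per-flow features (packet rate, size, inter-arrival, ...) and labels.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: train={model.score(X_tr, y_tr):.4f}  test={model.score(X_te, y_te):.4f}")
    # A flagged (malicious) node would then be mitigated, e.g. dropped from the routing table.
```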
|
Title: |
AN ENHANCED COMMUNITY DETECTION METHOD USING LABEL PROPAGATION ALGORITHM WITH
ANT COLONY OPTIMIZATION TECHNIQUE |
Author: |
D. DHANALAKSHMI , DR. G. RAJENDRAN |
Abstract: |
The structure of a community is a crucial element in understanding complex
networks. It provides valuable insights into both the arrangement and function
of the network, aiding our comprehension of dynamic phenomena such as epidemics
and information propagation. The Label Propagation Algorithm (LPA) is widely
recognized for community detection due to its linear time complexity; compared
with other algorithms, it has advantages in running time, simplicity, and the
small amount of a priori information needed about the network structure.
However, LPA has a notable drawback: it generates unstable community
assignments, resulting in different combinations of communities with each
execution on the same network. This unpredictability fosters instability and the
emergence of large, less informative communities. To overcome these drawbacks,
an Enhanced Community Detection approach combining the Label Propagation
Algorithm and Ant Colony Optimization (ECDLPA-ACO) is proposed in this paper.
ECDLPA-ACO not only propagates labels but also optimizes modularity measures by
clustering similar vertices based on local similarities within the network.
Experimental results on established social network datasets showcase the
superiority of ECDLPA-ACO over comparable community detection algorithms such as
the Louvain Algorithm, the Infomap Algorithm, and the traditional Label
Propagation Algorithm. ECDLPA-ACO excels in scalability, average execution time,
modularity, and computational efficiency. |
Keywords: |
Community Detection, Louvain Algorithm, Infomap Algorithm, Label Propagation
Algorithm, Ant Colony Optimization technique |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
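ECDLPA-ACO augments label propagation with an ant-colony step that optimizes modularity. The ACO refinement itself is specific to the paper, so the fragment below only shows the baseline it builds on: NetworkX's label propagation communities and the modularity score that such a refinement would try to improve.

```python
import networkx as nx
from networkx.algorithms import community

# A small benchmark social network (Zachary's karate club ships with NetworkX).
G = nx.karate_club_graph()

# Plain label propagation: each node repeatedly adopts the most frequent label
# among its neighbours until labels stabilise (results can vary run to run).
lpa_communities = list(community.label_propagation_communities(G))
print("communities found:", len(lpa_communities))

# Modularity of the resulting partition -- the quantity an ACO-style
# refinement stage would then try to increase by merging or relabelling nodes.
print("modularity:", community.modularity(G, lpa_communities))
```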
|
Title: |
INTEGRATING AI FOR ENHANCED FAULT DETECTION IN INDUSTRIAL SYSTEMS: EVALUATING
MACHINE AND DEEP LEARNING APPROACHES IN THE INDUSTRY 4.0 AND IIOT ERA |
Author: |
YOUSSEF ZERGUIT , YOUNES HAMMOUDI , MOSTAFA DERRHI |
Abstract: |
In the transformative landscape of Industry 4.0, the integration of Artificial
Intelligence (AI) within the Industrial Internet of Things (IIoT) has emerged as
a cornerstone for advancing operational efficiency and reliability. This paper
explores the application of various AI methodologies, including both machine
learning and deep learning approaches, to enhance fault detection in industrial
systems, particularly focusing on three-phase electrical systems. Utilizing an
integrated system architecture comprising Power Monitoring Units (PMUs) and
advanced computational units, we implement and evaluate a suite of AI models
such as Support Vector Machines (SVM), Decision Trees, K-Nearest Neighbors
(KNN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN),
Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) networks. Our
comprehensive analysis reveals the nuanced capabilities and performance metrics
of these models in the context of real-time fault detection, thereby providing
pivotal insights for deploying AI-driven diagnostics in industrial settings. |
Keywords: |
Three-Phase Systems, Industry 4.0, Industrial Internet of Things (IIoT),
Artificial Intelligence (AI), Machine Learning, Deep Learning, Support Vector
Machines (SVM), Decision Trees, K-Nearest Neighbors (KNN), Artificial Neural
Networks (ANN), Convolutional Neural Networks (CNN), Recurrent Neural Networks
(RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU). |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
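The Industry 4.0 entry benchmarks classical and deep models on measurements from three-phase electrical systems collected through Power Monitoring Units. The actual PMU data is not available here, so the sketch below shows one representative deep pipeline under assumed data shapes: an LSTM classifier over short windows of the three phase currents, with the fault classes treated as hypothetical labels.

```python
import numpy as np
import tensorflow as tf

# Assumed input: windows of 128 samples x 3 phase currents (Ia, Ib, Ic), 4 fault classes.
N_CLASSES, WINDOW, CHANNELS = 4, 128, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, WINDOW, CHANNELS)).astype("float32")   # stand-in PMU windows
y = rng.integers(0, N_CLASSES, size=1000)                         # stand-in fault labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # temporal features per window
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=3, batch_size=32)
```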
|
Title: |
NOVEL ROUTING PROTOCOL TO OVERCOME PACKET DELAY IN MOBILE ADHOC NETWORK |
Author: |
DR. S. HEMALATHA, RABINARAYAN SETHI, M. RAJASEKARAN, RAVULA ARUN KUMAR, M. VIMALA,
VELMURUGAN V, SHAIK RAZIA, J. DEEPA |
Abstract: |
Packet delay in a wireless network degrades overall performance. Much research
has been done to overcome packet delay at the nodes by inventing routing
protocols with the latest methods, but the problem remains largely unsolved.
Existing routing strategies determine paths by dynamically manipulating many
different parameters, which adds overhead to the routing protocol when
determining the path. This article concentrates on inventing a new routing
protocol to overcome packet delay in Mobile Ad hoc Networks with the support of
a simple parameter, the forwarding time of each packet. To achieve this
objective, the Forward Time Based Routing Protocol (FTRP) is introduced to
monitor the forwarding time of every node present in the communication. The
proposed forward time routing protocol was implemented in a network simulator,
and the simulation results were compared with the existing Proposed_TAODV,
C-AODV, A-AODV, and ML-AODV routing protocols. The comparison shows that the
packet delivery ratio is 90% to 94% and the end-to-end delay is 6.2% to 43.4%.
Across all simulated factors, the proposed FTRP gave the best overall MANET
results, with performance factors excellent in 78% of cases. |
Keywords: |
MANET, Attackers, Gray Hole Attackers, Forward Time Detection Technique, Forward
Time |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
INTENT CLASSIFICATION FOR MALAYSIAN ACADEMIC WRITERS PROOFREADER CHATBOT USING
MACHINE LEARNING |
Author: |
SITI NOOR BAINI MUSTAFA, LAILATUL QADRI BINTI ZAKARIA |
Abstract: |
The intent classification component is important in developing a chatbot as it
helps the chatbot system to understand the meaning and purpose of conversation
from the user. Earlier researchers have developed datasets for niche domains and
analyzed various input representations and machine learning techniques for
chatbot intent classification. However, there is no dataset and intent
classification analysis for chatbot readily available in the niche of
proofreading. In addition, this study identifies the features, input
representation, and machine learning classifier best suited for intent
classification analysis. This research is divided into seven main phases.
The first phase is the feasibility study. The second phase is the dataset
development. The third phase is the text preprocessing phase where input is
cleaned and normalized. The fourth phase is the feature extraction phase whereby
features are extracted using POS tagger, bag of words technique, and bigram
words technique. The fifth phase is the input representation phase using Term
Frequency – Inverse Document Frequency (TF-IDF) technique, Word2Vec embedding,
or One-Hot encoding technique. The sixth phase is the intent classification
phase using machine learning algorithms. The machine learning
methods tested were Support Vector Classifier, Support Vector Machine,
Stochastic Gradient Descent, and Naïve Bayes. The final phase is the testing
phase. This study finds that the combination of noun and verb as features and
using One-Hot encoding as input representation together with Support Vector
Machine as the machine learning technique produces the best performing
classifier for this research, with 0.89 accuracy. This study hopes to pioneer
the development of a proofreading chatbot that can help with, and take over, a
proofreader's task of answering the questions asked by Malaysian academic
writers regarding the grammar corrections made. |
Keywords: |
Chatbot, Intent Classification, Machine Learning, Natural Language Processing,
Proofreading Corpus |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
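The intent-classification entry reports that noun/verb features with a One-Hot input representation and an SVM classifier performed best. The proofreading dataset is not public here, so the following sketch only illustrates that winning combination on a few made-up proofreading intents, using a binary (one-hot style) bag-of-words; the noun/verb POS filtering step is omitted and all utterances are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical proofreading-chatbot utterances and intents (all examples are assumptions).
utterances = [
    "why did you change this verb tense", "explain the comma you added here",
    "how much does proofreading cost", "what is your turnaround time",
    "please check my abstract grammar", "can you proofread my thesis chapter",
]
intents = ["explain_correction", "explain_correction",
           "service_enquiry", "service_enquiry",
           "request_proofread", "request_proofread"]

# binary=True gives a one-hot style presence/absence representation of each word.
model = make_pipeline(CountVectorizer(binary=True), SVC(kernel="linear"))
model.fit(utterances, intents)

print(model.predict(["why was this sentence rewritten",
                     "how long will proofreading take"]))
```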
|
Title: |
UTILIZING GENETIC ALGORITHM INTEGRATED WITH INTELLIGENT OPERATORS AND SARSA FOR
EXTRACTING HIGH UTILITY ITEMSETS |
Author: |
LOGESWARAN K, SAVITHA S, SURESH S, ANANDAMURUGAN S |
Abstract: |
This research article presents a novel approach for mining High Utility Itemsets
(HUIs) by integrating Genetic Algorithm (GA) with SARSA algorithm. It begins by
providing a comprehensive overview of GA's fundamental principles and
operational procedures, followed by an in-depth exploration of SARSA algorithm
components, supported by diagrammatic representations. The core contribution of
this study is the introduction of the Intelligent Genetic Algorithm with
on-policy Reinforcement Learning (IGA_RLON) methodology, which is thoroughly
elaborated upon. The effectiveness of IGA_RLON is meticulously evaluated in
terms of execution time, convergence speed, and the percentage of successfully
mined HUIs, through comparative analysis with established methods such as
IGA_RLOFF, HUPEUMU-GRAM, and HUIM-BPSO. This article aims to advance the field
of HUI mining by proposing a robust and efficient algorithmic framework. |
Keywords: |
Genetic Algorithm, SARSA Algorithm, Reinforcement Algorithm, High Utility
Itemset Mining, Data Mining, Control Parameters |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
DEEP LEARNING FOR SENTIMENT BASED DYNAMIC STOCK SYMBOL ANALYSIS |
Author: |
PAUL JOSHI, SANTA MARIA, LAKSHMI K.S, DIVYA JAMES, LIYAN GRACE SHAJI |
Abstract: |
The financial markets are highly dynamic and constantly affected by a multitude
of global events, ranging from political crises and global pandemics to trade
competition, innovation, and scientific advancements. These disruptions
reverberate through supply chains across the globe and impact the overall demand
for goods and services. Research across various domains suggests that stock
market behavior cannot be solely explained by predictable patterns or trends.
Emotions play a significant role, influencing rational thinking and social
behavior, thus making the stock market an embodiment of the collective social
mood. In light of this perspective, analyzing public sentiment becomes crucial
in predicting stock market movements. As the volume of textual data, such as
news articles and tweets, continues to surge, they serve as valuable indicators
of the prevailing public sentiment. Consequently, the proposed approach involves
developing a sophisticated system capable of dynamically fetching recent tweets
related to specific sectors and employing deep learning techniques, such as LSTM
and GRU, for sentiment analysis. By leveraging these insights, investors can
make informed decisions in the dynamic realm of financial markets. |
Keywords: |
Deep Learning, GRU, LSTM, Sentiment Analysis, Stock Market |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
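The stock-analysis entry fetches sector-specific tweets and scores their sentiment with LSTM and GRU networks. Tweet collection requires API credentials not shown here, so the sketch below covers only the modelling half under assumed vocabulary and sequence sizes: a small embedding + GRU sentiment classifier over padded token sequences, with an LSTM layer as a drop-in alternative.

```python
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN = 10000, 40                        # assumed vocabulary size and tweet length
rng = np.random.default_rng(0)
X = rng.integers(1, VOCAB, size=(500, MAXLEN))   # stand-in for tokenised, padded tweets
y = rng.integers(0, 2, size=500)                 # 1 = positive, 0 = negative sentiment

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.GRU(64),                     # tf.keras.layers.LSTM(64) works here too
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2)

# Sector-level sentiment could then be the mean predicted score over recent tweets.
print("mean sentiment:", float(model.predict(X[:50]).mean()))
```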
|
Title: |
RISK ASSESSMENT THREAT MODELLING USING AN INTEGRATED FRAMEWORK TO ENHANCE
SECURITY |
Author: |
P. SUBHASH, MOHAMMED QAYYUM, K. MEHERNADH, K. JEEVAN SAHIT, C. LIKHITHA VARSHA,
M. NEVAN HARDEEP |
Abstract: |
Today, in the digital world, the security of systems is crucial because, with
continuous information exchange and huge quantities of data being processed, the
protection of operation processes and data assets becomes paramount. Threat
modeling is an important part of cybersecurity management methodology that looks
for any possible threats or weaknesses that can break the system or endanger the
environment. This paper is an attempt to analyze all the models of threat
modeling by taking STRIDE, PASTA, DREAD, TREK, VAST and Attack Trees as
references. An integrated model is suggested that combines the benefits of
existing approaches, which includes the adoption of a comprehensive frame to
deal with cyber threats. This methodology emphasizes iterative refinement and
rigorous testing to ensure the effectiveness of threat mitigation strategies. By
incorporating user-friendly web portals and the integration of new technologies,
this framework enhances usability and addresses emerging threats. Integrated
threat modeling overcomes key issues in cybersecurity by combining approaches
such as STRIDE (a generic, system-wide method for rating security threats),
PASTA (risk-focused, even in its extensive nature), DREAD, TREK, VAST, and
Attack Trees; together, these methodologies offer a solution that is adaptable
to various organizational contexts. In addition, a unique triad is provided -
asset-centric, attack-centric, and software-centric perspectives - to expand the
protection coverage against different kinds of threats and vulnerabilities. The
proposed model, risk assessment threat modelling using an integrated framework
to enhance security, comprises iterative revision and exhaustive verification to
guarantee the efficiency of its preventive power. Easy-to-use web portals and
the integration of new technologies keep the framework usable, flexible in
design, and timely in the face of new threats. A comparative analysis highlights
the distinctive advantages of the design in precisely localizing and mitigating
threats, achieving an accuracy of 94.2%. The integrated threat modeling
framework presented in this paper represents a robust and dynamic approach
toward cybersecurity, and it aims to improve the security and resilience of the
system in the context of continually changing threats in the cybersecurity
world. |
Keywords: |
Threat Modelling, Integrated Framework, Iterative Refinement, Rigorous Testing,
Web-portal |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
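Among the methodologies the threat-modelling entry combines, DREAD is the one with a simple arithmetic core: each threat is scored on Damage, Reproducibility, Exploitability, Affected users, and Discoverability, and the ratings are averaged to rank risk. The snippet below is a minimal illustration of that scoring step only; the threats and ratings are invented examples, and the paper's integrated STRIDE/PASTA workflow is not reproduced.

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Average the five DREAD ratings (each on a 1-10 scale) into one risk score."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5.0

# Hypothetical threats identified during a STRIDE-style decomposition of a web portal.
threats = {
    "SQL injection on login form": dread_score(9, 8, 7, 9, 6),
    "Verbose error messages leak stack traces": dread_score(4, 9, 8, 5, 8),
    "Session token predictable": dread_score(8, 6, 5, 8, 4),
}
for name, score in sorted(threats.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:>4.1f}  {name}")    # highest-risk threats are mitigated first
```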
|
Title: |
OPTIMIZING HEART DISEASE PREDICTION MODELS USING GENETIC ALGORITHMS: A
METAHEURISTIC APPROACH |
Author: |
SUSHILA PALIWAL, SURAIYA PRAVEEN, M. AFSHAR ALAM, JAWED AHMED |
Abstract: |
Cardiovascular diseases, including heart disease, remain a significant global
health concern. Early detection and accurate prediction of heart disease risk
factors are crucial for effective prevention and intervention. This research
paper uses a metaheuristic approach with genetic algorithms (GAs) to optimize
hyperparameter settings to improve predictive accuracy of heart disease models.
Using genetic algorithms as a metaheuristic approach, this research article aims
to improve the accuracy and generalizability of heart disease prediction models.
The primary objective is to converge towards optimal model parameters,
ultimately maximizing accuracy or minimizing error. The study investigates the
effectiveness of combining genetic algorithms with machine learning in improving
heart disease prediction, with a focus on enhancing accuracy, generalizability,
and scalability. Through comparison with traditional methods, the research
assesses the superiority of the proposed approach and its potential
contributions to heart disease diagnosis and treatment. Notably, the research
stands out for its use of genetic algorithms in conjunction with
cross-validation to optimize hyperparameters, identify optimal model parameters,
and evaluate performance by minimizing errors. The application of the Support
Vector Machine (SVM) classifier with optimized hyperparameters yielded a
significant 97% improvement in accuracy with the heart disease dataset,
surpassing results from previous studies. This research thus highlights the
promise of genetic algorithms in enhancing heart disease prediction models and
advancing healthcare analytics. |
Keywords: |
Metaheuristic Approach, Genetic Algorithm, Hyperparameter Optimization, Support
Vector Machine, Heart Disease Prediction |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
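The heart-disease entry tunes SVM hyperparameters with a genetic algorithm evaluated by cross-validation. The paper's encoding and operators are not detailed in the abstract, so the sketch below is a deliberately small GA over (log C, log gamma) with truncation selection, uniform crossover, and Gaussian mutation, scored by 5-fold accuracy on a synthetic stand-in dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a heart-disease feature table (age, BP, cholesterol, ...).
X, y = make_classification(n_samples=400, n_features=13, n_informative=8, random_state=0)
rng = np.random.default_rng(0)

def fitness(ind):
    """5-fold CV accuracy of an RBF SVM with C = 10**ind[0], gamma = 10**ind[1]."""
    clf = SVC(C=10.0 ** ind[0], gamma=10.0 ** ind[1])
    return cross_val_score(clf, X, y, cv=5).mean()

pop = rng.uniform(low=[-2, -4], high=[3, 1], size=(12, 2))   # individuals = (logC, logGamma)
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                   # truncation selection: keep best half
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        mask = rng.random(2) < 0.5                           # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.3, 2) # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best C:", 10 ** best[0], "best gamma:", 10 ** best[1], "CV acc:", fitness(best))
```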
|
Title: |
TRANSFORMING HEALTHCARE WITH FEDERATED LEARNING: SECURING FOG SYSTEMS FOR THE
NEXT GENERATION |
Author: |
TAYSEER ALKHDOUR, AITIZAZ ALI, MOHAMMED ALMAIAH, TING TIN TIN, MOHMOOD A.
AL-SHAREEDA, ROMMEL ALALI, THEYAZAN ALDAHYANI, ABDALWALI LUTFI |
Abstract: |
Recent advancements in fog computing, coupled with the Internet of Things (IoT)
technology, encompass data analysis and artificial intelligence (AI) systems.
Nonetheless, the inherent weakness of the current paradigm lies in its
susceptibility to security risks and vulnerabilities. Security concerns and
cyber-attacks remain significant challenges within fog computing environments.
Collaborative attacks such as phishing assaults, along with replay attacks,
exemplify common security threats. In this scenario, each layer - the edge layer
for sensing, the fog layer for processing, and the top layer encompassing
storage and administration (cloud) - is vulnerable to attacks. The Internet of
Things (IoT) in the fog (Fog-IoT) is widely acknowledged as the cornerstone of
the contemporary world. Consequently, intelligent healthcare systems are
increasingly prevalent. However, the rapid proliferation of IoT-based medical
devices and technologies presents challenges in maintaining a comprehensive
medical IoT system within budget constraints. While single Cloud Platforms (CP)
would be immensely beneficial if standardized, achieving this through a
decentralized fog computing system proves challenging. To address this, we
propose a hybrid-deep learning protocol aimed at safeguarding electronic medical
records from security breaches while simultaneously reducing latency.
Additionally, we introduce scalable federated centered (FC) learning integrated
with Blockchain-based data storage and retrieval. The proposed framework offers
a secure, reliable, and low-latency approach to healthcare systems using a
homomorphic distributed protocol. |
Keywords: |
EMR, IoMT, Cyber-risks, Sensors, Fog Computing, Cloud Computing, Security,
Privacy. |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
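The federated-learning entry keeps electronic medical records at the fog nodes and shares only model updates. The paper's hybrid deep model, blockchain storage, and homomorphic protocol are beyond a short snippet, so the sketch below shows just the federated averaging step under assumed shapes: each hospital or fog node trains locally and the server averages the weight tensors, weighted by local sample counts.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's parameter tensors by its local sample count."""
    total = float(sum(client_sizes))
    n_tensors = len(client_weights[0])
    averaged = []
    for t in range(n_tensors):
        acc = np.zeros_like(client_weights[0][t])
        for weights, size in zip(client_weights, client_sizes):
            acc += (size / total) * weights[t]
        averaged.append(acc)
    return averaged

# Hypothetical round: three fog nodes, each holding a two-tensor model (W, b).
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [120, 300, 80]                       # local EMR record counts (assumed)
global_model = federated_average(clients, sizes)
print(global_model[0].shape, global_model[1].shape)
```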
|
Title: |
THE EFFECT OF INFLUENCER REPUTATION ON DIGITAL ENGAGEMENT AND CONSUMER BRAND
RELATIONSHIP |
Author: |
ZING-YUN ZENG, YISITIE XING, CHANG-HYUN JIN |
Abstract: |
The purpose of this study is to identify the sub-components of the value of an
influencer to understand their relationship with digital integration. Influencer
value components are divided into attractiveness, reliability, and expertise.
The study also attempts to explore how these sub-factors affect digital
engagement and is intended to understand the causal relationship between digital
engagement, consumer-brand relationship, and purchase intentions. The study
attempts to identify the moderating effect of influencers' authenticity,
self-identity, and product fit, which are set as moderating variables. The influencer who
is the subject of this study was selected from a pool of active celebrity
influencers by a panel of experts. Consumers who participated in the survey were
asked to watch an influencer's YouTube, online presence, broadcasts, and SNS
before completing the survey. A total of 528 consumers participated. The method
of SEM with EQS 6 was used to verify the hypothesis in this study. The analysis
results are as follows. Of the influencer value system sub-elements,
attractiveness and expertise have a definitive impact on digital engagement.
However, the sub-factor of influencer reputation has been found to have no
significant effect on digital engagement. Digital engagement has been determined
to have a positive impact on consumer-brand relationships, and consumer brand
relationships have a positive impact on purchase intentions. Influencer
authenticity and self-identity have moderating effects when attractiveness,
reliability, and expertise affect the consumer-brand relationship. |
Keywords: |
Internet Influencer, Digital Engagement, Consumer-Brand Relationship, Purchase
Intention |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
THE RELATIONSHIP BETWEEN E-GOVERNMENT EFFECTIVENESS AND E-GOVERNMENT USE: THE
MEDIATING EFFECT OF ONLINE TRUST AND THE MODERATING EFFECT OF HABIT |
Author: |
SHADI AHMED KHATTAB, ISHAQ SHAAR, LINA AL ABBADI, AHMAD YOUSEF KALBOUNEH, WAEL
BASHEER ALHYASAT |
Abstract: |
Governments in both rich and developing countries need to offer suitable
e-government services to ensure citizens have trust and use them effectively and
efficiently in today's digital and automated world. Users' trust and involvement
in e-government have been negatively impacted by the current state of
implementation, particularly in Jordan. Furthermore, to continue being
responsive and effective, public administrations need to change the way they
operate by utilizing more technology for communication and information. Client
satisfaction and system trust will rise as a result of this. This article looks
into the variables that influence the use of e-government. It further elaborates
and investigates the mediating role of online trust in the relationship between
e-government effectiveness and e-government use. It also tests whether habit has
any moderating effect on the relationship between e-government effectiveness and
e-government use. Data from 471 e-government users were used to support the research
model, which was examined using a structural and measurement model using
SmartPls 3.3.0. The findings show a direct relationship between e-government
effectiveness and e-government use. Furthermore, the relationship between
e-government use and efficacy is mediated by online trust. The results also show
that habit moderates the relationship between the effectiveness and use of
e-government. As a result, we can see the intention to utilize e-government
again may be influenced by the practice of conducting business through the
Internet. The implications for theory and practice in the area of e-government
can drive public policy, and suggestions for future research are also
highlighted. |
Keywords: |
E-government Effectiveness, E-government Use, Information Quality, System
Quality, Service Quality, Online Trust, Habit. |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
CAAMP NET: AMP BASED CBAM ATTENTION NETWORK FOR CS RECOVERY |
Author: |
PAVITRA V , DR. SRILATHA INDIRA DUTT V.B.S |
Abstract: |
Deep learning based compressed sensing is a novel technology that enables
complete signal recovery from far fewer measurements than the number of samples
required by the Nyquist rate. It outperforms existing non-deep-learning methods
by exploiting the inherent structures present in the signal or image of
interest. The literature contains both iterative deep network translations of
state-of-the-art recovery methods and standard deep learning architectures. The
proposed method jointly optimizes compressed sensing and recovery using an
Approximate Message Passing (AMP) based iterative method. This work also
introduces the use of attention gates along with AMP-based recovery of the
compressively sensed image. Using attention helps the model focus on
information-rich areas and recover faster. Experimental results show that joint
optimization together with the attention mechanism outperforms other deep
learning based CS recovery methods across a range of measurement rates. |
Keywords: |
Approximate Message Passing, AMP, Attention, Compressed Sensing, Compressive
Sensing, CBAM Attention |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
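CAAMP Net unrolls AMP-style iterations with CBAM attention inside a learned network. The learned parts cannot be reproduced from the abstract, so the fragment below shows only the classical core that such methods unroll: iterative soft-thresholding recovery of a sparse signal from compressive measurements y = A x (ISTA; AMP adds an Onsager correction term to the residual). All problem sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 10                       # signal length, measurements, sparsity (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = A @ x_true                              # compressive measurements

def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm of A
lam = 0.01
for _ in range(300):                        # ISTA iterations
    x = soft(x + step * A.T @ (y - A @ x), step * lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```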
|
Title: |
ENERGY CONVERSION TECHNOLOGY TREND AND COMMERCIALIZATION CASE USING ENERGY-ICT
CONVERGENCE TECHNOLOGY |
Author: |
JONG-YUN KIM, YOUNG-SU KIM |
Abstract: |
This study explains the development trend of Energy-ICT convergence technology
in Korea and major countries that is being promoted based on the policy of
expanding advanced ICT infrastructure and knowledge information services, and
the trend of expanding P2P power transactions, which is becoming a
representative energy-ICT convergence platform. Based on this, we will present a
plan to activate an Energy-ICT convergence business model based on distributed
resources, a profit model through energy conversion, such as commercialization
of domestic Energy-ICT convergence technology, and overseas commercialization
cases (USA, Germany). The establishment of an environment that activates P2P
power trading will bring a very big change to the current paradigm of the
electricity market. Therefore, optimal solutions are needed for many pending
issues (power load, the competitive structure of power market participants, the
trading system, etc.). Furthermore, it is necessary to focus on
building a foundation that can maximize the value of small-scale distributed
resources by differentiating integrated control of distributed resources and
services that can be traded in the electricity market through the advancement of
core technologies. |
Keywords: |
Energy-ICT convergence, P2P power transactions, Business model, Distributed
resources, Electricity market, Power load, Power market, Trading system,
Integrated control |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
COMPARISON OF ACTIVITY-BASED COSTING AND TIME-DRIVEN ACTIVITY-BASED COSTING FOR
PRINTED CIRCUIT BOARD ASSEMBLY PRODUCTION |
Author: |
NUR SYAFIKAH PINUEH, MOHD YAZID ABU, NURUL HAZIYANI ARIS, MUHAMMAD ARIEFFUDDIN
MOHD JAMIL, EMELIA SARI |
Abstract: |
In traditional costing analysis (TCA), the number of resources employed
determines product costs. TCA is flawed because manufacturing overhead may be
substantially larger than the basis of allocation; therefore, a small change in
resource volume causes a big change in overhead. Thus, activity-based costing
(ABC) was created to solve TCA's cost allocation issues by analyzing overhead
expenses and cost drivers more thoroughly. Many cost factors may be utilized to
identify the causes of overhead and reduce overhead expenses. However, ABC is
theoretically limited in that it overlooks spare capacity that could help
forecasting. For that reason, time-driven activity-based costing (TDABC) has
been studied, because it uses the capacity cost rate and time equations to
create information about underused capacity. The purpose of this study is to
observe, analyse, and compare the ABC and TDABC methods to determine which is
more effective for production. This study's approach comprises four phases.
Phase 1 defines the problem, while phase 2 involves data collection on location.
Phase 3 implements the ABC and TDABC methodologies and compares their costing
analyses. The last phase, phase 4, concludes this research. As a result, this
study shows that ABC is transparent and can forecast unit product cost using the
cost driver rate, whereas TDABC offers objective cost driver determination,
eliminates time-consuming processes, accommodates various cost drivers, and
supports capacity utilization analysis for forecasting and planning. Thus, the
comparative study met the third objective of comparing ABC and TDABC cost
sustainment across numerous aspects. Finally, TDABC is the best practice because
it provides better information than ABC in cost allocation, driver
determination, action taken for an additional activity, cost consideration,
informativeness, transparency, avoidance of oversimplified activities, and
capacity forecasting. |
Keywords: |
ABC, TDABC, PCBA, Cost Allocation, Cost Forecasting |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
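The costing entry favours TDABC because it works from a capacity cost rate and time equations and exposes unused capacity. The numbers below are invented purely to illustrate that arithmetic for a PCBA-style work cell; they are not the case-study figures.

```python
# Hypothetical TDABC figures for one PCBA work cell (all values are assumptions).
supplied_cost = 50000.0          # monthly cost of the resources supplied (staff, machines)
practical_capacity_min = 20000.0 # practical capacity in minutes per month

capacity_cost_rate = supplied_cost / practical_capacity_min   # cost per minute of capacity

# Time equation: minutes per board = base time + extra time per optional activity.
def board_minutes(n_components, needs_rework):
    return 4.0 + 0.05 * n_components + (6.0 if needs_rework else 0.0)

orders = [(1000, 80, False), (500, 150, True)]   # (boards, components/board, rework?)
used_minutes = sum(q * board_minutes(c, r) for q, c, r in orders)

print("capacity cost rate:", capacity_cost_rate, "per minute")
print("assigned cost:", round(used_minutes * capacity_cost_rate, 2))
print("unused capacity (min):", practical_capacity_min - used_minutes,
      "worth", round((practical_capacity_min - used_minutes) * capacity_cost_rate, 2))
```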
|
Title: |
DESIGN AND IMPLEMENTATION OF A CONTAINER ORCHESTRATION SYSTEM FOR DISTRIBUTED
REINFORCEMENT LEARNING DATA ANALYSIS |
Author: |
SEOK JAE MOON, SEO YEON GU |
Abstract: |
Recently, reinforcement learning has shown excellent performance in solving
complex data analysis problems in the real world, and many companies are
actively introducing it. However, as the diversity and complexity of corporate
business processes continue to increase, existing reinforcement learning
approaches have limitations in solving complex problems. To cope with this, more
sophisticated algorithms have been developed, but these algorithms require high
computational resources. As a result, many enterprises seek to obtain the
computing resources they need by leveraging distributed environments such as the
cloud. However, as the diversity and complexity of enterprise business processes
increase, the workload that cloud service providers must manage becomes more
complex. Therefore, container orchestration mechanisms are becoming more complex
and resource utilization is becoming more difficult. Therefore, in this study,
we propose a container orchestration system for distributed reinforcement
learning data analysis. The proposed system consists of a User Interface, a Task
Processing Layer, and Infrastructure. In addition, performance comparison
experiments with existing centralized processing methods show that the proposed
system is suitable for data analysis in a distributed environment. |
Keywords: |
Container Orchestration, Distributed Reinforcement Learning, Data Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
PERFORMANCE ANALYSIS OF NHPP-BASED SOFTWARE RELIABILITY MODEL WITH INVERSE-TYPE
LIFE DISTRIBUTION PROPERTY |
Author: |
SEUNG KYU PARK |
Abstract: |
In this work, the performance of an NHPP-based software reliability model
applying the Inverse-type distribution, which is widely used among reliability
life distributions, was newly identified. Software failure time data was used to
analyze reliability performance by predicting failures that may occur during
software operation, and the parameters were estimated using maximum likelihood
estimation. First, when the criteria for efficient model selection (MSE and R^2)
were evaluated, the Inverse-Exponential model showed the best efficiency.
Second, when the attributes that determine reliability performance (m(t), λ(t),
R̂(τ)) were analyzed, the Inverse-Exponential model was again the most efficient.
In conclusion, through various comparative analyses, the Inverse-Exponential
model was found to have the best performance. The results of this study newly
analyze the reliability performance of the Inverse-type life distribution, for
which there is no existing research data, and also provide basic design and test
data necessary for an efficient software development process. In the future,
after exploring applicable statistical distributions for each software
convergence industry, follow-up studies to find an optimal model will be
needed. |
Keywords: |
Goel-Okumoto, Inverse-Exponential, Inverse-Rayleigh, Inverse-type, NHPP Model,
Reliability Performance |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
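The reliability entry compares NHPP models whose mean value function follows an Inverse-type life distribution. Assuming the common parameterization m(t) = a·exp(-b/t) for the Inverse-Exponential case (so the intensity is λ(t) = a·b·exp(-b/t)/t²), the sketch below fits a and b to failure times by maximizing the standard NHPP log-likelihood Σ log λ(t_i) − m(T); the failure-time data here is synthetic, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic software failure times (hours); the paper's dataset is not reproduced here.
t = np.array([12., 25., 38., 55., 71., 98., 130., 170., 215., 270., 330., 400.])
T = 420.0                                   # total observation (test) time

def m(t, a, b):                             # mean value function of the Inverse-Exponential NHPP
    return a * np.exp(-b / t)

def lam(t, a, b):                           # intensity lambda(t) = dm/dt
    return a * b * np.exp(-b / t) / t ** 2

def neg_log_likelihood(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -(np.sum(np.log(lam(t, a, b))) - m(T, a, b))

fit = minimize(neg_log_likelihood, x0=[20.0, 50.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
print("a =", round(a_hat, 3), "b =", round(b_hat, 3))
print("expected failures in (T, 2T]:", round(m(2 * T, a_hat, b_hat) - m(T, a_hat, b_hat), 3))
# Software reliability over a mission of length tau after test time T:
tau = 50.0
print("R(tau|T) =", round(np.exp(-(m(T + tau, a_hat, b_hat) - m(T, a_hat, b_hat))), 4))
```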
|
Title: |
A STUDY OF VIRTUAL REALITY CYBERSICKNESS AND CHANGES IN BINOCULAR VISUAL
FUNCTION AND POSTURAL BALANCE |
Author: |
JAE-BEOM SON, JAE-MIN LEE, JEONG-LAE KIM, HYUN-SUNG LEEM |
Abstract: |
To analyze the causes of virtual reality (VR) cybersickness, which varies in
severity from person to person after playing VR games using a head-mounted
display (HMD) equipment, this study attempts to see how it affects binocular
visual function and postural balance, and how it correlates with cybersickness.
Before and after playing the VR game, the virtual reality sickness questionnaire
(VRSQ) was used to objectively assess changes in cybersickness. Changes in
binocular visual function were determined by measuring distance and near phoria,
near point convergence (NPC), positive and negative fusional vergence at
distance and near, and vergence facility. Finally, the stability index (SI) and
fall risk index (FI) were confirmed using the TETRAX (balance ability test) to
examine postural balance. Subjects were selected by controlling for variables
such as age, corrected vision, and medical conditions that could affect the
results. Significant changes in VRSQ, binocular visual function, and postural
balance occurred after the VR game. |
Keywords: |
Virtual Reality (VR), Head-Mounted Display, Cybersickness, VR Motion Sickness,
Binocular Visual Function, Balance Test (TETRAX) |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
NEUMANN STACKED BILATERAL DEEP LEARNING BASED BIG SENTIMENT DATA ANALYTICS |
Author: |
M ANOOP, K SUTHA, UMA SHANKARI SRINIVASAN, M BALAMURUGAN, V ANITHA, J EMERSON
RAJA |
Abstract: |
Sentiment analysis extracts information from several text sources such as blogs,
reviews, and news. The purpose of sentiment analysis on big data is to classify
emotions or opinions into varied sentiments. Conventional deep learning methods
have been developed to classify tweets, but they require long sentiment analysis
times. To address this issue, Neumann Mutual Informative and Stacked Bilateral
Deep Learning (NMI-SBDL) for sentiment investigation is proposed, allowing
products or services to be researched before making a purchase. First, a
Knowledge Sentimental Graph is constructed from the tweets obtained from the
Sentiment140 dataset. Second, computationally efficient, dimensionality-reduced
tweets are generated by the Neumann Mutual Information-based feature selection
algorithm. Finally, a Stacked Bilateral LSTM-based model is utilized for
classifying tweet polarity. With this, robust sentiment analysis is performed on
tweets collected through the Twitter Application Programming Interface (API)
with higher accuracy and less computation time. Experimental assessment of the
proposed NMI-SBDL and existing methods is carried out on different factors using
Python libraries. NMI-SBDL improved sentiment analysis accuracy, precision, and
recall by 13%, 6%, and 6%, respectively, and reduced time by 23% compared with
existing approaches. The paper concludes with accurate and robust sentiment
analysis for big data. |
Keywords: |
Big Data, Sentiment Analysis, Neumann Mutual Information, Feature Selection,
Stacked Bilateral, Long Short-Term Memory |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
A STUDY ON STRESS REDUCTION USING INTELLIGENT EMOTIONAL INFORMATION SYSTEM |
Author: |
HYE-KYEONG KO |
Abstract: |
Stress is an integral part of modern life, and everyone experiences it at some
point. The stress experienced by teens has a wide range of mental and physical
effects. This study examines an intelligent emotional information system that
applies color therapy to measure the degree and cause of stress among
adolescents with the goal of relieving it. Various types of emotional
information can be gathered through emotional information design, which is
important in understanding the end user as part of human-centered design,
whereby a color information design method is proposed for each emotion using
fuzzy neural networks. |
Keywords: |
Emotional Information System, Stress Reduction, Color Therapy, Fuzzy Neural
Network |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
REGION-BASED FULLY DEEP CONVOLUTIONAL NEURAL NETWORKS ENHANCED WITH CARNIVOROUS
PLANT ALGORITHM FOR PLANT DISEASE DETECTION AND CLASSIFICATION |
Author: |
E SARASWATHI , J FARITHA BANU |
Abstract: |
The pace of agricultural production is one of the most critical factors
determining a nation's overall economic well-being. Diagnosing symptoms at an
earlier stage can significantly reduce the spread of infectious diseases,
increasing agricultural productivity. This research aims to advance agricultural
technology and eventually boost agricultural productivity and financial outcomes
by providing an effective approach for early detection and management of plant
diseases. The automated detection of plant diseases is essential to plant
monitoring since plant diseases are one of the most significant challenges faced
in agriculture. This research proposes a method for recognizing and categorizing
plant illnesses using region-based fully deep convolutional neural networks
(RFDCNN). This method also utilizes the Carnivorous Plant Algorithm (CPA) to
enhance the mean average precision of the results obtained using the RFDCNN
architecture and feature extraction. For this plant disease classification, 230
diseased-leaf images of crops such as potato, strawberry, pepper, peach, squash,
citrus, tomato, and cherry were gathered from a farm field, along with the
PlantVillage dataset. According to the findings of the experiments, the
suggested RFDCNN-CPA deep learning framework is able to accurately and
efficiently classify the many different varieties of plant leaves. The end
results demonstrated that RFDCNN-CPA obtained a higher accuracy rate of 97.92%,
a precision of 95.82%, an F-score of 90.52%, and an overall execution time of
0.42 ms. |
Keywords: |
Agriculture, Deep Learning, Plant Disease Detection and Classification,
Region-based Fully Deep Convolutional Neural Networks, Computer Vision |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
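The plant-disease entry uses a region-based fully deep CNN; its exact architecture and the Carnivorous Plant Algorithm tuning step are specific to the paper. As a generic illustration of the region-based family it belongs to, the snippet below swaps in torchvision's Faster R-CNN and replaces its box predictor head for a hypothetical set of leaf-disease classes; the CPA optimizer and dataset loading are omitted.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical label set: background + a few leaf-disease classes.
NUM_CLASSES = 1 + 4   # background, potato blight, tomato mosaic, citrus canker, cherry leaf spot

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# One dummy training step to show the expected input/target format.
model.train()
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)             # dict of classification / box-regression losses
total = sum(losses.values())
total.backward()
print({k: float(v) for k, v in losses.items()})
```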
|
Title: |
AFFORDANCE-DRIVEN DESIGN FOR DIGITAL LEARNING |
Author: |
JIHWA NOH, YESON KIM , BYEONGSOO KIM |
Abstract: |
This investigation delves into the utilization of digital math learning
platforms from an affordance design perspective, aiming to pinpoint crucial
elements that significantly enrich learning experiences within the digital
realm. Through an in-depth dissection of ten specific affordances categorized
into Content, Pedagogy, and Functionality domains, this study meticulously
evaluates how these components are manifested across six digital platforms
widely adopted within South Korean public education. Employing a comprehensive
analytical approach, the research uncovers notable deficiencies in the
integration of these essential affordances, with a particular focus on the
widespread presence of Nonlinearity contrasted against the stark
underrepresentation of Adaptivity. These findings illuminate a prevalent
design-practice gap, suggesting that the pedagogical potential of digital
learning tools is not being fully harnessed to facilitate enhanced learning
outcomes. The analysis further reveals how the integration or lack thereof of
these affordances impacts the efficacy of digital learning environments,
suggesting a critical need for a more holistic integration strategy.
Contributing a structured analytical framework to the ongoing discourse on
digital education, this study not only highlights current shortcomings but also
paves the way for the development of next-generation learning technologies.
These advancements are envisioned to support diverse educational strategies more
effectively, promoting dynamic and adaptive learning environments that can
better cater to the evolving needs and preferences of students. This research
advocates for a paradigm shift in the design and development of digital learning
platforms, aiming to fully exploit the transformative potential of digital
education in fostering more engaging, personalized, and effective learning
experiences. |
Keywords: |
Affordance, Digital learning, Learning platform, Mathematics Education |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
EMBEDDED PATTERN ANALYSIS OF PLANAR PHASED ARRAY ANTENNA FOR X BAND
COMMUNICATION SYSTEMS |
Author: |
KALAIARASI. D, M.R.EBENEZAR JEBARANI |
Abstract: |
This study utilises the embedded element pattern to evaluate a Phased Array
Antenna (PAA) in the context of an X-band communication system. All of the array
elements are linked to a finite array reference impedance. The central element of the
embedded pattern is determined by a single element that is embedded within the
finite array. The preparation of a Phased Array Antenna involves the use of a
large number of radiating components that are connected to a phase shifter. This
phase shifter is responsible for adjusting the phase of the radiating elements
and producing a beam in the desired direction. PAA offers several advantages
over a single radiating element, including increased power density, efficiency,
directivity, and gain. This study examines the design and analysis of a
rectangular phased array antenna commonly employed by military forces for
satellite communications in the X-band frequency range. |
Keywords: |
Dipole Antenna, Directivity, Rectangular planar, Phased Array Antenna, Satellite
Communication, X-band |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
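For the phased-array entry, the quantity that the phase shifters manipulate is the array factor: for an M×N rectangular lattice with spacings dx, dy steered to (θ0, φ0), AF(θ, φ) = Σ_m Σ_n exp(j k [m dx (sinθ cosφ − sinθ0 cosφ0) + n dy (sinθ sinφ − sinθ0 sinφ0)]). The embedded element pattern and mutual coupling analysed in the paper are not modelled here; the snippet below only evaluates this ideal uniform-excitation array factor for an assumed 8×8, half-wavelength X-band array.

```python
import numpy as np

freq = 10e9                                   # X-band (assumed 10 GHz)
c = 3e8
lam = c / freq
k = 2 * np.pi / lam
M = N = 8                                     # 8 x 8 planar array (assumed)
dx = dy = lam / 2                             # half-wavelength element spacing
theta0, phi0 = np.deg2rad(20.0), np.deg2rad(0.0)   # steering direction

def array_factor(theta, phi):
    """Uniform-excitation planar array factor steered to (theta0, phi0)."""
    u = np.sin(theta) * np.cos(phi) - np.sin(theta0) * np.cos(phi0)
    v = np.sin(theta) * np.sin(phi) - np.sin(theta0) * np.sin(phi0)
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    phase = k * (m * dx * u + n * dy * v)
    return np.abs(np.exp(1j * phase).sum())

theta = np.deg2rad(np.linspace(-90, 90, 181))
af = np.array([array_factor(t, 0.0) for t in theta])
af_db = 20 * np.log10(af / af.max())
print("beam peak at", np.rad2deg(theta[af_db.argmax()]), "deg")   # near the +20 deg steering angle
```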
|
Title: |
EFFICIENT VIDEO COMPRESSION USING DEEP JOINT OPTIMIZATION METHOD WITH MOTION
ESTIMATION AND INTER-FRAME PREDICTION |
Author: |
DR. M. CHANDRA SEKHAR, SAMPURNIMA PATTEM, B. SHRAVAN KUMAR, DR. SUMAGNA PATNAIK |
Abstract: |
In the contemporary era, there has been an unprecedented increase in multimedia
content, especially videos, leading to greater bandwidth consumption during
transmission. Video compression improves the performance of video transmission
by reducing the original size of the video. Though conventional video
compression methods have a classical architecture that encodes motion and
residual information efficiently, they lack the ability to represent data
non-linearly. In this paper, we propose a framework named the Artificial
Intelligence (AI) enabled Video Compression Framework (AIVCF), which exploits
the traditional classical architecture and combines it with a deep learning
model for non-linear data representation. The framework enables joint
optimization of the underlying components. A Convolutional Neural Network (CNN)
is used to reconstruct current frames by obtaining motion information through a
process known as optical flow estimation. The information of a given video is
compressed using deep learning models in an auto-encoder fashion. The framework
strikes a balance between quality and compression ability. An algorithm named
Deep Joint Optimization for Video Compression (DJO-VC) is proposed to realize
the AIVCF. The proposed framework is evaluated through an empirical study. The
experimental results, in terms of PSNR and SSIM, revealed that the proposed
framework outperforms existing models such as H.264. |
Keywords: |
Video Compression, Deep Learning, Convolutional Neural Network, Artificial
Intelligence Enabled Video Compression Framework |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
A GLOBAL REVIEW OF CLOUD COMPUTING ADOPTION IN SMALL AND MEDIUM ENTERPRISE |
Author: |
RUSLAINI , DARMAWAN NAPITUPULU |
Abstract: |
Cloud computing is an emerging computing architecture that has gained favor
among government agencies and businesses that need large-scale, low-cost
computing. It allows services to be delivered in a scalable manner using
machines in massive data centers. Cloud computing technology is becoming more
and more significant as small and medium enterprises (SMEs) go through a digital
transformation. In addition to enhancing accessibility, scalability, efficiency,
and data security, it might help SMEs save money. Though several studies have
been conducted on the subject, there is still a paucity of bibliometric research
on the use of cloud computing in the SME sector. Bibliometric analyses play a
crucial role in understanding the evolution of academic research in particular
fields. Consequently, a detailed examination of the trend and acceptance of
cloud computing in the public sector is needed. To achieve this, bibliometric
performance analysis and scientific mapping methodologies were used to analyze
most publications and citations on cloud computing adoption in SMEs in this
study. Finding the most productive authors, organizations, countries, and
publishing sources was part of this. In addition, a clustering analysis was
carried out to identify the primary research theme about SMEs' use of cloud
computing over a ten-year period. Using the statistical programs VOSviewer and
Biblioshiny, the dataset is visualized and conclusions are drawn from 246
articles extracted from the Scopus database and published between 2010 and 2024.
The results of this study have added
significantly to the body of literature and current knowledge by deepening our
understanding of how cloud computing is used in SMEs. |
Keywords: |
Cloud Computing, SME, Adoption, Bibliometric Analysis, Review |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
|
Title: |
STRENGTHENING PAYMENT CARD DATA SECURITY: A STUDY ON COMPLIANCE ENHANCEMENT AND
RISK MITIGATION THROUGH MFA IMPLEMENTATION UNDER PCI DSS 4.0 |
Author: |
PIETERS NICHOLAS PARADONGAN TAMBUNAN , NILO LEGOWO , DENNIS RYDARTO TAMBUNAN |
Abstract: |
The increasing use of electronic payments and the growing reliance on payment
card transactions have underscored the importance of robust security measures to
protect payment card data. The Payment Card Industry Data Security Standard (PCI
DSS) has long been recognized as a crucial framework for ensuring security and
compliance in handling payment card data. Amidst evolving cyber threats, the
adoption of Multi-Factor Authentication (MFA) has emerged as a critical strategy
to enhance the security of payment card data, improve compliance with PCI DSS
4.0, and mitigate associated risks. This study involves payment gateway
organizations subject to PCI DSS 4.0 compliance requirements. Qualitative data
confirms the effectiveness of MFA in thwarting cyber threats and enhancing
overall payment card data security. In an era marked by evolving cyber threats,
this research emphasizes the importance of implementing MFA to bolster payment
card data security, achieve compliance with PCI DSS 4.0, and mitigate risks. The
findings of this study offer actionable insights for organizations seeking to
strengthen their payment card data security measures and align with regulatory
standards. |
Keywords: |
Payment Card Data Security, PCI DSS 4.0, Multi-Factor Authentication (MFA),
Compliance Enhancement, Risk Mitigation. |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
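As background to the MFA controls discussed above, the following
standard-library Python sketch generates a time-based one-time password (TOTP,
RFC 6238), one common second factor. The shared secret is a placeholder; this is
a generic illustration, not the implementation examined in the study.

    # Illustrative only: a TOTP code of the kind used as a second authentication factor.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period               # 30-second time step
        msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Placeholder secret; a real deployment provisions a per-user secret and
    # verifies the submitted code server-side with a small clock-skew window.
    print(totp("JBSWY3DPEHPK3PXP"))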
Title: |
ADVANCED HYBRID PREDICTION MODEL: OPTIMIZING LIGHTGBM, XGBOOST, LASSO
REGRESSION, AND RANDOM FOREST WITH BAYESIAN OPTIMIZATION |
Author: |
MR. SANJAY KUMAR , DR. MEENAKSHI SRIVASTAVA , DR.VIJAY PRAKASH |
Abstract: |
This paper proposes an advanced hybrid prediction model that combines the
strengths of LightGBM, XGBoost, Lasso Regression, and Random Forest. To optimize
the hyperparameters of these diverse models, we leverage Bayesian Optimization,
a powerful technique for efficient hyperparameter search. The proposed model
integrates predictions from optimized individual models, potentially leading to
improved accuracy and robustness. Our experimental evaluation demonstrates the
effectiveness of the proposed hybrid model compared to baseline models. This
study compares the performance of various regression models, including Random
Forest, Lasso Regressor, XGBoost Regressor, and LightGBM, against a proposed
hybrid model. Evaluation metrics such as Mean Squared Error (MSE), Root Mean
Squared Error (RMSE), Mean Absolute Error (MAE), R2 Score, Explained Variance
Score (EVS), Mean Absolute Percentage Error (MAPE), and Mean Percentage Error
(MPE) are analyzed. The proposed hybrid model demonstrates superior performance
across all metrics, with observed values of 1.622813174 for MSE, 1.273896846 for
RMSE, 0.652113986 for MAE, 0.99996681 for R2 Score, 0.999966815 for EVS,
0.177679435 for MAPE, and -0.001810521 for MPE. These results show the potential
of the hybrid model for accurate prediction in regression tasks. This research
contributes to the field of advanced prediction modeling by offering a novel
hybrid approach that leverages Bayesian Optimization for improved performance
and interpretability. In future work, researchers plan to explore additional
machine learning algorithms and optimization techniques to further enhance the
performance of the hybrid model. The hybrid prediction model developed in this
study holds great promise for advancing predictive analytics and decision
support systems in diverse application domains. |
Keywords: |
LightGBM, XGBoost, Lasso Regression, Random Forest, Bayesian Optimization,
Hybrid prediction model, Interpretability. |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
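A minimal sketch of the general recipe described above: Bayesian-style
hyperparameter optimization (here via Optuna's default TPE sampler) for one base
learner, followed by stacking the four named models. The synthetic dataset,
search ranges, and Ridge meta-learner are assumptions made for illustration, not
the paper's configuration.

    # Illustrative only: tune a Random Forest with Optuna, then stack it with
    # LightGBM, XGBoost, and Lasso under a simple Ridge meta-learner.
    import optuna
    from lightgbm import LGBMRegressor
    from xgboost import XGBRegressor
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def objective(trial):
        model = RandomForestRegressor(
            n_estimators=trial.suggest_int("n_estimators", 100, 400),
            max_depth=trial.suggest_int("max_depth", 3, 15),
            random_state=0,
        )
        # 3-fold cross-validated MSE (sklearn returns it negated, so negate back).
        return -cross_val_score(model, X_train, y_train, cv=3,
                                scoring="neg_mean_squared_error").mean()

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=20)

    stack = StackingRegressor(
        estimators=[
            ("rf", RandomForestRegressor(**study.best_params, random_state=0)),
            ("lgbm", LGBMRegressor(random_state=0)),
            ("xgb", XGBRegressor(random_state=0)),
            ("lasso", Lasso(alpha=0.01)),
        ],
        final_estimator=Ridge(),
    )
    stack.fit(X_train, y_train)
    print("Hold-out R^2:", stack.score(X_test, y_test))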
Title: |
COLOR IMAGE ENCRYPTION BASED ON ARNOLD CAT MAP- ELLIPTIC CURVE KEY AND A HILL
CIPHER |
Author: |
DESAM VAMSI , PRADEEP REDDY CH |
Abstract: |
Image encryption for communication over the internet is attracting considerable
interest as internet use grows. The transfer of crucial images over the internet
is prone to attack and theft when shared over an unsecured medium. To prevent
such attacks and theft of image data, encryption is the most suitable approach.
In this paper, a novel approach of sub-image shuffling with the Arnold cat map
followed by an elliptic-curve key is used for encryption and decryption. The RGB
planes of the color image are extracted, and each plane is divided into four
sub-images that undergo the Arnold cat transformation, which effectively
scrambles and shuffles the pixel values; the four sub-images are then
recombined. The Arnold-transformed RGB planes are combined into a single image,
which is further processed with an elliptic-curve key to compute a
self-invertible key matrix and perform the Hill cipher operation for encryption
and decryption. The proposed method generates good-quality cipher images with an
entropy value of 7.9 and low correlation, and it can resist statistical and
differential attacks. |
Keywords: |
Encryption, Arnold Cat, Elliptic Curve, Self-Invertible Key Matrix, Hill Cipher,
RGB Planes |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
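The scrambling step named in the abstract can be illustrated with a short NumPy
sketch of the Arnold cat map applied to one square sub-image block. The block
size and iteration count are arbitrary choices for illustration; the
elliptic-curve key and Hill cipher stages are not reproduced here.

    # Illustrative only: the Arnold cat map (x, y) -> (x + y, x + 2y) mod N, a
    # determinant-1 transform that permutes pixels and is invertible, so the
    # scrambling can be undone during decryption.
    import numpy as np

    def arnold_cat(block: np.ndarray, iterations: int = 1) -> np.ndarray:
        n = block.shape[0]
        assert block.shape[0] == block.shape[1], "the map needs a square block"
        out = block.copy()
        for _ in range(iterations):
            scrambled = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = scrambled
        return out

    # Toy 8x8 block: the permutation changes positions but not pixel values,
    # so the histogram (and hence entropy) of the block is preserved.
    block = np.arange(64, dtype=np.uint8).reshape(8, 8)
    print(arnold_cat(block, iterations=3))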
Title: |
AI ENABLED SYSTEM WITH REAL TIME MONITORING OF PUBLIC SURVEILLANCE VIDEOS FOR
ABNORMALITY DETECTION AND NOTIFICATION |
Author: |
PRATHAP ABBAREDDY, DR. SK. YAKOOB, RAMU KUCHIPUDI, BHUJANGA REDDY BHAVANAM |
Abstract: |
Public surveillance videos play an increasingly key role in identifying
incidents and people who misbehave or engage in illegal activities. Monitoring
surveillance videos manually to detect abnormalities is time-consuming and may
delay access to the required information. Artificial Intelligence (AI) enables
real-time video analytics that can deliver such information in time to support
well-informed decisions. Deep learning, in particular, is well suited to
learning from incidents and detecting anomalous behaviours. In this study, we
propose a deep learning-based autonomous system for anomaly detection in
surveillance footage. An improved Convolutional Neural Network (CNN) model is
employed for anomaly detection, and the method built on this upgraded CNN model
is called Learning-based Video Anomaly Detection (LbVAD). To lower the
prediction process's error rate, a loss function is defined. For our empirical
investigation, we gathered data from several benchmark datasets, including UMN,
UCSD, Ped1, and Ped2. According to our experimental results, the proposed
approach outperforms current models. |
Keywords: |
Machine Learning, Deep Learning, Artificial Intelligence, Video Abnormality
Detection |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
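Since the LbVAD architecture itself is not spelled out in the abstract, the
PyTorch sketch below only shows the general shape of a frame-level CNN anomaly
classifier trained against a loss function; the layer sizes and random toy
frames are placeholders, not the authors' model or data.

    # Illustrative only: a small CNN that scores frames as normal vs. anomalous.
    import torch
    import torch.nn as nn

    class FrameAnomalyCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, num_classes),   # normal vs. anomalous frame
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    model = FrameAnomalyCNN()
    loss_fn = nn.CrossEntropyLoss()          # the loss that drives error reduction
    frames = torch.randn(4, 1, 64, 64)       # toy batch of 64x64 grayscale frames
    labels = torch.tensor([0, 1, 0, 0])
    loss = loss_fn(model(frames), labels)
    loss.backward()
    print("toy batch loss:", loss.item())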
Title: |
A HYBRID MALWARE DETECTION FRAMEWORK WITH DRIFT ADAPTATION FOR TIMESTAMPED DATA |
Author: |
HARINADH VARIKUTI , VALLI KUMARI VATSAVAYI |
Abstract: |
As technology is growing rapidly, new malware variants are evolving. Attackers
are generating new malware patterns using code obfuscation and mutation
techniques to escape from anti-malware engines and traditional models. Models
built over historical malware data cannot perform well in detecting or
classifying dynamic or real-time malware data. The evolution of new malware
patterns may lead to concept drift. Static models built on historical data must
be retrained from scratch whenever new data arrives, which is time-consuming, so
models with an adaptive nature are used instead. Adaptive machine learning
models identify drift in the data and dynamically adjust themselves over time.
In this paper, a hybrid model is proposed that combines Leveraging Bagging and
AdaBoost methods for incremental ensemble learning. Dynamic weights are assigned
to the models in the ensemble to increase adaptivity. The boosting and bagging
methods add error adaptation and diversity to the model, respectively.
Experiments also show that the proposed model outperforms state-of-the-art drift
adaptation methods such as ADWIN Bagging, ADWIN Boosting, and the SRP
classifier. |
Keywords: |
Concept drift, Machine learning, Malware, drift detection, Bagging, Boosting |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
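The dynamic-weighting idea described above can be sketched with a few
incremental scikit-learn classifiers whose votes are weighted by their recent
accuracy on a drifting synthetic stream. This is a simplified stand-in for
illustration, not the paper's Leveraging Bagging plus AdaBoost hybrid.

    # Illustrative only: models that adapt poorly after a concept drift lose voting weight.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    classes = np.array([0, 1])
    models = [SGDClassifier(loss="log_loss", random_state=i) for i in range(3)]
    weights = np.ones(len(models))

    for t in range(50):                       # 50 mini-batches arriving over time
        X = rng.normal(size=(32, 5))
        drifted = t >= 25                     # simulate an abrupt drift halfway through
        w_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0]) if not drifted else np.array([-1.0, 1.0, 0.0, 0.5, 0.0])
        y = (X @ w_true > 0).astype(int)

        if t > 0:                             # test-then-train: score before learning
            for i, m in enumerate(models):
                acc = (m.predict(X) == y).mean()
                weights[i] = 0.8 * weights[i] + 0.2 * acc   # exponentially decayed accuracy
        for m in models:
            m.partial_fit(X, y, classes=classes)

    votes = np.array([m.predict(X) for m in models])         # (n_models, n_samples)
    ensemble = (weights @ votes) / weights.sum() > 0.5        # weighted majority vote
    print("weights:", np.round(weights, 3), "accuracy on last batch:", (ensemble == y).mean())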
Title: |
FINANCIAL REVOLUTION: INNOVATION POWERED BY FINTECH AND ARTIFICIAL INTELLIGENCE |
Author: |
HASSAN CHIKRI , MANAR KASSOU |
Abstract: |
The article explores the dynamic merger between Fintech (financial technology)
and Artificial Intelligence (AI), marking the start of a global financial
revolution. The partnership between Fintech and AI is becoming the driving force
behind a significant revolution in the finance industry, reshaping how we
approach, use, and understand financial matters in a constantly evolving
environment. An in-depth analysis of these recent advancements brings attention to
the untapped possibilities and difficulties that arise from this merging of
technologies. The development of Fintech, which occurs at the intersection of
finance and technology, centers on two essential elements: providing client
service and ensuring transaction security. Fintech aims to enhance customer
experience by creating a more customized and user-friendly financial
environment. Intuitive applications, automated financial advisers, and easily
available services remove conventional obstacles, enabling quick and transparent
transactions. Fintech prioritizes transaction security with the use of
sophisticated processes, including biometrics and two-factor authentication,
bolstered by technologies such as machine learning to identify and prevent fraud.
Regtech solutions are used to guarantee adherence to regulatory requirements,
streamlining the administration of complex financial rules. The paper emphasizes
the significance of agility and innovation in the Fintech industry, specifically
stressing AI as a crucial catalyst for innovation. Artificial intelligence (AI)
has a significant influence on several industries, ranging from banking to
medical research, via the automation of intricate procedures, enhancement of
service customisation, and reinforcement of transaction security. The
amalgamation of Fintech with AI has substantial advantages, fundamentally
transforming the manner in which financial services are provided, used, and
controlled. Artificial intelligence facilitates thorough examination of
financial data, identification of fraudulent activities, and automation of
processes, hence facilitating prompt and well-informed decision-making.
Nevertheless, the paper also emphasizes the difficulties associated with data
security, ethical ramifications, and regulatory apprehensions in this constantly
evolving digital landscape. Artificial intelligence (AI) is playing a pivotal
role in the financial revolution, introducing substantial modifications that
redefine the norms of the financial sector. Fintech, driven by artificial
intelligence, offers a smooth, easily accessible, and highly secure experience,
which is reshaping the future of finance in the contemporary digital
environment. |
Keywords: |
Fintech, Artificial Intelligence, Financial Revolution, Financial Innovation,
Blockchain Technology |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
Title: |
EVALUATING THE PERFORMANCE OF XGBOOST AND GRADIENT BOOST MODELS WITH FEATURE
EXTRACTION IN FMCG DEMAND FORECASTING: A FEATURE-ENRICHED COMPARATIVE STUDY |
Author: |
MURARI THEJOVATHI , DR M.V.P. CHANDRA SEKHARA RAO |
Abstract: |
In this paper, we propose including Gradient Boost, another ensemble technique,
to broaden the scope and potentially improve forecasting accuracy. This research
examines how XGBoost and Gradient Boost, two powerful ensemble learning methods,
can be used to predict demand in the FMCG sector. The suggested method also
includes advanced feature extraction techniques to improve model performance.
The current method uses XGBoost, a well-known and effective gradient-boosting
technique that is fast and easy to scale. The suggested method adds Gradient
Boost, another ensemble technique, along with feature extraction techniques that
help identify and exploit the dataset's most important information. The research
aims to compare the performance of XGBoost and Gradient Boost models in the
context of demand forecasting for Fast-Moving Consumer Goods (FMCG) data.
Additionally, the study incorporates feature extraction methods to enhance the
models' predictive capabilities. Both models are evaluated thoroughly on FMCG
data in terms of accuracy, reliability, and runtime. To find the factors that
most influence demand prediction, feature extraction techniques such as
Principal Component Analysis (PCA) and Recursive Feature Elimination (RFE) are
used. The results provide insight into how well the XGBoost and Gradient Boost
models predict demand in the FMCG sector. The feature extraction methods are
also intended to uncover hidden patterns in the data, helping supply chain
professionals in the FMCG business make more accurate predictions and better
decisions. Researchers can use the results of this study to choose the best
method for their own demand forecasting needs, improving operational efficiency
and cutting costs in the FMCG supply chain. |
Keywords: |
Gradient Boost, XGBoost, FMCG Sector, Feature Extraction, Demand Forecasting |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
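A minimal sketch of the comparison described above: Gradient Boosting and
XGBoost fitted on feature-extracted inputs, with PCA and RFE as the
extraction/selection step. The synthetic regression data stands in for the FMCG
demand series, which is not specified in the abstract.

    # Illustrative only: two feature-enriched boosting pipelines compared by MAE.
    from sklearn.datasets import make_regression
    from sklearn.decomposition import PCA
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.feature_selection import RFE
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from xgboost import XGBRegressor

    X, y = make_regression(n_samples=2000, n_features=30, n_informative=10,
                           noise=5.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    pipelines = {
        "Gradient Boost + PCA": make_pipeline(
            PCA(n_components=10), GradientBoostingRegressor(random_state=0)),
        "XGBoost + RFE": make_pipeline(
            RFE(GradientBoostingRegressor(random_state=0), n_features_to_select=10),
            XGBRegressor(random_state=0)),
    }
    for name, pipe in pipelines.items():
        pipe.fit(X_train, y_train)
        print(f"{name}: MAE = {mean_absolute_error(y_test, pipe.predict(X_test)):.2f}")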
Title: |
ENHANCED RICE PLANT DISEASE IDENTIFICATION: A HYBRID APPROACH OF TRANSFER
LEARNING, SVM AND PCA |
Author: |
SURENDER MOGILICHARLA , UPENDRA KUMAR MUMMADI |
Abstract: |
The accurate and timely identification of diseases and pests impacting rice
cultivation is crucial for farmers, allowing for swift intervention and thereby
minimizing economic losses. Recent progress in convolutional neural networks
(CNNs) has significantly boosted the accuracy of image classification, yet their
resource-intensive nature, demanding significant memory and processing power,
underscores the necessity of leveraging pre-trained models. Additionally, the
paper introduces an ensemble approach combining deep learning with traditional
machine learning methods, further enhancing the effectiveness of disease and
pest detection in agricultural settings. The paper presents a novel approach
leveraging state-of-the-art large-scale architecture, ResNet-50, to propose two
distinct models: Model-1 integrates ResNet-50 with SVM, while Model-2
incorporates ResNet-50 with PCA and SVM to effectively detect and identify rice
diseases and pests. Through experimentation on authentic, real-world datasets, the
paper demonstrates the effectiveness of these models. Additionally, recognizing
the constraints of large-scale architectures, especially concerning their
compatibility with mobile or embedded devices due to processing power and memory
limitations, the paper introduces and evaluates two proposed models. Notably,
Model-2 surpasses Model-1, achieving a superior accuracy of 93.7% compared to
91.6% in Model-1. Moreover, Model-2 significantly reduces the feature set size
by 100% compared to Model-1 through dimensionality reduction using PCA. |
Keywords: |
Deep Learning, Ensemble Methods, Rice Plant Disease Detection, Pretrained
Models, Transfer Learning |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
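The Model-2 style pipeline (pre-trained ResNet-50 features, then PCA, then an
SVM) can be sketched with torchvision and scikit-learn as follows; the toy
images, class count, and PCA size are placeholders rather than the paper's
dataset or settings, and the weights API assumes torchvision 0.13 or newer.

    # Illustrative only: frozen ResNet-50 features -> PCA -> SVM classifier.
    import numpy as np
    import torch
    from torchvision.models import resnet50, ResNet50_Weights
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    backbone = resnet50(weights=ResNet50_Weights.DEFAULT)  # downloads ImageNet weights
    backbone.fc = torch.nn.Identity()   # drop the classification head, keep 2048-d features
    backbone.eval()

    @torch.no_grad()
    def extract_features(images: torch.Tensor) -> np.ndarray:
        """images: (N, 3, 224, 224) tensors already normalized for ImageNet."""
        return backbone(images).cpu().numpy()

    # Toy stand-in for preprocessed rice-leaf images and their disease labels.
    images = torch.randn(40, 3, 224, 224)
    labels = np.random.default_rng(0).integers(0, 4, size=40)

    features = extract_features(images)      # shape (40, 2048)
    clf = make_pipeline(StandardScaler(), PCA(n_components=16), SVC(kernel="rbf"))
    clf.fit(features, labels)
    print("accuracy on the toy training data:", clf.score(features, labels))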
Title: |
A HEART DISEASE DIAGNOSIS SYSTEM EMPLOYING RESIDUAL CONVOLUTIONAL NEURAL
NETWORKS WITH ADAPTIVE CROSS-LAYER STACKED ARCHITECTURE |
Author: |
LAKSHMI ANUSHA KOTHAMASU, MAHANKALI SARITHA, B SWATHI, KOORAGAYALA SUKEERTHI |
Abstract: |
Considering all diseases, heart disease is the leading cause of mortality
worldwide. A cardiac issue results whenever the arteries supplying blood and
oxygen to the heart become completely blocked or constricted, and mortality from
heart disease has increased substantially in a short period of time. Accurate
and secure diagnosis should be given priority in healthcare, since a
misdiagnosis of heart illness may result in death: if the prediction is correct,
cardiovascular disease may be avoided, but if it is incorrect, the consequences
may be dangerous. As a result, an intelligent heart disease prediction model
based on the ACLS-RCNN and ICSOA techniques has been developed. First, the
patient data is acquired and pre-processed; the three primary components of data
pre-processing are handling missing values, scaling with IQR-RS, and managing
imbalanced data using ROS. Features are then extracted from the pre-processed
data using Kernel-based Linear Discriminant Analysis. Next, hybrid optimization
(CI-AO+GI-SMO) is used to select valuable data and retain only the pertinent
components. The disease is predicted using the proposed Adaptive Cross-Layer
Stacked Residual Convolutional Neural Network, and the prediction technique is
further improved by the Improved Cuttlefish-Swarm Optimization Algorithm
(ICSOA). In a Python simulation, the recall, accuracy, F1-score, and precision
of the proposed system are compared with those of other well-known methods. With
regard to clinical data and diagnoses, the proposed system is capable of making
accurate predictions. |
Keywords: |
Improved Cuttlefish-Swarm Optimization, Convolutional Neural Network, Heart
Disease, Correlation-based feature selection, Kernel-based Linear Discriminant
Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
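Two of the pre-processing steps named in the abstract map onto common library
calls, sketched below: IQR-based robust scaling and random over-sampling (ROS)
of the minority class with imbalanced-learn. The later KLDA, hybrid-optimization
and ACLS-RCNN stages are specific to the paper and are not reproduced here.

    # Illustrative only: IQR-based scaling and ROS on a toy imbalanced dataset.
    import numpy as np
    from imblearn.over_sampling import RandomOverSampler
    from sklearn.preprocessing import RobustScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = (rng.random(200) < 0.15).astype(int)          # imbalanced labels: ~15% positives

    # RobustScaler centres on the median and scales by the inter-quartile range,
    # which is what IQR-based scaling amounts to in scikit-learn terms.
    X_scaled = RobustScaler().fit_transform(X)

    # Random over-sampling duplicates minority-class rows until classes are balanced.
    X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_scaled, y)
    print("class counts before:", np.bincount(y), "after:", np.bincount(y_bal))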
Title: |
SENTIMENT-BASED RECOMMENDATION FOR ONLINE SHOPPING |
Author: |
MAFAS RAHEEM, NIRASE FATHIMA ABUBACKER, DEVINA WIYANI |
Abstract: |
In E-commerce, a widely used strategy to improve the customer shopping
experience and address information overload is the implementation of
recommendation systems. Many E-commerce platforms have their proprietary
recommendation algorithms, with content-based filtering being a commonly
employed approach. This algorithm provides non-personalized suggestions to users
based on the similarity of content. The shopping experience involves the
decision-making process, with a significant focus on information search. In
online shopping, customers heavily rely on information search to gain in-depth
insights into products, as physical interaction is not possible. Customer
reviews play a crucial role in this information search, offering shoppers the
opportunity to learn from the experiences of previous customers. Despite the
importance of customer reviews, existing recommendation solutions often overlook
this aspect in their product recommendations. To address this gap, sentiment
analysis, a natural language processing task frequently used in reviews, is
employed to classify, and quantify text based on its polarity. This research
introduces a recommendation pipeline that combines content-based filtering,
utilizing cosine similarity calculations, and sentiment analysis, utilizing a
pre-trained RoBERTa language model. The focus is on quantifying customer reviews
from an E-commerce platform in Malaysia. The goal of this research is to develop
an embedded system that recommends products to users based not only on their
similarity but also on high ratings from various E-commerce sites. |
Keywords: |
Sentiment Analysis, Content-Based Recommendation, Sentiment-Based Ranking,
Product Review Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
15th May 2024 -- Vol. 102. No. 9-- 2024 |
Full
Text |
|
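A compact sketch of the two-stage pipeline described above: content-based
candidate scoring via TF-IDF cosine similarity, followed by re-ranking on review
sentiment from a pre-trained RoBERTa checkpoint. The toy catalogue, the equal
blending weights, and the publicly available
cardiffnlp/twitter-roberta-base-sentiment model are assumptions, not the paper's
data or exact model.

    # Illustrative only: recommend items similar to the one being viewed, boosted
    # by how positively other customers reviewed them.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from transformers import pipeline

    products = {
        "P1": "wireless bluetooth earbuds with noise cancellation",
        "P2": "wired in-ear headphones with microphone and noise isolation",
        "P3": "stainless steel insulated water bottle 1 litre",
    }
    reviews = {
        "P1": "Battery dies quickly and the left bud stopped working.",
        "P2": "Great sound for the price, very comfortable to wear.",
        "P3": "Keeps drinks cold all day, highly recommend.",
    }
    ids = list(products)

    # Stage 1: content similarity of every item to the item being viewed (P1).
    tfidf = TfidfVectorizer().fit_transform(products.values())
    sims = cosine_similarity(tfidf[0:1], tfidf).ravel()

    # Stage 2: review sentiment from a RoBERTa checkpoint (downloaded on first use).
    sentiment = pipeline("sentiment-analysis", model="cardiffnlp/twitter-roberta-base-sentiment")

    def positive_score(result: dict) -> float:
        # Checkpoints label classes differently ("LABEL_2" vs "positive"); map defensively.
        label = result["label"].lower()
        return result["score"] if label in ("positive", "label_2") else 1.0 - result["score"]

    scores = {pid: positive_score(r) for pid, r in zip(ids, sentiment(list(reviews.values())))}

    # Blend the two signals with equal weights (purely illustrative) and rank the rest.
    ranking = sorted((pid for pid in ids if pid != "P1"),
                     key=lambda pid: 0.5 * sims[ids.index(pid)] + 0.5 * scores[pid],
                     reverse=True)
    print("recommended after P1:", ranking)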
|
|