Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles from a wide range of Information Technology disciplines, spanning the most basic research to the most innovative technologies. Please submit your papers electronically through our submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a compatible format so that they may be evaluated for publication in the upcoming issue. This journal uses a blinded review process; please remember to include all your personally identifiable information in the manuscript before submitting it for review, and we will edit out the necessary information on our side. Submissions to JATIT should be full research/review papers (properly indicated below the main title).
Journal of Theoretical and Applied Information Technology
August 2024 | Vol. 102 No. 16
Title:
A NEW CHANNEL ESTIMATION FRAMEWORK FOR 6G INTELLIGENT REFLECTING SURFACE-ENABLED MIMO
Author:
NAZIYA BEGUM, DR. MUHIEDDIN AMER, DR. OMAR ABDUL LATIF
Abstract:
The Intelligent Reflecting Surface (IRS) serves as a technology enabling passive
manipulation of wave properties like amplitude, frequency, phase, and
polarization through reflection. This technology is poised to revolutionize
wireless communication by enhancing spectrum and energy efficiency while
demanding minimal energy consumption. However, in scenarios where an IRS assists
a base station (BS) with multiple antennas and user equipment (UE) with a single
antenna, obtaining Instantaneous Channel State Information (I-CSI) for every
link at the IRS poses challenges due to the numerous reflective elements and
passive operation of the IRS. This imposes additional burdens on the system,
necessitating the integration of radio-frequency chains into the IRS system. To
address this, the paper proposes a three-phase pilot-based channel estimation
framework for uplink multiuser communications, utilizing modular redundancy to
reduce the time needed for channel estimation. The framework leverages the IRS to achieve this goal by estimating the direct channels between the UEs and the BS, as well as the reflected channels linking each UE, the IRS, and the BS.
Keywords:
6G, IRS, MIMO, Channel estimation, Uplink, Downlink, Power utilization
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
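For orientation, the uplink signal model that underlies most IRS channel-estimation work (a generic textbook form, not necessarily the authors' exact notation):

\[ \mathbf{y}_t = \left( \mathbf{h}_d + \mathbf{G}\,\operatorname{diag}(\boldsymbol{\theta}_t)\,\mathbf{h}_r \right) s_t + \mathbf{n}_t , \]

where \(\mathbf{h}_d \in \mathbb{C}^{M}\) is the direct UE-BS channel, \(\mathbf{G} \in \mathbb{C}^{M \times N}\) the IRS-BS channel, \(\mathbf{h}_r \in \mathbb{C}^{N}\) the UE-IRS channel, and \(\boldsymbol{\theta}_t\) the IRS phase-shift vector during pilot symbol \(s_t\). Since the passive elements observe nothing themselves, the BS must recover \(\mathbf{h}_d\) and the cascaded channel \(\mathbf{G}\operatorname{diag}(\mathbf{h}_r)\) from pilots alone, which is why reducing the number of pilot configurations is the central concern.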
|
Title:
USING DECISION TREE TECHNIQUE TO ANALYZE STUDENTS’ REFLECTIVE THINKING, FEEDBACK, AND PERFORMANCE
Author:
SITI KHADIJAH MOHAMAD, ZAIDATUN TASIR, SITI NAZLEEN ABDUL RABU
Abstract:
This study aims to apply a decision tree technique to understand how learning
performance patterns in an educational blogging environment can be formulated
and forecasted based on reflective thinking skills, feedback types, and
performance test increment levels. A case study research design was adopted,
involving qualitative data from students’ reflections on blogs and quantitative
data on students’ performance. Data collection spanned 14 weeks and involved 18
postgraduate students enrolled in the Authoring System course. The data was
prepared in .arff format and mined using the WEKA 3.6.6 machine learning toolkit
to generate learning performance patterns. A random tree algorithm was applied to classify the data and was evaluated using three-fold cross-validation. As a result, eight learning performance patterns were generated for
three increment categories: P3, P4, and P5. From the generated patterns, it is
evident that the higher the increment category of learning performance, the
higher the levels of reflective thinking skills and feedback types utilised by
students in their reflections. Moreover, Descriptive Reflection (DR) was noted
as the most influential attribute differentiating the variables predicting all
three learning increment classes. In addition to learning performance patterns,
the overall performance of this predictive model was moderate (recall = 50%,
precision = 50%) and acceptable (ROC = 61%). The findings are beneficial in
identifying at-risk students, such as those struggling with reflection and
displaying lower reflective thinking skills and feedback. This can alert
instructors to take early action for intervention purposes.
Keywords:
Reflective Thinking, Feedback, Learning Performance, Decision Tree Data Mining, Blogging
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
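As a rough Python/scikit-learn analogue of the WEKA workflow (the study itself used WEKA 3.6.6 and its RandomTree; the attributes and labels below are placeholders):

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((18, 6))                          # 18 students x 6 reflection/feedback attributes
y = np.array(["P3"] * 6 + ["P4"] * 6 + ["P5"] * 6)  # increment categories

# max_features="sqrt" picks a random attribute subset per split,
# loosely mirroring WEKA's RandomTree
clf = DecisionTreeClassifier(max_features="sqrt", random_state=0)
scores = cross_val_score(clf, X, y, cv=3)        # three-fold cross-validation
print(scores.mean())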
|
Title:
A CONCEPTUAL FRAMEWORK FOR LEVERAGING CLOUD AND FOG COMPUTING IN DIABETES PREDICTION VIA MACHINE LEARNING ALGORITHMS: A PROPOSED IMPLEMENTATION
Author:
EDMIRA XHAFERRA, FLORIJE ISMAILI, ELDA CINA, ANILA MITRE
Abstract:
This paper presents a theoretical framework for forecasting diabetes in Albania
by combining cloud and fog computing with machine learning methods. The
framework is designed to improve the effectiveness, expandability, and ethical
accountability of diabetes management systems. It is customized to address the
unique difficulties faced in the Albanian healthcare setting, such as restricted
data accessibility and infrastructure limitations. The research used a
mixed-methods approach, integrating both quantitative and qualitative
procedures. On the quantitative side, it uses several data sources such as
Electronic Health Records (EHRs), wearable devices, and laboratory testing.
These data sources are subjected to thorough data preparation and feature
selection algorithms. Machine learning models are assessed by employing measures
such as accuracy and recall in conjunction with cross-validation procedures. The
framework’s practicality and usability are evaluated using interviews, focus
groups, and observational studies conducted in clinical settings to provide a
qualitative assessment. This research is noteworthy due to its pioneering
combination of cloud and fog computing with data analytics in the healthcare
field, specifically in predicting diabetes. It creates new knowledge in
integrating machine learning with healthcare data analytics to develop scalable
and efficient predictive models for low-resource settings. It focuses on the
requirement for predictive models that can operate in real-time and scale well,
especially in contexts with limited resources, such as Albania. The study
focuses on patient privacy, data security, and equal access to health services,
making a valuable contribution to the academic discussion on the ethical
implementation of AI in healthcare. This methodology not only enhances digital
health research but also establishes a precedent for customizing technology
solutions to address unique regional healthcare concerns.
Keywords:
Diabetes, Prediction, Machine Learning, Cloud Computing, Fog Computing
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
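A minimal sketch of the quantitative evaluation loop the framework calls for (preprocessing, feature selection, then accuracy and recall under cross-validation), assuming scikit-learn, with synthetic stand-in data:

from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=400, n_features=15, random_state=0)  # stand-in for EHR features

pipe = Pipeline([
    ("scale", StandardScaler()),                   # data preparation
    ("select", SelectKBest(f_classif, k=8)),       # feature selection
    ("model", LogisticRegression(max_iter=1000)),  # any candidate ML model
])
cv = cross_validate(pipe, X, y, cv=5, scoring=["accuracy", "recall"])
print(cv["test_accuracy"].mean(), cv["test_recall"].mean())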
|
Title:
ADVANCING HISTOPATHOLOGIC CANCER DETECTION USING DIVERSE CNN ARCHITECTURES AND TRANSFER LEARNING
Author:
N. NAGA SUBRAHMANYESWARI, M. H. M. KRISHNA PRASAD
Abstract:
Histopathologic cancer detection plays a crucial role in early diagnosis and
treatment planning. This research explores the application of distinct
convolutional neural network (CNN) architectures, including VGG, InceptionV3,
ResNet50, DenseNet121, and a custom CNN, for improving histopathologic cancer
detection through transfer learning and advanced data augmentation techniques.
The Cancer Detection dataset collected from Kaggle serves as the foundation for the experiments. Transfer learning is employed from general image datasets like
ImageNet and medical imaging datasets, tailoring the models to histopathologic
characteristics. Each CNN architecture is examined independently to understand
its unique contribution to feature extraction. Advanced data augmentation
strategies, carefully designed to address limited annotated data, enhance the
generalization capabilities of each model. These strategies, including rotation,
shift, shear, and zoom, contribute to improved performance in histopathologic
cancer detection. The work presents comprehensive analyses of each model's
performance metrics, providing insights into their strengths in capturing
intricate histological patterns. Results showcase the effectiveness of the
individual CNN architectures, each demonstrating superior performance metrics.
The findings underscore the significance of selecting appropriate architectures
for specific tasks, contributing to advancements in automated histopathologic
analysis with potential applications in early cancer diagnosis and treatment
planning.
Keywords:
Histopathologic Cancer, CNN, Transfer Learning, VGG, ResNet, Inception
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
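A minimal sketch of the transfer-learning-plus-augmentation recipe described above, assuming TensorFlow/Keras; the 96x96 input size, classification head, and directory layout are illustrative choices, not the paper's configuration:

import tensorflow as tf

base = tf.keras.applications.DenseNet121(weights="imagenet", include_top=False,
                                         input_shape=(96, 96, 3), pooling="avg")
base.trainable = False  # reuse ImageNet features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # tumor vs. no tumor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The augmentations named above: rotation, shift, shear, zoom
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20, width_shift_range=0.1, height_shift_range=0.1,
    shear_range=0.1, zoom_range=0.1, rescale=1 / 255.0)
# train_gen = datagen.flow_from_directory("train/", target_size=(96, 96),
#                                         class_mode="binary")
# model.fit(train_gen, epochs=5)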
|
Title:
EMERGING AMBIDEXTROUS OPPORTUNITIES: HOW MALAYSIAN GLCS CAN LEVERAGE ARTIFICIAL INTELLIGENCE
Author:
K. INDRAVATHY, NOORLIZAWATI ABD RAHIM
Abstract:
Government-linked companies (GLCs) are integral to Malaysia's economic growth,
operating under substantial government ownership and control across various
sectors. In the digital era, it is crucial for GLCs to adopt advanced
technologies, particularly artificial intelligence (AI), to enhance their
performance and remain competitive against non-GLCs. This study investigates the
role of AI in GLCs in providing ambidextrous opportunities, addressing the research gap concerning AI adoption and its potential impact in Malaysian GLCs. A
Systematic Literature Review (SLR) was conducted using the PRISMA method to
analyze 48 peer-reviewed articles from the Web of Science (WOS) and Scopus
databases, published between 2013 and 2024. The results revealed various roles
of AI that can be leveraged by GLCs as ambidextrous opportunities, including
automating financial tasks and services, enhancing transparency in procurement,
optimizing supply chain resources, improving public administration,
strengthening policy governance, optimizing marketing efforts, enhancing human
resource management, boosting corporate readiness, optimizing energy management,
managing sustainable natural resources, revolutionizing healthcare, implementing
smart farming, enhancing e-commerce strategies, optimizing renewable energy
utilization, enhancing smart grid management, increasing organizational agility,
developing managerial skills, enhancing product development, and improving
manufacturing efficiency. The study provides managerial recommendations for
integrating AI across various sectors, aiming to boost competitiveness,
operational efficiency, and innovation. This research contributes to the
literature by offering practical insights and strategic guidance for
policymakers and managers in leveraging AI to create ambidextrous opportunities,
ensuring sustained growth and competitiveness in the digital age. This study
recommends future research focus on the issues and challenges of leveraging AI
in GLCs.
Keywords:
Artificial intelligence, government-linked companies, ambidextrous opportunities, digital transformation, adoption, systematic literature review, technology
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
|
Title:
EDGE ENHANCE SPARSE DEEP AUTO ENCODER MODEL FOR HIGH-ACCURACY BRAIN TUMOR DETECTION IN MRI IMAGES
Author:
SATYAVATI JAGA, K. RAMA DEVI
Abstract:
Brain tumors are among the most lethal cancerous diseases, with their severity
making them a leading cause of cancer-related deaths. The treatment of brain
tumors depends on the tumor's type, location, and size. Solely relying on human
inspection for accurate categorization can result in potentially dangerous
situations. This manual diagnostic process can be enhanced and expedited through
the use of an automated Computer Aided Diagnosis (CAD) system. This research
embarks on a ground-breaking path, leveraging cutting-edge deep learning methods
to revolutionize anomaly detection and classification of brain tumors in MRI
images. The aim of this work is to address the crucial area of brain tumor
detection by presenting a new and innovative methodology that surpasses the
limitations of current techniques. Central to this investigation is the Edge
Enhance Sparse Deep Auto Encoder (EES-DAE) model, which introduces a network
designed to detect and enhance edge saliency through deep learning (DL)
techniques. The significance of the EES-DAE model is highlighted by its
multidimensional approach, which greatly enhances the detection of brain tumors.
The process starts with a pre-processing phase where a Wiener filter is applied
to enhance the quality of brain MRI scans, providing a robust basis for further
analysis. Next, a pixel normalization elimination step is performed to extract
essential features while minimizing the effects of noise interference.
Additionally, the intermediate stages of the methodology smoothly incorporate
the 2D Discrete Cosine Transform (DCT) and entropy filters, orchestrating the
precise extraction of complex features that greatly enhance the model's
accuracy. The integration of the Edge Enhance Sparse Deep Auto Encoder (EES-DAE)
model with the capabilities of soft-max entropy classification marks the apex of
this innovative journey. The BRATS MRI brain images are utilized to assess the
effectiveness of the developed techniques. This seamless fusion of advanced
techniques achieves a remarkable outcome: the detection of brain tumor regions
with an impressive accuracy rate of 99.13%.
Keywords:
MRI, CAD System, EES-DAE Model, DL, 2D-DCT, CNN, Softmax, Preprocessing, Wiener Filter, Pixel Normalization Elimination, Feature Extraction
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
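A rough sketch of the pre-processing chain named above (Wiener filtering, pixel normalization, 2D DCT features), assuming NumPy/SciPy; the random array stands in for a BRATS MRI slice and the coefficient cut-off is arbitrary:

import numpy as np
from scipy.signal import wiener
from scipy.fftpack import dct

mri = np.random.rand(128, 128)                    # stand-in MRI slice
den = wiener(mri, mysize=5)                       # Wiener-filter denoising
den = (den - den.min()) / (den.max() - den.min() + 1e-8)            # pixel normalization
features = dct(dct(den, axis=0, norm="ortho"), axis=1, norm="ortho")  # 2D DCT
low_freq = features[:16, :16]                     # keep the most energetic coefficients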
|
Title:
PERFORMANCE COMPARISON OF REINFORCEMENT LEARNING ALGORITHMS IN THE CARTPOLE GAME USING UNITY ML-AGENTS
Author:
EUN-HYEONG JO, YOUNGSIK KIM
Abstract:
Reinforcement learning is a field of machine learning where agents learn optimal
actions through trial and error interactions with their environment. Games
provide an effective benchmark to evaluate and compare the performance of
reinforcement learning algorithms. This study utilized Unity's ML-Agents to
implement the 'CartPole' game and applied various algorithms, including the Deep
Q-Network (DQN), Advantage Actor-Critic (A2C), and Proximal Policy Optimization
(PPO), to compare their performance. The primary research contribution of this
work is the systematic comparison of these algorithms within a consistent
environment, providing insights into their respective strengths and weaknesses.
The study presents detailed analyses of the learning processes and outcomes of
each algorithm, highlighting the DQN's superior performance in terms of
stability and efficiency. Additionally, this work contributes new knowledge by
demonstrating the practical applications and potential of reinforcement learning
algorithms in simple game environments, thereby informing future developments in
more complex domains.
Keywords:
Reinforcement Learning Algorithm, Deep Q-Network (DQN) Algorithm, Performance Comparison, CartPole Game
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
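As a minimal sketch of the kind of agent compared here, a bare-bones one-step DQN update on CartPole, assuming the gymnasium and torch packages (the study itself used Unity ML-Agents, and a full DQN adds experience replay and a target network):

import gymnasium as gym
import torch, torch.nn as nn
import random

env = gym.make("CartPole-v1")
q = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1

for episode in range(50):
    s, _ = env.reset()
    done = False
    while not done:
        with torch.no_grad():                    # epsilon-greedy action selection
            a = env.action_space.sample() if random.random() < eps \
                else int(q(torch.tensor(s)).argmax())
        s2, r, term, trunc, _ = env.step(a)
        done = term or trunc
        with torch.no_grad():                    # TD target: r + gamma * max_a' Q(s', a')
            target = r + (0.0 if term else gamma * q(torch.tensor(s2)).max().item())
        loss = (q(torch.tensor(s))[a] - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        s = s2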
|
Title:
MODELING OF SWITCHING PROPERTIES OF SEMICONDUCTOR OPTICAL AMPLIFIER GATE BY THE XGM MECHANISM
Author:
A. ELYAMANI, A. MOUMEN, H. BOUSSETA, A. ZATNI
Abstract:
Semiconductor optical amplifiers (SOAs) are multifunctional optoelectronic components. Designing an SOA necessarily requires a modelling phase that allows a theoretical characterization of the device and anticipation of its behaviour under given operating conditions. This work presents an in-depth theoretical analysis of the wavelength conversion dynamics produced by cross-gain modulation (XGM) in an SOA. For this purpose, we use a dynamic algorithm we have previously developed, which allows an efficient implementation of the model by means of the finite difference method (FDM), the carrier density rate equation, and a set of travelling-wave equations. This numerical model enables us to describe the propagation of a rectangular optical signal through the SOA and accounts for nonlinearity by means of a pump-probe arrangement that controls the power and state of polarization at the input of the SOA. The model can be used to investigate the effects of material and geometrical parameters on the switching speed, extinction ratio, recovery rate, and output power of the signals in SOAs. We can then study the most important properties of the SOA for use in amplification, wavelength conversion, optical logic gates, code conversion, and regeneration, which demonstrates the versatility of our model.
Keywords:
Modeling, Cross-Gain Modulation (XGM), Semiconductor Optical Amplifier (SOA), Switching Speed, Extinction Ratio (ER), Recovery Rate, Finite Difference Method (FDM)
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
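For context, a generic form of the two equation families that such an FDM algorithm discretizes (standard SOA modelling forms; the paper's exact parameterization may differ):

\[ \frac{\partial N}{\partial t} = \frac{I}{qV} - \frac{N}{\tau_c} - \frac{\Gamma\, g(N)}{\hbar \omega\, A}\left( P_{\mathrm{pump}} + P_{\mathrm{probe}} \right), \qquad \frac{\partial P}{\partial z} = \left( \Gamma\, g(N) - \alpha_{\mathrm{int}} \right) P, \]

where \(N\) is the carrier density, \(I\) the bias current, \(V\) the active volume, \(\tau_c\) the carrier lifetime, \(\Gamma\) the confinement factor, \(g(N)\) the material gain, \(A\) the active-region cross-section, and \(\alpha_{\mathrm{int}}\) the internal loss. XGM arises because a strong pump depletes \(N\) and thereby modulates the gain seen by the probe.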
|
Title:
IMPROVING PORT SCAN CYBERSECURITY RISK DETECTION USING FEATURE SELECTION TECHNIQUES WITH ML ALGORITHMS
Author:
RAMI SHEHAB, RANA ALRAWASHDEH, ROMEL AL-ALI, TAYSEER ALKHDOUR, MOHAMMED AMIN ALMAIAH
Abstract:
Abstract: |
Malicious automated tools use port scan attacks to explore a target systems
network ports aiming to find open ports and potential weaknesses. Port scanning
can serve as a tool for system admins and cybersecurity experts to explore the
main weaknesses in the network. Several countermeasures have been employed to
defend against port attacks such as firewalls, intrusion detection systems (IDS)
and network monitoring tools. These countermeasures aim to prevent malicious
port scanning attacks. Based on that, port scan risks assessment is one of the
essential step for detecting threats, vulnerabilities and protect the network
security. Thus, this work aims to detect port scan attacks using features
selection techniques with machine learning (ML) algorithms to reduce
cyber-attacks being successful. To achieve this objective, we used three feature
selection methods namely, ant colony algorithm (ACO), genetic algorithm (GA),
and gray wolf optimization (GWO) with machine learning algorithms such as
support vector machine (SVM), and nearest neighbor (KNN). The proposed work has
been evaluated using confusion matrix measurements in terms of precision,
recall, F1 score and accuracy. The study findings show that the percentage of
risks and attacks detection over 99% for all proposed models. This study
confirms that through the use of feature selection algorithms and machine
learning methods, can help researchers to identify port behaviors and attacks in
more efficiently. |
Keywords:
Port Scan Attacks; Cyber-Risk; Machine Learning; Feature Selection; ACO; GA; GWO; SVM; KNN
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
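A sketch of the wrapper pattern shared by ACO/GA/GWO feature selection (propose a feature mask, score it with a cross-validated classifier, keep the best); a random search stands in for the metaheuristics here, and the data is synthetic:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)
best_mask, best_score = None, -1.0
for _ in range(60):                       # each metaheuristic proposes masks differently
    mask = rng.random(X.shape[1]) < 0.5   # candidate feature subset
    if not mask.any():
        continue
    score = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    if score > best_score:
        best_mask, best_score = mask, score
print(best_mask.sum(), "features ->", round(best_score, 3))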
|
Title:
A FRAMEWORK FOR ARTIFICIAL INTELLIGENCE RISK MANAGEMENT
Author:
DAVID LAU KEAT JIN, GANTHAN NARAYANA SAMY, FIZA ABDUL RAHIM, NURAZEAN MAAROP, MAHISWARAN SELVANANTHAN, MAZLAN ALI, VALLIAPPAN RAMAN
Abstract:
Artificial Intelligence (AI) affords tremendous benefits to multiple sectors and businesses as its capabilities extend to different domains of activity. Notwithstanding the benefits it brings, there are also potential risks that cause concern among its users and those impacted by its use. Effective risk management is thus essential for organizations planning to deploy AI in high-risk applications. This study introduced a framework developed using a knowledge graph that stores and manages information on risk management, the AI life cycle, and stakeholder involvement, adhering to established standards. The framework facilitated the retrieval and generation of insights that support decision-making related to risk management, as it can represent interrelationships between entities more effectively than relational databases or taxonomies. The insights that can be generated include the distribution of risks across AI life cycle phases, the countermeasure that could treat the greatest number of risks, and the countermeasure that produced the greatest change in the impact and probability of the identified risks. In this study, the Cypher language was used to develop the framework, while Python was used to generate the insights from it. Future studies may consider integrating the framework into an enhanced Enterprise Risk Management framework to enable real-time updates of related information and responses by the organization.
Keywords:
Artificial Intelligence, Risk Management, AI Life Cycle, Stakeholder
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
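A hedged sketch of how one such insight (risk counts per AI life-cycle phase) could be pulled from a Neo4j knowledge graph with Cypher via Python; the node labels, relationship type, and credentials are hypothetical, not the paper's schema:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
query = """
MATCH (r:Risk)-[:OCCURS_IN]->(p:LifeCyclePhase)
RETURN p.name AS phase, count(r) AS risks
ORDER BY risks DESC
"""
with driver.session() as session:
    for record in session.run(query):   # one row per life-cycle phase
        print(record["phase"], record["risks"])
driver.close()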
|
Title:
ENHANCING CLOUD DATA SECURITY THROUGH LONG-TERM SECRET SHARING SCHEMES
Author:
SARA IBN EL AHRACHE, HASSAN BADIR
Abstract:
Cloud computing has experienced significant growth in recent years, becoming a
cornerstone of modern IT infrastructure. Promising "infinite scalability and
unlimited resources," cloud service providers offer on-demand access that often
obscures the underlying computing infrastructure. The inherent complexity of
virtualized, multi-tenant cloud environments surpasses that of traditional data
centers, complicating service management, particularly in terms of security.
Despite these challenges, the appealing features of cloud computing have led
many organizations to adopt cloud storage services for their critical data.
Users can store data remotely in the cloud and access it via thin clients when
needed. However, data security remains a paramount concern due to the
internet-based nature of cloud services, which limits user control over stored
data. This paper proposes an innovative approach to enhance data security in
cloud environments through a Long-Term Secret Sharing Scheme (SSS-LT). Secret
sharing schemes partition and distribute data across multiple cloud service
providers, thereby increasing data privacy and availability. Our proposed SSS-LT
addresses a key limitation of existing secret sharing methods: the degradation
of computational performance with large data sets. We conduct a theoretical
analysis of the security and complexity factors influencing our approach and
validate its efficacy through experimental evaluation, demonstrating its
superiority over existing methods.
Keywords:
Cloud Computing, Data Security, Secret Sharing, MapReduce
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
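To illustrate the general principle (classic Shamir threshold sharing over a prime field, not the paper's SSS-LT construction), a self-contained sketch:

import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for the demo secret

def make_shares(secret, k, n):
    # random polynomial of degree k-1 with constant term = secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = make_shares(123456789, k=3, n=5)       # e.g. one share per cloud provider
assert reconstruct(shares[:3]) == 123456789     # any 3 of the 5 shares suffice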
|
Title:
A SMART FRAMEWORK TO GUIDE CUSTOMERS IN RAISING THE RETURN ON INVESTMENT
Author:
YOUSR AHMED ALAA ELDEEN, SHERIF ADEL ABD EL ALEEM, AHMED IBRAHIM BAHGAT EL SEDDAW
Abstract:
This study assesses the performance of various machine learning models and
traditional statistical tools in forecasting the prices of key commodities,
including gold, silver, crude oil, Brent oil, natural gas, and copper. The
models evaluated encompass Random Forest Regression, Gradient Boosting
Regression, Support Vector Regression, XGBoost, Long Short-Term Memory (LSTM),
and Artificial Neural Networks (ANN), in addition to traditional econometric
tools like SPSS and EViews. Performance metrics are based on the Root Mean
Square Error (RMSE) statistic. The findings reveal that LSTM outperforms other
models in capturing intricate time series patterns, particularly for copper
price predictions, demonstrated by significantly lower RMSE values. While
classical statistical methods and other machine learning models achieve
reasonable accuracy, LSTM and ANN consistently show superior performance.
Furthermore, an integrated model combining SPSS, EViews, and LSTM projections
identifies top investment prospects. Gold and silver are highlighted as solid,
safe-haven assets with highly accurate forecasts. Natural gas is noted for its
precise price predictions and potential for substantial price increases. Copper
stands out due to its excellent predictive accuracy and its capability to
provide early warnings of gold price changes. The integration of traditional
statistical approaches with advanced machine learning models offers a
comprehensive framework for forecasting commodity prices and pinpointing the
most promising investment opportunities in the commodities market.
Keywords:
Forecasting Commodity Prices, Machine Learning, Long Short-Term Memory (LSTM), Statistical Package for the Social Sciences (SPSS), Econometric Views (EViews)
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
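A minimal sketch of the LSTM forecasting-plus-RMSE loop, assuming TensorFlow/Keras; the synthetic random-walk series stands in for the commodity price data:

import numpy as np
import tensorflow as tf

series = np.cumsum(np.random.randn(500)).astype("float32")  # stand-in prices
W = 20  # look-back window
X = np.stack([series[i:i + W] for i in range(len(series) - W)])[..., None]
y = series[W:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(W, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-50], y[:-50], epochs=5, verbose=0)  # hold out the last 50 points

pred = model.predict(X[-50:], verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y[-50:]) ** 2)))  # the paper's comparison metric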
|
Title:
OBJECT-ASPECT ORIENTED MODELS TO PETRI NETS MODEL: AN APPROACH FOR THE TRANSFORMATION, ANALYSIS AND VERIFICATION OF SOFTWARE SYSTEMS
Author:
MOUNA AOUAG, NOUHAD MERABET, WIAM KENNOUCHE
Abstract:
Object-Oriented Modeling (OOM) is a software design approach that structures and
organizes code to accurately reflect reality, thereby facilitating its
maintenance and evolution. However, it does have limitations in managing
crosscutting concerns. Aspect-Oriented Modeling (AOM) provides solutions to
these challenges, despite its lack of formal semantics. This underscores the
importance of formal modeling, which does offer rigorous semantics. In this
paper, we propose an approach to transform a detailed object-oriented sequence
diagram into a detailed aspect-oriented sequence diagram, based on graph
transformation. Subsequently, we propose a method to transform the
aspect-oriented diagram into a Petri net. Our work begins with a single meta-model for the first approach, using graph grammar rules to obtain an aspect-oriented model. We then apply the second approach to the result of the first, using two meta-models and graph grammar rules, resulting in a Petri net. We use the AToMPM modeling tool and, finally, perform a property analysis with the TINA tool.
Keywords:
Object-Oriented Modeling, Aspect-Oriented Modeling, Petri Nets, AToMPM, TINA
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
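To make the target formalism concrete, a toy Petri net "token game" in Python; the places and transitions are hypothetical, and the real property analysis (boundedness, liveness, reachability) is what a tool like TINA automates:

marking = {"idle": 1, "busy": 0, "done": 0}
transitions = {
    "start":  ({"idle": 1}, {"busy": 1}),   # (pre-conditions, post-conditions)
    "finish": ({"busy": 1}, {"done": 1}),
}

def enabled(t):
    pre, _ = transitions[t]
    return all(marking[p] >= w for p, w in pre.items())

def fire(t):
    assert enabled(t), f"{t} is not enabled"
    pre, post = transitions[t]
    for p, w in pre.items():
        marking[p] -= w        # consume input tokens
    for p, w in post.items():
        marking[p] += w        # produce output tokens

fire("start"); fire("finish")
print(marking)  # {'idle': 0, 'busy': 0, 'done': 1}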
|
Title:
DEEP LEARNING-DRIVEN CAR PARKING SPACE DETECTION: A YOLOR APPROACH
Author:
SHARFINA FAZA, ROMI FADILLAH RAHMAT, RINA ANUGRAHWATY, JESSICA WONG, FARHAD NADI
Abstract:
Parking is a critical facility that must be widely accessible, especially in
public places such as tourist attractions, shopping centers, and offices.
However, people are often delayed due to difficulty finding empty parking
spaces, particularly during peak hours or when there are many visitors. This is
partly because visitors do not know the exact location of available slots and
must repeatedly circle the area to find one. Therefore, there is a pressing need
to develop more efficient methods for detecting parking space availability to
expedite and simplify the search process. To address this challenge, our study
employs the You Only Learn One Representation (YOLOR) method to help detect
available parking slots in three distinct locations: the underground parking
area at 'Centre Point' shopping mall in Medan City, the university library
parking lot, and the campus parking facility at the University of Sumatera
Utara. YOLOR is one of the deep learning methods that has shown promising
results in object detection tasks, making it a suitable choice for this parking
slot identification problem. Based on our system's tests, we achieved accuracies
of 95.71%, 95.74%, and 83.33% for each parking lot, respectively. For the
occupied class, we obtained precisions of 0.99 and 0.97 in each parking lot,
while for the empty class, we consistently achieved a precision of 1.00 across
all lots. Recall rates were 0.96, 0.95, and 0.92 for the occupied class, and
0.95, 0.96, and 1.00 for the empty class in each parking lot. Our model was
trained on a total of 3,180 images, comprising 11,666 empty slots and 11,843
occupied slots. Testing was conducted using six 30-minute to 1-hour long videos,
each capturing various parking conditions under different weather (sunny,
cloudy, or rainy) and lighting situations.
Keywords:
Parking, Parking Spaces, Detection, YOLOR, Deep Learning
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
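A small sketch of how the reported figures follow from per-class detection counts; the detections list and the TP/FP/FN counts below are hypothetical, not the paper's data:

detections = [("occupied", 0.98), ("empty", 0.99), ("occupied", 0.91)]  # (class, confidence) per slot
empty_slots = sum(1 for cls, conf in detections if cls == "empty" and conf >= 0.5)
occupied_slots = sum(1 for cls, conf in detections if cls == "occupied" and conf >= 0.5)
print(f"{empty_slots} free of {len(detections)} slots")

def precision_recall(tp, fp, fn):
    # precision = TP / (TP + FP); recall = TP / (TP + FN)
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall(tp=95, fp=3, fn=4)   # illustrative counts only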
|
Title:
IDENTIFICATION OF PAPAYA VARIETIES USING COMPUTER VISION AND DEEP LEARNING APPROACHES
Author:
SHARFINA FAZA, ROMI FADILLAH RAHMAT, MERYATUL HUSNA, AJULIO PADLY SEMBIRING, ANJAS SUBHANUARI, FARHAD NADI
Abstract:
Papaya is one of the most popular tropical fruits and is widely cultivated in
Indonesia. Papaya plants are easily found in various regions due to their good
adaptation to the tropical climate. In addition to being delicious for
consumption, papaya also offers many health benefits because of its high fiber
content and benefits for the digestive system. There are various popular papaya
varieties that are frequently consumed by the Indonesian people. However, the
similarity in shape and skin color among these papaya varieties often makes it
difficult to distinguish them, especially for laypeople. This poses a challenge
in identifying papaya varieties, both for daily consumption purposes and in the
marketing process. This research aims to develop a prototype that can identify
papaya varieties using computer vision technology with the R-FCN ResNet101
algorithm. The data collection for this research was conducted through direct
surveys at several markets that sell various papaya varieties. The data
collected was in the form of 1000 papaya fruit images captured using a mobile
phone camera. The collected data was then divided into training and testing
datasets. The first step was image preprocessing, which consisted of resizing
the images, labeling the training data, and converting it into TensorFlow Record
files. The next step involved processing the data using computer vision
technology with the R-FCN ResNet101 algorithm. The result of this step was a
learned model, which was then used to analyze and identify the papaya varieties
in the testing data. The test results achieved an accuracy of 97.5%.
Keywords:
Papaya, Varieties, Identify, Computer Vision, R-FCN ResNet101
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
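A sketch of the "convert to TensorFlow Record" step mentioned above, assuming TensorFlow; the file name, label map, and stand-in image are hypothetical:

import tensorflow as tf

label_map = {"variety_a": 0, "variety_b": 1}  # hypothetical papaya varieties

def to_example(jpeg_bytes, label):
    feature = {
        "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
        "image/label":   tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter("papaya_train.tfrecord") as writer:
    # a black stand-in image; in practice, each resized, labeled photo is encoded
    jpeg_bytes = tf.io.encode_jpeg(tf.zeros([224, 224, 3], tf.uint8)).numpy()
    writer.write(to_example(jpeg_bytes, label_map["variety_a"]).SerializeToString())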
|
Title:
IMPROVING SEMANTIC SEGMENTATION OF MEDICAL IMAGES WITH FINE-TUNING TECHNIQUES
Author:
DR. K. SUDHA RANI, DR. A. SUMA LATHA, VAISHNAVI SATISH, PRADEEPA MEDISETTY
Abstract:
The study investigates techniques for fine-tuning models for semantic image segmentation. We examine how to adapt existing neural network models for pixel-by-pixel image classification, with the goal of improving their specificity and accuracy. To improve performance, we look into adapting pre-trained models to particular segmentation tasks. The parameters of our study are learning rate scheduling, optimizer choice, and data augmentation, all of which help to enhance the network's segmentation performance. The outcomes of our experiments show how well the suggested fine-tuning techniques improve the specificity and generalizability of semantic segmentation models for medical images.
Keywords:
Deep Learning, Semantic Image Segmentation, Fine-Tuning, Model Optimization, Pre-trained Models, Data Augmentation, Learning Rate Scheduling, Image Processing, Neural Networks, Model Performance
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
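A compact sketch of the fine-tuning recipe (frozen pre-trained backbone, new per-pixel head, learning-rate schedule), assuming TensorFlow/Keras; the backbone, head, and schedule values are illustrative, not the paper's exact setup:

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3))
base.trainable = False  # fine-tune only the new segmentation head at first

inputs = tf.keras.Input((224, 224, 3))
x = base(inputs, training=False)
x = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel prediction
outputs = tf.keras.layers.UpSampling2D(32)(x)  # back to input resolution (a real model would use a decoder)

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    1e-3, decay_steps=1000, decay_rate=0.9)    # the learning-rate scheduling studied above
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss="binary_crossentropy")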
|
Title:
COMPARATIVE ANALYSIS ON THE PERFORMANCE OF NHPP-BASED SOFTWARE RELIABILITY MODEL FOLLOWING EXPONENTIAL LIFE DISTRIBUTION
Author:
SEUNG KYU PARK
Abstract:
In this study, the exponential-type life distribution, known to be suitable for reliability analysis of failure occurrence phenomena, was applied to the NHPP-based software reliability model, and the performance of the resulting model was newly studied. For this purpose, failure time data requested by the developer were used. In conclusion, first, evaluating MSE and R^2, the selection criteria used to determine the suitability of the proposed models, showed that the proposed models all performed at over 85% and were efficient. Second, in the performance analysis using m(t) and λ(t), the attribute functions that have a significant impact on the performance of a reliability model, the Inverse-exponential model showed excellent performance. Third, when testing future reliability, the reliability of the Inverse-exponential and Rayleigh models showed a high and stable trend over time. Therefore, evaluating the performance attribute data (MSE, R^2, m(t), λ(t), R̂(τ)) led to the conclusion that the Inverse-exponential model had the best performance. Thus, this study can provide basic design data, along with algorithmic solution techniques, for analyzing and predicting the performance attribute data needed by developers early in the software development process.
Keywords:
Exponential-basic, Exponential-type, Exponential-power, Inverse-exponential, Rayleigh, Software Reliability
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
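For reference, the standard NHPP quantities behind this comparison (textbook definitions; each model in the paper substitutes its own mean value function m(t)):

\[ \lambda(t) = \frac{dm(t)}{dt}, \qquad \hat{R}(\tau \mid t) = \exp\!\left[ -\big( m(t+\tau) - m(t) \big) \right]. \]

For example, the basic exponential (Goel-Okumoto) model takes \(m(t) = a\,(1 - e^{-bt})\), giving \(\lambda(t) = ab\,e^{-bt}\), where \(a\) is the expected total number of faults and \(b\) the per-fault detection rate.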
|
Title:
OPTIMIZED DEEP LEARNING FRAMEWORK FOR BRAIN TUMOR DETECTION AND CLASSIFICATION USING HYBRID VISUAL GEOMETRY GROUP-16 WITH REDUCED WEIGHTS VIA BUTTERFLY OPTIMIZATION
Author:
RAMYA NIMMAGADDA, DR. P. KALPANA DEVI
Abstract:
Finding and classifying brain tumors are important parts of medical image analysis that require advanced deep-learning methods and optimization algorithms. Recognizing the urgent need for accurate methods in brain tumor diagnosis, we present a comprehensive approach integrating various stages, including data preprocessing. In this preprocessing phase, we employ techniques like aspect ratio normalization and resizing to form a standardized dataset. By standardizing image dimensions, we aim to improve subsequent processes like feature extraction and segmentation, reducing potential distortions. The suggested model uses Convolutional Neural Networks (CNNs) to find the patterns and traits that distinguish tumor from non-tumor areas. To handle intricate sections and fine textures during down-sampling, the proposed model is hybridized with the U-Net architecture, which gives accurate and robust results of 98%. Furthermore, the Dice coefficient is measured using Intersection over Union (IoU) to check robustness to class imbalance; this offers an intuitive interpretation, with higher values of 0.83 and 0.9 indicating strong segmentation performance. The model is further developed with VGG-16 to classify the tumor grades. In terms of accurately segmenting the tumor grades, the relevant characteristics learnt from the segmented tumor images provide a 73% level of satisfaction. To overcome complexity and over-fitting problems, the Butterfly Optimization algorithm is hybridized with VGG-16, which gives an enhanced output in classifying the grades. The proposed model outperforms other Machine Learning (ML) and Deep Learning (DL) methods in tumor and non-tumor identification and categorization with 99.99% accuracy. To further evaluate the suggested model's performance, mobility, and energy economy, it is also implemented on Jetson Orin hardware.
Keywords:
Deep Learning, Convolutional Neural Networks (CNNs), U-Net, VGG-16, Butterfly Optimization Algorithm
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
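For the optimization side, a generic butterfly optimization loop (after Arora and Singh) minimizing a toy function; the constants are typical defaults, the fitness transform is an adaptation for minimization, and the paper instead applies the search to VGG-16 weight reduction:

import numpy as np

rng = np.random.default_rng(0)
def objective(x):                 # stand-in fitness (lower is better)
    return float(np.sum(x ** 2))

pop, dim, iters = 20, 5, 100
c, a, p = 0.01, 0.1, 0.8          # sensory modality, power exponent, switch probability
X = rng.uniform(-5, 5, (pop, dim))

for _ in range(iters):
    fit = np.array([objective(x) for x in X])
    best = X[fit.argmin()].copy()
    frag = c * (1.0 / (1.0 + fit)) ** a      # fragrance from stimulus intensity
    for i in range(pop):
        r = rng.random()
        if rng.random() < p:                 # global search toward the best butterfly
            X[i] += (r ** 2 * best - X[i]) * frag[i]
        else:                                # local random walk between two peers
            j, k = rng.integers(pop, size=2)
            X[i] += (r ** 2 * X[j] - X[k]) * frag[i]

print(objective(best))  # should approach 0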
|
Title:
COMPARISON OF LEXICON-BASED METHOD, MACHINE LEARNING AND CHATGPT ON SENTIMENT ANALYSIS OF BIG CAP AND SMALL CAP COMPANIES IN UNITED STATES INDEXES
Author:
IMRAN KAMIL, NATHAR SHAH
Abstract:
Sentiment analysis is a natural language processing (NLP) method that identifies the sentiment contained in a body of text. It has gained significant attention due to its potential applications in various domains, including finance, marketing, and public opinion monitoring. In the financial sector, sentiment analysis is essential for analyzing market trends, forecasting stock prices, and guiding investment choices. This research paper compares the performance of a lexicon-based method, a machine learning technique, and ChatGPT in sentiment analysis of big-cap and small-cap companies in United States indexes using Twitter data. The purpose of implementing ChatGPT is to assess the usefulness of this well-known tool that is currently flooding the social media scene. The results show that Random Forest achieved the highest accuracy overall, with 83.6% on big cap and 78.8% on small cap. ChatGPT sentiment has an accuracy of 77.44% on big cap and 72.43% on small cap. Meanwhile, the lowest performing method is TextBlob, with an accuracy of 46.52% on big cap and 43.57% on small cap. Random Forest is able to understand the context of tweets and handle slang terms and phrases, while ChatGPT is still under development but has the potential to perform better in the future. Many slang terms and phrases used in the stock market are not included in the TextBlob dictionary, which is why TextBlob is the weakest performing method.
Keywords:
Sentiment Analysis, Random Forest, Lexicon, TextBlob, Machine Learning, ChatGPT
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
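A small sketch contrasting the two routes compared above, assuming the textblob and scikit-learn packages; the example tweets and labels are hypothetical:

from textblob import TextBlob
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["$AAPL mooning today, huge breakout!", "bagholding $XYZ again, brutal"]
labels = [1, 0]  # 1 = positive, 0 = negative

# Lexicon route: polarity in [-1, 1], no training needed, but no market slang
print([TextBlob(t).sentiment.polarity for t in tweets])

# Machine learning route: learns slang from labeled data
X = TfidfVectorizer().fit_transform(tweets)
clf = RandomForestClassifier(random_state=0).fit(X, labels)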
|
Title:
AN AUTOMATED MULTIMODAL HYBRID SYSTEM FOR WEB CONTENT FACT-CHECKING BASED ON BERT LANGUAGE MODEL AND CONVOLUTIONAL NEURAL NETWORK
Author:
C. VISHNU MOHAN, N. V. CHINNASAMY
Abstract:
Over the last decade, people have widely used online platforms to share information and to follow the news happening around them. Classification of social media texts, tweets, etc., is one of the emerging areas of research in today's world, especially when it comes to information about the political and entertainment sectors. However, there are challenges, since the most commonly used machine learning techniques have not proven optimal when considering both textual and image data for fake content detection. This study explores the efficacy of a hybrid deep learning architecture that leverages BERT for text representation along with a Convolutional Neural Network (CNN) for classifying news as real or fake.
Keywords:
Machine Learning, Classification, Deep Learning, BERT, CNN
Source:
Journal of Theoretical and Applied Information Technology
31st August 2024 -- Vol. 102 No. 16
Full Text
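A hedged sketch of a BERT-plus-CNN classifier in the paper's spirit, assuming the transformers and torch packages; the convolutional head and max-pooling are illustrative choices, not the authors' exact architecture:

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

class BertCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(768, 128, kernel_size=3, padding=1)  # slides over the token axis
        self.fc = nn.Linear(128, n_classes)
    def forward(self, **enc):
        h = bert(**enc).last_hidden_state             # (batch, seq, 768) BERT token embeddings
        h = torch.relu(self.conv(h.transpose(1, 2)))  # (batch, 128, seq) local n-gram features
        return self.fc(h.max(dim=2).values)           # max-pool over tokens -> real/fake logits

enc = tok(["breaking: moon made of cheese"], return_tensors="pt")
logits = BertCNN()(**enc)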
|