Journal of Theoretical and Applied Information Technology
August 2024 | Vol. 102 No. 15 |
Title: |
OPTIMIZING BROKERAGE COMPANY MARKETING: A WEB-BASED PERFORMANCE MONITORING
SYSTEM WITH CODEIGNITER |
Author: |
ANGGA ADITYA PERMANA, FEBI LATIFAH |
Abstract: |
PT Kontak Perkasa Futures provides trading services, particularly in
commodities, that generate various investment products. The company's operations
rely on the performance of its marketing team, especially in acquiring customers
willing to have their investment funds managed. Currently, performance reporting
for the marketing team at PT Kontak Perkasa Futures is still carried out
conventionally through manual recording in a ledger, which is ineffective and
inefficient for employees, especially when reporting to their manager in real
time. This research aims to design a web-based information system using the
CodeIgniter framework to computerize the reporting process so that the
performance of the marketing team can be monitored. The research methodology
involves requirement analysis, system design, system development, and system
evaluation. Data collection includes interviews and observations, with the data
analyzed using performance, information, economy, control, efficiency, and
service (PIECES) analysis. The design of the marketing performance information
system uses the Unified Modeling Language (UML); development follows the Scrum
framework and is built with MySQL, the CodeIgniter framework, and PHP; and
evaluation uses black-box testing. The result of this research is a recommended
design for a marketing performance information system that the company can use
as a reference for future development of marketing performance information
systems. Beyond that, the web-based design makes a positive contribution to the
effectiveness and efficiency of data and information management. |
Keywords: |
Black Box Testing, CodeIgniter, Performance, SCRUM, UML. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
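The black-box evaluation mentioned above tests observable input/output behaviour
only, without inspecting the implementation. A minimal sketch in Python (the
validator, its rules, and the test cases are hypothetical illustrations, not the
paper's actual CodeIgniter code):

```python
# Hypothetical sketch of black-box testing for a marketing-report
# validator like the one such a system might expose. The function name
# and validation rules are assumptions for illustration.

def validate_report(customer_name, invested_amount):
    """Validate a marketing performance report entry (assumed rules)."""
    if not customer_name or not customer_name.strip():
        return "rejected: missing customer name"
    if invested_amount <= 0:
        return "rejected: amount must be positive"
    return "accepted"

# Black-box testing derives cases from the specification alone:
# one representative input per equivalence class, checked against
# the expected observable output.
cases = [
    (("Budi", 5_000_000), "accepted"),                       # valid entry
    (("", 5_000_000), "rejected: missing customer name"),    # empty name
    (("Budi", -10), "rejected: amount must be positive"),    # bad amount
]
for args, expected in cases:
    assert validate_report(*args) == expected
print("all black-box cases passed")
```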
|
Title: |
REPLICATING VIDEO GAME PLAYERS' BEHAVIOR THROUGH DEEP REINFORCEMENT LEARNING
ALGORITHMS |
Author: |
HAFSA GHARBI, LOTFI ELAACHAK, ABDELHADI FENNAN |
Abstract: |
This paper addresses the challenge of imitating the behavior of video game
players using Deep Reinforcement Learning algorithms. By training intelligent
agents with algorithms such as Proximal Policy Optimization (PPO), Behavioral
Cloning (BC), and Generative Adversarial Imitation Learning (GAIL), we enable
these agents to learn from their interactions with the game environment,
optimizing their actions based on rewards and punishments. Experimental
evaluations across various video games demonstrate that these trained agents can
successfully mimic human player behavior in complex situations. This capability
offers significant opportunities for creating challenging non-player characters
(NPCs), designing adaptive difficulty levels, and enhancing the overall gaming
experience. Our findings suggest that integrating Reinforcement Learning
techniques allows game developers to provide more realistic and immersive
gameplay, effectively bridging the gap between Artificial Intelligence and both
video games and serious games. |
Keywords: |
Deep Reinforcement Learning, Proximal Policy Optimization, Behavioral cloning,
Generative Adversarial Imitation Learning, video game. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
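Of the three algorithms named above, behavioral cloning is the simplest to
sketch without an RL library: learn a policy directly from logged human
demonstrations. A toy tabular version (the states, actions, and demonstration
log are invented for illustration; PPO and GAIL require a full training loop
against the game environment and are omitted):

```python
# Minimal behavioral-cloning (BC) sketch: learn a state -> action policy
# from recorded human play by majority vote per state. Real BC fits a
# neural network over continuous observations; the principle is the same.
from collections import Counter, defaultdict

demonstrations = [  # (state, action) pairs logged from a human player (toy data)
    ("enemy_near", "attack"), ("enemy_near", "attack"),
    ("enemy_near", "flee"),
    ("low_health", "heal"), ("low_health", "heal"),
    ("idle", "explore"),
]

counts = defaultdict(Counter)
for state, action in demonstrations:
    counts[state][action] += 1

# The cloned policy imitates the most frequently demonstrated action.
policy = {s: c.most_common(1)[0][0] for s, c in counts.items()}
print(policy["enemy_near"])  # attack
```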
|
Title: |
COMBATING FRAUD: DYNAMIC AND ADVANCED TECHNIQUES FOR UNVEILING FALSE REVIEWS AND
DECEIVING TEXT ON E-COMMERCE WEBSITE |
Author: |
DR. P. NAGARAJ, RAJASRI MAMIDALA |
Abstract: |
E-commerce has widely grown among people in recent years and has been used for
purchasing products and services on the Internet. E-commerce faces more
challenges due to the growing amount of deceptive and fake products online. This
research aims to combat this fraud using dynamic and advanced techniques for
unveiling false reviews and deceiving product descriptions. This research
employs the DistilBERT model for detecting fake reviews and the BERT base model
for identifying misleading product descriptions. This research aims to reduce
false information and to detect and report deceptive products backed by false
reviews. In this study, we create an FRD algorithm and a DTD algorithm that
address the problem of combating fraud through dynamic and advanced techniques
for unveiling false reviews and deceiving text on e-commerce websites. The model
achieves an accuracy of 95.6%. It helps customers save time and focus on
purchasing the product rather than determining whether the reviews are true or
false. Future research will focus on more accurate, dynamic, and efficient ways
to execute the AI models. |
Keywords: |
Fake Review Detection, Deceiving Text Detection, BERT Model, FRD Algorithm, DTD
Algorithm. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
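The FRD step above takes review text in and returns a fake/genuine label with a
score. The paper fine-tunes DistilBERT for this; the sketch below substitutes a
trivial keyword scorer just to show the same interface (the cue phrases and
threshold are assumptions, not the paper's model):

```python
# Illustrative stand-in for a Fake Review Detection (FRD) step. The real
# system uses a fine-tuned DistilBERT classifier; this keyword scorer
# only demonstrates the input/output contract: text -> label + score.
SUSPICIOUS_CUES = {"best ever", "life changing", "100%", "buy now", "!!!"}

def frd_score(review: str) -> float:
    """Crude 0..1 'fakeness' score from hand-picked cue phrases."""
    text = review.lower()
    hits = sum(cue in text for cue in SUSPICIOUS_CUES)
    return min(1.0, hits / 3)

def frd_label(review: str, threshold: float = 0.5) -> str:
    return "fake" if frd_score(review) >= threshold else "genuine"

print(frd_label("Best ever!!! 100% life changing, buy now"))           # fake
print(frd_label("Arrived on time, fabric feels thinner than expected"))  # genuine
```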
|
Title: |
IMPLEMENTATION AND MANAGEMENT OF SECURITY FOR SENSITIVE DATA IN CLOUD COMPUTING
ENVIRONMENT USING ELLIPTICAL CURVE CRYPTOGRAPHY |
Author: |
N. KRISHNAMOORTHY, S.UMARANI |
Abstract: |
When it comes to cloud storage, one of the biggest concerns is keeping data
safe. In today's technologically evolved world, cloud attacks are on the rise.
Current cloud storage security services rely heavily on symmetric key encryption
algorithms, which involve the exchange of secret keys and may be susceptible to
attacks from outside parties. In the same way that cloud computing has become
indispensable in recent years, so too have the security issues surrounding the
cloud model grown in scope and complexity. There has been a dramatic shift in
how infrastructure, service delivery, and development models are seen thanks to
cloud computing. Each participant in the proposed cloud storage security
framework has a specific role to play: the data owner, who is responsible for
encrypting the plain text and constructing the access control policy; the
attribute authority, who acts as the data owner's trusted agent and stores the
attribute-based access control policy used for key generation and to restrict
access to authorized users; the cloud storage; and the data users. In Elliptic
Curve Cryptography (ECC), improving encoding schemes is a top priority. Despite
the obvious advantages of the cloud model, it will have a hard time gaining
widespread client acceptance unless issues of privacy and security are resolved.
Concerns about the privacy, authenticity, and integrity of cloud-stored data are
the focus of this work, along with recommendations for implementing those
safeguards. We design and develop secure and efficient protocols based on
Elliptic Curve Cryptography for multilevel security in a cloud environment, with
the goals of protecting the privacy of users' data, ensuring that only
authorized users can access their information, and guaranteeing that all data is
genuine and unaltered. In terms of encryption, computational load, and security
during cloud data storage, retrieval, and access, the proposed security
frameworks have been implemented and compared to existing models. This work
introduces four novel algorithms and shows minimized encoding and decoding
durations, thereby reducing uploading and downloading times. |
Keywords: |
Elliptic Curve Cryptography, Data Security, Cloud deployment, Private cloud,
Public cloud |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
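The ECC primitive underlying such frameworks is scalar multiplication on an
elliptic curve. A toy sketch over the textbook curve y² = x³ + 2x + 2 (mod 17),
ending with an ECDH-style shared secret (real deployments use standardized
curves such as P-256, and the paper's four algorithms are not reproduced here):

```python
# Toy elliptic-curve arithmetic: point addition and double-and-add
# scalar multiplication over a tiny prime field. Illustration only;
# never use a 17-element field for real security.
P, A = 17, 2            # prime modulus and curve coefficient a
G = (5, 1)              # generator point (order 19 on this curve)

def point_add(p1, p2):
    if p1 is None: return p2          # None represents the point at infinity
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:                      # tangent slope for doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:                             # chord slope for addition
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def scalar_mul(k, point):
    result, addend = None, point
    while k:                          # double-and-add
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

# ECDH-style exchange: both sides derive the same shared secret.
a_priv, b_priv = 3, 7
a_pub, b_pub = scalar_mul(a_priv, G), scalar_mul(b_priv, G)
assert scalar_mul(a_priv, b_pub) == scalar_mul(b_priv, a_pub)
print(scalar_mul(2, G))  # (6, 3)
```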
|
Title: |
BIG DATA AND ARTIFICIAL INTELLIGENCE IN HIGHER EDUCATION: IMPACTS ON RESEARCH
INTO NUTRITION OF CANCER PATIENTS |
Author: |
DARIFA EL HAIRACH, SOAD KHAL-LAYOUN, ABDELLATIF BOUR |
Abstract: |
The integration of Big Data and Artificial Intelligence (AI) technologies has
brought significant transformations to higher education, impacting research,
university management, and student experience. This paper explores the
multifaceted influence of Big Data and AI within the higher education landscape.
Beginning with an introduction, the paper delineates the growing importance of
these technologies. It then delves into the role of Big Data, elucidating its
definition, significance, and applications in both research and university
management. Likewise, the section on Artificial Intelligence discusses its
definition, importance, and its role in enhancing student learning and
administrative processes. Moreover, the paper explores the synergy between Big
Data and AI, discussing the benefits, challenges, and ethical considerations
inherent in their integration. Highlighting successful integration examples, it
underscores the transformative potential of combining these technologies in
higher education contexts. Moving forward, the paper examines the impacts of Big
Data and AI on research, emphasizing their role in accelerating data analysis,
enhancing predictive modeling, and facilitating collaboration among researchers.
Furthermore, the paper presents the results of the integration of these
technologies on university management and student experience. It discusses how
Big Data and AI have streamlined administrative processes and personalized
student learning experiences. Through a comparative analysis with a case study,
the paper offers insights into the real-world application and effectiveness of
these technologies in higher education settings. The paper underscores the
profound impacts of Big Data and AI on higher education, advocating for their
continued integration to foster innovation, efficiency, and personalized
learning experiences for students and researchers alike. In addition to the
broader impacts on higher education, this paper includes a focused case study on
the application of Big Data and AI in nutrition research for cancer patients.
This case study demonstrates the transformative potential of these technologies
in a specialized field of study. By leveraging Big Data, researchers can analyze
vast datasets from clinical trials, patient records, and nutritional studies to
identify patterns and correlations that were previously undetectable. AI
algorithms can then be used to develop predictive models that help in
personalizing nutritional plans for cancer patients, optimizing their treatment
outcomes. |
Keywords: |
Big Data, Artificial Intelligence, Higher Education, Research, Patient
Experience |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|
Title: |
ENHANCING VISUAL CONTENT QUALITY IN EDUCATION AND RELIGION: A FUSION APPROACH OF
MIRNET-V2 AND AUTOENCODERS FOR IMAGE PREPROCESSING AND NOISE REDUCTION |
Author: |
SMITA RATH, SUSHREE BIBHUPRADA B. PRIYADARSHINI, PRABHAT KUMAR SAHU, MONALISA
PANDA, DEEPAK KUMAR PATEL, NIBEDITA JAGADEV, SIPRA SAHOO, NARAYAN PATRA |
Abstract: |
Combining MIRNet-v2, an advanced image preprocessing technique, with
autoencoders for noise reduction can have implications beyond traditional image
processing applications. This study could be related to education and religion
journals. High-quality images are essential for conveying complex information
effectively. Research on advanced image processing techniques like MIRNet-v2
combined with autoencoders could focus on improving the quality of educational
images used in textbooks, online courses, or presentations. Therefore, a study
combining MIRNet-v2 and autoencoders for image preprocessing and noise reduction
could find relevance in journals focusing on education technology, digital
humanities, religious studies, or interdisciplinary research at the intersection
of technology and society. MIRNet-v2 can help print illustrations and book
photos through color correction, which adjusts colors to improve their accuracy
and vibrancy; contrast enhancement, which improves the visual attractiveness of
photographs; and low-light enhancement, which brightens very dark photos.
Machine learning algorithms have shown promising results in noise reduction
tasks by learning the statistical characteristics of the noise and the
underlying image structures. The central component of our method is a
multi-scale residual block that includes several crucial components: (a)
parallel multi-resolution convolution flows for gathering multi-scale
characteristics; (b) exchange of data across the multi-resolution streams; (c)
spatial and channel attention processes for preserving contextual data; and (d)
attention-based multi-scale feature accumulation. The proposed approach involves
collecting relevant image datasets, preprocessing them, selecting suitable
machine learning algorithms, optimizing the model parameters, validating and
evaluating the models, and identifying potential areas for future research and
improvements. Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index
(SSIM), and Mean Squared Error (MSE) are three widely used performance metrics
in image processing. In contrast to PSNR, which emphasizes pixel-level
variations, SSIM considers the structure and brightness of the image data,
comparing the luminance, contrast, and structure of the processed and original
images. The results clearly show that MIRNet-v2 outperforms
previous approaches by a significant margin. MIRNet-v2 achieves a total
performance gain of 3.44 dB on the LoL, SIDD, and DND datasets and some
real-time images as training and testing datasets. In summary, this method
trains an extended set of features that combines contextual data from many
scales while maintaining high-resolution spatial aspects. Extensive tests on
live picture benchmark datasets show that MIRNet-v2 produces cutting-edge
performance for various image processing applications, comprising
super-resolution, image denoising, and image enhancement. |
Keywords: |
Auto encoders; Convolutional Neural Network; MIRNet-v2; Noise Reduction; Deep
Learning |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
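Two of the metrics cited above, MSE and PSNR, can be computed directly; SSIM
additionally needs windowed luminance/contrast/structure statistics and is
omitted here. A small sketch for grayscale images stored as nested lists (the
sample pixel values are arbitrary):

```python
# MSE and PSNR for 0-255 grayscale images given as nested lists.
import math

def mse(a, b):
    """Mean squared error over all pixels of two same-shape images."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak * peak / m)

clean = [[100, 100], [100, 100]]   # toy "reference" image
noisy = [[110, 90], [105, 95]]     # toy "denoised" image
print(round(mse(clean, noisy), 1))   # 62.5
print(round(psnr(clean, noisy), 2))  # ~30.17 dB
```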
|
Title: |
DIGITAL SECURITY IN MOROCCAN UNIVERSITIES IN THE ERA OF ARTIFICIAL INTELLIGENCE |
Author: |
M. BOUJARRA, A. AL KARKOURI, Y. FAKHRI, S. BOUREKKADI |
Abstract: |
The rapid advancement of information technology has significantly influenced the
digital security environment in Moroccan universities, compelling these
institutions to reconsider their approaches in response to more complex risks.
The growing digitalization of academic data, including student records and
research projects, underscores the crucial need to safeguard these digital
assets, which have become indispensable foundations of contemporary academic
endeavors. Artificial intelligence (AI) integration is becoming a critical
solution to address the increasing security problems. AI provides a range of
sophisticated features, such as proactive identification of threats, immediate
analysis of behavior, and the capacity to preemptively stop possible assaults
before they occur. Nevertheless, the implementation of heightened security
measures presents both practical and ethical obstacles. From a logistical
standpoint, the successful implementation of AI necessitates the presence of
suitable technical infrastructure and a workforce that is adequately educated to
fully use this technology. The expenses linked to the implementation of
AI-driven solutions, while potentially lucrative in the future, are a
significant concern for Moroccan institutions with sometimes constrained
financial resources. Simultaneously, the need to evaluate and control ethical
hazards linked to the utilization of AI, such as algorithmic prejudice and
privacy ramifications, introduces an intricate aspect to this technological
shift. To provide background for these difficulties, the paper
explores the present state of digital security at Moroccan institutions,
highlighting particular risks and weaknesses that already exist. Subsequently,
it examines practical implementations of AI in the academic sphere, showcasing
how this technology might be used preemptively to enhance safeguards against
cyber hazards. The paper emphasizes the need for a strategy tailored to the local
environment by addressing the issues unique to Moroccan institutions, such as
financial limitations and training requirements. The suggested solutions are
derived from a comprehensive understanding of the distinct requirements of each
institution, emphasizing practical approaches to surmount these challenges.
Ultimately, a practical case study demonstrates the successful integration of AI
in a Moroccan university, providing tangible instances and valuable insights
gained. These pragmatic observations seek to provide guidance to other Moroccan
educational institutions in their pursuit of improved digital security,
emphasizing the significance of adjustment, prudent allocation of resources, and
the incorporation of ethical issues at every stage of the process. |
Keywords: |
Digital Security, Artificial Intelligence, Moroccan Universities, Cyber Threats,
Technological Solutions |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|
Title: |
PREDICTION MODEL APPLYING MACHINE LEARNING TO FORECAST THE BANKRUPTCY OF
COMPANIES. A SYSTEMATIC REVIEW OF THE LITERATURE |
Author: |
DINO QUINTEROS-NAVARRO , CIRO RODRIGUEZ |
Abstract: |
Prediction models aimed at companies allow trends to be identified and generate
a much broader picture, making decision-making more effective and efficient.
In that sense, the research study uses a literature review to identify the state
of the art of how predictive models can interact and generate more accurate and
reliable results to identify, forecast and eventually reduce the bankruptcy of
companies. In addition, the systematic analysis of the contributions was carried
out where the main models of supervised learning and other techniques were
considered. The authors [25] specify that a systematic literature review is a
primary tool for developing an evidence base by identifying, evaluating, and
interpreting all available research relevant to a particular research question,
thematic area, or phenomenon of interest. Likewise, questions were posed through
three (3) stages: identification of parameters, calculation of the ACCP and
determination of factors. For the calculation of the weighted precision metric
(ACCP), the most significant values of each model were used and the results
obtained were analyzed. In this sense, the review must contain the following:
method of analysis, theoretical basis, classification of payment yield
prediction models and analysis of the topics according to the questions asked. |
Keywords: |
Bankruptcy, Predictive, Models, Machine Learning, Making Decisions |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
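The abstract does not give the exact formula for the weighted precision metric
(ACCP); the sketch below assumes the common definition of a support-weighted
average of each model's reported precision. Model names and numbers are
invented for illustration:

```python
# Assumed ACCP: average precision across reviewed models, weighted by
# how many samples each model's reported figure is based on.
def accp(results):
    """results: list of (model_name, precision, n_samples) tuples."""
    total = sum(n for _, _, n in results)
    return sum(p * n for _, p, n in results) / total

models = [("logistic_regression", 0.86, 120),   # toy reviewed results
          ("random_forest", 0.91, 200),
          ("svm", 0.88, 80)]
print(round(accp(models), 4))  # 0.889
```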
|
Title: |
EARLY DIAGNOSIS OF GLAUCOMA BY OPTIC DISC AND OPTIC CUP SEGMENTATION OVER
RETINAL FUNDUS IMAGES USING DEEP LEARNING ALGORITHM |
Author: |
R. GEETHALAKSHMI, R. VANI |
Abstract: |
Glaucoma is characterized by progressive retinal ganglion cell loss that alters
the optic nerve head in the neuroretinal rim tissue and restricts the visual
field. It is a leading cause of permanent vision loss globally. The main symptom
of glaucoma is a change in the optic nerve head (ONH), which assists in its
early recognition. This misalignment of the ONH is reflected in the size and
shape of the Optic Disc (OD) and Optic Cup (OC), creating changes in the
Cup-to-Disc Ratio (CDR). By calculating the CDR, glaucoma can be diagnosed at an
early stage; to calculate it correctly, proper segmentation of the OD and OC is
essential. In this proposed work, an adaptive median filter with TOPHAT is used
to improve the quality of the retinal image, segmentation of the OD and OC is
performed using a Multilevel Statistical Region of Interest (MSROI) method, and
classification of glaucoma is done using a novel Artefact Convolutional Neural
Network (ACNN). An accuracy of 99.38% is obtained with the ACNN architecture.
Further, a Dice coefficient of 0.79, Intersection over Union (IoU) of 0.67, mean
of 73, standard deviation of 55, and mean square error of 410 are achieved with
MSROI-based segmentation. The proposed work can be used as a desktop application
for early diagnosis of glaucoma. This Computer-Aided Diagnosis (CAD) method is
highly helpful in improving accuracy, reducing time consumption, and easing
diagnosis, since glaucoma requires periodic examination. The proposed model is
trained and validated on the Drion-DB dataset, and the system provides robust
and accurate results compared with existing methods. |
Keywords: |
Glaucoma, Segmentation, Optic Disc, Optic Cup, Artefact CNN, Deep Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
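The evaluation quantities reported above (Dice, IoU) and the diagnostic ratio
(CDR) are straightforward to compute from segmentation outputs. A sketch on toy
binary masks (the masks, areas, and the 0.6 CDR flag threshold are illustrative
assumptions, not the paper's data):

```python
# Dice coefficient and IoU between a predicted and a reference binary
# mask (flat 0/1 pixel lists for brevity), plus the cup-to-disc ratio.
def dice(pred, ref):
    inter = sum(p & r for p, r in zip(pred, ref))
    return 2 * inter / (sum(pred) + sum(ref))

def iou(pred, ref):
    inter = sum(p & r for p, r in zip(pred, ref))
    union = sum(p | r for p, r in zip(pred, ref))
    return inter / union

def cdr(cup_area, disc_area):
    # A CDR above roughly 0.6 is a commonly cited glaucoma warning sign.
    return cup_area / disc_area

pred = [1, 1, 1, 0, 0, 1]   # toy segmentation output
ref  = [1, 1, 0, 0, 1, 1]   # toy ground-truth mask
print(round(dice(pred, ref), 3), round(iou(pred, ref), 3))  # 0.75 0.6
print(cdr(300, 450) > 0.6)  # True
```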
|
Title: |
DETECTING ONLINE GAME ADDICTION USING FUZZY LOGIC WITH THE TSUKAMOTO METHOD |
Author: |
MARYANA, NURDIN, BUSTAMI, FAJRIANA, NABILA ZULFANI, ARNAWAN HASIBUAN |
Abstract: |
Online games are now familiar in people's lives across all age groups, from
children to adolescents and adults. Because so many people are very fond of
playing online games, the chance of addiction is high; sufferers of online game
addiction lose track of time and often neglect their obligations. In this case,
teenagers still in school become lazy and often skip school to become "true
gamers." To prevent such harms, a system model is needed to detect online game
addiction. With this system model, it is hoped that information and solutions
can be provided to people who have experienced addiction, so that prevention and
recovery can be carried out before fatal consequences occur. The model uses five
input variables: thinking about games all day and feeling bad when not playing;
increased gaming time and loss of sleep; playing games to forget real life and
ignoring other activities; continuing to play even when others try to prohibit
it; and getting annoyed easily and fighting with family or friends. The output
variable takes three values: not addicted, mildly addicted, and heavily
addicted. This study used 86 questionnaires filled in by 86 online game players.
The results show levels of not addicted = 65%, mild addiction = 15%, and heavy
addiction = 20%. |
Keywords: |
Online Games, Input and Output Variables, Fuzzy Logic, Tsukamoto Method, Game
Addiction Symptoms. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
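Tsukamoto inference fires each rule to a strength, reads a crisp value from a
monotonic consequent, and defuzzifies by a weighted average. A one-input sketch
(the real model uses five inputs; the membership ramps and rule outputs here are
invented for illustration):

```python
# Minimal Tsukamoto-style fuzzy inference over one input variable,
# "hours played per day", mapping to a 0-100 addiction score.
def mu_low(x):    # decreasing ramp: fully "low" at 2h, gone by 6h
    return max(0.0, min(1.0, (6 - x) / 4))

def mu_high(x):   # increasing ramp: "high" starts at 2h, full by 6h
    return max(0.0, min(1.0, (x - 2) / 4))

def tsukamoto(hours):
    # Rule 1: IF hours LOW  THEN addiction z1 = 20 (not-addicted region)
    # Rule 2: IF hours HIGH THEN addiction z2 = 80 (heavy-addiction region)
    a1, z1 = mu_low(hours), 20.0
    a2, z2 = mu_high(hours), 80.0
    # Tsukamoto defuzzification: firing-strength-weighted average.
    return (a1 * z1 + a2 * z2) / (a1 + a2)

print(tsukamoto(2.0))  # 20.0 -> not addicted
print(tsukamoto(4.0))  # 50.0 -> borderline
print(tsukamoto(6.0))  # 80.0 -> heavy addiction
```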
|
Title: |
DPARF: A DEEP LEARNING BASED INTELLIGENT FRAMEWORK FOR AUTOMATIC RECOGNITION OF
PERSONAL ACTIVITY USING SMARTPHONE DATASET |
Author: |
BHAGYA REKHA SANGISETTI, SURESH PABBOJU |
Abstract: |
In the contemporary era, technological innovations such as Internet of Things
(IoT) and Artificial Intelligence (AI) can offer unprecedented solutions to real
world problems. IoT technology has paved the way for sensor-enabled data
collection, while AI enables learning and prediction to solve a variety of
problems. Personal Activity Recognition (PAR) is an important problem in many
applications such as surveillance, security, computer gaming, sports, remote
monitoring of humans or patients, healthcare, and the military, to mention a
few. Deep Learning (DL) techniques
are widely used for solving PAR problem by processing data captured by sensors.
IoT devices and smartphones are capable of capturing accelerometer data that
can be used to establish movements and thus recognize human activities. Existing
methods based on deep learning witnessed success in PAR activities. Long
Short-Term Memory (LSTM) is one such technique which could detect human
activities with high accuracy. However, it is desirable to improve it further to
raise prediction performance. Towards this end, in this paper, we
proposed a framework known as Deep Personal Action Recognition Framework (DPARF)
for automatic recognition of human activities. The framework is realized with
our enhanced LSTM model known as CNN-5LSTM which is designed to improve accuracy
in activity recognition. We proposed an algorithm known as Enhanced LSTM with CNN
for Automatic Personal Activity Recognition (ELSTM-CNN-APAR). This algorithm
takes care of feature selection and processing of data consisting of temporal
sequences. A standard PAR smartphone dataset from UCI repository is used in the
empirical study. The experimental results revealed that our proposed model
outperforms many existing models with 93.04% accuracy. It can be used to have an
automated PAR Decision Support System (PAR-DSS) which may be integrated with a
real time PAR system in question. |
Keywords: |
Personal Activity Recognition, Artificial Intelligence, Deep Learning, Enhanced
LSTM, Smartphone Dataset |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
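Before a CNN-LSTM stage, frameworks like the one described must cut the raw
accelerometer stream into fixed-length temporal windows. A sketch of that
preprocessing step (the window size, stride, and toy signal are assumptions,
not the paper's configuration):

```python
# Sliding-window segmentation of a 1-D accelerometer stream, with two
# simple per-window features (mean and mean energy). A real pipeline
# would window all three axes and feed the windows to the network.
def sliding_windows(signal, size=4, stride=2):
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, stride)]

def window_features(window):
    mean = sum(window) / len(window)
    energy = sum(v * v for v in window) / len(window)
    return (round(mean, 3), round(energy, 3))

accel_x = [0.1, 0.2, 0.1, 0.9, 1.1, 0.8, 0.2, 0.1]  # toy x-axis stream
windows = sliding_windows(accel_x)
print(len(windows))                 # 3 overlapping windows
print(window_features(windows[0]))
```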
|
Title: |
A DEEP LEARNING APPROACH TO SOFTWARE VULNERABILITY DETECTION BY LEVERAGING CNN’s
AND COMPARING WITH RNN’s FOR IMPROVED ACCURACY AND EFFICIENCY |
Author: |
RAGHUPATHY DURGA PRASAD, Dr. MUKTEVI SRIVENKATESH |
Abstract: |
This study introduces an innovative paradigm for software security that
leverages convolutional neural networks (CNNs) to detect emerging cyber threats.
Our approach enhances software security detection mechanisms, offering superior
performance compared to traditional machine learning methods and recurrent
neural networks (RNNs), which underperformed in this context. The CNN
architecture includes multiple convolutional layers for feature extraction,
pooling layers for dimensionality reduction, and fully connected layers for
classification, with non-linear activations and a softmax output layer for
classification. Dropout layers mitigate overfitting and enhance generalization.
Using both synthetic and real-world data, our CNN model exhibited robust
performance, achieving an accuracy of 0.91, precision of 0.90, recall of 0.89,
and an F1-score of 0.895. These metrics indicate CNNs' proficiency in
identifying intricate patterns and anomalies in software code, reducing false
positives significantly. Although RNNs with LSTM or GRU layers capture temporal
correlations in code sequences, they were less effective than CNNs in this
application. The study's methodologies and code are available on Google Drive
for cybersecurity specialists to replicate and build upon. By automating
vulnerability detection with CNNs, cybersecurity professionals can focus more on
pre-emptive measures. This research underscores the potential of CNNs to enhance
software vulnerability detection, advocating for their integration with RNNs to
create safer and more resilient software systems in response to escalating cyber
threats. |
Keywords: |
Convolutional Neural Networks, Recurrent Neural Networks, Software
Vulnerabilities, Automated Vulnerability Detection. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
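The metrics reported above are internally consistent: F1 is the harmonic mean
of precision and recall, and 0.90 / 0.89 indeed gives about 0.895:

```python
# Cross-check of the reported CNN metrics: F1 as the harmonic mean of
# precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

cnn_precision, cnn_recall = 0.90, 0.89   # values from the abstract
f1 = f1_score(cnn_precision, cnn_recall)
print(round(f1, 3))  # 0.895, matching the reported F1-score
```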
|
Title: |
HYBRID HENRY GAS-HARRIS HAWKS MODIFIED-OPPOSITION ALGORITHM FOR TASK SCHEDULING
IN CLOUD COMPUTING |
Author: |
NORA OMRAN ALKAAM, ABU BAKAR MD. SULTAN, MASNIDA HUSSIN, AND KHAIRONI YATIM
SHARIF |
Abstract: |
Objectives: Cloud computing environments allow users to remotely access
computational resources and data computing services online. Task scheduling
requires the development of reliable and efficient methods for mapping tasks to
resources, making it an essential component of cloud computing. Effective task
scheduling is critical for increasing operational efficiency since it entails
carefully assigning tasks to resources to ensure optimal performance. This
accurate coordination not only increases productivity, but it also optimizes
resource allocation. Cloud computing solutions can improve overall system
performance, reduce processing times, and increase efficiency by enhancing job
scheduling. Methods: This study introduces a Henry Gas-Harris Hawks
Modified-Opposition (HGHHM) algorithm that enhances the Henry Gas Solubility
algorithm with two components: Harris Hawks Optimization (HHO) and a modified
comprehensive opposition-based learning (MOBL) scheme. The proposed HGHHM
algorithm uses the HHO method as a local search strategy to enhance the quality
of accepted solutions, while MOBL improves the less effective solutions by
calculating their opposite equivalents and choosing the more advantageous
option. This approach improves suboptimal solutions, leading to an overall gain
in the efficiency of the selected techniques. Results: CloudSim was used to test
the HGHHM algorithm on the HPC2N dataset, with workloads of varying sizes from
500 to 4000 tasks. The proposed algorithm's simulated makespan and resource
utilization outperformed previous algorithms. Conclusions: By using the HGHHM
algorithm, this research improves cloud job scheduling efficiency and
reliability by improving makespan and resource consumption. These findings
confirm hybrid meta-heuristic techniques' efficacy and emphasize the need to
balance exploitation and exploration to avoid local optima entrapment.
Nevertheless, the study is limited in its scope as it does not take into account
other factors such as energy consumption and cost. |
Keywords: |
Cloud Computing; Henry Gas Solubility Optimization; Harris Hawks Optimization;
Task Scheduling. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
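The objective the HGHHM algorithm optimizes, makespan, is the finish time of
the busiest VM under a task-to-VM assignment. A sketch with a greedy
least-loaded baseline standing in for the metaheuristic search (task lengths
and VM count are toy values, not the HPC2N workload):

```python
# Makespan of a task-to-VM assignment, plus a greedy baseline that
# assigns each task to the currently least-loaded VM. Metaheuristics
# like HGHHM search over such assignments to minimize this objective.
def makespan(assignment, task_lengths, n_vms):
    load = [0.0] * n_vms
    for task, vm in enumerate(assignment):
        load[vm] += task_lengths[task]
    return max(load)                     # finish time of the busiest VM

def greedy_schedule(task_lengths, n_vms):
    load = [0.0] * n_vms
    assignment = []
    for length in task_lengths:
        vm = load.index(min(load))       # least-loaded VM first
        assignment.append(vm)
        load[vm] += length
    return assignment

tasks = [4, 2, 7, 3, 5]                  # toy task lengths
plan = greedy_schedule(tasks, n_vms=2)
print(plan, makespan(plan, tasks, 2))    # [0, 1, 1, 0, 0] 12.0
```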
|
Title: |
GENERATIVE AI: TWO LAYER OPTIMIZATION TECHNIQUE FOR POWER SOURCE RELIABILITY AND
VOLTAGE STABILITY |
Author: |
DR. SANDEEP GUPTA, RAMAKRISHNA KOLIKIPOGU, VEERA SWAMY PITTALA, S.SIVAKUMAR,
RAMESH BABU PITTALA, DR. MOHAMMED SALEH AL ANSARI |
Abstract: |
Wind and solar power are essential in the fight against climate change and for
reaching carbon neutrality targets. Due to their inherent unpredictability,
renewable energy sources pose a threat to the power system's transient voltage
stability, dependability, and flexibility. These consequences can add
complexity to power system design. This paper introduces a two-layer
optimization approach for control strategies and network design to address
the effects of renewable energy on power system planning,
particularly regarding reliability and transient voltage stability.
The construction plans for generators and energy storage units are determined by
upper-layer network planning, which assesses the system reliability index.
Transient stability requirements and construction and maintenance costs are
addressed by the lower layer. A two-layer iterative technique based on
adaptive particle swarm optimization (PSO) is proposed to solve the
nonlinear problem effectively. Implementing the suggested approach on an
IEEE 33-bus test system demonstrates its practicality. In addition to
improving the network's operational efficiency and reliability, the findings
show that the proposed optimization method also resolves the coordination of
scheduling and unit planning. The outcomes may inform future operation and
planning of the power system. |
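The iterative search described above relies on adaptive PSO. The following is an illustrative sketch (not the paper's implementation) of PSO with a linearly decreasing inertia weight, one common form of adaptation, minimizing a toy stand-in for the lower-layer cost:

```python
import random

def adaptive_pso(f, dim, bounds, n_particles=20, iters=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Minimize f over a box using PSO with linearly decreasing inertia."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters     # inertia decays over time
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the lower-layer cost: a 3-dimensional sphere function.
best, best_val = adaptive_pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

In the paper's setting, `f` would be the lower-layer cost (construction, maintenance, and transient-stability penalties) evaluated for a candidate plan.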
Keywords: |
Climate Change, Renewable Energy, Two-layer Optimization, Voltage Stability, PSO |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|
Title: |
THE IMPACT OF QUANTUM COMPUTING (ALGORITHMS) ON CONTEMPORARY SECURITY SYSTEMS
AND FUTURE |
Author: |
AKKU KUBIGENOVA, ALIMBUBI AKTAYEVA, GALIYA YESMAGAMBETOVA, ALTYNBEK SHARIPBAY,
VLADIMIR SUKHOMLIN, NAZERKE AUSSILOVA |
Abstract: |
In the course of technological evolution, a new scientific point of view arose,
reviving interest in the theoretical foundations of quantum mechanics and many
new issues combining physics, computer science, and information theory. There is
a growing understanding that quantum computation may be a more natural model of
computation than the classical model and that fundamental information security
issues may be more readily revealed through the concepts of quantum computation.
This research aims to contribute to advancements in these areas: a quantum
computer based on the theory of ternary logic is an efficient
general-purpose computing device capable of simulating any computing process,
increasing the computation time by a polynomial factor only. Quantum computation
has the potential to significantly enhance our understanding of security issues,
making our work more important than ever. The authors suggest how to make
optimal use of the theoretical principles underlying a quantum computer based
on three-digit logic, and how to implement the security of structural elements
using ternary technology. |
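As an elementary illustration of three-valued logic (the standard Kleene/Łukasiewicz connectives over {0, 1, 2}, not the paper's specific quantum construction):

```python
# Ternary (three-valued) logic over {0, 1, 2}: 0 = false, 1 = unknown, 2 = true.
def t_not(a):
    return 2 - a            # negation reverses the scale

def t_and(a, b):
    return min(a, b)        # conjunction = minimum

def t_or(a, b):
    return max(a, b)        # disjunction = maximum

# Truth table for t_and, rows/columns over (0, 1, 2):
for a in (0, 1, 2):
    print([t_and(a, b) for b in (0, 1, 2)])
# [0, 0, 0]
# [0, 1, 1]
# [0, 1, 2]
```

A ternary digit (trit) carries log2(3) ≈ 1.58 bits of information, which is one motivation for three-valued hardware and its quantum analogue, the qutrit.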
Keywords: |
Three-digit logic, Quantum computing, Quantum theory, Qubits, Cybersecurity. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|
Title: |
A NOVEL ASSOCIATION-BASED ENSEMBLE CLUSTER FOR HEART PREDICTION BASED ON ECG
GRAPHS |
Author: |
G BALA KRISHNA, SIRISHA K L S, V PRADEEP KUMAR, MUMMADI RAMACHANDRA, D SANDHYA
RANI, K GNANESHWAR |
Abstract: |
Traditional image processing techniques usually process data frame-by-frame,
which may lead to a loss of temporal information. Temporal aspects are crucial
in understanding the ECG waveform's sequential events and dynamic changes.
Social Network Analysis (SNA), or graph theory analysis, studies relationships
and interactions between individuals or entities within a social network. This
analysis can provide valuable insights into various social structures, dynamics,
and behaviors. Analyzing electrocardiogram (ECG) data using social graphs is an
innovative approach that can provide insights into the cardiac system's dynamics
and relationships between different physiological parameters. Although ECG is
traditionally used to study the heart's electrical activity, incorporating
social graph analysis can offer new perspectives on the interactions and
dependencies between various ECG components and their influence on overall
cardiac health. Using clustering techniques in social graph analysis can offer
several advantages, as it helps identify patterns, groups, and structures within
the network. Social networks often consist of diverse data types, such as
textual content, user attributes, and interaction patterns. Ensemble clustering
can effectively integrate information from different data sources and
algorithms, enabling a more comprehensive analysis. The proposed methodology
integrates DBSCAN with OPTICS and produces a consensus matrix with the help of
an association matrix. DBSCAN and OPTICS are density-based clustering
algorithms, but they have different characteristics. DBSCAN is more efficient
and straightforward to implement. At the same time, OPTICS provides a
hierarchical clustering structure and is more suitable for handling
varying-density clusters. Integrating these two algorithms can enhance the
overall clustering performance, especially when dealing with complex,
varying-density datasets. |
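The consensus step described above can be illustrated with a co-association matrix built from the label vectors of two base clusterings. This is a generic sketch, not the authors' exact association-matrix construction; label -1 marks DBSCAN/OPTICS noise points:

```python
def co_association(labelings, n):
    """Consensus (co-association) matrix: M[i][j] = fraction of clusterings
    that place points i and j in the same cluster (label -1 = noise)."""
    m = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] != -1 and labels[i] == labels[j]:
                    m[i][j] += 1.0
    k = len(labelings)
    return [[v / k for v in row] for row in m]

# Labels from two base clusterings (e.g. DBSCAN and OPTICS) of 4 points.
dbscan_labels = [0, 0, 1, -1]
optics_labels = [0, 0, 0, 1]
consensus = co_association([dbscan_labels, optics_labels], n=4)
print(consensus[0][1])  # 1.0 -> points 0 and 1 co-cluster in both clusterings
```

The final ensemble partition is then typically obtained by clustering the consensus matrix itself, e.g. by thresholding it or feeding 1 - M to a hierarchical method.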
Keywords: |
Density-based algorithms, Social Analysis, Ensemble Clustering, Association
Matrix, Community Detection |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|
Title: |
A MODEL-DRIVEN APPROACH TO TRANSFORM UML MODELS INTO MONGODB SCHEMAS USING QVTO:
FROM PIM TO PSM |
Author: |
HAMZA NATEK, AZIZ SRAI, ABDELMOUNAIM BADAOUI, FATIMA GUEROAUTE |
Abstract: |
Translating UML models into efficient NoSQL databases is a complex task within
the domain of software engineering. This study addresses the problem of
transforming UML models into MongoDB collections using the Model-Driven
Architecture (MDA) approach. The research method involves defining metamodels
for UML and MongoDB, followed by the development of QVTo transformation scripts
to map UML class diagrams to MongoDB document structures. The transformation
process was tested on a sample UML model, ensuring the correctness of the
generated MongoDB schema. The findings demonstrate that the QVTo transformation
scripts can accurately convert UML models into MongoDB collections, preserving
the integrity and semantics of the original UML diagrams. The conclusions
highlight the effectiveness of the MDA approach in bridging the gap between
UML-based designs and NoSQL database implementations, providing a robust
solution for data management in complex systems. |
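As a simplified illustration of the PIM-to-PSM idea: real QVTo scripts operate on Ecore metamodels, so this Python sketch, with a hypothetical `uml_class_to_collection` helper and type map, is only an analogy for mapping a UML class to a MongoDB `$jsonSchema` validator:

```python
# Hypothetical UML-to-BSON type mapping (illustrative, not the authors' rules).
UML_TO_BSON = {"String": "string", "Integer": "int", "Boolean": "bool",
               "Real": "double", "Date": "date"}

def uml_class_to_collection(name, attributes):
    """Map a UML class (PIM) to a MongoDB collection + $jsonSchema validator (PSM)."""
    props = {attr: {"bsonType": UML_TO_BSON[t]} for attr, t in attributes.items()}
    return {
        "collection": name.lower() + "s",        # naming convention: plural, lowercase
        "validator": {"$jsonSchema": {
            "bsonType": "object",
            "required": list(attributes),        # every UML attribute is required
            "properties": props,
        }},
    }

schema = uml_class_to_collection("Customer", {"name": "String", "age": "Integer"})
print(schema["collection"])  # customers
```

In MongoDB, such a validator would be attached via `db.createCollection(name, {validator: ...})`; the QVTo approach generates the equivalent structure from the UML metamodel automatically.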
Keywords: |
Model-Driven Architecture (MDA), UML to NoSQL Transformation, QVTo
Transformation Scripts, MongoDB Schema Generation, Automated Data Modeling |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|
Title: |
A NOVEL ADAPTIVE NETWORK INTRUSION PREVENTION SYSTEM FOR INTERNET OF THINGS |
Author: |
PARTHIBAN ARAVAMUDHAN, DR.T. KANIMOZHI |
Abstract: |
Data traffic has significantly increased as a result of recent advancements in
computer networks and other related services. At the same time, the negative
effects of cyber-attacks have become far more prevalent. The increasing variety
of network threats poses more difficulties and challenges than ever before. To
overcome this issue, anomaly-based prevention and signature-based prevention
are two key approaches for preventing cyber-attacks. Although both approaches
have the potential for success, each still has a "cloud on the horizon":
anomaly-based prevention is prone to high false-positive rates, while
signature-based prevention is more vulnerable to zero-day attacks. To address
this concern, this paper explores the development of an Intrusion Prevention
System (IPS) model using a Deep Learning (DL) algorithm. To evaluate the
performance, the NIDS dataset V.10 2017 is utilized. The Z-score normalization
technique is used to clean the dataset, in which a scaling approach is applied
to remove outliers. Adaptive Principal Component
Analysis (A-PCA) and Linear Discriminant Analysis (LDA) are deployed for
dimensionality reduction and class separation optimization, respectively,
resulting in a standardized dataset that accelerates prevention efforts by
reducing processing latency and enhancing filtering effectiveness. The Whale
Optimization Algorithm (WOA) is implemented to find the best optimal values
using the “Rule of Convergence” method. The converged data are identified and
trained for feature extraction to build an efficient optimized Convolutional
Neural Network (CNN) model. A "Re-Routing" concept is introduced, which
identifies threshold values during hyper-parameter tuning. The primary
motivation for this idea is to reduce training cost and time consumption and
to enhance the prevention mechanism. The proposed NIPS model is tested and
compared with other existing ML and DL methods. The simulation results affirm
that the Optimized Rerouted Convolutional Neural Network (OR-CNN) model
outperforms the other state-of-the-art techniques in terms of time consumption, performance
and accuracy. |
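The preprocessing pipeline above combines z-score normalization with PCA-based dimensionality reduction. A minimal sketch of the standard (non-adaptive) versions of these two steps, not the paper's A-PCA, might look like:

```python
import numpy as np

def zscore_then_pca(X, n_components):
    """Z-score-normalize features, then project onto the top principal components."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    sigma[sigma == 0] = 1.0                   # avoid division by zero on constant features
    Z = (X - mu) / sigma                      # z-score normalization (outlier scaling)
    cov = np.cov(Z, rowvar=False)             # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return Z @ top                            # reduced-dimension data

X = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 6.0, 9.0], [4.0, 8.0, 12.0]])
reduced = zscore_then_pca(X, n_components=1)
print(reduced.shape)  # (4, 1)
```

Reducing feature dimensionality in this way is what the abstract credits with lower processing latency before the CNN stage.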
Keywords: |
Network Intrusion Prevention System (NIPS), Convolutional Neural Network (CNN),
A-PCA, LDA,WOA, Re-Routing. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|
Title: |
A HYBRID ENSEMBLE APPROACH COMBINING BAGGING AND SVM FOR ALZHEIMER'S DISEASE
STAGE CLASSIFICATION |
Author: |
S.CHITHRA, DR.R.VIJAYABHANU |
Abstract: |
Neurodegenerative illnesses, such as Alzheimer's disease, cause brain cell
damage, resulting in structural loss and neuron death, with Alzheimer's being a
common form of irreversible dementia in its advanced stages. Researchers are
looking into biomarkers, neuroimaging, and machine learning to improve early
diagnosis and care of Alzheimer's patients. Effective treatment of Alzheimer's
disease (AD) depends on a precise medical diagnosis, and typical protocols
involve constructing a single classifier by extracting features from
longitudinal MRI data. When tested on the ADNI dataset for older persons, the
ensemble bagging SVM model performs better than other approaches, demonstrating
greater performance in important evaluation measures like accuracy, sensitivity,
precision and recall. |
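A hedged sketch of the bagging idea (bootstrap sampling plus majority vote), using a simple nearest-centroid base learner in place of the paper's SVM, with hypothetical class labels "CN" (cognitively normal) and "AD":

```python
import random

def nearest_centroid_fit(X, y):
    """Base learner: store the per-class mean feature vector."""
    cents = {}
    for c in set(y):
        pts = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def nearest_centroid_predict(cents, x):
    """Predict the class whose centroid is closest (squared Euclidean)."""
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(cents[c], x)))

def bagging_predict(X, y, x, n_estimators=15, seed=0):
    """Train n_estimators base learners on bootstrap samples; majority vote."""
    rng = random.Random(seed)
    votes = {}
    n = len(X)
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]       # bootstrap sample with replacement
        cents = nearest_centroid_fit([X[i] for i in idx], [y[i] for i in idx])
        pred = nearest_centroid_predict(cents, x)
        votes[pred] = votes.get(pred, 0) + 1
    return max(votes, key=votes.get)                     # majority vote

# Hypothetical 2-D features for two classes.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = ["CN", "CN", "AD", "AD"]
print(bagging_predict(X, y, [0.95, 1.0]))
```

The ensemble in the paper follows the same pattern but trains an SVM on each bootstrap sample of the longitudinal MRI features.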
Keywords: |
Alzheimer’s disease, SVM, Ensemble, Bagging, MRI. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|
Title: |
SKIN CANCER DIAGNOSIS SYSTEM ON OBJECT DETECTION USING VARIOUS CNN YOLOV5 IN
ANDROID MOBILE |
Author: |
EVI FATIMATUR RUSYDIYAH, DIAN CANDRA RINI NOVITASARI, FARIS MUSHLIHUL AMIN,
ANISA NUR AZIZAH, HUDA FEBRIANTO NURROHMAN, DINA ZATUSIVA HAQ, ELEN RISWANA
SAFILA PUTRI |
Abstract: |
Skin cancer is a dangerous and potentially fatal form of cancer. In
Indonesia, skin cancer is one of the most common health problems, with the
number of sufferers ranging from 5.9% to 7.8% per year, making it one of the
most common cancers besides cervical cancer and breast cancer. This research
aims to create a skin cancer diagnosis system based on object detection using
deep learning artificial intelligence based on Android mobile. The method used
in this research is CNN YOLOv5, which focuses on real-time object detection. In
this study, manual annotation was carried out according to 8 classes of skin
lesion objects, of which 3 were skin cancers. Several experiments were carried
out to obtain the best skin cancer detection system. In the trials, YOLOv5m had
slightly lower average evaluation values than YOLOv5l and YOLOv5x. However, the
computing system performance on YOLOv5m is much lighter and faster than on
YOLOv5l and YOLOv5x. The YOLOv5m model produces sensitivity values >80% in
almost all classification classes. Therefore, YOLOv5m is the optimal model for
an accurate and real-time skin lesion detection system, especially for skin
cancer. The implementation on Android phones aims to make it easier for users to
detect early and participate in controlling the growth of skin lesions. |
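The per-class sensitivity values reported above are recall scores, TP / (TP + FN) computed per class. A minimal sketch with hypothetical lesion labels:

```python
def per_class_sensitivity(y_true, y_pred):
    """Sensitivity (recall) per class: TP / (TP + FN)."""
    out = {}
    for c in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        out[c] = tp / (tp + fn)
    return out

# Hypothetical labels for two of the eight lesion classes.
y_true = ["melanoma", "melanoma", "nevus", "nevus", "nevus"]
y_pred = ["melanoma", "nevus", "nevus", "nevus", "nevus"]
print(per_class_sensitivity(y_true, y_pred))  # melanoma: 0.5, nevus: 1.0
```

A model meeting the abstract's bar would report values above 0.8 for (almost) every class in this dictionary.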
Keywords: |
Skin Cancer, Diagnosis, Object Detection, YOLOv5, Android Mobile. |
Source: |
Journal of Theoretical and Applied Information Technology
15th August 2024 -- Vol. 102. No. 15-- 2024 |
Full
Text |
|