|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow, and we consider articles
from a wide range of Information Technology disciplines, from the most basic
research to the most innovative technologies. Please submit your papers
electronically through our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will redact the necessary
information at our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
January 2021 | Vol. 99 No. 01 |
Title: |
SECURITY REQUIREMENTS TEMPLATE-BASED APPROACH TO IMPROVE THE WRITING OF COMPLETE
SECURITY REQUIREMENTS |
Author: |
NURIDAWATI MUSTAFA, MASSILA KAMALRUDIN, SAFIAH SIDEK |
Abstract: |
Writing quality security requirements contributes to the success of secure
software development. It has been common practice to add security requirements
to a software system after the system is defined; incorporating security
requirements at such a late stage of software development increases the risk of
security vulnerabilities. However, the process of writing security requirements
is tedious and complex. Although significant work can be found in the field of
requirements elicitation, less attention has been given to writing complete
security requirements. It remains a challenging and tedious process for
requirements engineers (REs) to elicit and write complete security requirements
derived from natural language. This is due to the tendency of inexperienced REs
to misunderstand the real needs and the security terms involved, leading to
incomplete security requirements. Motivated by these problems, we have
developed a prototype tool, called SecureMEReq, to improve the writing of
complete security requirements. This tool provides four key features: (1)
extraction of security requirements components from client-stakeholders; (2)
validation of security requirements probability density and security
requirements syntax density; (3) checking of the security requirements and
key-structure components; and (4) validation of completeness prioritization. To
do this, we used our pattern libraries, SecLib and SRCLib, to support the
automation of elicitation, especially the writing of security requirements. To
evaluate our approach and tool, we conducted completeness tests comparing the
completeness of security requirements written with SecureMEReq against manual
writing. Our evaluation results show that our prototype tool is capable of
facilitating the writing of complete security requirements and useful in
assisting REs in eliciting security requirements. |
Keywords: |
Tool Security Requirements, Template-Based Approach, Security Requirements
Completeness, Template-Based Density, Syntax Density |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
FACTORS INFLUENCING CONSUMER ATTITUDE AND CORPORATE IMAGE ON DONATION DECISIONS
THROUGH CROWDFUNDING PLATFORM |
Author: |
ANGGRAINI JUNIA, LA MANI |
Abstract: |
The main purpose of this study is to examine the factors that influence
consumer attitude and corporate image in making donation decisions through
crowdfunding platforms. One crowdfunding platform in Indonesia is Kitabisa.com;
on this platform, fundraising is done on the social media site Instagram, among
others through the campaign #OrangBaik, a fundraising campaign that collects
funds for people in need, where the people who donate are given the name
#OrangBaik. This study uses a quantitative approach to data collection and a
survey method. The research has a total sample of 100 respondents who are
active followers of the Instagram account @kitabisacom. The study uses the
Slovin formula to draw a sample that represents the entire population and a
Likert scale to measure a person's perceptions, attitudes, or opinions on a
series of statements given in the questionnaire. The collected data were
processed using PLS-SEM (Partial Least Squares Structural Equation Modeling).
The results of this research show that consumer attitude and corporate image
greatly influence someone in making donations on the crowdfunding platform. |
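The Slovin sample-size formula mentioned in the abstract can be sketched as follows; the population figure and 10% margin of error used here are illustrative assumptions, not numbers from the paper:

```python
import math

def slovin(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# With a 10% margin of error, any sufficiently large population yields a
# sample of about 100 respondents, as used in the study.
print(slovin(1_000_000, 0.10))  # -> 100
```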
Keywords: |
Consumer Attitude, Corporate Image, Decision Of Donating, Crowdfunding,
Campaign. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
IN THE PROCESS OF SOFTWARE DEVELOPMENT: AVAILABLE RESOURCES AND APPLICABLE
SCENARIOS |
Author: |
ANAS BASSAM AL-BADAREEN, ASHRAF MOUSA SALEH, HAYFA. Y. ABUADDOUS, ODAI ENAIZAN |
Abstract: |
For many decades, cost, time, and quality have been the main concerns of
software engineering. The main objective of any software organization is to
produce a high-quality software product within a shorter time and at minimum
cost. Software reuse is one of the main strategies concerned with using
available resources to enhance the productivity of software development and
the quality of software products. It aims at using existing software products
and components in the development of new software systems. However, various
types of software components from different sources are used in the reuse
strategy. This makes the reuse strategy confusing and its efficiency and
effectiveness debatable. Selecting an unsuitable component or scenario makes
reuse inefficient and ineffective. This study discusses the types of software
components, their sources, their characteristics, and the applicable scenarios
for developing and reusing these components. A dataset from the literature is
used to calculate and compare the cost of reuse processes. The results show
that software reuse is an efficient strategy compared with normal development.
Although considering the reusability of software components adds extra cost to
normal development, it can efficiently save the cost of developing a new
software system. Moreover, using existing software components in the
development of a new reusable component is the most efficient strategy,
requiring even less than the cost of developing a normal component. |
Keywords: |
|
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
ANSWER SET PROGRAM AND STANFORD DEPENDENCY PARSER TOWARD TRANSLATE TEXT TO
KNOWLEDGE REPRESENTATION |
Author: |
FAISAL Y. ALZYOUD, NISREEN ALSHARMAN, ABDALLAH ALTAHAN ALNUAIMI |
Abstract: |
Knowledge representation is a promising means of reflecting intelligent
behavior in artificial systems. The method developed in this research links
the Natural Language Processing area with the Knowledge Representation area by
discussing one possible way to translate text into a knowledge representation.
The first step of the proposed method is to obtain part-of-speech (PoS) tags
and to extract Stanford Dependency relations for a set of sentences using the
Stanford parser. Second, a set of linguistic rules is generated from the
Stanford Dependency relations together with the PoS tags to produce a
knowledge representation for the set of input sentences. The representations
reflect deep syntax and the grammatical relations between words in sentences.
Using these representations, we can express the structure of the text and
distinguish between the events and their environment in the text. Answer set
programming (ASP) is used to implement these linguistic rules. ASP programs
consist of rules similar to Prolog rules, which reflect the computational
mechanisms used to create fast satisfiability solvers for propositional logic.
The proposed approach was tested using different metrics and compared with
other works using the same dataset, and the obtained results are promising. |
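As a hypothetical illustration of the pipeline's second step, the sketch below renders Stanford-style dependency triples as ASP-ready facts; the relation names and the fact format are assumptions for illustration, not the paper's actual rule set:

```python
def deps_to_asp_facts(dependencies):
    """Render (relation, head, dependent) triples as ASP facts."""
    facts = []
    for rel, head, dep in dependencies:
        facts.append(f'{rel}("{head}","{dep}").')
    return facts

# Dependencies for "John eats an apple" (illustrative parser output).
deps = [("nsubj", "eats", "John"), ("dobj", "eats", "apple")]
for fact in deps_to_asp_facts(deps):
    print(fact)
# nsubj("eats","John").
# dobj("eats","apple").
```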
Keywords: |
Answer Set Programming (ASP), Information Retrieval (IR), Natural Language
Processing (NLP), Part of Speech Tagging (PoS), Question Answering (QA) |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
UNDERGROUND OBJECT DETECTION BASED ON RADIO PROPAGATION CHARACTERISTICS |
Author: |
SUHERMAN, ERWIN WIJAYANTO, ALI HANAFIAH RAMBE, NAEMAH MUBARAKAH, YULIANTA
SIREGAR, MARWAN AL-AKAIDI |
Abstract: |
Underground object detection is useful to explore underground resources as well
as to monitor underground infrastructure. Underground object detection has been
employed for mineral exploration, archeological material finding and underground
fault detection. Existing systems usually employ the ground penetration radar
(GPR) that makes use radio signal reflection. GPR weakness is that the device
relies only on a single point of signal receptions that minimize the scope of
detection. This paper proposes multipoint radio receptions for underground
object detection based on the received signals instead of the reflected ones.
The proposed system was initially tested experimentally for bandwidth range of
97 MHz to 130 MHz which results error shifting from the employed model about
50.33% at frequency 130.762 MHz, 17.58% at 109.818 MHz and 13.38% at 97.335 MHz.
Method of finding the best frequency is then developed by employing gradient
comparison. Higher frequencies were chosen from 500 MHz to 1 GHz as these
frequency results worse losses to ensure experiments were conducted in the worst
condition. The analysis found that 537.69 MHz is the best frequency for the
frequency range. In order to reconstruct the detected object, the number of
multipath propagations is then determined. The object detections were then
measured based on the supervised and unsupervised techniques. The supervised
method exerted better precision compared to the unsupervised method by at least
30% with the detected object reducing the received signal up to 1.86 dBm or
2.68% in average. |
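The idea of inferring a buried object from attenuation of the received signal can be illustrated with a standard log-distance path-loss model; the reference loss and path-loss exponent below are illustrative assumptions, not the paper's measured parameters:

```python
import math

def received_power_dbm(tx_dbm, dist_m, ref_loss_db=40.0,
                       exponent=2.7, ref_dist_m=1.0):
    """Log-distance model: Pr = Pt - (PL0 + 10*n*log10(d/d0))."""
    path_loss = ref_loss_db + 10 * exponent * math.log10(dist_m / ref_dist_m)
    return tx_dbm - path_loss

free_space = received_power_dbm(0.0, 10.0)
# A buried object adds extra attenuation at the receiver; a drop of the
# order reported in the paper (about 1.86 dBm) flags its presence.
obstructed = free_space - 1.86
print(free_space, obstructed)
```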
Keywords: |
Underground object detection, radio propagation losses, propagation model |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
P2P ISLAMIC INVESTMENTS USING BLOCKCHAIN, SMART CONTRACT AND E-NEGOTIATION |
Author: |
HASAN AL-SAKRAN, ONS AL-SHAMAILEH |
Abstract: |
Blockchain is receiving increasing attention from industry and academia as a
breakthrough technology that could provide enormous benefits to different
sectors. The financial sector is one of those that has started to consider
applying blockchain, though such efforts are still in their initial stages.
The objective of this paper is to apply blockchain and smart contract
technologies to managing investments that implement a real Islamic investment
policy (a profit-loss-sharing agreement); this results in easier, faster, more
secure, and more transparent investment transactions for both investors and
entrepreneurs (or businesses). This work proposes the design of a middleware
infrastructure that implements the real Islamic investment policy (a
profit-loss-sharing rate) called Musharakah (joint venture), where investors
and entrepreneurs share a pre-agreed percentage of profit or loss from the
financed project. The platform acts as a trusted third party without the need
for another mediator; it helps both investors and entrepreneurs negotiate
their terms. |
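The Musharakah profit-loss-sharing logic that such a smart contract would encode can be sketched as plain arithmetic; the amounts and the 60/40 pre-agreed ratio are illustrative assumptions, not terms from the paper:

```python
def musharakah_settlement(outcome, investor_share):
    """Split a project's profit (positive) or loss (negative)
    between investor and entrepreneur by a pre-agreed ratio."""
    if not 0 <= investor_share <= 1:
        raise ValueError("share must be a fraction between 0 and 1")
    investor = outcome * investor_share
    entrepreneur = outcome - investor
    return investor, entrepreneur

# A 60/40 pre-agreed split applied to a 10,000 profit and a 2,000 loss.
print(musharakah_settlement(10_000, 0.6))   # -> (6000.0, 4000.0)
print(musharakah_settlement(-2_000, 0.6))   # -> (-1200.0, -800.0)
```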
Keywords: |
Peer-To-Peer Investment, E-Negotiation, Blockchain, Smart Contract, Islamic
Profit-Loss-Sharing Agreement. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
APPLICATION OF ENSEMBLE ARIMA, ANFIS FOR CONSTRUCTING MODEL OF GARLIC PRICE DATA
IN SEMARANG |
Author: |
TARNO TARNO, DI ASIH I MARUDDANI, RITA RAHMAWATI |
Abstract: |
This research constructs predictive models of commodity price data. Classical
models such as the Autoregressive Integrated Moving Average (ARIMA) and
machine learning models such as the Adaptive Neuro-Fuzzy Inference System
(ANFIS) have been implemented in various fields of time series analysis. This
research focuses on constructing ARIMA, ANFIS, and their combination, the
ensemble ARIMA-ANFIS. The main problem of combination is determining the
weight of each vector of predicted values obtained from the related models. In
this research, the weights were determined by a variance-covariance approach
and Lagrange multiplier optimization, whereas in classical studies the
combination is obtained by averaging the predicted values. The main issue of
this research is thus how to determine the weights of the vectors of predicted
values using the variance-covariance approach to construct the ensemble
ARIMA-ANFIS. Daily garlic price data from Semarang, collected from January
2019 to August 2019, were used as a case study. ARIMA, ANFIS, and the ensemble
ARIMA-ANFIS were implemented for prediction. The individual ARIMA, the
individual ANFIS, the ensemble model by averaging, and the ensemble model by
weighting all yielded high prediction accuracy. The combination of
ARIMA(1,0,0)-ARCH(1) and ANFIS (with lag-1 and lag-2 as inputs and 2
membership functions) is the best model for forecasting garlic price data in
Semarang. The MAPE values of all models were less than 5%, indicating good
forecasting performance. |
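In the classical forecast-combination literature, the variance-covariance weighting described above is the minimum-variance solution of a Lagrange-multiplier problem. A sketch with illustrative forecast errors (not the paper's data) follows:

```python
import numpy as np

def min_variance_weights(errors):
    """Weights w = S^-1 1 / (1' S^-1 1), minimizing the combined error
    variance subject to sum(w) = 1 (Lagrange multiplier solution)."""
    cov = np.cov(errors)                 # models x models error covariance
    ones = np.ones(cov.shape[0])
    raw = np.linalg.solve(cov, ones)
    return raw / raw.sum()

rng = np.random.default_rng(0)
# Illustrative residuals of two models (e.g. an ARIMA and an ANFIS fit);
# the second model has higher error variance, so it gets less weight.
errors = np.vstack([rng.normal(0, 1.0, 200), rng.normal(0, 2.0, 200)])
w = min_variance_weights(errors)
print(w)  # weights sum to 1; the lower-variance model dominates
```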
Keywords: |
ARIMA, ANFIS, Ensemble, Garlic Price Data |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
ON THE POSSIBILITY OF IMPLEMENTING ARTIFICIAL INTELLIGENCE SYSTEMS BASED ON
ERROR-CORRECTING CODE ALGORITHMS |
Author: |
AKHAT S BAKIROV, IBRAGIM E SULEIMENOV |
Abstract: |
A new approach to the implementation of artificial intelligence systems is
proposed, based on an analogy with the theory of error-correcting coding, as
well as on the philosophical interpretation of intelligence as an information
processing system that provides, first of all, compression of information, for
example by reducing a complex digital image to a set of classification
features. The approach is based on the expansion of a binary sequence into a
fuzzy Fourier series, implying that the expansion approximates the original
function up to a certain number of permissible deviations. This solves a
problem similar to the one solved by artificial neural networks, which map a
recognizable image to an image from the original training set. The analogs of
the images that make up the training sample are the functions that form the
basis for the expansion of the binary sequence into a fuzzy Fourier series,
and/or their combinations. |
Keywords: |
Artificial Intelligence, Artificial Neural Networks, Error-correcting Codes,
Dialectical Positivism |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
ANALYSIS OF E-COMMERCE CUSTOMER BEHAVIOR: A THEORY OF PLANNED BEHAVIOR APPROACH |
Author: |
EMRINALDI NUR D.P., LEDY SHAKINA GUSRAFANI |
Abstract: |
This research examines the determinant factors of e-commerce customer behavior
based on the theory of planned behavior. The research sample consists of 216
respondents who use the internet, have an interest in e-commerce or
information technology, and have already made transactions in e-commerce. The
determinant variables of e-commerce customer behavior are safety, trust,
service quality, risk perception, attitude, subjective norm, perceived
behavioral control, and intention. Data analysis uses partial least squares.
Based on path analysis, safety and trust lower risk perception. Trust can also
build a positive attitude in e-commerce customers. A positive attitude, higher
perceived behavioral control, and a strong subjective norm lead to a higher
intention to make e-commerce transactions. Finally, the intention to make
e-commerce transactions is executed by the customer buying and paying for the
product online. On the other hand, there is no effect of service quality on
risk perception and no effect of risk perception on intention. This research
has implications for e-commerce companies to improve website safety in order
to gain customer trust and lower perceived risk. |
Keywords: |
Customer Behavior, E-commerce, Theory of Planned Behavior |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
AN ADVANTAGE OPTIMIZATION FOR PROFILING BUSINESS METRICS COMPETITIVE WITH ROBUST
NONPARAMETRIC REGRESSION |
Author: |
MARISCHA ELVENY, MAHYUDDIN KM NASUTION, MUHAMMAD ZARLIS, SYAHRIL EFENDI |
Abstract: |
Business Intelligence can be used to support various business decisions, from
operational to strategic. Various new ways have been used to make progress,
one of which is electronic-based business; but with a large number of
variations, business vulnerabilities are also increasingly difficult to
anticipate. To keep up with the development of a company, it is necessary to
optimize its business metrics. The purpose of optimization is to find the
minimum or maximum value of a problem that occurs, that is, whether the value
a company produces yields the desired results or not; the outliers obtained
are one of the parameters that can be considered in achieving profit. In this
study, Robust CMARS (Conic Multivariate Adaptive Regression Splines) was used,
where CMARS can manage the multivariate structure in the data and a robust
approach handles uncertain outliers in the data. The results achieved by
RCMARS take the form of a maximum value over the basis functions BF11, BF12,
and BF13, with a maximum of 14.06% outliers. |
Keywords: |
Business Intelligence, Optimization, Customer Profiles, Business Metrics, Robust
Nonparametric Regression, RCMARS. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
APPLICATION OF NEURAL NETWORK ALGORITHMS AND NAIVE BAYES FOR TEXT CLASSIFICATION |
Author: |
VADYM S. YAREMENKO, WALERY S. ROGOZA, VLADYSLAV I. SPITKOVSKYI |
Abstract: |
Neural network algorithms and probabilistic classifiers applied to text data
set processing were analyzed. The results indicated advantages of recurrent
and convolutional neural network architectures, deep learning neural network
algorithms, and the Naive Bayes classifier with respect to classification
accuracy and the processing speed of large data volumes. Development
principles for classification algorithms and the corresponding mathematical
models were generalized. The mathematical techniques are based on representing
classification accuracy criteria and text processing speed in the form of
objective functions, with the key parameters of neural network algorithms and
probabilistic classifiers, as well as features of the organization and volume
of the input data, as objective function arguments. Mathematical modelling
allowed identifying shortcomings of certain types of neural network and
probabilistic classifiers that limit their scope. Algorithms based on the
Naive Bayes classifier analyzed large data sets slowly, which limits their
use. For neural network algorithms, features of the learning process
optimization procedure depending on the type of neural network were outlined,
and approaches to the optimization of deep and shallow neural network
architectures were developed. The analysis of the efficiency of algorithms for
machine classification of text blocks is thereby shown to be relevant both to
the fundamental problem of building artificial intelligence and to the
specific problems of real-time processing of large volumes of text data. |
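A minimal multinomial Naive Bayes text classifier, of the kind analyzed above, can be sketched in a few lines; the toy corpus and labels are illustrative assumptions:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""
    def fit(self, docs, labels):
        self.vocab = {w for d in docs for w in d.split()}
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.split())
        return self

    def predict(self, doc):
        def log_score(label):
            total = sum(self.word_counts[label].values())
            score = math.log(self.class_counts[label]
                             / sum(self.class_counts.values()))
            for w in doc.split():
                if w in self.vocab:
                    score += math.log((self.word_counts[label][w] + 1)
                                      / (total + len(self.vocab)))
            return score
        return max(self.class_counts, key=log_score)

clf = NaiveBayesText().fit(
    ["great fast model", "accurate and fast", "slow and wrong", "bad result"],
    ["pos", "pos", "neg", "neg"])
print(clf.predict("fast accurate model"))  # -> pos
```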
Keywords: |
Text Data, Recurrent Neural Network, Convolutional Neural Network, Deep Neural
Network |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
A NOVEL BAT ALGORITHM FOR DYNAMIC ECONOMIC POWER DISPATCH WITH PROHIBITED
OPERATING ZONES |
Author: |
RUDI KURNIANTO, HARDIANSYAH |
Abstract: |
This paper proposes a new meta-heuristic search algorithm, called Novel Bat
Algorithm (NBA). The proposed algorithm combines the bats’ habitat selection and
their self-adaptive compensation for Doppler effects in echoes into the basic
bat algorithm (BA). The selection of bat habitats is modeled as an option
between quantum behavior and mechanical behavior. The effectiveness and
feasibility of the proposed method are demonstrated by two real power systems
and compared with other optimization algorithms reported in the literature. Many
practical constraints of generators such as ramp rate limits, prohibited
operating zones, and transmission losses are considered. The new algorithm is
implemented for solving the dynamic economic dispatch (DED) problem so as to
minimize the total generation cost when considering the linear and non-linear
constraints. In order to validate the proposed algorithm, it is applied to two
cases with 6-unit and 15-unit power systems for a 24-hour time interval,
respectively. The results show that the proposed algorithm indeed produces a
more optimal solution in both cases when compared to the results of other
optimization algorithms reported in the literature. |
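A minimal sketch of the basic bat algorithm (BA) that NBA extends, applied to a toy sphere function rather than the dispatch problem; the parameter values and the simplified attraction-form velocity update are illustrative assumptions:

```python
import numpy as np

def bat_algorithm(objective, dim=2, n_bats=20, iters=200,
                  fmin=0.0, fmax=2.0, seed=1):
    """Basic BA sketch: frequency-tuned steps toward the best bat,
    with greedy acceptance of improving moves."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_bats, dim))
    fitness = np.apply_along_axis(objective, 1, pos)
    best = pos[fitness.argmin()].copy()
    for _ in range(iters):
        # Random pulse frequency in [fmin, fmax] per bat.
        freq = fmin + (fmax - fmin) * rng.random((n_bats, 1))
        cand = pos + (best - pos) * freq   # step along the line to best
        cand_fit = np.apply_along_axis(objective, 1, cand)
        improved = cand_fit < fitness
        pos[improved] = cand[improved]
        fitness[improved] = cand_fit[improved]
        best = pos[fitness.argmin()].copy()
    return best, float(fitness.min())

best, value = bat_algorithm(lambda x: float(np.sum(x ** 2)))
print(value)  # close to the sphere minimum of 0
```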
Keywords: |
Dynamic economic dispatch, novel bat algorithm, ramp rate limits, prohibited
operating zones |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
A NEW ASSOCIATION CLASSIFICATION BASED METHOD FOR DETECTING PHISHING WEBSITES |
Author: |
FAISAL ABURUB, WAEL HADI |
Abstract: |
Impacting businesses across the world, phishing remains a serious problem
today: due to anonymous access to personal details, businesses and their
consumers deal with the problems arising from phishing attacks, huge financial
loss being one of them. Because of this, phishing needs to be identified and
dealt with efficiently using intrusion detection techniques; such mechanisms
are yet to be widely used. In this paper, using a newly developed method,
Phishing Multi-Class Association Rule (PMCAR), founded on association rule
mining, we study the issue of predicting phishing websites. To weigh up the
successful use of data mining algorithms, two experimental studies were
conducted on a publicly available dataset comprising 10,068 instances of
legitimate and phishing websites, in which classifier models were built. In
the first study, the capability of PMCAR was examined against three
associative classification algorithms (CBA, MCAR, and FACA); in the second,
benchmark algorithms (SVM, LR, DT, and ANN) were additionally assessed, so as
to gauge more generally the competence of data mining for the phishing website
detection problem. Across these experiments, all of the evaluated data mining
algorithms predicted phishing websites with a decent classification rate; we
can therefore conclude that these methods can be successful in tackling the
problem of predicting phishing websites. |
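The class-association-rule idea behind such algorithms can be sketched as simple support/confidence counting; the feature names, thresholds, and toy rows below are illustrative assumptions, not PMCAR itself:

```python
from itertools import combinations
from collections import Counter

def mine_class_rules(rows, labels, min_support=0.4, min_confidence=0.7):
    """Mine (itemset -> class) rules meeting support/confidence thresholds."""
    n = len(rows)
    itemsets = Counter()
    with_class = Counter()
    for row, label in zip(rows, labels):
        for size in (1, 2):
            for combo in combinations(sorted(row), size):
                itemsets[combo] += 1
                with_class[(combo, label)] += 1
    rules = []
    for (combo, label), count in with_class.items():
        support = count / n
        confidence = count / itemsets[combo]
        if support >= min_support and confidence >= min_confidence:
            rules.append((combo, label, support, confidence))
    return rules

# Toy website features: URL traits as items, phish/legit as class labels.
rows = [{"ip_in_url", "long_url"}, {"ip_in_url"}, {"https"},
        {"https", "long_url"}]
labels = ["phish", "phish", "legit", "legit"]
for rule in mine_class_rules(rows, labels):
    print(rule)
```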
Keywords: |
Phishing Websites, Associative Classification, Phishing Multi-Class |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
INTEGRATING DATA WAREHOUSE AND MACHINE LEARNING TO PREDICT ON COVID-19 PANDEMIC
EMPIRICAL DATA |
Author: |
HASAN HASHIM, EL-SAYED ATLAM, MALIK ALMALIKI, RASHA EL-AGAMY, M. M.
EL-SHARKASY, GUESH DAGNEW, IBRAHIM GAD, OSAMA GHONEIM |
Abstract: |
The world has recently been plagued by the Corona Virus Disease 2019
(COVID-19) pandemic. Since it was first reported in the city of Wuhan, China,
on the 8th of December 2019, COVID-19 has invaded every country in the world.
As of October 24th, 2020, a total of 42,549,383 confirmed cases of COVID-19
had been officially announced, and the death toll was 1,150,163. Globally,
huge volumes of data are being generated on the COVID-19 pandemic, opening a
new research arena for machine learning and artificial intelligence
researchers. In this work, an integration of a data warehouse with a deep
learning approach, namely an LSTM model, is introduced to predict the spread
of COVID-19 in selected countries. We present the design and development of
COVID-warehouse, a data warehouse that integrates and stores the COVID-19 data
made available daily by different countries. The basic idea of the framework
is to use a COVID-19 time-series dataset for analysis by machine learning
models that forecast future trends based on present values. Ultimately, the
proposed prediction model can be applied to other countries, as the nature of
the virus is the same everywhere. In terms of the R2 metric, the experimental
results of the decision tree model outperform the other models for recovery
cases compared with confirmed and death cases: recovery cases have an R2 of
0.996011, death cases an R2 of 0.993124, and confirmed cases an R2 of
0.991676. Finally, our results emphasize the importance of enforcing the
public health advice of social distancing, as well as applying infection
control measures, to combat COVID-19 before it becomes too late. |
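The R2 (coefficient of determination) metric used above to compare the models can be computed directly; the series below is illustrative, not the paper's data:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((y - mean) ** 2 for y in actual)
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

actual = [100, 120, 150, 190, 240]      # e.g. daily recovery counts
predicted = [98, 125, 148, 195, 235]    # a model's forecasts
print(round(r_squared(actual, predicted), 4))  # -> 0.9934
```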
Keywords: |
COVID-19 Virus, Infection control, Artificial intelligence, Data Warehouse, Deep
Learning Model, Prediction. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
MULTIMODAL PIPELINE FOR QUANTITATIVE METRICS ESTIMATION OF BRAIN TISSUE
MICROSTRUCTURE USING DMRI DATA |
Author: |
ABDERRAZEK ZERAII, AMINE BEN SLAMA, SABRI BARBARIA, MOKHTAR MARS, CYRINE
DRISSI, TAREK KRAIEM |
Abstract: |
White matter changes in the corticospinal tract (CST) contribute to executive
dysfunction in the context of motor control of the body and limbs. The
objective of this study was to characterize the corticospinal tract using
different models of the diffusion MRI signal. We employed DTI (diffusion
tensor imaging) and Multi-Shell Multi-Tissue Constrained Spherical
Deconvolution to estimate a multi-tissue orientation distribution function
(ODF) in 10 healthy subjects. Our goal in this paper is to determine the
sensitivity of the fibre orientation distribution (FOD) compared to the
standard diffusion tensor imaging (DTI) approach and to identify the FOD-DEC
differences in the two regions of interest (the PLIC and the pons). For each
subject, biophysical values were calculated for two regions of interest (the
posterior limb of the internal capsule and the anterior pons) at two b-values
(b=1000 s/mm² and b=3000 s/mm²). Experimental results showed that the pons
region predicts CST integrity more accurately than the posterior limb of the
internal capsule using a b-value of 1000 s/mm². FA and ADC are promising
metrics for clinical applications, especially when combined with qualitative
data from the CSD model. |
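The FA and ADC metrics reported above are standard functions of the diffusion tensor's eigenvalues; a sketch with illustrative white-matter-like eigenvalues (not patient data) follows:

```python
import math

def adc(l1, l2, l3):
    """Mean diffusivity / ADC: the average of the tensor eigenvalues."""
    return (l1 + l2 + l3) / 3

def fa(l1, l2, l3):
    """Fractional anisotropy from diffusion tensor eigenvalues:
    FA = sqrt(3/2 * sum((li - MD)^2) / sum(li^2))."""
    md = adc(l1, l2, l3)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den)

# Illustrative eigenvalues in units of 10^-3 mm^2/s.
print(round(adc(1.7, 0.3, 0.3), 3))  # -> 0.767
print(round(fa(1.7, 0.3, 0.3), 3))   # -> 0.799
```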
Keywords: |
MRI, Diffusion Tensor Imaging, fibre orientation distribution, Multi-Shell
Multi-Tissue, Constrained Spherical Deconvolution (MSMT-CSD). |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
A NOVEL PENDING INTEREST TABLE SHARING SCHEME USING NEURO FUZZY LOGIC FOR NAMED
DATA NETWORKING COMMUNICATION |
Author: |
BASSAM A. Y. ALQARALLEH, MALEK Z. ALKSASBEH, AHMAD H. AL-OMARI |
Abstract: |
At present, Information-Centric Networking (ICN) has become familiar as a
distinct standard for the next-generation Internet and shows promise for
replacing the present host-centric model. Data transmission in ICN is based on
the names of the contents instead of the addresses of the hosts. Named data
networking (NDN) is a related hot research area, which permits a user to
request data with no prior details of the hosting entity. Though earlier
studies on NDN offer mobility and security beyond the classical Internet, NDN
suffers from the problem of pending interest table (PIT) management.
Therefore, a proficient PIT management strategy is essential for the
utilization of PIT memory space. To effectively manage the existing PIT memory
space and improve cache usage, a novel PIT sharing strategy using neuro-fuzzy
logic, called NFPIT, is presented. The neuro-fuzzy method is executed in order
to select an optimal friendly node (FN) for a requestor node (RN) that has
little PIT space left. In addition, we employ a deep learning model, a
Convolutional Neural Network (CNN), for rule generation. The proposed method
is simulated using the NS3 simulator, and the simulation outcomes were
examined under different aspects. The experimental values confirm the
superiority of the NFPIT method, which achieves a maximum cache hit ratio of
56.4% and a content diversity of 67%, with a lower content delivery time of
7.26 ms. |
Keywords: |
Information-Centric Networking, Named Data Networking, Pending Interest Table
Sharing, Neuro Fuzzy, Cache Hit Ratio. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
ENHANCED TEXT LINE SEGMENTATION AND SKEW ESTIMATION FOR HANDWRITTEN KANNADA
DOCUMENT |
Author: |
SHAKUNTHALA B S, Dr. C S PILLAI |
Abstract: |
When a handwritten Kannada document undergoes text line segmentation, the
process is referred to as text line segmentation and skew correction. This is
quite essential for the HCRS (Handwritten Character Recognition System). Text
line segmentation and skew estimation tend to be quite challenging during
document analysis. The proposed system presents improved text-line
segmentation along with skew estimation, for which handwritten Kannada
documents form the dataset. Preprocessing is carried out by three methods,
namely: (i) filtering, (ii) gray scale conversion, and (iii) binarization. The
ESLD (Enhanced Supervised Learning Distance) algorithm is adopted to assess
the distance between text lines, and G_Clustering aids in the grouping of
words, or connected components. Skew estimation is then performed by computing
the skew angle with respect to the gap. The output shows that the proposed
system exhibits higher performance. |
Keywords: |
Segmentation, Skew Correction, Filtering, Gray Scale, Binarization. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full Text |
|
Title: |
FUZZY ANALYTICAL HIERARCHY PROCESS (FAHP) USING GEOMETRIC MEAN METHOD TO SELECT
BEST PROCESSING FRAMEWORK ADEQUATE TO BIG DATA |
Author: |
SALY EID HELMY, GAMAL H.ELADL, MOHAMED EISA |
Abstract: |
Big data is a hotspot, as organizations realize the importance of their big
data for gaining insights that help them develop and understand consumer
requirements. Big data needs large storage capacity and strong processing
frameworks in order to be cleaned, processed, and analyzed. Fortunately, cloud
computing offers many services and processing frameworks that facilitate the
storage and processing of big data. The issue is how to choose the processing
framework best suited to the big data of financial services. The best
processing framework is chosen based on big data criteria and financial
services requirements. We used multi-criteria decision making (MCDM) methods
to solve this decision problem and evaluated five big data processing
frameworks (Spark, Hadoop, Flink, Storm, and Samza) against twelve criteria
collected from previous research. The analytical hierarchy process (AHP) is a
powerful and simple MCDM method, but many researchers believe it has
weaknesses due to uncertainty issues, and many have preferred to combine fuzzy
set theory with AHP to address them. This work introduces fuzzy AHP using the
geometric mean method for a cloud service selection problem. The results show
that the Hadoop framework has the highest levels of security, availability,
scalability, and compatibility, but the highest cost. Spark has the highest
storage capacity and speed and the best processing mode at a lower cost. Flink
has the best usability and processing. Storm has the best performance and
sustainability, and is the cheapest. The validity of our results and the
robustness of our hybrid proposal were confirmed by applying sensitivity
analysis. |
Keywords: |
Big Data, Cloud Computing, MCDM, Fuzzy Set Theory, AHP, Geometric Mean Method,
Hadoop, Spark, Storm, Flink, and Samza. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full
Text |
|
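The geometric mean (Buckley) variant of fuzzy AHP mentioned in the abstract can be sketched as follows: each pairwise comparison is a triangular fuzzy number (l, m, u), row-wise geometric means give fuzzy weights, and centroid defuzzification yields crisp, normalized weights. The 2x2 comparison matrix below is invented for demonstration; it is not data from the paper.

```python
# Sketch of fuzzy AHP via the geometric mean method with triangular
# fuzzy numbers (l, m, u). The comparison matrix is illustrative only.
from math import prod

def fuzzy_weights(matrix):
    n = len(matrix)
    # Component-wise geometric mean of each row of fuzzy comparisons.
    r = [tuple(prod(matrix[i][j][k] for j in range(n)) ** (1.0 / n)
               for k in range(3))
         for i in range(n)]
    # Fuzzy total (L, M, U); its inverse is (1/U, 1/M, 1/L).
    L = sum(ri[0] for ri in r)
    M = sum(ri[1] for ri in r)
    U = sum(ri[2] for ri in r)
    fuzzy_w = [(ri[0] / U, ri[1] / M, ri[2] / L) for ri in r]
    # Centroid defuzzification, then normalise to sum to 1.
    crisp = [(l + m + u) / 3.0 for (l, m, u) in fuzzy_w]
    total = sum(crisp)
    return [c / total for c in crisp]

# Usage: criterion A judged moderately more important than B, TFN (2, 3, 4),
# with the reciprocal fuzzy judgement (1/4, 1/3, 1/2) in the mirror cell.
matrix = [[(1, 1, 1), (2, 3, 4)],
          [(1/4, 1/3, 1/2), (1, 1, 1)]]
weights = fuzzy_weights(matrix)
```

With more criteria, the same per-row computation scales directly; the alternative scores against each criterion are then aggregated with these weights.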
Title: |
A FIREWALL-ADVERSARIAL TESTING APPROACH FOR SOFTWARE DEFINED NETWORKS |
Author: |
RAMI MALKAWI, IZZAT ALSMADI, AHMED ALEROUD, PAVEL PETROV |
Abstract: |
Software Defined Networks (SDN) have recently evolved to give software a greater
role in network control and management. There is concern that such significant
roles may put those networks at risk in terms of reliability and security. As a
new architecture, SDN requires thorough testing and evaluation to ensure that
such networks are robust and reliable. In this paper, we focus on testing
firewall modules built on top of SDN. We model typical interactions between
those modules and the network based on flow and firewall rules. We believe that,
in the future, all security controls, including firewalls, should be deployed as
software services, created in real time as instances and deployed without any
human intervention. This paper also describes an approach that generates
synthetic attacks targeting SDNs using an adversarial approach. It can be used
to create models that test SDNs against different attack variations. It is based
on the most recent OpenFlow models/algorithms and utilizes similarity with known
attack patterns to identify attacks. Such synthesized variations of attack
signatures are shown to attack SDNs using adversarial approaches. |
Keywords: |
SDN, OpenFlow, Software evaluation, Model-based Testing. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full
Text |
|
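The idea of probing a firewall with variations of a known attack flow can be sketched generically: model rules with first-match semantics, mutate an attack flow (port shifting, source spoofing), and collect the variants the rules fail to block. The rule format and mutation strategy here are simplified assumptions, not the paper's actual OpenFlow models.

```python
# Sketch of adversarial-style testing of simplified firewall flow rules.
# Rule fields and mutations are illustrative, not real OpenFlow matches.
from dataclasses import dataclass
from itertools import product
from typing import Optional

@dataclass(frozen=True)
class Rule:
    src_prefix: str            # matches any source IP starting with it
    dst_port: Optional[int]    # None acts as a wildcard
    action: str                # "allow" or "deny"

def match(rules, src_ip, dst_port):
    """First-match semantics with a default-allow fallback."""
    for r in rules:
        if src_ip.startswith(r.src_prefix) and r.dst_port in (None, dst_port):
            return r.action
    return "allow"

def attack_variants(src_ip, dst_port):
    """Simple variations of a known attack flow: spoofed source and
    shifted destination ports (values are illustrative)."""
    sources = [src_ip, "192.168.1.7"]
    ports = [dst_port, dst_port + 1, 8080]
    return [(s, p) for s, p in product(sources, ports)]

# Usage: one deny rule, then check which attack variants slip through.
rules = [Rule("10.0.", 23, "deny")]   # block telnet from the 10.0.* range
undetected = [(s, p) for (s, p) in attack_variants("10.0.0.5", 23)
              if match(rules, s, p) == "allow"]
```

Each flow in `undetected` is a blind spot the rule set misses; in an SDN setting, such probes could be generated and installed automatically as the paper envisions for software-deployed firewalls.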
Title: |
USABILITY EVALUATION DIMENSIONS OF MOBILE HEALTH APPLICATION FOR ELDERLY: A
SYSTEMATIC REVIEW |
Author: |
ERICK KURNIAWAN, NORASIKEN BAKAR, SAZILAH SALAM, RESTYANDITO |
Abstract: |
With the increase in the elderly population as a group of potential mHealth
users, the need for mHealth applications that help overcome the limitations of
elderly users is growing. The challenges caused by aging factors in computer use
are widely recognized. However, usability studies show that mHealth is still not
suitably designed for elderly users. To improve mHealth design aimed at elderly
users, we need usability evaluation dimensions that are in accordance with
elderly characteristics to identify usability problems in mHealth applications.
This study conducts a Systematic Literature Review (SLR) using PRISMA to
identify the challenges faced by the elderly and the usability dimensions most
widely used to evaluate mHealth applications for elderly users, and then selects
and proposes usability dimensions that match elderly characteristics. Six key
categories of elderly challenges influencing the usability of mHealth and the
nineteen most-used usability dimensions in mHealth applications were identified.
Nine usability dimensions most suitable for evaluating mHealth applications for
the elderly were also selected. The model, however, is incomplete without
evaluation criteria and metrics; these will be developed into a complete model
in the next phase of this study. |
Keywords: |
Usability Evaluation, Usability Dimension, mHealth, mHealth for Elderly. |
Source: |
Journal of Theoretical and Applied Information Technology
15th January 2021 -- Vol. 99. No. 01 -- 2021 |
Full
Text |
|