Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, encompassing the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF or a compatible format so that they may be evaluated for
publication in an upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will edit out the necessary
information at our end. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
Journal of Theoretical and Applied Information Technology
15th April 2019 | Vol. 97 No. 07 |
Title: |
DEEP NEURAL CLASSIFICATION AND LOGIT REGRESSION BASED ENERGY EFFICIENT ROUTING
IN WIRELESS SENSOR NETWORK |
Author: |
Mrs. J. SRIMATHI, AP(SG)/MCA, Dr. B. SRINIVASAN |
Abstract: |
In a Wireless Sensor Network (WSN), routing strategies are required for
distributing data from sensor nodes to the base station. During data
transmission, node energy is the key parameter for improving network lifetime.
Conventional routing techniques perform routing in WSNs but fail to reduce
energy consumption and improve network lifetime with minimum overhead. To
overcome this issue, the Energy-Efficient Deep Neural Node Classifier based
Logit Regressed Routing (EEDNC-LRR) technique is introduced. The main aim of
the EEDNC-LRR technique is to perform energy-efficient routing and increase the
reliability of data transmission with maximum network lifetime and minimal
overhead. In the EEDNC-LRR technique, sensor nodes transmit environmental data.
Initially, all sensor nodes have a certain energy level, and a sensor node
consumes some amount of energy while sensing data. The Deep Neural Node
Classifier model is used in the EEDNC-LRR technique to classify nodes as
higher-energy or lower-energy nodes based on a threshold energy level,
producing efficient classification results for the sensor nodes. This helps to
choose the higher-energy nodes so as to reduce energy consumption and extend
network lifetime while routing data packets. Subsequently, the higher-energy
nodes are passed to the output layer for efficient routing in the WSN. For the
higher-energy nodes, Logit Regression Analysis is carried out to identify the
nearest neighbor node through time of arrival (ToA), which estimates the
distance between the source nodes and the sink node. After selecting the
nearest neighbor, the route with the minimum distance between nodes is
discovered for routing. In turn, reliable data packet transmission is achieved
with minimum overhead. The simulation is conducted with various parameters,
namely energy consumption, network lifetime, reliability and routing overhead,
with respect to the number of sensor nodes and data packets. The simulation
results and discussion show that the EEDNC-LRR technique improves network
lifetime and reliability and also minimizes routing overhead as well as energy
consumption. |
Keywords: |
WSN, Routing, Deep Neural Node Classifier, Threshold Energy Level, Logit
Regression Analysis, Nearest Neighbor Node, Time Of Arrival. |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
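The abstract above turns on two concrete steps: splitting nodes by a threshold energy level and ranking candidate relays by a distance derived from time of arrival (ToA). The Python sketch below illustrates only those two steps; it is not the authors' EEDNC-LRR implementation, the deep neural classifier and logit regression are replaced by a plain threshold rule and a minimum-distance pick, and all node values are hypothetical.

```python
# Illustrative sketch only: EEDNC-LRR's deep neural classifier and logit
# regression are NOT reproduced here; a threshold split and a ToA-derived
# distance stand in for them, on made-up node values.
from dataclasses import dataclass

SPEED_OF_LIGHT = 3e8  # m/s, converts a time of arrival into a distance

@dataclass
class Node:
    node_id: int
    residual_energy: float  # joules (assumed unit)
    toa_to_sink: float      # seconds, measured time of arrival

def split_by_energy(nodes, threshold):
    """Split nodes into higher- and lower-energy groups by a threshold energy level."""
    higher = [n for n in nodes if n.residual_energy >= threshold]
    lower = [n for n in nodes if n.residual_energy < threshold]
    return higher, lower

def nearest_high_energy_relay(nodes, threshold):
    """Among the higher-energy nodes, pick the one whose ToA-derived distance is smallest."""
    higher, _ = split_by_energy(nodes, threshold)
    return min(higher, key=lambda n: n.toa_to_sink * SPEED_OF_LIGHT, default=None)

nodes = [Node(1, 0.8, 2.1e-7), Node(2, 0.3, 1.5e-7), Node(3, 0.9, 1.8e-7)]
print(nearest_high_energy_relay(nodes, threshold=0.5))  # Node 3 in this toy example
```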
|
Title: |
AN INTELLIGENT HOME ENERGY MANAGEMENT (IHEM) BASED ON STATE OF CHARGE OF BATTERY
IN HOUSEHOLD LOADS ON RENEWABLE ENERGY SYSTEM |
Author: |
OYINKANOLA LATEEF O. ADEWALE, SAWAL HAMID BIN MD ALLI, RAMIZI MOHAMED, AREMU
OLAOSEBIKAN AKANNI |
Abstract: |
This work proposes a switching control strategy for household appliances in a
rural area whose storage relies on a battery sized for three days. The system
tracks the state of charge (SOC) of the battery to balance it against the
demand of the connected appliances. Existing controllers or regulators switch
off all appliances when the SOC reaches a certain level; the aim, however, is
to avoid a total blackout and to keep at least one appliance working. We
therefore design an intelligent discrete-time controller and optimize its
parameters for set-point control following the base rule. The performance
results show that the proposed switching control strategy works within a
certain limit and avoids total discharge of the battery. |
Keywords: |
Alternative energy, Battery, Controller, Intelligent, and SOC |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
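As a minimal illustration of the kind of SOC-based switching rule the abstract describes, the sketch below sheds low-priority loads as the battery drains so that at least one appliance keeps running; the thresholds and appliance priorities are hypothetical, not the paper's tuned controller.

```python
# Minimal sketch of SOC-based priority switching; thresholds and priorities are
# hypothetical and do not come from the paper's optimized discrete-time controller.
def switch_appliances(soc, appliances):
    """Return the appliances allowed to run for a given battery SOC in [0, 1].

    appliances: list of (name, priority) pairs, priority 1 = most essential.
    """
    if soc > 0.60:
        allowed_priority = 3   # plenty of charge: run everything
    elif soc > 0.35:
        allowed_priority = 2   # medium charge: shed low-priority loads
    else:
        allowed_priority = 1   # low charge: keep only the essential load on
    return [name for name, prio in appliances if prio <= allowed_priority]

loads = [("lighting", 1), ("refrigerator", 2), ("television", 3)]
print(switch_appliances(0.30, loads))  # only lighting stays on near deep discharge
```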
|
Title: |
DYNAMICALLY CONFIGURABLE MANET TOPOLOGY FOR WIRELESS NETWORKS USING DCNRPT
ALGORITHM |
Author: |
Mrs. J. VIJAYALAKSHMI, Dr. K. PRABU |
Abstract: |
A Mobile Ad hoc Network (MANET) is a collection of mobile nodes that
communicate among themselves over wireless links and differ from wireless LAN
(WLAN) communications. A MANET's mobile nodes are end devices that also route
information. Mobility of nodes increases near the geographical edges of the
network clusters. The nodes in a MANET work as both senders and receivers of
information. The basic qualities of reconfiguration and ease of deployment make
MANETs suitable candidates for emergency applications. A MANET's network
topology is based on the relative positions of connecting nodes; links are
created and broken as nodes change positions within the network. Node mobility
affects source, destination and intermediate nodes, resulting in an extremely
volatile topology. The dynamism of MANET topology is a unique challenge, and
hence a dynamic topology based on the ad hoc positions of mobile nodes is
proposed in this paper, which improves packet delivery ratio, throughput and
jitter and reduces delay. The proposed DCNRPT topology is simulated in NS2 for
node deployment and optimized coverage. |
Keywords: |
MANET, DCNRPT, WCA, Topology Based Routing, Position Based Routing, Topology
Characteristics. |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
|
Title: |
A PRIORITY-QUEUE DOMINATED AND DELAY-ENERGY GUARANTEED MAC SUPERFRAME STRUCTURE
FOR WBANS TO DEAL WITH VARIOUS EMERGENCY TRAFFIC OF PILGRIMS DURING HAJJ: AN
ANALYTICAL MODEL |
Author: |
SHAH MURTAZA RASHID AL MASUD, ASMIDAR ABU BAKAR, SALMAN YUSSOF |
Abstract: |
In recent times, IEEE 802.15.6 based wireless body sensor networks (WBANs) are
being deployed in various medical and healthcare centres to provide quick,
real-time health services to patients. Every year during Hajj, several million
pilgrims gather at the overcrowded ritual site ‘Kaaba’ and its surrounding
places in Makkah. Ensuring the best healthcare facilities and services for
pilgrims who suffer from a variety of critical conditions and illnesses,
including chronic diseases, sudden illness, trauma and accidents, in such a
congested environment is a demanding research issue, because a lack of proper
healthcare facilities may endanger pilgrims' lives. In our research, we
classify emergency medical data into five criticality levels, which are
aperiodic or random and require immediate transmission to the healthcare
stations for further action. Based on the data criticality level, a
priority-criticality index table and a modified superframe structure for the
medium access control (MAC) protocol are developed. Critical or emergency data
must be transmitted ahead of other, non-critical traffic, as delay in its
transmission may endanger human life. A problem may occur, however, when
emergency data from several sensors aggregate at the coordinator for further
transmission to the healthcare stations through the exclusive access period
(EAP) slot of the MAC. For smooth, low-delay and energy-efficient data
transmission, we propose an analytical method based on an M/M/1
priority-queuing system to provide delay- and energy-guaranteed QoS at the MAC
level. A Poisson process is used to model the packet arrival rate of each
queue. In this research, the M/M/1 queuing system is used to investigate five
different priority queues for five emergency traffic classes. Emergency traffic
is defined and prioritised through the criticality level of the pilgrims'
medical problems as reported by the body sensors. The mathematical analysis
shows that medical data with the highest priority does not stay in the queue
for long, which decreases the queuing delay of the system. Moreover, for
energy-efficient data transmission, we propose a sleep/idle-wakeup mechanism
that reduces unnecessary energy consumption. An extensive analytical approach
through a mathematical model shows better performance than the default IEEE
802.15.6 standard, which assumes the sensor nodes are awake all the time.
Finally, in our future research, extensive experimental work will be conducted
to corroborate and validate the analytical findings mentioned in the discussion
and future work section. |
Keywords: |
Hajj, Pilgrims, Emergency, WBANs MAC Superframe, Low Delay, Energy Efficiency,
Queue |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
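For readers who want the queuing step made concrete, the classical non-preemptive priority result that an M/M/1 priority analysis of this kind usually builds on is Cobham's formula below; it is textbook material, not necessarily the exact expression derived in the paper, and class 1 is taken as the highest priority.

```latex
% Mean waiting time of priority class k under non-preemptive priority service.
% \lambda_i, \mu_i: arrival and service rates of class i; \rho_i = \lambda_i / \mu_i.
% With exponential service, E[S_i^2] = 2/\mu_i^2, giving the M/M/1 form of W_0.
\[
W_0 = \sum_{i=1}^{P} \frac{\lambda_i \,\mathbb{E}[S_i^2]}{2}
    = \sum_{i=1}^{P} \frac{\lambda_i}{\mu_i^{2}},
\qquad
W_k = \frac{W_0}{\Bigl(1-\sum_{i=1}^{k-1}\rho_i\Bigr)\Bigl(1-\sum_{i=1}^{k}\rho_i\Bigr)}
\]
```

The denominator shrinks only with the load of classes at or above priority k, which is why the highest-criticality traffic sees the shortest wait, matching the observation in the abstract.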
|
Title: |
BEHAVIOR ANALYSIS OF THE USE OF E-LEARNING USING UTAUT MODEL APPROACH (CASE
STUDY: STMIK MIKROSKIL) |
Author: |
ZULPA SALSABILA, EDI ABDURACHMAN, SOPHYA HADINI MARPAUNG |
Abstract: |
E-learning has become one of the factors universities need in order to compete
and survive. Electronic learning (e-learning) uses the Internet and digital
technologies to create educational experiences. E-learning at STMIK Mikroskil
is used to help students and lecturers in the teaching and learning process.
This study uses the UTAUT (Unified Theory of Acceptance and Use of Technology)
model. The aim is to analyze the tendencies of users of the e-learning system
at STMIK Mikroskil Medan by testing whether Behavioral Intention and Use
Behavior toward a technology are influenced by Performance Expectancy, Effort
Expectancy, Social Influence, and Facilitating Conditions. These four factors
are moderated by gender, experience, and voluntariness. Questionnaire data were
collected from 346 active students and analyzed by structural equation modeling
(SEM) using AMOS 24. The results show that Performance Expectancy, Effort
Expectancy and Social Influence have a positive effect on Behavioral Intention;
Facilitating Conditions have a positive effect on Use Behavior; and Behavioral
Intention has a significant influence on Use Behavior. Gender does not have a
moderating effect on the influence of Performance Expectancy and Effort
Expectancy on Behavioral Intention, but it does moderate the effect of Social
Influence on Behavioral Intention. Experience does not moderate the effect of
Effort Expectancy on Behavioral Intention; however, it does moderate the
positive effect of Social Influence on Behavioral Intention, and it also
moderates the effect of Facilitating Conditions on Use Behavior. Voluntariness
moderates the effect of Social Influence on Behavioral Intention. |
Keywords: |
E-Learning, Performance Expectancy, Effort Expectancy, Social Influence,
Facilitating Conditions, Unified Theory of Acceptance and Use of Technology
(UTAUT) |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
|
Title: |
A SECURE CLOUD-BASED PICTURE ARCHIVING AND COMMUNICATION SYSTEM FOR DEVELOPING
COUNTRIES |
Author: |
ADEBAYO OMOTOSHO, JINMISAYO ADIGUN AWOKOLA, JUSTICE ONO EMUOYIBOFARHE, CHRISTOPH
MEINEL |
Abstract: |
Picture Archiving and Communication Systems (PACS) enable medical images from
imaging modalities to be stored electronically and viewed on screens so that
medical practitioners and other health professionals can access them. However,
PACS comes with substantial costs for storage, air conditioning, licenses and
so on; this makes it necessary to take advantage of existing technologies that
can enhance adoption, especially in developing countries. More alarming, data
centres for electronic health services have been the target of several attacks
and hacks in recent years. Although cloud computing has the potential to make
the adoption of PACS more cost-effective, it has a major drawback in the area
of security. In this work, a framework for securing cloud-based PACS is
developed and implemented. |
Keywords: |
E-Health, Telemedicine, PACS, Cloud Computing, Security, Modalities, Medical
Images, Privacy |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
|
Title: |
PERFORMANCE ANALYSIS ON THE ARDUINO UNO MICROCONTROLLER-BASED WEIGHT MEASUREMENT
SYSTEM FOR TODDLER |
Author: |
KURNIAWAN TEGUH MARTONO, EKO DIDIK WIDIANTO, YUSUF BAHCTIAR |
Abstract: |
Errors in reading or processing input data occur frequently. They may be caused
by user negligence, the condition of the device used, or a non-standardized
system, and they eventually lead to problems with data accuracy. Today's
computing technology makes it possible to help people cope with these problems.
In measuring the weight of under-five-year-old children, errors in reading and
inputting data can occur; this is related to the varying abilities of each
health cadre. Also, the devices used to measure weight are not standardized:
some use electronic weighing scales and others use mechanical ones. Such
non-standardized devices can create problems, particularly in reading the data.
A weight measurement system using Arduino is one way to monitor the growth of
under-five-year-old children, and it can minimize the level of error in reading
and inputting data. The system was developed with the sequential development
method, and applying this model produced the best result. The system was tested
in two phases: phase 1 in the laboratory and phase 2 in a Posyandu (maternal
and child health centre). The test results showed an error percentage of 0.067%
and an accuracy of 99.93%. The time taken to send data from the system to the
mobile communication device was 1.97 seconds at a distance of 1 meter and 2.24
seconds at a distance of 5 meters. |
Keywords: |
Measurement, Arduino, Performance, Data, Weight |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
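The error and accuracy percentages quoted in the abstract are simple to compute; the sketch below shows the calculation on hypothetical readings, since the paper's own measurement data is not reproduced here.

```python
# Error/accuracy percentage of scale readings against reference weights.
# All numbers below are hypothetical placeholders, not the paper's data.
def error_percentage(measured, reference):
    """Mean absolute percentage error of the readings versus the reference weights."""
    return sum(abs(m - r) / r for m, r in zip(measured, reference)) / len(reference) * 100

reference = [10.0, 12.5, 9.8, 11.2]      # kg, calibrated reference weights (hypothetical)
measured = [10.01, 12.49, 9.79, 11.21]   # kg, load-cell readings (hypothetical)

err = error_percentage(measured, reference)
print(f"error = {err:.3f}%, accuracy = {100 - err:.2f}%")
```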
|
Title: |
AUTOMATIC SECURITY EVALUATION OF SPN-STRUCTURED BLOCK CIPHER AGAINST
RELATED-KEY DIFFERENTIAL ATTACKS USING MIXED INTEGER LINEAR PROGRAMMING |
Author: |
HASSAN MANSUR HUSSIEN, SHARIFAH MD YASIN, ZAITON MUDA, NUR IZURA UDZIR |
Abstract: |
Block cipher algorithms have become an essential domain in Information
Technology (IT) due to the ever-increasing number of attacks. In fact, it is
important to produce security evaluations of block cipher algorithms that
determine statistically non-random behavior under attack. In relation to this,
a theoretical attack such as related-key differential cryptanalysis (RDC) could
give rise to a more practical technique. Estimating lower bounds on the
immunity of the substitution-permutation network (SPN) block cipher structure
against RDC attacks is essential for providing a secure block cipher algorithm.
Currently, automatic tools are not available for estimating the immunity of SPN
block cipher structures against related-key differential attacks. We present a
search strategy that determines these lower bounds for the SPN block cipher
structure against RDC using Mixed Integer Linear Programming (MILP). This study
also aims to demonstrate the applicability and efficiency of the MILP technique
by examining the security of the Rijndael block cipher against the RDC attack.
We demonstrate the technique by calculating the number of active S-boxes in the
Rijndael block cipher. The extended MILP technique is able to provide an
automatic security estimation tool that gives accurate results. Overall, it is
applicable to a wide variety of block cipher algorithms, which makes it an
adaptable tool for industrial purposes and scholarly research. |
Keywords: |
Related-key Differential Cryptanalysis, Mixed Integer Linear Programming (MILP),
SPN-structured Block Cipher, Rijndael, and Automatic Search Tool
|
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
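To make the "count active S-boxes with MILP" idea concrete, the sketch below models a single AES-like column over two rounds in the style of word-oriented MILP models (as popularized by Mouha et al.): binary variables mark active words, an MDS branch-number constraint links the rounds, and the solver minimizes the number of active S-boxes. It uses the PuLP library as the solver interface and is a single-key toy, not the paper's related-key model of full Rijndael.

```python
# Toy MILP: minimum number of active S-boxes for one AES-like column over two
# rounds (branch number 5). This is a generic illustration, not the paper's
# related-key Rijndael model.
import pulp

prob = pulp.LpProblem("min_active_sboxes", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(4)]  # round-1 word activity
y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(4)]  # round-2 word activity
d = pulp.LpVariable("d", cat="Binary")                          # dummy: is the MDS layer active?

prob += pulp.lpSum(x) + pulp.lpSum(y)            # objective: total active S-boxes
prob += pulp.lpSum(x) + pulp.lpSum(y) >= 5 * d   # MDS branch-number constraint
for v in x + y:
    prob += d >= v                               # any active word activates the layer
prob += pulp.lpSum(x) >= 1                       # exclude the all-zero (trivial) difference

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(prob.objective))  # 5, the well-known 2-round bound for an MDS column
```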
|
Title: |
EVALUATING THE ACCURACY OF DATA MINING ALGORITHMS FOR DETECTING FAKE FACEBOOK
PROFILES USING RAPIDMINER, WEKA, AND ORANGE |
Author: |
MOHAMMED BASIL ALBAYATI, AHMAD MOUSA ALTAMIMI, DIAA MOHAMMED ULIYAN |
Abstract: |
Facebook is constantly growing, attracting more users with its high-quality
services for online socializing, information sharing, communication, and the
like. Facebook manages data for billions of people and is therefore a target
for attacks. As a result, sophisticated ways of infiltrating and threatening
this platform have been developed. Fake profiles, for instance, are created for
malicious purposes such as financial fraud, identity impersonation, spamming,
etc. Numerous studies have investigated the possibility of detecting fake
profiles on Facebook, each focusing on introducing a new set of features and
employing different machine learning algorithms as countermeasures. This paper
adopts a set of features from previous studies and introduces additional
features to improve classification performance for detecting fake profiles. The
performance of five supervised algorithms (Decision Tree, Support Vector
Machine (SVM), Naïve Bayes, Random Forest, and k-Nearest Neighbour (k-NN)) is
evaluated across three common mining tools (RapidMiner, WEKA, and Orange). The
experimental results showed that SVM, Naïve Bayes, and Random Forest had stable
performance with nearly identical results across the three mining tools.
However, Decision Tree outperformed the other classifiers on RapidMiner and
WEKA with accuracies of 0.9888 and 0.9827, respectively. Finally, we observed
that k-NN showed the most significant variation, with an accuracy of 0.9603 for
WEKA, 0.9145 for Orange, and 0.9460 for RapidMiner. These findings would be
useful for researchers who wish to develop a machine learning model to detect
malicious activities on social networks. |
Keywords: |
Fake Profiles Detection, Machine Learning, Classification, WEKA, RapidMiner,
Orange |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
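The paper runs its classifier comparison inside RapidMiner, WEKA and Orange; purely to illustrate the shape of such a comparison, the sketch below evaluates the same five algorithm families with scikit-learn on synthetic data standing in for the profile features.

```python
# Stand-in comparison of the five classifiers named in the abstract, using
# scikit-learn and synthetic data; the paper itself uses RapidMiner, WEKA and
# Orange on real profile features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=12, random_state=0)  # fake vs. real profiles
models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    accuracy = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: {accuracy:.4f}")
```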
|
Title: |
APPLICATIONS OF MACHINE LEARNING ALGORITHMS TO THE PROBLEM OF DETECTING UNKNOWN
DATA |
Author: |
AKHMER YERASSYL, AKHMER YERMEK, BEKTEMYSSOVA GULNARA UMITKULOVNA, USKENBAYEVA
RAISA KABIEVNA |
Abstract: |
Nowadays, the term big data refers to working with data of large volume and
diverse composition, frequently updated and located in different sources, in
order to increase work efficiency, create new products and increase
competitiveness. The problem of making sense of a large amount of unstructured
and previously unknown data for a given task is usually solved manually by a
team of mathematicians and analysts, who find it difficult to retain the value
of all the data and all the hidden relationships within the incoming
multidimensional array. However, in order to classify data, a full
understanding of it is required. Today, there can be many data sources: data
from sensors of critical equipment (the “Internet of Things”), transactional
buses and databases, electronic documents and paper media. As a result, the
quality of the classifiers of most learning models decreases significantly.
This article describes various machine learning methods for the classification
of previously unknown data and ways to improve the quality of the models. The
advantages and disadvantages of each model are described, as well as the
necessary and sufficient conditions for their use. As experimental data, the
Otto Product Classification dataset was taken from an open repository. |
Keywords: |
Machine Learning, E-Commerce, Classification, Decision Trees |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
|
Title: |
PREDICTION OF OSTEOARTHRITIS USING LINIER VECTOR QUANTIZATION BASED TEXTURE
FEATURE |
Author: |
LILIK ANIFAH, MAURIDHI HERY PURNOMO, TATI L R MENGKO |
Abstract: |
Osteoarthritis is estimated to be the eighth-leading nonfatal burden of disease
in the world, which is one important reason it is investigated. The status of
osteoarthritis is important because it is used as a basis for determining
treatment for patients. The aim of this research is to analyze the texture
features of the Junction Space Area (JSA) and to design a texture-feature-based
system to predict the severity of knee osteoarthritis using linear vector
quantization. The textures extracted in this study are first-order (FO),
second-order (GLCM), and gray-level run-length matrix (GLRLM) features. The
research procedure involves several stages: image processing, feature
extraction, the learning process, and the testing process. Feature extraction
yielded several FO, GLCM and GLRLM features for each cluster, with overlapping
conditions that make linear classification difficult, so learning used linear
vector quantization (LVQ). Feature extraction was carried out for both training
and testing data. The training process was divided into several stages, namely
first-order learning, GLCM learning, GLRLM learning, and combined-feature
learning; the learning process for the combined features used a learning rate
of 0.5 with epoch values of 1000, 5000, 10000, and 15000. The best results were
obtained by the system using LVQ based on GLCM features. The disadvantage of
this system, however, is that it cannot recognize grade 2 well. |
Keywords: |
Osteoarthritis, FO, GLCM, GLRLM, LVQ |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
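The LVQ learning step mentioned in the abstract (learning rate 0.5, class prototypes pulled toward same-class samples and pushed away from others) follows the generic LVQ1 rule sketched below; the feature vectors and grade labels here are hypothetical placeholders, not the paper's texture data.

```python
# Generic LVQ1 update rule, not the paper's trained model; the 3-feature texture
# vectors and osteoarthritis grade labels below are hypothetical.
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.5, epochs=100):
    """Pull the winning prototype toward a sample when labels agree, push it away otherwise."""
    prototypes = prototypes.astype(float).copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(prototypes - xi, axis=1))  # nearest prototype
            sign = 1.0 if proto_labels[j] == yi else -1.0
            prototypes[j] += sign * lr * (xi - prototypes[j])
    return prototypes

X = np.array([[0.20, 0.10, 0.30], [0.80, 0.90, 0.70], [0.25, 0.15, 0.35]])
y = np.array([0, 1, 0])  # hypothetical severity grades
protos = lvq1_train(X, y,
                    prototypes=np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]),
                    proto_labels=np.array([0, 1]))
print(protos)
```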
|
Title: |
OPEN AND CLOSED EYES CLASSIFICATION IN DIFFERENT LIGHTING CONDITIONS USING NEW
CONVOLUTION NEURAL NETWORKS ARCHITECTURE |
Author: |
NOOR D. Al-SHAKARCHY, ISRAA HADI ALI |
Abstract: |
Classifying eye status (open or closed) is a most important step in various
applications, such as driver fatigue detection and psychological state
analysis. Monitoring a driver to detect inattention is one important
application and can be achieved based on the period for which the eyes stay
closed. Therefore, the core of these systems is accurate classification of
whether the eyes are open or closed. Traditional methods for classifying open
and closed eyes suffer from sensitivity to luminance and perform inadequately
when eye images are obtained at various resolutions and under various
illumination conditions. Most computer vision applications deal with images in
different lighting conditions and resolutions, and real-life applications must
take into account both accuracy and execution time. Deep neural networks
provide an efficient way to extract robust features, such as eye features, that
determine whether the eyes are closed or open. The proposed method classifies
eye status (open or closed) in eye images under various illumination conditions
based on a new Deep Neural Network (DNN) architecture. The proposed Deep Neural
Network Classification (DNNC) achieves good accuracy with reasonable training
and simulation time. Speed is the most crucial point in real-time applications,
whereas existing methods have long training and inference times. The primary
goals of the proposed DNNC are to spend less time on execution and to be
implementable on hardware of reasonable capability. The proposed system
achieves a training-set accuracy of up to 96%, with loss values reaching
0.01. |
Keywords: |
Deep Neural Network (DNN), Convolution Layer, Max-Pooling Layer, ReLU Activation
Function, Sigmoid Activation Function |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
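The paper's exact layer configuration is not given in the abstract, so the sketch below only assembles the layer types listed in the keywords (convolution, max pooling, ReLU, sigmoid) into a small Keras binary classifier; the 24x24 grayscale input and all layer sizes are assumptions for illustration.

```python
# Generic small CNN for open/closed eye classification built from the layer types
# named in the keywords; input size and layer widths are assumptions, not the
# paper's DNNC architecture.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(24, 24, 1)),              # grayscale eye patch (assumed size)
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # 1 = open, 0 = closed
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)  # given an eye-image dataset
```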
|
Title: |
THE APPLICATION OF DATA MINING METHODS FOR THE PROCESS OF DIAGNOSING DISEASES |
Author: |
SAULE BELGINOVA, INDIRA UVALIYEVA, SAMIR RUSTAMOV |
Abstract: |
Today, the medical field holds a large amount of medical information that
requires proper processing and further use in the diagnosis and treatment of
various diseases. The development of computer technology provides tremendous
opportunities for collecting, processing, managing and researching medical
information to better understand the complex biological processes of life and
to help solve the problems of diagnosis and treatment in medical institutions.
Accurate diagnosis and proper treatment of patients are among the main tasks of
medical care. Therefore, data mining techniques, which are part of knowledge
discovery in databases, are becoming popular tools for medical researchers.
These tools make it possible to identify and use patterns and relationships
between numerous variables and to predict specific disease and treatment
outcomes. This paper describes the application of data mining methods to the
process of diagnosing diseases. To improve the effectiveness of data mining
methods for diagnosing diseases, a procedure for assessing the informativeness
of heterogeneous diagnostic indicators on the basis of an information-theoretic
approach is proposed. The diagnostic results are described on the basis of
revealing the relationship between magnesium supply and the risk of somatic
diseases, using intelligent data analysis. |
Keywords: |
Diagnosis, Information Signs, Intellectual Analysis Of Medical Data,
Diagnostically Valuable Signs, Clustering. |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
|
Title: |
A COMPARISON OF CHANGE MANAGEMENT GUIDELINES TO ADDRESS TECHNOLOGY ADOPTION
BARRIERS: A CASE STUDY OF HIGHER EDUCATIONAL INSTITUTIONS |
Author: |
HABIB ULLAH KHAN, RICHARD GARETH SMUTS |
Abstract: |
Emerging technologies are bolstering development across all domains of the
world. The education sector is an example of such innovation, on the basis of
which new means of pedagogy and andragogy are being explored with suitable
models. The present study aims to understand the differences between such
models and change management guidelines in addressing the barriers to
Technology Enhanced Learning (TEL) in Higher Educational Institutions (HEIs).
First-year students of the Department of Electrical, Electronic and Computer
Engineering (DEECE) at the Cape Peninsula University of Technology (CPUT) are
selected for the study using purposive random sampling. They are divided into
two groups and participate in a survey after being given some orientation on
change management models such as UTAUT and ADKAR and the roles of these models.
The students' opinions are collected regarding the ability of the models to
hedge against the barriers in the process of technology enhanced learning. The
collected data are analyzed to compute the means and standard deviations of the
scores and hence the difference between the opinions of the groups. To achieve
this, a t-test for the difference of means with unequal variances is used. The
results led to the acceptance of the null hypotheses framed on the basis of the
research questions. Among the two groups of students surveyed, it is shown that
the group exposed to the TEL approach is able to understand the barriers better
than its counterpart. In addition, drawbacks of the study, such as the limited
sample size, limited clarity about causality and ineffective participation of
students, are identified. Hence, future research is encouraged to overcome
these limitations and to examine the role of change management guidelines in
addressing technology adoption barriers in each setting. |
Keywords: |
Technology Enhanced Learning (TEL); Areas of Tension (AOT); Unified Theory of
Acceptance and Use of Technology (UTAUT); Adoption of Learning Technology (ALT);
Awareness/Desire/Knowledge/Ability/Reinforcement (ADKAR); Higher Education
Institution (HEI) |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
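The comparison of group means with unequal variances mentioned in the abstract is Welch's t-test; the sketch below runs it with SciPy on hypothetical survey scores, since the study's actual data is not reproduced here.

```python
# Welch's t-test (unequal variances) on hypothetical survey scores for the
# TEL-exposed and control groups; not the study's actual data.
from scipy import stats

group_tel = [4.2, 3.8, 4.5, 4.0, 4.3, 3.9, 4.4]      # hypothetical scores, TEL-exposed group
group_control = [3.1, 3.6, 3.0, 3.4, 3.2, 3.5, 2.9]  # hypothetical scores, control group

t_stat, p_value = stats.ttest_ind(group_tel, group_control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # reject equal means at the 5% level if p < 0.05
```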
|
Title: |
EXTRACTING PATTERN FROM LARGE DATA CORPUS USING APPROXIMATE REASONING AND
INTUITIONISTIC FUZZY CLUSTERING TECHNIQUES |
Author: |
Dr. ASHIT KUMAR DUTTA |
Abstract: |
Fuzzy logic is a familiar method for automating complex activities, and
intuitionistic fuzzy clustering is an extension of fuzzy logic. Approximate
reasoning is a concept for dealing with vague and complex data. Many real-time
problems have been solved with the help of intuitionistic fuzzy concepts.
Classification and clustering are well-known methods for extracting knowledge
from large data corpora. Existing clustering-based techniques cannot provide
optimal results at low computation cost, and their efficiency is not up to the
mark on large datasets. The objective of the proposed research is to provide an
efficient technique for extracting meaningful patterns from large datasets with
the least computation cost. The proposed method uses the intuitionistic fuzzy
clustering technique with approximate reasoning to extract patterns from
benchmark datasets. The experiments and results on a large data corpus show
that the performance of the proposed approach is satisfactory. |
Keywords: |
Fuzzy clustering, Pattern extraction, Pattern Miner, Fuzzy C-Means, Soft
computing |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
|
Title: |
A NEW PERMUTATION METHOD FOR SEQUENCE OF ORDER 2^8 |
Author: |
ABDULLAH AZIZ LAFTA, AMMAR KHALEEL ABDULSADAH, SAFAA JASIM MOSA |
Abstract: |
Permutation is the reordering of a set of objects, taking into consideration
the importance of the order of their locations. To increase security
performance, this paper presents a new permutation method for sequences of
order 2^8. In the proposed method, any information can be converted into a
sequence of values between 0 and 255, treated as a set of blocks. A so-called
lookup table is built that contains the frequencies and positions of the
original sequence values. In the experiments, twenty-six image files were used
as evidence for the proposed method. In addition, correlation and entropy
measures are computed to test the quality of the permutation. All tests show
that the method is significantly effective in reducing correlation and thereby
decreasing the perceptual information of the sequence. Hence, security is
improved. |
Keywords: |
Permutation, Algorithm, Frequency, Sequence, Byte |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
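The abstract evaluates permutation quality through correlation and entropy; the sketch below computes both measures on a byte sequence before and after a permutation, with a random position shuffle standing in for the paper's method. Note that permuting positions leaves the value histogram, and hence the entropy, unchanged, while the adjacent-byte correlation collapses.

```python
# Correlation and entropy checks for permutation quality; a random position
# shuffle stands in for the paper's permutation method.
import numpy as np

def adjacent_correlation(seq):
    """Pearson correlation between each byte and its successor."""
    return float(np.corrcoef(seq[:-1], seq[1:])[0, 1])

def shannon_entropy(seq):
    """Entropy in bits of the byte-value distribution (maximum 8 for order 2^8)."""
    counts = np.bincount(seq, minlength=256)
    p = counts[counts > 0] / len(seq)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
original = (np.arange(4096) // 16).astype(np.uint8)  # highly correlated toy byte sequence
permuted = rng.permutation(original)                 # stand-in permutation

print(adjacent_correlation(original), adjacent_correlation(permuted))  # ~1.0 vs. ~0.0
print(shannon_entropy(original), shannon_entropy(permuted))            # both 8.0 bits
```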
|
Title: |
AN EFFICIENT TECHNIQUE FOR CLUSTER NUMBER PREDICTION IN GRAPH CLUSTERING USING
NULLITY OF LAPLACIAN MATRIX |
Author: |
IMELDA ATASTINA, BENHARD SITOHANG, G.A. PUTRI SAPTAWATI, VERONICA S. MOERTINI |
Abstract: |
Clustering graph datasets that represent users' interactions can be used to
detect groups or communities. Many existing graph clustering algorithms require
an initial cluster number, and the closer the initial cluster number is to the
real or final one, the faster the algorithm converges. Hence, finding the right
initial cluster number is important for increasing the efficiency of these
algorithms. This research proposes a novel technique for computing the initial
cluster number using the nullity of the Laplacian matrix derived from the
adjacency matrix. The fact that the nullity relates to the eigenvalues of the
Laplacian matrix of a connected component is used to predict the best cluster
number. With this technique, trial-and-error experiments for finding the right
number of clusters are no longer needed. Experimental results on artificial and
real datasets, with modularity values used to measure cluster quality, show
that the proposed technique is efficient in finding initial cluster numbers,
which are also the real best cluster numbers. |
Keywords: |
Estimating the Number of Clusters, Nullity, Laplacian Matrix, Adjacency Matrix,
Graph Clustering |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
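The core observation of the abstract, that the nullity of the Laplacian equals the number of connected components and can therefore seed the cluster count, can be checked in a few lines; the sketch below uses a small hand-made adjacency matrix rather than the paper's datasets.

```python
# Nullity of the graph Laplacian L = D - A as an initial cluster-number estimate;
# the adjacency matrix is a toy example, not the paper's data.
import numpy as np

def initial_cluster_number(adjacency, tol=1e-9):
    """Count the (numerically) zero eigenvalues of L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigenvalues = np.linalg.eigvalsh(laplacian)   # Laplacian is symmetric
    return int(np.sum(np.abs(eigenvalues) < tol))

# Two disjoint triangles -> two connected components -> nullity 2
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1
print(initial_cluster_number(A))  # 2
```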
|
Title: |
THE ONTOLOGY APPROACH FOR INFORMATION RETRIEVAL IN LEARNING DOCUMENTS |
Author: |
LASMEDI AFUAN, AHMAD ASHARI, YOHANES SUYANTO |
Abstract: |
The number of documents on the Internet has increased exponentially, as users
upload various documents to the Internet every day. This raises the problem of
how to find documents whose content is relevant to user queries. Information
Retrieval (IR) has become a useful means of retrieving documents. However, IR
still uses a keyword-based approach to content search, which has limitations in
capturing the meaning of the content. Often, the keywords used mismatch or miss
the concepts in a collection of documents; as a result, IR returns documents
that are not relevant to the context of the information needed. To overcome
these limitations, this study applies ontology-based IR. The dataset used in
the study consists of 100 learning documents in the field of Informatics,
including lecture material, practicum modules, lecturer presentations,
proceedings articles, and journal articles. IR performance is evaluated by
comparing ontology-based IR with classical (keyword-based) IR. We evaluate IR
performance by executing ten test queries; the documents retrieved by each
query execution are assessed using the Precision, Recall, and F-Measure
evaluation metrics. The evaluation yields average recall, precision and
F-measure values for ontology-based IR of 88.11%, 83.38%, and 85.49%,
respectively. Meanwhile, classical IR obtains average recall, precision, and
F-measure of 78.70%, 70.96%, and 74.47%. Based on these Recall, Precision, and
F-Measure values, it can be concluded that the use of ontology can improve
document relevance. |
Keywords: |
Information Retrieval, Ontology, Learning Document, Precision, Recall,
F-Measure. |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
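For reference, the Precision, Recall and F-measure figures quoted above follow the standard set-based definitions (Rel is the set of relevant documents for a query, Ret the set retrieved); this is textbook material rather than anything specific to the paper.

```latex
% Standard evaluation metrics used in the abstract.
\[
\text{Precision} = \frac{|\text{Rel} \cap \text{Ret}|}{|\text{Ret}|}, \qquad
\text{Recall} = \frac{|\text{Rel} \cap \text{Ret}|}{|\text{Rel}|}, \qquad
F = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\]
```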
|
Title: |
RT-VC: AN EFFICIENT REAL-TIME VEHICLE COUNTING APPROACH |
Author: |
SALAH ALGHYALINE, NIDHAL KAMEL TAHA EL-OMARI, RAED M. AL-KHATIB, HESHAM Y.
AL-KHARBSHH |
Abstract: |
This paper proposes and implements an efficient real-time vehicle counting
(RT-VC) approach. The approach is based on efficient object detection and
tracking methods from computer vision: YOLO is used for object detection,
whereas a Kalman filter with the Hungarian algorithm is used for tracking. The
road is divided into two zones of interest by the end-user, and a vehicle is
counted if its trajectory crosses these zones. The experiments show that the
proposed system is very accurate in comparison with other existing approaches;
in the comparative evaluation, the proposed approach obtained accuracy above
90% for most of the tested highway videos. Therefore, the proposed approach can
work efficiently with many real-time surveillance systems and has the potential
to be used in many real road applications. |
Keywords: |
Artificial Neural Networks (ANNs), Convolutional Neural Network (CNN),
Artificial Intelligence (AI), Deep Learning Algorithms, Vehicle Counting,
Surveillance Systems, Traffic Managements. |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
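Of the pipeline described above, the step most amenable to a short illustration is the assignment of current-frame detections to existing tracks with the Hungarian algorithm; the sketch below builds a distance cost matrix from hypothetical box centers and solves it with SciPy, while the YOLO detection and Kalman prediction stages are not shown.

```python
# Detection-to-track association via the Hungarian algorithm on a distance cost
# matrix; coordinates are hypothetical, and the YOLO/Kalman steps are omitted.
import numpy as np
from scipy.optimize import linear_sum_assignment

track_predictions = np.array([[100, 220], [400, 180]])       # Kalman-predicted centers
detections = np.array([[405, 185], [102, 218], [700, 300]])  # current-frame detected centers

cost = np.linalg.norm(track_predictions[:, None, :] - detections[None, :, :], axis=2)
track_idx, det_idx = linear_sum_assignment(cost)
for t, d in zip(track_idx, det_idx):
    print(f"track {t} -> detection {d} (cost {cost[t, d]:.1f})")
# Unmatched detections (index 2 here) would start new tracks; a vehicle is counted
# once its track crosses the user-defined zones of interest.
```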
|
Title: |
CLUSTER TRACE ANALYSIS FOR PERFORMANCE ENHANCEMENT IN CLOUD COMPUTING
ENVIRONMENTS |
Author: |
HAYDER H. MAALA, SUHAD A. YOUSIF |
Abstract: |
Cloud computing has received considerable interest from research institutions,
developers, and individuals in recent years. A trace of a cluster of
approximately 12,500 machines, referred to as the “Google cluster trace”, has
been made available by Google. This paper examines the characteristics,
download process and tools, and analysis of this trace dataset in an attempt to
provide insight into trace data similar to the data found in cloud
environments. We analyzed the trace dataset using the K-means clustering
algorithm executed over SQL Server, applying the implemented methodology to
enhance cloud environment performance by allocating the data into clusters.
This allocation is intended to be used in distributing upcoming tasks to the
most suitable cluster, and then to the most suitable machine that covers their
resource needs. The clustering process generates clusters based on the CPU rate
of each task; these clusters represent the machines suitable for each range
(average) of CPU rate required by an upcoming task to be allocated. Based on
the relationship between task and machine data, machines can be selected from
each produced cluster to calculate the availability of CPU usage. This
calculation will be the cornerstone of future task allocation over cloud
cluster machines, depending on their resource availability and suitability to
future task resource requirements. |
Keywords: |
Google Cluster Trace; Clustering; Performance Enhancement; K-means; Cloud
Computing; |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
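As a toy version of the clustering step described above, the sketch below groups tasks by their CPU request rate with k-means; the values are hypothetical stand-ins for Google cluster trace fields, and the paper runs its clustering over SQL Server rather than scikit-learn.

```python
# K-means over task CPU rates as a stand-in for the paper's SQL Server-based
# clustering of the Google cluster trace; all values are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

cpu_rate = np.array([0.02, 0.03, 0.25, 0.27, 0.55, 0.60, 0.58, 0.04]).reshape(-1, 1)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(cpu_rate)

for center, label in sorted(zip(kmeans.cluster_centers_.ravel(), range(3))):
    members = cpu_rate.ravel()[kmeans.labels_ == label]
    print(f"cluster {label}: average CPU rate {center:.2f}, tasks {members}")
# An incoming task would be routed to the cluster whose average CPU rate matches
# its request, then to a machine in that cluster with enough spare CPU.
```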
|
Title: |
REVIEWS ON EDGE-BUNDLING TECHNIQUES FOR PARALLEL COORDINATES |
Author: |
N. NADIA S. ASRI, ZAINURA IDRUS, SITI ZZ ABIDIN, H. ZAINUDDIN, MT MISHAN |
Abstract: |
Parallel coordinates are a well-known visualization tool for high-dimensional
and multi-dimensional datasets. However, data clutter and overplotting are
major issues for parallel coordinates and are becoming worse in the era of Big
Data. Edge-bundling techniques address these problems by bundling similar edges
in the parallel coordinates. This paper provides an in-depth review of six
edge-bundling approaches: the cost-based, geometry-based, image-based,
force-directed, divided and hierarchical edge-bundling approaches. These
approaches are able to reduce visual clutter without changing the positions of
the nodes; only the shapes of the edges change. Based on the findings, each
approach supports certain features of parallel coordinates, such as shape,
scale, weight, position, direction and length. This paper suggests a
relationship diagram for all these features; the relationship model is proposed
in order to improve data visualization by removing data clutter. In summary,
edge-bundling techniques are applicable to parallel coordinates for visualizing
multivariate datasets. |
Keywords: |
Parallel Coordinates, Edge-bundling, Data Visualization, Information Visualize,
Big Data |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
|
Title: |
FACTORS AFFECTING ONLINE CONSUMER BEHAVIOR ON THE LEADING E-MARKETPLACE IN
INDONESIA |
Author: |
SUTJAHYO, TOGAR ALAM NAPITUPULU |
Abstract: |
The research objective is to analyze the factors that influence online consumer
behavior in e-marketplaces in Indonesia and their priorities. Questionnaires
were distributed to 384 respondents who were users of one of the top five
e-marketplaces in Indonesia. The data were analyzed using Structural Equation
Modeling (SEM) and processed using SmartPLS. The research model explains 80.50%
of online consumer behavior through several factors, namely seller's
reputation, trust in the market-maker, perceived ease of use, perceived
usefulness, perceived risk, after-sales services, customer services, trust in
the seller, and product quality. It can be concluded that the seller's
reputation is the factor that must be prioritized in increasing online consumer
behavior. |
Keywords: |
Online Consumer Behavior, E-Marketplace, SEM, SmartPLS, Indonesia |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
|
Title: |
DEEP BELIEF NETWORK BASED QUESTION ANSWERING SYSTEM USING ALTERNATE SKIP-N GRAM
MODEL AND NEGATIVE SAMPLING APPROACHES |
Author: |
K YOGESWARA RAO, GORTI SATYANARYANA MURTY, T. PANDURANGA VITAL |
Abstract: |
Question Answering (QA) systems have become essential owing to the increasing
amount of web content and the high demand for correct, concise information.
Aiming to enhance QA results for the Natural Language Processing (NLP)
community, most question answering systems exploit machine learning algorithms
to generate an appropriate answer to the user query. However, such systems fail
to predict accurately over large-scale data by themselves and need external
intervention to adjust the answer prediction. With the recent evolution of deep
learning, neural network architectures have shown their potential for QA: deep
learning models can determine issues in answer prediction on their own and
resolve them. The class of deep neural networks known as the Deep Belief
Network (DBN) is widely applied in question answering, especially for text
processing. Moreover, most text processing work exploits the skip-gram model
for representing relevant words as vectors over a massive volume of
unstructured text data. However, this yields inefficient outcomes, especially
when processing combinations of frequent data and stop words. To resolve these
issues, this paper introduces the Deep Neural network for Answering user
queries (DNA). The proposed DNA approach performs QA over a DBN by applying an
alternate skip-N gram model and negative sampling. The conditional probability
measurement develops the alternate skip-N gram model by alternately applying
the normal N-gram and skip-N gram models; this improves the efficiency of
relevant word-pair detection without increasing computational complexity. By
using only a small set of samples, negative sampling reduces the impact of
noise on the accuracy of the alternate skip-N gram model and improves the
efficiency of the QA system. Finally, DNA is evaluated using Java and compared
with the existing Unified model for Document-Based Question Answering (UDBQA).
The results show the efficiency of DNA; for instance, the UDBQA approach
reduces the F-measure by 16.3% compared to the DNA approach at 2000 queries. |
Keywords: |
Deep Belief Network, Question Answering, Skip-N Gram Model, Negative Sampling,
And Hidden Layers. |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
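Since the abstract builds on skip-gram with negative sampling, the standard objective for one target/context pair (from Mikolov et al.'s word2vec work) is reproduced below for orientation; the paper's alternate skip-N gram variant, which alternates between normal N-gram and skip-N gram contexts, is not captured by this formula.

```latex
% Skip-gram negative-sampling objective for target word w_I and context word w_O,
% with k negative samples drawn from the noise distribution P_n(w);
% v and v' are the input and output embedding vectors, \sigma the logistic function.
\[
\log \sigma\!\left(v'^{\top}_{w_O} v_{w_I}\right)
+ \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\!\left[\log \sigma\!\left(-v'^{\top}_{w_i} v_{w_I}\right)\right]
\]
```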
|
Title: |
ALTERNATING SENSING PROCESS TO PROLONG THE LIFETIME OF WIRELESS SENSOR NETWORKS |
Author: |
MOHAMMED AL-SHALABI, MOHAMMED ANBAR, ALAA OBEIDAT |
Abstract: |
In the last few decades, the usage of Wireless Sensor Networks (WSNs) has
increased dramatically due to the nature of this type of network, where nodes
are randomly deployed in a large area to sense attributes that human beings
cannot and to send the data to a main device called the Base Station (BS); this
process takes a time interval called a round. The deployed nodes may be located
near each other and may therefore sense approximately the same events. Sensing
and sending the same data to the BS consumes a lot of energy in these nearby
nodes, and they die quickly. An alternating sensing process is therefore
proposed. The goal of the proposed mechanism is to prolong the lifetime of the
network by increasing the lifetime of the nearby nodes and reducing their
energy consumption. This goal is achieved by scheduling the sensing process to
reduce the consumed energy as much as possible. MATLAB is used to evaluate the
proposed mechanism and compare it with the LEACH protocol. The results show
that the proposed mechanism outperforms the LEACH protocol in terms of network
lifetime and the number of packets transmitted to the BS by approximately 9%
and 17%, respectively. |
Keywords: |
Wireless Sensor Networks, Network Lifetime, Sensing Process, Nearby Nodes, LEACH
Protocol. |
Source: |
Journal of Theoretical and Applied Information Technology
15th April 2019 -- Vol. 97. No. 07 -- 2019 |
Full
Text |
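One simple way to picture the alternating sensing idea is a round-robin duty rotation inside each group of nearby nodes, so that only one member of a group senses per round; the sketch below is a toy of that idea with hypothetical groups, not the paper's mechanism or its LEACH comparison.

```python
# Toy round-robin sensing schedule: one node per proximity group senses in each
# round; groups and round count are hypothetical, not the paper's mechanism.
def sensing_schedule(groups, num_rounds):
    """groups: lists of node ids that would otherwise sense nearly identical data."""
    schedule = []
    for r in range(num_rounds):
        active = [group[r % len(group)] for group in groups]  # rotate the duty within each group
        schedule.append(active)
    return schedule

nearby_groups = [[1, 2, 3], [4, 5], [6]]
for r, active in enumerate(sensing_schedule(nearby_groups, 4)):
    print(f"round {r}: sensing nodes {active}")
```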
|