|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, from the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
you may include your personally identifiable information in the manuscript
before submitting it for review, as we will remove the necessary information
on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
December 2017 | Vol. 95 No. 24 |
Title: |
THIN FILM ROUGHNESS OPTIMIZATION IN THE TIN COATINGS USING GENETIC ALGORITHMS |
Author: |
NUR FAIQAH FAUZI , ABDUL SYUKOR MOHAMAD JAYA , MUATH IBRAHIM JARRAH , HABIBULLAH
AKBAR |
Abstract: |
Optimization is important for identifying optimal parameters in many disciplines
to achieve high-quality products, including the optimization of thin film
coating parameters. Manufacturing cost and customization of cutting tool
properties are the two main issues in the Physical Vapour Deposition (PVD)
process. The aim of this paper is to find the optimal parameters that yield
better thin film roughness in the PVD coating process. Three input parameters
were selected to represent the solutions in the target data, namely nitrogen gas
pressure (N2), argon gas pressure (Ar), and turntable speed (TT), while surface
roughness was selected as the output response for Titanium Nitride (TiN). Atomic
Force Microscopy (AFM) equipment was used to characterize the coating roughness.
In this study, an approach to modeling the surface roughness of TiN coating
using the Response Surface Method (RSM) was implemented to obtain a proper
output result. A quadratic polynomial model equation was developed to relate the
process variables to the coating roughness. Genetic algorithms (GAs) were then
used to optimize the coating roughness parameters. Finally, to validate the
developed model, actual experiments were conducted in a different experimental
run. In the RSM validation phase, the actual surface roughness fell within the
90% prediction interval (PI). The absolute range of residual errors (e) was very
low (less than 10), indicating that the surface roughness could be accurately
predicted by the model. In terms of optimization, GAs obtained a lower roughness
value than the experimental data, with a reduction ratio of 46.75%. |
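The GA-over-RSM loop described in the abstract can be sketched as follows. The quadratic coefficients, parameter ranges, and GA settings below are illustrative assumptions, not the paper's fitted model:

```python
import random

random.seed(42)

# Hypothetical quadratic RSM model for TiN roughness (coefficients are
# illustrative placeholders, not the paper's fitted values).
def roughness(n2, ar, tt):
    return (5.0 - 0.8 * n2 - 0.5 * ar - 0.3 * tt
            + 0.1 * n2 * n2 + 0.08 * ar * ar + 0.05 * tt * tt
            + 0.02 * n2 * ar)

BOUNDS = [(0.0, 8.0), (0.0, 8.0), (0.0, 8.0)]  # assumed N2, Ar, TT ranges

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    # Gaussian perturbation, clamped back into the parameter bounds.
    return [min(hi, max(lo, g + random.gauss(0, 0.3)))
            if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent.
    return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

def ga_minimize(pop_size=40, generations=60):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: roughness(*ind))
        elite = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda ind: roughness(*ind))

best = ga_minimize()
print("best parameters:", [round(g, 2) for g in best])
print("predicted roughness:", round(roughness(*best), 3))
```

The GA treats the fitted polynomial purely as a black-box fitness function, which is why it pairs naturally with an RSM model.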
Keywords: |
Roughness, TiN coating, PVD, GAs, RSM |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
ONLINE PERFORMANCE DIALOGUE SYSTEM MODEL (e-DP): A REQUIREMENT ANALYSIS STUDY AT
BATU PAHAT DISTRICT EDUCATION OFFICE |
Author: |
ASRAR NAJIB YUNOS, ABD SAMAD HASAN BASARI, AHMAD NAIM CHEE PEE, MD SAID MD
DAIMON, ABD RAHIM ABDUL RAHMAN, LOKMAN TAHIR |
Abstract: |
This study aims to identify school leaders' viewpoints on the need for
developing an Online Performance Dialogue (eDP) system. The model is based on a
method of inquiry for educational leaders and the District Education Office. The
UTAUT model is used as the theoretical basis for this study. A survey of 222
schools was used to investigate leaders' needs in adopting the Online
Performance Dialogue (eDP). The data obtained were analyzed through descriptive
statistics using the Statistical Package for the Social Sciences (SPSS)
software. Interpretation of the data is based on the mean and standard
deviation. Overall, the findings indicate that school leaders need the online
performance dialogue (eDP), with an average mean of 4.309. The findings also
reveal that school leaders have access to the technology necessary for eDP. The
results further showed the level of acceptance of, and the intention to use, the
eDP model among school leaders. |
Keywords: |
Model, Online Performance Dialogue, Need Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
ACCEPTANCE SAMPLING FOR NETWORK INTRUSION DETECTION |
Author: |
C. MADHUSUDHANARAO, Dr. M. M. NAIDU |
Abstract: |
The purpose of a Network Intrusion Detection System (NIDS) is to prevent the
entry of anomalous network flows into networks. Hundred-percent inspection of
all the fragments of network flows, to detect malicious fragments and thereby
anomalous flows, is highly prohibitive. The Selective Sampling Method (SSM)
considers only network flows of small size, not exceeding 80 fragments; further,
it is applicable only to detecting port scan and host scan attacks. This study
proposes a novel NIDS adapting the acceptance sampling method, referred to as
ASNID. It is applicable to detecting Land, Xmas, Nestea, Rose, WinNuke, NULL
Scan, Teardrop, Fraggle, port scan, and host scan attacks. A randomly chosen
sample of fragments from a network flow is inspected to detect whether the flow
is anomalous or not. This reduces the computational effort by a factor of k
(0 < k < 1), where k is the ratio of sample size to the total fragments of a
network flow. It is proved experimentally that the GMAI, the performance metric
of ASNID, tends to one as the sample size increases to 60%. It is also proved
that the GMAI increases as the percentage of anomalous flows increases. Hence,
ASNID would be of immense use in network intrusion detection. |
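The core sampling idea can be illustrated with a minimal sketch. The fragment labels and the any-malicious decision rule are simplifying assumptions for demonstration, not ASNID's actual inspection logic:

```python
import random

random.seed(7)

def inspect_flow(fragments, sample_ratio=0.6, is_malicious=lambda f: f == "BAD"):
    """Inspect only a random sample of a flow's fragments; flag the flow
    anomalous if any sampled fragment is malicious. Computational effort
    is reduced by the factor k = sample_ratio."""
    k = max(1, int(len(fragments) * sample_ratio))
    sample = random.sample(fragments, k)
    return any(is_malicious(f) for f in sample)

# A flow where 30 of 100 fragments are malicious: a 60% sample virtually
# always hits at least one malicious fragment.
flow = ["BAD"] * 30 + ["OK"] * 70
print(inspect_flow(flow))
print(inspect_flow(["OK"] * 50))  # a clean flow is never flagged
```

With sample_ratio = 0.6 the probability of missing every malicious fragment in this flow is astronomically small, which is consistent with the abstract's observation that GMAI tends to one as the sample size approaches 60%.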
Keywords: |
Acceptance Sampling, Selective Sampling, Geometric Mean Accuracy Index, Network
Intrusion Detection, Network Attacks |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
A SURVEY ON PRIVACY OF LOCATION-BASED SERVICES: CLASSIFICATION, INFERENCE
ATTACKS, AND CHALLENGES |
Author: |
MOHAMAD SHADY ALRAHHAL, MAHER KHEMAKHEM, KAMAL JAMBI |
Abstract: |
In recent years, Location-Based Services (LBS) have become very popular,
especially in light of the enhancements made daily to both mobile devices and
wireless networks. The popularity of LBS derives from their valuable benefits:
they enable users to search for the nearest Points of Interest (POI), share
ideas and comments, and enjoy playing games, making our lives easier and more
enjoyable. However, LBS have some risks associated with them. The privacy issue
is considered one of the most important risks in this field, since users are
forced to build their queries based on their real geographic locations. This
paper studies the different privacy protection approaches through a survey, in
which a new classification is proposed based on the amount of collaboration
between LBS users and the LBS server. The protection goals (identity ID,
Location Information LI, and Temporal Information TI) that any LBS user aims to
protect are defined and measured. Based on these protection goals, the most
advanced inference attacks (Location Homogeneity Attack LHA, Map Matching Attack
MMA, Query Sampling Attack QSA, and Semantic Location Attack SLA) are analyzed
and evaluated. As for challenges in the LBS privacy protection field, eight
research questions and open problems are explored. In addition, we present some
rule-based recommendations, which can help LBS users select the optimal way to
achieve a higher level of privacy protection. |
Keywords: |
Inference Attacks, Research Questions, Privacy Protection, Protection Goals,
Privacy Metrics, Rule. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
NETWORK LIFETIME MAXIMIZATION BASED ON ENERGY FORECAST AND COMPRESSIVE SENSING
WITH INTEGRATED SINK MOBILITY FOR HETEROGENOUS WIRELESS SENSOR NETWORKS |
Author: |
A.KARTHIKEYAN, Dr.T.SHANKAR, KRUTHIKA CHAKKA, ARCHIT DWIVEDI |
Abstract: |
Optimum usage of battery resources and efficient energy consumption are the
primary concerns and design parameters for any WSN. Irregular energy consumption
is the major problem in current WSNs. This paper focuses on efficiently using
energy resources and maximizing network lifetime through the application of
compressive sensing and an optimum cluster head (CH) selection process coupled
with a sink mobility model. Compressive sensing reduces the number of
transmissions required for complete data transfer, while the optimum CH election
mechanism gives efficient energy consumption in the initial stages of
transmission and data transfer. The residual energy of the network is further
optimized by using the sink mobility model, increasing the total lifetime of
high-energy nodes and thereby the network lifetime. The algorithm was simulated
in MATLAB and verified. |
Keywords: |
Wireless Sensor Network, Compressive Sensing, Sink Mobility |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
LIFETIME MAXIMIZATION OF HETEROGENEOUS WIRELESS SENSOR NETWORKS USING IMPROVED
ENERGY AWARE DISTRIBUTED CLUSTERING APPROACH WITH RENDEZVOUS NODES AND MOBILE
SINK |
Author: |
A.KARTHIKEYAN, Dr.T.SHANKAR, MUDITA PRAKASH, RAJAT PUSHKARNA |
Abstract: |
Energy efficiency and its optimization are of paramount importance in data
transmission in Wireless Sensor Networks. The lifetime of the deployed sensor
nodes should be maximized, with even distribution of energy across the network,
so that the transmitted data are not lost. The Mobile Sink (MS) helps enhance
the lifetime of the network by reducing the energy spent in long-distance
transmission. The introduction of Rendezvous Nodes (RN) helps in storage and
data transmission from the Cluster Head (CH) to the MS. In the proposed
protocol, I-EADC_RN, the field is divided into two regions: the sensing region
[A] and the storage region [B]. The nodes deployed within the storage region are
called RN. The nodes deployed in the sensing region participate in CH selection
based on the average energy computed from neighboring nodes. The RN store the
collected data and transmit it to the MS when the MS comes near them. The
objectives of the proposed algorithm are to maximize the effective lifetime of
the network with the help of the RN and MS; to reduce the energy hole problem
through efficient distribution of energy across the network, using efficient
competition radius assignment; to achieve efficient data transmission through
relay node selection based on the energy spent in transmission and reception;
and to minimize data loss through data storage in the RN. |
Keywords: |
CH - Cluster Head, RN - Rendezvous Node, RN Region, Deployment Region, MS -
Mobile Sink |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
ASYMPTOTIC ESTIMATES OF THE SOLUTION OF A SINGULARLY PERTURBED BOUNDARY VALUE
PROBLEM WITH BOUNDARY JUMPS |
Author: |
D.N. NURGABYL, U.A. BEKISH |
Abstract: |
In this article, a boundary value problem for ordinary differential equations of
the third order with a small parameter at the highest derivatives is considered.
The authors research singularly perturbed boundary value problems under the
condition that the real parts of the roots of the additional characteristic
equation have opposite signs. Boundary and initial functions are defined, and
their existence and uniqueness are proved. On the basis of the constructed
boundary and initial functions, an analytical representation of the solution of
the boundary value problem is found. On the basis of this analytical
representation, for differential equations of the conditionally stable type, we
describe the character of growth of the derivatives of the solution of the
perturbed problem as the small parameter tends to zero. The class of boundary
value problems with boundary jumps is singled out, and the sizes of the boundary
jumps are determined. Asymptotic estimates of the solution of the boundary value
problem are found, and estimates for the difference between solutions of the
degenerate and perturbed problems are obtained. |
Keywords: |
Asymptotic, Initial Function, Boundary Function, Boundary Value Problem,
Additional Characteristic Equation, Perturbed and Degenerated Problems |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
ENERGY CONSUMPTION PATTERNS OF MOBILE APPLICATIONS IN ANDROID PLATFORM: A
SYSTEMATIC LITERATURE REVIEW |
Author: |
HASAN SAJID ATTA AL NIDAWI, KOH TIENG WEI, KAREEM ABBAS DAWOOD, AMMAR KHALEEL |
Abstract: |
Studies related to the resource consumption of mobile devices and mobile
applications have been brought to the fore lately, as mobile applications depend
largely on their resource consumption. This study aims to identify the key
factors, and to build a holistic understanding of how each factor influences
Consumption Pattern (CP) effectiveness, for Android-platform mobile
applications. The study presents a Systematic Literature Review (SLR) of
existing studies that examined factors influencing the effectiveness of CP for
Android mobile applications and measured the effectiveness of CP. The SLR is
conducted to answer the following questions: (1) What is the evidence on CP
factors that drain the battery of a mobile device? (2) What energy conservation
techniques can overcome the factors that drain battery life? (3) How can
developers measure the effectiveness of an energy conservation technique? Forty
papers were analyzed in our synthesis of the evidence related to these research
questions. The analysis showed that 22 studies investigated how to measure the
effectiveness of energy conservation techniques, while 18 studies focused on a
better understanding of how the resources of mobile devices are actually spent;
of these, 2 studies show the effectiveness of early analysis of software
application design. Additionally, five factors that affect CP effectiveness were
identified: architecture, interface, behavior of the application, resources, and
network technologies. In total, 40 studies were identified and selected for
result synthesis in this SLR. The evidence shows that there are five factors
affecting CP effectiveness, three of which have received little attention among
developers in terms of choosing the most suitable option: software architecture,
application interface, and behavior of the application with respect to resource
consumption. |
Keywords: |
Energy Consumption Patterns, Energy Conservation Technique, Android Mobile
Application, Systematic Literature Review |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
USING MODIFIED BAT ALGORITHM TO TRAIN NEURAL NETWORKS FOR SPAM DETECTION |
Author: |
AMAN JANTAN, WAHEED A. H. M. GHANEM, SANAA A. A. GHALEB |
Abstract: |
Nowadays a monumental amount of spam and junk email clutters email inboxes and
storage facilities. Spam email has a significant negative impact on individuals
and organizations alike, and is a serious waste of resources, time, and effort.
The task of filtering spam or junk e-mail is complex and very difficult to
solve. Hence, learning-based filtering is considered an important method for
detecting spam emails, as the filtering technique requires training to epitomize
the knowledge used for detecting spam. Thus, Artificial Neural Networks are
relied on to create a learning-based filter. In this article, we propose a
Feedforward Neural Network (FFNN) for the identification of e-mail spam; the
weights and biases of this network model are optimized using a new modified bat
algorithm (EBAT). Experiments and results based mainly on two datasets (the
SPAMBASE and UK-2011 WEBSPAM datasets) show that the developed FFNN model
trained by EBAT achieves high generalization performance compared to other
optimization methods. |
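A simplified version of the standard bat algorithm (not the paper's EBAT variant) can be sketched training a single sigmoid neuron on a toy task; the dataset, bounds, and hyperparameters below are illustrative assumptions standing in for the full FFNN spam classifier:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    # Clipped to avoid math.exp overflow for extreme weights.
    if z > 60:
        return 1.0
    if z < -60:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set: logical OR, learned by one sigmoid neuron.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def loss(w):  # w = [w1, w2, bias]; mean squared error over DATA
    return sum((sigmoid(x1 * w[0] + x2 * w[1] + w[2]) - y) ** 2
               for (x1, x2), y in DATA) / len(DATA)

def bat_train(n_bats=30, iters=400, fmin=0.0, fmax=2.0):
    dim, lo, hi = 3, -20.0, 20.0
    pos = [[random.uniform(-4, 4) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=loss)[:]
    loudness, pulse = 0.9, 0.5
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * random.random()       # frequency tuning
            vel[i] = [v + (x - b) * f for v, x, b in zip(vel[i], pos[i], best)]
            cand = [max(lo, min(hi, x + v)) for x, v in zip(pos[i], vel[i])]
            if random.random() > pulse:                      # local walk near best
                cand = [b + 0.1 * random.gauss(0, 1) for b in best]
            if loss(cand) < loss(pos[i]) and random.random() < loudness:
                pos[i] = cand
            if loss(pos[i]) < loss(best):
                best = pos[i][:]
    return best

w = bat_train()
print("training loss:", round(loss(w), 4))
```

The same loop scales to a full FFNN by flattening all weights and biases into the position vector, which is the general scheme the abstract describes.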
Keywords: |
Artificial Intelligence (AI), Swarm Intelligence (SI), Feed-forward Neural
Network (FFNN), Bat Algorithm (BAT), Spam Email, Spam Detection |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
DEEP RESIDUAL LEARNING FOR TOMATO PLANT LEAF DISEASE IDENTIFICATION |
Author: |
HABIBOLLAH AGH ATABAY |
Abstract: |
Deep learning for plant leaf analysis has recently been studied in various
works. In most cases, transfer learning has been utilized, where the network
weights stored in pre-trained models are fine-tuned for use in the task at hand.
In this paper, Convolutional Neural Networks (CNNs) are employed to classify
tomato plant leaf images based on the visible effects of diseases. In addition
to transfer learning as an effective approach, training a CNN from scratch using
the deep residual learning method is also experimented with. To do so, a CNN
architecture is proposed and applied to a subset of the PlantVillage dataset
consisting of tomato plant leaf images. The results indicate that the suggested
architecture outperforms VGG models pre-trained on the ImageNet dataset, in both
accuracy and the time required for re-training, and it can be used on a regular
PC without any extra hardware. A common feature visualization and verification
technique is also applied to the results, and further discussion highlights the
importance of the background pixels surrounding the leaves. |
Keywords: |
Deep Learning, Convolutional Neural Network, Plant Leaf Disease, Tomato Disease |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
EMPIRICAL STUDIES ON CLOUD COMPUTING ADOPTION: A SYSTEMATIC LITERATURE REVIEW |
Author: |
ABDULNOOR SALEH, SULFEEZA MOHD DRUS, SITI S. M. SHARIFF |
Abstract: |
Cloud computing is a major topic of discussion among IT professionals. This
practice is a good alternative for higher education institutions with limited
budgets, allowing them to operate their information systems efficiently without
spending high capital on infrastructure resources. According to the annual
report of educational indicators of the Republic of Yemen, the education and
training sectors in Yemen face several obstacles and challenges in delivering
quality education to the entire population of the country; for example, limited
infrastructure resources and IT budgets, and a lack of teaching staff, technical
experts, and IT-skilled personnel. This study aims to (1) review empirical
studies on cloud computing adoption in general, (2) identify the influencing
factors of cloud computing adoption, and (3) categorize these influencing
factors into technological, organizational, environmental, and individual
factors. The influencing factors of cloud computing adoption in the government,
industrial, and educational sectors are also reviewed, and cloud computing
adoption in the educational sector is clearly demonstrated. A total of 50 models
are reviewed and discussed. Findings show that theoretical and empirical studies
on cloud computing adoption in the educational sector are few; 18% and 82% of
studies investigate factors related to cloud computing adoption in the
educational and industrial sectors, respectively. Furthermore, 26% of studies
use individual-level theory for cloud computing adoption, 61% use
organizational-level theory, and 13% integrate individual- and
organizational-level theories. |
Keywords: |
Cloud Computing, Cloud Computing Adoption, Higher Education Institution (HEI),
Individual-Level Theory, Organizational-Level Theory |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
DEVELOPING LEARNERS EMPLOYABILITY SKILL OF CRITICAL THINKING THROUGH
COLLABORATIVE ONLINE DISCUSSION AT TERTIARY INSTITUTION |
Author: |
MOHD FADZLI ALI, LOKMAN MOHD TAHIR, NURUL NADWA ZULKIFLI |
Abstract: |
Various studies have reported that Malaysian students at tertiary institutions
lack certain employability skills, namely critical thinking. The lack of
critical thinking skills is identified when students are unable to perform
tasks, especially problem-solving tasks. Consequently, students who lack this
skill remain unemployed upon graduation. This issue must be addressed with a
decisive approach. Instead of blaming unemployed graduates, tertiary
institutions with e-learning technology are urged to play a significant role.
Through online forum discussions, students were taught to use Socratic questions
(Paul, 1993), as this helps them develop critical thinking skills by looking
deeper into the viewpoints, perspectives, and evidence when analysing
assumptions (Walker, 2005). This study adopts a mixed-method case study
approach. The quantitative data, derived from Watkins and Corry's (2005)
questionnaires, measure the students' e-learning readiness and their usage of
online forum discussions. The qualitative data, derived from the transcripts of
the students' online discussions, explain how the students develop their
critical thinking using Socratic questions. The transcripts were analysed in two
stages using (1) the Socratic Question Prompts and (2) a content analysis
approach based on the Interaction Analysis Model by Gunawardena et al. (1997).
The findings indicated that students' critical thinking skills could be
developed from collaborative learning through online forums. The outcome could
be used to propose best practices for lecturers at higher learning institutions
to encourage students to think critically through collaborative online
discussion. |
Keywords: |
Critical Thinking, Employability Skills, Online Discussions, Socratic
Questioning Prompt, Interaction Analysis Model (IAM) |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
MINING OPTIMIZED POSITIVE AND NEGATIVE ASSOCIATION RULE USING ADVANCE ABC
ALGORITHM |
Author: |
I.BERIN JEBA JINGLE, J.JEYA A.CELIN |
Abstract: |
One of the important techniques in data mining is association rule mining, which
extracts potential information and associations from large databases.
Association rule mining poses several research challenges. One challenge is the
generation of accurate frequent and infrequent itemsets while reducing the
candidate itemsets and the memory space required. Another is the generation of
positive and negative association rules with high confidence and high quality.
A further challenge is to mine optimized positive and negative association
rules. Numerous existing algorithms have been used to address these challenges,
but many of them suffer from data loss and a lack of efficiency and accuracy,
which also results in redundant rules and wasted memory space. A major issue in
such analytic optimization methods is specifying the initialization limit on
which the quality of the association rules relies. In the proposed work, it is
shown that efficient optimized positive and negative association rules can be
mined by the proposed Advance ABC algorithm. The Advance ABC (Artificial Bee
Colony) algorithm is a highly efficient swarm-intelligence-based optimization
algorithm; this metaheuristic technique is inspired by the natural food-foraging
behaviour of honey bees. The results show that the proposed algorithm can mine
very high-confidence, non-redundant positive and negative association rules in
less time. |
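The positive and negative rule notions the abstract relies on can be illustrated with a minimal sketch; the toy transaction set is an assumption for demonstration. For a positive rule A -> B, confidence is supp(A and B) / supp(A); for the negative rule A -> not-B it is (supp(A) - supp(A and B)) / supp(A):

```python
# Toy transaction database (illustrative assumption).
TRANSACTIONS = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(1 for t in TRANSACTIONS if itemset <= t) / len(TRANSACTIONS)

def confidence_pos(a, b):
    """Confidence of the positive rule A -> B."""
    return support(a | b) / support(a)

def confidence_neg(a, b):
    """Confidence of the negative rule A -> not-B."""
    return (support(a) - support(a | b)) / support(a)

a, b = {"bread"}, {"milk"}
print(round(confidence_pos(a, b), 3))  # 0.667
print(round(confidence_neg(a, b), 3))  # 0.333
```

The ABC metaheuristic then searches over candidate rules, using measures like these (plus quality and redundancy criteria) as its fitness function.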
Keywords: |
Data Mining, Association Rule Mining, Apriori Algorithm, Accurate Multi-Level
and Multi-Support, Advance ABC Algorithm, GPNAR |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
CLASSIFICATION OF EMG SIGNALS BASED ON CURVELET TRANSFORM AND RANDOM FOREST TREE
METHOD |
Author: |
SUBHANI SHAIK, DR. UPPU RAVIBABU |
Abstract: |
Electromyography (EMG) is one of the most powerful signal-processing tools for
studying the electrical activity of the neuromuscular system associated with a
corresponding muscle. In this paper, an analysis of EMG signals using the
curvelet transform and the random forest tree method is presented. The EMG
signal includes noise from dissimilar media. The curvelet transform is used to
clear away noise from the surface electromyography signal, and higher-order
statistics are used to analyze the signal. In the first level, the surface of
the EMG signal is evaluated and features are extracted using the curvelet
transform. In the second level, the best-quality EMG segment is chosen, and the
useful data signal is reconstructed using a random forest classifier. The
intention of this work is to introduce a novel approach for discovering,
analyzing, and classifying EMG signals. The proposed method is applied to a
clinical dataset, and parameters such as the root mean square, correlation
coefficient, and mean absolute value are calculated to obtain better class
separability. A comparison is made with other traditional methods; the EMG
characteristics extracted from the reconstructed EMG signals provide better
class separability in feature space. Statistical results show that a maximum
classification accuracy of 99% and a higher information transfer rate are
achieved. |
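The time-domain parameters the abstract mentions (root mean square, mean absolute value, correlation coefficient) can be sketched directly; the sample signals below are illustrative assumptions, not clinical data:

```python
import math

def emg_features(signal):
    """Common time-domain features for an EMG segment."""
    n = len(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)  # root mean square
    mav = sum(abs(x) for x in signal) / n            # mean absolute value
    return rms, mav

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(emg_features([1.0, -1.0, 1.0, -1.0]))   # (1.0, 1.0)
print(correlation([1, 2, 3], [2, 4, 6]))      # perfectly correlated: 1.0
```

In the pipeline described, such features would be computed on curvelet-denoised segments before being fed to the random forest classifier.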
Keywords: |
Electromyography (EMG) Signal, Curvelet Transform, Random Forest Tree, Clinical
Dataset. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
WARNING CRITERION ONTOLOGY FOR MEASURING OF COMPLIANCE IN STANDARD OPERATING
PROCEDURE IMPLEMENTATION |
Author: |
HANIF AFFANDI HARTANTO, RIYANARTO SARNO, NURUL FAJRIN ARIYANI |
Abstract: |
Management processes that are not responsive to abnormal activities can affect
the performance of an organization. The Publishing Workflow Ontology (PWO) is
modified to detect abnormal activities occurring in the event logs, such as a
wrong pattern, a wrong resource, exceeding the maximum throughput time, and a
wrong duty combination. The modified model, called the Warning Criterion
Ontology (WCO), represents the knowledge base for detecting abnormal activity in
business processes; in particular, this knowledge can be used to distinguish
activities that any superior is allowed to perform from activities that should
only be performed by the direct superior. The model uses rules for reasoning,
after which SPARQL queries are used to detect abnormal activity. The results of
abnormal activity detection can be used as attributes for assessing Key
Performance Indicator (KPI) activity when determining compliance in Standard
Operating Procedure (SOP) implementation. This research obtained an accuracy of
96.29% compared with the experts' assessment scores. |
Keywords: |
Warning Criterion Ontology, Key Performance Indicator, the Publishing Workflow
Ontology |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
STUDYING NETWORK TRAFFIC USING NONLINEAR DYNAMICS METHODS |
Author: |
SHARAFAT A. MIRZAKULOVA, VYACHESLAV P. SHUVALOV, ALEKSEY A. MEKLER |
Abstract: |
With the development of the Internet, the constantly growing number of network
users and their mutual exchange of information are becoming an important
communication bridge. However, this causes a series of technical difficulties,
one of which is the growing requirements on network and server equipment and its
maintenance. Therefore, the purpose of this study is to develop a computer
program for training a neural network based on a computer network traffic table.
A set of methods was used to achieve this goal, including analysis,
deterministic chaos, and systematization. The study used software packages such
as TISEAN, MATLAB, NetEmul, and Excel, and generalized the experience relevant
to the problem at hand. The study calculated the Lyapunov exponent, which
characterizes the presence of chaos in the system. Analysis of the Lyapunov
exponent enables the use of nonlinear dynamics methods to study the nature of
the incoming and outbound traffic. With the help of the developed program, the
neural network router is capable of predicting short-term parameters of a
computer network; this information is sent to the system administrator, allowing
the router to adapt to the estimated changes in the computer network. |
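The Lyapunov-exponent criterion for chaos can be illustrated on the logistic map, a standard chaotic system; this is a generic sketch, not the traffic data from the study. A positive largest Lyapunov exponent (here the orbit average of log |f'(x)|) signals chaos:

```python
import math

def lyapunov_logistic(r=4.0, n=100_000, x0=0.3):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log |f'(x)| = log |r*(1 - 2x)|."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic()
print(round(lam, 3))  # positive (theoretically ln 2 ~ 0.693): chaotic
```

For measured traffic, where no closed-form map is available, the exponent is instead estimated from the reconstructed phase space (as TISEAN does), but the interpretation is the same: a positive exponent justifies applying nonlinear dynamics methods.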
Keywords: |
Distribution series, Self-similarity, Chaotic processes, Phase portrait,
Mutual information |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
IMPLICATIONS OF DISCRETIZATION TOWARDS IMPROVING CLASSIFICATION ACCURACY FOR
SOFTWARE DEFECT DATA |
Author: |
POOJA KAPOOR, DEEPAK ARORA, ASHWANI KUMAR |
Abstract: |
Since the advent of new software architectures, paradigms, and technologies,
software design and development has faced cutting-edge requirements for staying
on the right track in terms of software quality and reliability. This motivates
the prediction of defects in software at the early stages of its development.
Machine learning algorithms now play a crucial role in the classification and
prediction of possible bugs during the system design phase. In this research
work, a discretization method is proposed based on Object-Oriented metrics
threshold values in order to achieve better classification accuracy on a given
dataset. For the experiments, the JEdit, Lucene, Tomcat, Velocity, Xalan, and
Xerces software systems from the NASA repositories were considered, and
classification accuracies were compared with existing approaches with the help
of the open-source WEKA tool. The Object-Oriented CK metrics suite was chosen
for this study due to its wide applicability in the software industry for
software quality prediction. The experiments show that the Naive Bayes and Voted
Perceptron classifiers perform well and provide the highest accuracy levels with
the discretized dataset values. The performance of these classifiers is checked
and analyzed on different performance measures, such as ROC, RMSE, precision,
and recall. The results show significant improvements in classification accuracy
when discrete features of the individual software systems are used. |
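Threshold-based discretization of CK metrics can be sketched as follows; the threshold values are hypothetical placeholders, not the thresholds derived in the paper:

```python
# Hypothetical CK-metric thresholds (illustrative only; the paper derives
# its own threshold values for each metric).
THRESHOLDS = {"wmc": 20, "cbo": 9, "rfc": 40, "dit": 4}

def discretize(metrics):
    """Map each numeric CK metric to a binary 'low'/'high' label, which
    classifiers such as Naive Bayes can then consume as nominal features."""
    return {name: ("high" if value > THRESHOLDS[name] else "low")
            for name, value in metrics.items()}

labels = discretize({"wmc": 35, "cbo": 3, "rfc": 41, "dit": 2})
print(labels)  # {'wmc': 'high', 'cbo': 'low', 'rfc': 'high', 'dit': 'low'}
```

Replacing raw counts with such nominal bins is what allows the discretized datasets to behave differently (often better) under classifiers like Naive Bayes and Voted Perceptron.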
Keywords: |
Discretization, Software Defect Prediction, Classification, CK metrics |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
COLLABORATIVE DETECTION AND FILTERING TECHNIQUES AGAINST DENIAL OF SERVICE
ATTACKS IN CLOUD COMPUTING |
Author: |
IMAN EL MIR, ABDELKRIM HAQIQ, DONG SEONG KIM |
Abstract: |
Nowadays, cloud computing technology is experiencing rapid growth in service
demand and in the number of cloud clients, which confronts business
organizations with a critical issue that must be addressed: how to secure the
Cloud Data Center (CDC). As a result, this major challenge has attracted the
attention of several research works. The attacker seeks service unavailability,
resource dysfunction, and the maximization of financial losses. There are many
types of attack, such as Denial of Service (DoS) and Distributed Denial of
Service (DDoS), where the attacker's key objective is to overload the system
network: a huge volume of data is sent to a victim server as flooding packets so
as to block legitimate users from being served. This paper introduces a
defending system for DoS attack mitigation in the CDC environment. It discusses
the different techniques of DoS attacks and their countermeasures, as well as
proactive filtering and detection mechanisms. To validate the proposed solution,
we implemented our analytical model in a discrete-event simulator. The proposed
mathematical model considers many performance parameters, including response
time, throughput, drop rate, computing resource utilization, mean waiting time
in the system, and the mean number of legitimate clients in the system, while
varying the attack arrival rate. We have also estimated the cost incurred by the
attack. Through performance analysis using queueing theory and simulation
experiments, the proposed solution would improve the flexibility and accuracy of
DoS attack prevention, and would make the cloud computing environment more
secure. |
Keywords: |
Queueing Theory, Cloud Computing, Security, Performance Modeling, DoS Attacks |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
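The paper's analytical model is not given in the abstract; as a hedged sketch only, the metrics it lists (utilization, mean number in system, mean response time) can be illustrated with a plain M/M/1 queue in which attack traffic inflates the arrival rate. All rates below are invented for illustration:

```python
def mm1_metrics(lam_legit, lam_attack, mu):
    """Steady-state M/M/1 metrics when attack traffic inflates arrivals.

    lam_legit, lam_attack: legitimate and attack arrival rates (req/s)
    mu: service rate of the server (req/s)
    """
    lam = lam_legit + lam_attack          # total arrival rate
    rho = lam / mu                        # server utilization
    if rho >= 1:
        raise ValueError("unstable: arrivals exceed service capacity")
    L = rho / (1 - rho)                   # mean number in system
    W = L / lam                           # mean response time (Little's law)
    return {"utilization": rho, "mean_in_system": L, "mean_response_time": W}

baseline = mm1_metrics(2.0, 0.0, 10.0)    # no attack
flooded = mm1_metrics(2.0, 6.0, 10.0)     # DoS traffic added
```

Comparing `baseline` and `flooded` shows the kind of degradation (utilization 0.2 vs. 0.8, mean response time 0.125 s vs. 0.5 s) that a detection-and-filtering front end would aim to prevent.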
|
Title: |
COTTON TEXTURE SEGMENTATION BASED ON IMAGE TEXTURE ANALYSIS USING GRAY LEVEL RUN
LENGTH AND EUCLIDEAN DISTANCE |
Author: |
SABIQ ADZHANI HAMMAM, TITO WALUYO PURBOYO, RANDY ERFA SAPUTRA |
Abstract: |
Combed cotton and cotton-viscose (CVC) are two types of cotton commonly used as
the main ingredient in the manufacture of clothes. Clothes made from combed
cotton are known as good materials. CVC cotton is made by combining combed
cotton and viscose cotton, so the quality of CVC cotton is assumed to be below
that of combed cotton. The manual way to distinguish combed cotton from CVC
cotton is by wearing clothes made of each. The textures of combed cotton and
CVC fabrics can, however, be analyzed by an image texture segmentation process
based on image texture analysis to obtain patterns indicating the type of
cotton. This research explains how the texture of cotton, through the image
texture segmentation process, can yield feature values that can be used in the
classification process to determine the type of cotton. The image texture
segmentation includes a grayscaling process, image normalization, and texture
feature extraction. Feature extraction uses the Gray Level Run Length (GLRLM)
method to obtain feature values from the texture for the classification
process. Classification using the Euclidean distance method with 4 testing
images and 2 training images, resizing the original image to 5x5 and 10x10
pixels, generates 100% accuracy. These results indicate that the Euclidean
distance method yields high accuracy. |
Keywords: |
Image Texture Segmentation, Texture Analysis, Gray Level Run Length (GLRLM),
Euclidean Distance, Cotton |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
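As a rough illustration of the GLRLM idea named in the abstract (not the authors' implementation), the following computes horizontal gray-level runs for a tiny image and derives one standard GLRLM feature, Short Run Emphasis; the 2x4 image is made up for the example:

```python
from collections import Counter

def glrlm_runs(image):
    """Count horizontal runs as (gray level, run length) for a 2-D gray image."""
    runs = Counter()
    for row in image:
        i = 0
        while i < len(row):
            j = i
            while j < len(row) and row[j] == row[i]:
                j += 1                     # extend the run of equal pixels
            runs[(row[i], j - i)] += 1
            i = j
    return runs

def short_run_emphasis(runs):
    """SRE: emphasizes short runs; fine textures score higher."""
    total = sum(runs.values())
    return sum(n / (length * length) for (_, length), n in runs.items()) / total

img = [[0, 0, 1, 1],
       [0, 2, 2, 2]]
runs = glrlm_runs(img)      # {(0,2):1, (1,2):1, (0,1):1, (2,3):1}
sre = short_run_emphasis(runs)
```

A feature vector of such GLRLM statistics per image, compared by Euclidean distance to the training images' vectors, is the classification scheme the abstract describes.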
|
Title: |
A SYSTEMATIC REVIEW ON THE RELATIONSHIP BETWEEN STOCK MARKET PREDICTION MODEL
USING SENTIMENT ANALYSIS ON TWITTER BASED ON MACHINE LEARNING METHOD AND
FEATURES SELECTION |
Author: |
GHAITH ABDULSATTAR A.JABBAR ALKUBAISI, SITI SAKIRA KAMARUDDIN, HUSNIZA HUSNI |
Abstract: |
This study is mainly a systematic review which discusses studies related to the
role of sentiment analysis, Twitter data, and features in predicting stock
market returns. Studies show that it is not only the historical financial data
of firms or stock markets that can predict stock market returns; the sentiments
and emotions of people can also help. One primary source of information now
available to everyone is Twitter, and tweets made by significant personalities
affect the emotions of people, which will ultimately affect their investment
decisions. If the news is positive, it will most probably affect people
positively, and they will invest more in the stocks of that firm; if the news
is negative, the reaction is expected to be the opposite. Besides the
sentiments of people, there are features, such as spatial and temporal ones,
that can also affect stock market returns. The spatial feature is a
geographical division: it can cover either the different emotions of people
from different geographical regions, or other stock markets that can affect the
home stock market through some relation. Similarly, the temporal feature
reflects change over a span of time: people may hold different opinions at
different times and behave differently according to their sentiments at a
specific time. All these factors help in predicting future stock market
returns. |
Keywords: |
Sentiment Analysis, Features, Spatial, Temporal, Stock Market |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
EVALUATION OF LEARNING PROGRAMS AND COMPUTER CERTIFICATION AT COURSE INSTITUTE
IN BALI USING CSE-UCLA BASED ON SAW SIMULATION MODEL |
Author: |
I NYOMAN JAMPEL, I WAYAN LASMAWAN, I MADE ARDANA, I PUTU WISNA ARIAWAN, I MADE
SUGIARTA, DEWA GEDE HENDRA DIVAYANA |
Abstract: |
One form of competence assessment held at course institutes is computer
certification for students, carried out after the students finish their
training/learning. The main reason computer certification is held at course
institutes is to prepare graduates to enter the workforce with high competence,
ready to compete in the search for work. In the era of the AEC (ASEAN Economic
Community), job seekers cannot rely on a certificate of learning alone; what is
most needed is competence that can be demonstrated by a certificate of
competence. In the implementation of a computer certification program, a form
of educational service organized by course institutes, constraints were still
found. To be able to find those constraints and to make improvements and
refinements against them, a program evaluation needs to be carried out. This
research aims to determine the effectiveness of the implementation of the
learning and computer certification program at course institutes. The research
is evaluative research using the CSE-UCLA model. The subjects of this research
consist of teachers, program managers, and a team of students. Data collection
was done through questionnaires, observation, interviews, and documentation.
The data analysis technique is descriptive quantitative percentage analysis,
used to analyze the effectiveness of every component in the CSE-UCLA (Center
for the Study of Evaluation-University of California at Los Angeles) model, and
qualitative descriptive analysis, used to analyze the constraints that lead to
results not in accordance with the evaluation's standards of success. In
addition, the dominant constraints affecting the implementation of the program
were obtained using the SAW (Simple Additive Weighting) simulation model
calculation method. The results of the evaluation of the implementation of the
program as a whole show an effectiveness of 80.44%, indicating that the
implementation of the program belongs to the good category. The dominant
constraint affecting the program implementation is the empowerment of technical
personnel for operational management. |
Keywords: |
Evaluation, Learning, Computer Certificate, CSE-UCLA, SAW |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
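The abstract reports that the dominant constraint was ranked with SAW (Simple Additive Weighting). The generic SAW procedure normalizes each criterion column (benefit criteria by x/max, cost criteria by min/x) and then sums the weighted normalized scores per alternative; the constraint names, scores, and weights below are invented purely for illustration:

```python
def saw_rank(alternatives, weights, benefit):
    """Rank alternatives by Simple Additive Weighting.

    alternatives: dict name -> list of criterion scores
    weights: one weight per criterion (should sum to 1)
    benefit: per criterion, True if higher-is-better, False if cost
    """
    cols = list(zip(*alternatives.values()))      # scores grouped by criterion
    scores = {}
    for name, row in alternatives.items():
        s = 0.0
        for j, x in enumerate(row):
            norm = x / max(cols[j]) if benefit[j] else min(cols[j]) / x
            s += weights[j] * norm
        scores[name] = s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical constraint severity scores on two benefit criteria.
constraints = {"technical staffing": [70, 3],
               "funding": [90, 5],
               "facilities": [80, 4]}
ranking = saw_rank(constraints, [0.6, 0.4], [True, True])
```

The top entry of `ranking` plays the role of the "dominant constraint" in the study's analysis.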
|
Title: |
ECLIPSE JDT-BASED METHOD FOR DYNAMIC ANALYSIS INTEGRATION IN JAVA CODE
GENERATION PROCESS |
Author: |
A. ELMOUNADI, N. BERBICHE, F.GUEROUATE, N. SEFIANI |
Abstract: |
In software engineering, the Unified Modeling Language (UML) is generally used
as the de facto standard notation for modeling in the analysis and design of
object-oriented software systems. Throughout the modeling phase, structural and
behavioral elements go together, because they have a complementary relationship
in the understanding of a system's architecture. However, structural analysis
has always attracted the interest of designers more than behavioral analysis,
due to its prominent role in code generation processes. This vision has
influenced computer-aided software engineering (CASE) tools and the
model-driven engineering (MDE) approach. As a result, when using CASE tools and
taking up the MDE approach as it is, the obtained code artefacts are incomplete
and become the developers' responsibility. The models' abstraction is therefore
broken, which leads to a paradoxical situation when adopting model-driven
development. To cope with this challenge, the purpose of our paper is to bring
balance to the design stage by integrating behavioral analysis into the code
generation process, in order to empower and promote delivering applications
without the need for hand coding. |
Keywords: |
MDE, UML, Dynamic Analysis, Abstract Syntax Tree, Code Generation |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
DECIDABILITY PROPERTIES OF THE CLASS OF FORMAL LANGUAGES RECOGNIZED BY K-EDGE
FINITE STATE AUTOMATA |
Author: |
ANUCHIT JITPATTANAKUL |
Abstract: |
Finite state automata (FSA) are computational models used for studying the
theoretical complexity of computational problems. The computational model of
k-edge finite state automata (k-FSA) was introduced to compute naturally
ordered data, such as data in the music processing and time series domains.
k-edge finite state automata are of theoretical interest in that the number of
their edges is bounded by a value k. In order to use this model in practice,
some theoretical properties need to be investigated by way of their classes of
formal languages. This paper investigates the closure properties and
decidability of the class of formal languages recognized by k-edge finite state
automata, called k-acceptable languages (k-ACC). The results include the
following: (i) for k ≥ 1, the class of k-acceptable languages is closed under
complement; (ii) for k ≥ 2, the class of k-acceptable languages is closed under
complementation, union, intersection, concatenation, and the Kleene-star
operation; (iii) the infiniteness problem, membership problem, equivalence
problem, and emptiness problem of k-acceptable languages are decidable. |
Keywords: |
Decidability, Closure Properties, K-Acceptable Languages, K-Edge Finite State
Automata |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
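Result (i) rests on the classical fact that, for a complete deterministic automaton, complementation amounts to flipping the accepting states. A minimal sketch of that fact with an ordinary DFA (not the k-edge variant defined in the paper, whose edge bound this sketch does not model):

```python
class DFA:
    """Complete deterministic finite automaton."""

    def __init__(self, states, alphabet, delta, start, accept):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.accept = delta, start, accept

    def accepts(self, word):
        q = self.start
        for a in word:
            q = self.delta[(q, a)]        # total transition function
        return q in self.accept

    def complement(self):
        # For a complete DFA, the complement language is recognized by
        # the same automaton with accepting states flipped.
        return DFA(self.states, self.alphabet, self.delta,
                   self.start, self.states - self.accept)

# DFA over {a, b} accepting exactly the words that end in 'a'.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
ends_in_a = DFA({0, 1}, {'a', 'b'}, delta, 0, {1})
not_ends_in_a = ends_in_a.complement()
```

Note that flipping accepting states preserves the transition structure, so any bound on the number of edges is preserved too, which is the intuition behind closure of k-ACC under complement.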
|
Title: |
A CONCEPTUAL OF KNOWLEDGE MANAGEMENT SYSTEM MODEL WITH EARLY WARNING SYSTEM IN
CLINICAL DIAGNOSTIC ENVIRONMENT OF DENGUE FEVER |
Author: |
NORZALIHA MOHAMAD NOOR, RUSLI ABDULLAH |
Abstract: |
A knowledge management system (KMS) supports the activities of knowledge
management (KM) in an early warning system (EWS) to provide early warning and
aid in decision facilitation. A KMS with an EWS is also applicable in the
clinical diagnostic (CD) environment, during CD activities, to provide early
warning and aid decision facilitation for disease outbreaks such as dengue.
However, the lack of proper data or information management, and the limited
knowledge sharing and dissemination within the organization, are main
challenges for risk mitigation; the problem related to this challenge is the
timeliness of reporting and decision facilitation during CD activities.
Therefore, a conceptual KMS model with an EWS in the CD environment of dengue
fever is formulated based on the existing components of KM and KMS with EWS.
The components of CD activities and dengue fever are also identified and
studied for the model implementation. A pre-survey and an analysis of five
existing previous models were carried out to determine the significant
components of KMS and EWS. The pre-survey results and the gaps drawn from the
analysis of these five models are used as a basis for the initial proposed KMS
model with EWS in the CD environment of dengue fever. The KMS model with EWS
was then validated via a prototype to verify the model's reliability and was
evaluated via a post-survey. The model, an integration of KMS with EWS, enables
the capturing, storing, reusing, and managing of knowledge in order to provide
early warning and aid in decision facilitation for dengue fever. |
Keywords: |
Knowledge Management, Knowledge Management System, Early Warning System,
Clinical Diagnostic, Dengue Fever |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
NATURE INSPIRED SOFT COMPUTING BASED SOFTWARE TESTING TECHNIQUES FOR REUSABLE
SOFTWARE COMPONENTS |
Author: |
PREETI GULIA, PALAK |
Abstract: |
Software is an inseparable part of today's human life. Each and every gadget we
use depends on some kind of software. Component-based software engineering
(CBSE) has provided an effective software development paradigm which allows the
selection of domain-specific components from a component repository and their
assembly into a modular and scalable application. The reliability of software
and its components depends on the amount of effective testing carried out on it
during its development life cycle. We cannot deny the fact that exhaustive
testing is not possible, and the selection of an appropriate test suite is a
combinatorial problem; soft computing provides a promising solution for it. The
emergence of artificial intelligence over the years has added fuel to
nature-inspired testing techniques. This paper is a comparative study of
various nature-inspired soft computing approaches for testing reusable
software components, such as artificial neural networks, genetic algorithms,
fuzzy logic, and other swarm-based techniques. A comparative analysis is
presented to discuss the pros and cons of different soft computing techniques
for software testing of reusable components, along with their adoption in
recent years. A future direction is also proposed: to develop hybrid techniques
for the optimization of testing techniques. |
Keywords: |
Soft computing, Test Case Prioritization, Testing, Reusable Components. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
KNOWLEDGE DISCOVERY OF THE STUDENTS ACADEMIC PERFORMANCE IN HIGHER EDUCATION
USING INTUITIONISTIC FUZZY BASED CLUSTERING |
Author: |
T.PRABHA, Dr.D.SHANMUGA PRIYAA |
Abstract: |
This study observes the aspects associated with the evaluation of students'
performance. To improve the performance of students, their previous records
have to be analyzed in order to determine their academic behavioral patterns.
This may help address the difficulties faced by students, enabling them to
score higher marks in semester exams and to enhance their co-curricular
activities. This is done by clustering the students based on their performance.
For the clustering process, this work utilizes fuzzy K-medoids, which takes the
membership value of each student toward a particular cluster into account. The
main contribution of this work is helping to determine the performance level of
students and filtering out the students who are in need of special attention
from the staff, which results in improving the quality of education. |
Keywords: |
Education Mining, Student Database, Performance, Fuzzy K-Medoids, Academic |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
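The abstract's key idea, that each student gets a membership value toward each cluster rather than a hard assignment, can be sketched with the standard fuzzy membership formula used in fuzzy K-medoids-style clustering. The exam scores and medoids below are invented; this is not the authors' dataset or code:

```python
def fuzzy_memberships(points, medoids, m=2.0):
    """Fuzzy membership of each 1-D point toward each medoid.

    u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)), where d_i is the distance
    to medoid i and m > 1 is the fuzzifier.
    """
    out = []
    for x in points:
        d = [abs(x - c) for c in medoids]
        if 0.0 in d:                      # point coincides with a medoid
            out.append([1.0 if di == 0 else 0.0 for di in d])
            continue
        exp = 2.0 / (m - 1.0)
        row = [1.0 / sum((d[i] / d[k]) ** exp for k in range(len(medoids)))
               for i in range(len(medoids))]
        out.append(row)
    return out

scores = [40, 80, 60]                     # hypothetical exam scores
u = fuzzy_memberships(scores, medoids=[40, 80])
```

A student scoring 60 sits exactly between the "needs attention" medoid (40) and the "performing well" medoid (80), so their membership is split 0.5/0.5; hard clustering would have forced an arbitrary assignment.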
|
Title: |
DISTRIBUTED AND PROGRESSIVE FEATURE SELECTION ALGORITHM FOR HIGH DIMENSIONAL
DATA: A MAP-REDUCE APPROACH |
Author: |
CH. RAJA RAMESH, G. JENA, K. RAGHAVA RAO |
Abstract: |
Dimensionality reduction or feature selection is an essential pre-processing
step before applying a machine learning algorithm to any data set. For
medium-dimensional datasets it is an optional or on-demand requirement, but for
high-dimensional datasets it is mandatory; its significance lies in getting
accurate and relevant output from the machine learning algorithm. Most existing
methods are divided into two types: dimensionality reduction and feature
selection, with a very narrow gap between the two. Dimensionality reduction
involves more mathematical analysis with transformations and may or may not
retain a subset of the original features, while feature selection is an
application of feature engineering and requires domain knowledge. Any algorithm
applicable to high-dimensional data requires more processing time and storage
resources. We considered processing time as the basis for our problem statement
and implemented a distributed algorithm for feature selection, named the
Distributed Progressive Feature Selection algorithm with KNN+ReliefF, for
high-dimensional data. In this paper, the MapReduce concept is applied to
select the final subset of relevant features in a progressive manner.
Simulation results show the features with their weights for various
parameters. |
Keywords: |
Feature Selection, Dimensionality Reduction, Mappers, Similarity Measures |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
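ReliefF itself handles multi-class data and k nearest neighbors; as a simplified stand-in (binary labels, a single nearest hit and miss), the core weighting idea the abstract relies on — reward features that differ across classes, penalize features that differ within a class — looks like this on a toy dataset invented for the example:

```python
import math

def relief_weights(X, y):
    """Simplified Relief for binary labels.

    For each sample, find its nearest same-class neighbor (hit) and nearest
    other-class neighbor (miss); a feature gains weight when it separates the
    sample from the miss and loses weight when it varies from the hit.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for i in range(n):
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        h = min(hits, key=lambda j: math.dist(X[i], X[j]))
        m = min(misses, key=lambda j: math.dist(X[i], X[j]))
        for f in range(d):
            w[f] += abs(X[i][f] - X[m][f]) - abs(X[i][f] - X[h][f])
    return [v / n for v in w]

# Feature 0 separates the classes; feature 1 is constant noise.
X = [[0.0, 1.0], [0.1, 1.0], [1.0, 1.0], [0.9, 1.0]]
y = [0, 0, 1, 1]
weights = relief_weights(X, y)
```

In the distributed setting the abstract describes, each mapper would compute such weights on its data partition and a reducer would aggregate them before the progressive selection of the final feature subset.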
|
Title: |
APPLYING INTERPRETIVE HERMENEUTIC APPROACH TO INTERRELATE THE GAME ENVIRONMENT
AND LEARNERS INTERACTION |
Author: |
MIFRAH AHMAD, LUKKMAN AB. RAHIM, NOREEN IZZA ARSHAD |
Abstract: |
In the 21st century, one of the prominent learning tools in the education
industry is educational games (EG). The literature describes their importance
for the current generation in many ways. The learning environment, the rules
embedded in the game, the theoretical aspects on which the game is built, and
the learners/subject-matter experts all play a crucial role in EG. Hence, there
is a pressing need to understand the relationships that allow an enhancement of
the learning objectives for learners while they interact with the game
environment. Although these relationships are vaguely dispersed in the
literature, understanding how to interrelate the concepts of the game
environment with learners' expectations, and how the elements of the game
environment interact throughout play, is essential. Therefore, this article
presents a proposed multi-domain framework for game developers to effectively
map game elements, extracting implicit, vaguely supported relationships between
the game environment domain and learner concepts through a systematic
literature review. The article explains twelve (12) implicit relationships from
the perspective of the game environment domain and learners. They are validated
through qualitative interviews with eight (8) game-based learning experts.
Subsequently, the results are interpreted with respect to all domains of the
proposed framework by applying an interpretive hermeneutic approach, using
NVivo software to obtain themes and sub-themes; a coding strategy was adopted
to code the experts' responses for each relationship and its elements. The
findings conclude with twelve (12) explicit relationships between the game
environment domain and the learners, to guide game developers through the
development phase of EG. |
Keywords: |
Game Environment, Learners, Multi-Domain Framework, Hermeneutic, Interpretivism
Paradigm, Coding Technique |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
A PROPOSED CONCEPTUAL MODEL FOR FLIPPED LEARNING |
Author: |
IRETI HOPE AJAYI, NOORMINSHAH A. IAHAD, NORASNITA AHMAD, AHMAD FADHIL YUSOF |
Abstract: |
This study developed and tested the efficacy of a flipped model usable as a
strategy for teaching and learning. The concept applied Perceived Ease of Use
(PEOU), Perceived Usefulness (PU), Teaching Method (TM), Task-Technology Fit
(TTF), and Behavioral Attitude (BA) as factors influencing the use of the
system (flipped learning). The model made use of the Technology Acceptance
Model (TAM) and Task-Technology Fit (TTF) to develop hypotheses and connected
determinants. The results of the survey conducted were tested using the
SmartPLS software package. The outcome of the investigation indicates that
Teaching Method (TM) is the strongest determinant factor, followed by use
intention, perceived usefulness, perceived ease of use, behavioral attitude,
and TTF. This confirms that students' beliefs about the manner in which they
are being taught (teaching method) have a great influence on their
performance. |
Keywords: |
Learning, Flipped Learning, TAM, and TTF |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
EXPLORING THE USE OF COMPUTER ASSISTED AUDIT TECHNIQUES AND ITS IMPACT TO THE
TRANSPARENCY AND ACCOUNTABILITY OF FINANCIAL STATEMENTS |
Author: |
RINDANG WIDURI, SYNTHIA A. SARI |
Abstract: |
The objective of this research is to explore the perception of internal
auditors in public companies regarding the use of Generalized Audit Software
(GAS) to improve the transparency and accountability of financial reports. A
qualitative approach was used, conducting face-to-face semi-structured
interviews with internal auditors from Indonesian listed companies. This
research used Agency Theory and the Technology Acceptance Model (TAM) as
underpinning theories; it therefore provides an academic contribution not only
in the auditing area but also in information technology. The results indicate
that not all participants agreed that GAS helps them in producing a transparent
and accountable financial report. This research also found that a company's
internal condition and the individual characteristics of internal auditors are
the inhibiting factors in using GAS. |
Keywords: |
Internal auditor, Generalized Audit Software, Technology Acceptance Model, Good
Corporate Governance, IDX |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
|
Title: |
HYBRID DATA DEDUPLICATION TECHNIQUE IN CLOUD COMPUTING FOR CLOUD STORAGE |
Author: |
HESHAM ABUSAIMEH, OMAR ISAID |
Abstract: |
This paper proposes a hybrid data deduplication technique in cloud computing
that combines different types of data deduplication to satisfy different
demands and requirements. The hybrid data deduplication consists of two hybrid
subsystems, each containing file-level deduplication and chunk-level
deduplication: file-level deduplication shows better execution time for
deduplicating redundant files, while chunk-level deduplication detects
duplicated chunks among the files in the data. The subsystems are hybrid file
variable-size chunk-level deduplication (FVCD) and hybrid file fixed-size
chunk-level deduplication (FFCD). FVCD satisfies the requirements of users and
applications that need better effectiveness in reducing the size of the data,
while FFCD provides lower execution time and can be tuned by changing its chunk
size: increasing the chunk size reduces the execution time but decreases the
effectiveness of reducing the size of the data. |
Keywords: |
Data Deduplication, Cloud Computing, Cloud Storage |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2017 -- Vol. 95. No. 24 -- 2017 |
Full
Text |
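The FFCD subsystem described above relies on fixed-size chunking. A generic sketch of that mechanism (not the authors' implementation) stores each unique chunk once, keyed by its hash, and represents a file as a recipe of chunk digests; the 4-byte chunk size and the file contents are chosen only to keep the example tiny:

```python
import hashlib

class ChunkStore:
    """Fixed-size chunk-level deduplication: each unique chunk is stored once."""

    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.chunks = {}                  # sha256 digest -> chunk bytes

    def put(self, data):
        """Split data into fixed-size chunks; return the file's chunk recipe."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # store only if new
            recipe.append(digest)
        return recipe

    def get(self, recipe):
        """Reassemble a file from its recipe of chunk digests."""
        return b"".join(self.chunks[d] for d in recipe)

store = ChunkStore(chunk_size=4)
r1 = store.put(b"AAAABBBBCCCC")
r2 = store.put(b"AAAABBBBDDDD")           # shares two chunks with the first file
```

The two 12-byte files occupy only four unique chunks in the store, illustrating the size reduction; the paper's FVCD variant would instead cut chunk boundaries at content-defined positions, trading execution time for better deduplication.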
|
|
|