|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, from the most
basic research to the most innovative technologies. Please submit your papers
electronically through our submission system at http://jatit.org/submit_paper.php
in MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will edit out the necessary
information at our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
Journal of Theoretical and Applied Information Technology
October 2016 | Vol. 92 No. 1 |
Title: |
ENSEMBLE SELECTION AND OPTIMIZATION BASED ON SOFT SET THEORY FOR CUSTOMER CHURN
CLASSIFICATION |
Author: |
MOHD KHALID AWANG, MOKHAIRI MAKHTAR, M NORDIN A RAHMAN, MUSTAFA MAT DERIS |
Abstract: |
Ensemble methods, or multiple-classifier systems that combine the decisions of
many base classifiers, have been confirmed to outperform the classification
performance of any single classifier. Despite producing the highest
classification accuracy, ensemble methods suffer significantly from their
large number of base classifiers. In previous work, we proposed a novel soft
set based method to prune the base classifiers of a heterogeneous ensemble
committee and demonstrated the ability of the proposed soft set pruning
algorithm to remove a substantial number of classifiers while still producing
the highest prediction accuracy. However, the pruning method only suggests a
subset of relevant classifiers; the search for the best, optimized classifiers
was not yet considered. The selection of the best classifiers is carried out by
checking all combinations of the pruned classifiers. In this paper, we extend
our research by proposing a new soft ensemble selection and optimization method
to find the best subset of the pruned classifiers. The proposed method is
systematically evaluated on a Customer Churn dataset taken from the UC Irvine
Machine Learning Repository. The results prove that the proposed soft ensemble
selection and optimization method is able to find the minimum number of
classifiers in the ensemble repository while at the same time maintaining or
improving the classification performance. |
Keywords: |
Ensemble Selection, Customer Churn Prediction, Ensemble Optimization, Soft Set,
Ensemble Methods |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
REGION BASED IMAGE RETRIEVAL BASED ON TEXTURE FEATURES |
Author: |
ABD RASID MAMAT, FATMA SUSILAWATI MOHAMED, NORKHAIRANI ABDUL RAWI, MOHD KHALID
AWANG, MOHD.ISA AWANG, MOHD FADZIL ABDUL KADIR |
Abstract: |
Most Content Based Image Retrieval (CBIR) systems use global texture features
for representing and retrieving images. If local texture features are ignored
during the initial stage of image processing, performance suffers. Meanwhile,
feature extraction based on the Color Co-occurrence Matrix (CCM) provides
opportunities for effective CBIR. Therefore, the main objective of this paper
is to avoid this performance ineffectiveness while at the same time opting for
a more effective CBIR. The highlighted problems are tackled with an approach
based on local Haralick texture features, specifically using the Average
Analysis (AA) and Principal Component Analysis (PCA) methods. The Haralick
texture features were extracted from the predetermined CCM. Experiments were
carried out on ten categories of 1000 selected images from the Corel image
database. The results show that, interestingly, for certain image categories
only six of the eleven Haralick texture features, namely homogeneity, sum of
squares, sum average, sum variance, difference entropy and information measure
of correlation I, known as 'significant' features, provided the best image
retrieval. Performance increased in the range of 8.5% to 26.0% compared with
previous research. It is also indicated that the significant features combined
by Average Analysis (AA) achieved better performance than Principal Component
Analysis (PCA) in most categories. This finding has important implications for
the use of the correct significant features from the Haralick texture features
for certain image properties, as well as leading to less computational
processing time due to fewer features. |
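As an illustration of one such texture statistic, a gray-level co-occurrence matrix and the Haralick homogeneity feature can be computed as follows; this is a minimal numpy sketch of the general technique on a tiny synthetic image, not the paper's CCM-based implementation:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[image[i, j], image[i + dy, j + dx]] += 1
    return m / m.sum()

def homogeneity(p):
    """Haralick homogeneity: sum_ij p(i,j) / (1 + (i - j)^2)."""
    i, j = np.indices(p.shape)
    return float((p / (1.0 + (i - j) ** 2)).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
h = homogeneity(p)   # 10/12 here; close to 1.0 for a smooth image
```

The other significant features named in the abstract (sum of squares, sum average, sum variance, difference entropy, information measure of correlation I) are computed analogously from the same co-occurrence matrix.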
Keywords: |
Color Co-Occurrence Matrix, Haralick Texture Features, Significant Features,
Spatial Relationship. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
A FRAMEWORK FOR DESIGNING MOBILE QURANIC MEMORIZATION TOOL USING MULTIMEDIA
INTERACTIVE LEARNING METHOD FOR CHILDREN |
Author: |
SYADIAH NOR WAN SHAMSUDDIN, NURUL FARIHAH ABU BAKAR, MOKHAIRI MAKHTAR, WAN
MALINI WAN ISA, AZILAWATI ROZAIMEE, RHAFIZI YUSOF |
Abstract: |
The Quran is the fundamental holy book of Islam. Among the most important
concerns is to learn it by heart. However, the current method in Quranic
schools is becoming less effective for the young generation. Thus, there is a
need to find alternative solutions for young Muslims to memorize the Quran.
Mobile learning is an alternative to conventional learning that will support
the existing method. Mobile devices have made an immediate impact on teaching
and learning practices. However, for mobile learning to be effective for
children memorizing the Quran, it is necessary to find specific design
guidelines and pedagogy for this learning method. This paper aims to provide a
unifying framework for developing a Quran memorizer application using a
multimedia interactive method and learning theories for mobile learning. |
Keywords: |
Quran Memorization, Mobile Learning, Game-Based Learning, Rote Learning,
Chunking. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
AGILE TESTING PRACTICES IN SOFTWARE QUALITY: STATE OF THE ART REVIEW |
Author: |
CESAR GIL, JORGE DIAZ, MARIO OROZCO, ALEXIS DE LA HOZ, EDUARDO DE LA HOZ,
ROBERTO MORALES |
Abstract: |
This paper reviews articles related to agile testing practices in software
quality, looking for theoretical information and real cases of testing applied
in a modern context, and comparing them with standard procedures while taking
into account their advantages and relevant features. As a final result, we
determine that agile practices in software quality have wide acceptance, and
many companies have chosen to use them for their benefits and impact on
software development processes in several real applications, not necessarily
IT governance ones, since other kinds of technical applications have also
shown excellent results in testing. |
Keywords: |
Agile testing software, Scrum agile testing software, Kanban agile testing
software, Test Driven Development agile test software, Behavior Driven
Development test software, automation test software |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
ENABLING SECURE DATA TRANSACTION IN BIO MEDICAL ENGINEERING USING CCART APPROACH |
Author: |
A. VELMURUGAN , T. RAVI |
Abstract: |
Applications of biomedical engineering are growing day by day. A major task is
to manufacture medical equipment based on patient data produced by
professionals. The issue faced is the privacy and confidentiality of the
patient details that are transferred from professionals. In addition, another
important challenge is retrieving the data from the database. This paper
considers these issues and proposes a method that transacts the data securely
in order to provide privacy. A CART approach is implemented to retrieve the
required data from the database in an efficient way. It works by searching the
content in a tree structure, which saves searching time, and its
classification supports extracting the required data effectively from such a
huge database. This overcomes the mishandling of data done manually, which
leads to losses in time and cost when manufacturing the equipment. The
proposed work retrieves the exact data in an effective and efficient manner. |
Keywords: |
Data transaction, privacy, CART, Security and confidentiality. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
PARAMETER ESTIMATION OF GEOGRAPHICALLY WEIGHTED MULTIVARIATE t REGRESSION MODEL |
Author: |
HARMI SUGIARTI, PURHADI, SUTIKNO, SANTI WULAN PURNAMI |
Abstract: |
The use of an ordinary linear regression model on spatially heterogeneous data
is often not suitable across the data points, especially for the relationship
between the response variable and the explanatory variables. Therefore,
geographically weighted t regression (GWtR) is used to overcome the spatial
heterogeneity. The model is an extension of geographically weighted regression
(GWR) in which the response variable follows a multivariate t distribution.
The aim of this study is to obtain the estimator of the geographically
weighted multivariate t regression (GWMtR) model with known degrees of
freedom. The maximum likelihood estimation (MLE) method is applied to maximize
a weighted log-likelihood function. Based on the EM algorithm, the estimator
of the geographically weighted multivariate t regression model can be
determined. |
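The weighted log-likelihood maximized at each location can be written schematically as follows; the notation (kernel weights w_ij, local coefficients beta(u_i, v_i), scale matrix Sigma, degrees of freedom nu) is illustrative and not taken verbatim from the paper:

```latex
% Weighted log-likelihood at location (u_i, v_i) with multivariate t errors
\ell\bigl(\beta(u_i,v_i),\,\Sigma\bigr)
  = \sum_{j=1}^{n} w_{ij}(u_i,v_i)\,
    \log f_t\!\bigl(\mathbf{y}_j \,\big|\, \mathbf{x}_j^{\top}\beta(u_i,v_i),\; \Sigma,\; \nu\bigr)
```

Here w_ij is a spatial kernel weight that decreases with the distance between locations i and j, and f_t is the multivariate t density with known degrees of freedom nu; the EM algorithm exploits the representation of the t distribution as a scale mixture of normals to maximize this function.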
Keywords: |
Maximum Likelihood Estimation (MLE), EM Algorithm, Geographically Weighted
Regression, Multivariate t Model |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
SELECTION SYSTEM OF THE BOARDING HOUSE BASED ON FUZZY MULTI ATTRIBUTE DECISION
MAKING METHOD |
Author: |
DEVIE ROSA ANAMISA, AERI RACHMAD, RULLY WIDIASTUTIK |
Abstract: |
A boarding house is a temporary residence in the form of blocks of rooms of
various sizes, inhabited by students and employees from outside the area.
Selecting the best boarding house is influenced by several criteria, such as
the rental price, amenities and number of rooms, as well as other criteria
such as distance to the destination, the location and the time limit on a
visit. The number of criteria makes it hard to simply pick the best boarding
house; in particular, there is little information on which boarding house has
the shortest distance to the destination, which causes problems in choosing a
location, so a decision support system is needed for selecting a boarding
location based on these considerations. Previous attempts to solve the
boarding-house selection problem have not yet provided a satisfactory
solution. Therefore, this research builds a decision support system that
solves the problem using the criteria that have been determined and the Fuzzy
Multi Attribute Decision Making (FMADM) method to generate the best
alternative boarding house, and identifies the criteria characterizing the
behavior of boarding-house seekers that most dominate and influence their
decision. The selection process among the alternatives in FMADM requires the
criteria to be determined early in the process as a reference for decision
making, while the Weighted Product (WP) method is used to normalize the weight
values that indicate the level of importance of each criterion. This research
uses five importance levels: very low, low, moderate, high and very high.
Various experiments were conducted to determine the criteria of the FMADM
method (such as location, district, village, gender of the boarding house,
minimum price and maximum price) as well as to obtain the best alternative
boarding house. Experimental results showed that the developed FMADM method
was able to solve the problem of selecting the best boarding house, ranking
the alternatives from highest to lowest, and revealed the most influential
criteria for home seekers, such as the water conditions, the price of the
boarding house, the facilities, and the distance from the boarding house to
the destination. The success rate was tested with 100 respondents and 18 types
of criteria using data from the Sukolilo area, Surabaya City. |
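The Weighted Product step can be sketched as follows; the criteria, weights and cost/benefit labels are hypothetical, and this is a generic WP sketch rather than the authors' system:

```python
def weighted_product(alternatives, weights, is_cost):
    """Score alternatives with the Weighted Product rule
    S_i = prod_j x_ij ** w_j, using negative exponents for cost
    criteria and weights normalized to sum to 1."""
    total = sum(weights)
    w = [x / total for x in weights]          # normalize the weights
    scores = []
    for alt in alternatives:
        s = 1.0
        for x, wj, cost in zip(alt, w, is_cost):
            s *= x ** (-wj if cost else wj)   # cost criteria penalize
        scores.append(s)
    return scores

# Hypothetical criteria: [rental price, distance in km, facilities score]
alts = [
    [500, 2.0, 8],    # house A
    [400, 3.5, 6],    # house B
    [650, 1.0, 9],    # house C
]
weights = [5, 3, 4]   # importance levels, e.g. very high = 5
scores = weighted_product(alts, weights, is_cost=[True, True, False])
best = max(range(len(alts)), key=scores.__getitem__)
```

With these illustrative numbers the short distance and good facilities of house C outweigh its higher price, so it ranks first.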
Keywords: |
Decision Support System, Boarding House, Method, Fuzzy Multi Attribute
Decision Making, Weighted Product. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
VARIATIONAL METHODS OF SYNTHESIS OF SIGNALS BASED ON FREQUENCY REPRESENTATIONS |
Author: |
EVGENIY G. ZHILYAKOV, SERGEY P. BELOV, ANDREI A. CHERNOMORETS, VLADIMIR V.
KRASILNIKOV |
Abstract: |
The article considers constructing a function of time that must have a priori
given properties. In radio and telecommunications, attention is usually given
to the frequency distribution of signal energy, where signals are synthesized
either on the basis of predetermined reference values at discrete time points
(interpolation or estimation of a derivative) or on the requirement to achieve
maximum energy concentration of the synthesized signal in a predetermined
low-frequency band. As part of this work, a new method of signal synthesis
using reference values was developed, in which the interpolating function is
constructed by integrating an estimate of the derivative computed via the
variational principle of minimizing its energy in a limited frequency range
defined by the sampling rate. In addition, we obtained relationships
describing a discrete signal of finite duration with a guaranteed share of
power leakage outside the specified frequency bandwidth. |
Keywords: |
Variational Principles of Signal Synthesis on the Basis of Frequency
Representations, Optimal Subband Frequency Synthesis. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
INTEGRATION OF SCIENTIFIC EXPERIMENTAL DATA THROUGH ONTOLOGY APPROACH: A REVIEW |
Author: |
NUR ADILA AZRAM, RODZIAH ATAN |
Abstract: |
Data integration in scientific experiments is important to scientists in many
research domains. This is because many experiments involve multidisciplinary
areas and run on different machines or instruments, which results in data
being stored at different ends, and human intervention is required to form a
chain of data analysis. Ontology is one of the approaches that has been used
for data integration in many domain areas. This paper describes and reviews
ontology in data integration efforts. Furthermore, the state of research on
ontology-based integration of scientific experiment data is also covered in
this paper. |
Keywords: |
Data Integration, Ontology, Scientific Experiment, Scientific Research Data And
Ontology-Based Data Integration |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
VERIFICATION ON THE TRUSTWORTHINESS OF INFORMATION: A STUDY |
Author: |
MOHAMAD NAZRI KHAIRUDDIN YAP, MASSILA KAMALRUDIN, AHMAD ZAKI A BAKAR, SAFIAH
SIDEK |
Abstract: |
The pervasive use of social media has generated massive information sharing
among its users. Given the fluidity and excess of information available
online, issues relating to the trustworthiness of information have become a
concern among users and authorities. Sensational and unreliable information
shared on social media may harm the reputation of an individual, product,
organisation or government. Therefore, there is a need to develop a mechanism
that helps users verify the trustworthiness of information fed into social
media, so that they can decide whether to trust or to ignore the information.
This paper reports a review and analysis of the existing work related to the
trustworthiness of information. The analysis was based on three questions that
address the definition of trustworthiness of information, the factors
influencing it, and the existing tools to verify it. Based on the thirty-nine
selected articles reviewed, it was found that an approach for verifying the
trustworthiness of information is required. It is anticipated that the
adoption of this approach will help educate the public and make users aware of
the level of trustworthiness of information, hence developing informed, safe
and ethical users of media content. |
Keywords: |
Information Trustworthiness, Factors, Social Media, Tools, Approaches |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
INTEGRATION OF MEDICAL SIMULATION EQUIPMENT INTO UNIFIED DATA SYSTEM |
Author: |
ANDREY ALEKSEEVICH SVISTUNOV, DENIS MIHAYLOVICH GRIBKOV, SERGEY SERGEEVICH
SMIRNOV, DMITRII ALEKSANDROVICH SYTNIK, ALEKSANDR LVOVICH KOLYSH |
Abstract: |
At present there are no unified standards, regulations or protocols for data
transfer from medical simulators by various manufacturers. This hinders the
development and improvement of automated simulation-training systems. This
work is aimed at developing software which merges medical simulators and
training devices by various manufacturers into a unified data system. It is
based on a developed and documented binary data format, where data are
presented in the form of key/value pairs. On the basis of the developed binary
data format, the structure of the data package has been designed, as well as
the command list for interaction between client and server applications.
Using the developed data format, a proprietary TCP/IP data exchange protocol
has been developed; as an alternative, a data exchange protocol based on the
JSON format has also been developed. The developed client and server
applications facilitate data exchange between the software of medical
simulation equipment and the designed data system. The software has been
tested on the following medical simulators and training devices: Resusci Anne
(Laerdal), Lap-X (Epona), and Lap Mentor (Simbionix). In the future it is
planned to expand the capabilities of the data system and to connect new
simulators and training devices. The protocol and software developed in this
work make it possible to combine data from various medical simulators within a
unified data system in real time. The obtained results facilitate the
automation of training processes based on medical simulators. |
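A binary key/value record format of the kind described can be sketched with Python's struct module; the field layout here (length-prefixed UTF-8 key followed by an 8-byte double) is an assumption for illustration, not the authors' documented format:

```python
import struct

def encode_record(key, value):
    """Pack one key/value pair: 2-byte key length, key bytes, 8-byte double."""
    kb = key.encode("utf-8")
    return struct.pack(">H", len(kb)) + kb + struct.pack(">d", value)

def decode_records(buf):
    """Decode a concatenation of records back into a dict."""
    out, pos = {}, 0
    while pos < len(buf):
        (klen,) = struct.unpack_from(">H", buf, pos)
        pos += 2
        key = buf[pos:pos + klen].decode("utf-8")
        pos += klen
        (val,) = struct.unpack_from(">d", buf, pos)
        pos += 8
        out[key] = val
    return out

# Hypothetical simulator readings packed into one data package.
packet = encode_record("heart_rate", 72.0) + encode_record("spo2", 98.5)
data = decode_records(packet)
```

Such a self-describing byte stream can be carried over a TCP connection or, as the alternative protocol in the abstract suggests, serialized as JSON instead.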
Keywords: |
Simulator, Simulation Training, Medical Training Device, Database, Software,
Data System |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
FACTORS AFFECTING GLOBAL VIRTUAL TEAMS’ PERFORMANCE IN SOFTWARE PROJECTS |
Author: |
ALI YAHYA GHENI, YUSMADI YAH JUSOH, MARZANAH A. JABAR, NORHAYATI MOHD ALI |
Abstract: |
Today the trend is to perform software development work distributed across
geographical areas, among teams, individuals or even organizations. Due to the
global market and the international presence of many companies, there is a
need to implement global virtual teams. Global virtual team members are
increasingly engaged in globalized business environments across space, time
and organizational boundaries via information and communication technologies.
A global virtual team relies on communication, collaboration and information
exchange, which are the most important criteria in global virtual teams'
operations. The purpose of this paper is to answer two research questions. The
first is to identify the factors affecting global virtual teams' performance;
a systematic literature review was conducted to answer it. The second is to
rank those factors according to their level of effect on global virtual teams'
performance; an online survey of 103 developers and IT managers from eight IT
companies was conducted to answer it. The Statistical Package for the Social
Sciences (SPSS 22) was used to analyze the collected data. The factors
considered include cultural differences, language problems, time-zone
differences, team size, technical problems, lack of trust, lack of sufficient
training, and ICT problems. The findings indicate that lack of sufficient
training has the highest level of effect on global virtual teams' performance,
while team size has the lowest. |
Keywords: |
Global software projects, Global Virtual Teams (GVTs), performance factors |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
CERTIFICATE MECHANISM IMPROVEMENT FOR SECURING OPTIMIZED LINK STATE ROUTING
PROTOCOL IN MOBILE AD HOC NETWORKS |
Author: |
ALAA ABDULLAH MAJHOOL, NOR EFFENDY OTHMAN |
Abstract: |
A Mobile Ad Hoc Network (MANET) comprises a set of wireless mobile nodes which
dynamically form a temporary network without any existing network
infrastructure or centralized administration. The Optimized Link State Routing
(OLSR) protocol has security problems emanating from attacks. When handling
packet forwarding, several types of availability and integrity attacks exist,
including fabrication, modification, misrouting and dropping, whether full or
partial. This research proposes a secure Optimized Link State Routing
mechanism, called SOLS, which includes certificate-authorized nodes (CAs) and
the RSA algorithm to enhance security and provide secure routing for the OLSR
routing protocol through the detection of malicious nodes performing black
hole attacks. The aim of using the RSA algorithm with certificate-authorized
nodes (CAs) is to find a secure path from the source to the destination and to
detect the black hole attack. We evaluated the performance of the SOLS
mechanism in a simulation designed in MATLAB, and compared it with the OLSR
protocol and Baadachi's approach. The comparison was conducted based on the
detection ratio, packet delivery ratio, routing overhead, total network load,
average delay, and source traffic sent and destination traffic received. SOLS
outperformed Baadachi's approach under a wide range of network performance
metrics and settings, improving the detection ratio by 4%; the implication of
this finding is that SOLS can be applied to MANETs to detect black hole
attacks. |
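As a self-contained illustration of how an RSA signature can authenticate a routing message, consider the toy sketch below; the key values are classic textbook toys for demonstration only (real deployments use 2048-bit keys with proper padding), and this is not the paper's SOLS implementation:

```python
import hashlib

# Toy RSA key pair (p = 61, q = 53): n = 3233, e = 17, d = 2753.
N, E, D = 3233, 17, 2753

def sign(message: bytes) -> int:
    """Sign the SHA-256 digest of the message, reduced mod n."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, D, N)

def verify(message: bytes, signature: int) -> bool:
    """Recover the digest with the public exponent and compare."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

msg = b"OLSR HELLO from node 7"
sig = sign(msg)
ok = verify(msg, sig)                  # an unmodified message verifies
forged = verify(msg, (sig + 1) % N)   # any altered signature fails
```

In a certificate-based scheme, each node's public key would itself be vouched for by a certificate authority, so a node forwarding unsigned or badly signed control messages can be flagged as malicious.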
Keywords: |
Mobile Ad Hoc Network, Ranking Strategy, RSA Algorithm, MATLAB, Optimized Link
State Routing, Certificate Authorized Nodes |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
EXTRACTING DRUG-DRUG INTERACTIONS FROM BIOMEDICAL TEXT USING A FEATURE-BASED
KERNEL APPROACH |
Author: |
ANASS RAIHANI, NABIL LAACHFOUBI |
Abstract: |
Discovering unknown drug interactions is of great importance for healthcare
professionals, since these interactions can be extremely dangerous and can
affect patient safety. Since newly discovered drug interactions are reported
in scientific papers, developing text mining techniques to automatically
extract those interactions from unstructured text is of great importance. None
of the state-of-the-art systems evaluated on the standard DDIExtraction 2013
challenge corpus has exceeded the threshold of 70%, which means that
developing more powerful systems for this task is still very important. In
this paper we present a new feature-based kernel method to extract and
classify drug interactions described in the biomedical literature. Like many
previous works, our method consists of two steps. First we detect interacting
drug pairs, and then we classify each extracted pair into one of four
interaction categories. For the first step, we enhanced an existing
feature-based system by adding new features, correction patterns and trigger
words. For the second step, we built a new feature-based kernel classifier
that exploits the particular lexical field of each interaction type. This
classifier is composed of four binary classifiers working sequentially. When
evaluated on the DDIExtraction 2013 challenge corpus, our system achieved an
F1-score of 71.14%, compared to 69.75% and 68.4% reported by the top two
state-of-the-art systems, based respectively on Convolutional Neural Networks
and on graph kernels with context vectors. |
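The "four binary classifiers working sequentially" design can be sketched generically as a cascade in which each classifier either claims the instance for its category or passes it on; the trigger-word rules below are hypothetical stand-ins for the paper's feature-based classifiers:

```python
def make_trigger_classifier(label, triggers):
    """Hypothetical binary classifier: fires when a trigger phrase appears."""
    def clf(sentence):
        return any(t in sentence.lower() for t in triggers)
    return clf, label

# One binary classifier per DDI category, applied in a fixed order.
cascade = [
    make_trigger_classifier("advise", ["should not", "caution"]),
    make_trigger_classifier("effect", ["increases the effect", "potentiates"]),
    make_trigger_classifier("mechanism", ["inhibits the metabolism"]),
    make_trigger_classifier("int", ["interacts with"]),
]

def classify(sentence, default="int"):
    """Return the label of the first classifier in the cascade that fires."""
    for clf, label in cascade:
        if clf(sentence):
            return label
    return default

label = classify("Drug A inhibits the metabolism of drug B.")
```

Ordering the binary classifiers so that the more lexically specific categories are tried first is one way such a sequential design can exploit the lexical field of each interaction type.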
Keywords: |
Drug–drug interaction, Biomedical literature, Feature-based kernel approach,
Biomedical Informatics, Natural Language Processing |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
OPTIMAL FEATURE SELECTION FOR CLASSIFICATION OF ELECTRICITY CONSUMPTION |
Author: |
ZUHAINA ZAKARIA, NOORITAWATI MD TAHIR |
Abstract: |
Feature selection is the essential process to obtain the best feature vectors
in a pattern recognition system. These feature vectors contain information
describing the original data's important characteristics. In this research, a
framework based on a factor analysis technique, namely Principal Component
Analysis (PCA), is used to determine the best features extracted from the
daily load curve prior to the clustering process. The rules of thumb applied
include Bartlett's test of sphericity, the Kaiser-Meyer-Olkin (KMO) measure,
the Kaiser Criterion and the Scree test, along with the Varimax approach. The
KMO measure as well as Bartlett's test suggested that the data's factorability
is significant. Furthermore, the Kaiser Criterion and Scree test, together
with the component matrix approach, implied that the first two most
significant factors must be retained, whilst the Varimax approach confirmed
that the clustering analysis should comprise the entire load curve values.
Upon selection of features, the capability of fuzzy clustering to classify
these features, obtained from 247 feeders in a particular distribution
network, is examined. Initial results demonstrate the effectiveness of the
feature selection process and the potential of fuzzy clustering, in particular
fuzzy c-means (FCM), in classifying electrical energy consumption. |
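The Kaiser Criterion step, retaining components whose eigenvalues of the correlation matrix exceed 1, can be sketched as follows; the toy "load curve" data is synthetic and this is a generic PCA sketch, not the paper's framework:

```python
import numpy as np

def kaiser_retained(X):
    """Eigenvalues of the correlation matrix; retain those > 1 (Kaiser)."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending
    return eigvals, int(np.sum(eigvals > 1.0))

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
# Synthetic "load curves": four variables driven by one common factor.
X = np.hstack([base + 0.3 * rng.normal(size=(200, 1)) for _ in range(4)])
eigvals, k = kaiser_retained(X)   # one dominant component expected
```

Bartlett's test and the KMO measure would be computed from the same correlation matrix before PCA to confirm that the data is factorable at all.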
Keywords: |
Feature Selection, Load profiling, clustering, fuzzy relation, Principal
Component Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
IMPROVING SERIOUS GAME DESIGN THROUGH A DESCRIPTIVE CLASSIFICATION: A
COMPARISON OF METHODOLOGIES |
Author: |
SLIMANI ABDELALI, SBERT MATEU, BOADA IMMA, ELOUAAI FATIHA, BOUHORMA MOHAMMED |
Abstract: |
Serious games provide an instructional tool to make the learning process more
enjoyable, easier to memorize, and more effective. They combine pedagogical
goals and gameplay to increase participant interest and engagement compared to
traditional methods. In this paper, we compare several game design
methodologies against our proposed classification, which can assist the
analysis and evaluation of serious game design, and we illustrate how this
classification helps the several actors involved in design to make more
informed decisions about the adequate methodology. Finally, we discuss the
differences in the use of serious game design methodologies in the light of
the statistics of the comparative study. |
Keywords: |
Serious Game; Learning; Game Design; Comparative Study. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
IMPROVEMENT OF THE METHODOLOGY FOR GRAIN QUALITY ASSESSMENT |
Author: |
BAKYTKAN DAULETBAKOV, LYAZZAT SULTANGALIYEVA, ARAILYM ABITOVA, PRIMZHAROVA
KALYASH |
Abstract: |
This article assesses the potential properties of grain and analyses
technological schemes and options, selecting on the basis of factor analysis
the minimum number of parameters needed to take into account the effect of the
complex of physical, chemical, structural, mechanical and biological
properties of the processed grain. |
Keywords: |
Technological Cycle Assessment, Factor Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
A HYBRID GENETIC AND BEE COLONY OPTIMIZATION ALGORITHM FOR TEST SUITE
PRIORITIZATION |
Author: |
R.P. MAHAPATRA, RITVIJ PATHAK, KARAN TIWARI |
Abstract: |
In the software industry, any software that has been developed needs to go
through a maintenance phase, which can typically last from 10 to 15 years.
Software maintenance is the most cumbersome yet crucial activity for
developers and users alike. Typically, over a Software Development Life Cycle,
software goes through minor to major changes, modifications and updates to
provide smooth end-user functioning. The process of repeatedly testing the
modified software is called Regression Testing. Exhaustive regression testing
is not possible because of time and budget constraints, so several techniques
have been developed to prioritize test cases in order to reduce time and
effort effectively. Two well-researched algorithms in this field are Bee
Colony Optimization and the Genetic Algorithm. In this paper, we formulate a
hybrid algorithm based on these two. The hybrid algorithm derives a test
sequence from the initial population, runs it through a genetic loop, and
finally applies scout-bee path exploration to achieve maximum fault coverage
in a minimum number of executions. The Fault Detection Percentage has been
calculated from the results, and a comparison with the optimal solution is
presented. |
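Test-suite prioritization of this kind is typically scored by how quickly the ordered suite exposes faults. A greedy additional-coverage ordering (a common baseline, not the paper's hybrid algorithm) together with the standard APFD metric can be sketched as:

```python
def greedy_prioritize(coverage):
    """Order tests by how many *not yet detected* faults each one covers."""
    remaining = set().union(*coverage.values())
    order, pool = [], dict(coverage)
    while pool:
        best = max(pool, key=lambda t: len(pool[t] & remaining))
        order.append(best)
        remaining -= pool.pop(best)
    return order

def apfd(order, coverage, n_faults):
    """Average Percentage of Faults Detected for a given test order."""
    first_pos = {}
    for i, t in enumerate(order, start=1):
        for f in coverage[t]:
            first_pos.setdefault(f, i)
    n = len(order)
    return 1 - sum(first_pos.values()) / (n * n_faults) + 1 / (2 * n)

# Hypothetical fault matrix: test -> set of faults it detects.
cov = {"t1": {1}, "t2": {1, 2, 3}, "t3": {2, 4}}
order = greedy_prioritize(cov)        # ['t2', 't3', 't1']
score = apfd(order, cov, n_faults=4)
```

A hybrid metaheuristic like the one in the paper searches the space of such orderings instead of building one greedily, which matters when coverage interactions make the greedy choice suboptimal.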
Keywords: |
Bee Colony Optimization, Genetic Algorithm, Test Suite Prioritization,
Regression Testing, Test Suite Reduction |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
Title: |
STABILIZATION OF HUMAN HEART USING PID CONTROLLER |
Author: |
M.AABID, A.ELAKKARY, N.SEFIANI |
Abstract: |
With the aim of improving patients' quality of life, and especially of
developing an optimized technique for hydro-electromechanical (HEM)
stimulation and regulation of the human heart, this paper simulates the
control and command of the human heart based on three main groups of
parameters: hydraulic, electrical and mechanical. The MATLAB-based
mathematical model primarily helps in understanding the proper functioning of
the heart. We disturb the cardiovascular system with noise coming from the
human brain (such as arousal, anxiety or disease) and try to stabilize the
whole system by applying a command from a PID controller. This research
especially focuses on health care problems, to improve the quality of life of
patients with heart problems. Analysis of the simulation results of the whole
system in MATLAB shows that the overall response of the disturbed system
regulated by the adaptive controller is quite similar to the normal heart
signal. |
Keywords: |
Proportional Integral Derivative (PID), Human Heart, Control and Command,
Pacemaker, Hydro-Electromechanical System |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
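The PID regulation idea in the abstract above can be illustrated with a generic discrete PID loop. This is only a sketch: the first-order plant, the noise model, the 72 bpm setpoint, and all gains are hypothetical stand-ins for the paper's MATLAB hydro-electromechanical model.

```python
import random

class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

def simulate(steps=500, dt=0.01, setpoint=72.0, tau=0.5):
    """Regulate a first-order plant (a crude stand-in for the
    cardiovascular system) disturbed by brain-originated noise toward
    a target heart rate of 72 bpm."""
    random.seed(1)
    pid = PID(kp=2.0, ki=5.0, kd=0.05, dt=dt)
    rate = 60.0                            # hypothetical initial rate (bpm)
    for _ in range(steps):
        noise = random.gauss(0.0, 0.5)     # arousal / anxiety / disease
        drive = pid.step(setpoint, rate)
        rate += (dt / tau) * (drive - rate) + noise * dt
    return rate

final_rate = simulate()
```

The integral term is what removes the steady-state offset between the disturbed signal and the setpoint; the derivative term damps the transient.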
Title: |
MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEM USING LDR BASED ON SELECTIVE SMALL
RECONSTRUCTION ERROR |
Author: |
SAVITHA G, DR. VIBHA L., DR. VENUGOPAL K. R. |
Abstract: |
In biometrics, physiological or behavioral features are used to validate an
individual's identity. Although a substantial amount of research has been
carried out in this field, unimodal biometric frameworks often suffer
limitations owing to non-universal biometric attributes, vulnerability to
biometric spoofing, or lack of accuracy. In this paper, the accuracy problem is
addressed through multimodal biometric fusion. Our proposed multimodal biometric
fusion methodology takes face and fingerprint, two mutually independent traits
of the human body, as inputs for security purposes. A Wiener filter is used in
the preprocessing phase and the Discrete Wavelet Transform (DWT) for fusing the
two traits. In addition, a linear discriminant regression classification (LDRC)
algorithm is proposed. We introduce the selective small reconstruction error
(SSRE), which helps to select the classes whose chance of misclassification is
taken into account when calculating the between-class reconstruction error
(BCRE). Maximizing the ratio of the BCRE to the within-class reconstruction
error (WCRE) yields an optimum projection matrix through which a high
discrimination value can be achieved for classification. Finally, experiments
show that the proposed LDRC methodology outperforms the existing LRC in terms of
accuracy, FAR, FER, and EER. |
Keywords: |
Biometric Authentication, Between-Class Reconstruction Error, Discrete Wavelet
Transform, Wiener Filter, Within-Class Reconstruction Error. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
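The reconstruction-error classification underlying the abstract above can be sketched with the baseline Linear Regression Classification (LRC) rule that the paper improves on: project a probe onto each class's training subspace and pick the class with the smallest residual. The synthetic features, class bases, and dimensions below are hypothetical, and the SSRE/BCRE/WCRE projection learning itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for fused face+fingerprint feature vectors:
# each class spans its own low-dimensional subspace.
D = 20                                   # feature dimension (hypothetical)

def make_samples(basis, n):
    return basis @ rng.normal(size=(basis.shape[1], n))

basis = {"A": rng.normal(size=(D, 3)), "B": rng.normal(size=(D, 3))}
train = {c: make_samples(b, 10) for c, b in basis.items()}

def reconstruction_error(X, y):
    """Least-squares projection of probe y onto the column space of the
    class matrix X; the residual norm is the class-specific
    reconstruction error used by LRC."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.linalg.norm(y - X @ beta))

def classify(y):
    errors = {c: reconstruction_error(X, y) for c, X in train.items()}
    return min(errors, key=errors.get)   # smallest residual wins

probe = make_samples(basis["A"], 1)[:, 0]   # fresh sample from class A
```

In the LDRC refinement described above, the residual for the true class plays the role of the within-class reconstruction error, and the residuals for the competing classes feed the between-class term whose ratio is maximized.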
Title: |
ZAHMAH SOCIAL FORCE MODEL FOR PEDESTRIAN MOVEMENT |
Author: |
AHMAD ZAKWAN AZIZUL FATA, SARUDIN KARI, MOHD SHAFRY MOHD RAHIM, TANZILA SABA,
AMJAD REHMAN |
Abstract: |
The Social Force Model is one of the well-known approaches that can successfully
simulate pedestrian movement realistically. However, it is not suitable for
simulating complex pedestrian movement. Hence, this research proposes a novel
model that improves the Social Force Model for simulating high-density crowds
such as the Tawaf. The Tawaf is an Islamic ritual that requires agents to
encircle the Kaabah. This ritual is complex yet unique because of its capacity,
its density, and the varied demographic backgrounds of the agents. Each agent
must follow a certain set of rules, which introduces anomalies in the flow
around the Kaabah. Agents are also assigned unique attributes such as gender,
walking speed, and intention to make the simulations more realistic. The
findings of this research will contribute to the simulation of pedestrians in
highly dense populations. |
Keywords: |
Autonomous Agents, Behaviour, Force, Tawaf, Pedestrians. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
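The classical Social Force Model that the abstract above builds on combines a driving force toward each agent's goal with exponential repulsion from neighbours. The sketch below shows that base model only, with illustrative parameter values; the Zahmah extensions (Tawaf rules, demographic attributes) are not reproduced, and the per-agent `desired_speed` stands in for attributes such as walking speed.

```python
import math

TAU = 0.5        # relaxation time toward the desired velocity (s)
A, B = 2.0, 0.3  # repulsion strength and range (illustrative values)

class Agent:
    def __init__(self, pos, goal, desired_speed):
        self.pos = list(pos)
        self.vel = [0.0, 0.0]
        self.goal = goal
        self.desired_speed = desired_speed

def step(agents, dt=0.05):
    for a in agents:
        gx, gy = a.goal[0] - a.pos[0], a.goal[1] - a.pos[1]
        dist = math.hypot(gx, gy) or 1e-9
        # Driving force: relax toward the desired velocity along the goal.
        fx = (a.desired_speed * gx / dist - a.vel[0]) / TAU
        fy = (a.desired_speed * gy / dist - a.vel[1]) / TAU
        for b in agents:
            if b is a:
                continue
            dx, dy = a.pos[0] - b.pos[0], a.pos[1] - b.pos[1]
            d = math.hypot(dx, dy) or 1e-9
            # Exponential repulsion from neighbouring pedestrians.
            rep = A * math.exp(-d / B)
            fx += rep * dx / d
            fy += rep * dy / d
        a.vel[0] += fx * dt
        a.vel[1] += fy * dt
    for a in agents:                    # synchronous position update
        a.pos[0] += a.vel[0] * dt
        a.pos[1] += a.vel[1] * dt

agents = [Agent((0.0, 0.0), (10.0, 0.0), 1.3),
          Agent((0.2, 0.1), (10.0, 0.0), 1.0)]
for _ in range(100):
    step(agents)
```

After the loop, both agents have advanced toward the shared goal while the repulsion term keeps them from overlapping, which is the behaviour a high-density Tawaf simulation must preserve at much larger scale.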
Title: |
IMPLEMENTATION OF PARALLEL ALGORITHM FOR LUC CRYPTOSYSTEMS BASED ON ADDITION
CHAIN BY A MESSAGE PASSING INTERFACE |
Author: |
ZULKARNAIN MD ALI, ARNIYATI AHMAD |
Abstract: |
The LUC cryptosystem is a modification of the RSA cryptosystem. It is based on
the Lucas function and was introduced by Smith and Lennon. The computation of
the LUC cryptosystem rests entirely on the computation of Lucas functions. A
fast computation algorithm is required, since the public key, the message, and
the primes must all be large for the cryptosystem to be secure. In this paper,
the addition chain technique is implemented in a parallel computation algorithm.
The public key is converted into a suitable array, which is then used to compute
the LUC cryptosystem via the addition chain. The addition chain exploits Lucas
function properties such as V2n, V2n+1, and V2n-1 to obtain fast computation
techniques for Lucas functions. The standard Message Passing Interface (MPI) is
employed, and the processes run on a distributed-memory multiprocessor machine,
a Sun Fire V1280. The proposed techniques reduce the computation time of the LUC
cryptosystem compared with the single-processor algorithm. For comparison, the
computation times for one processor and for several numbers of processors are
also included. |
Keywords: |
Parallel Algorithm, Addition Chain, MPI, Public Key Cryptosystem. |
Source: |
Journal of Theoretical and Applied Information Technology
15th October 2016 -- Vol. 92. No. 1 -- 2016 |
Full
Text |
|
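The Lucas-function identities named in the abstract above (V2n, V2n+1) admit a binary doubling ladder, the simplest special case of an addition chain. The sketch below computes V_n(P, 1) mod N sequentially against a naive recurrence as a check; the array-based addition chains and the MPI parallelization of the paper are not reproduced, the toy parameters are not secure, and the decryption key derivation (which depends on Legendre symbols of the message) is omitted.

```python
# Doubling identities for the Lucas sequence with Q = 1:
#   V_{2k}   = V_k^2 - 2
#   V_{2k+1} = V_k * V_{k+1} - P
# processed along the bits of the index (a binary addition chain).

def lucas_V(P, n, N):
    """Compute V_n(P, 1) mod N with a left-to-right doubling ladder."""
    v0, v1 = 2 % N, P % N            # V_0 = 2, V_1 = P
    for bit in bin(n)[2:]:
        if bit == "1":
            v0 = (v0 * v1 - P) % N   # V_{2k+1}
            v1 = (v1 * v1 - 2) % N   # V_{2k+2}
        else:
            v1 = (v0 * v1 - P) % N   # V_{2k+1}
            v0 = (v0 * v0 - 2) % N   # V_{2k}
    return v0

def naive_V(P, n, N):
    """Reference: the linear recurrence V_n = P*V_{n-1} - V_{n-2}."""
    a, b = 2 % N, P % N
    for _ in range(n - 1):
        a, b = b, (P * b - a) % N
    return b

# Toy LUC-style encryption: ciphertext C = V_e(M, 1) mod N.
N, e, message = 2773, 17, 1234       # toy parameters, not secure
cipher = lucas_V(message, e, N)
```

The ladder needs O(log n) multiplications instead of the O(n) of the naive recurrence; the paper's addition chains and MPI distribution aim to reduce this cost further for cryptographically sized operands.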