Submit Paper / Call for Papers
The journal receives papers in continuous flow and considers articles
from a wide range of Information Technology disciplines, from basic research
to the most innovative technologies. Please submit your papers electronically
through our submission system at http://jatit.org/submit_paper.php in MS Word,
PDF, or a compatible format so that they may be evaluated for publication in
an upcoming issue. This journal uses a blinded review process; please include
all your personally identifiable information in the manuscript before
submitting it for review, and we will remove the necessary information on our
side. Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
|
Journal of Theoretical and Applied Information Technology
December 2016 | Vol. 94 No. 2 |
Title: |
FCM-BPSO: ENERGY EFFICIENT TASK BASED LOAD BALANCING IN CLOUD COMPUTING |
Author: |
GEETHA MEGHARAJ, DR. K. G. MOHAN |
Abstract: |
Virtual machine (VM) migration is a methodology for attaining load balancing
in a cloud environment by transferring a VM from one physical host to another.
In this paper, we propose migrating the extra tasks from an overloaded VM to a
suitable VM instead of migrating the entire overloaded VM. A Fuzzy C-Means
(FCM) clustering algorithm is used to group similar kinds of host VMs; once a
VM is identified as overloaded, the corresponding candidate VMs are found
using the FCM clusters. A Binary Particle Swarm Optimization (BPSO)
methodology is then used to select the host VM from the set of candidates
based on a multi-objective fitness function that includes task transfer time,
task execution time, and energy consumption. By allocating the extra tasks
from overloaded VMs to suitable VMs, we achieve load balancing in the cloud
environment. The proposed FCM-BPSO methodology was implemented in the CloudSim
toolkit, and a comparative analysis was performed to evaluate FCM-BPSO against
a traditional load balancing algorithm in terms of energy consumption and
time. |
Keywords: |
Load Balancing Algorithm, Task Scheduling, Particle Swarm Optimization, Fuzzy C
Means, Clustering |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
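The FCM clustering step described above can be illustrated with a minimal Fuzzy C-Means sketch on one-dimensional VM load values (the load data, cluster count, and parameters below are invented for illustration, not the authors' setup):

```python
import random

def fcm(points, c=2, m=2.0, iters=50, seed=0):
    # Fuzzy C-Means on 1-D data (e.g. VM load values).
    # Returns (memberships, centers); memberships[i][j] is the degree to
    # which point i belongs to cluster j.
    rng = random.Random(seed)
    n = len(points)
    u = [[rng.random() + 1e-6 for _ in range(c)] for _ in range(n)]
    u = [[v / sum(row) for v in row] for row in u]   # rows sum to 1
    centers = [0.0] * c
    for _ in range(iters):
        # update cluster centers as membership-weighted means
        for j in range(c):
            num = sum((u[i][j] ** m) * points[i] for i in range(n))
            den = sum(u[i][j] ** m for i in range(n))
            centers[j] = num / den
        # update memberships from relative distances to the centers
        for i in range(n):
            d = [abs(points[i] - centers[j]) + 1e-9 for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
    return u, centers
```

Grouping lightly and heavily loaded VMs this way yields the candidate pool from which a BPSO search would then pick migration targets.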
|
Title: |
COMPARISON OF THINNING ALGORITHMS FOR VECTORIZATION OF ENGINEERING DRAWINGS |
Author: |
MATUS GRAMBLICKA, JOZEF VASKY |
Abstract: |
Thinning algorithms are used to create the skeleton of an object: the thinned
image consists of lines one pixel wide, and thinning or skeletonization
reduces image complexity. The thinning process is widely used in
thinning-based vectorization. This contribution presents a comparison of nine
known iterative parallel thinning algorithms, together with one proposed
algorithm, and evaluates their performance on sets of engineering drawings.
The results are evaluated and compared with regard to their suitability for
vectorization of engineering drawings. |
Keywords: |
Thinning Algorithm, Skeleton, Vectorization, Engineering Drawing |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
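As an illustration of the iterative parallel thinning algorithms compared above, here is a minimal sketch of the classic Zhang-Suen two-subiteration method, one standard member of this family (the binary-image representation is illustrative):

```python
def zhang_suen_thin(img):
    # Zhang-Suen two-subiteration parallel thinning on a 0/1 image
    # (list of lists); returns a new image with a ~1-pixel-wide skeleton.
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]
    def nbrs(y, x):
        # neighbours P2..P9, clockwise starting from the pixel above
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    p = nbrs(y, x)
                    b = sum(p)                          # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))          # 0 -> 1 transitions
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    p2, p4, p6, p8 = p[0], p[2], p[4], p[6]
                    ok = (p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0) if step == 0 \
                         else (p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0)
                    if ok:
                        to_clear.append((y, x))
            for y, x in to_clear:
                img[y][x] = 0
            changed = changed or bool(to_clear)
    return img
```

Because each subiteration marks pixels before deleting them, the algorithm is parallel in the sense the abstract uses: all deletions in a pass depend only on the previous image.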
|
Title: |
IMPLEMENTATION OF MCELIECE ENCRYPTION SCHEME BASED ON QUASI-CYCLIC GOPPA CODES
(QC-GOPPA) |
Author: |
IDY DIOP, IBRA DIOUM, SAMBA DIAW, MADIOP DIOUF, SIDI MOHAMED FARSSI, YACOUB
MAHAMAT ADOUM |
Abstract: |
The McEliece cryptosystem is one of the oldest public-key cryptosystems and
the first based on error-correcting codes. Its main advantages are its
encryption and decryption speed and its high security (it is expected to
resist quantum computers). It suffers, however, from a major drawback: it
requires a very large public key, which makes it difficult to use in practice.
Using codes with compact generator matrices can significantly reduce the size
of the public key, but with such matrices security must be strengthened by a
careful choice of the code parameters; otherwise an opponent can exploit this
structure to attack the system. The objective of this paper is to examine and
propose solutions to the hardware difficulty of the encryption and decryption
algorithms with respect to key size and transmission rate. This work is a
contribution on the use of Goppa codes in McEliece cryptosystems. We propose
an FPGA implementation of the encryption scheme based on these codes, inspired
by the mathematical approach, and we evaluate the performance of our method by
studying key size and transmission rate. |
Keywords: |
Linear codes, quasi-cyclic codes, Goppa codes, McEliece cryptosystem. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
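The key-size drawback discussed above can be made concrete with a small helper that computes the public key size and information rate for a given code. The (n, k, t) = (1024, 524, 50) values in the test are McEliece's original 1978 parameters, not the quasi-cyclic parameters of this paper:

```python
def mceliece_params(n, k, t):
    # Size of the McEliece public key (a k x n binary generator matrix,
    # or k x (n - k) bits when stored in systematic form) and the
    # information rate k/n of the underlying [n, k] code correcting t errors.
    return {
        "pk_bits_full": k * n,
        "pk_bits_systematic": k * (n - k),
        "rate": k / n,
        "errors_corrected": t,
    }
```

With the original parameters the full key is 536,576 bits (roughly 65 KB), which is the practical burden that compact quasi-cyclic generator matrices aim to shrink.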
|
Title: |
A NOVEL SOFTWARE QUALITY PREDICTION SYSTEM BASED ON INCORPORATING ISO 9126 WITH
MACHINE LEARNING |
Author: |
OSAMA ALSHAREET, AWNI ITRADAT, IYAD ABU DOUSH, AHMAD QUTTOUM |
Abstract: |
To begin with, this research defines a Software Quality Prediction System
(SQPS) as a system composed of a Classification Algorithm (CA) and a Software
Quality Measurement Model (SQMM). Machine learning applications in software
quality measurement are expanding as research intensifies in two directions:
the first focuses on improving the performance of CAs, while the other
concentrates on improving SQMMs. Despite the increasing attention in this
area, some well-designed SQPSs have shown considerable false predictions,
which could be explained by faults in the design of the CA, the SQMM, or the
SQPS as a whole. In this context, there is debate about which CA is better for
measuring software quality, as well as about which SQMM to follow. The
research first studied an original dataset of 7311 software projects. It then
derived quality measurements from the ISO 9126 Quality Model and developed the
SQMM accordingly. Next, it compared statistical measures of the performance of
four CAs using WEKA and SPSS. Our experimental results showed that ISO 9126 is
general, yet flexible enough to act as an SQMM. Despite their convergent
performance, our experiments showed that the Multilayer Perceptron Network
(MLPN) makes more balanced predictions than Naive Bayes does. Following a
rarely researched approach, the SQPS predicted five levels of software quality
instead of making a binary prediction limited to defective or non-defective
software. |
Keywords: |
Software Quality Prediction System (SQPS), ISO 9126 Software Quality Model,
Multilayer Perceptron Network (MLPN), Classification Algorithm (CA), Software
Quality Measurement Model (SQMM), Machine Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
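As a sketch of the kind of classifier comparison described above, here is a minimal Gaussian Naive Bayes in plain Python (the defect-density feature and quality labels in the test are invented for illustration; the paper itself uses WEKA and SPSS rather than hand-rolled classifiers):

```python
import math

def fit_gnb(X, y):
    # Gaussian Naive Bayes: per-class feature means, variances, and priors.
    model = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        nfeat = len(rows[0])
        means = [sum(r[j] for r in rows) / len(rows) for j in range(nfeat)]
        vars_ = [sum((r[j] - means[j]) ** 2 for r in rows) / len(rows) + 1e-9
                 for j in range(nfeat)]
        model[c] = (len(rows) / len(X), means, vars_)
    return model

def predict_gnb(model, x):
    # Pick the class maximizing log prior + sum of Gaussian log-likelihoods.
    def log_lik(c):
        prior, means, vars_ = model[c]
        ll = math.log(prior)
        for xi, m, v in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        return ll
    return max(model, key=log_lik)
```

A multi-level label set, as below, illustrates the paper's five-level idea in miniature: the classifier naturally handles more than two quality classes.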
|
Title: |
ENHANCING A NON-IDEAL IRIS RECOGNITION SYSTEM FROM NIR IRIS VIDEO |
Author: |
NUR KHDER NSEAF, AZRULHIZAM SHAPII, ASAMA KUDER NSEAF, AZIZAH JAAFAR, KHIDER
NASSIF JASSIM, AHMED KHUDHUR NSAIF |
Abstract: |
The iris pattern is one of the most consistent biometric traits used for
recognizing and identifying persons. Employing video as the capturing
instrument is a fairly modern approach in iris biometrics: processing frame by
frame provides more information and offers more flexibility than traditional
still images. Nevertheless, the size, quality, and shape of the iris may
differ from one frame to another, and to obtain the best performance a rapid
and precise iris segmentation method is needed to improve the recognition
rate. This work presents a method for choosing the best frames in an iris
video, based on detecting motion blur and occlusion in iris videos and
investigating their influence on the recognition process. This is followed by
a rapid and precise method to detect the pupil area, based on dynamic
thresholding with the Circular Hough Transform, after which a Geodesic Active
Contour is applied to detect the outer boundary of the iris. Experiments are
carried out on the MBGC NIR iris video datasets from the National Institute of
Standards and Technology (NIST). The results show that the suggested frame
selection method yields a substantial enhancement in recognition efficiency,
and that the proposed iris segmentation technique improves both the precision
and the speed of iris recognition from video. |
Keywords: |
Iris Biometrics, Video Iris Recognition, Pupil Segmentation, Non-Ideal Iris
Recognition, GAC Iris Segmentation |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
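The frame selection idea above can be sketched with a common blur proxy, the variance of the Laplacian response (an assumption for illustration; the authors' actual motion blur and occlusion detectors are not specified in the abstract):

```python
def laplacian_variance(img):
    # Variance of the 3x3 Laplacian response over a grayscale image
    # (list of lists). Low variance suggests a blurred frame.
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def best_frames(frames, k=2):
    # Keep the k sharpest frames for the recognition pipeline.
    return sorted(frames, key=laplacian_variance, reverse=True)[:k]
```

Ranking frames by such a sharpness score and discarding the blurred ones is one simple way a video pipeline can outperform a single still capture.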
|
Title: |
PERFORMANCE ANALYSIS OF SLEACH, LEACH AND DSDV PROTOCOLS FOR WIRELESS SENSOR
NETWORKS (WSN) |
Author: |
IBRIHICH OUAFAA, ESGHIR MUSTAPHA, KRIT SALAH-DDINE, EL HAJJI SAID |
Abstract: |
Wireless Sensor Networks (WSN) are an emerging technology attracting
researchers with their research challenges and various application domains. A
WSN consists of small nodes with sensing, computation, and wireless
communication capabilities. The limited energy resource is one of the main
challenges facing security in such networks. An attempt has been made to
compare the performance of three protocols: DSDV, LEACH, and SLEACH. The
purpose of this paper is to simulate these protocols using NS2. The comparison
is based on packet delivery fraction, average end-to-end delay, throughput,
average jitter, and packet loss. This paper presents all simulation scenarios
and then analyzes the results. |
Keywords: |
Wireless Sensor Network, Hierarchical Routing, DSDV, LEACH, SLEACH |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
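For background on the LEACH protocol compared above, its probabilistic cluster-head election threshold T(n) = p / (1 - p (r mod 1/p)) can be sketched as follows (node IDs and the desired cluster-head fraction p are illustrative):

```python
import random

def leach_threshold(p, r):
    # LEACH cluster-head election threshold T(n) for round r. It applies to
    # nodes that have not yet served as cluster head in the current epoch of
    # 1/p rounds, and grows toward 1 so every node eventually serves once.
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(node_ids, p, r, rng=None):
    # Each eligible node draws a uniform number and becomes a cluster head
    # for this round when the draw falls below the threshold.
    rng = rng or random.Random(0)
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]
```

The rotation this threshold enforces is what spreads the energy-hungry cluster-head role evenly across nodes, the core energy argument behind LEACH and SLEACH.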
|
Title: |
AN OPTIMAL TUNING OF PSS USING AIS ALGORITHM FOR DAMPING OSCILLATION OF
MULTI-MACHINE POWER SYSTEM |
Author: |
RAMADONI SYAHPUTRA, INDAH SOESANTI |
Abstract: |
This paper proposes an optimal tuning of the Power System Stabilizer (PSS)
using the Artificial Immune System (AIS) algorithm for damping oscillations of
a multi-machine power system. The PSS is an efficient device for damping the
power system oscillations caused by disturbances. This study presents a robust
algorithm to determine the PSS parameters using the AIS algorithm. PSS
parameter tuning is formulated as an objective function with constraints,
including the damping ratio and damping factor; maximizing the damping factor
and the damping ratio of the power system modes are taken as the goal
functions when designing the PSS parameters. This optimization procedure
enhances the cloning process and leads to a better outcome. In this work, the
IEEE two-area multi-machine power system model is investigated under a wide
range of system configurations and operating conditions, and is used to
illustrate the performance of the proposed algorithm. The performance of the
AIS-based PSS is compared to the Delta w PSS and Delta Pa PSS. The results
verify that the AIS-based PSS, Delta w PSS, and Delta Pa PSS all perform
relatively well in reducing oscillations of system variables, including
transferred electrical power, changes in generator angular velocity, and
generator terminal voltage. All of the PSSs can stabilize the system under
disturbance; however, the AIS-based PSS performs relatively better than the
Delta w PSS and Delta Pa PSS in terms of its ability to reduce oscillations
and the speed with which it reaches a stable state. |
Keywords: |
Power System Stabilizer (PSS), Artificial Immune System (AIS) Algorithm, Optimal
Tuning, Transient Stability, Damping Oscillation, Multi-Machine Power System. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
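A clonal-selection (CLONALG-style) optimizer of the kind AIS algorithms use can be sketched as below; the second-order damping-ratio objective is a toy stand-in for illustration, not the paper's multi-machine model:

```python
import random

def clonalg_minimize(f, lo, hi, pop=20, n_best=5, clones=5, gens=60, seed=1):
    # Minimal clonal-selection optimizer for one scalar parameter:
    # select the fittest antibodies, clone them with mutation inversely
    # proportional to rank, and inject fresh random antibodies for diversity.
    rng = random.Random(seed)
    ab = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        ab = sorted(ab, key=f)[:pop]                  # selection (elitist)
        for rank, a in enumerate(ab[:n_best]):
            sigma = (hi - lo) * 0.05 / (rank + 1)     # better rank, smaller step
            for _ in range(clones):
                ab.append(min(hi, max(lo, a + rng.gauss(0.0, sigma))))
        ab.extend(rng.uniform(lo, hi) for _ in range(3))  # receptor editing
    return min(ab, key=f)

def tuning_cost(K):
    # Toy PSS tuning target: pick gain K so that the damping ratio of a
    # hypothetical second-order mode, zeta(K) = (0.2 + 0.1*K) / (2*2.0),
    # hits a desired value of 0.7 (optimum at K = 26).
    zeta = (0.2 + 0.1 * K) / (2 * 2.0)
    return (zeta - 0.7) ** 2
```

The clone-and-mutate loop is the "cloning process" the abstract refers to; maximizing damping is expressed here as minimizing the squared distance to a target damping ratio.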
|
Title: |
GOAL-BASED MODELING FOR REQUIREMENT TRACEABILITY OF SOFTWARE PRODUCT LINE |
Author: |
ASAD ABBAS, ISMA FARAH SIDDIQUI, SCOTT UK-JIN LEE |
Abstract: |
Software Product Line (SPL) engineering is extensively used in industry for
rapid development with reuse of resources from domain engineering to
application engineering. For testing the products from domain engineering to
application engineering, traceability of requirements is important. In
sequential product development, it is easy to create links between software
artifacts; however, in SPL, traceability links are difficult to create because
multiple products are derived from the same domain with variation points
according to stakeholders. This paper proposes a framework for traceability
links across SPL processes, i.e., from domain engineering to application
engineering artifacts, using goal-based modeling. The first step is to
identify the variation points in the domain feature model and trace the links
at the implementation level of the SPL platform. The second step is to trace
the links from each domain artifact to application engineering for the
development of final products. We applied our approach to a general SPL
feature model and obtained final products with zero constraint violations. |
Keywords: |
Software Product Line, Feature Model, Requirement Traceability |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
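The zero-constraint-violation result mentioned above can be illustrated with a minimal cross-tree constraint checker for feature-model configurations (the feature names and constraints below are hypothetical):

```python
def constraint_violations(selection, requires=(), excludes=()):
    # Check a product configuration (a set of selected features) against
    # the cross-tree constraints of a feature model: "a requires b" and
    # "a excludes b". Returns a list of violated constraints (empty = valid).
    selection = set(selection)
    v = []
    for a, b in requires:
        if a in selection and b not in selection:
            v.append(("requires", a, b))
    for a, b in excludes:
        if a in selection and b in selection:
            v.append(("excludes", a, b))
    return v
```

Running such a check on every derived product is one concrete way to confirm the "zero constraint violation" property for configurations traced from the domain feature model.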
|
Title: |
A NOVEL PROBABILISTIC BASED FEATURE SELECTION MODEL FOR CREDIT CARD ANOMALY
DETECTION |
Author: |
Y. A. SIVA PRASAD, DR. G. RAMAKRISHNA |
Abstract: |
Due to the increase in online financial applications, fraudulent operations
through online transactions have increased rapidly. Anomaly detection in
credit card transactions has become equally important in many fields in which
the data have high-dimensional attributes. Finding noisy anomalous attributes
using conventional models is inefficient and infeasible when the size and
number of instances are large. In this paper, an optimized probabilistic
feature selection model is implemented for credit card fraud detection, and
efficiently ranked attributes are extracted using a hybrid feature selection
algorithm. Experimental results show that the proposed system detects the
relevant attributes more efficiently than traditional models as far as time
and dimensionality are concerned. |
Keywords: |
Feature selection algorithm, Fraud detection, Markov model, density
distribution. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
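One simple way to rank attributes, in the spirit of the hybrid feature selection described above, is the Fisher score (a generic illustration with invented transaction data, not the authors' probabilistic model):

```python
def fisher_score(values, labels):
    # Fisher score of one numeric feature for binary labels:
    # (difference of class means)^2 / (sum of class variances).
    g0 = [v for v, l in zip(values, labels) if l == 0]
    g1 = [v for v, l in zip(values, labels) if l == 1]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / len(xs)
    denom = var(g0) + var(g1) or 1e-12   # guard against zero variance
    return (mean(g0) - mean(g1)) ** 2 / denom

def rank_features(X, y):
    # Return feature indices ordered most discriminative first, so the
    # top-ranked attributes can be kept and the rest dropped.
    nfeat = len(X[0])
    scores = [fisher_score([row[j] for row in X], y) for j in range(nfeat)]
    return sorted(range(nfeat), key=lambda j: scores[j], reverse=True)
```

Keeping only the top-ranked attributes is what reduces both dimensionality and detection time, the two gains the abstract reports.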
|
Title: |
INTEGRATION OF BALANCED SCORECARD AND FUZZY ANALYTIC NETWORK PROCESS (FANP)
FOR MEASURING PERFORMANCE OF SMALL AND MEDIUM ENTERPRISES (SME) |
Author: |
YENI KUSTIYAHNINGSIH, EZA RAHMANITA, JAKA PURNAMA |
Abstract: |
The purpose of this research is to determine standard, uniform performance
measurement indicators for SMEs in accordance with the needs and conditions in
Bangkalan, Indonesia; to construct and build decision models for
multi-criteria decision making (MCDM) by hybridizing the balanced scorecard
(BSC), the fuzzy analytic network process (FANP), and the Technique for Order
Preference by Similarity to Ideal Solution (TOPSIS); and to implement MCDM to
determine the performance measurement of SMEs in Bangkalan, Indonesia. The
research is based on three main ideas. The first is a fuzzy logic approach,
due to the complexity and lack of clarity in the assessment criteria for
performance measurement indicators. The second is to measure overall
performance according to the balanced scorecard perspectives, namely customer,
financial, internal business, and learning and growth. The third is to rank
all weighted criteria using the TOPSIS method. Based on the integration of the
three methods, fuzzy ANP, balanced scorecard, and TOPSIS, better decisions can
be made in this process. |
Keywords: |
Integration, Fuzzy ANP, Balanced Scorecard, Perspectives, Assessment |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
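The TOPSIS ranking step can be sketched directly (a standard textbook implementation; the decision matrix and weights in the test are invented):

```python
def topsis(matrix, weights, benefit):
    # TOPSIS: score alternatives by relative closeness to the ideal solution.
    # matrix: alternatives x criteria; benefit[j] is True when larger is better.
    ncol = len(weights)
    # vector-normalize each criterion column, then apply the weights
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(ncol)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncol)]
         for row in matrix]
    cols = list(zip(*v))
    ideal = [max(cols[j]) if benefit[j] else min(cols[j]) for j in range(ncol)]
    anti = [min(cols[j]) if benefit[j] else max(cols[j]) for j in range(ncol)]
    scores = []
    for row in v:
        d_pos = sum((row[j] - ideal[j]) ** 2 for j in range(ncol)) ** 0.5
        d_neg = sum((row[j] - anti[j]) ** 2 for j in range(ncol)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))   # closeness coefficient in [0, 1]
    return scores
```

In the integrated method, the criterion weights fed to this function would come from the FANP step, and the criteria themselves from the BSC perspectives.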
|
Title: |
SOLVING A PROBLEM OF RESOURCE-INTENSIVE MODELING OF DECODERS ON MASSIVELY
PARALLEL COMPUTING DEVICES BASED ON VITERBI ALGORITHM |
Author: |
ALEXEY VIKTOROVICH BASHKIROV, ALEXANDER VASILIEVICH MURATOV, OLEG YURIEVICH
MAKAROV, VASILY IVANOVICH BORISOV, KSENIA NIKOLAEVNA LAPSHINA |
Abstract: |
In this paper, we consider the problem of resource-intensive simulation of
error-correcting coding/decoding at the preliminary stages of modern
telecommunication system development. We propose using general-purpose
parallel computing on GPUs (GPGPU) to accelerate the process, and we discuss
the aspects of simulating error-correcting encoding/decoding in heterogeneous
systems. The results of applying this technology to the simulation of
convolutional codec parameters, decoded by the Viterbi algorithm, are given as
well. Another problem, concerning the limited interaction speed with the tail
part of the computing device and random access to memory, is also considered;
we propose a solution that minimizes communication at the host-computing
device level and uses caching. The simulation tools are described in the
paper, including the use of general-purpose GPU computing to reduce the time
required to optimize the noiseless coding system, and thus the development and
implementation time of telecommunication devices. We describe solutions for
studying codec characteristics using massively parallel computing,
distinguished by a simplified initialization of the per-thread
pseudorandom-number generator (PRNG), which ensures high performance with
sufficient calculation accuracy by reducing the number of calls to an external
status register. |
Keywords: |
Parallel Computing, Viterbi Algorithm, Noiseless Coding, OpenCL GPU,
Communication Channels, Heterogeneous System. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
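As background for the Viterbi-decoded convolutional codec discussed above, here is a minimal hard-decision encoder/decoder for the classic rate-1/2, constraint-length-3 (7,5) code (a sequential CPU sketch; the paper's point is accelerating many such simulations in parallel on GPUs):

```python
def conv_encode(bits, s=0):
    # Rate-1/2, constraint-length-3 convolutional encoder, generators (7,5) octal.
    out = []
    for b in bits:
        out += [b ^ (s >> 1) ^ (s & 1),   # generator 111
                b ^ (s & 1)]              # generator 101
        s = ((b << 1) | (s >> 1)) & 3     # shift the new bit into the state
    return out

def viterbi_decode(received):
    # Hard-decision Viterbi decoding over the 4-state trellis of the code above.
    INF = float("inf")
    metrics = [0, INF, INF, INF]          # start in the all-zero state
    paths = [[], [], [], []]
    for t in range(0, len(received), 2):
        r1, r2 = received[t], received[t + 1]
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                o1 = b ^ (s >> 1) ^ (s & 1)
                o2 = b ^ (s & 1)
                ns = ((b << 1) | (s >> 1)) & 3
                m = metrics[s] + (o1 != r1) + (o2 != r2)  # Hamming branch metric
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best]
```

Estimating a bit-error-rate curve means running this decoder over millions of noisy frames, which is exactly the resource-intensive workload the paper offloads to the GPU.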
|
Title: |
A SECURED AND EFFICIENT MULTI-FACTOR BIOMETRIC AUTHENTICATION SCHEME USING PLAN
RECOGNITION TECHNIQUE |
Author: |
NOOR AFIZA MOHD ARIFFIN, NOR FAZLIDA MOHD SANI, ZURINA HANAPI, RAMLAN MAHMOD |
Abstract: |
Authentication is one of the most important parts of security and has become
an essential security feature for network communication. Nowadays, a strong
level of authentication is needed to ensure that a high level of security is
delivered to the application, while still maintaining the desired level of
performance. However, this approach raises challenging issues of efficiency
and security. There have been several previous schemes and proposals related
to multi-factor authentication, but all of these schemes are still vulnerable
to certain types of attack. Furthermore, a more pressing issue for
multi-factor authentication is the high execution time, which leads to a
downfall in overall performance. The objective of this research is to propose
an authentication scheme and to measure its effectiveness based on
authentication time. The scheme uses a plan recognition technique, which is
able to detect and identify the user effectively and to defend against
well-known attacks such as brute-force and dictionary attacks, while running
with a very low execution time. An experiment has been conducted to evaluate
the scheme; the results show that the proposed scheme's processing time is
lower than that of previous schemes, even after additional security features
have been added. |
Keywords: |
Multi-Factor Authentication, Biometric, Plan Recognition, Effectiveness,
Execution Time |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
MODERN STATISTICAL AND LINGUISTIC APPROACHES TO PROCESSING TEXTS IN NATURAL
LANGUAGES |
Author: |
ALEKSANDR EVGENJEVICH PETROV, DMITRII ALEKSANDROVICH SYTNIK |
Abstract: |
Natural language processing (NLP) is a research area that focuses on studying
the methods of computer analysis and synthesis of natural languages. The sources
of information can include not only texts, but also audio and video data. In
this article, we will focus on text mining. The analysis is divided into the
following subtasks: information extraction, tonality analysis, question-answer
systems, etc. In turn, information extraction also includes subtasks: named
entity recognition (NER), relation extraction, extraction of keywords and word
combinations (collocations). The methods of NLP are divided into linguistic
(based on rules and grammars) and probabilistic; there are also hybrid methods
that combine both approaches. The aim of this paper is to provide an overview
of modern approaches to text processing, using the examples of named entity
recognition and identification of the relationships between entities. |
Keywords: |
NLP, Information Extraction, Named Entity Recognition, NER, Relation Extraction,
Text Mining, Statistical Method, Linguistic Method, Machine Learning, Supervised
Learning, Semi-Supervised Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
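A minimal linguistic (rule-based) baseline for the named entity recognition task discussed above simply collects maximal runs of capitalized tokens (a deliberately naive sketch; production NER systems combine such rules with the statistical and hybrid methods the survey covers):

```python
def extract_entity_candidates(text):
    # Naive rule-based NER baseline: named-entity candidates are maximal
    # runs of capitalized tokens; trailing punctuation closes a span.
    ents, run = [], []
    for tok in text.split():
        w = tok.strip(".,;:!?\"'()")
        if w and w[0].isupper():
            run.append(w)
            if tok[-1] in ".,;:!?":      # punctuation ends the entity span
                ents.append(" ".join(run))
                run = []
        else:
            if run:
                ents.append(" ".join(run))
            run = []
    if run:
        ents.append(" ".join(run))
    return ents
```

Such rules over-generate (sentence-initial words are capitalized too), which is precisely why statistical and supervised methods dominate modern NER.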
|
Title: |
E-GOVERNMENT SERVICES IN DEVELOPING COUNTRIES: A SUCCESS ADOPTION MODEL FROM
EMPLOYEES PERSPECTIVE |
Author: |
OMAR AHMED IBRAHIM, NOR HIDAYATI ZAKARIA |
Abstract: |
In government organizations, e-government services have become invaluable tools
through the information they offer in a timely and effective manner. More
specifically, ICTs have become invaluable in enhancing staff abilities to
achieve effective and efficient tasks. In contrast to developed countries that
encounter only limited issues in adopting e-government services, developing
nations face numerous adoption issues from the viewpoint of stakeholder groups.
One aspect of e-government relating to government units and their workers is
government-to-employee (G2E). In the present work, the researcher determined the
factors enhancing e-government adoption in a developing nation. These factors
include website quality, awareness, computer self-efficacy, capability of IT
workforce, and training, incorporated into UTAUT, a model that has been widely
employed in the literature. A survey for this study was
conducted and analysis was performed on the responses received from 42 Iraqi
employees. The obtained data was analyzed with the help of Smart PLS 2.0
software. This study's proposed model was confirmed and validated by using data
gathered from respondents who are experienced in the use of e-government
services. The analysis findings showed that the proposed relationships were all
significant and supported. The study also discusses its limitations and offers
recommendations for future studies. |
Keywords: |
E-government Services, G2E, Adoption, UTAUT, Developing Countries |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
THE MODERATING EFFECT OF ISLAMIC WORK ETHICS ON THE RELATIONSHIP BETWEEN
KNOWLEDGE MANAGEMENT CAPABILITIES AND ORGANIZATIONAL PERFORMANCE AT THE PRIVATE
HIGHER EDUCATION INSTITUTIONS IN OMAN |
Author: |
AIDA A.AZIZ AHMED AL-ARIMI, MASLIN MASROM, NIK HASNAA NIK MAHMOOD |
Abstract: |
Knowledge management capabilities are recognized as an important means for
sustaining and improving organizational performance of the private higher
education institutions. The evaluation of knowledge management infrastructure
capabilities and knowledge management process capabilities has become important
since it provides a reference for directing the private higher education
institutions to enhance their organizational performance. The Islamic work
ethics may have a moderating effect on the relationship between knowledge
management infrastructure capabilities, knowledge management process
capabilities and organizational performance. This paper provides an
understanding of relationship between knowledge management infrastructure
capabilities, knowledge management process capabilities, organizational
performance and the Islamic work ethics. Additionally, it provides a new
framework that helps the private higher education institutions to assess their
knowledge management infrastructure capabilities, knowledge management process
capabilities, organizational performance and the Islamic work ethics. The
research findings showed that the level of knowledge management infrastructure
and process capabilities at the private higher education institutions was high,
and indicated that the knowledge management process capabilities had a
positive, significant causal effect on organizational performance, whereas the
relationship between knowledge management infrastructure capabilities and
organizational performance showed a non-significant causal effect. Finally,
the Islamic work ethics had a
significant moderating effect on the relationship between knowledge management
infrastructure and process capabilities and organizational performance. |
Keywords: |
Knowledge Management, Knowledge Management Infrastructure Capabilities,
Knowledge Management Process Capabilities, Organizational Performance, Islamic
Work Ethics, Private Higher Education Institutions. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
A HIDDEN MARKOV MODEL TO PREDICT HOT SOCKET ISSUE IN SMART GRID |
Author: |
ISMA FARAH SIDDIQUI, ASAD ABBAS, SCOTT UK-JIN LEE |
Abstract: |
Smart meters collect sensor data at the distribution ends of a smart grid. The
collection process performs non-stop data bundling, and high resistance gives
rise to the hot socket issue. This causes abnormal generation of the dataset
and severely affects the operational aspects of the smart grid. In this paper,
we present a model for Smart Meter Abnormal Data Identification (SMADI) over
the communication bridge between the smart grid repository and the
distribution end units, which redirects abnormal samples to an HBase error
repository using a message propagation strategy. SMADI predicts possible hot
socket smart meter nodes through a Hidden Markov Model (HMM) and generates a
sequence of possible hot socket smart meters over a time interval. The
simulation results show that SMADI precisely collects error samples and
reduces the complexity of performing data analytics over the giant data
repository of a smart grid. Our model predicts hot socket smart meter nodes
efficiently and avoids the computation cost of performing error analytics over
the smart grid repository. |
Keywords: |
IoT, Smart meter, Smart grid, HBase, Hot Socket. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
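The HMM-based prediction described above can be illustrated with the standard Viterbi algorithm for hidden state inference (the two states and all probabilities below are invented for illustration, not SMADI's trained parameters):

```python
def hmm_viterbi(obs, states, start_p, trans_p, emit_p):
    # Most likely hidden state sequence for an observation sequence,
    # e.g. inferring "normal" vs "hot" (hot socket) from meter readings.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor state for reaching s at time t
            prev = max(states, key=lambda p: V[t - 1][p] * trans_p[p][s])
            V[t][s] = V[t - 1][prev] * trans_p[prev][s] * emit_p[s][obs[t]]
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):   # trace the best path backwards
        path.append(back[t][path[-1]])
    return path[::-1]
```

Labelling the time steps at which the inferred state switches to "hot" yields exactly the kind of predicted hot-socket sequence the abstract describes.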
|
Title: |
APPLICATION OF CONFIDENCE RANGE ALGORITHM IN RECOGNIZING USER BEHAVIOR THROUGH
EPSB IN CLOUD COMPUTING |
Author: |
MOHANAAD SHAKIR, ASMIDAR BIT ABUBAKAR, YOUNUS YOUSOFF, MOSTAFA AL-EMRAN,
MAYTHAM HAMMOOD |
Abstract: |
Within the security scope, authentication is considered a core model for
controlling access to any system, and the password is one of the most
significant mechanisms for distinguishing the authorized user from others.
However, passwords face many problems, such as spoofing and man-in-the-middle
(MitM) attacks. When an unauthorized user obtains the correct password, that
user is able to access the data and change the password, which causes a
significant loss of effort and cost. Similarly, a hacker who does not have the
password may try to penetrate the system by predicting a set of words. In
fact, both authorized users and hackers sometimes input a wrong password, but
an authorized user may have only one or two wrong characters, while the hacker
inputs an entirely wrong password. The aim of this paper is to establish an
algorithm under the name "Confidence Range". The main task of this algorithm
is to monitor all activities associated with the password, in terms of time,
error, and style, for the authorized user in order to recognize any suspicious
activity. For that purpose, a unique Electronic Personal Synthesis Behavior
(EPSB) is generated for the authorized user by the application of the
confidence range algorithm. |
Keywords: |
Information System Security, Data Security, Hybrid Cloud Computing, Confidence
Range (CR), Data Classification, Electronic Personal Synthesis Behavior (EPSB) |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
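The distinction above, an authorized user mistyping one or two characters versus a hacker entering a wholly wrong password, can be sketched with Levenshtein edit distance (the paper's confidence range also covers time and style; this sketch covers only the error dimension, and the slack value is illustrative):

```python
def edit_distance(a, b):
    # Levenshtein distance via row-by-row dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def classify_attempt(attempt, password, slack=2):
    # Within the "confidence range": a near-miss looks like the authorized
    # user mistyping; a distant attempt looks like a guessing attack.
    if attempt == password:
        return "accept"
    if edit_distance(attempt, password) <= slack:
        return "suspect-typo"
    return "suspicious"
```

A real system would never hold the plaintext password for such a comparison; this is purely a sketch of the distance idea behind the classification.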
|
Title: |
CURRENT CHALLENGES AND CONCEPTUAL MODEL OF GREEN AND SUSTAINABLE SOFTWARE
ENGINEERING |
Author: |
KOMEIL RAISIAN, JAMAIAH YAHAYA, AZIZ DERAMAN |
Abstract: |
Software is a fundamental component of a rapidly advancing technological
society. Software engineering is the application of systematic, disciplined,
quantifiable methodologies to the development, operation, and maintenance of
software, together with the study of these methodologies; in practice, it
means applying engineering science to software. Sustainability is becoming an
essential topic in information technology, both as information technology's
commitment to safeguarding our future and as a growing market segment. The
issue of sustainability was not duly accounted for in the conventional, older
software engineering field. Software engineers deal with particular themes
that need to take sustainability into account, for instance green IT,
efficient algorithms, smart grids, agile practices, and knowledge management,
yet there is no thorough understanding of the idea of sustainability or of
whether it can be incorporated into software engineering. Information and
communication technology strongly affects sustainable development because of
its rising demand for energy and for the resources required to produce
hardware and software units. The search technique yielded 374 papers from 11
different databases; after applying the exclusion criteria, the set was
reduced to 97 papers that were clearly related to the models characterized for
performing a structured survey. The purpose of the current study is to
identify recent issues in green software engineering, examine the
sustainability aspect of creating green software products, and render a
conceptual model of a sustainable software engineering product that becomes
even greener. Consequently, we recommend a technique to incorporate
sustainability into the product life cycle. A conceptual model is then
rendered demonstrating the consolidated life cycles of a sustainable product
and the principal sustainability measurement dimensions, such as energy or
information efficiency, low cost, and human health. |
Keywords: |
Green Software Engineering, Sustainability and Sustainability Dimensions,
Software Product Life Cycle, Hardware, Conceptual Model |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
SCALE-SPACE APPROACH FOR CHARACTER SEGMENTATION IN SCANNED IMAGES OF ARABIC
DOCUMENTS |
Author: |
NOUREDDINE EL MAKHFI, OMAR EL BANNAY |
Abstract: |
Character segmentation is an important stage in the optical character
recognition of documents. In this article, we present a new method for
segmenting Arabic documents into text characters. Our method is based on scale
space to retrieve the blobs forming each character in the word image. These
blobs are detected at appropriate scales to recover the characters and cut the
junctions between the text characters. The experimental results reveal that the
proposed method is encouraging despite some subdivision of characters, which is
mainly produced by excessive overlap between the characters in words. This
subdivision can be corrected by adding new steps to merge the character
fractions in the recognition phase. |
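The scale-space blob detection underlying the method can be sketched with a scale-normalised Laplacian-of-Gaussian response; this is a minimal illustration of scale selection on a synthetic blob (the image size, disc radius and sigma grid are invented for the example), not the authors' segmentation pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic "stroke blob": a bright disc of radius 6 on a dark background.
img = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 6 ** 2] = 1.0

# For each sigma, the scale-normalised response sigma^2 * |LoG| at the blob
# centre peaks when sigma matches the blob scale (near radius / sqrt(2)).
sigmas = np.arange(1.0, 10.0, 0.5)
responses = [(s, (s ** 2) * abs(gaussian_laplace(img, s)[32, 32])) for s in sigmas]
best_sigma = max(responses, key=lambda t: t[1])[0]
print(best_sigma)
```

Detecting blobs at their characteristic scales in this way is what lets the method separate touching characters at junctions.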
Keywords: |
Digital Image/Text; Scale Space; Cursive Writing; Character Segmentation; Arabic
OCR. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
TIME SERIES FORECASTING FOR OUTDOOR TEMPERATURE USING NONLINEAR AUTOREGRESSIVE
NEURAL NETWORK MODELS |
Author: |
SANAM NAREJO, EROS PASERO |
Abstract: |
Weather forecasting is a challenging time series forecasting problem because of
its dynamic, continuous, data-intensive, chaotic and irregular behavior. At
present, numerous time series forecasting techniques exist and are widely
adopted. However, competitive research is still going on to improve the methods
and techniques for accurate forecasting. This research article presents time
series forecasting of a meteorological parameter, temperature, with a NARX
(Nonlinear Autoregressive with eXogenous input) based ANN (Artificial Neural
Network). In this work, several time-series-dependent recurrent NARX-ANN models
are developed and trained with dynamic parameter settings to find the optimum
network model for the desired forecasting task. Network performance is analyzed
on the basis of its Mean Square Error (MSE) over the training, validation and
test data sets. To perform forecasting over the next 4-, 8- and 12-step
horizons, the model with the lowest MSE is chosen as the most accurate
temperature forecaster. Unlike one-step-ahead prediction, multi-step-ahead
forecasting is a more difficult and challenging problem to solve because of its
additional underlying complexity. Thus, the empirical findings in this work
provide valuable suggestions for the parameter settings of the NARX model,
specifically the selection of hidden layer size and autoregressive lag terms,
for appropriate multi-step-ahead time series forecasting. |
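The lag structure behind a NARX-style forecaster can be illustrated with a plain linear autoregression on lagged inputs; this toy sketch (a synthetic 24-hour temperature cycle, and a lag count chosen for the example) stands in for the paper's neural network and shows the feedback loop used for multi-step-ahead forecasting:

```python
import numpy as np

# Toy temperature series: a daily cycle plus noise (stand-in for sensor data).
rng = np.random.default_rng(0)
t = np.arange(500)
y = 15 + 10 * np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(500)

# Autoregressive design matrix: predict y[t] from the previous `lags` values.
lags = 24
X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
target = y[lags:]
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), target,
                           rcond=None)

# Multi-step (4-step-ahead) forecast: feed each prediction back as an input,
# exactly the closed-loop mode used for multi-step NARX forecasting.
window = list(y[-lags:])
for _ in range(4):
    window.append(coef[0] + np.dot(coef[1:], window[-lags:]))
forecast = window[lags:]
print(forecast)
```

The recursive feedback is why multi-step forecasting is harder than one-step prediction: each step compounds the error of the previous one, which is also why the lag count and model capacity matter so much.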
Keywords: |
Artificial Neural network (ANN), multi-step ahead forecasting, Nonlinear
Autoregressive (NARX) model, Outlier Detection, Time Series Prediction,
Temperature forecasting. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
IDENTIFY AND CLASSIFY VIBRATION FAULT BASED ON ARTIFICIAL INTELLIGENCE
TECHNIQUES |
Author: |
MONEER ALI LILO, L. A. LATIFF, AMINUDIN BIN HAJI ABU |
Abstract: |
Steam turbines (ST) need to be protected from damaging faults in the event that
they enter a danger zone. Examples of such faults include vibration, thrust, and
eccentricity. Vibration faults represent a particular challenge to designers, as
they can cause massive damage and their fault signals are rather complex.
Research in the field aims to prevent or diagnose vibration faults early in
order to reduce maintenance costs and improve the reliability of machine
production. This work aims to diagnose and classify vibration faults by
utilizing several Artificial Intelligence (AI) and signal processing schemes,
such as a Sugeno fuzzy logic FIS (FLS), a Back Propagation Neural Network (BPNN)
hybridized with FL-Sugeno (NFS), and a BPNN hybridized with an FL-Mamdani FIS
(NFM). The fault signal and the FL and NN designs were implemented in MATLAB.
The schemes are compared on their ability to feed the output signal to the
control system without disturbing system behavior. The results showed that the
NFS scheme is able to generate linear, stable signals that can be fed back to
modify the main demand of the ST protection system. This work concludes that
hybridizing more than one AI technique improves the reliability of the
protection system and generates smooth signals proportional to the fault level,
which can then be used to control the speed and generated power in order to
prevent the growth of vibration faults. |
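A zero-order Sugeno fuzzy inference step, of the kind the FLS and NFS schemes build on, can be sketched in a few lines; the membership ranges (in mm/s) and the rule outputs below are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_severity(v):
    """Zero-order Sugeno FIS: weighted average of constant rule outputs."""
    rules = [(tri(v, -1, 0, 4), 0.0),    # low vibration    -> severity 0.0
             (tri(v, 2, 5, 8), 0.5),     # medium vibration -> severity 0.5
             (tri(v, 6, 10, 15), 1.0)]   # high vibration   -> severity 1.0
    num = sum(w * z for w, z in rules)
    den = sum(w for w, z in rules)
    return num / den if den else 0.0

print(sugeno_severity(5.0))   # squarely "medium"
print(sugeno_severity(10.0))  # squarely "high"
```

Because the Sugeno output is a weighted average of constants, it varies smoothly and monotonically with the input, which matches the abstract's requirement of a smooth signal proportional to the fault level.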
Keywords: |
Artificial Intelligence Techniques, Signal Processing, Fuzzy Logic, Neural
Network, Fault Identification. |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
AN EFFICIENT METHOD TO CONSTRUCT DIAGONAL PERMUTATION SHIFT (DPS) CODES FOR SAC
OCDMA SYSTEMS |
Author: |
HASSAN YOUSIF AHMED, Z. M. GHARSSELDIEN, S. A. ALJUNID |
Abstract: |
This work introduces an efficient method to construct a newly proposed code,
named the diagonal permutation shifting (DPS) code, for spectral-amplitude-coding
(SAC) optical code-division multiple-access (OCDMA) systems. The DPS code is
derived and constructed from well-known prime codes and certain matrix
operations. The proposed code possesses several desirable properties: the
cross-correlation (CC) between any two sequences is always equal to one, the
code length is short, and the transmitter-receiver structure is straightforward
to design. In particular, DPS is capable of removing the impact of multiple
access interference (MAI) and further alleviating phase-induced intensity noise
(PIIN). Numerical results demonstrate a noticeable improvement for DPS over
previously reported codes and a considerable gain in system performance. |
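The fixed cross-correlation property the abstract claims can be checked directly: for unipolar SAC codes, the in-phase cross-correlation is simply the number of chip positions where two codewords both carry a one. The two codewords below are toy sequences chosen to overlap in exactly one chip, not the DPS construction itself:

```python
import numpy as np

def in_phase_cc(a, b):
    """In-phase cross-correlation of two unipolar (0/1) code sequences:
    the count of chip positions where both codes carry a '1'."""
    return int(np.dot(a, b))

# Two illustrative unipolar codewords of length 9 and weight 3 (toy values).
c1 = np.array([1, 0, 1, 0, 0, 1, 0, 0, 0])
c2 = np.array([0, 1, 1, 0, 1, 0, 0, 0, 0])

print(in_phase_cc(c1, c2))  # overlap in exactly one chip -> 1
print(in_phase_cc(c1, c1))  # autocorrelation peak equals the code weight -> 3
```

Keeping the pairwise CC fixed at one is what allows a SAC receiver with complementary subtraction to cancel MAI exactly, since every interferer contributes the same known amount.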
Keywords: |
DPS, SAC, OCDMA, MAI, In-phase CC |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
COMPARING THE SIMILARITIES MEASUREMENT OF FACE EXPRESSION-RECOGNITION BASED ON
2DLDA MODIFICATION METHOD |
Author: |
FITRI DAMAYANT, WAHYUDI SETIAWAN, SRIHERAWATI, AERI RACHMAD |
Abstract: |
Facial expression recognition is an extension of face recognition within
pattern recognition (feature recognition). Research on facial expression
recognition is useful in many fields, for example human-computer interaction,
where the computer recognizes the facial expression of the user and then
programmatically performs the instructions appropriate to that expression.
Facial expressions can also be used as a measure of customer satisfaction with
public services. In this study, facial expression recognition applications were
built to measure customer satisfaction, with feature extraction using Modified
Two-Dimensional Linear Discriminant Analysis (Modified 2DLDA) to obtain input
characteristics for each face. The Modified 2DLDA method is a development of
2DLDA in which similarity can be measured using Euclidean Distance, Manhattan
Distance, or the Two-Dimensional Correlation Coefficient. These combinations
were tested on the JAFFE database, which contains Japanese female facial
expressions. The highest recognition rates were 88.57% using Euclidean Distance,
89.92% using Manhattan Distance, and 90.48% using the Two-Dimensional
Correlation Coefficient. |
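The three similarity measures compared in the paper have direct definitions on the matrix-valued features that 2DLDA produces; the 2x2 matrices below are toy values used only to exercise the formulas:

```python
import numpy as np

def euclidean(A, B):
    """Frobenius (matrix Euclidean) distance between feature matrices."""
    return float(np.sqrt(np.sum((A - B) ** 2)))

def manhattan(A, B):
    """Entry-wise Manhattan (L1) distance between feature matrices."""
    return float(np.sum(np.abs(A - B)))

def corr2d(A, B):
    """Two-dimensional correlation coefficient: 1.0 means identical shape."""
    a, b = A - A.mean(), B - B.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 2.5], [3.0, 4.5]])
print(euclidean(A, B), manhattan(A, B), corr2d(A, B))
```

Note the polarity difference: the two distances are minimized for the best match, while the correlation coefficient is maximized, which matters when ranking candidate classes.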
Keywords: |
Facial Expressions, Euclidean, Manhattan, Two Dimensional Correlation
Coefficient, Modified 2DLDA |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
A SOFTWARE-HARDWARE OPTIMIZER MODEL FOR OPTIMIZED DESIGN OF THINGS IN AGENTS OF
THINGS |
Author: |
ANAS M MZAHM, MOHD SHARIFUDDIN AHMAD, ALICIA Y. C. TANG, AZHANA AHMAD |
Abstract: |
The machines, or things, in the Internet of Things (IoT) lack self-reasoning
capability, which limits their potential to provide value-added services for
humans. Consequently, we introduce the concept of Agents of Things (AoT) as an
extension of the IoT, in which the things are embedded with self-reasoning
intelligent software agents that provide value-added services for humans. Two
crucial issues in designing intelligent things are determining what value-added
services they should offer and the level of reasoning ability required for those
services. Consequently, we need to find an optimum match between the hardware
capabilities of the things and the reasoning abilities of their corresponding
software agents, so that they deliver value-added services on top of performing
their basic IoT functions.
In this paper, we present the results of a software analysis represented by a
software spectrum and a hardware analysis represented by a hardware spectrum. We
then link these two spectra to form a structured hardware-software optimizer for
a things design model, which we call the Structured Hardware-Software Optimizer,
or SHOM. We demonstrate the use of SHOM in designing optimized things in a
simulated traffic scenario that manifests the AoT concept. |
Keywords: |
Internet of Things; Agents of Things; Hardware Analysis; Software Analysis;
Structured Hardware-Software Optimizer; Software Hardware Optimizer Model;
Value-added Services; Optimum Things; |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
Title: |
APPLICATION OF THE BAYES RULE FOR ENHANCING THE PERFORMANCE OF THE BAGGING
ENSEMBLE TO DETECT ABNORMAL MOVEMENTS ONBOARD AN AIRCRAFT |
Author: |
ALI H. ALI |
Abstract: |
This paper presents a novel approach to the detection of abnormal passenger
movements onboard an aircraft. First, it uses simple indicators, the total
passenger movements along the aisle and in the seats, as classification
features. Second, five machine learning classifiers are studied, namely decision
trees, SVM with a Gaussian kernel, and the bagging, boosting and RUSBoost
ensemble classifiers. ROC curves, confusion matrices and McNemar tests are
reported. Finally, we propose a method of enhancing the performance of the
bagging ensemble using Bayes rule. The bagging ensemble is found to have a
classification accuracy of about 65%, which the Bayes rule method increases to
about 89.2%. The performance results of each method are reported and
discussed. |
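One standard way to apply Bayes rule to a classifier's output, broadly consistent with the abstract though the paper's exact formulation is not given here, is to turn validation-set confusion counts into posteriors over the true class given the predicted label; all the counts below are invented for illustration:

```python
import numpy as np

# Validation-set confusion counts for a hypothetical two-class
# "normal vs abnormal movement" bagging ensemble
# (rows: true class, columns: predicted label).
conf = np.array([[80.0, 20.0],   # true normal
                 [15.0, 35.0]])  # true abnormal

prior = conf.sum(axis=1) / conf.sum()                 # P(class)
likelihood = conf / conf.sum(axis=1, keepdims=True)   # P(pred | class)

def posterior(pred_label):
    """Bayes rule: P(class | predicted label) from validation statistics."""
    joint = prior * likelihood[:, pred_label]
    return joint / joint.sum()

print(posterior(1))  # [P(normal | pred=abnormal), P(abnormal | pred=abnormal)]
```

Re-scoring the ensemble's raw labels with such posteriors corrects for class imbalance and for the classifier's known error pattern, which is one plausible mechanism behind the reported accuracy gain.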
Keywords: |
Machine Learning; Ensemble Classifiers; Aviation Safety; Bayes Rule; Decision
Support System |
Source: |
Journal of Theoretical and Applied Information Technology
31st December 2016 -- Vol. 94. No. 2 -- 2016 |
Full
Text |
|
|
|