|
Submit Paper / Call for Papers
The journal receives papers on a continuous basis and considers articles
from a wide range of Information Technology disciplines, from basic research
to the most innovative technologies. Please submit your papers electronically
through our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in an upcoming issue. This journal uses a blinded review process;
please include all your personally identifiable information in the
manuscript when submitting it for review, and we will redact the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of Theoretical and Applied Information Technology
February 2018 | Vol. 96 No. 4 |
Title: |
REAL TIME FACIAL EXPRESSION RECOGNITION IN THE PRESENCE OF ROTATION AND PARTIAL
OCCLUSIONS |
Author: |
FARHAD GOODARZI, M. IQBAL SARIPAN, MOHD HAMIRUCE MARHABAN, FAKHRUL ZAMAN BIN
ROKHANI |
Abstract: |
In the real world, occlusion is an important problem in the domain of emotion
recognition. This study investigates occlusion in real time using a fully
automatic system. Features are extracted from salient areas of the face using
both texture and geometric information. The system uses a trained
backpropagation neural network to recognize the seven basic emotions.
Experiments were conducted on normal faces and on faces with occlusions of the
forehead, eyes, and mouth. The proposed system detected emotions with high
recognition rates. Results are presented for near-frontal and multi-view faces
using the UPM3DFE and BU3DFE 3D facial expression databases. |
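The classifier described above is a backpropagation neural network. A minimal backpropagation sketch, reduced to a single sigmoid neuron on an invented two-feature binary task (the paper's network, facial features, and seven-class output are far larger):

```python
# Minimal backpropagation sketch: one sigmoid neuron trained by
# stochastic gradient descent on a toy 2-feature binary task.
# Purely illustrative -- the data and learning rate are invented.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# toy (feature_vector, label) pairs standing in for facial features
data = [([0.0, 1.0], 0), ([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.1, 0.8], 0)]
w, b, lr = [0.0, 0.0], 0.0, 1.0

for _ in range(500):                      # epochs of backprop updates
    for x, y in data:
        out = sigmoid(w[0]*x[0] + w[1]*x[1] + b)
        err = out - y                     # dLoss/dPreactivation for log-loss
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

preds = [round(sigmoid(w[0]*x[0] + w[1]*x[1] + b)) for x, _ in data]
print(preds)  # matches the labels: [0, 1, 1, 0]
```

The same weight-update rule, applied layer by layer via the chain rule, is what a full backpropagation network uses.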
Keywords: |
Face Emotion Recognition, Neural Network, Backpropagation, Feature Extraction,
Occlusion, 3DFE |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
A BAYESIAN NETWORK APPROACH TO MINE SPATIAL DATA CUBE |
Author: |
MIDOUN MOHAMMED, BELBACHIR HAFIDA |
Abstract: |
Spatial data mining is an extension of data mining that considers the
interactions in space. It involves various techniques and methods in various
areas of research. It takes into account the specificities of spatial
information such as spatial relationships that can be topological, metric or
directional. These relationships are implicit and difficult to represent. A
Bayesian network is a graphical model that encodes causal probabilistic
relationships among variables of interest; it has powerful representation and
reasoning abilities and provides an effective means for spatial data mining.
Moreover, spatial data cubes allow storage and exploration of spatial data. They
support spatial, non-spatial and mixed dimensions. A spatial dimension may
contain vector and raster data. The spatial hierarchies can represent
topological relationships between spatial objects. We propose to use Bayesian
networks for knowledge discovery in spatial data cubes. The goal of our approach
is first to consider spatial relationships in the data mining process, and
secondly to benefit from the strength of the data warehouses to apply spatial
data mining on different aggregation levels according to the topological
relations between spatial data. In this article, we give a state of the art on
spatial data mining and propose a framework for data mining in spatial data
cubes, using Bayesian networks. The proposed case study shows that our
approach confirms the results observed in the field and offers an effective
way to take the specificities of spatial data into account in the spatial data
mining process. |
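As a toy illustration of how a Bayesian network encodes probabilistic relationships and supports reasoning (the variables and probability values below are invented for the example, not taken from the paper's case study):

```python
# Toy two-node Bayesian network: NearRiver -> Flooded.
# Structure and conditional probabilities are illustrative assumptions.
p_near_river = 0.3                        # P(NearRiver = true)
p_flood_given = {True: 0.4, False: 0.05}  # P(Flooded=true | NearRiver)

def p_joint(near, flooded):
    """Joint probability, factorized along the network's single edge."""
    p_n = p_near_river if near else 1 - p_near_river
    p_f = p_flood_given[near] if flooded else 1 - p_flood_given[near]
    return p_n * p_f

# Inference by enumeration: P(NearRiver=true | Flooded=true)
num = p_joint(True, True)
den = num + p_joint(False, True)
print(round(num / den, 3))  # prints 0.774
```

Scaling this factorization and enumeration to many spatial variables is exactly what Bayesian-network mining over a data cube exploits.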
Keywords: |
Spatial data mining, Bayesian networks, Spatial data cube, Spatial
aggregation, Spatial Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
CLUSTERING FAILED COURSES OF ENGINEERING STUDENTS USING ASSOCIATION RULE MINING |
Author: |
ROSEMARIE M. BAUTISTA |
Abstract: |
In today's world, the fast-paced changes in technology and the growing volume
of organizational data in almost all domains, including academe, are
remarkable. This, coupled with the aspiration to gain competitive advantage,
necessitates the utilization of data mining. This paper applies the Knowledge
Discovery in Databases process of Fayyad and presents, in a methodological
way, the steps performed towards finding associations between courses failed
by engineering students. It started with the preparation of the data, moved to
its proper transformation for data mining, and concluded with data
interpretation and evaluation. Using association rule mining through the
Apriori algorithm, rules were extracted from the database. The statistical
significance and the strength of each rule were analyzed using three measures
of usefulness: lift, support, and confidence. All the rules generated have a
positive correlation; that is, the relationships between the consequent and
the antecedent of each rule are not due to chance. The overall output of the
study is expected to offer viable results that may be used by administrators,
academic advisors, and curriculum planners in devising worthwhile strategies
such as improving teaching methodology, restructuring the curriculum,
modifying course prerequisites, or developing supplemental activities for
students. |
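The three rule-quality measures named in the abstract can be computed directly. A sketch on invented toy data (the rule and course names are hypothetical, not results from the study):

```python
# Computing support, confidence, and lift for a hypothetical rule
# {Calculus} -> {Physics} over toy transactions, where each
# "transaction" is one student's set of failed courses.
transactions = [
    {"Calculus", "Physics"},
    {"Calculus", "Physics", "Statics"},
    {"Calculus"},
    {"Physics"},
    {"Statics"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent, consequent = {"Calculus"}, {"Physics"}
supp = support(antecedent | consequent, transactions)   # P(A and C)
conf = supp / support(antecedent, transactions)         # P(C | A)
lift = conf / support(consequent, transactions)         # conf / P(C)

print(supp, conf, lift)  # lift > 1 indicates positive correlation
```

Here supp = 0.4, conf = 2/3, and lift ≈ 1.11; a lift above 1 is what the abstract means by a rule whose antecedent-consequent relationship is not due to chance.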
Keywords: |
Data Mining, Association Rule Mining, Market Basket Analysis, Knowledge
Discovery in Databases, Educational Data Mining |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
ENGLISH SENTIMENT CLASSIFICATION USING A GOWER-2 COEFFICIENT AND A GENETIC
ALGORITHM WITH A FITNESS-PROPORTIONATE SELECTION IN A PARALLEL NETWORK
ENVIRONMENT |
Author: |
DR.VO NGOC PHU, DR.VO THI NGOC TRAN |
Abstract: |
We have studied the data mining and natural language processing fields for
many years; there are many significant relationships between the two.
Sentiment classification has made crucial contributions to many different
fields in everyday life, such as political activities, commodity production,
and commercial activities. A new model using a Gower-2 Coefficient (HA) and a
Genetic Algorithm (GA) with a fitness function (FF) based on
Fitness-proportionate Selection (FPS) has been proposed for sentiment
classification. This can be applied to big data. The GA can process many bit
arrays, so it saves a lot of storage space; we do not need large amounts of
storage to hold big data. Firstly, we create many sentiment lexicons for our
basis English sentiment dictionary (bESD) by using the HA through the Google
search engine with the AND and OR operators. Next, according to the sentiment
lexicons of the bESD, we encode 7,000,000 sentences of our training data set,
comprising 3,500,000 negative and 3,500,000 positive English sentences, into
bit arrays in a small storage space. We also encode all sentences of the
8,000,000 documents of our testing data set, comprising 4,000,000 positive and
4,000,000 negative English documents, into bit arrays in the same way. We use
the GA with the FPS to cluster one bit array (corresponding to one sentence)
of one document of the testing data set into either the bit arrays of the
negative sentences or the bit arrays of the positive sentences of the training
data set. The sentiment classification of one document is based on the results
of the sentiment classification of its sentences. We tested the proposed model
in both a sequential environment and a distributed network system and achieved
88.12% accuracy on the testing data set. The execution time of the model in
the parallel network environment is faster than in the sequential system. The
results of this work can be widely used in applications and research on
English sentiment classification. |
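Fitness-proportionate selection, the GA selection scheme named above, is commonly implemented as a roulette wheel. A generic sketch (the toy bit arrays and fitness values are illustrative assumptions, not the paper's data):

```python
# Minimal fitness-proportionate (roulette-wheel) selection sketch.
import random

def fitness_proportionate_select(population, fitnesses, rng):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if pick <= running:
            return individual
    return population[-1]  # guard against floating-point round-off

rng = random.Random(42)
population = ["0101", "1100", "1111", "0000"]   # toy bit arrays
fitnesses  = [1.0, 2.0, 6.0, 1.0]               # "1111" is fittest

# With fitness 6/10, "1111" should be chosen most often.
counts = {p: 0 for p in population}
for _ in range(10_000):
    counts[fitness_proportionate_select(population, fitnesses, rng)] += 1
print(counts)
```

Selection pressure stays proportional rather than winner-take-all, which is why FPS preserves diversity across GA generations.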
Keywords: |
English Sentiment Classification; Distributed System; Gower-2 Similarity
Coefficient; Cloudera; Hadoop Map And Hadoop Reduce; Genetic Algorithm;
Fitness-Proportionate Selection |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
THE IMPACT OF DEMOGRAPHIC FACTORS AND VISUAL AESTHETICS OF MOBILE APPLICATION
INTERFACE ON INTENTION TO USE MOBILE BANKING IN JORDAN |
Author: |
MALIK KHLAIF GHARAIBEH, MUHAMMAD RAFIE MOHD ARSHAD |
Abstract: |
The current study examined the effect of demographic profiles and visual
aesthetics of mobile application interface on the intention of using mobile
banking services among clients living Jordan. The proposed research model was
based on three aesthetics from human compute interaction researches. We used
SPSS version 22 to analyse the data which was composed of 579 questionnaires
from different cities in Jordan using the convenience sampling technique. This
study empirically presents that the intention of using mobile banking was
positively and significantly influenced by the simplicity of mobile application
interface and colourfulness of mobile application interface. Meanwhile,
craftsmanship of mobile application interface was found to be insignificantly
affected by the intention to use. The results also showed that the intention to
use is statistically impacted by internet usage frequency. In contrast, age and
familiarity with mobile applications have not significant effect on intention to
use mobile banking. There are several factors that bring efficiency to the
intention to use mobile banking. Prior studies investigated the relationship
between these variables and the adoption of mobile banking services. However,
none has paid attention on the impact of visual aesthetics on the intention to
use mobile banking services. Findings of the current study prove that the
proposed research model is a comprehensive to study the acceptance of mobile
banking in Jordan. Overall, the results indicated the appropriateness of
fundamental elements of visual aesthetics model in mobile banking adoption
context. |
Keywords: |
Visual Aesthetics; Simplicity Of Mobile Application Interface; Mobile Banking;
Jordan. |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
HILBERT SPACE RELATIONAL SCATTERED DISTANCE CLUSTERING FOR DENSELY POPULATED AND
SPARSELY DISTRIBUTED HIGH DIMENSIONAL DATA OBJECTS |
Author: |
R.PUSHPALATHA, Dr.K.MEENAKSHI SUNDARAM |
Abstract: |
Clustering high-dimensional data is the process of grouping similar data drawn
from large databases; it is therefore an essential and significant issue in
both machine learning and data mining. However, clustering high-dimensional
data suffers from low accuracy, and the quality of the clustering algorithm is
reduced because data objects from a variety of clusters lie in different
subspaces consisting of dissimilar groupings of dimensions. Different
clustering algorithms have been designed to address these difficulties, yet
cluster objects remain hidden in subspaces because of sparseness and low
dimensionality. In order to evaluate both sparsely distributed and densely
populated data objects in any given plane, a Hilbert Space Relational
Scattered Distance Clustering (HS-RSDC) technique is introduced. HS-RSDC uses
Controlled Effort Boundary Operations, namely UNION_BOUND, INTERSECT_BOUND and
PARTITION_BOUND, to improve clustering accuracy. This helps to improve the
quality of multi-space data object mining in various real-world clustering
applications. The Hilbert-space relational cluster objects are processed to
produce accurate clusters by reducing subspaces in the data plane. After that,
scattered distance measures are employed to calculate the distance to the
geometric median. Finally, the HS-RSDC boundary computation operations
associate unlabeled data objects with the most appropriate cluster using a
relational object number. The relational object number is assigned internally
and globally in the HS-RSDC method to resolve unlabelled data objects and to
discover different types of correlation events over the cluster objects.
HS-RSDC handles the boundary overhead and improves the efficiency of user
pruning queries. By minimizing the number of intermediate pruning steps, the
total traversal length of a data object search is reduced. Experimental
results show that the proposed HS-RSDC method improves performance in terms of
clustering accuracy, clustering time, and space complexity. |
Keywords: |
Data Mining, High Dimensional Data, Cluster Analysis, Hilbert Space, Pruning
Process, Scattered Distance, Controlled Effort Boundary Operation. |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
A REVIEW AND OPEN ISSUES OF MULTIFARIOUS IMAGE STEGANOGRAPHY TECHNIQUES IN
SPATIAL DOMAIN |
Author: |
MOHAMMED MAHDI HASHIM, MOHD SHAFRY MOHD RAHIM, ALI ABDULRAHEEM ALWAN |
Abstract: |
Nowadays, information hiding is becoming a helpful technique and is attracting
more attention due to the fast growth of internet use; it is applied for
sending secret information using different techniques. Steganography is one of
the most important techniques in information hiding. Steganography is the
science of concealing secret information within a carrier object to provide
secure communication over the internet, so that no one except the sender and
receiver can recognize and detect it. In steganography, many carrier formats
can be used, such as image, video, protocol, and audio. The digital image is
the most popular carrier file because of its prevalence on the internet. There
are many techniques available for image steganography, each with its own
strong and weak points. In this study, we conducted a review of image
steganography in the spatial domain by reviewing, collecting, synthesizing and
analyzing the challenges of different studies related to this area published
from 2014 to 2017. The aim of this review is to provide an overview of image
steganography; comparisons between the surveyed studies are discussed with
respect to pixel selection, payload capacity and embedding algorithm, in order
to open important research issues for future work and to obtain a robust
method. |
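As a concrete instance of the spatial-domain family surveyed, a minimal least-significant-bit (LSB) embedding sketch on toy byte data (illustrative only; not the scheme of any specific reviewed paper):

```python
# Minimal LSB steganography sketch over raw pixel bytes.
def embed(pixels, message):
    """Hide `message` bytes in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover image too small for payload"
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the LSB
    return bytes(stego)

def extract(pixels, n_bytes):
    """Read back `n_bytes` hidden bytes from the pixel LSBs."""
    out = bytearray()
    for k in range(n_bytes):
        byte = 0
        for bit_pos in range(8):
            byte = (byte << 1) | (pixels[k * 8 + bit_pos] & 1)
        out.append(byte)
    return bytes(out)

cover = bytes(range(200))        # stand-in for grayscale pixel data
stego = embed(cover, b"hi")
print(extract(stego, 2))         # prints b'hi'
```

Each cover byte changes by at most 1, which is why LSB embedding is visually imperceptible yet easily detected statistically, a recurring theme in the surveyed work.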
Keywords: |
Information Hiding, Image Steganography, Least Significant Bit (LSB), Different
types of Steganography, Spatial Domain.
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
GAME-THEORETICAL MODEL OF LABOUR FORCE TRAINING |
Author: |
IRINA ZAITSEVA, OLEG MALAFEYEV, SERGEI STREKOPYTOV, ANNA ERMAKOVA, DMITRY SHLAEV |
Abstract: |
Continuous and discrete models of labour force training are constructed. The
application of results from the theory of differential games and dynamic
programming allows the optimal strategies of labour force training to be
presented and calculated. |
Keywords: |
Continuous Game-Theoretical Model, Discrete Game-Theoretical Model, Labor Force
Training. |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
PERFORMANCE STUDY IN AUTONOMOUS AND CONNECTED VEHICLES: AN INDUSTRY 4.0 ISSUE |
Author: |
ALESSANDRA PIERONI, NOEMI SCARPATO, MARCO BRILLI |
Abstract: |
Industry 4.0 represents an opportunity. The term 4.0 is used to designate the
current industrial revolution, not only from the technological point of view
but also from the economic, sociological and strategic points of view. Indeed,
this revolution involves different sectors, from manufacturing to healthcare.
Its disruptive diffusion is due to several enabling technologies, such as the
Internet of Things (or Internet of Everything, or Industrial Internet of
Things), and, as said, it is a vision rather than a single technological step
forward. In this evolutionary process, Connected and Autonomous Vehicles
(CAVs) represent the perfect connection between the worlds of technology and
society, an issue that stands at the very center of Industry 4.0. This article
extends a previous work by the authors, in which a new and non-conventional
approach to managing the great amount of data generated by CAVs was proposed.
In particular, the validity of the approach is demonstrated by means of
performance indices specifically defined for this case study. |
Keywords: |
Autonomous Vehicle, Connected Vehicle, Big Data, Graphdb, Internet Of Things,
Industry 4.0, Performance Indices. |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
PREVENTING SECURITY ATTACKS ON MOBILE PATTERN PASSWORDS |
Author: |
Bh PADMA, GVS RAJKUMAR |
Abstract: |
An Android smartphone is a personal device which keeps many of our personal
files and data, such as photos, videos, messages, and bank account
information. Keeping these files safe from outsiders may be troublesome,
especially when they try to guess the device passwords. So Android
authentication processes should always pursue robust security enhancements to
preserve the security of the sensitive data stored on the mobiles. For the
pattern locking systems of Android, older versions such as KitKat and Lollipop
use authentication systems which rely on unsalted SHA-1 and MD5 hashes, but
the latest versions, such as Android Marshmallow, employ the Gatekeeper
mechanism, storing the passwords and authenticating the users in a trusted
execution environment, and are more secure against brute-forcing. The former
methods are vulnerable to dictionary and rainbow table attacks since they use
unsalted hashes, whereas the later Android hashing schemes such as HMAC or
scrypt use salts for hashing but cannot escape forensic tools that crack the
passwords, and they need additional hardware support. Therefore this paper
presents two alternative methodologies that suggest a new approach to enhance
the basic SHA-1 hashing scheme using elliptic curves to prevent
pre-computation attacks such as dictionaries, rainbow tables and brute-forcing
on the pattern password scheme. The proposed methods are simple and secure
without employing a complex hardware-backed environment such as a Trusted
Execution Environment (TEE). This paper also presents a comparison among the
proposed schemes with respect to the Strict Avalanche Effect and CPU execution
times after implementation. |
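The unsalted-versus-salted distinction drawn above can be shown in a few lines of Python (the pattern string is a hypothetical 3x3 swipe sequence, not data from the paper; this illustrates the vulnerability, not the paper's elliptic-curve fix):

```python
# Why unsalted hashes invite precomputed (dictionary / rainbow-table)
# attacks: the same input always yields the same digest.
import hashlib
import os

pattern = "0,1,2,5,8"                    # hypothetical unlock pattern

# Unsalted SHA-1 (as in older Android pattern locks): one precomputed
# table of pattern digests cracks every device.
unsalted = hashlib.sha1(pattern.encode()).hexdigest()

# Salted hash: a per-device random salt makes each digest unique,
# defeating precomputed tables.
salt = os.urandom(16)
salted = hashlib.sha1(salt + pattern.encode()).hexdigest()

print(unsalted == hashlib.sha1(pattern.encode()).hexdigest())  # prints True
```

Since the digest of an unsalted pattern is globally reproducible, an attacker only needs to hash the ~389,112 valid 3x3 patterns once; a random salt forces that work to be redone per device.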
Keywords: |
Android, Smartphone, SHA-1, Brute-Force, Dictionaries, TEE |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
THE INFLUENCE OF PERCEIVED RISK AND CONSUMER INNOVATIVENESS ON INTENTION TO USE
OF INTERNET OF THINGS SERVICE |
Author: |
JUNGYEON SUNG, JAEWOOK JO |
Abstract: |
The current study focuses on consumer response to IoT (Internet of Things)
services. In particular, it explores consumer response as it influences the
intention to use, based on the Unified Theory of Acceptance and Use of
Technology (UTAUT). UTAUT has been widely adopted as a model and theory of
new-technology adoption in place of the Technology Acceptance Model (TAM), and
it is used here as exploratory research into consumers' response to IoT
services. Even though technology has improved and changed rapidly, if it is
not adjusted to, and does not work well from, the consumer's point of view,
the technology will not be wanted by consumers. The results from 147
participants show a moderating effect of consumer innovativeness on attitude
toward IoT services and intention to use. The study thus highlights aspects of
the consumer's response, such as perceived risk and consumer innovativeness,
that differ from previous research based on UTAUT. |
Keywords: |
IoT, UTAUT, Perceived risk, Consumer innovativeness, Intention to use
|
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
ADAPTIVE SPECTRAL SUBTRACTION FOR ROBUST SPEECH RECOGNITION |
Author: |
JUNG-SEOK YOON, JI-HWAN KIM, JEONG-SIK PARK |
Abstract: |
Speech recognition rates degrade drastically in extremely noisy environments.
Spectral subtraction is one of the representative noise reduction methods;
although it is quite effective for stationary noise, it is vulnerable to
non-stationary noise. In this paper, we propose an adaptive spectral
subtraction method to improve speech recognition performance. The proposed
method consistently updates the noise component in non-speech regions and
removes the corresponding component in the following speech regions. To
validate the noise reduction performance, we conducted several experiments at
each noise power level. Our approach achieved better performance than the
conventional spectral subtraction approach. |
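The update-and-subtract step described above can be sketched as follows. The frame magnitudes, the VAD flags, and the smoothing factor `alpha` are illustrative assumptions, not the paper's values:

```python
# Adaptive spectral subtraction sketch over toy magnitude spectra.
# Each "frame" is a list of per-frequency-bin magnitudes.
def process(frames, is_speech, alpha=0.9):
    """Update a noise estimate in non-speech frames; subtract it in speech frames."""
    noise = [0.0] * len(frames[0])
    cleaned = []
    for frame, speech in zip(frames, is_speech):
        if not speech:
            # exponential smoothing of the noise spectrum estimate
            noise = [alpha * n + (1 - alpha) * m for n, m in zip(noise, frame)]
            cleaned.append(list(frame))
        else:
            # half-wave rectified spectral subtraction: max(|X| - |N|, 0)
            cleaned.append([max(m - n, 0.0) for m, n in zip(frame, noise)])
    return cleaned

frames    = [[1.0, 1.0], [1.0, 1.0], [5.0, 1.2]]   # last frame contains speech
is_speech = [False, False, True]
print(process(frames, is_speech)[-1])
```

Because the noise estimate keeps tracking the most recent non-speech frames, the subtraction adapts to slowly varying noise rather than relying on a single fixed noise profile.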
Keywords: |
Noise Reduction, Spectral Subtraction, Speech Recognition, Voice Activity
Detection |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
A STUDY ON BIG DATA BASED NON-FACE-TO-FACE IDENTITY PROOFING MODEL |
Author: |
HEEGYUN YEOM, DAESON CHOI, KWANSOO JUNG, SEOKHUN KIM |
Abstract: |
Online service providers are increasingly considering the adoption of a
variety of additional mechanisms to supplement the authentication security
provided by conventional password verification. Recently, authentication and
authorization methods using user attribute information have been used for
various services. In particular, the demand for various approaches to
non-face-to-face identification technology for online user registration and
authentication is increasing because of the growth of online financial
services and the rapid development of financial technology. However,
non-face-to-face approaches are generally exposed to a greater number of
threats than face-to-face approaches. Therefore, identification policies and
technologies that verify users using various factors and channels are being
studied in order to mitigate these risks and provide more reliable
non-face-to-face identification methods. One of these new approaches is to
collect and verify a large amount of a user's personal information. Thus, we
propose a big-data based non-face-to-face identity proofing model that
verifies identity online based on varied and large amounts of user
information. The proposed model performs identification over the various
attribute information required for each identity verification level. In
addition, the proposed model can quantify identity proofing reliability, as it
collects and verifies only the user information required for the target
assurance level of identity proofing. |
Keywords: |
Non-Face-To-Face, Authorization, Identity Proofing, Big Data |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
QOS-DRIVEN MIGRATION SCHEME BASED ON PRIORITY IN CLOUD COMPUTING |
Author: |
A-YOUNG SON, EUI-NAM HUH |
Abstract: |
Migration is used in many areas, such as resource management, in cloud
computing environments. Numerous resource scaling schemes based on migration
have been designed in Cloud Data Centers (CDCs). However, cloud service
providers (CSPs) have not yet achieved sufficient efficiency to meet user
requirements such as energy efficiency, performance and cost. Furthermore,
most previous research did not consider multiple metrics. Thus, we consider a
decision method combining Fuzzy logic and AHP for multiple metrics. The
simulations show that a hierarchical migration scheme is necessary to meet
user requirements. In this paper, we propose a migration scheme based on
priority in cloud computing. We also perform resource scaling based on a fuzzy
system and analyze the effect of each metric so as to maximize energy
efficiency. Finally, we demonstrate the proposed method using different
migration strategies. |
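As a rough illustration of the AHP side of such a multi-metric decision, weights can be derived from a pairwise comparison matrix via the common geometric-mean approximation. The comparison values below are invented for the example, not taken from the paper:

```python
# AHP weight sketch: derive priority weights for three metrics
# (energy, performance, cost) from a pairwise comparison matrix.
import math

metrics = ["energy", "performance", "cost"]
# pairwise[i][j] = how strongly metric i is preferred over metric j
pairwise = [
    [1.0, 3.0, 5.0],     # energy vs (energy, performance, cost)
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

# geometric mean of each row, then normalize so the weights sum to 1
geo = [math.prod(row) ** (1 / len(row)) for row in pairwise]
weights = [g / sum(geo) for g in geo]

for name, w in zip(metrics, weights):
    print(f"{name}: {w:.3f}")
```

These weights can then feed a fuzzy decision stage, so that migration priority reflects all metrics at once instead of a single criterion.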
Keywords: |
Cloud computing, Resource scaling, Analytical Hierarchical Process (AHP), Fuzzy,
Multi-Criteria Decision Making (MCDM), Data Center |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
INTEGRATED MULTILAYER METADATA BASED ON INTELLECTUAL INFORMATION
TECHNOLOGY FOR CUSTOMIZATION SERVICES OF IMAGES WITH DIFFERENT USAGE PERMISSION
OF LICENSE |
Author: |
EUN-JI SEO, IL-HWAN KIM, DEOK-GI HONG, YOUNGMO KIM, SEOK-YOON KIM
Abstract: |
Tags and metadata are used to manually retrieve public domain images, but it
would require a lot of time and money to turn such a system into an automated
one. Therefore, it is necessary to extract tags and metadata utilizing
intelligent information technology, and semantic metadata is preferred for
this purpose over simple per-content metadata. In this paper, we have designed
and implemented multi-layer metadata that can be used for customized search
recommendations based on intelligent information, using MPEG-7, Dublin Core,
and ccREL. The metadata of public domain images describe the right of usage,
feature information and semantic information, and they are expressed in XML
document form so that they can be exchanged easily on the web. |
Keywords: |
Intelligent Information Technology, Public Domain Works, Free-use Licenses,
Metadata |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
A STUDY ON CLOUD-BASED MEDIA SERVICE REFERENCE MODEL SUPPORTING MULTI-DRM |
Author: |
YOUNGMO KIM, BYEONGCHAN PARK, YU-HYEON WON |
Abstract: |
In online content distribution platforms, DRM technology is the core
technology for implementing the business model, and most distribution
platforms use only one DRM technology. While the sales revenue in these
content distribution platforms is distributed to the copyright owner and the
content distributor accordingly, the copyright owner and the user cannot
choose a different DRM purchase cost, since only the DRM technology proposed
by the content distributor is used. In this paper, we analyze the requirements
for constructing a media service platform supporting multi-DRM with more than
one DRM technology and propose a cloud-based media service model that supports
multi-DRM based on these requirements. The simulation results show that the
profit settlement part of the proposed reference model yields fair dividend
payments to participants according to the actual usage rates of DRM and
contents. |
Keywords: |
Cloud Media Service, DRM, Copyright, Reference Model |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
A STUDY ON iBiz.FS ARCHITECTURE DESIGN FOR ESTABLISHING ELECTRONIC FINANCE
SYSTEM |
Author: |
KYUNG-MO KOO, KOO-RACK PARK, DONG-HYUN KIM |
Abstract: |
Recently, financial institutions still regard the role of bank branches as
important, but branch closures and integrations are occurring more often.
Electronic finance related services of financial institutions are also
migrating to increasingly diverse and complicated shapes, and customers'
dependence on these systems is rising abruptly. Accordingly, accurate
processing of the transaction data produced by the system, and its security,
are very important factors. In order to establish such a system, it is
necessary to build it with systematic and standardized elements. In this
regard, this thesis proposes an architecture design methodology suitable for
establishing an electronic finance system, based on the information
engineering design methodology. The development methodology proposed herein
has an efficient and standardized structure and is expected to enable the
establishment of a fast and stable system through software design based on an
application architecture. Verification of the proposed design methodology
shows a reduction in project duration of 13.2% and a cost reduction of 15.1%.
The proposed architecture design methodology lays the groundwork for
high-level systematic design in establishing electronic finance systems;
through it, the system development period can be shortened and development
costs saved. By applying further systematic standardization in the development
of electronic finance systems, high-quality systems can be developed. As
directions for future study, additional analyses are required of the
technology and security architectures applicable to electronic finance
systems; through these, further studies on flexible expandability and
systematic practical architecture application methods should be pursued. |
Keywords: |
Application Architecture, Information Engineering Methodology, e-Bank System,
Software Development Standard, Message Processing Method, Framework, Business
Process |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
DECLARATIVE STACK FOR DISTRIBUTED GRAPH PROCESSING |
Author: |
RADWA ELSHAWI, ARWA ALDHABAAN, SHERIF SAKR |
Abstract: |
Recently, people, devices, processes and other entities have become more
connected than at any other point in history. In general, graphs have been
used to represent data sets in various application domains, including
computational biology, social science, telecommunications, astronomy, the
semantic web and protein networks, among many others. In practice, the system
stacks of large-scale graph processing platforms suffer from a lack of
declarative processing interfaces. They rely mainly on low-level programming
abstractions which can be used only by sophisticated software developers and
are not adequate for many users. In order to tackle this challenge and improve
the performance and user acceptance of large-scale graph processing
frameworks, we present a declarative querying framework that can seamlessly
integrate with various big graph processing platforms. Our experimental
evaluation shows the effectiveness and efficiency of the proposed framework. |
Keywords: |
Big Data, Big Graph, Hadoop, Spark |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full Text |
|
Title: |
POCS-VF: PROXIMATE OPTIMUM CHANNEL SELECTION THROUGH VOID FILLING AND BURST
SEGMENTING FOR BURST SCHEDULING IN OBS NETWORKS |
Author: |
V. KISHEN AJAY KUMAR, K. SURESH REDDY, M.N. GIRI PRASAD |
Abstract: |
Burst scheduling in OBS networks is a critical research objective that has
attracted the interest of many researchers in the last few years. OBS is a
phenomenal and promising data transfer strategy for present and future
internet communication requirements. Acclaimed recent reviews on burst
scheduling in OBS networks clearly state that optimal burst scheduling models
are in considerable need, in the context of minimizing the burst drop ratio
and maximizing channel utilization. Hence, a novel burst scheduling strategy
with burst segmenting and void filling, called "Proximate Optimum Channel
Selection through Void Filling and Burst Segmenting", is proposed in this
manuscript. An experimental study in a simulation environment shows that the
proposed model is optimal, with a minimal burst drop ratio, maximum channel
utilization and minimal average time to schedule compared to the contemporary
benchmark model MSBFVF found in recent literature. |
Keywords: |
Burst Scheduling, Optical burst switching, Void filling, Burst drop ratio |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full
Text |
|
Title: |
USING K-MEANS ALGORITHM AND FP-GROWTH BASE ON FP-TREE STRUCTURE FOR
RECOMMENDATION CUSTOMER SME |
Author: |
MUHAMMAD ALI SYAKUR, BAIN KHUSNUL KHOTIMAH, EKA MALA SARI ROCHMAN, BUDI DWI
SATOTO |
Abstract: |
Market basket analysis finds customer purchase patterns in SMEs. Purchase
patterns can help to make recommendations and product promotions. This research
uses the K-Means algorithm to cluster sales data and the FP-Growth algorithm to
find the relations within each cluster. K-Means clustering groups customer data
by shared attributes, after which the relationships between the patterns in
each group are determined with FP-Growth. K-Means performs customer
segmentation based on background, customer characteristics, and level of
purchasing power. To facilitate the analysis of the relationship between
customers and the products they purchase, the records of each customer-profile
cluster are then processed by FP-Growth to find the relevance of the goods
purchased. The research also presents a comparison of the time complexity of
the FP-Growth and Apriori algorithms. This research develops and applies the
FP-Tree (Frequent Pattern Tree), an extension of the tree data structure. The
FP-Tree is used in conjunction with the FP-Growth algorithm to determine the
frequent itemsets of a database, in contrast to the Apriori paradigm of
scanning the database repeatedly. In this study, transactions with varying
numbers of items and varying consumer purchasing power are first grouped with
the K-Means algorithm into several clusters, including five customer groups
based on customer profile. On average, with minsupp = 60 and minconf = 40, the
processing time is 957 ms. |
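The two-stage pipeline in this abstract (segment customers, then mine frequent itemsets per segment) can be sketched compactly. The data, thresholds, and the brute-force pair counter standing in for FP-Growth are all illustrative, not the paper's implementation:

```python
# Sketch: cluster customers by purchasing power, then count frequent item
# pairs per cluster. A naive pair count stands in for FP-Growth here.
from itertools import combinations

def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means (e.g., on purchasing power); returns a label per value."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

def frequent_pairs(transactions, min_support):
    """Return item pairs occurring in at least `min_support` transactions."""
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {p for p, c in counts.items() if c >= min_support}
```

In the paper's setting, `frequent_pairs` would be replaced by FP-Growth over an FP-Tree, which avoids the repeated database scans implicit in this naive (Apriori-style) counting.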
Keywords: |
K-Means, Fp-Growth, FP-Tree, SME, Profilling Customer, Pattern |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full
Text |
|
Title: |
EVALUATION OF FEATURES FOR VOICE ACTIVITY DETECTION USING DEEP NEURAL NETWORK |
Author: |
SUCI DWIJAYANTI, MASATO MIYOSHI |
Abstract: |
Voice activity detection (VAD) is implemented in the preprocessing stage of
various speech applications to identify speech and non-speech periods. Recently,
deep neural networks (DNNs) have been utilized for VAD given their superior
performance over other methods. When used to identify speech and non-speech
periods, DNNs depend on the input of different features to discriminate speech
from noise. Hence, different features have been used as input for DNN-based VAD.
However, the contribution and effectiveness of such features have not been
thoroughly evaluated. In this paper, we address these aspects by comparing five
features, namely, log power spectra, filter bank, mel-frequency cepstral
coefficients, relative spectral perceptual linear predictive analysis, and
amplitude modulation spectrogram, which are widely used in speech processing, to
evaluate their performance in a DNN-based VAD. Experiments on the TIMIT speech
corpus show that the amplitude modulation spectrogram is the feature with the
best performance given its high accuracy even when processing speech data with
low signal-to-noise ratio. The next feature showing high performance is log
power spectra, which can be considered as a raw feature because it does not
require as many calculations or processing as the other features. This suggests
that raw features may be suitable inputs for DNN-based VAD. Moreover, limiting
the number and processing of features for DNNs may foster system performance,
real-time application, and portability of VAD by reducing the computational
cost, required memory and storage. |
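The log power spectra singled out above as a "raw" feature are cheap to compute: frame the waveform, window each frame, and take the log of the squared FFT magnitude. A minimal NumPy sketch (frame and hop sizes are illustrative, not the paper's exact settings):

```python
# Sketch of the log-power-spectra feature for DNN-based VAD input.
import numpy as np

def log_power_spectra(signal, frame_len=256, hop=128, eps=1e-10):
    """Frame the signal, apply a Hann window, and return log|FFT|^2 per frame."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        spec = np.fft.rfft(frame)                 # one-sided spectrum
        frames.append(np.log(np.abs(spec) ** 2 + eps))
    return np.array(frames)  # shape: (num_frames, frame_len // 2 + 1)

x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)  # 1 s of a 440 Hz tone
feats = log_power_spectra(x)
```

Each row would be fed to the DNN (possibly with context frames) to classify that frame as speech or non-speech; features such as MFCC or RASTA-PLP add further processing stages on top of this spectrum.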
Keywords: |
DNN, Speech Period, Speech Features, Voice Activity Detection, Amplitude
Modulation Spectrogram, Log Power Spectra |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full
Text |
|
Title: |
IMPROVING QUALITY OF SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC) PROCESS USING CMMI
FOR DEVELOPMENT VERSION 1.3 (A CASE STUDY APPROACH) |
Author: |
SITI ELDA HIERERRA, YOHANNES KURNIAWAN |
Abstract: |
The purpose of this research is to know how to improve the quality of Software
Development Life Cycle (SDLC) process through some stages: identify existing
SDLC process weaknesses, provide an evaluation of the current SDLC process, and
provide solutions to overcome its weaknesses. The analysis method used in this
research is based on CMMI for Development version 1.3, referring to the
continuous representation and project roadmap, which focus on five process
areas: Project Planning, Project Monitoring and Control, Requirement
Management, Configuration Management, and Process and Product Quality
Assurance. The assessment method used in this research is SCAMPI (Standard CMMI
Appraisal Method for Process Improvement) Class C. Based on the results of the
assessment, the SDLC process has not yet successfully reached capability level
1, because the organization has not yet fully implemented the specific
practices in the five process areas; that is, some specific practices still
score less than 4 (four). Based on these results, the authors propose solutions
for the specific areas that still have weaknesses, in order to raise the
specific-practice scores to 4 (four). |
Keywords: |
Software Development Life Cycle, CMMI for development, Continuous Improvement,
Information system, Software Development. |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full
Text |
|
Title: |
EXTRACTING UML MODELS AND OCL INTEGRITY CONSTRAINTS FROM OBJECT RELATIONAL
DATABASE |
Author: |
TOUFIK FOUAD, BAHAJ MOHAMED |
Abstract: |
Database reverse engineering is the process of extracting and transforming
database metadata to a rich set of models. These models must be able to describe
data structure at different levels of abstraction, starting from physical to
conceptual schema. The obtained schema may be used to ease, among others,
database structure update, evolution, and maintenance. In the past few years,
object-oriented constructs have been merged into relational databases.
Nevertheless, few methods for object-relational database (ORDB) reverse
engineering have been presented. In this sense, the main goal of this article
is to present a database reverse engineering approach that covers the
transformation of the newly added object constructs. At the end of the
transformation we obtain a conceptual schema (CS) expressed as a UML class
diagram. The returned CS is extended with a set of OCL (Object Constraint
Language) clauses, which represent the database integrity constraints at a
higher level of abstraction. We provide a program that implements our approach
for the ORACLE 11g database management system. |
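The final step described above, lifting database integrity constraints into OCL invariants, can be illustrated with a small sketch. The metadata format and constraint kinds below are hypothetical simplifications, not the authors' tool or Oracle's actual dictionary views:

```python
# Sketch: map extracted column constraints to OCL invariants on a UML class.
# Metadata format is illustrative, not the paper's or Oracle's.

def to_ocl(table, columns):
    """Emit one OCL invariant per NOT NULL / lower-bound constraint."""
    invariants = []
    for col in columns:
        if col.get("not_null"):
            invariants.append(f"context {table} inv: self.{col['name']} <> null")
        if "min" in col:
            invariants.append(
                f"context {table} inv: self.{col['name']} >= {col['min']}"
            )
    return invariants

ocl = to_ocl("Employee", [
    {"name": "name", "not_null": True},
    {"name": "salary", "min": 0},
])
```

A full ORDB reverse engineering tool would read this metadata from the catalog (including object types, nested tables, and REFs) rather than from hand-written dictionaries, and attach the invariants to the generated UML class diagram.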
Keywords: |
UML, OCL, ORDB, SQL, Reverse Engineering |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full
Text |
|
Title: |
FUZZY MULTI-CRITERIA RANDOM SEED AND CUTOFF POINT APPROACH FOR CREDIT RISK
ASSESSMENT |
Author: |
BEULAH JEBA JAYA Y., DR. J. JEBAMALAR TAMILSELVI |
Abstract: |
Data mining classification techniques have been studied extensively for credit
risk assessment. Existing techniques use 0.5 as the default cutoff,
irrespective of dataset and classifier, to predict binary outcomes, thus
limiting their classification performance on datasets with imbalanced group
sizes. This paper addresses two key problems with the existing techniques and
discusses the advantages of using the Multiple Criteria Decision Making (MCDM)
technique with multiple evaluation criteria. The first key problem is applying
a default cutoff irrespective of dataset and classifier. The second is using a
single criterion to evaluate classification performance and predict the cutoff
point. This research identifies the best cutoff point with respect to each
dataset and classifier; integrates MCDM under a fuzzy environment in all data
mining evaluation stages so that decisions are made on multiple criteria;
selects the initial random seed in the clustering phase for better cluster
quality; and combines Best Seed Clustering with Classification (the BSCC hybrid
algorithm) using selected features to improve classification performance. The
integration of these techniques improves cluster quality and classification
performance with respect to each dataset and classifier, because the cutoff
point varies from dataset to dataset and from classifier to classifier.
Experimental outcomes on the credit dataset from the UCI machine learning
repository are competitive, and the proposed BSCC hybrid algorithm increases
the performance score at the obtained cutoff point over the non-hybrid approach
with the default cutoff. |
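The core idea of replacing the fixed 0.5 threshold with a dataset- and classifier-specific cutoff can be sketched as a simple grid search. Here a single criterion (F1) stands in for the paper's fuzzy multi-criteria evaluation, and the data is illustrative:

```python
# Sketch: scan candidate cutoffs on predicted probabilities and keep the one
# that maximizes a chosen criterion (F1 here; the paper combines several
# criteria via fuzzy MCDM).

def f1_at(probs, labels, cutoff):
    preds = [1 if p >= cutoff else 0 for p in probs]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def best_cutoff(probs, labels, grid=None):
    grid = grid or [i / 100 for i in range(1, 100)]
    return max(grid, key=lambda c: f1_at(probs, labels, c))

# Illustrative imbalanced example: 0.5 misses most positives.
probs = [0.1, 0.2, 0.3, 0.4, 0.9]
labels = [0, 1, 1, 1, 1]
cut = best_cutoff(probs, labels)
```

On imbalanced credit data, a classifier's scores for the minority class often sit below 0.5, which is why a tuned cutoff can raise performance without changing the classifier at all.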
Keywords: |
Credit Risk, Classification, Clustering, Fuzzy MCDM, Cutoff Point, Random Seed |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full
Text |
|
Title: |
A FRAMEWORK FOR HANDLING BIG DATA DIMENSIONALITY BASED ON FUZZY-ROUGH TECHNIQUE |
Author: |
MAI ABDRABO, MOHAMMED ELMOGY, GHADA ELTAWEEL, SHERIF BARAKAT |
Abstract: |
Big Data is a huge amount of high-dimensional data, produced from different
sources. Big Data dimensionality is a considerable challenge in data processing
applications. In this paper, we propose a framework for handling Big Data
dimensionality based on MapReduce parallel processing and FuzzyRough feature
selection. The paper proposes a new method for selecting features based on
fuzzy similarity relations. Initial experimentation shows that it reduces
dimensionality and enhances classification accuracy. The proposed framework
consists of three main steps. The first is a data preprocessing step. The next
two are the map and reduce steps of the MapReduce paradigm. In the map step,
FuzzyRough is utilized to select features. In the reduce step, fuzzy similarity
is applied to reduce the extracted features. In our experimental results, the
proposed framework achieved 86.4% accuracy using the decision tree technique,
while previous frameworks evaluated on the same data set achieved between 70%
and 80%. |
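The fuzzy similarity relations mentioned above typically grade how close two samples are on a feature, and a feature is valued when same-class pairs are more similar than cross-class pairs. A toy sketch of that scoring idea (illustrative only; not the authors' exact FuzzyRough dependency measure):

```python
# Sketch: score one feature by fuzzy similarity, in the spirit of
# fuzzy-rough feature selection. Similarity on a feature:
#   sim(a, b) = 1 - |a - b| / range(feature)

def fuzzy_similarity(a, b, frange):
    return 1.0 - abs(a - b) / frange if frange else 1.0

def feature_score(values, labels):
    """Mean same-class similarity minus mean cross-class similarity.
    Higher means the feature separates the classes better."""
    frange = max(values) - min(values)
    same = diff = 0.0
    n_same = n_diff = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            s = fuzzy_similarity(values[i], values[j], frange)
            if labels[i] == labels[j]:
                same, n_same = same + s, n_same + 1
            else:
                diff, n_diff = diff + s, n_diff + 1
    return (same / n_same if n_same else 0.0) - (diff / n_diff if n_diff else 0.0)

score_informative = feature_score([0, 0.1, 0.9, 1.0], [0, 0, 1, 1])
score_noisy = feature_score([0, 1, 0, 1], [0, 0, 1, 1])
```

In the framework's map step, such per-feature scores would be computed on data partitions in parallel; the reduce step would then merge them to pick the final feature subset.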
Keywords: |
FuzzyRough set, MapReduce, Feature selection, Decision tree. |
Source: |
Journal of Theoretical and Applied Information Technology
28th February 2018 -- Vol. 96. No. 4 -- 2018 |
Full
Text |
|
|
|