|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles from a
wide range of Information Technology disciplines, from basic research to the
most innovative technologies. Please submit your papers electronically to our
submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a
compatible format so that they may be evaluated for publication in the
upcoming issue. This journal uses a blinded review process; you may include
your personally identifiable information in the manuscript when submitting it
for review, and we will redact the necessary information on our side.
Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
|
Journal of Theoretical and Applied Information Technology
February 2016 | Vol. 84 No. 3 |
Title: |
DATA CONFIDENTIALITY IN THE WORLD OF CLOUD |
Author: |
KHALID EL MAKKAOUI, ABDELLAH EZZATI, ABDERRAHIM BENI-HSSANE, CINA MOTAMED |
Abstract: |
Cloud computing is becoming an attractive technology thanks to its diverse
benefits, such as reduced costs, shared computing resources, and service
flexibility. However, data security and privacy have become a major issue.
Indeed, the key challenge is to assure users that the cloud service provider
stores and processes their raw data confidentially. The fear of seeing
sensitive raw data stored and used is the major barrier to the adoption of
cloud services. The use of methods capable of ensuring the confidentiality of
both storage and data processing on cloud servers appears to be an effective
way to overcome this barrier and to build confidence in cloud services. This
paper describes a layered model of cloud security and privacy and some
approaches used to secure data in cloud environments. Specifically, we focus
on encryption methods that can ensure data storage confidentiality and on a
technique that can ensure the confidentiality of both data storage and data
processing. |
Keywords: |
Cloud Computing, Security, Privacy, Confidentiality, Symmetric key, Homomorphic
Encryption |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
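To make the homomorphic-encryption idea concrete, here is a toy sketch, not the authors' scheme: textbook RSA is multiplicatively homomorphic, so a server can multiply two ciphertexts without ever seeing the plaintexts. The key below is deliberately tiny and insecure; real deployments need large keys and schemes designed for homomorphic use.

```python
# Toy demonstration of a homomorphic property (illustrative only).
n, e, d = 3233, 17, 2753           # tiny RSA key: n = 61 * 53, e*d = 1 mod phi(n)

def encrypt(m: int) -> int:        # c = m^e mod n
    return pow(m, e, n)

def decrypt(c: int) -> int:        # m = c^d mod n
    return pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = encrypt(m1), encrypt(m2)

# The "cloud" multiplies ciphertexts; decryption yields the product of plaintexts.
c_prod = (c1 * c2) % n
assert decrypt(c_prod) == (m1 * m2) % n
print(decrypt(c_prod))             # 42, computed without exposing m1 or m2
```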
|
Title: |
NON-CONTACT ELECTROCARDIOGRAM (ECG) SMART CHAIR FOR ST SEGMENT ELEVATION
MYOCARDIAL INFARCTION DETECTION |
Author: |
TUERXUN WAILI, RIZAL MOHD. NOR, KHAIRUL AZAMI SIDEK, ADAMU ABUBAKAR |
Abstract: |
Myocardial infarction (MI) is the medical term for a heart attack. MIs are
classified into ST-segment elevation MI (STEMI) and non-ST-segment elevation
MI (NSTEMI). STEMI can be detected by analyzing an electrocardiography (ECG)
recording. However, ECG devices are generally available only in hospitals,
owing to their cost and the need for an expert to operate them, so a personal
ECG device at home is difficult to acquire. In most STEMI cases, the heart
problem requires early detection and medication. Unfortunately, frequent
visits to the hospital may not be possible for most people. A new way to
obtain a personalized ECG reading without the presence of an expert is
therefore desirable. One method is to use a non-contact ECG signal detection
technique that could help assess heart problems or an initial apoplectic
condition. A non-contact ECG chair framework for STEMI detection is proposed
to empower patients to obtain a personalized assessment of their heart
condition at any time. Patients can easily use the ECG device in the comfort
of their home or office by simply sitting on the chair. The ECG signal is
automatically captured and analyzed to intelligently diagnose the heart
condition. The implementation of this concept will help the Malaysian
community through early intervention by medical experts and significantly
reduce the mortality rate. |
Keywords: |
Capacitive Sensor, Non-Contact Electrocardiogram, ST Segment Elevation,
Myocardial Infarction |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
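A hypothetical sketch of the kind of rule the chair's analysis stage could apply, assuming beat boundaries and R peaks have already been detected: compare the mean ST-segment level with the PR-segment baseline and flag elevation above a threshold. The window offsets and the 0.1 mV threshold are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def st_elevation_mv(beat, fs, r_idx):
    """Crude ST-elevation estimate for one beat (hypothetical sketch).

    beat:  ECG samples in millivolts for one heartbeat
    fs:    sampling rate in Hz
    r_idx: index of the R peak within `beat`
    """
    # Isoelectric baseline: short PR-segment window just before the R peak.
    pr = beat[r_idx - int(0.08 * fs): r_idx - int(0.04 * fs)]
    # ST segment: window starting ~60 ms after the R peak (J-point proxy).
    st = beat[r_idx + int(0.06 * fs): r_idx + int(0.10 * fs)]
    return float(np.mean(st) - np.mean(pr))

def looks_like_stemi(beats, fs, r_idxs, thresh_mv=0.1):
    # Flag if the median elevation across beats exceeds the assumed threshold.
    elevations = [st_elevation_mv(b, fs, r) for b, r in zip(beats, r_idxs)]
    return float(np.median(elevations)) > thresh_mv
```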
|
Title: |
APPLICATION OF THE SUPPORT VECTOR MACHINE ALGORITHM WITH A GENETIC ALGORITHM
OPTIMIZATION TECHNIQUE FOR FEATURE SELECTION IN TWITTER SENTIMENT ANALYSIS |
Author: |
MOCHAMAD WAHYUDI, DWI ANDINI PUTRI |
Abstract: |
Twitter has recently become one of the most popular micro-blogging platforms.
Millions of users share their thoughts and opinions about various topics and
activities, so Twitter is considered a rich source of information for
decision-making and sentiment analysis. Here, sentiment analysis aims to
automatically classify user tweets into positive and negative opinions. The
Support Vector Machine (SVM) classifier used in this study is a machine
learning technique that is popular for text classification; its main principle
is to determine the linear separator in the search space that best separates
the two classes. However, SVM has the disadvantage of an appropriate-parameter
selection problem. The trend in recent years is to optimize the features and
the SVM parameters simultaneously so as to improve the classification accuracy
of the SVM. A Genetic Algorithm has the potential to produce better features
and optimal parameters at the same time. This research produces a text
classification of tweets into positive and negative opinions. Accuracy is
measured for the SVM before and after applying the Genetic Algorithm.
Evaluation was performed using 10-fold cross-validation, and accuracy was
measured with the confusion matrix and ROC curves. The results showed an
increase in the accuracy of the SVM from 63.50% to 93.50%. |
Keywords: |
Sentiment Analysis, Twitter, Support Vector Machine (SVM), Text Classification |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
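A minimal sketch, not the authors' code, of the joint optimization the abstract describes: each GA chromosome encodes a binary feature mask plus an index into a grid of SVM C values, and fitness is 10-fold cross-validated accuracy, mirroring the paper's evaluation protocol. The population size, rates, and C grid are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
C_GRID = [0.01, 0.1, 1.0, 10.0, 100.0]     # assumed candidate SVM parameters

def fitness(chrom, X, y):
    mask, c_gene = chrom[:-1].astype(bool), int(chrom[-1])
    if not mask.any():
        return 0.0
    clf = SVC(kernel="linear", C=C_GRID[c_gene])
    return cross_val_score(clf, X[:, mask], y, cv=10).mean()

def ga_search(X, y, pop_size=20, generations=30, p_mut=0.05):
    n = X.shape[1]
    # Chromosome layout: n feature bits followed by one C-index gene.
    pop = np.hstack([rng.integers(0, 2, (pop_size, n)),
                     rng.integers(0, len(C_GRID), (pop_size, 1))])
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut                      # bit-flip mutation
            child[:n][flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=lambda ind: fitness(ind, X, y))
    return best[:-1].astype(bool), C_GRID[int(best[-1])]     # mask, C value
```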
|
Title: |
A HEURISTIC APPROACH TO INDOOR LOCALIZATION USING LIGHT EMITTING DIODES |
Author: |
MUHAMMAD SAADI, YAN ZHAO, LUNCHAKORN WUTTISITTIKULKIJ, MUHAMMAD TAHIR ABBAS KHAN |
Abstract: |
Widespread applications of location-based services in indoor environments
have created an opportunity for researchers to develop new techniques for
accurate position estimation with low complexity. In this paper, we present a
heuristic approach to localization that employs clustering in an indoor
environment. Depending on the number of light-emitting diodes (LEDs) used as
transmitters, level-1 clustering is achieved by simply comparing, at the
receiver, the signal strength of each combination of the transmitters’ light
intensities. For level-2 clustering, a new clustering technique, named portion
clustering, is proposed and applied to further partition the area in which the
object of interest may be located. The simulation results show that location
estimation accurate to within 16 centimeters can be achieved using LEDs in an
indoor environment with dimensions of 3 × 3 × 3. |
Keywords: |
Localization, Light Emitting Diode, Machine Learning, Clustering, Received
Signal Strength |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
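A sketch of level-1 clustering under a simplified Lambertian channel model: the receiver is assigned to the cluster of the LED whose received intensity is strongest. The LED layout and channel parameters are assumptions; the paper's level-2 portion clustering is not reproduced here.

```python
import numpy as np

def lambertian_gain(led_pos, rx_pos, m=1.0):
    """Received optical gain from one ceiling LED (simplified Lambertian model)."""
    d = np.linalg.norm(led_pos - rx_pos)
    cos_phi = (led_pos[2] - rx_pos[2]) / d        # LED assumed to point straight down
    return (m + 1) / (2 * np.pi * d**2) * cos_phi**(m + 1)

# Assumed layout: four LEDs on a 3 x 3 ceiling at height 3 (units as in the paper).
leds = np.array([[0.75, 0.75, 3.0], [0.75, 2.25, 3.0],
                 [2.25, 0.75, 3.0], [2.25, 2.25, 3.0]])

def level1_cluster(rx_pos):
    # Compare received strength from each LED; the strongest defines the cluster.
    gains = [lambertian_gain(led, rx_pos) for led in leds]
    return int(np.argmax(gains))

print(level1_cluster(np.array([0.5, 0.6, 0.8])))  # receiver near LED 0 -> cluster 0
```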
|
Title: |
DESIGNING AND IMPLEMENTING PARSING FOR AMBIGUOUS SENTENCES IN INDONESIAN
LANGUAGE |
Author: |
DEWI SOYUSIAWATY, EKO ARIBOWO |
Abstract: |
In daily life there are many ambiguous words and sentences. For instance, in
the Indonesian language the word ‘apel’ has two different meanings depending
on the sentence in which it is placed: it may mean 1) to hold a ceremony or
2) the fruit apple. Another example is the word ‘tahu’, which may mean 1) to
understand or know, or 2) a food made from soybean. Ambiguous sentences become
a problem in a translation application or dictionary if the application has no
facility to detect sentences with several different meanings, and as a result
the translation is affected. If the word ‘tahu’ in the sentence ‘saya ingin
tahu’ is translated into a regional language such as Javanese, the result may
be ‘kula kepengen tahu’ (I want a soybean food) or ‘kula kepengen weruh’
(I want to know), depending on what ‘tahu’ means. This research discusses the
role of parsing as a sentence breaker in identifying ambiguous sentences, so
that the real meaning of a sentence can be obtained and the congruence between
the structural pattern of an input sentence and the stored grammar rules can
be established.
The research started by collecting Indonesian grammar rule data consisting of
the clause structures that form a sentence, phrases (structures of two or more
words), and detailed clause pattern structures, including lists of ambiguous
phrases and vocabulary. A flowchart for parsing was then designed in several
stages: obtaining the pattern, checking the phrase, and implementing and
evaluating the application.
The research produced a sentence-parsing flowchart and an application that
checks for ambiguous sentences and determines the structural congruence
between the input sentence and the rules. |
Keywords: |
Parsing, Sentence, Ambiguous, Clause, Phrase |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
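A toy sketch of ambiguity detection by parsing, using NLTK's chart parser with a hypothetical grammar fragment (not the authors' rule set) in which ‘tahu’ is listed both as a verb and as a noun; the example sentence from the abstract then yields two parse trees.

```python
import nltk

# Toy Indonesian fragment: 'tahu' appears both as a verb ("know")
# and as a noun ("tofu"), so the sentence below is ambiguous.
grammar = nltk.CFG.fromstring("""
  S   -> NP VP
  NP  -> PRO | N
  VP  -> V VP | V NP | V
  PRO -> 'saya'
  V   -> 'ingin' | 'tahu'
  N   -> 'tahu'
""")

parser = nltk.ChartParser(grammar)
trees = list(parser.parse('saya ingin tahu'.split()))
print(len(trees))        # 2 parses -> the sentence is flagged as ambiguous
for t in trees:
    t.pretty_print()
```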
|
Title: |
NER IN ENGLISH TRANSLATIONS OF HADITH DOCUMENTS USING A CLASSIFIER COMBINATION |
Author: |
MOHANAD JASIM JABER, SAIDAH SAAD |
Abstract: |
There is a need to retrieve and extract important information in order to
fully understand the ever-increasing volume of English-translated Islamic
documents available on the web. Little research has focused on Named Entity
Recognition (NER) for Islamic translations, even though NER has received
widespread attention in other languages. Translated named entities have their
own characteristics, and the available annotated English corpora do not cover
all transliterated Arabic names, which makes NER difficult in the Islamic
domain. This research addressed the use of NER in English translations of
Hadith texts. The objective was to design and develop a model able to extract
named entities from English translations of Hadith texts. Supervised machine
learning approaches were used, namely Support Vector Machine (SVM), Maximum
Entropy (ME), and Naive Bayes (NB) classifiers, which were then combined via a
majority-voting algorithm to identify named entities in Hadith texts. In the
results, the voting combination approach outperformed the single classifiers,
with an overall F-measure of 95.3% in identifying named entities. The results
indicate that combined models paired with suitable features are better suited
to recognizing named entities in translated Hadith texts than the baseline
models. |
Keywords: |
Named Entity Recognition, supervised machine learning, Hadith text |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
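A minimal sketch of the classifier combination using scikit-learn's hard-voting ensemble; LogisticRegression stands in for the Maximum Entropy classifier (the two are equivalent), and the per-token feature extraction and annotated Hadith corpus are assumed to exist.

```python
# Majority voting over the three classifier families named in the abstract
# (a sketch, not the authors' system). X would hold per-token feature vectors
# (e.g. word-shape and context counts); y would hold entity tags such as
# PERSON / LOCATION / O from an annotated Hadith corpus.
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

voter = VotingClassifier(
    estimators=[
        ("svm", LinearSVC()),                        # SVM
        ("me", LogisticRegression(max_iter=1000)),   # Maximum Entropy
        ("nb", MultinomialNB()),                     # Naive Bayes (count features)
    ],
    voting="hard",                                   # majority vote on predicted tags
)
# voter.fit(X_train, y_train); y_pred = voter.predict(X_test)
```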
|
Title: |
INTELLIGENCE INTEGRATION OF PARTICLE SWARM OPTIMIZATION AND PHYSICAL VAPOUR
DEPOSITION FOR TiN GRAIN SIZE COATING PROCESS PARAMETERS |
Author: |
MU’ATH IBRAHIM JARRAH, ABDUL SYUKOR MOHAMAD JAYA, MOHD ASYADI AZAM, MOHAMMED H.
ALSHARIF, MUHD. RAZALI MUHAMAD |
Abstract: |
Due to the increasing complexity of industrial production, decisions regarding
the selection of coating parameters strongly influence the level of
production, and the optimization of thin-film coating parameters is important
in obtaining the required output. Two main issues in the physical vapor
deposition (PVD) process are manufacturing costs and the customization of
cutting-tool properties. The aim of this study is to identify optimal PVD
coating process parameters. Three process parameters were selected, namely
nitrogen gas pressure (N2), argon gas pressure (Ar), and turntable speed (TT),
while the thin-film grain size of titanium nitride (TiN) was selected as the
output response. The coating grain size was characterized using Atomic Force
Microscopy (AFM). To obtain a proper output, a quadratic polynomial model
equation relating the process variables to the coating grain size was
developed and used to optimize the coating process parameters, with particle
swarm optimization (PSO) performing the optimization work. Finally, the models
were validated on actual test data to measure model performance in terms of
residual error and prediction interval (PI). The results indicate that, for
the response surface methodology (RSM) model, the actual coating grain sizes
of the validation runs fell within the 95% PI and the residual errors were
very low, below 10 nm; the prediction accuracy of the model is 96.09%. In
terms of optimization, PSO obtained a lower best grain-size value than the
experimental data, with a reduction ratio of ≈6%. |
Keywords: |
TiN, Grain Size, Modeling, Sputtering, PVD, RSM, PSO. |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
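A compact sketch of the optimization step: standard PSO minimizing a quadratic polynomial response surface g(N2, Ar, TT). The coefficients and coded-unit bounds below are placeholders, not the paper's fitted RSM model.

```python
import numpy as np

rng = np.random.default_rng(1)

def grain_size(x):
    # Hypothetical quadratic response surface (placeholder coefficients).
    n2, ar, tt = x
    return (40.0 + 3.0 * n2 - 2.5 * ar + 1.2 * tt
            + 0.8 * n2 * ar - 0.5 * ar * tt
            + 0.6 * n2**2 + 0.9 * ar**2 + 0.4 * tt**2)

LO, HI = np.array([-2.0] * 3), np.array([2.0] * 3)   # assumed coded-unit bounds

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(LO, HI, (n_particles, 3))        # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.apply_along_axis(grain_size, 1, x)
    g = pbest[np.argmin(pbest_val)]                  # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, LO, HI)
        vals = np.apply_along_axis(grain_size, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)]
    return g, grain_size(g)

best_x, best_g = pso()
print(best_x, best_g)    # minimizer of the placeholder surface
```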
|
Title: |
IMPLEMENTATION OF THE TWOFISH ALGORITHM FOR DATA SECURITY IN A COMMUNICATION
NETWORK USING THE CHILKAT ENCRYPTION ACTIVEX LIBRARY |
Author: |
MUCH AZIZ MUSLIM, BUDI PRASETIYO, ALAMSYAH |
Abstract: |
Cryptography is required to secure data communication over networks. This
study implements the Twofish cryptographic algorithm using the Chilkat
Encryption ActiveX library in MS Visual Basic. Twofish operates on plaintext
blocks of 128 bits. There are three steps in the Twofish algorithm: the first
step divides the input bits into four parts, the second step performs an XOR
operation between the input bits and the key, and the third step processes the
input bits through 16 rounds of a Feistel network. To facilitate the coding in
MS Visual Basic, we use Chilkat Encryption ActiveX. This research uses agile
methods with the phases plan, design, code, test, and release. The Twofish
implementation using MS Visual Basic and the Chilkat Encryption ActiveX
library can be used to secure data. The data were successfully encrypted and
decrypted, and the ciphertext cannot be reversed without the key. The program
can be used to maintain the confidentiality of data transmitted over the
Internet. The encryption process takes roughly three times longer than
decryption: encryption needs 0.365 seconds on average, while decryption needs
0.0936 seconds. |
Keywords: |
Data security, Twofish, Chilkat Encryption ActiveX |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
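Since the Chilkat library hides the cipher internals, here is a generic sketch of the 16-round Feistel structure the abstract outlines; the round function and key schedule below are placeholders, not Twofish's actual F function or key schedule.

```python
import hashlib

def F(half: bytes, subkey: bytes) -> bytes:
    # Placeholder round function (NOT Twofish's F): hash of half-block and subkey.
    return hashlib.sha256(half + subkey).digest()[:8]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block16: bytes, subkeys) -> bytes:
    L, R = block16[:8], block16[8:]            # split the 128-bit block in half
    for k in subkeys:                          # 16 rounds, as in Twofish
        L, R = R, xor(L, F(R, k))
    return R + L                               # final swap

def feistel_decrypt(block16: bytes, subkeys) -> bytes:
    R, L = block16[:8], block16[8:]
    for k in reversed(subkeys):                # run the rounds backwards
        R, L = L, xor(R, F(L, k))
    return L + R

subkeys = [bytes([i]) * 8 for i in range(16)]  # toy key schedule (placeholder)
pt = b"attack at dawn!!"                       # exactly 16 bytes = 128 bits
ct = feistel_encrypt(pt, subkeys)
assert feistel_decrypt(ct, subkeys) == pt      # the structure is invertible
```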
|
Title: |
ENHANCEMENT OF CLOUD PERFORMANCE AND STORAGE CONSUMPTION USING ADAPTIVE
REPLACEMENT CACHE AND PROBABILISTIC CONTENT PLACEMENT ALGORITHMS |
Author: |
AHMED SALIH MAHDI, RAVIE CHANDREN MUNIYANDI |
Abstract: |
Infrastructure as a Service (IaaS) is one of the best-known cloud services.
It provides a high level of flexibility through virtual machines (VMs). One of
the challenges in cloud services is the effective management of large numbers
of VM images: cloud input/output (IO) performance significantly affects VMs,
and substantial storage resources are required, leading to higher management
costs. Currently, optimization is accomplished by one of two methods,
improving performance or decreasing image size, but low storage consumption
and high IO performance cannot be achieved at the same time. Zone-based
methods can potentially balance these requirements. The method proposed in
this paper combines the adaptive replacement cache (ARC) and probabilistic
content placement (PROB) algorithms, together known as the zone-based adaptive
replacement cache and probabilistic content placement (ZB-ARCPROB) method. It
provides better support for the cache management of images while considering
all means of achieving high IO performance and low storage consumption. The
research reported in this paper evaluated the ZB-ARCPROB method with respect
to cloud performance using Network Simulator version 2.35 (NS2). The
performance of ZB-ARCPROB was compared with that of the zone-based method in
terms of three metrics, namely IO latency, IO throughput, and relative storage
consumption. The method is not only methodologically valid but may also
attract interest from academia and industry. The results of the comparison
indicate that the proposed ZB-ARCPROB method outperforms the zone-based
method. |
Keywords: |
Cloud Computing, Virtual Machine (VM), Image Storage, Adaptive Replacement
Cache (ARC), Probabilistic Content Placement |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
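A sketch of the probabilistic content placement (PROB) half of the idea, under assumed inputs: each zone independently caches a VM-image chunk with probability proportional to the chunk's popularity, so hot chunks end up replicated in more zones. This illustrates the placement principle only, not the authors' ZB-ARCPROB method.

```python
import random

random.seed(42)

def place_replicas(chunk_popularity, zones):
    """Probabilistic placement sketch: each zone caches a chunk with
    probability proportional to the chunk's request popularity."""
    placement = {z: set() for z in zones}
    top = max(chunk_popularity.values())
    for chunk, hits in chunk_popularity.items():
        p = hits / top                 # normalize popularity into [0, 1]
        for z in zones:
            if random.random() < p:    # independent Bernoulli draw per zone
                placement[z].add(chunk)
    return placement

# Hypothetical image-chunk request counts.
popularity = {"base-os": 900, "webserver": 300, "db": 150, "rare-tool": 10}
print(place_replicas(popularity, ["zone-a", "zone-b", "zone-c"]))
```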
|
Title: |
A PROPOSED FRAMEWORK TO SUPPORT ADAPTIVITY IN ONLINE LEARNING ENVIRONMENT: USE
CASE IN LMS |
Author: |
AIMAD QAZDAR, B.ER-RAHA, C. CHERKAOUI, A. BAKKI, D. MAMMASS |
Abstract: |
Online Learning Environments (OLEs) such as LMSs, MOOCs, and ALSs are becoming
increasingly popular in many educational institutions, such as universities.
However, LMSs and MOOCs provide the same content to all learners in a given
course, whereas educational theory suggests that learners differ in properties
such as background, preferences, cognitive level, and learning style. This
paper reports an experiment in implementing adaptation in these environments.
The goal is to propose an adaptation model that takes several dimensions of
adaptation into account. We examine these dimensions in the context of hybrid
systems consisting of LMSs, MOOCs, and ALSs. |
Keywords: |
Adaptation, Adaptive Learning Systems, Learning Management System, Massive Open
Online Course, Learning Content, Adaptation Engine, Online Learning Environment,
AeLF |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
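A hypothetical sketch, not the AeLF framework itself, of how the learner properties named in the abstract (background, cognitive level, learning style) could drive content selection in such a hybrid environment; all names below are illustrative.

```python
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    background: str       # e.g. "novice" or "expert"
    style: str            # e.g. "visual" or "textual"
    cognitive_level: int  # 1 (low) .. 5 (high)

def select_variant(profile: LearnerProfile, variants: dict) -> str:
    """Pick a content variant for one course unit from the learner model;
    `variants` maps (style, depth) keys to content identifiers."""
    depth = ("intro" if profile.background == "novice"
             or profile.cognitive_level <= 2 else "advanced")
    return variants[(profile.style, depth)]

variants = {
    ("visual", "intro"): "unit1-video-basics",
    ("visual", "advanced"): "unit1-video-deep-dive",
    ("textual", "intro"): "unit1-reading-basics",
    ("textual", "advanced"): "unit1-reading-deep-dive",
}
print(select_variant(LearnerProfile("novice", "visual", 2), variants))
# -> unit1-video-basics
```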
|
Title: |
A QUERY SYSTEM FOR ONTOLOGICAL MODELS GENERATED FROM TRADITIONAL DATA SOURCES |
Author: |
WIDAD JAKJOUD, MOHAMAD BAHAJ |
Abstract: |
The cooperation between ontologies, as sources of knowledge, and traditional
data sources is used to overcome the heterogeneity of information systems with
respect to information integration. This cooperation also allows classic web
resources to be exploited by inference agents. In this paper, we focus on
ontologies generated from traditional data sources. Our goal is to query an
ontological model without having to populate the ontologies with instances
from the data source. The proposed system adds an intermediate level of
abstraction between the ontological model and the data source schemas; this
level can generate the data partially and temporarily in XML format. The
system also provides a SPARQL-to-XQuery mapping that rewrites any SPARQL query
as an XQuery query so that it can be executed on the already generated data. |
Keywords: |
SPARQL, XQUERY, mapping, model, ontology, data source |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
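A toy, string-level sketch of the SPARQL-to-XQuery rewriting step for a single typed triple pattern; the XML layout (a <data> root with one element per class instance) and naming conventions are assumptions, not the paper's mapping rules.

```python
import re

def sparql_to_xquery(sparql: str) -> str:
    """Rewrite one pattern of the form
    SELECT ?x WHERE { ?e :prop ?x . ?e a :Class } into an XQuery FLWOR
    expression over the generated XML (illustrative subset only)."""
    m = re.search(
        r"\{\s*\?(\w+)\s+:(\w+)\s+\?(\w+)\s*\.\s*\?\1\s+a\s+:(\w+)\s*\}",
        sparql)
    if not m:
        raise ValueError("only a single typed triple pattern is supported")
    _, prop, _, cls = m.groups()
    return (f"for $e in doc('generated.xml')/data/{cls}\n"
            f"return $e/{prop}/text()")

q = "SELECT ?n WHERE { ?e :name ?n . ?e a :Employee }"
print(sparql_to_xquery(q))
# for $e in doc('generated.xml')/data/Employee
# return $e/name/text()
```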
|
Title: |
A ZERO-DISTORTION FRAGILE WATERMARKING SCHEME TO DETECT AND LOCALIZE MALICIOUS
MODIFICATIONS IN TEXTUAL DATABASE RELATIONS |
Author: |
ABD. S. ALFAGI, A. ABD. MANAF, B. A. HAMIDA, R. F. OLANREWAJU |
Abstract: |
In this paper, we present a new zero-distortion fragile watermarking scheme
to detect and localize malicious modifications in textual database relations.
Most existing fragile watermarking schemes introduce errors or permanent
distortion into the original database content. These distortions violate the
integrity of the database, and consequently its quality and usability are
degraded. Although some fragile schemes are able to authenticate database
integrity and detect malicious modifications, they are based on tuple or
attribute ordering and are unable to characterize the attack, identify the
type of attack, identify the tampered data, or locate the tampered tuples. In
addition, most existing fragile schemes generate the watermark from LSBs or
MSBs, unlike the present scheme, which generates the watermark from local
characteristics of the relation itself, such as character frequencies and text
lengths. The scheme is serviceable for both sensitive and insensitive textual
relational databases, since it does not introduce any error into the original
content. It thereby avoids the weaknesses of integrity violation and of
degraded data usability and quality. The experimental results show the ability
of the proposed scheme to authenticate the database relation as well as to
characterize the attack, identify the changed data, and locate the text
affected by the malicious modification, without depending on tuple or
attribute ordering. |
Keywords: |
Database Watermarking, Fragile Watermarking Scheme, Robust Watermarking Scheme,
Tamper Detection, Authentication |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
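A minimal sketch of the zero-distortion idea, assuming the certificate is stored outside the database: the watermark for each tuple is derived from its text length and character frequencies (the local characteristics the abstract names), so verification can localize tampered tuples without modifying the data or relying on tuple ordering. The hashing details are assumptions.

```python
import hashlib
from collections import Counter

def tuple_watermark(values):
    """Zero-distortion watermark for one tuple: built only from local text
    characteristics (length and character frequencies), never embedded."""
    text = "|".join(str(v) for v in values)
    features = f"len={len(text)};freq={sorted(Counter(text).items())}"
    return hashlib.sha256(features.encode()).hexdigest()

def certify(relation):
    # Store {primary_key: watermark} separately from the database content.
    return {pk: tuple_watermark(vals) for pk, vals in relation.items()}

def locate_tampering(relation, certificate):
    # A tuple whose recomputed watermark differs has been modified.
    return [pk for pk, vals in relation.items()
            if tuple_watermark(vals) != certificate.get(pk)]

db = {1: ("Alice", "engineer"), 2: ("Bob", "analyst")}
cert = certify(db)
db[2] = ("Bob", "manager")             # malicious modification
print(locate_tampering(db, cert))      # [2] -> tampered tuple localized
```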
|
Title: |
A BINARY RELEVANCE (BR) CLASSIFIER FOR MULTI-LABEL CLASSIFICATION OF ARABIC
TEXT |
Author: |
ADIL YASEEN TAHA, SABRINA TIUN |
Abstract: |
Multi-label text classification, in which each document can be assigned
multiple labels concurrently, has become progressively more important in
recent years. It is a challenging task because of the large space of potential
label sets, which is exponential in the number of candidate labels. Among the
disadvantages of earlier multi-label classification methods is that they
typically do not scale with the number of labels and the number of training
examples: a large amount of computational time is required to classify a large
number of high-dimensional text documents, especially in Arabic, a language
with a very complex and rich morphology. Furthermore, current research has
paid little attention to multi-label classification for Arabic text. Hence,
this study aims to design and develop a new multi-label text classification
method for Arabic texts based on the binary relevance method, where the binary
relevance ensemble is built from different sets of machine learning
classifiers. Four multi-label classification approaches, namely a set of SVM
classifiers, a set of KNN classifiers, a set of NB classifiers, and a set of
mixed classifier types, were empirically evaluated in this research. Moreover,
three feature selection methods (odds ratio, chi-square, and mutual
information) were studied and their performance investigated in order to
enhance Arabic multi-label text classification. The objective is to
efficiently combine classification algorithms and feature selection to create
a more accurate multi-label classification process. To evaluate the model, a
manually annotated gold-standard dataset was used. The results show that the
binary relevance classifier built from a mixed set of machine learning
classifiers attains the best result, achieving a good performance with an
overall F-measure of 86.8% for the multi-label classification of Arabic text.
The results also show an important effect of the feature selection methods on
the classification. Distinctly, the set of mixed algorithms proves to be an
efficient and suitable method for Arabic multi-label text classification. |
Keywords: |
Arabic Text Classification, Multi-label Classification, Feature Selection,
Statistical Methods |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
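A minimal sketch, not the authors' exact pipeline, of binary relevance with chi-square feature selection in scikit-learn: OneVsRestClassifier trains one independent binary classifier per label over a multi-label indicator matrix, which is exactly the binary relevance decomposition. The TF-IDF features, the label matrix, and k=1000 are assumptions.

```python
# Binary relevance over Arabic documents (illustrative sketch). X would be
# TF-IDF vectors of the documents; Y a (n_docs, n_labels) 0/1 indicator matrix.
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

br_model = make_pipeline(
    SelectKBest(chi2, k=1000),            # chi-square: keep the 1000 best features
    OneVsRestClassifier(LinearSVC()),     # one binary classifier per label = BR
)
# br_model.fit(X_train, Y_train)
# Y_pred = br_model.predict(X_test)       # multi-label 0/1 predictions
```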
|
Title: |
DYNAMIC MEMORY ALLOCATION USING FREE BLOCKS |
Author: |
ALEKSANDR BORISOVICH VAVRENYUK, IGOR VLADIMIROVICH KARLINSKY, ARKADY PAVLOVICH
KLARIN, VIKTOR VALENTINOVICH MAKAROV, VIKTOR ALEKSANDROVICH SURIGIN |
Abstract: |
Algorithms for the dynamic allocation of RAM (Random Access Memory) by the
operating system under multiprogramming have a significant impact on the
efficiency of the operating system as a whole. The memory manager (allocator)
of the GNU C Library, the standard UNIX library, claims universality but is
ineffective in some cases. This article describes an allocator algorithm,
proposed by the authors, that maintains a list of free areas and achieves
higher RAM usage efficiency. A test methodology for the developed allocators
is proposed, and the results of comparing the proposed allocator with the
allocator of the GNU C Library are provided. |
Keywords: |
Allocator, Memory Manager, The Process Of Memory Allocation, Memory
Fragmentation, Operating Systems |
Source: |
Journal of Theoretical and Applied Information Technology
29th February 2016 -- Vol. 84 No. 3 -- 2016 |
Full Text |
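A toy simulation, not the authors' allocator, of dynamic allocation over an explicit list of free blocks: first-fit search, block splitting on allocation, and coalescing of adjacent free blocks on release.

```python
class FreeListAllocator:
    def __init__(self, size):
        self.free = [(0, size)]        # sorted list of (offset, length) free blocks

    def malloc(self, n):
        for i, (off, length) in enumerate(self.free):
            if length >= n:            # first fit
                if length == n:
                    self.free.pop(i)
                else:                  # split: keep the remainder on the list
                    self.free[i] = (off + n, length - n)
                return off
        return None                    # out of memory

    def free_block(self, off, n):
        self.free.append((off, n))
        self.free.sort()
        merged = [self.free[0]]        # coalesce neighbours to fight fragmentation
        for o, l in self.free[1:]:
            po, pl = merged[-1]
            if po + pl == o:
                merged[-1] = (po, pl + l)
            else:
                merged.append((o, l))
        self.free = merged

heap = FreeListAllocator(1024)
a = heap.malloc(100)                   # -> offset 0
b = heap.malloc(200)                   # -> offset 100
heap.free_block(a, 100)
heap.free_block(b, 200)                # coalesces back into one free block
print(heap.free)                       # [(0, 1024)]
```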
|