|
Submit Paper / Call for Papers
The journal receives papers on a continuous basis and considers articles
from a wide range of Information Technology disciplines, from basic research
to the most innovative technologies. Please submit your papers electronically
through our submission system at http://jatit.org/submit_paper.php in MS Word,
PDF, or a compatible format so that they can be evaluated for publication in
an upcoming issue. This journal uses a blinded review process; please include
all of your personally identifiable information in the manuscript when
submitting it for review, and we will redact the necessary information on our
side. Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
|
|
|
Journal of Theoretical and Applied Information Technology
May 2016 | Vol. 87 No.2 |
Title: |
SIGNIFICANT PREPROCESSING METHOD IN EEG-BASED EMOTIONS CLASSIFICATION |
Author: |
MUHAMMAD NADZERI MUNAWAR, RIYANARTO SARNO, DIMAS ANTON ASFANI, TOMOHIKO IGASAKI,
BRILIAN T. NUGRAHA |
Abstract: |
EEG preprocessing methods for classifying a person's emotions have been widely
applied. However, determining the most significant preprocessing methods can
still be improved. In this regard, this paper proposes a method to determine
the most significant preprocessing methods, among them (i) the denoising
method; (ii) frequency bands; (iii) subjects; (iv) channels; and (v) features.
The purpose is to improve the accuracy of emotion classification based on the
valence-arousal emotion model. EEG data from 34 participants were recorded
together with valence and arousal questionnaires taken from the participants
as they received picture, music, and video stimuli. The EEG data were divided
into 5-second segments per trial and then processed using a denoising method
and feature extraction. After that, the most significant preprocessing methods
were chosen using Pearson correlation statistical analysis, and the
preprocessed EEG data were classified. The average accuracies using SVM are
66.09% (valence) and 75.66% (arousal), while the average accuracies using KNN
are 82.33% (valence) and 87.32% (arousal). For comparison, without choosing
the most significant preprocessing methods, the average accuracies are 52%
(valence) and 49% (arousal) using SVM, and 50.13% (valence) and 56% (arousal)
using KNN. |
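The selection step described in the abstract, ranking candidate features by Pearson correlation with the labels and keeping the strongest, can be sketched as follows. This is an illustrative sketch on synthetic data, not the authors' implementation; the function name and the data are hypothetical.

```python
import numpy as np

def select_by_pearson(X, y, k):
    """Rank columns of X by |Pearson r| against label y; keep the top k."""
    Xc = X - X.mean(axis=0)            # center each feature
    yc = y - y.mean()                  # center the label
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    )
    return np.argsort(-np.abs(r))[:k]  # indices of the k strongest features

# Synthetic example: feature 2 is constructed to track the label closely.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
X = rng.normal(size=(100, 5))
X[:, 2] = y + 0.1 * rng.normal(size=100)
top = select_by_pearson(X, y, k=1)
```

In the paper's setting the same ranking idea would be applied per channel, band, and feature rather than to a synthetic matrix.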
Keywords: |
Significant Preprocessing Method, Electroencephalogram (EEG), Emotion
Classification, Valence, Arousal |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
SOCIAL MEDIA USAGE IN ACADEMIC RESEARCH |
Author: |
MUHAMMAD MURAD KHAN, IMRAN GHANI, SEUNG RYUL JEONG, ROLIANA IBRAHIM,
HABIB-UR-REHMAN |
Abstract: |
Recently, researchers have used the Conversation Prism and the Social Media
Prisma to consolidate social media platforms with respect to their use.
Although both identified 25 types, with an average of five examples each, they
did not identify the contribution of each type to academic research. Moreover,
some of the mentioned social services have been suspended or changed. In this
paper we attempt to access each social medium mentioned in the Conversation
Prism in order to, first, identify which services are operational to date,
which have been suspended, and which have changed over time. Second, we
compare the number of publications associated with each social medium in order
to identify which has contributed most to academic research. Third, we attempt
to find a correlation between the number of publications and the development
tools provided by the respective social applications. Fourth, social media
platforms are ranked by the number of times other platforms share content with
them. It was found that, out of 168 social applications, 10% changed their
service objective while 13% were suspended. Among all social applications,
Amazon had the highest number of citations on Google Scholar (147,000), and
90.7% of total citations were contributed by the top 30 platforms. For
developers, 22 of the top 30 platforms provided developer options in the form
of either an application programming interface (API) or a software development
kit (SDK), and Facebook was found to be the most cross-referenced social
medium based on content sharing. Finally, the conclusion and future work of
the study are presented. |
Keywords: |
Social Media, Academic Research, Conversation Prism, Developer Tools, API, SDK |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
HISTOGRAM EQUALIZATION BASED FRONT-END PROCESSING FOR NOISY SPEECH RECOGNITION |
Author: |
IBRAHIM MISSAOUI, ZIED LACHIRI |
Abstract: |
In this paper, we present Gabor feature extraction based on front-end
processing using histogram equalization for noisy speech recognition. The
proposed features, named Histogram Equalization of Gabor Bark Spectrum
(HeqGBS) features, are extracted using 2-D Gabor processing followed by a
histogram equalization step applied to the spectro-temporal representation of
the Bark spectrum of the speech signal. Histogram equalization is used as
front-end processing in order to reduce or eliminate undesired information in
the spectrum representation. The proposed HeqGBS features are evaluated on a
recognition task of noisy isolated speech words using HMMs. The obtained
recognition rates confirm that the HeqGBS features yield promising results
compared to Gabor features obtained from the log Mel-spectrogram. |
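As a minimal illustration of the front-end idea only (not the authors' HeqGBS pipeline), histogram equalization maps values through their empirical CDF so that the output distribution is approximately uniform, flattening skewed spectral energies:

```python
import numpy as np

def histogram_equalize(values, n_bins=256):
    """Map values through their empirical CDF; output lies in (0, 1]."""
    hist, edges = np.histogram(values, bins=n_bins)
    cdf = hist.cumsum() / hist.sum()            # empirical CDF per bin
    bin_idx = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
    return cdf[bin_idx]                         # CDF value of each sample's bin

# Skewed synthetic "spectral energies" become roughly uniform after mapping.
spectrum = np.random.default_rng(1).exponential(size=1000)
eq = histogram_equalize(spectrum)
```

In a speech front end the same mapping would be applied to the Bark-spectrum representation before feature extraction.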
Keywords: |
Front-end Processing, Histogram Equalization, 2-D Gabor processing,
Feature extraction, Noisy Speech Recognition |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
INITIAL RECOMMENDATIONS OF MOOCS CHARACTERISTICS FOR ACADEMIC DISCIPLINE
CLUSTERS |
Author: |
HUZAIFAH MOHD SALAMON, NAZMONA MAT ALI, SURAYA MISKON, NORASNITA AHMAD |
Abstract: |
Massive Open Online Courses (MOOCs) have been introduced as an extensive and
pervasive learning style intended to improve students' performance. Many
academics and students in Malaysian higher education institutions have begun
to show interest in applying MOOCs to deliver course material or conduct
distance education in an innovative way. However, no guidelines are available
to assist the application of MOOCs; in particular, there are no guidelines to
help academics develop their own MOOCs based on their areas of expertise. In
the education domain, these areas of expertise are usually arranged into
groups referred to as academic discipline clusters. This study investigates
existing MOOC characteristics and recommends specific characteristics in
relation to academic discipline clusters. A content analysis was carried out
by exploring MOOC characteristics implemented successfully in six universities
worldwide, including an examination of the platforms used and the courses
that applied MOOCs. Based on the results, desirable MOOC characteristics are
recommended with regard to academic discipline clusters, with Universiti
Teknologi Malaysia used as a case study. |
Keywords: |
Massive Open Online Courses, MOOCs, Malaysia HEIs, Learning style |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
POWERFUL NETWORK MONITORING SYSTEM WITH ADDITIONAL SERVICES INCREASING THE
NETWORK PERFORMANCE |
Author: |
CHITLURU VEERABHADRA RAO, T PAVAN KUMAR, Dr V SRIKANTH |
Abstract: |
Nagios, a powerful network monitoring system, helps medium- and large-scale
organizations detect and solve IT infrastructure monitoring problems before
they affect crucial and confidential business information. New services were
added to the existing system, which previously lacked them: configuration of
various authentication levels, bandwidth-usage monitoring, and alerting
services. Nagios alerts the user whenever a defect occurs and again when the
problem is resolved. Alerts are delivered through email, SMS, or pager; an
alert about a particular service can also be delivered manually by an
administrator commenting on the service. Because there is a group of
administrators, any of them can send an alert to the hosts, and there is a
chance of a wrong alert being conveyed; giving different login credentials to
each level of authority therefore helps users receive the correct alert about
the status of a service. The bandwidth-usage indication added to Nagios
reports the bandwidth used by a host machine, and if the usage limit is
reached, an alert is generated and delivered to the administrator. The Network
Development Life Cycle was chosen as the methodology for implementing this
system in the network. Nagios is installed on the Ubuntu operating system
along with the Multi Router Traffic Grapher (MRTG) and the Postfix mail
server, which were configured to integrate with Nagios. On the client side,
NSClient++ has been installed for monitoring the bandwidth and performance of
Windows-based operating systems. |
Keywords: |
Nagios, Network monitoring, Email notification, SMS alert, Network
Performance,Multi Router-Traffic Grapher (MRTG) |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
IMPROVED METHOD FOR THE FORMATION OF LINGUISTIC STANDARDS FOR INTRUSION
DETECTION SYSTEMS |
Author: |
AKHEMETOV BAKHYTZHAN, KORCHENKO ANNA, AKHMETOVA SANZIRA, ZHUMANGALIEVA NAZYM |
Abstract: |
Due to the intensive development of digital business, malicious software and
other cyber threats are becoming more common. Increasing the level of security
requires special countermeasures that remain effective against new types of
threats and allow cyberattacks targeting multiple information-system resources
to be detected under fuzzy conditions. Different attacking effects on related
resources give rise to different sets of parametric anomalies in a
heterogeneous environment. A tuple model is known for forming the set of core
components that allow cyberattacks to be detected; its effective application
requires a formal approach to the formation of fuzzy (linguistic) standards.
To this end, a method is developed that focuses on the task of identifying
cyberattacks on computer systems. It is based on mathematical models and
methods of fuzzy logic and is implemented through six basic stages: forming
subsets of linguistic-assessment identifiers, forming the base frequency
matrix, forming the derived frequency matrix, forming fuzzy terms, forming
the reference fuzzy numbers, and visualizing the linguistic standards. The
method improves the process of formalizing linguistic standards and provides
options for improving the efficiency of constructing the corresponding
intrusion detection systems. |
Keywords: |
Artificial Neural Network (ANN), Static Var Compensator (SVC), Autonomous
Hybrid Power System (AHPS) |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
A NOVEL PROPAGATION TECHNIQUE IN FREE SPACE OPTICAL COMMUNICATION |
Author: |
A.K. RAHMAN, C.B.M. RASHIDI, S.A. ALJUNID, M.S. ANUAR, R.ENDUT, K.R. UMMUL |
Abstract: |
This paper focuses on mitigating the atmospheric turbulence effect in free
space optical communication (FSOC) using a dual diffuser modulation (DDM)
technique. The effect that most degrades FSOC is scintillation, which distorts
the wavefront, causes signal fluctuation, and can ultimately saturate the
receiver or cause loss of signal. The DDM approach enhances the detection of
bit 1 and bit 0 and improves the received power to combat the turbulence
effect. Performance is analyzed in terms of signal-to-noise ratio (SNR) and
bit error rate (BER) using Kolmogorov scintillation theory. The numerical
results show that the DDM approach improves the range, with an estimated
improvement of approximately 40% under weak turbulence and 80% under strong
turbulence. |
Keywords: |
Dual Diffuser Modulation, Free Space Optical Communications, Atmospheric
Turbulence |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
TEST CASE PRIORITIZATION ON REAL TIME APPLICATIONS |
Author: |
SNEHA PASALA, PRASAD MSR, RAMA KRISHNA V |
Abstract: |
Even though test case prioritization is known for its efficiency in detecting
faults early by making use of prioritized test suites, it is not used in
real-time applications because it needs faults to be known beforehand. The
Average Percentage of Faults Detected (APFD) determines the effectiveness of
test-suite orders, whether prioritized or non-prioritized: it favors suite
orders in which faults are detected at an early stage and with fewer test
cases. APFD is therefore chosen as the measure in our work, and test case
suites are prioritized using the APFD metric. APFD values for various builds
of an HR application are calculated to demonstrate the efficiency of test
case prioritization in real-time applications and projects. |
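For reference, APFD has a standard closed form: for a suite of n tests and m faults, APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the position of the first test in the ordered suite that detects fault i. A small sketch with hypothetical fault data:

```python
def apfd(first_detecting_test, n_tests):
    """APFD = 1 - (sum of TF_i)/(n*m) + 1/(2n) for a given suite order."""
    m = len(first_detecting_test)
    return 1 - sum(first_detecting_test) / (n_tests * m) + 1 / (2 * n_tests)

# Example: 5 tests, 4 faults, first detected at positions 1, 2, 2, 4.
score = apfd([1, 2, 2, 4], n_tests=5)   # 1 - 9/20 + 1/10 = 0.65
```

Orders that detect faults earlier push the TFi values down and the APFD score toward 1.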
Keywords: |
HR application, real time analysis, Prioritization criteria, component
based criteria, module based criteria, Faults, APFD |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
GENERALIZED PROBABILISTIC DESCRIPTION OF HOMOGENEOUS FLOWS OF EVENTS FOR SOLVING
INFORMATIONAL SECURITY PROBLEMS |
Author: |
YURI YURIEVICH GROMOV, IVAN GEORGIEVICH KARPOV, YURI VIKTOROVICH MININ, OLGA
GENNADEVNA IVANOVA |
Abstract: |
A method is obtained for producing a generalized probabilistic description of
flows of events characterized by ordinariness, stationarity, and lack of
aftereffects, for solving informational security problems. Examples of such
flows are test queries to network addresses, attempts to scan available ports,
etc. The independence-of-increments property may not hold for these events,
but the Markov lack-of-aftereffects property necessarily does: past events,
except the last one, have no influence on the time at which the next event
appears. The authors obtain the distributions of flows of events for a linear
flow-intensity function and for the Poisson, binomial, and negative binomial
laws. We also obtain expressions for the distribution laws of the k-th event
occurrence and their main numerical characteristics. |
Keywords: |
Informational Security, Flow Of Events, Probability, Probability
Distribution Density, Mathematical Expectation, Variance |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
EVALUATING THE PERFORMANCE OF DEEP SUPERVISED AUTO ENCODER IN SINGLE SAMPLE FACE
RECOGNITION PROBLEM USING KULLBACK-LEIBLER DIVERGENCE SPARSITY REGULARIZER |
Author: |
OTNIEL Y. VIKTORISA, ITO WASITO, ARIDA F. SYAFIANDINI |
Abstract: |
Recent developments in supervised autoencoder research offer promising
solutions to single sample face recognition problems. In this research, a
Kullback-Leibler divergence (KLD) approach is proposed to obtain the
sparsity-constraint penalty for the deep autoencoder learning process. The
approach is tested on two datasets, Extended Yale B (cropped version) and
LFWcrop. For comparison, Log and εL1 are also employed as sparsity
regularizers. The experimental results confirm that KLD performs better than
Log and εL1 in image classification on both datasets. |
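The KLD sparsity penalty referred to here typically takes the form KL(ρ ‖ ρ̂_j) summed over hidden units, pushing each unit's mean activation ρ̂_j toward a small target ρ. A minimal numerical sketch; the target value and the batches are illustrative, not the paper's settings:

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """KL(rho || rho_hat) summed over hidden units, where rho_hat is each
    unit's mean activation over the batch (sigmoid outputs in (0, 1))."""
    rho_hat = np.clip(activations.mean(axis=0), 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

acts = np.full((10, 4), 0.05)   # units already at the target sparsity
penalty_at_target = kl_sparsity_penalty(acts)
busy = np.full((10, 4), 0.5)    # units firing half the time
penalty_busy = kl_sparsity_penalty(busy)
```

The penalty is zero when the mean activations match the target and grows as units become more active, which is what drives the learned representation toward sparsity.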
Keywords: |
Single Sample Face Recognition, Deep Auto Encoder, Kullback-Leibler
Divergence, Sparsity, Sparsity Regularizer |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
QUESTION CLASSIFICATION USING SUPPORT VECTOR MACHINE AND PATTERN MATCHING |
Author: |
ALI MUTTALEB HASAN, LAILATUL QADRI ZAKARIA |
Abstract: |
Question classification plays a crucial role in question answering systems; it
aims to accurately assign one or more labels to a question based on the
expected answer type. Nonetheless, classifying users' questions is a very
challenging task due to the flexibility of natural language, where a question
can be written in many different forms and the information within the sentence
may not be enough to classify the question effectively. Little research has
focused on question classification for Arabic question answering. In this
research we use a support vector machine (SVM) and pattern matching to
classify questions into three main classes: "Who", "Where", and "What". The
SVM leverages features such as n-grams and WordNet, which is used to map words
in questions to synonyms with the same meaning. Five patterns were introduced
to analyze "What" questions and label them with "definition", "person",
"location", or "object". The dataset used in this research consists of 200
questions about Hadith from Sahih Al-Bukhari. The experiment scored F-measures
of 95.2%, 84.6%, and 83.6% for "Who", "Where", and "What", respectively. The
results show that the SVM classifier is useful for classifying questions in
the Arabic language. |
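The pattern-matching stage for "What" questions can be sketched as an ordered list of regular expressions with a fallback, in the spirit of the paper's five-pattern analysis. The patterns below are hypothetical English stand-ins for illustration; the paper's actual patterns are for Arabic questions.

```python
import re

# Ordered patterns: the first match wins; a bare "what" falls back to "object".
WHAT_PATTERNS = [
    (re.compile(r"^what is the meaning of\b", re.I), "definition"),
    (re.compile(r"^what is the name of the person\b", re.I), "person"),
    (re.compile(r"^what (place|city|country)\b", re.I), "location"),
    (re.compile(r"^what\b", re.I), "object"),
]

def classify_what(question):
    """Return the sub-label of a 'What' question, or None if no match."""
    for pattern, label in WHAT_PATTERNS:
        if pattern.search(question):
            return label
    return None

label = classify_what("What is the meaning of Hadith?")
```

Ordering matters: the specific patterns must precede the generic fallback, exactly as with the paper's five-way split of the "What" class.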
Keywords: |
Question Classification System; Machine Learning. |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
SMES PERFORMANCE AND SUBSIDIES IN IT INVESTMENTS: A VIS-A-VIS APPROACH |
Author: |
CHRISTOS LEMONAKIS, KONSTANTINOS VASSAKIS, ALEXANDROS GAREFALAKIS, PETRO PAPA |
Abstract: |
In the knowledge-based and globalized economy, information provides value to
firms, and Information Technology (IT) is the mechanism through which this
value can be achieved. Companies that exploit the endless possibilities of IT
gain the capacity to overcome future challenges. This study focuses on the
case of Greek SMEs, which face an extremely competitive and unfriendly macro
environment, and examines the performance of their IT investments through a
combination of qualitative and quantitative longitudinal data. A variety of
financial as well as qualitative (primary) data are used to examine the
correlation between profitability and IT. A survey-based methodology was used
to collect qualitative data on Greek SMEs covering the period 2004-2010,
which is divided into pre-crisis and post-crisis sub-periods. Additionally,
the relationship between European IT investment subsidies and firm
profitability is examined. The results of this empirical study indicate that
companies with IT investments show higher profitability than their rivals.
SMEs are the backbone of the national economy, so their competitiveness plays
a significant role in national development and growth; the findings of this
research therefore have implications for practitioners, managers, and policy
makers. |
Keywords: |
IT investments, SMEs Performance, Profitability, Competitiveness and
European Subsidies |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
GROUP DECISION MAKERS-BASED MODEL FOR EVALUATING THE FEASIBILITY OF INFORMATION
AND COMMUNICATIONS TECHNOLOGY PROJECT
(CASE STUDY : LOCAL GOVERNMENT OF MUSI RAWAS) |
Author: |
WIJANG WIDHIARSO, SRI HARTATI, RETANTYO WARDOYO |
Abstract: |
The feasibility study of an information and communications technology (ICT)
project is becoming more and more important due to the significant growth of
ICT project investment as well as the complexity of its analysis, especially
in terms of benefit estimation. Recently, Advanced Information Economics
(AIE), which considers benefit and cost analysis, has been proposed for
evaluating ICT project feasibility. In spite of its simplicity, AIE retains a
weakness relating to the subjectivity involved in determining the expected
benefit and risk values. This drawback can produce unreasonable results, such
as an extremely high Return on Investment (ROI). In addition, AIE does not
incorporate a group of decision makers, which in practice is considered the
main influencing factor of project appraisal.
This paper discusses a new variant of information economics that considers
group decision makers for evaluating the feasibility of ICT projects, namely
Group Decision Making Information Economics (GDM IE). This method also
includes a benefit and risk template to enhance the applicability of GDM IE,
where the benefit- and risk-related values are derived from actual and
practical references gathered from several local governments in South
Sumatera. Furthermore, three kinds of models are involved in GDM IE for
evaluating the benefit and risk values. The first (model A) selects the
benefit and risk values from the existing values in the template. The second
(model B) compares the selected reference value (template) with a new benefit
value entered by the user. In the last (model C), the user directly enters
the benefit and risk values to evaluate the feasibility of the ICT project.
To investigate its applicability, this paper also applies GDM IE to evaluate
ICT projects in Musi Rawas District. |
Keywords: |
GDM IE, ICT Project, AIE, Benefits, Risks |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
DYNAMIC LOAD BALANCING FOR CLOUD PARTITION IN PUBLIC CLOUD MODEL USING VISTA
SCHEDULER ALGORITHM |
Author: |
MANISHANKAR S, SANDHYA R, BHAGYASHREE S |
Abstract: |
The larger information technology grows, the larger the data generated, and
balancing this huge data load is always a major question for the information
management industry. Cloud computing uses virtual data storage and
infrastructure to manage, store, and process huge volumes of data. As the
cloud offers major services to clients, it is very important to balance the
incoming requests. The aim of this work is to propose an efficient scheduling
algorithm that achieves optimal load balancing. The algorithm addresses the
challenges of cloud load management by selecting the best cloud partition for
workload management, incorporating a novel selection and scheduling algorithm
based on an assignment-problem approach (the VISTA scheduling algorithm), and
achieving an optimized solution with minimal use of processing resources. |
Keywords: |
Load Balancing, Public Cloud, Cloud Partition, Assignment Problem, Vista
Scheduling Algorithm |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
A NOVEL HARDWARE PARAMETERS BASED CLOUD DATA ENCRYPTION AND DECRYPTION AGAINST
UN-AUTHORIZED USERS |
Author: |
KAVURI K S V A SATHEESH, GANGADHARA RAO KANCHERLA, BASAVESWARARAO BOBBA |
Abstract: |
Nowadays there is a revolutionary trend toward cloud services, and more
customers are attracted to public cloud storage. Outsourcing data storage to
cloud storage servers is an emerging trend among many firms and users. To be
relieved of the burden of storage management, to gain broad data access
independent of geographical location, and to economize on investment in
software, hardware, and maintenance, most organizations are outsourcing their
data management operations to external service providers. The most attractive
part of cloud computing is computation outsourcing, which is becoming a major
research area and receiving growing attention and interest from both academia
and industry. As research progresses day by day, the obstacles of cloud
computing come into focus; some challenges are solved, and some solutions are
better optimized. Among the new and existing challenges, the one most in the
limelight is the security of outsourced data, since the data owner has no
control over either the hosted data or the hosting data centers. A number of
techniques have addressed this problem of ensuring the security and integrity
of data hosted in the cloud, but all of them have their own limitations. In
this paper, we propose a new model of CP-ABE in which hardware parameters,
such as cloud instances and server configurations, are used in the setup, key
generation, encryption, and decryption phases. |
Keywords: |
Attribute-based encryption, Cloud Parameters, CP-ABE, KP-ABE, Cloud
Security, Outsourcing Computation. |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
CLOUD COMPUTING SECURITY THROUGH PARALLELIZING FULLY HOMOMORPHIC ENCRYPTION
APPLIED TO MULTI-CLOUD APPROACH |
Author: |
OUADIA ZIBOUH, ANOUAR DALLI, HILAL DRISSI |
Abstract: |
Cloud computing represents a major change in the way IT resources are utilized
and creates value for businesses. It is the future of information technology,
offering many benefits such as flexibility, efficiency, scalability,
integration, and cost reduction. However, security concerns are the major
obstacle to widespread adoption of this technology by organizations that
handle sensitive and important information. The main aim of this paper is
therefore to propose a new framework that secures cloud computing, prevents
security risks, and improves the performance and processing time of data.
This framework combines several powerful security techniques, such as a
secret sharing scheme, Fully Homomorphic Encryption (FHE), and a multi-cloud
approach, with the implementation of a processing dispatcher that distributes
a set of operations on FHE-encrypted data among a number of processing
engines. |
Keywords: |
Cloud Computing, Security, Multi-Clouds, DepSky, RACS, HAIL, ICStore, FHE |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
DESIGN AND CONSTRUCTION OF CONTROL SYSTEM SEPIC CONVERTER FOR SOLAR PANEL BASED
ON FUZZY LOGIC |
Author: |
A. H. LOKA, T. HARDIANTO, B. S. KALOKO |
Abstract: |
Within the renewable energy field, solar panels, which operate on the
photovoltaic effect, are used as an alternative energy source. The MPPT method
is used to obtain maximum output power from a solar panel when loads
fluctuate. The implemented MPPT uses fuzzy logic to regulate the on/off time
of the MOSFET in a SEPIC converter by generating PWM signals. Test results
show that the load power at irradiances of 1000 W/m2, 800 W/m2, 600 W/m2, and
400 W/m2 is closer to the maximum power point (MPP) when using MPPT than with
the direct-coupled method. The average power ratio with fuzzy-logic-based MPPT
is 70.833%, whereas with direct coupling it is only 29.312%. |
Keywords: |
Solar Panel, Fuzzy Logic, MPPT, SEPIC converter, Simulink |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
USING DATA MINING TO DEVELOP MODEL FOR CLASSIFYING DIABETIC PATIENT CONTROL
LEVEL BASED ON HISTORICAL MEDICAL RECORDS |
Author: |
TARIG MOHAMED AHMED |
Abstract: |
Nowadays, diabetes is considered one of the diseases that cause more deaths
than any other in the world. To avoid the dangerous complications of diabetes,
patients should control their blood glucose level so that the HbA1c
(cumulative blood glucose level over 3 months) is less than 7%. In this paper,
a new predictive model is developed using data mining techniques. The model
aims to classify diabetic patients into two classes: under control
(HbA1c < 7%) and out of control (HbA1c > 7%). The treatment plans of 10,061
diabetic patients were used to build the model. After a comprehensive survey
of classification techniques, three algorithms were selected: Naive Bayes,
Logistic, and J48. The model was implemented using the WEKA application.
Based on the experimental results, the Logistic algorithm was selected as the
best, with an accuracy rate of 74.8%. To enhance the model's accuracy,
nutrition and exercise data need to be added to the dataset as future work. |
Keywords: |
Diabetes, Data Mining, Classification techniques |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
WIRELESS FAULT TOLERANCES DECISION USING ARTIFICIAL INTELLIGENCE TECHNIQUE |
Author: |
MONEER ALI LILO, L.A.LATIFF, AMINUDIN BIN HAJI ABU, YOUSIF I. AL MASHHADANY |
Abstract: |
Wireless techniques used in industrial applications face significant
challenges from noise, collisions, and data fusion, particularly when wireless
sensors are used to identify and classify faults in real time for protection.
This study focuses on the design of an integrated wireless fault diagnosis
system that protects an induction motor (IM) from vibration by decreasing its
speed. Filtering, signal processing, and artificial intelligence (AI)
techniques are applied to improve the reliability and flexibility of
preventing vibration increases in the IM. Wireless speed and vibration sensors
and a decision card were designed using C++ on a microcontroller, and MATLAB
code was used to implement the signal processing and AI steps. The system
successfully identified the misalignment fault and dropped the speed when
vibrations rose, preventing damage that might otherwise occur to the IM. The
system reduces the vibration level by producing a response signal proportional
to the fault value, modifying the main speed signal to drop the speed of the
IM. |
Keywords: |
Industrial Wireless Sensor, Artificial Intelligent Technique, Signals
Processing, Fault Diagnosis System |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
SURFACING POINT CLOUD DATA FROM MULTIPLE ACQUISITIONS |
Author: |
GOPAL GAIKWAD, SHRUTI PATIL, ROHIT PATIL, NITIN KARMARKAR |
Abstract: |
The trend of using LIDAR (Light Detection and Ranging) based systems for
surveying is increasing every day. These systems output point cloud data,
which can be further processed to produce vital information about the topology
of the surveyed area. In this paper, we provide a novel method to process
point cloud data and build a model that represents the surveyed area more
accurately. The method is noise tolerant, and outliers need no special
handling because they are removed by a morphological erosion operation, so no
features are lost. The scope of this work is restricted to processing data
surveyed from closed cavities or caves in order to understand their topology
and estimate their volume. Generally, the surveyed data are collected as
multiple sample sets; the sample sets do not register with each other and may
contain holes in the point cloud due to operational discrepancies. Our
approach uses a morphological operation to fill the holes. We thereafter use
the Sobel cross-gradient operator to find a common border, which the KNN
algorithm then makes smoother and much thinner. |
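The hole-filling idea can be illustrated on a binary occupancy grid with a morphological closing (dilation followed by erosion). This is a simplified 2-D sketch with a 3x3 cross structuring element, not the paper's 3-D pipeline; the grid is hypothetical.

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 cross structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Binary erosion, expressed as the complement of dilating the complement."""
    return ~dilate(~mask)

def close_holes(mask, n=1):
    """Morphological closing: n dilations followed by n erosions."""
    out = mask
    for _ in range(n):
        out = dilate(out)
    for _ in range(n):
        out = erode(out)
    return out

grid = np.ones((7, 7), dtype=bool)
grid[3, 3] = False               # a one-cell hole in the occupancy grid
filled = close_holes(grid)       # the hole is closed by dilation + erosion
```

The same closing, applied with a larger structuring element or more iterations, fills larger gaps while the paired erosion also removes small isolated outliers, which is the property the abstract relies on.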
Keywords: |
Point Cloud, Morphological Operation, Sobel Mask Filter, KNN Algorithm,
Multiple |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
PERFORMANCE CHARACTERIZATION OF ROHC FOR SATELLITE-BASED UNIDIRECTIONAL LINKS
USING ERROR-FREE CHANNELS |
Author: |
WAY-CHUANG ANG, MUHAMMAD-IMRAN SARWAR, TAT-CHEE WAN |
Abstract: |
Satellite communication systems play a vital role in providing Wide Area
Networks (WANs) due to their broad coverage, but at the same time they pose
challenges for IP services over unidirectional satellite links. This research
evaluates RObust Header Compression (ROHC) for Unidirectional Lightweight
Encapsulation (ULE) in terms of network performance, along with a practical
implementation design of a ROHC-over-satellite test-bed. Moreover, a
mathematical model is presented to estimate the theoretical performance
characteristics of ROHC-compressed traffic, which is then compared with
empirical results. The experiments showed that ROHC delivered a significant
improvement in bandwidth utilization for packets with small payload sizes,
with up to an 86% gain in throughput performance when compressing traffic.
Packets with larger payload sizes exhibited an exponential decrease in the
throughput gain achievable through ROHC as the payload size increased. This
paper also discusses the effectiveness of ROHC for IPv4 versus IPv6 traffic;
the evaluation indicates that IPv6 traffic streams benefited from ROHC to a
greater degree than IPv4 traffic streams, even on non-ideal links. |
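The payload-size dependence described above follows directly from the fact that header compression saves a fixed number of bytes per packet. A back-of-the-envelope sketch (the byte counts are illustrative; real ROHC gains depend on the profile, context state, and link framing):

```python
def compression_gain(header, compressed_header, payload):
    """Fraction of per-packet bandwidth saved by shrinking the header."""
    original = header + payload
    compressed = compressed_header + payload
    return (original - compressed) / original

# IPv4 + UDP + RTP headers total 40 bytes; ROHC can shrink them to a few.
small = compression_gain(40, 3, payload=20)    # small payload: large gain
large = compression_gain(40, 3, payload=1400)  # large payload: gain shrinks
```

Since the saving is fixed while the packet grows with the payload, the relative gain falls off quickly with payload size, matching the trend the experiments report.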
Keywords: |
Satellite Communication, Unidirectional Lightweight Encapsulation (ULE),
RObust Header Compression (ROHC), Digital Video Broadcasting Satellite (DVB-S),
Mesh Networking |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
Title: |
JOIN-LESS APPROACH FOR FINDING CO-LOCATION PATTERNS USING MAP-REDUCE FRAMEWORK |
Author: |
M.SHESHIKALA, D.RAJESWARA RAO, R. VIJAYA PRAKASH |
Abstract: |
Spatial co-location patterns represent a subset of features whose instances
are frequently located in close proximity. For example, mountainous areas and
new truck purchases are frequently co-located, indicating that a person living
close to a mountainous area is likely to buy a truck. Since the instances of
spatial features are embedded in a continuous space and share a variety of
spatial relationships, implementing co-location mining is a challenge. Many
algorithms have been proposed for this, but they are prohibitively expensive
on larger datasets. We propose a parallel join-less approach for co-location
pattern mining that materializes spatial neighbor relationships without any
loss of co-location instances. The parallel join-less approach drastically
reduces the computation time of building the instance look-up schema used to
identify co-location instances, whereas the previous join-less co-location
mining algorithm finds the instances sequentially, which increases the
computation time. The proposed algorithm is developed on MapReduce. The
experimental results show a speed-up in computational performance. The
algorithm works well for datasets of larger size with more features; as the
dataset size decreases, its performance approaches that of the sequential
approach. |
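The interestingness measure behind join-less co-location mining is the participation index: for each feature in a candidate pattern, the participation ratio is the fraction of its instances that appear in some co-location row instance, and the participation index is the minimum of those ratios. A tiny sketch with hypothetical instances of two features:

```python
# All instances of features A and B, plus the neighbor pairs (row instances)
# found within the distance threshold (hypothetical data).
instances_A = {"A1", "A2", "A3", "A4"}
instances_B = {"B1", "B2", "B3"}
row_instances = [("A1", "B1"), ("A2", "B1"), ("A2", "B3")]

# Participation ratio: fraction of a feature's instances in any row instance.
pr_A = len({a for a, _ in row_instances}) / len(instances_A)   # 2 of 4
pr_B = len({b for _, b in row_instances}) / len(instances_B)   # 2 of 3
participation_index = min(pr_A, pr_B)
```

In the MapReduce version, row instances for different grid partitions would be gathered in the map phase and the ratios aggregated in the reduce phase; the measure itself is unchanged.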
Keywords: |
Spatial Data Mining, Parallel Co-location Mining, Join-less Approach,
Participation Ratio, Participation Index, MapReduce |
Source: |
Journal of Theoretical and Applied Information Technology
20th May 2016 -- Vol. 87. No. 2 -- 2016 |
Full
Text |
|
|
|