|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and will consider articles
from a wide range of Information Technology disciplines, from the most basic
research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please include all personally identifiable information in the manuscript when
submitting it for review, and we will remove the necessary information on our
side. Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
|
Journal of Theoretical and Applied Information Technology
September 2014 | Vol. 67 No. 2 |
Title: |
SURVEY ON INFORMATION EXTRACTION FROM CHEMICAL COMPOUND LITERATURES: TECHNIQUES
AND CHALLENGES |
Author: |
MUAWIA ABDELMAGID, MUBARAK HIMMAT, ALI AHMED |
Abstract: |
Chemical documents, especially those involving drug information, come in a
variety of types, the most common being journal articles, patents and theses.
They typically contain large amounts of chemical information, such as PubMed
IDs, activity classes and adverse or side effects. Information extraction
techniques pull such information out of huge numbers of documents and present
it in a useful, structured format; they can be applied to structured,
semi-structured and unstructured texts. Numerous information extraction methods
and techniques have been proposed and implemented. In principle, there are two
main approaches to information extraction: the knowledge engineering approach
and the learning approach. In this survey we first provide the historical
background of information extraction approaches applied to chemical documents
and discuss several kinds of information extraction tasks that have emerged in
recent years. We then discuss the metrics used for evaluating information
extraction systems, and finally outline the main issues that will shape future
research in this area. |
Keywords: |
Information Extraction, Information Retrieval, Chemical Extraction, Extraction
Methods |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
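The survey above discusses metrics for evaluating information extraction systems. As a minimal illustrative sketch (not taken from the paper), the standard precision, recall and F1 scores can be computed from counts of correct, spurious and missed extractions:

```python
def ie_metrics(true_positives: int, false_positives: int, false_negatives: int):
    """Standard evaluation metrics for information extraction systems."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 80 correct extractions, 20 spurious, 40 missed.
p, r, f = ie_metrics(80, 20, 40)
print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")  # precision=0.80 recall=0.67 F1=0.73
```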
|
Title: |
FLOW BASED ANALYSIS TO IDENTIFY BOTNET INFECTED SYSTEMS |
Author: |
R.KANNAN, A.V.RAMANI |
Abstract: |
Botnets are among the most widespread and common elements of today's
cyber-attacks, posing serious threats to network assets and organizational
property; hence there is a pressing need to detect bots and prevent their
adverse effects. Botnets are collections of compromised computers (bots) that
are remotely controlled by their originator (the bot-master) under a common
Command-and-Control (C&C) infrastructure. This paper focuses on separating bots
from regular hosts in the network by classifying their behavior. The goal is to
develop a live botnet detection system that identifies botnet activity in a
network based on traffic behavior analysis and flow intervals, without
depending on packet payload, i.e., it can work on encrypted network
communication protocols. The approach classifies packets based on source IP,
destination IP, number of packets, etc., using a decision tree, a
classification technique in machine learning. Attribute selection is based
mainly on packet attributes and does not consider the data part. The approach
makes it feasible to detect botnet activity without having seen a complete
network flow, by classifying behavior based on time intervals. |
Keywords: |
Botnet, Machine learning, Malicious, Intrusion, Network flow. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
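The paper classifies flows with a decision tree over per-interval flow attributes. A minimal sketch of that idea using scikit-learn follows; the feature names and toy data are illustrative stand-ins, not the authors' dataset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative per-interval flow features: packets/s, mean packet size (bytes),
# distinct destination IPs, mean inter-arrival time (ms). Labels: 1 = bot, 0 = normal.
X = np.array([
    [120, 60, 45, 8],    # small uniform packets to many hosts -> bot-like
    [115, 64, 50, 9],
    [15, 900, 3, 120],   # bulky transfers to few hosts -> normal browsing
    [10, 1200, 2, 200],
])
y = np.array([1, 1, 0, 0])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[110, 70, 40, 10]]))  # -> [1], flagged as bot traffic
```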
|
Title: |
REAL TIME VIDEO SEGMENTATION BASED ON MODIFIED MULTISCALE MORPHOLOGICAL
RECONSTRUCTION |
Author: |
NITHYA. A, KAYALVIZHI R |
Abstract: |
Video segmentation is used in many practical applications such as medical
imaging, computer-guided surgery, machine vision, object recognition, digital
entertainment, surveillance, content-based browsing and augmented reality. In
the proposed technique, video segmentation is performed using a modified
gradient-based multiscale morphological operation. Both gradient and grey-scale
methods are used to segment the object from the frame. A K-means clustering
algorithm is also applied to segment the object by removing the background in
each frame. The segmented object is obtained by refining the combination of the
gradient k-means segmented frame and the grey-scale k-means segmented frame.
Segmentation accuracy is evaluated by comparing the proposed and existing
algorithms at different threshold values. |
Keywords: |
Video Segmentation, Multiscale Morphological Operation, Gradient, K-Means
Clustering, Accuracy. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
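One step the abstract describes is k-means clustering of each frame to separate the object from the background. A rough sketch of that step on a grey-scale frame; assuming the darker cluster is background is a simplification on my part, not the paper's rule:

```python
import numpy as np
from sklearn.cluster import KMeans

def remove_background(frame: np.ndarray) -> np.ndarray:
    """Cluster grey-scale pixel intensities into 2 groups and zero out the background."""
    pixels = frame.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
    # Assume the cluster with the lower mean intensity is background (a simplification).
    background = int(np.argmin(km.cluster_centers_.ravel()))
    mask = (km.labels_ != background).reshape(frame.shape)
    return frame * mask

frame = np.random.randint(0, 256, size=(120, 160), dtype=np.uint8)
segmented = remove_background(frame)
```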
|
Title: |
PROPAGATION DELAY BASED COMPARISON OF PARALLEL ADDERS |
Author: |
THEMOZHI.G, THENMOZHI.V |
Abstract: |
An intelligent full adder circuit is simulated using Cadence Virtuoso Analog
Design version 6.0. The complementary property between sum and carry for most
input combinations is exploited to reduce the number of transistors in the full
adder circuit. Parameters such as power consumption, delay and power-delay
product (PDP) are improved in the proposed CMOS full adder compared with the
conventional CMOS full adder, and the chip size and transistor count are
greatly reduced. From the single-bit full adder module, other parallel adder
circuits, such as the ripple-carry adder, carry-lookahead adder, carry-save
adder, carry-increment adder, carry-skip adder, carry-select adder and
carry-bypass adder, are simulated. The worst-case gate delay of each adder is
measured and compared with the others. |
Keywords: |
Full Adder, CMOS, Gate Delay, PDP (Power Delay Product), Parallel Adders. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
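The comparison above rests on worst-case delay differences between parallel adder families. As a rough reference, the standard first-order, unit-gate-delay models look like the following; these are common textbook approximations, not the paper's measured Cadence values:

```python
import math

def ripple_carry_delay(n: int) -> int:
    # Carry ripples through every bit: ~2 gate delays per stage.
    return 2 * n

def carry_lookahead_delay(n: int, group: int = 4) -> int:
    # Generate/propagate stage, one lookahead level per grouping, final sum stage.
    levels = math.ceil(math.log(n, group))
    return 1 + 2 * levels + 1

def carry_select_delay(n: int, block: int = 4) -> int:
    # One ripple block, then one mux delay per remaining block.
    return 2 * block + (n // block - 1)

for bits in (8, 16, 32):
    print(bits, ripple_carry_delay(bits), carry_lookahead_delay(bits),
          carry_select_delay(bits))  # delay grows linearly, logarithmically, sub-linearly
```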
|
Title: |
TUNING OF NONLINEAR MODEL PREDICTIVE CONTROL FOR QUADRUPLE TANK PROCESS |
Author: |
P.SRINIVASARAO, DR.P.SUBBAIAH |
Abstract: |
A Multi-Input Multi-Output (MIMO) interactive process is more complex than a
Single-Input Single-Output (SISO) system. We consider a four-tank process with
two inputs and two outputs, also called the Quadruple Tank Process (QTP): the
two inputs are the voltages fed to the motor pumps, and the two outputs are the
water levels in the lower tanks. The inputs are manipulated by the setting of
valves, and the four tanks operate as an interactive loop of two lower and two
upper tanks. The Model Predictive Control (MPC) technique is well suited to
this process and gives optimized operational control by calculating present and
future output values; it tunes outputs and inputs simultaneously to provide a
more stable (optimized) output within the permissible limits of tolerance /
error. MPC can stabilize all linear processes effectively and efficiently, and
it can also handle nonlinear processes under extreme conditions. It offers an
optimized result by tuning the prediction horizon P and the control horizon M,
applied here to the QTP as Nonlinear Model Predictive Control. |
Keywords: |
Nonlinear Model Predictive Control, Optimized, QTP, Predictive Control P,
Horizon Control M |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
PERSONALIZING USER DIRECTORIES THROUGH NAVIGATIONAL BEHAVIOR OF INTERESTING
GROUPS AND ACHIEVING MINING TASKS |
Author: |
MS.R.KOUSALYA, DR. V. SARAVANAN |
Abstract: |
A web directory is a collection of web pages with links. Web directories group
and organize web content into thematic hierarchies corresponding to listed
topics that people can easily visualize. The key focus of our methodology is
personalizing web directories using navigation patterns prepared for user
profiles. A user profile contains information about the categories previously
visited by a user or group of users. Patterns are generated based on the
navigational behavior and interests of an individual user or a group of users.
Our work provides a set of mining tasks and personalization techniques to
customize the organization of a user directory based on the corresponding
pattern behavior. |
Keywords: |
Personalization, User Directory, User Profile, Navigation Patterns, Clustering |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
NOVEL MODIFIED FPCM FOR WEB LOG MINING BY REMOVING GLOBAL NOISE AND WEB ROBOTS |
Author: |
P.NITHYA, DR.P.SUMATHI |
Abstract: |
Nowadays the internet is a useful source of information in everyone's daily
activity; this has driven enormous growth of the World Wide Web in traffic
volume and in the size and complexity of websites. Web Usage Mining (WUM)
applies data mining, artificial intelligence and related techniques to web data
in order to forecast users' visiting behavior and discover their interests by
investigating sample data. WUM is directly involved in a wide range of
applications, such as e-commerce, e-learning, web analytics and information
retrieval. Web log data is one of the major sources, containing information on
the links users visited, their browsing patterns and the time spent on a
particular page or link; this information can be used in several applications,
such as adaptive websites, personalized services, customer profiling,
pre-fetching and the generation of attractive websites. Existing web usage
mining approaches suffer from a variety of problems and are difficult to apply
in practice, so novel research is needed for accurate prediction of web users'
future behavior with rapid execution time. The main aim of this paper is to
remove noise and web robots with a novel approach, providing faster and easier
data processing and saving time and resources. A novel pre-processing technique
is proposed that removes local and global noise and web robots. The Anonymous
Microsoft Web Dataset and the MSNBC.com Anonymous Web Dataset are used to
evaluate the proposed preprocessing technique. Effective web user analysis and
clustering are then performed using the modified FPCM, and the results are
evaluated using hit rate and execution time. |
Keywords: |
Preprocessing, Data Cleaning, Modified Fuzzy Possibilistic C Means, Fuzzy C
Means, Hit Rate, Execution Time. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
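The preprocessing stage above removes web-robot entries before clustering. A minimal sketch of two common robot-removal heuristics (requests for robots.txt and bot-like user agents); the log format and patterns here are illustrative, not the paper's exact rules:

```python
import re

BOT_AGENTS = re.compile(r"bot|crawler|spider|slurp", re.IGNORECASE)

def clean_log(entries):
    """entries: list of dicts with 'ip', 'url', 'agent'. Returns human entries only."""
    # Heuristic 1: any IP that ever requested robots.txt is treated as a robot.
    robot_ips = {e["ip"] for e in entries if e["url"].endswith("robots.txt")}
    # Heuristic 2: drop entries whose user-agent string looks like a crawler.
    return [e for e in entries
            if e["ip"] not in robot_ips and not BOT_AGENTS.search(e["agent"])]

log = [
    {"ip": "1.2.3.4", "url": "/robots.txt", "agent": "Googlebot/2.1"},
    {"ip": "1.2.3.4", "url": "/page1", "agent": "Googlebot/2.1"},
    {"ip": "5.6.7.8", "url": "/page1", "agent": "Mozilla/5.0"},
]
print(clean_log(log))  # only the human 5.6.7.8 entry survives
```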
|
Title: |
FUTURE LOAD AWARE SERVICE BROKER POLICY FOR INFRASTRUCTURE REQUESTS IN CLOUD |
Author: |
A.RADHAKRISHNAN, V.KAVITHA |
Abstract: |
Cloud computing is a new computing paradigm that offers resources over the
internet for solving complex problems quickly and at lower cost, on a
pay-per-usage basis. This is a great boon to software companies, reducing their
infrastructure setup and maintenance costs. Infrastructure as a Service (IaaS)
is one of the fundamental service models of a cloud provider, offering an
entire computing environment to customers as a Virtual Machine (VM). The VM is
created on a physical server in a provider datacenter, and providers deploy
multiple datacenters in geographically different locations to cater to the
needs of IaaS customers. Customers are much concerned with reducing their
computation time and VM rental cost. Cloud service brokers play a vital role in
this regard; one of their responsibilities is to direct each user request to an
appropriate datacenter, and datacenter selection is a challenging task for the
broker. Our proposed approach aims to enrich the intelligence of the service
broker during datacenter selection: our novel algorithm makes the broker aware
of the future load of every datacenter during request forwarding, enabling it
to route each IaaS request to the right destination. The performance of the
methodology is tested in the CloudAnalyst tool, and the results show that it
reduces the task time of customer applications compared with existing broker
policies. |
Keywords: |
Virtual Machine (VM), Infrastructure as a Service (IaaS), Datacenters
Classification Algorithm (DCA), Datacenter Load Aware Service broker Algorithm (DLASA),
Neural Networks (NN). |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
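The core of the proposal is a broker that forwards each IaaS request to the datacenter with the lowest predicted future load. A toy sketch of that routing decision follows; the exponential-smoothing predictor is a stand-in of mine, whereas the paper's DLASA algorithm uses a neural-network-based prediction:

```python
def predict_future_load(history):
    """Stand-in predictor: simple exponential smoothing over recent load samples."""
    alpha, level = 0.5, history[0]
    for sample in history[1:]:
        level = alpha * sample + (1 - alpha) * level
    return level

def select_datacenter(datacenters):
    """datacenters: dict name -> list of recent load samples (0..1)."""
    return min(datacenters, key=lambda dc: predict_future_load(datacenters[dc]))

loads = {
    "DC-Asia":   [0.62, 0.70, 0.78],   # trending up
    "DC-Europe": [0.55, 0.50, 0.40],   # trending down
}
print(select_datacenter(loads))  # -> DC-Europe
```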
|
Title: |
PERFORMANCE ANALYSIS OF ADAPTIVE MODULATION FOR HIGH MOBILITY FOR LTE |
Author: |
LENIN, Dr. S. MALARKKAN |
Abstract: |
Next-generation mobile networks must meet the demand for high data rates and
support real-time communication, and LTE is well placed to fulfill these
demands. It delivers beneficial features such as high-mobility transmission and
bandwidth scalability using both TDD and FDD duplexing, over a time-varying and
frequency-selective wireless channel within one OFDM sub-carrier. For accurate
estimation of the wireless channel, some OFDM subcarriers are used as reference
signals, while the others either transmit data symbols or are left unused.
Weighted time-domain interpolation, computed from the channel using
Doppler-spread information, together with a Parallel Interference Cancellation
scheme combined with Decision Statistical Combining (PIC-DSC), is used to
reduce inter-carrier interference (ICI) and improve data-symbol detection.
Adaptive modulation is used in addition to adapt to the channel conditions,
maximizing spectral efficiency and achieving higher throughput; instead of
retransmitting, the transmission rate is cut down when the channel condition is
poor. Quality of Service (QoS) in the time-varying wireless channel is thus
maintained. |
Keywords: |
Adaptive Modulation, ICI, MIMO-OFDM, PIC, DSC |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
JOSE MEASURE BASED HIGH DIMENSIONAL DATA CLUSTERING FOR REAL WORLD CONDITIONS |
Author: |
M.SUGUNA, Dr.S.PALANIAMMAL |
Abstract: |
Modern real-world data has many dimensions with varying attributes. Different
organizations maintain customer or product information in different forms,
which makes clustering difficult. Each data point differs in size and
properties, yet the points must be clustered in a meaningful and efficient way
to extract knowledge from them. Many strategies have been proposed for
clustering high-dimensional data, but they suffer from overlapping clusters and
poor retrieval efficiency. We propose a new measure called JOSE (Joint Optimum
Similarity Eccentric), which represents the distance between points across
multiple forms of data; it captures the relationship between two data points at
their junction points with eccentric data points, and shows the closeness of a
particular data point to the set of data points in a cluster. The proposed
approach computes the distance between a data point, using its available
attributes, and the remaining data points in the cluster. It produces highly
efficient clustering with negligible overlap. We evaluated the approach on the
Enron dataset, and the results show that the proposed method produces more
efficient results than other methods. |
Keywords: |
High dimensional data, Clustering, JOSE Measure. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
SEMANTIC ANALYSIS OF SOFTWARE SPECIFICATIONS WITH LINKED DATA |
Author: |
MARTIN DOSTAL, MICHAL NYKL, KAREL JEŽEK |
Abstract: |
The software development life cycle is the process involved in the design,
development and improvement of a software application. Nowadays component
systems are especially popular because they make it possible to implement
reusable independent modules. An individual module, known as a software
component, can be implemented as a software package, a web service or a web
resource that encapsulates a set of related functions. Software products are
derived in a configuration process by composing different components; moreover,
a software product line enables stakeholders to derive different software
products based on their needs. This, together with the need to validate the
software product and its components, requires methods for processing software
specifications and matching them against concrete software properties. In this
article we propose an approach to the semantic analysis of software
specifications with Linked Data. |
Keywords: |
Software Specifications, Linked Data, Semantic Analysis |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
A COMPARATIVE ANALYSIS OF AN OCDMA SYSTEM BASED ON SINGLE-PHOTO-DIODE (SPD) AND
SPECTRAL DIRECT DETECTION (SDD) SCHEMES |
Author: |
SARAH G. ABDULQADER, HILAL A. FADHIL, S. A. ALJUNID |
Abstract: |
In this paper, a comparative study is carried out to evaluate the performance
of spectral-amplitude-coding optical code-division multiple-access (SAC-OCDMA)
systems based on single-photo-diode (SPD) detection and spectral direct
detection (SDD) schemes. In our work we utilized the Modified Double Weight
(MDW) code, one of the SAC codes, to evaluate both detection techniques. The
results, characterizing a bit-error rate (BER) of 10^-12 with respect to the
data rate, show that SPD offers significantly improved performance over the SDD
technique for long-haul applications. Furthermore, SPD detection has a higher
signal-to-noise ratio (SNR) than the SDD technique when utilizing different
types of optical filters. Finally, for the same quality of service (QoS), at a
BER of 10^-12 and a data rate of 622 Mbps, SPD allows a longer transmission
distance of 30 km compared with the SDD technique. |
Keywords: |
OCDMA, PIIN, MAI, SDD, SPD. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
PRIMITIVE STRUCTURAL METHOD FOR HIGH CAPACITY TEXT STEGANOGRAPHY |
Author: |
NUUR ALIFAH ROSLAN, RAMLAN MAHMOD, NUR IZURA UDZIR, ZURIATI AHMAD ZURKARNAIN |
Abstract: |
High capacity for hiding secret information is typically the main concern in
text steganography, alongside the robustness and perceptual invisibility
expected of a good steganography algorithm. The major issue in text
steganography is the difficulty of finding enough redundant information to hide
the secret bits (1s and 0s) without perceptible changes in appearance. We
propose a Primitive Structural algorithm for Arabic text steganography to
address this issue. The algorithm hides the secret bits in the primitive
structure (i.e., sharp edges, dots, typographical proportions) of the Arabic
characters. The new algorithm therefore offers a high data-embedding capacity,
since each character has more than one potential place to hide secret
information. The main processes involved are preparing the secret message in
binary form, identifying the primitive structure of each character in the cover
text, and finally hiding the bits. Experiments show that the data-embedding
capacity increases by up to 4% in our first experiment and by up to 21% in the
second, compared with our previous method, thus resolving the capacity issue. |
Keywords: |
Information Hiding, Data Embedding, Text Steganography, Arabic Text
Steganography, High Capacity |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
PERFORMANCE ANALYSIS OF MULTI-CARRIER AGGREGATION WITH ADAPTIVE MODULATION AND
CODING SCHEME IN LTE-ADVANCED SYSTEM |
Author: |
IBRAHEEM SHAYEA, MAHAMOD ISMAIL, ROSDIADEE NORDIN, HAFIZAL MOHAMAD |
Abstract: |
Although the LTE-Advanced system targets enhanced user throughput and increased
connection reliability, user mobility tends to degrade the efficiency of
services and applications in terms of throughput degradation and outage
probability, particularly during handover from the source to the target eNB.
Therefore, a Carrier Aggregation (CA) technique based on different numbers of
Component Carriers (CCs) utilizing an Adaptive Modulation and Coding (AMC)
scheme is implemented in this paper to further enhance system performance. A
simple mathematical formulation for evaluating user throughput and outage
probability is derived and then used in simulations with random mobility. The
results show that integrating CA and AMC (CA-AMC) on the downlink significantly
outperforms, in terms of throughput and outage probability, systems that employ
CA with common Modulation and Coding Schemes (MCSs) as well as non-CA systems
with fixed MCSs or AMC alone. The total throughput gains achieved by CA-AMC are
50% to 80% over non-CA with AMC when 2 to 5 CCs are used, respectively.
Similarly, the total user outage probability improves as the number of CCs
increases with AMC integration. |
Keywords: |
Carrier Aggregation, AMC, Throughput, Outage Probability, LTE-Advanced. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
PARAMETER MAPPING OF FEATURE SELECTION VIA TAGUCHI METHOD ON EMAIL FILTERING |
Author: |
NOORMADINAH ALLIAS, MEGAT NORULAZMI MEGAT MOHAMED NOOR, MOHD NAZRI ISMAIL |
Abstract: |
In spam filtering, the use of machine learning as a filtering method is prone
to high dimensionality of the feature space. Many feature selection methods
have been introduced to overcome this problem; even so, the number of features
fed to the machine learning classifier remains high, delaying the delivery of
incoming emails to the user's inbox. Therefore, a two-stage feature selection
using the Taguchi method is introduced to reduce the high dimensionality of the
features while still obtaining good results. First, we use the Gini index to
reduce dimensionality and select the best subset of features, while the Taguchi
method assists the Gini index and PSO-SVM in selecting the best combination of
parameter settings. We also investigate the impact of population size on
different classifiers, as it strongly affects classifier performance. The
method is trained and tested on the Ling-Spam email dataset. The experimental
results show that hybrid Gini PSO-SVM feature selection with the Taguchi method
produces good classification results even when the population size is less
than 10. |
Keywords: |
Spam Email, High Dimensionality, Feature Selection, Orthogonal Array, and
Population Size. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
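The first stage above ranks features by Gini index. A small sketch of Gini-index scoring of binary term features for spam/ham documents follows; this is one common formulation of the measure, and the details may differ from the paper's:

```python
from collections import Counter

def gini_index(docs, labels, term):
    """GI(term) = sum over classes of P(class | term)^2 (one common formulation)."""
    present = [lab for doc, lab in zip(docs, labels) if term in doc]
    if not present:
        return 0.0
    counts = Counter(present)
    return sum((n / len(present)) ** 2 for n in counts.values())

docs = [{"free", "winner"}, {"free", "offer"}, {"meeting", "free"}, {"meeting", "report"}]
labels = ["spam", "spam", "ham", "ham"]
for term in ("free", "meeting"):
    print(term, gini_index(docs, labels, term))
# 'meeting' (1.0) is purer than 'free' (~0.56), so it ranks higher for selection.
```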
|
Title: |
DESIGN OF PRIVACY MODEL FOR STORING FILES ON CLOUD STORAGE |
Author: |
AMBIKA VISHAL PAWAR, DR. AJAY A. DANI |
Abstract: |
Cloud storage services like Google Drive and Dropbox are popular for personal
and institutional file storage. Cloud storage is beneficial in terms of
scalability, availability and economy, but it poses privacy challenges that
make many users reluctant to store personal data on it. Personal or sensitive
files must be stored carefully, since cloud storage is third-party storage and
carries additional risks, for example that files may be shifted from one cloud
server to another. Cloud storage therefore needs a different solution from
traditional third-party storage. Data can be encrypted for security, but this
raises further issues such as key management and key distribution, and
encryption imposes additional computation cost and time on the client. This
paper introduces the design of a novel privacy model for storing files in cloud
storage and proposes an architecture and algorithms for it. The paper also
discusses the proof of privacy and the proposed system's performance evaluation
parameters for future work. |
Keywords: |
Cloud Storage, Files, Privacy, Multicloud, Splitting, Information Dispersal
Algorithm, Security, Randomization |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
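The keywords point to splitting files across multiple clouds via an information dispersal algorithm, so that no single provider holds readable content. A toy two-share XOR dispersal sketch of the splitting idea; this is an illustration, not the paper's actual algorithm:

```python
import os

def split_two_shares(data: bytes):
    """XOR secret sharing: neither share alone reveals anything about the file."""
    pad = os.urandom(len(data))                        # random share for cloud A
    share_b = bytes(p ^ d for p, d in zip(pad, data))  # masked share for cloud B
    return pad, share_b

def rebuild(share_a: bytes, share_b: bytes) -> bytes:
    # XOR the shares back together to recover the original bytes.
    return bytes(a ^ b for a, b in zip(share_a, share_b))

a, b = split_two_shares(b"sensitive file contents")
assert rebuild(a, b) == b"sensitive file contents"
```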
|
Title: |
CLASSIFICATION MODELS BASED FORWARD SELECTION FOR BUSINESS PERFORMANCE
PREDICTION |
Author: |
EDI NOERSASONGKO, PURWANTO, GURUH FAJAR SHIDIK |
Abstract: |
This paper proposes a classification model to improve the accuracy of business
performance prediction. The proposed model combines the forward selection
method, to select the optimal attributes, with classification models. A
business performance dataset is used to evaluate the accuracy of the proposed
model. The experimental results show that the combination of forward selection
and a Naïve Bayes model significantly improves the prediction accuracy of
business performance compared with the other classification models, namely
Logistic Regression, k-NN, Naïve Bayes, C4.5 and Support Vector Machine. The
proposed model also yields better results than attribute selection using the
backward elimination method. |
Keywords: |
Forward Selection, Naïve Bayes, Entrepreneur, Business Performance,
Classification Models |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
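The winning combination above pairs forward attribute selection with Naïve Bayes. A compact sketch of that pipeline with scikit-learn; the breast-cancer dataset is a public stand-in, since the business performance data is not available here:

```python
from sklearn.datasets import load_breast_cancer   # stand-in for the business dataset
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
nb = GaussianNB()

# Forward selection: greedily add the attribute that most improves CV accuracy.
selector = SequentialFeatureSelector(nb, n_features_to_select=5, direction="forward")
X_sel = selector.fit_transform(X, y)

print("all features :", cross_val_score(nb, X, y, cv=5).mean())
print("selected 5   :", cross_val_score(nb, X_sel, y, cv=5).mean())
```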
|
Title: |
AREA OPTIMIZED BALANCED MULTIWAVELETS FOR WIRELESS COMMUNICATION |
Author: |
G.ASWINI, K.ANITHA, Dr.Dharmishtan.K.Varughese |
Abstract: |
New technologies are emerging to take on the challenges of wireless
communication, and the requirements of next-generation technologies have led to
unparalleled demand for high-speed architectures for complex signal-processing
applications. In this paper we propose a modified DMWT architecture based on
the CardBal filter. The DMWT coefficients that are fractional are converted to
integers and modified to reduce the number of multiplications and additions.
The reduced CardBal filter coefficients are used to process the data, reducing
the computational complexity and making the design suitable for FPGA
implementation. The design operates at a maximum frequency of 300 MHz and
consumes less than 1% of resources, making it suitable for real-time
applications while optimizing area, speed and power. The model's functionality
is tested using HDL code, and it is synthesized using Xilinx ISE targeting an
FPGA. |
Keywords: |
Cardinal Multiwavelets, CardBal Filter, Balanced Multiwavelets, DMWT, FPGA |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
A PREDICTIVE USERS PREFERENCE IDENTIFICATION TECHNIQUE TO IMPROVE WEB USERS
QUERY RELATIONAL PROCESSING |
Author: |
D. MADHUSUBRAM, DR. S.P. SHANTHARAJAH |
Abstract: |
One of the major challenges in the web is accessing relevant information
according to the needs of the user. With each user having different information
needs in relation to his/her query, the search results should be personalized
according to the information needs of the users. Personalized ontology model for
knowledge representation and reasoning over user profiles fails to match the
local instance repository queries (i.e.,) users with global information base
(i.e.,) web database. Although the application of ontology model has been
underway for many years and many algorithms related to ontology have been
developed, it is not applicable to the majority of the existing web documents.
Query planning for Weighted Additive Aggregation Queries (WAAQ) obtained optimal
set of sub queries with incoherency bounds and least number of refresh messages
were sent from aggregators to the client. But WAAQ is not effective in
developing the cost model for complex queries. To overcome the issues related to
complex queries, Predictive User Preference Identification (PUPI) technique is
developed based on the relational users and stored queries. PUPI technique
searches the results according to each user’s need based on their relevant
information with little effort from the side of the user, followed by it the
effectiveness with complex queries are verified. PUPI then extends to Condition
Redefine Query which redefines the queries according to the user relational
profile and stored query context. In order to learn user’s preference, PUPI make
use of both semantic and lexical information. PUPI take into account the web
queries for the current user task and engender a new query language model for
redefining based on the user query model and the considering the user profile.
PUPI technique performed experiment on Weka tool using the MSNBC.com Anonymous
Web Data. By analyzing the results, PUPI technique is better than using the
state-of-art methods. Experiments were conducted on the factors such as
execution time, micro averaged accuracy, query user relationship ratio, query
frequency similarity and true positive rate to reveal the efficiency of the
technique, query frequency similarity and true positive rate. |
Keywords: |
Ontology Model, Predictive User Preference Identification, Semantic Information,
Condition Redefine Query, User Relational Profile, Stored Complex Queries |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
EVALUATION OF EMPLOYEES AWARENESS AND USAGE OF INFORMATION SECURITY POLICY IN
ORGANIZATIONS OF DEVELOPING COUNTRIES: A STUDY OF FEDERAL INLAND REVENUE
SERVICE, NIGERIA |
Author: |
WADZANI A. GADZAMA, JATAU ISAAC KATUKA, YUSUF GAMBO, ALIYU M. ABALI, MUHAMMED
JODA USMAN |
Abstract: |
An information security policy has become an integral part of today's
organizational operations. It is a set of rules that guides computer resource
users in ensuring the confidentiality, integrity and availability of an
organization's information resources. The Federal Inland Revenue Service of
Nigeria is a government organization responsible for collecting revenue for the
government. Due to its nature, it is concerned with how to protect and manage
its information resources, which can only be achieved through the
implementation of an effective information security policy; such a policy helps
ensure that the confidentiality, integrity and availability of its information
assets are protected. The purpose of this study is to find out the precise
nature of the existing information security policy and to assess the level of
IT staff awareness and usage of the policy in organizations of developing
countries, using the Federal Inland Revenue Service, Nigeria as the study area.
Questionnaires consisting of both open-ended and closed-ended questions were
distributed, and the data collected were analyzed and presented. The study
reveals that IT staff are aware of the existence and usage of the information
security policy and its importance to the success of the organization. It also
reveals that a lack of adequate management support has affected the use of the
policy. The study recommends management support in the form of staff training
and awareness programs, continuous monitoring, enforcement and periodic review
of the policy to ensure the effectiveness of the organization's information
security policy. |
Keywords: |
Federal Inland Revenue Service, Information Security Policy, Models,
Information and Communication Technology. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
A NOVEL CLASSIFICATION SCHEME FOR NON-INVASIVE HEMOGLOBIN MEASUREMENT METHODS |
Author: |
KUMAR R, DR. RANGANATHAN H |
Abstract: |
Hemoglobin (Hb) is a vital constituent of red blood cells (RBCs), and an
individual's physiological status can be assessed by measuring the Hb
concentration in the blood. The limitations associated with invasive methods
are discomfort during sample collection, long analysis times and the inability
to provide real-time monitoring. These limitations have been addressed by a
gamut of non-invasive schemes, but the decision scheme for the different blood
samples associated with non-invasive methods exhibits intrinsic intricacies.
This paper proposes a classification scheme for diverse blood samples based on
criterion-based identification and association with an existing dataset. The
scheme was implemented and tested on three samples under different testing
conditions. Accuracy was identified as the performance metric, and the results
show that the proposed scheme's accuracy ranges from 99% to 72% over a
Signal-to-Noise Ratio (SNR) range of 0 dB to 10 dB. |
Keywords: |
Hemoglobin (Hb), Red Blood Cell (RBC), Invasive, Non-Invasive, Signal to Noise
Ratio (SNR) |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
IMPROVED GENETIC ALGORITHM FOR GROUP-BASED JOB SCHEDULING IN MOBILE GRIDS |
Author: |
G.SARAVANAN, Dr.V.GOPALAKRISHANAN |
Abstract: |
In mobile grids, the existing job scheduling scheme causes increased job
processing time, and the increased processing time and overhead may result in
system degradation. Mobility and resource availability parameters are also not
considered during job scheduling. Hence, in this paper we propose an improved
genetic algorithm for group-based job scheduling in mobile grids. In this
technique, jobs are grouped according to resource availability and then
scheduled based on parameters such as mobility, resource availability and job
completion time using an enhanced genetic algorithm. Simulation results show
that the proposed technique minimizes job completion time, thus enhancing
system performance and minimizing overhead time. |
Keywords: |
Genetic Algorithm, Job Scheduling, Mobile Grids, Simulation, Overhead Time |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
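The scheduler evolves job-to-node assignments with a genetic algorithm and scores candidates by completion time. A bare-bones GA sketch over a makespan fitness; in the paper's fuller model, mobility and resource availability would also enter the fitness, which this toy omits:

```python
import random

JOB_TIMES = [4, 2, 7, 3, 5, 1]   # illustrative job lengths
NODES = 3

def makespan(assign):
    """Completion time = load of the busiest node under this assignment."""
    loads = [0] * NODES
    for job, node in enumerate(assign):
        loads[node] += JOB_TIMES[job]
    return max(loads)

def evolve(pop_size=30, generations=100, mutation=0.2):
    pop = [[random.randrange(NODES) for _ in JOB_TIMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]            # selection: keep best half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(JOB_TIMES))
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < mutation:          # mutation: reassign one job
                child[random.randrange(len(child))] = random.randrange(NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, "makespan =", makespan(best))
```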
|
Title: |
WIRELESS IMPLEMENTATION SELECTION IN HIGHER INSTITUTION LEARNING ENVIRONMENT |
Author: |
IHAB AHMED NAJM, MAHAMOD ISMAIL, ABD AL-RAZAK TAREQ RAHEM |
Abstract: |
The limited coverage of WiFi and its use of license-free frequencies create
problems, weaken security and degrade quality of service; therefore, a
complementary wireless technology, WiMAX, is required. WiMAX and WiFi are
chosen because both technologies are the most popular wireless network
protocols in use in Iraq. Simulations of both network environments are used to
imitate the real situation at Tikrit University, and this study provides a
comprehensive field survey of wireless networking there. The study identifies a
suitable wireless protocol, options for expanding coverage and the resulting
network performance. The main metrics evaluated are packet delivery ratio and
throughput. The two WiFi scenarios achieved packet delivery ratios of 97.2% and
96.012% respectively, while the WiMAX scenario scored a packet delivery ratio
of 98.0%. The throughput results were also interesting: throughput increased
with packet size, and WiMAX throughput was found to increase almost linearly
with it. The maximum throughput achieved by WiMAX was 22.12 Mbps, while WiFi
obtained throughputs of 22.46 Kbps and 11.61 Kbps in the different scenarios. |
Keywords: |
WiFi, WiMAX, Interference, Expand Coverage. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
A SURVEY ON NEURAL NETWORK MODELS FOR HEART DISEASE PREDICTION |
Author: |
SELVAKUMAR.P, DR.RAJAGOPALAN.S.P |
Abstract: |
Healthcare organizations (hospitals, medical centers) should provide quality
services at affordable cost. Quality of service implies diagnosing patients
accurately and suggesting effective treatments. To achieve correct and
cost-effective treatment, computer-based information and/or decision support
systems can be developed to fulfill this task. The resulting information
systems typically contain large amounts of data, and healthcare organizations
must be able to analyze them; healthcare data includes resource-management,
patient-centric and transformed data. Data mining techniques are used to
explore, analyze and extract these data using complex algorithms in order to
discover unknown patterns. Many data mining techniques have been used in the
diagnosis of heart disease with good accuracy, and neural networks in
particular have shown great potential for building prediction systems for
various types of heart disease. This paper investigates the benefits and
overhead of various neural network models for heart disease prediction. |
Keywords: |
Artificial Neural Network, Data Mining, Heart Diseases, Knowledge Discovery,
Heart Disease Prediction System |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
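As a concrete reference point for the models this survey compares, a minimal feed-forward network for heart-disease classification might look as follows in scikit-learn; the features and data below are synthetic stand-ins, not a clinical dataset:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in features: age, resting BP, cholesterol, max heart rate.
rng = np.random.default_rng(0)
X = rng.normal([54, 130, 240, 150], [9, 15, 40, 20], size=(200, 4))
y = (X[:, 1] + X[:, 2] / 4 > 195).astype(int)  # toy rule standing in for diagnosis

# Scale inputs, then train a one-hidden-layer perceptron.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```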
|
Title: |
EXPERIMENTAL EVALUATION OF FUZZY- BASED FUNCTION POINT ANALYSIS FOR SOFTWARE
EFFORT ESTIMATION |
Author: |
M. SENTHIL KUMAR, Dr.B. CHIDAMBARA RAJAN |
Abstract: |
Accurate effort estimation is a significant task in software development,
helpful in the scheduling and tracking of a project. A number of estimation
models are available for effort calculation; however, newer models are still
being proposed to obtain more accurate estimates. This paper proposes a hybrid
technique that incorporates both quality factors and a fuzzy-based technique
into Function Point Analysis. Fuzzy logic is capable of tackling the
uncertainty issues in estimation. The goal of this paper is to evaluate the
accuracy of fuzzy analysis for software effort estimation. In this approach,
fuzzy logic is used to handle the uncertainty in software size with the help of
a triangular fuzzy set, with de-fuzzification through the weighted average
method. Experiments are run with different project data on the proposed model,
and the results are tabulated. The measured effort of the proposed model is
compared with that of the existing model, and finally the performance
evaluation is carried out in terms of MMRE and VAF. |
Keywords: |
Effort Estimation, Function Point, Fuzzy-based Function Point, Triangular Fuzzy
Set, Accuracy |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
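The paper represents uncertain size as a triangular fuzzy number and de-fuzzifies it with a weighted average. A small numeric sketch of that step; the (1, 4, 1)/6 weighting is one common choice, and the paper's exact weights and productivity factor may differ:

```python
def defuzzify_triangular(low: float, mid: float, high: float) -> float:
    """Weighted-average defuzzification of a triangular fuzzy number (l, m, u).
    The centre value is commonly given the largest weight."""
    return (low + 4 * mid + high) / 6

# Fuzzy function-point count: at least 110, most likely 120, at most 140.
size = defuzzify_triangular(110, 120, 140)
effort = size * 1.4   # illustrative productivity factor (person-hours per FP)
print(f"crisp size = {size:.1f} FP, estimated effort = {effort:.0f} person-hours")
```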
|
Title: |
THE RELATIONSHIPS BETWEEN WIRELESS NETWORK COMMUNICATION, IT FLEXIBILITY, IT
ARCHITECTURE, AND STRATEGIC ALIGNMENT: AN EMPIRICAL STUDY |
Author: |
A.HAMEED, ALI OUDAH |
Abstract: |
Strategic alignment and wireless communication networks are two important areas
that have received considerable attention over the last two decades; for this
reason, we evaluate the relationship between these two important concepts. This
study discusses and empirically tests the most important factors that affect
strategic alignment and wireless network communication. The objective of this
paper is to analyze and provide a model for the effect of wireless network
connectivity on strategic alignment in the case of Iraqi firms. For this
purpose, we consider the relationship between information technology
flexibility and strategic alignment, and we include three important dimensions
of flexibility in our research. Connectivity here means the voice and data
communication infrastructure of networks. We empirically tested our model using
data collected from executives of 55 Iraqi firms. The results show that
strategic alignment is not positively correlated with wireless network
communication; rather, successful communication in a wireless network indicates
good connectivity. |
Keywords: |
Strategic Alignment, Wireless Communication Network, Connectivity, IT, IT
Architecture |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
IDENTITY BASED ATTACK DETECTION AND MANIFOLD ADVERSARIES LOCALIZATION IN
WIRELESS NETWORKS |
Author: |
UDHAYA SANKAR S.M., V.VIJAYA CHAMUNDEESWARI, JEEVAA KATIRAVAN |
Abstract: |
Wireless spoofing attacks are easy to launch due to the openness of wireless
networks. Although the identity of a node can be verified through cryptographic
authentication, conventional security approaches are not always possible
because of their overhead requirements. A source sends data to a destination,
and the data is forwarded through the intermediate nodes between source and
destination with respect to Received Signal Strength (RSS). In this paper we
detect identity-based attacks and multiple adversaries, and also localize the
adversaries, detecting the positions of multiple adversaries even when they
vary their transmission power levels. We propose to use the spatial correlation
of the received signal strength (RSS) inherited from wireless nodes to detect
the attacks. We also use a primary key and encrypt the data packets during
transmission for security, so that intermediate nodes cannot view the data
contained in the packets. Experiments use an 802.11 (Wi-Fi) network, and we use
network simulation to analyze the performance of the approach. |
Keywords: |
Wireless Networks, Identity Attacks, Primary Keys, Attack Prevention |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
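The detection idea is that RSS readings of one claimed identity cluster around a single spatial point for a legitimate transmitter, while a spoofer at another location produces a second, well-separated cluster. A toy sketch of that test with k-means over RSS vectors; the separation threshold here is illustrative, not the paper's calibrated value:

```python
import numpy as np
from sklearn.cluster import KMeans

def spoofing_detected(rss_vectors: np.ndarray, threshold: float = 8.0) -> bool:
    """rss_vectors: readings (dBm) of one claimed identity at several monitors.
    Two well-separated k-means centres suggest two physical transmitters."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(rss_vectors)
    c1, c2 = km.cluster_centers_
    return float(np.linalg.norm(c1 - c2)) > threshold

# Readings at 3 monitors: first rows from the real node, last rows from a spoofer.
rss = np.array([[-40, -62, -55], [-41, -61, -56],
                [-70, -45, -58], [-71, -44, -59]])
print(spoofing_detected(rss))  # -> True, the two centres are far apart
```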
|
Title: |
FLOW BASED MULTI FEATURE INFERENCE MODEL FOR DETECTION OF DDOS ATTACKS IN
NETWORK IMMUNE SYSTEM |
Author: |
S.VASANTHI, S.CHANDRASEKAR |
Abstract: |
Network immune systems have been developed in many ways but differ with the
feature set used and suffer with identifying network threats in efficient
manner. We propose a multi feature inference model which uses various parameters
of network flow. Unlike earlier approaches, the proposed method infers valuable
knowledge from the packet flow and packet details to detect DDOS attacks. The
proposed method uses, hop count, hop details, payload, Time to live with time
variant information’s. The network packets are monitored about their traversal,
through which they forwarded towards destination. We consider the botnet
attacks, which is supported by dedicated nodes distributed throughout
intermediate network. Whenever a new packet received at the network various
features are extracted and we compute the probability of genuine value according
to the features. The proposed immune system maintains packet trace for each of
the packet received at various time domains. At each time window, for each
distinct traversal path an probability value is computed using the features
extracted from traffic trace. The inferred results are applied do denial the
service for the malicious nodes. The result will be inferred using computed
probability value to allow or deny the packet into the network. |
Keywords: |
Intrusion Detection System, Network Immune System, Botnet, Flow Based Inference
Model, Denial of Service Attacks. |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
|
Title: |
MODIFIED MFCC METHODS BASED ON KL- TRANSFORM AND POWER LAW FOR ROBUST SPEECH
RECOGNITION |
Author: |
JOHN SAHAYA RANI ALEX, NITHYA VENKATESAN |
Abstract: |
This paper presents robust feature extraction techniques, called Mel Power
Karhunen-Loève Transform Coefficients (MPKC) and Mel Power Coefficients (MPC),
for isolated digit recognition. This hybrid method combines Stevens' power law
of hearing and the Karhunen-Loève (KL) transform to improve noise robustness.
We evaluated the proposed methods on a Hidden Markov Model (HMM) based isolated
digit recognition system with TIDIGITS data, for both clean and noisy speech.
An increase in recognition accuracy is observed with the proposed methods
compared with the conventional Mel Frequency Cepstral Coefficients (MFCC)
technique. |
Keywords: |
Feature extraction; Stevens’ power law; MPKC; KL Transform; CMN; MFCC; HMM |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
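The proposed features replace the usual log compression in the MFCC pipeline with a Stevens-style power-law nonlinearity and the fixed DCT basis with a data-driven KL transform. A condensed sketch of those two substitutions; the filterbank parameters and the 1/15 exponent are common choices rather than the paper's exact settings, and librosa is assumed for the mel spectrogram:

```python
import numpy as np
import librosa

def power_law_kl_features(wav: np.ndarray, sr: int, n_mels: int = 26, n_keep: int = 13):
    # Mel filterbank energies per frame (shape: n_mels x frames).
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=n_mels)
    compressed = np.power(mel, 1.0 / 15.0)    # power-law nonlinearity instead of log
    # KL transform: project frames onto eigenvectors of their covariance matrix,
    # estimated from this utterance, instead of the fixed DCT basis of MFCC.
    frames = compressed.T                     # frames x n_mels
    cov = np.cov(frames, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, ::-1][:, :n_keep]      # top components by eigenvalue
    return (frames - frames.mean(axis=0)) @ basis

# Synthetic one-second test tone standing in for a spoken digit.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
wav = np.sin(2 * np.pi * 440 * t).astype(np.float32)
print(power_law_kl_features(wav, sr).shape)   # (frames, 13)
```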
|
Title: |
NEW TECHNIQUE FOR SIZING OPTIMIZATION OF A STAND-ALONE PHOTOVOLTAIC SYSTEM |
Author: |
NUR IZZATI ABDUL AZIZ, SHAHRIL IRWAN SULAIMAN, SULAIMAN SHAARI, ISMAIL MUSIRIN |
Abstract: |
This paper presents a method for sizing optimization of Stand-Alone
Photovoltaic (SAPV) systems. Evolutionary Programming (EP) is integrated into
the sizing process to maximize the technical performance of the system: it
determines the optimal PV module, charge controller, inverter and battery such
that the expected Performance Ratio (PR) of the SAPV system is maximized. Two
EP models, the Classical Evolutionary Programming (CEP) and Fast Evolutionary
Programming (FEP), were tested to determine the best EP model for the EP-based
sizing algorithm. In addition, an iterative sizing algorithm was developed to
determine the optimal solution for benchmarking purposes. The results show that
CEP outperformed FEP by producing a higher PR despite having an almost
identical computation time. The sizing algorithm using either EP model was also
found to be much faster than the iterative sizing algorithm, thus justifying
the need for incorporating EP in the sizing algorithm. |
Keywords: |
Photovoltaic (PV), Stand-Alone Photovoltaic (SAPV), Evolutionary Programming
(EP), Sizing, Optimization |
Source: |
Journal of Theoretical and Applied Information Technology
20 September 2014 -- Vol. 67. No. 2 -- 2014 |
Full
Text |
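Evolutionary Programming differs from a genetic algorithm in relying on Gaussian mutation alone, with no crossover. A toy CEP-style loop maximizing a stand-in performance-ratio function of two sizing variables; the real PR model involves component specifications and irradiance data, which this sketch replaces with a simple surface:

```python
import random

def performance_ratio(n_modules: float, battery_ah: float) -> float:
    """Stand-in PR surface with a single optimum; the real model is far richer."""
    return 1.0 - 0.001 * (n_modules - 20) ** 2 - 0.0005 * (battery_ah - 400) ** 2

def classical_ep(pop_size=20, generations=200, sigma=5.0):
    pop = [(random.uniform(5, 50), random.uniform(100, 800)) for _ in range(pop_size)]
    for _ in range(generations):
        # Each parent produces one child by Gaussian mutation (no crossover in EP).
        children = [(m + random.gauss(0, sigma), b + random.gauss(0, sigma * 10))
                    for m, b in pop]
        # Survivor selection: keep the best pop_size of parents + children.
        pop = sorted(pop + children, key=lambda s: performance_ratio(*s))[-pop_size:]
    return max(pop, key=lambda s: performance_ratio(*s))

best = classical_ep()
print(best, performance_ratio(*best))
```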
|