Submit Paper / Call for Papers
The journal receives papers in a continuous flow and will consider articles
from a wide range of Information Technology disciplines, from the most basic
research to the most innovative technologies. Please submit your papers
electronically through our submission system at http://jatit.org/submit_paper.php
in MS Word, PDF, or a compatible format so that they may be evaluated for
publication in an upcoming issue. This journal uses a blinded review process;
please remember to include all of your personally identifiable information in
the manuscript before submitting it for review; we will remove the necessary
information on our side before review. Submissions to JATIT should be full
research / review papers (properly indicated below the main title).
Journal of Theoretical and Applied Information Technology
July 2014 -- Vol. 65 No. 3 |
Title: |
EFFICIENT SCHEDULING OF WORKFLOW IN CLOUD ENVIRONMENT USING BILLING MODEL AWARE
TASK CLUSTERING |
Author: |
D.A. PRATHIBHA, B. LATHA, G. SUMATHI |
Abstract: |
Cloud computing is a cost-effective alternative for the scientific community
to deploy large-scale workflow applications. For executing large-scale
scientific workflow applications in a distributed heterogeneous environment,
scheduling workflow tasks on dynamic resources is a challenging issue.
Moreover, in utility-based computing like the cloud, which supports a
pay-per-use model for resources, the scheduling algorithm must efficiently
utilize the available time of each resource. Most existing scheduling
heuristics do not consider the dynamic nature of the cloud and hence produce a
static schedule. Public cloud environments like Amazon EC2 offer a catalog of
resources whose price is generally metered per hour, and any fractional usage
is rounded up to the next hour. To meet customers' budget and deadline
constraints, the proposed work incorporates a billing-model-aware task
clustering mechanism into the workflow scheduling process. This work also
presents a resource selection algorithm that can be used to choose a proper
resource at each stage of the workflow. Preliminary results obtained by
running two scientific applications, Montage and CyberShake, with different
resources and task clustering mechanisms are discussed. |
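The hour-rounding billing rule above is easy to state concretely. The sketch
below (a minimal sketch with hypothetical prices and runtimes, not the
authors' implementation) shows why clustering short tasks onto one instance
lowers cost under an EC2-style per-hour billing model.

```python
import math

def billed_cost(runtime_hours: float, price_per_hour: float) -> float:
    """Cost under an EC2-style billing model: any fractional usage
    is rounded up to the next full hour."""
    return math.ceil(runtime_hours) * price_per_hour

# Three 0.4-hour tasks billed separately: 3 * ceil(0.4) = 3 billed hours.
separate = sum(billed_cost(0.4, 0.10) for _ in range(3))

# The same tasks clustered onto one instance: ceil(1.2) = 2 billed hours.
clustered = billed_cost(3 * 0.4, 0.10)

print(separate, clustered)  # clustering saves one billed hour
```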
Keywords: |
Cloud Computing, Workflow, Resource Selection, Deadline, Budget, Task Clustering |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
CHDS: A FAST SEARCH ALGORITHM FOR MOTION ESTIMATION IN VIDEO CODING STANDARDS |
Author: |
R. VANI, M. SANGEETHA, P. DAVIS |
Abstract: |
In this paper, we propose a new hybrid Cross Hexagon Diamond Search (CHDS)
algorithm for fast block motion estimation, using a cross-shaped search
pattern as the initial step and asymmetric hexagon-shaped patterns and a small
diamond pattern as the subsequent steps. In block motion estimation, search
patterns of different shapes and sizes, together with the center-biased motion
vector distribution, have a great impact on search speed and distortion
performance. Previously developed fast search algorithms focus on improving
either the coarse search or the inner search. The proposed CHDS algorithm
reduces the number of search points by exploiting the distortion information
of neighbouring search points. Simulations are done using MATLAB, and our
experimental results indicate that the proposed CHDS algorithm reduces motion
estimation time by an average of 39.78% compared to New Hexagon Search
(NHEXS), 10.86% compared to Hexagon Search (HS), and 38.64% compared to
Diamond Search (DS). |
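As a rough illustration of the block matching step all of these algorithms
share, the sketch below (hypothetical helper names, boundary checks omitted;
not the authors' CHDS code) refines a motion vector by evaluating a
sum-of-absolute-differences (SAD) cost over the small diamond pattern until
the center wins.

```python
import numpy as np

def sad(block, ref, x, y):
    """Sum of absolute differences between the current block and the
    reference-frame window at (x, y). Boundary checks omitted for brevity."""
    h, w = block.shape
    return int(np.abs(block.astype(np.int32)
                      - ref[y:y + h, x:x + w].astype(np.int32)).sum())

def small_diamond_search(block, ref, x0, y0, max_steps=32):
    """Greedy refinement over the small diamond pattern (center + 4
    neighbours), the inner-search step shared by DS, HS and CHDS-style
    algorithms."""
    pattern = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = x0, y0
    for _ in range(max_steps):
        costs = {d: sad(block, ref, x + d[0], y + d[1]) for d in pattern}
        best = min(costs, key=costs.get)
        if best == (0, 0):               # center already the minimum: stop
            break
        x, y = x + best[0], y + best[1]
    return x - x0, y - y0                # motion vector
```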
Keywords: |
Motion Estimation, Block Matching, Motion Vector, Search Algorithm |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
OPTIMIZED PRIORITY ASSIGNMENT SCHEME FOR CONGESTED WIRELESS SENSOR NETWORKS |
Author: |
BEULAH JAYAKUMARI. R, DR. JAWAHARSENTHILKUMAR V |
Abstract: |
Congestion is a likely event in wireless sensor networks due to node density
and traffic convergence. Congestion can decrease network lifetime and reduce
information accuracy. Transferring crucial data during congestion is a
challenging problem in wireless sensor networks. To achieve this, we have
proposed a competent data delivery protocol called the Optimized Priority
Assignment Scheme for congested wireless sensor networks (OPAS). It
dynamically assigns a priority to each datum based on its time-critical
nature, drops highly correlated duplicate data by considering delivery
probability, and, only under congestion, forwards high- and low-priority data
on two mutually exclusive routes. OPAS improves data delivery and decreases
packet drops in a predominantly congested wireless sensor network. |
Keywords: |
Wireless Sensor Network, Priority Assignment Scheme, Redundant Data, Data
Delivery Protocol |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
STUDY OF A REAL TIME AIRCRAFT LANDING SCHEDULE WITH AN ATTEMPT TO OPTIMIZE THE
SAME USING NON-TRADITIONAL ALGORITHMS |
Author: |
C.NITHYANANDAM, Dr.G. MOHANKUMAR |
Abstract: |
In the present decade, airport scheduling operations are essential for
aircraft landing and takeoff, and radar range control systems act as the brain
of aircraft scheduling operations. Arrival runways are a critical resource in
the air traffic system, and arrival delays have a great impact on airline
operations and cost. The radar system communicates with all aircraft within
200 nautical miles (370 km). In this paper, the technique describes the
execution time and penalty cost of each aircraft. Throughout, we discuss how
our formulations can be utilized to model a number of issues commonly
encountered in practice (aircraft selection, precedence restrictions,
restricting the number of landings and takeoffs in a given time period, and
runway workload balancing). Existing techniques do not consider the timing
factor, so the time-based penalty cost is very high. Many techniques are used
to reduce the penalty cost, whenever possible, when landing and takeoff
operations are performed for emergency flights. The experiments show that
runway congestion at landing time is the core problem. To eradicate this
problem, a neural network and genetic algorithms are utilized to remove the
congestion occurring on the runway, and the proposed technique also reduces
the penalty cost to be charged. |
Keywords: |
Artificial Neural Network, Aircraft Selection, Runway Selection, Scheduling,
Genetic Algorithm |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
HANDWRITTEN DIGIT RECOGNITION FOR MANAGING EXAMINATION SCORE IN PAPER-BASED
TEST |
Author: |
APICHAT SURATANEE, NANTAPORN LERTSARI, SETHAWAT KAMPHASEE, KRITSADA SRIKET |
Abstract: |
Calculating the scores of a paper-based test with a large number of students
is a difficult task. Suppose a large number of students, e.g., more than 1200,
take a paper-based examination with more than six questions in the test.
Teachers need to write the score of each question on the examination cover
sheet, calculate the total scores of all students, and fill in the scores on
the student list. These tasks take a lot of time. We therefore develop an
automatic system that reduces some of the procedures of these tasks to save
time. Our system uses handwritten digit recognition, which has been widely
used in several fields. In this study, we employ recognition with an
artificial neural network to automatically identify the student identification
number and the scores of all questions from a scanned image of the cover page.
The summation of the scores is calculated automatically, and both the total
score and the student identification number are exported to Excel format. With
the neural network classification used to recognize the digits, we obtain high
performance, with an overall accuracy of 99.89%. In conclusion, two main
processes are improved by our system: (i) automatic total score calculation
and (ii) exportation of scores to Excel format. This system can successfully
reduce the time needed to evaluate student scores and yields more accurate
score calculation. |
Keywords: |
Examination Cover Page, Handwritten Digit Recognition, OCR, Artificial Neural
Network. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
OPTIMIZATION ON SHORTEST PATH FINDING FOR UNDERGROUND CABLE TRANSMISSION LINES
ROUTING USING GIS |
Author: |
VISWARANI.C.D, VIJAYAKUMAR.D, SUBBARAJ.L, S.UMASHANKAR, KATHIRVELAN.J |
Abstract: |
In cable trench construction, one of the tasks for engineers is to select a
suitable route that minimizes construction cost and obstructions. This paper
discusses the development of a Geographic Information Systems (GIS)-based
customized system to automate the process of finding the optimum shortest path
for routing an underground power supply cable between any two substations. It
combines the spatial analysis capabilities of GIS with the sophistication of
an Artificial Intelligence technique, Simple Ant Colony Optimization, to deal
with the complexity inherent in optimum shortest path finding. The results of
this project are validated and compared with the Network Analyst tool built
into the GIS software ArcGIS, and explained in the Results and Discussion
section of this paper. This project aims only at optimum shortest path
finding, which will be one of the main inputs in our future work on improving
distribution efficiency using GIS. |
Keywords: |
Geographic Information System (GIS), Ant Colony Optimization (ACO), Network
Analyst (ArcGIS), Electrical Distribution System |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
FUZZY CLUSTERING BASED ANT COLONY OPTIMIZATION ALGORITHM FOR MR BRAIN IMAGE
SEGMENTATION |
Author: |
P. HARI KRISHNAN, DR. P. RAMAMOORTHY |
Abstract: |
The conventional methods for segmenting MR brain images with various noises
were less effective. In this paper, we aim at a novel method which
intelligently determines the cluster centers before applying fuzzy c-means
(FCM), thus increasing iteration efficiency and reducing computation time. The
main feature of the proposed method is to use the Ant Colony Optimization
Algorithm (ACOA) to initialize the cluster centers, after which classification
proceeds from these initial values. This helps prevent noisy pixels from being
wrongly placed in any of the classes during the iterative FCM clustering
process; hence, a better segmentation of MRI brain images scanned for tumor
detection is achieved. The methodology has been successfully carried out on
Magnetic Resonance Imaging (MRI) images, and efficient segmentation of brain
tumor images was achieved. |
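For reference, the core FCM iteration the method builds on is compact enough
to sketch. The code below (a minimal sketch, not the authors' implementation)
runs the standard membership and center updates from a given set of initial
centers, which is exactly the slot the ACOA initialization fills.

```python
import numpy as np

def fcm(X, centers, m=2.0, iters=50):
    """Standard fuzzy c-means updates from given initial centers.

    X: (n, d) data (e.g., pixel intensities); centers: (c, d) initial centers
    (in the paper these would come from ACOA rather than random choice)."""
    for _ in range(iters):
        # Distance of every point to every center, shape (n, c).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        # Center update: mean of the data weighted by u^m.
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centers
```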
Keywords: |
Ant Colony Optimization Algorithm (ACOA), Fuzzy C-Means (FCM), Magnetic
Resonance Imaging (MRI), Clustering and Brain Tumor. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
COMPOSITE TERMINAL SLIDING MODE CONTROL FOR SPACECRAFT WITH COUPLED TRANSLATION
AND ATTITUDE DYNAMICS |
Author: |
CHUTIPHON PUKDEBOON |
Abstract: |
The design of a composite terminal sliding mode controller for the translation
and attitude control of a rigid spacecraft is studied. Based on the terminal
sliding mode (TSM) concept, a finite-time controller is designed to achieve
translation and attitude maneuvers in the presence of model uncertainties and
external disturbances. A finite-time disturbance observer (FTDO) is introduced
to estimate the total model uncertainties and external disturbances. The
proposed composite terminal sliding mode control consists of a finite-time
controller based on TSM concepts and a compensation term based on the FTDO.
Lyapunov theory is applied to prove the finite-time stability of the
closed-loop system. Numerical simulations of the translation and attitude
control of a rigid spacecraft are also provided to demonstrate the performance
of the proposed controller. |
Keywords: |
Composite Control, Terminal Sliding Mode Control, Finite-Time Disturbance
Observer |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
SPEAKER RECOGNITION IN CLEAN AND NOISY ENVIRONMENT USING RBFNN AND AANN |
Author: |
R. VISALAKSHI, P. DHANALAKSHMI |
Abstract: |
In this paper, we propose a speaker recognition system based on features
extracted from speech recorded using a close-speaking microphone in clean and
noisy environments. The system recognizes speakers from a number of acoustic
features that include linear predictive coefficients (LPC), linear predictive
cepstral coefficients (LPCC), and Mel-frequency cepstral coefficients (MFCC).
RBFNN and AANN are the two modelling techniques used to capture the features.
The RBFNN model enables a nonlinear transformation followed by a linear
transformation to achieve a higher dimension in the hidden space. The proposed
work compares the performance of RBFNN with the autoassociative neural network
(AANN), which is used to capture the distribution of the acoustic feature
vectors in the feature space. This model captures the distribution of the
acoustic features of a class, and the backpropagation learning algorithm is
used to adjust the weights of the network to minimize the mean square error
for each feature vector. The experimental results show that AANN performs
better than RBFNN, giving an accuracy of 94.93% for the various acoustic
features in both clean and noisy environments. |
Keywords: |
Radial Basis Function Neural Network (RBFNN); Autoassociative Neural Network (AANN);
Linear Predictive Coefficients (LPC); Linear Predictive Cepstral Coefficients (LPCC);
Mel-Frequency Cepstral Coefficients (MFCC); Speaker Recognition (SR) |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
PERFORMANCE EVALUATION FOR MULTI-HOLE PROBE WITH THE AID OF ARTIFICIAL NEURAL
NETWORK |
Author: |
J.V. MURUGA LAL JEYAN, Dr.M. SENTHIL KUMAR |
Abstract: |
The multi-hole conical probe is extensively employed in fluid fields for
estimating the total and static pressure and the velocity of dynamic fields.
The probe is formed from various materials, such as aluminum, copper, and
stainless steel, which are used in the wind tunnel to determine the static and
total pressure of the fluid fields. Probes of varied materials are employed to
assess their efficiency in real-time surroundings at diverse Mach numbers, and
the outputs are calculated according to displacement and stress for the
different material probes. An artificial neural network is employed to
forecast the performance of the varied material probes using the
Levenberg-Marquardt algorithm, and the outcomes are analyzed and contrasted
with the Conjugate Gradient with Beale (CGB), Variable Learning Rate Gradient
Descent (GDX), and Scaled Conjugate Gradient (SCG) algorithms of the
artificial neural network. MATLAB software is used to assess the efficiency of
the artificial neural network for the various kinds of material probes. |
Keywords: |
Multi-Hole Probe, Materials, Artificial Neural Network, Levenberg-Marquardt
Algorithm |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
WEIGHTED QUANTUM PARTICLE SWARM OPTIMIZATION TO ASSOCIATION RULE MINING AND PSO
TO CLUSTERING |
Author: |
D.GOKILA, DR.S.RAJALAKSHMI |
Abstract: |
In the area of association rule mining (ARM), one of the most important
algorithms is the Apriori algorithm. In the existing Apriori algorithm,
minimum support and confidence are determined subjectively or through trial
and error, so the algorithm lacks objectiveness and efficiency. To improve the
efficiency of association rules, a Particle Swarm Optimization (PSO) algorithm
is proposed, which gives feasible threshold values for minimum support and
confidence. The PSO algorithm initially looks for the optimum fitness value of
each particle and then finds the corresponding support and confidence as
minimum threshold values. The difficulty with the PSO algorithm is that it
assumes the items have the same significance, without taking into account
their weights/attributes within a transaction or within the whole item space.
To overcome this drawback, this paper proposes a weighted quantum particle
swarm optimization (WQPSO) algorithm with a weighted mean best position
according to the fitness values of the particles. The WQPSO algorithm provides
faster local convergence, resulting in a better balance between the global and
local search of the algorithm, so it generates good performance. The proposed
WQPSO algorithm is tested with several benchmark functions and compared with
standard PSO. The experimental results show the superiority of WQPSO, which is
verified by applying it to the FoodMart2000 database of Microsoft SQL Server
2000. Likewise, in clustering, many unsupervised clustering algorithms have
been developed; one such algorithm is K-means, which is simple and
straightforward. The main drawback of the K-means algorithm is that the result
is sensitive to the selection of the initial cluster centroids and may
converge to local optima. This is solved by PSO, as it performs a globalized
search and produces clusters with high intra-class similarity. |
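The quantum-behaved update with a weighted mean best position can be sketched
compactly. The code below is a minimal, hypothetical rendering of one
WQPSO-style position update (the parameter names and the fitness-weighting
scheme are assumptions, not the authors' exact formulation).

```python
import numpy as np

rng = np.random.default_rng(0)

def wqpso_step(x, pbest, gbest, weights, beta=0.75):
    """One WQPSO-style position update (sketch).

    x, pbest: (n, d) positions and personal bests; gbest: (d,);
    weights: (n,) fitness-based weights for the mean best position."""
    # Weighted mean best position, the WQPSO ingredient the abstract highlights.
    mbest = (weights[:, None] * pbest).sum(axis=0) / weights.sum()
    phi = rng.random(x.shape)
    p = phi * pbest + (1 - phi) * gbest          # local attractor per particle
    u = rng.random(x.shape)
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
    # Quantum-behaved move around the attractor, scaled by |mbest - x|.
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```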
Keywords: |
Data mining, Association Rule Mining, Particle Swarm Optimization, K-Means,
Weighted Quantum Particle Swarm Optimization, Clustering |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
AN EFFICIENT ASCII-BCD BASED STEGANOGRAPHY FOR CLOUD SECURITY USING COMMON
DEPLOYMENT MODEL |
Author: |
C.SARAVANAKUMAR, C.ARUN |
Abstract: |
Cloud computing is a service-oriented computing paradigm that offers
everything as a service following the pay-as-you-go model. It is popular
because cloud data are accessed by a variety of customers. An important issue
in cloud computing is securing customer data in geographically dispersed
locations. Plenty of security standards and policies are available for
securing the data, but these standards exist only at the cloud end. It is a
critical task for the customer as well as the provider to store, retrieve, and
transmit data over the cloud network and storage in a secure manner. A
customer storing or processing sensitive data needs security while the data
travel over the network to the processing environment. Existing security
algorithms secure customer data at the provider end in ways not known to the
customer, and multiple boundaries exist in cloud resource access, which leads
to reliability problems. The main objective of the proposed algorithm is to
develop a customer-owned security algorithm in which the encrypted data are
sent to the provider end. The provider can also apply security over the
customer's data using an efficient algorithm, so the customer's data are
secured at both ends, achieving maximum reliability. The proposed algorithm
uses ASCII- and BCD-based security with steganography, storing the encrypted
data in an image file that is then sent to the provider end. The security
algorithm is implemented using the Common Deployment Model (CDM), which also
provides interoperable security services over the cloud. In the future,
steganography can be applied to secure the virtual images in cloud
computing. |
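The final step, hiding encrypted data in an image file, is commonly done with
least-significant-bit (LSB) embedding. The sketch below is a generic LSB
scheme (an illustration of ours; the paper's ASCII-BCD encoding is not
reproduced here) showing where a customer-side ciphertext would be hidden
before transmission.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least significant bits of a uint8
    pixel array (one payload bit per pixel)."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = pixels.copy().ravel()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    return out.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes previously embedded with embed_lsb."""
    bits = pixels.ravel()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in cover image
stego = embed_lsb(img, b"ciphertext")       # ciphertext from the customer side
assert extract_lsb(stego, len(b"ciphertext")) == b"ciphertext"
```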
Keywords: |
Cloud Computing, Cryptography, Steganography, Distributed Systems, CDM |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
SYMBOLIC DATA CONVERSION METHOD USING THE KNOWLEDGE-BASED EXTRACTION IN ANOMALY
INTRUSION DETECTION SYSTEM |
Author: |
JATUPHUM JUANCHAIYAPHUM, NGAMNIJ ARCH-INT, SOMJIT ARCH-INT, SAIYAN SAIYOD |
Abstract: |
In anomaly intrusion detection systems, machine learning algorithms, e.g.,
KNN, SOM, and SVM, which are designed to work with numeric data, are widely
used to construct a model of normal system activity. Consequently, symbolic
data (e.g., TCP, SMTP, FTP, OTH, etc.) need to be converted into numeric data
prior to being analyzed. In previous works, different methods were proposed
for handling symbolic data, for example, excluding symbolic data, arbitrary
assignment, and indicator variables. However, these methods may entail a very
difficult classification problem, especially an increase in the dimensionality
of the data, which directly affects the computational complexity of the
machine learning algorithm. Thus, this paper proposes a new symbolic
conversion method to overcome the limitations of previous works by replacing
symbolic data with risk values obtained from knowledge-based extraction. The
experiments affirmed that our proposed method was more effective in improving
classifier performance than the previous works, and it did not increase the
dimensionality of the data. |
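The proposed conversion replaces each symbolic value with a single numeric
risk score rather than a block of indicator variables, so the feature
dimensionality is unchanged. A minimal sketch, with made-up risk values
standing in for the knowledge-based extraction:

```python
# Hypothetical risk values, e.g. derived from expert knowledge of how often
# each service appears in attacks; not the paper's actual scores.
RISK = {"ftp": 0.9, "smtp": 0.6, "tcp": 0.3, "oth": 0.1}

def to_numeric(record: dict) -> list[float]:
    """Map one connection record to a numeric feature vector: the symbolic
    service field becomes one risk value instead of many indicator columns."""
    return [RISK.get(record["service"], 0.5),  # unseen symbols get a default
            float(record["duration"]),
            float(record["src_bytes"])]

print(to_numeric({"service": "ftp", "duration": 2, "src_bytes": 512}))
```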
Keywords: |
Symbolic Conversion, Knowledge Extraction, Anomaly Detection, IDS, Machine
Learning |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
DESIGN OF SECOND ORDER SIGMA-DELTA MODULATOR FOR AUDIO APPLICATIONS |
Author: |
DHANABAL R, BHARATHI V, NAAMATHEERTHAM R SAMHITHA, G.SRI CHANDRAKIRAN, SAI
PRAMOD KOLLI |
Abstract: |
The paper presents the design of a second-order sigma-delta modulator. A
comparative study between a first-order and a second-order sigma-delta
modulator is carried out; the second order is preferred over the first order
due to its better signal-to-noise ratio (SNR). The comparative study between
the modulators is done in MATLAB, and the second-order sigma-delta modulator
is then designed, modelled in Verilog-A, and simulated in Cadence SpectreS.
The second-order sigma-delta modulator provides a 27% enhancement in SNR
compared to the first-order sigma-delta modulator. |
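To see what "second order" buys, a behavioural model is enough. The sketch
below uses the error-feedback form (a modelling choice of ours, not the
paper's Verilog-A topology): the 1-bit quantization error q is fed back so the
output equals the input plus (1 - z^-1)^2 shaped noise.

```python
import math

def sigma_delta_2nd(samples):
    """Behavioural model of a second-order sigma-delta modulator in
    error-feedback form: y = x + (1 - z^-1)^2 * q, i.e. the 1-bit
    quantization error q is second-order noise shaped."""
    q1 = q2 = 0.0                 # quantization error delayed once / twice
    out = []
    for x in samples:
        v = x - 2.0 * q1 + q2     # input plus shaped error feedback
        y = 1.0 if v >= 0.0 else -1.0   # 1-bit quantizer
        q2, q1 = q1, y - v        # shift the error delay line
        out.append(y)
    return out

# Oversampled sine input; a moving average of the bitstream tracks the input.
fs, f = 64000, 250
bits = sigma_delta_2nd(0.5 * math.sin(2 * math.pi * f * n / fs)
                       for n in range(4096))
```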
Keywords: |
Sigma Delta Modulator, SNR, Oversampling, Noise Shaping |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
DESIGN AND IMPLEMENTATION OF FACE DETECTION USING ADABOOST ALGORITHM |
Author: |
SENTHILSINGH C, M.MANIKANDAN |
Abstract: |
A face recognition system is an application for identifying someone from
images or videos. Face recognition comprises three stages: face detection,
feature extraction, and face recognition. Face detection is a difficult task
in image analysis; it involves detecting the object, analyzing the face, and
localizing it, and it is used in many applications, such as new communication
interfaces and security. Face detection is employed to detect human faces in
images or videos. The face detection algorithm converts the input images from
a camera into a binary pattern and finds face location candidates using the
AdaBoost algorithm. The proposed system is a face detection system based on
the AdaBoost algorithm, which selects the best set of Haar features and
implements them in a cascade to decrease the detection time. The proposed face
detection system is designed using Verilog and ModelSim, and is also
implemented on an FPGA. |
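A software analogue of the Haar-cascade detector the paper maps to hardware
can be run with OpenCV's pretrained frontal-face cascade (this is OpenCV's
stock model, not the authors' FPGA design; the input filename is
hypothetical):

```python
import cv2

# Load OpenCV's pretrained AdaBoost/Haar cascade for frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")          # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                   # draw a box around each face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```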
Keywords: |
Adaboost, Face Detection, FPGA, Haar Classifier, Image Processing, Real-Time. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
MALICIOUSNESS IN MOBILE AD HOC NETWORKS: A PERFORMANCE EVALUATION |
Author: |
K. TAMIZARASU, A.M. KALPANA, Dr. M. RAJARAM |
Abstract: |
Mobile Ad hoc Networks (MANETs) combine wireless communication with high node
mobility. The limited wireless communication range and node mobility require
that nodes cooperate with each other to provide networking, with the network
changing dynamically to ensure needs are met continually. The protocols'
dynamic nature makes MANETs suited to deployment in extreme circumstances.
Hence, MANETs are a popular research topic and are proposed for use in areas
like rescue operations, tactical operations, and environmental monitoring.
This study evaluates the impact of a malicious node attack (black hole attack)
on Ad hoc On-demand Distance Vector (AODV) MANET routing. The experiment
consists of 25 nodes distributed over 2 square kilometres, using the AODV
routing protocol. Three experiments were conducted: the first without
malicious nodes, and the others with 10% and 20% malicious nodes. |
Keywords: |
Mobile Ad Hoc Networks (MANETs), Routing in MANET, Ad Hoc On-Demand Distance
Vector (AODV), Black Hole Attack |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
A MULTI-OBJECTIVE APPROACH FOR ENERGY EFFICIENT CLUSTERING USING COMPREHENSIVE
LEARNING PARTICLE SWARM OPTIMIZATION IN MOBILE AD-HOC NETWORK |
Author: |
A.KARTHIKEYAN, SANYUKTA, SHEPHALI GUPTA, T.SHANKAR |
Abstract: |
A mobile ad-hoc network (MANET) faces various challenges, including limited
energy, limited communication bandwidth, computation constraints, and cost.
Therefore, clustering of sensor nodes is adopted, which involves the selection
of a cluster head for each cluster. This enhances system performance by
enabling bandwidth reuse, better resource allocation, and improved power
control. The various existing clustering techniques provide a single optimized
solution in a single simulation run. Therefore, a multi-objective approach is
used to optimize the number of clusters and to manage energy dissipation. The
proposed algorithm is a multi-objective variant of Particle Swarm Optimization
(PSO) called multi-objective comprehensive learning particle swarm
optimization (MOCLPSO), which reduces the time complexity and increases the
speed of the algorithm. In this technique, the best position of a randomly
selected particle from the population is used to update the velocity of a
particle in each dimension, rather than the personal or global best positions,
as sketched below. The parameters taken into consideration in the proposed
algorithm include the degree of nodes, the transmission range, and the battery
power consumption of the nodes. This technique provides multiple trade-off
solutions in a single run of the algorithm. The performance of the proposed
algorithm is compared with various clustering techniques: LEACH, PSO, WCA,
CLPSO, and MOPSO. |
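The comprehensive-learning velocity update described above, where each
dimension learns from the personal best of a randomly chosen particle, can be
sketched as follows (a simplified sketch: the exemplar-selection details of
full CLPSO/MOCLPSO are omitted and all names are assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

def clpso_velocity(v, x, pbest, w=0.7, c=1.5):
    """Comprehensive-learning velocity update (sketch): each dimension of
    each particle learns from the personal best of a randomly chosen
    particle, instead of from its own pbest and the global best."""
    n, d = x.shape
    # exemplar[i, j] = pbest of a random particle, in dimension j.
    exemplar = pbest[rng.integers(0, n, size=(n, d)), np.arange(d)]
    return w * v + c * rng.random((n, d)) * (exemplar - x)
```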
Keywords: |
Comprehensive Learning Particle Swarm Optimization (CLPSO), Multi-objective
Particle Swarm Optimization (MOPSO), Multi-objective Comprehensive Learning
Particle Swarm Optimization (MOCLPSO), Particle Swarm Optimization (PSO),
Weighted Clustering Algorithm (WCA) |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
TSAAC: THRESHOLD SENSITIVE ASSISTANT AIDED CLUSTERING PROTOCOL FOR
HETEROGENEOUS WSNs USING NICHING PARTICLE SWARM OPTIMIZATION |
Author: |
A. KARTHIKEYAN, FALAK JINDAL, NEERAJ KAUR BUMRAH, SWAPNIL PAMECHA, T.SHANKAR |
Abstract: |
Energy efficiency and improvement of network lifetime are among the critical
issues in wireless sensor networks (WSNs). Remote environmental monitoring and
target tracking are important applications of a wireless sensor network. In
this paper, TSAAC (Threshold Sensitive Assistant Aided Clustering), a
heterogeneous protocol for wireless sensor networks using Niching Particle
Swarm Optimization (NPSO), is proposed. It employs three levels of
heterogeneity, with cluster heads selected optimally based on the fitness
function of NPSO, an important technique for multimodal optimization. To
further balance the energy dissipation and enhance system robustness, an
assistant cluster head can be selected based on the cluster state information.
Being threshold sensitive, this protocol significantly improves the stability
period and reduces energy consumption by 14% compared to TSEP (Threshold
Sensitive Stable Election Protocol). The performance of the proposed protocol
is compared with existing protocols such as LEACH, AAAC-NPSO, SEP, and TSEP.
The simulation results show that TSAAC-NPSO achieves better network
lifetime. |
Keywords: |
WSNs (Wireless Sensor Networks), NPSO (Niching Particle Swarm Optimization),
Heterogeneity, Assistant-Aided Clustering, Thresholding. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
MULTI-FACTOR APPROACH FOR EFFECTIVE REGRESSION TESTING USING TEST CASE
OPTIMIZATION |
Author: |
S RAJU, G V UMA |
Abstract: |
Regression testing, which intends to ensure that a software application works
as specified after changes have been made to it, is an important phase in the
software development lifecycle. Regression testing is the re-execution of some
subset of tests that have already been conducted. As the number of regression
tests grows, it becomes impractical and inefficient to re-execute every test
for every application or function when a change occurs, and it is an expensive
testing process for detecting regression faults. Regression testing has been
used to support software testing activities and to assure appropriate quality
through several versions of a software product during its development and
maintenance. Test suites can be large, and conducting regression tests is
tedious. Regression testing assures the quality of modified applications
against unintended changes. Test case selection and prioritization are
therefore important in regression testing. Test case prioritization seeks an
efficient ordering of test case execution for regression testing; it is used
at the test suite level, with the goal of detecting faults as early as
possible in the regression testing process, given a test suite inherited from
previous versions of the system. |
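The APFD metric named in the keywords makes "detecting faults as early as
possible" measurable. A minimal sketch of the standard formula (the fault
matrix here is made up for illustration):

```python
def apfd(order, faults_detected_by):
    """Average Percentage of Faults Detected for a test ordering.

    order: list of test ids in execution order.
    faults_detected_by: dict mapping fault id -> set of tests revealing it.
    APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n)."""
    n, m = len(order), len(faults_detected_by)
    pos = {t: i + 1 for i, t in enumerate(order)}        # 1-based positions
    tf = [min(pos[t] for t in tests) for tests in faults_detected_by.values()]
    return 1 - sum(tf) / (n * m) + 1 / (2 * n)

# Hypothetical suite: a better ordering raises APFD by finding faults sooner.
faults = {"f1": {"t3"}, "f2": {"t1", "t4"}, "f3": {"t2"}}
print(apfd(["t1", "t2", "t3", "t4"], faults))   # 0.625
print(apfd(["t3", "t2", "t1", "t4"], faults))   # 0.75
```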
Keywords: |
Regression Test, Test Case Prioritization, Priority Factors, Defect Density,
Defect Removal Efficiency, Average Percentage of Fault Detected (APFD), Genetic
Algorithm, Clustering. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
COOPERATIVE PACKET DELIVERY USING ENHANCED COOPERATIVE OPPORTUNISTIC ROUTING
SCHEME (ECORS) IN MANET |
Author: |
VIJAYAKUMAR A, SELVAMANI K |
Abstract: |
Cooperative communication is an active area of research today. It enables
nodes to achieve spatial diversity, which leads to tremendous improvements in
system capacity and delay. In the proposed system, a cooperative communication
mechanism is used to determine a list of intermediate relay nodes en route to
the destinations, so that data packets broadcast from a source node are
received by the destination node along the route. Cooperative communication,
which utilizes nearby terminals to relay message transmissions, also induces
non-cooperative nodes to participate in some opportunistic environments,
achieving diversity gains and improving efficiency among the mobile nodes in
wireless mobile ad hoc networks. The Enhanced Cooperative Opportunistic
Routing Scheme (ECORS), which is based on lightweight proactive source
routing, is used in this work to ensure the cooperation of participating
mobile nodes in MANETs. This new protocol easily identifies the intermediate
nodes and establishes trusted routes. A comparative performance analysis
between AODV and ECORS is carried out, and a better cooperative packet
delivery ratio, increased throughput, and decreased packet delivery delay are
achieved in the network simulation. |
Keywords: |
Cooperative communication, MANETs, Opportunistic packet forwarding, Packet
delivery, Retransmission, Throughput. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
TERRESTRIAL-TO-SATELLITE INTERFERENCE IN THE C-BAND: TRACTABLE CALCULATION
TECHNIQUE |
Author: |
LWAY FAISAL ABDULRAZAK, A. HAMEED |
Abstract: |
This paper presents research conducted on interference mitigation between
IMT-Advanced and Fixed Satellite Services (FSS). It covers a deterministic
analysis of the interference-to-noise ratio (I/N), adjacent channel
interference ratio (ACIR), and path loss propagation, in order to determine
the separation distances in the co-channel interference (CCI) and adjacent
channel interference (ACI) scenarios. An analytical model has been developed
based on the deterministic analysis of the propagation model. The IMT-Advanced
parameters are represented by Worldwide Interoperability for Microwave Access
(WiMAX) 802.16e. The impact of different FSS channel bandwidths, guard band
separations, antenna heights, and deployment areas on co-existence feasibility
is considered. The results, in terms of the minimum required separation
distance in three scenarios (co-channel, zero guard band, and adjacent
channel), are analyzed. |
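A toy version of the separation-distance calculation shows the mechanics: pick
an I/N target, work out the required path loss, and invert the propagation
model for distance. The sketch below uses free-space path loss and
illustrative numbers (the paper's actual propagation model and parameters are
not reproduced here).

```python
import math

def fspl_db(d_km: float, f_mhz: float) -> float:
    """Free-space path loss in dB (d in km, f in MHz)."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

def separation_km(eirp_dbm, noise_dbm, i_n_target_db, f_mhz, rx_gain_dbi=0.0):
    """Distance at which the interference falls to the I/N target,
    assuming free-space propagation (illustrative only)."""
    required_loss = eirp_dbm + rx_gain_dbi - (noise_dbm + i_n_target_db)
    # Invert FSPL to get the distance that provides that loss.
    return 10 ** ((required_loss - 32.44 - 20 * math.log10(f_mhz)) / 20)

# e.g. a 30 dBm EIRP interferer into a -100 dBm noise floor with I/N = -10 dB
print(separation_km(30, -100, -10, f_mhz=3500))
```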
Keywords: |
Interference, Separation Distance, Bandwidth, IMT-Advanced, Guard Band |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
FEATURE EXTRACTION AND DIMENSIONALITY REDUCTION IN PATTERN RECOGNITION USING
HANDWRITTEN ODIA NUMERALS |
Author: |
PRADEEPTA K. SARANGI, KIRAN K. RAVULAKOLLU |
Abstract: |
Feature extraction is the initial and critical stage, which needs to be
carried out very carefully in any recognition system that uses pattern
matching. In order to reduce the feature extraction complexity, dimensionality
reduction is applied; this also improves performance and recognition accuracy.
This paper proposes a new feature extraction and dimensionality reduction
method based on a set of linear transformations of the character image. The
method has been verified by implementing a simple recurrent neural network
(RNN) on a data set consisting of 1500 isolated handwritten Odia numerals. An
accuracy of 92.41% is reported. Experimental results show that the proposed
method has the potential to be used as a feature extraction method for
handwritten Odia numerals. |
Keywords: |
Handwritten Recognition, Odia Numerals, Recurrent Neural Network, Feature
Extraction |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
RESOURCE MEDIAN MULTICAST ROUTING PROTOCOL FOR ENERGY CONTROLLED MULTIPATH
COMMUNICATION IN WIRELESS AD-HOC NETWORK |
Author: |
SHANMUGASUNDARAM, CHANDRASEKAR |
Abstract: |
Many network applications, consisting of distributed interactive methods,
software updates, and replica models for distributed databases, require
optimized routing, scheduling, and data dispersal schemes. A large wireless
ad-hoc network broadcasts messages from the source to all the members of a
multicast group. Certain research works based on the leader election model
(DSLE) focus on incentives in the form of reputations, ensuring security in
the election process. With the application of DSLE, the consumption of
resources is balanced during intrusion detection, but the detection service
through routing is not effective. To discover paths, the Channel-Aware Ad hoc
On-Demand Multipath Distance Vector (CA-AOMDV) method chooses stable links and
also predicts path failures to improve routing decisions. To improve the
routing decision, CA-AOMDV uses the channel average non-fading period as a
routing metric, but maximal energy is consumed in obtaining the channel state
information in a wireless ad-hoc network. To handle communication over
multipath routes effectively, the Resource Median Multicast Routing (RMMR)
protocol is designed. RMMR adapts to any form of wireless ad-hoc network,
using a median multicast tree for an effective routing service. The initial
connections between the wireless nodes are counted, and the computation effort
is minimized using the median multicast trees. Median multicast employs the
Energy Controlled Active Multicast (ECAM) algorithm, which constructs a
multicast tree with minimal energy consumption while transferring information.
The ECAM algorithm uses an idling concept to reduce energy consumption while
performing multipath communication in a wireless ad-hoc network. The
simulation metrics used in experimenting with the RMMR protocol are energy
consumption in multipath communication, routing control overhead, packet
delivery ratio of wireless nodes, and throughput based on mobility speed. |
Keywords: |
Resource Median Multicast Routing, Wireless Ad-hoc Network, Active Multicast,
Idling, Routing Protocol, Channel State Information |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
OFFLINE HANDWRITTEN SIGNATURE VERIFICATION USING BACK PROPAGATION ARTIFICIAL
NEURAL NETWORK MATCHING TECHNIQUE |
Author: |
ANWAR YAHY EBRAHIM, GHAZALI SULONG |
Abstract: |
Handwriting is a highly personal skill consisting of graphical marks on a
surface in relation to a particular language. Signatures of the same person
can vary with time and state of mind. Several studies have proposed methods to
detect forgeries in signatures, given the security implications of signatures
in daily business and personal transactions. This paper illustrates the
proposed methodology for an offline handwritten signature identification and
verification system, which extracts certain dynamic features derived from the
velocity and acceleration of the pen, together with other global parameters
like total time taken and number of pen-ups, in order to distinguish between
forged signatures and genuine signatures signed under duress. An adaptive
window positioning technique was employed for feature extraction, which
focuses not just on the meaning of the handwritten signature but also on the
individuality of the writer, by dividing the handwritten signature into 13
small windows of size n x n (13x13), large enough to contain ample information
about the style of the author and small enough to ensure good identification
performance. Then, a signer-specific codebook approach was used to generate a
separate codebook of patterns for each individual signer, such that the number
of classes in each codebook varies as a function of the writing sample
(signer), and a 3-layered Backward Propagation Artificial Neural Network
(BPANN) was used to produce a maximal matching and preserve the efficiency of
the network. The proposed method was validated using a trained GPDS data set
of 2400 original signatures from 100 different signers, comparing the results
with those of two known techniques for offline handwritten signature
verification. The findings indicate that the proposed technique had the lowest
EER value of 7.23, indicating improved performance when compared against the
two known techniques, and thus proving to be a more efficient and superior
method for offline handwritten signature identification and verification. |
Keywords: |
Offline Handwritten Signature, Identification, Verification, Adaptive Window
Positioning, Signer Specific Codebook, Backward Propagation Artificial Neural
Network |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
RSA-DWT BASED MEDICAL IMAGE WATERMARKING FOR TELEMEDICINE APPLICATIONS |
Author: |
N.VENKATRAM, L.S.S.REDDY, P.V.V.KISHORE, CH.SHAVYA |
Abstract: |
Medical images convey important information to the doctor about a patient's
health condition. The internet transmits these medical images to remote
locations around the globe to be examined by expert doctors. But data
transmission through the unsecured net raises authentication problems for any
image data. This problem of authentication of medical images is addressed in
our research as medical image watermarking with patient images. Medical images
contain very sensitive information, so watermarking them requires careful
modifications that preserve the data in the images. This is accomplished using
the RSA (Rivest, Shamir, and Adleman) algorithm for patient image encryption
and decryption. The host images are a set of medical images, such as MRI, CT,
and ultrasound scans of patient body parts. These medical images are
watermarked with the encrypted patient image in the transform domain using the
2D Discrete Wavelet Transform (DWT). The host medical image and the watermark
image are transformed into the wavelet domain and mixed using two scaling
factors, alpha and beta. Finally, the watermarked medical images are
transmitted through the internet along with the secret key that will be used
for decryption. At the receiving end, the embedded encrypted watermark is
extracted using the 2D DWT and the decryption key. The robustness of the
proposed watermarking technique is tested with various attacks on the
watermarked medical images. Peak signal-to-noise ratios and normalized
cross-correlation coefficients are computed to assess the quality of the
watermarked medical images and the extracted patient images. Results are
produced for three types of medical images with one patient image watermark,
using a single key and four wavelets (haar, db, symlets, bior) at four
different levels (1, 2, 3, 4). |
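The alpha/beta mixing step in the wavelet domain is easy to sketch with
PyWavelets (a minimal sketch: the RSA encryption of the patient image is
omitted, and the scaling factors and images are assumptions, not the paper's
values).

```python
import numpy as np
import pywt

def embed(host, watermark, alpha=0.95, beta=0.05, wavelet="haar"):
    """Mix the watermark into the host in the DWT approximation band,
    then invert the transform to get the watermarked image."""
    cA_h, details_h = pywt.dwt2(host, wavelet)
    cA_w, _ = pywt.dwt2(watermark, wavelet)
    mixed = alpha * cA_h + beta * cA_w       # scaled superposition
    return pywt.idwt2((mixed, details_h), wavelet)

host = np.random.rand(256, 256)   # stand-in for an MRI slice
mark = np.random.rand(256, 256)   # stand-in for the encrypted patient image
watermarked = embed(host, mark)
```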
Keywords: |
Medical Image Watermarking, RSA (Rivest, Shamir and Adleman) Algorithm,
Discrete Wavelet Transform (DWT), MRI, CT and Ultrasound Images. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
ANALYSIS OF OPTIMIZATION TECHNIQUES IN CHI-SQUARED AUTOMATIC INTERACTION
DETECTION |
Author: |
SWARNALATHA S.R, G.M. KADHAR NAWAZ |
Abstract: |
In the context of pattern classification, one of the major issues discussed by
researchers is the 'curse of dimensionality' problem, which occurs in data
classification because the data processed in most applications lie in a
high-dimensional feature space. The essential consideration here is to
identify irrelevant features, which reduce classification accuracy, and the
main goal is to find a minimum set of attributes from the initial data set
that makes the patterns easier to understand while improving classification
accuracy and reducing learning time. Feature set selection is therefore the
process of searching for an optimal feature subset of the initial data set
without compromising classification performance or the efficiency of
generating the classification model. In this paper, we develop a hybrid
classifier by combining CHAID and a genetic algorithm. Initially, the genetic
algorithm with an ABC operator extracts the best attributes, and based on the
extracted attributes CHAID generates the decision tree. We analyze the
performance on different datasets and compare the analysis with the existing
technique. |
Keywords: |
Optimization, Chi-Squared Automatic Interaction Detection |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
POWER SYSTEM LOADABILITY IMPROVEMENT BY OPTIMAL ALLOCATION OF FACTS DEVICES
USING REAL CODED GENETIC ALGORITHM |
Author: |
R.MEDESWARAN, N.KAMARAJ |
Abstract: |
FACTS devices can effectively control the load flow distribution, improve the
usage of existing system installations by increasing transmission capability,
compensate reactive power, improve power quality, and improve the stability of
the power network. However, the locations and settings of these devices in the
system play a significant role in achieving such benefits. This work presents
the application of a Real Coded Genetic Algorithm (RGA) for finding the
optimal locations and optimal parameter settings of single-type and multi-type
FACTS devices to achieve maximum system loadability (MSL) in the power system.
The FACTS devices used are the Thyristor Controlled Series Capacitor (TCSC)
and the Unified Power Flow Controller (UPFC); the reactance model for the TCSC
and the decoupled model for the UPFC are considered in this work. The thermal
limits of the lines and the voltage limits of the buses are taken as
constraints during the optimization. Simulated Binary Crossover (SBX) and
non-uniform polynomial mutation are employed to improve the performance of the
genetic algorithm. Simulations are performed on the IEEE 6-bus and 30-bus
power systems. The obtained results are encouraging and show the effectiveness
of RGA. |
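SBX is the ingredient that makes the GA "real coded". A minimal sketch of the
standard operator (the example parameter vectors are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def sbx(p1, p2, eta=15):
    """Simulated Binary Crossover for real-coded GAs (sketch).

    eta controls the spread: larger eta keeps children near the parents."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

# e.g. crossing two candidate FACTS parameter vectors (illustrative values)
c1, c2 = sbx(np.array([0.3, 1.2]), np.array([0.5, 0.8]))
```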
Keywords: |
Loadability, FACTS, TCSC, UPFC, and Real Coded Genetic Algorithm |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
DOCUMENT CLUSTERING USING CO-WORD ANALYSIS AND FORMATION OF KEYWORD AGAINST
DOCUMENT MATRIX |
Author: |
Abstract: |
A complexity in retrieving relevant documents from a large corpus is the most
common challenging problem in the areas of web mining and search engines. In
addition, the growth of unlabelled and unsupervised documents also increases
this complexity. Document clustering algorithms play a vital role in reducing
this problem. In this paper, an algorithm is proposed to cluster documents
based on their concepts. In its first part, the proposed algorithm identifies
the vital keywords of a document, which help to find the concept of the
document, using our new statistical text mining algorithm. In the second part,
based on the derived results, the given documents are clustered using a
Keywords vs. Documents matrix analysis. The proposed document clustering
algorithm gives more than 90% accuracy. |
Keywords: |
Document Clustering, Text Mining, Keyword Extraction, Co-word Analysis, HTML
Tags. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
A NOVEL APPROACH OF FIC FOR COMPRESSION OF LARGE SCALE REMOTE SENSING COLOR
IMAGES |
Author: |
D.SOPHIN SEELI, DR.M.K.JEYAKUMAR |
Abstract: |
Image compression has become an important issue in the storage and the
transmission of images including satellite and aerial photographs. A new
approach was used for compressing satellite images is to construct compression
algorithms by using the fractal theory. This paper is based on a novel image
structure, Spiral Architecture, which has hexagonal instead of square pixels as
the basic element. In the proposed work, we use only two codebooks for all
images. Open and private code book is generated for the remote sensing image
gallery, instead of using separate codebook for each during the process; hence
high compression ratio can be achieved. Introducing Spiral Architecture into
fractal image compression for remote sensing images will improve the compression
performance in compression ratio, reduction of memory and bandwidth cost of
large-scale remote sensing images. |
Keywords: |
Fractal Image Compression; Compression Ratio; Large-Scale Remote Sensing Image;
Hexagonal Structure; Spiral Architecture; Codebook. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
DETERMINING MAXIMUM/MINIMUM VALUES FOR TWO-DIMENSIONAL MATHEMATICAL FUNCTIONS
USING RANDOM CROSSOVER TECHNIQUES |
Author: |
SHIHADEH ALQRAINY |
Abstract: |
This paper presents a solution for determining the maximum/minimum values of
two-dimensional mathematical functions by applying the three most popular
crossover techniques (single-point, two-point, and cut-and-splice) randomly in
a genetic algorithm. A set of experiments was run over 20 complex functions.
The obtained results show that using random crossover techniques tends to
perform worse than a traditional genetic algorithm that uses a specific
crossover method such as single-point crossover. |
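The "random crossover" idea is simply to pick one of the three operators per
mating event. A minimal sketch over bit-string chromosomes (the operator names
and chromosomes are illustrative):

```python
import random

random.seed(3)

def single_point(a, b):
    i = random.randrange(1, len(a))
    return a[:i] + b[i:], b[:i] + a[i:]

def two_point(a, b):
    i, j = sorted(random.sample(range(1, len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def cut_and_splice(a, b):
    # Independent cut points per parent, so children may change length.
    i, j = random.randrange(1, len(a)), random.randrange(1, len(b))
    return a[:i] + b[j:], b[:j] + a[i:]

def random_crossover(a, b):
    """Pick one of the three operators at random for this mating event."""
    return random.choice([single_point, two_point, cut_and_splice])(a, b)

child1, child2 = random_crossover([0, 1, 1, 0, 1, 0], [1, 0, 0, 1, 0, 1])
```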
Keywords: |
Genetic Algorithm, Machine Learning, Heuristic Search, Crossover Methods |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
TWO-WAY DATABASE SYNCHRONIZATION IN HOMOGENEOUS DBMS USING AUDIT LOG APPROACH |
Author: |
RAI GUDAKESA, I MADE SUKARSA, I GUSTI MADE ARYA SASMITA |
Abstract: |
Data integration is the most important part of applying a distributed
database, in which data from various data sources are united by implementing
integration. Data replication is one of the most popular forms of data
integration nowadays, since it is convenient and can back up data across
various sites. The practice has shortcomings in data integration, as there is
no control over the integration of replicated data; therefore, it is necessary
to synchronize the data. Data synchronization is a part of replication: it is
a process that ensures each copy of the database contains the same objects and
data. Data synchronization can be applied with numerous methods; one of them
utilizes an audit log in which every activity occurring in the database is
recorded. An audit log can be applied to almost any Database Management System
(DBMS). This research utilized TCP sockets and a client-server architecture to
distribute data from the audit log. The final result concludes that the audit
log can be utilized for synchronization in a client-server implementation, yet
it has limitations in recording data. This paper also shows how the audit log
is created and managed to be used in the replication and synchronization
procedure. |
Keywords: |
Distributed Database, Data Integration, Data Replication, Data Synchronization,
Audit Log |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
A FUZZY AHP PROCESS IN GIS ENVIRONMENT FOR LANDFILL SITE SELECTION |
Author: |
BENNIS Kaoutar, BAHI Lahcen |
Abstract: |
The purpose of this paper is to present a hybrid methodology combining
Geographic Information Systems (GIS), the Fuzzy Analytic Hierarchy Process
(FAHP), and stakeholders' judgment. This multi-level decision-making tool
captures the ambiguity and imprecision in decision-making judgments and
results in a final priority ranking for siting a new waste disposal area for
the city of Tangier in northern Morocco.
The criteria at hand are spatial by nature and contribute differently to the
choice-making process; the GIS-FAHP integration makes it possible to deal with
geospatial data while accounting for the trade-offs of the criteria with
regard to the main objective.
The FAHP first acts as a method for extracting the weight of each
criterion/layer from a pair-wise comparison matrix; it then acts as a method
for selecting the best site from a set of most viable sites. The pair-wise
comparison matrices are the result of stakeholders' evaluation of all
criteria, sub-criteria, and sites.
The set of most viable sites is obtained with the GIS tool, in which we
conducted the data preparation: each criterion is represented by a layer,
either extracted from remote sensing imagery or digitized from local maps, and
a weighted linear combination (WLC) processes the layers into a map of
suitability for landfill siting. |
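The WLC step that produces the suitability map is a one-liner over the
criterion rasters. A minimal sketch (the layer names and weights are
hypothetical; in the paper the weights come from the FAHP pair-wise
comparisons):

```python
import numpy as np

def wlc(layers, weights):
    """Weighted linear combination of criterion layers.

    layers: (k, rows, cols) stack of normalized criterion rasters in [0, 1];
    weights: (k,) criterion weights (normalized to sum to 1 here).
    Returns a suitability raster."""
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), np.asarray(layers), axes=1)

# Hypothetical 3-criterion example: slope, distance to roads, distance to water.
layers = np.random.rand(3, 100, 100)
suitability = wlc(layers, [0.5, 0.3, 0.2])
```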
Keywords: |
Fuzzy Analysis, Analytic Hierarchical Process (AHP), Geographic Information
System (GIS) , Weighted Linear Combination (WLC), Suitability. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
SEARCH IN TEXT DOCUMENTS BASED ON N-GRAMS AND FOURIER WINDOW TRANSFORMATION |
Author: |
K.YA. KUDRYAVTSEV |
Abstract: |
This paper is theoretical in nature and is devoted to the development and
theoretical justification of a new full-text search method for documents. The
idea of the method is to partition the text into N-grams and to construct
"spectrograms" of a text document using the windowed Fourier transform. A fast
Fourier transform algorithm is then used, and the "spectrogram" of the
document text is compared to the "spectrogram" of the search string at special
"control" points. This paper provides a rigorous mathematical justification of
the proposed method, and the possibility of deciding on the presence of the
search string in the document is proven. A generalized algorithm of the new
full-text search method is presented. |
Keywords: |
Full-text search in databases, Window Fourier transform, Fast Fourier Transform,
Spectrogram text, N-Grams. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
GA BASED FEATURE RANKING MECHANISM TO DETECT NEWBORN INFANT JAUNDICE WITH AN
ENSEMBLE TREE STRATEGY |
Author: |
A. ARULMOZHI, DR.M. EZHILARASI |
Abstract: |
Jaundice is a yellow discoloration of the skin and whites of the eyes that is
often seen in newborns. Newborn jaundice can be diagnosed by examining the
infant and measuring blood levels of bilirubin. Infants with high blood levels
of bilirubin, a condition called hyperbilirubinaemia, develop the yellow color
when bilirubin accumulates in the skin. The major symptom of jaundice is the
yellow coloring of the skin and the conjunctiva of the eyes. A Genetic
Algorithm (GA) is used to enhance or optimize overall behavior by evolving the
population; genetic algorithms (GAs) are a relatively new paradigm for search,
based on the principle of natural selection. Ensemble methods are learning
algorithms that construct a set of classifiers; an ensemble of classifiers is
a combined classifier whose individual decisions are merged in some way to
classify new examples. In this analysis, an ensemble of fitness evaluations
produces an ensemble of fitness values for each individual. The experimental
results reveal that the proposed method can act as a supplement to support
earlier detection and more effective treatment through improved jaundice
detection. |
Keywords: |
Fitness Evaluation, Jaundice, Hyperbilirubinaemia, Maximal Information
Compression Index (MICI), Machine Learning, Kernel Support Vector Machine (SVM),
Gray Level Co-occurrence Matrix (GLCM), Genetic Algorithm (GA). |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |
|
Title: |
REDUCED ORDER LINEAR QUADRATIC REGULATOR PLUS PROPORTIONAL DOUBLE INTEGRAL
BASED CONTROLLER FOR A POSITIVE OUTPUT ELEMENTARY SUPER LIFT LUO-CONVERTER |
Author: |
N.ARUNKUMAR, T.S. SIVAKUMARAN, K.RAMASH KUMAR, S.SARANYA |
Abstract: |
The design and analysis of a reduced-order linear quadratic regulator (ROLQR)
plus proportional double integral controller (PDIC) for enhancing the dynamic
performance of the positive output elementary super lift Luo-converter
(POESLLC) operated in continuous conduction mode (CCM) is carried out. The
ultimate aim in designing the PDIC is to obtain efficient output voltage
regulation with zero steady-state error. The ROLQR is mainly designed to
regulate the inductor current, which in turn enhances the dynamic performance
of the converter. The POESLLC is modeled using the state-space averaging
technique. Extensive simulation is carried out under both line and load
variations, and the controller is also evaluated on an experimental model
(digital dsPIC30F4011 controller). The simulation and experimental results
illustrate that the POESLLC with ROLQR plus PDIC tracks the reference voltage,
regulates the inductor current, and is robust in spite of line and load
variations. |
Keywords: |
DC-DC Power Conversion, Positive Output Elementary Super Lift Luo-Converter,
Linear Quadratic Regulator, Proportional Double Integral Controller,
State-Space Average Model. |
Source: |
Journal of Theoretical and Applied Information Technology
31 July 2014 -- Vol. 65. No. 3 -- 2014 |
Full Text |