|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and considers articles
from a wide range of Information Technology disciplines, encompassing the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review; we will redact the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of Theoretical and Applied Information Technology
July 2014 | Vol. 65 No.2 |
Title: |
DYNAMIC SENSOR RELOCATION TECHNIQUE BASED LIGHT WEIGHT INTEGRATED PROTOCOL FOR
WSN |
Author: |
J.JOY WINSTON, Dr.B.BALAN PARAMASIVAN |
Abstract: |
Over the past few years, Wireless Sensor Networks have been used with growing
interest in a variety of popular and critical applications. They represent an
emerging set of technologies with profound effects across a range of medical,
industrial, scientific and governmental applications. A Wireless Sensor Network
is made up of a group of sensor nodes or devices; each device can monitor some
aspect of its environment and communicate its observations through other
devices to a destination, where the data from the network are gathered and
processed further for our requirements. Sensor nodes operate autonomously, even
in unattended environments and potentially in large numbers, and they cannot be
deployed manually in a hostile or harsh environment. Hence, large numbers of
sensors are deployed randomly in hostile or harsh environments, which leads to
coverage issues and, in turn, to connectivity loss; moreover, sensor nodes may
fail, which also causes connectivity loss, so the network cannot be used for
sensitive services. Thus, when deploying sensor nodes for sensitive services,
both coverage and connectivity must be addressed. Since sensors cannot be
deployed manually, sensor relocation schemes were proposed earlier, in which
the sensors themselves move toward the required positions to restore proper
coverage and connectivity. As this is one of the smartest techniques, this
research work has studied the Dynamic Sensor Relocation Scheme thoroughly and
implemented it. From our experimental results, we note that this scheme
achieves good coverage when nodes fail or become communication holes. However,
the approach consumes more energy owing to continuous migration, which shortens
the lifetime of the sensor network. It is also identified that this approach
fails to address the connectivity issues caused by node migration, as well as
secure communication, which may compromise the collected data. To address these
serious issues, this research work proposes an efficient Dynamic Sensor
Relocation Technique based Lightweight Integrated Protocol (LIP), which
addresses coverage, connectivity, communication and security. We have
implemented the proposed work and studied it thoroughly. |
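The relocation idea in the abstract, redundant sensors stepping toward a detected coverage hole, can be sketched as a toy simulation (this is not the authors' LIP protocol; the step size, positions and move-counting are illustrative assumptions):

```python
import math

def relocate(sensor, hole, step=1.0):
    """Move a sensor one step toward a detected coverage hole
    (minimal sketch of the relocation primitive)."""
    dx, dy = hole[0] - sensor[0], hole[1] - sensor[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return hole                                  # hole is now covered
    return (sensor[0] + step * dx / dist,
            sensor[1] + step * dy / dist)

pos = (0.0, 0.0)                                     # redundant sensor
hole = (3.0, 4.0)                                    # failed node's position
moves = 0                                            # proxy for energy spent
while pos != hole:
    pos = relocate(pos, hole)
    moves += 1
```

Counting `moves` makes the abstract's energy concern concrete: every unit of migration distance costs energy, so continuous migration shortens network lifetime.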
Keywords: |
Coverage, Connectivity, Energy Efficiency, Key Management, Sensor Relocation,
Topology Control |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
MODELING AND REAL-TIME DSK C6713 IMPLEMENTATION OF NORMALIZED LEAST MEAN
SQUARE (NLMS) ADAPTIVE ALGORITHM FOR ACOUSTIC NOISE CANCELLATION (ANC) IN VOICE
COMMUNICATIONS |
Author: |
AZEDDINE WAHBI, AHMED ROUKHE, LAAMARI HLOU |
Abstract: |
In this paper, a module consisting of a Normalized Least Mean Square (NLMS)
filter is modeled, implemented and verified on a TMS320C6713 digital signal
processor (DSP) to eliminate acoustic noise, which is a problem in voice
communications. The acoustic noise cancellation (ANC) module is modeled using
digital signal processing techniques, in particular Simulink blocksets.
The main scope of this paper is to implement the module on board an autonomous
DSK C6713 in real time, benefiting from the low computational cost and easy
implementation offered by Simulink programming.
The needed DSP code is generated in the Code Composer Studio environment using
Real-Time Workshop. At the experimental level, the implementation results
verify that the implemented module's behavior matches the Simulink model. |
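The NLMS update at the heart of such a module can be sketched in a few lines (a minimal stand-in, not the authors' Simulink/DSK implementation; the filter length, step size and toy signals are assumptions):

```python
import numpy as np

def nlms_cancel(d, x, mu=0.1, taps=8, eps=1e-8):
    """Normalized LMS noise canceller: d is the primary input
    (speech + noise), x is the noise reference; returns the error
    signal e, which is the noise-reduced output."""
    w = np.zeros(taps)
    e = np.zeros(len(d))
    for n in range(taps - 1, len(d)):
        xn = x[n - taps + 1:n + 1][::-1]          # latest reference samples
        y = w @ xn                                # estimate of the noise in d
        e[n] = d[n] - y                           # cleaned output sample
        w += (mu / (eps + xn @ xn)) * e[n] * xn   # power-normalized update
    return e

# toy demo: a sinusoidal "voice" corrupted by a filtered version of the
# reference noise, as in a two-microphone ANC setup
rng = np.random.default_rng(0)
s = np.sin(2 * np.pi * 0.05 * np.arange(4000))    # clean speech stand-in
v = rng.standard_normal(4000)                     # noise reference
noise = np.convolve(v, [0.8, -0.4, 0.2])[:4000]   # acoustic noise path (FIR)
e = nlms_cancel(s + noise, v)
residual = float(np.mean((e[2000:] - s[2000:]) ** 2))
```

The normalization by the input power `xn @ xn` is what distinguishes NLMS from plain LMS and is why a fixed `mu` works across signal levels.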
Keywords: |
Adaptive Algorithm, Acoustic Noise Cancellation (ANC), Real Time Implementation,
Digital Signal Processing, DSK C6713. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
HYBRID OPTIMIZATION ALGORITHM FOR GEOGRAPHIC ROUTING IN VANET |
Author: |
A. TAMIZHSELVI, R. S. D. WAHIDA BANU |
Abstract: |
Vehicular Ad-hoc Networks (VANETs) have developed rapidly over the last decade
and are widely used to improve safety and traffic efficiency. VANET is an
active research, standardization and development area owing to its potential to
improve vehicle and road safety, traffic efficiency, and driver and passenger
convenience. Although geographic routing in VANETs has recently received
attention, multi-hop communication in such networks is challenging because of
the changing topology and network disconnections, which lead to failures and
inefficiency in Mobile Ad-hoc NETwork (MANET) routing protocols. This study
proposes a hybrid of Particle Swarm Optimization (PSO) and the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to improve the efficiency of the
Geographical Routing Protocol (GRP). Simulation results demonstrate that the
proposed modified GRP using hybrid PSO with BFGS effectively improves the
packet delivery ratio and reduces the end-to-end delay. |
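The hybrid idea, PSO for global exploration followed by BFGS refinement of the incumbent, can be sketched on a stand-in objective (this is not the paper's GRP cost function; the particle counts, coefficients and smooth bowl objective are assumptions):

```python
import numpy as np

def bfgs(f, grad, x0, iters=50):
    """Textbook BFGS with a backtracking (Armijo) line search."""
    x = np.asarray(x0, float)
    H = np.eye(len(x))                 # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        p = -H @ g                     # quasi-Newton search direction
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5                   # backtrack until sufficient decrease
        s = t * p
        x_new, g_new = x + s, grad(x + s)
        ydiff = g_new - g
        if s @ ydiff > 1e-12:          # curvature check keeps H positive-definite
            rho = 1.0 / (s @ ydiff)
            I = np.eye(len(x))
            H = ((I - rho * np.outer(s, ydiff)) @ H
                 @ (I - rho * np.outer(ydiff, s)) + rho * np.outer(s, s))
        x, g = x_new, g_new
    return x

def pso(f, dim, n=20, iters=60, seed=1):
    """Plain global-best PSO over a box, returning the incumbent."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x += v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

# stand-in routing cost: a smooth bowl with optimum at (1, 1, 1, 1)
cost = lambda p: float(np.sum((p - 1.0) ** 2))
cost_grad = lambda p: 2.0 * (p - 1.0)
best_x = bfgs(cost, cost_grad, pso(cost, dim=4))   # PSO, then BFGS refinement
best_f = cost(best_x)
```

The division of labor is the point: PSO avoids poor local minima, while BFGS converges quickly once a good basin has been found.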
Keywords: |
Particle Swarm Optimization (PSO); Broyden-Fletcher-Goldfarb-Shanno (BFGS);
Geographical Routing Protocol (GRP); Vehicular Ad-hoc NETworks (VANETs) |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
A NOVEL PROTECTION GUARANTEED, QoT AWARE RWA ALGORITHM FOR ALL OPTICAL
NETWORKS |
Author: |
K .RAMESH KUMAR, R.S.D.WAHIDA BANU |
Abstract: |
All-optical transparent networks carry huge traffic, and any link failure, or
failure to restore a lightpath broken by a link failure, can cause the loss of
gigabits of data; hence guaranteed protection becomes necessary at the time of
failure. Many protection schemes have been presented in the literature, but
none of them addresses a protection guarantee. Also, in all-optical networks,
owing to the lack of regeneration capability, the physical-layer impairments
due to optical fibers and components accumulate along the lightpaths (LPs),
causing sharp degradation of the Quality of Transmission (QoT), as measured by
the signal bit error rate (BER), which is a dominant factor in the blocking
probability (BP) of transparent optical networks. The problem of protection
combined with QoT issues has rarely been studied in the literature. In this
work, a novel protection-backup-path-guaranteed, QoT-aware Routing and
Wavelength Assignment (RWA) algorithm called “Virtual Lit – Exhaustive Highest
Q-factor” (V-Lit EHQ) is presented, which exhibits desirable qualities for
reliable network operation. The proposed scheme possesses the merits, and
excludes the demerits, of the Lit and Dark protection schemes. The results of
the proposed work are compared with standard QoT-aware versions of the Shortest
Path-First Fit schemes for both Lit and Dark protection. The BP, Vulnerability
Ratio (VR) and BER are taken as the performance metrics, and the proposed
algorithm is found to outperform the others in all metrics, as shown through
simulations. |
Keywords: |
Routing and Wavelength Assignment, Path Protection, QoT, Physical layer
impairments, Blocking probability |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
MVC BASED NORMALIZATION TO IMPROVE THE CONSISTENCY OF EMG SIGNAL |
Author: |
M.I SABRI, M.F MISKON, M.R YAACOB, ABD SAMAD HASAN BASRI, YEWGUAN SOO,
W.M.BUKHARI |
Abstract: |
Electromyography (EMG) is the study of muscle function through analysis of the
electrical activity produced by a specific muscle of interest. This electrical
activity is displayed as a signal that manifests the neuromuscular activation
associated with muscle contraction. The best-known EMG recording techniques use
surface (non-invasive) and needle/wire (invasive) electrodes. This research
focuses on surface electromyography (sEMG) signals. During sEMG recording,
several problems are encountered: noise, motion artifacts, signal instability,
crosstalk and signal inconsistency. Inconsistency here refers to variation in
the quantity of EMG features with respect to the quantity of force produced by
the muscle. In addition, inconsistency of the feature-to-force mapping occurs
across different persons as well as across different readings of the same
individual. Inconsistency is due to muscle strength and size, crosstalk,
signal-to-noise ratio (SNR), signal bandwidth and fatigue condition, and it
causes the relationship between features and force to vary between linear and
nonlinear. A previous method was introduced to solve the inconsistency across
different readings of an individual, but the problem of readings across
different persons remained. Thus, this paper presents a method to solve the
inconsistency of EMG signals across different persons by normalizing the EMG to
the percentage of maximal voluntary contraction adapted to muscle endurance
(pre-fatigue), %MVCPF. The method is based on the hypothesis that the
Integrated EMG (IEMG), Mean Absolute Value (MAV), Root Mean Square (RMS), Sum
Square Integral (SSI) and standard deviation features are directly proportional
to %MVCPF for all persons. Two indicators measure the inconsistency problem:
the p-value must be less than 0.05, and the root mean square error of the
regression must be less than 10%. The results show that the p-value of each
person's signal after normalization is 0.0126, which is less than 0.05. In
addition, the root mean square errors of regression for the IEMG, MAV, RMS, SSI
and standard deviation features are less than 10%. These results show that the
improved normalization method better solves the problem of inconsistent
readings across different persons. In conclusion, the objective of solving the
inconsistent feature-to-force mapping across different persons was achieved. |
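The normalization described above amounts to dividing each time-domain feature by its value at maximal voluntary contraction; a minimal sketch on synthetic data follows (the feature set comes from the abstract, but the signals and scaling are assumptions, and the endurance/pre-fatigue adaptation of %MVCPF is omitted):

```python
import numpy as np

def emg_features(x):
    """Time-domain sEMG features named in the paper."""
    return {
        "IEMG": float(np.sum(np.abs(x))),      # integrated EMG
        "MAV":  float(np.mean(np.abs(x))),     # mean absolute value
        "RMS":  float(np.sqrt(np.mean(x**2))), # root mean square
        "SSI":  float(np.sum(x**2)),           # sum square integral
        "STD":  float(np.std(x)),              # standard deviation
    }

def normalize_to_mvc(feat, mvc_feat):
    """Express each feature as a percentage of its value at maximal
    voluntary contraction, so readings become comparable across persons."""
    return {k: 100.0 * feat[k] / mvc_feat[k] for k in feat}

rng = np.random.default_rng(2)
mvc = rng.standard_normal(1000)            # recording at maximal contraction
sub = 0.4 * rng.standard_normal(1000)      # submaximal contraction (~40% amp)
pct = normalize_to_mvc(emg_features(sub), emg_features(mvc))
```

Note that amplitude-like features (MAV, RMS) land near 40 %MVC here while the energy-like SSI lands near 16 %MVC, which is why the choice of feature matters for a linear feature-to-force mapping.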
Keywords: |
Surface Electromyography (sEMG), Feature Extraction, Nonlinear, ANOVA, MVCPF |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
IMPLEMENTATION OF INTELLIGENT CONTROL STRATEGIES ON CURRENT RIPPLE
REDUCTION AND HARMONIC ANALYSIS AT THE CONVERTER SIDE OF THE INDUSTRIAL
INVERTERS AND TRADEOFF ANALYSIS |
Author: |
R.SAGAYARAJ, Dr. S.THANGAVEL |
Abstract: |
This paper presents a trade-off analysis among different intelligent control
based resistance emulation techniques. Resistance emulation, which is the
consequence of current ripple reduction and harmonic reduction, is developed
with intelligent control on different Pulse Width Modulation (PWM) techniques.
In industrial drives, harmonic reduction has been carried out using different
PWM techniques on the inverter side of the drive, but this paper deals with
harmonic current ripple reduction on the converter side of the electrical
drive. A three-phase induction motor is connected as the load to the boost
converter for this analysis. Traditional PI controller based active resistance
emulation is compared with Fuzzy Logic Controller (FLC) and ANFIS controller
based emulators. The trade-off analysis of the performance of each of these
techniques gives guidelines on which technique to adopt in which situation.
MATLAB / Simulink based simulation is carried out and the results are tabulated
for comparison. The parameters for comparison are the current ripple and the
Total Harmonic Distortion (THD). The simulation results show the effectiveness
of the proposed method. |
Keywords: |
Adaptive Neuro-Fuzzy Inference System (ANFIS), Average Current Mode (ACM), Fuzzy
Logic Controller (FLC), Pulse Width Modulation (PWM), Total Harmonic Distortion
(THD) |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
IDENTIFYING A THRESHOLD CHOICE FOR THE SEARCH ENGINE USERS TO REDUCE THE
INFORMATION OVERLOAD USING LINK BASED REPLICA REMOVAL IN PERSONALIZED SEARCH
ENGINE USER PROFILE |
Author: |
P.SRINIVASAN AND K.BATRI |
Abstract: |
The reduction of search engine information overload is the focus of this paper.
Replicas of cluster content, hyperlinks and sub-hyperlinks in the personalized
user profile are removed to reduce the information overload. This is carried
out by comparing personalized search results against an existing query-cluster
method. It was found that a user profile with replication suffers more
information overload than a non-replicated profile. The experiments were
performed with the Lingo algorithm at a few threshold values, after computing
the word weights of all content and pre-processing. The findings confirm a
negative impact on cluster content, a positive impact on sub-hyperlinks, and no
impact on hyperlinks. To measure performance, a few threshold choices were
confirmed to yield the better results. Student's t-test is used for the
analysis, and it confirms the variation before and after removal of cluster-
and link-based replica content at all threshold levels. |
Keywords: |
Cluster Content, Sub-Hyperlinks, Keyword, User Profile, Replica, Hyperlinks,
Overload, Search Engine |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
HYPERSPECTRAL FACE CLASSIFICATION IN WAVELET DOMAIN USING KNN CLASSIFIER |
Author: |
P.V.V.Kishore, A.S.C.S.Sastry, T.Krishna Murthy, B.Gowthami, P.Anjana |
Abstract: |
Hyperspectral face images, captured using a hyperspectral camera, provide
richer information than face images captured with a normal RGB camera.
Hyperspectral imaging is the collection and processing of information from
across the visible electromagnetic spectrum. It deals with imaging narrow
spectral bands over a continuous visible spectral range, and produces the
spectra of all pixels in the scene. In this research, face recognition
experiments are performed on near-infrared hyperspectral images. Recognition is
carried out on a hyperspectral face database of 47 test subjects created by
Hong Kong Polytechnic University. The database images of hyperspectral faces
were collected using a CCD camera equipped with a liquid crystal tunable filter
providing 33 bands over the near infrared (0.7 µm–1.0 µm). Experiments were
conducted to demonstrate that this simple algorithm can recognize faces whose
pose and expression change over time. |
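The classification pipeline, projecting 33-band spectra to a low-dimensional space and applying a nearest-neighbour rule, can be sketched on synthetic data (the wavelet-fusion step is omitted; the class means, band count and k are assumptions, not the paper's settings):

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA: return the mean and the top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k].T                     # (33,) and (33, k)

def knn_predict(train_Z, train_y, query, k=1):
    """Minimum-distance (k-nearest-neighbour) classification."""
    d = np.linalg.norm(train_Z - query, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(train_y[nearest], return_counts=True)
    return int(vals[counts.argmax()])         # majority vote

rng = np.random.default_rng(3)
# two synthetic "subjects", 33-band spectra per sample (stand-in data)
a = rng.normal(0.0, 0.1, (50, 33))
b = rng.normal(1.0, 0.1, (50, 33))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)

mean, W = pca_fit(X, k=5)
Z = (X - mean) @ W                            # training set in PCA space
query = (rng.normal(1.0, 0.1, 33) - mean) @ W # unseen sample of subject 1
pred = knn_predict(Z, y, query, k=3)
```

PCA here plays the dimensionality-reduction role; in the paper the spectral bands are additionally fused in the wavelet domain before classification.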
Keywords: |
Hyperspectral, Face Classification and Identification, Wavelet based Fusion,
Principal Component Analysis (PCA), Minimum Distance Classifier (KNN). |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
RELIABILITY ANALYSIS OF WEB SERVICES BASED ON RUNTIME FAULT DETECTION |
Author: |
K.JAYASHREE, DR. SHEILA ANAND, R. CHITHAMBARAMANI |
Abstract: |
Web service technology is being increasingly used for commercial business
applications. The reliability of the web services provided is an important
criterion for enabling the widespread deployment of such services. This paper
proposes analyzing the reliability of web services based on runtime fault
information. Monitoring web service interactions enables runtime faults to be
detected and recorded. A sample application was developed, and logs were
designed for recording service usage and fault information. Faults were
injected into the sample application, and faults during publishing, discovery,
binding, execution and composition were recorded. Reliability estimation and
prediction were carried out using the Jelinski-Moranda Binomial model and the
Goel-Okumoto Non-Homogeneous Poisson model. |
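The Jelinski-Moranda model used for the estimation assumes the failure intensity drops by a constant amount each time a fault is removed: before the i-th failure the hazard is φ·(N − i + 1), with N total faults. A grid-search maximum-likelihood sketch on simulated inter-failure times (the data, grid and "true" parameters are assumptions for illustration):

```python
import numpy as np

def jm_loglik(N, phi, t):
    """Jelinski-Moranda log-likelihood: inter-failure times t[i-1] are
    exponential with rate phi * (N - i + 1)."""
    i = np.arange(1, len(t) + 1)
    lam = phi * (N - i + 1)
    return float(np.sum(np.log(lam) - lam * t))

rng = np.random.default_rng(4)
N_true, phi_true = 30, 0.02
# simulate 20 observed inter-failure times from the model
t = np.array([rng.exponential(1.0 / (phi_true * (N_true - i + 1)))
              for i in range(1, 21)])

# crude grid-search MLE over (N, phi); real tools solve the ML equations
best = max(((N, phi) for N in range(20, 61)
            for phi in np.linspace(0.005, 0.05, 46)),
           key=lambda p: jm_loglik(p[0], p[1], t))
N_hat, phi_hat = best
remaining = N_hat - 20        # estimated faults still in the software
```

The estimate `N_hat - 20` is the quantity of practical interest: how many faults are predicted to remain after the observed failures.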
Keywords: |
Reliability for web services, Monitoring Component, Fault Diagnoser, Fault Log,
Service Usage Log, Reliability analysis. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
A SMART SCHEDULING SCHEME WITH VOICE ENHANCEMENT FOR VOICE SERVICES IN LTE |
Author: |
M. KARPAGAM, N.NAGARAJAN |
Abstract: |
Long Term Evolution (LTE) technology supports only packet-based services. To
facilitate better Quality of Service (QoS) for voice packets, LTE has resorted
to the Circuit Switched Fallback (CSFB) approach to deliver voice services.
However, the CSFB scheme has many disadvantages, which are discussed in detail
later in the paper. In this paper, we propose a scheduling scheme with voice
enhancement for effective delivery of voice services in LTE. The proposed
scheme uses packet switching, unlike the CSFB scheme, which uses circuit
switching to guarantee delivery of voice services, thereby providing the
network with the flexibility of resource allocation offered by packet
switching. Moreover, the simulation results demonstrate that the proposed
scheme achieves the same QoS delivery as the CSFB approach. |
Keywords: |
LTE, Voice Services, Scheduling, QoS, CSFB |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
PRIVACY ENHANCED PERVASIVE MODEL WITH DYNAMIC TRUST AND SECURITY IN
HEALTHCARE SYSTEM |
Author: |
GEETHA MARIAPPAN AND MANJULA DHANABALACHANDRAN |
Abstract: |
In a pervasive environment, privacy is the foremost concern. This paper
proposes an intelligent mechanism for a privacy-preservation model using
dynamic trust and security management techniques. The minimum required
information, without ambiguity, is extracted from the trusted store and
conveyed or exchanged with trusted entities within or outside the system, as
many times as necessary, in the right context and during the right session, to
enhance the privacy of the information. The issue here is the concurrency and
asynchronous nature of the information, meaning that the same information may
be retrieved by a number of entities at the same time during any session. Thus,
the availability of user information, and the display of secret information in
public centers, can create a negative impact on some important users. To avoid
this scenario, we develop an intelligent system that can identify trusted
entities and dynamically adopt a mechanism in relation to other contextual
entities. |
Keywords: |
Privacy, Data Globalization, Data Virtualization, Data Embellishment, Pervasive |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
ACTIVITY CLASSIFICATION FRAMEWORK BASED ON PERSONALITY AND TIME SCALE |
Author: |
SITI SUHANA ABDULLAH, MOHD ROSMADI MOKHTAR, MOHD JUZAIDDIN AB AZIZ |
Abstract: |
A variety of activities input formally by a user, such as data from calendar
application usage, contains cognitive information that can be predicted through
an analysis of formally recorded activities. Although these data are static and
rigid, they still contain information that facilitates the understanding of
activity tendencies for today's computing requirements. This research aims to
classify activity and develop an activity classification framework based on
personality and time scale. The framework formation involves data collection
and the analysis and classification of activities based on the total
frequencies of the OCEAN personality model and the identified time scale. The
testing phase involved six evaluators who evaluated and classified activities
based on the OCEAN personality model framework and time scale. The OCEAN model
contains various adjectives categorized into five dimensions, namely
Extraversion, Agreeableness, Conscientiousness, Neuroticism and Openness to
Experience. These classifications are used and accepted in the field of
psychology in relation to personality and character development. The average
percentages assigned by the evaluators are 26.7% (Extraversion), 1.6%
(Neuroticism), 59.0% (Conscientiousness), 26.3% (Agreeableness) and 4.9%
(Openness to Experience). The results show the capability of the framework to
classify activity, with the Conscientiousness dimension selected by 60% and the
Extraversion dimension by 40% as dominant dimensions. The results reveal that
the activity classification framework based on personality and time scale is
capable of assigning an activity to the dominant personality dimension. |
Keywords: |
Cognitive Science, OCEAN Model, Activity Classification, Personality Dominant,
Calendar Data |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
ROBUST VIDEO STEGANOGRAPHIC MODEL FOR BANKING APPLICATION BY CASCADING THE
FEATURES OF SVD AND DWT |
Author: |
BOOPATHY. R , RAMAKRISHNAN. M. , VICTOR. S.P |
Abstract: |
Secret communication is attained through steganography. Imperceptibility and
robustness are its two main imperative components. This paper presents a new
combined approach to improve imperceptibility and robustness by cascading the
features of Singular Value Decomposition (SVD) and the Discrete Wavelet
Transform (DWT). The proposed method shows the effective use of DWT and SVD for
hiding an already encrypted secret text message, which adds another layer of
security. Frame separation is performed and the 2D-DWT is applied to the
desired frames; we chose the lower-resolution approximation image (LL sub-band)
as well as the (HL, LH) detail components for SVD transformation and embedding.
The low-frequency approximation coefficients are properly used in this method,
which differs from other conventional methods. The results prove that the
original message is extracted without any loss and that minimal distortion is
noticed in the reconstructed video. High capacity is accomplished by converting
the video into frames for embedding. The experimental results show that the
proposed method is robust against various additive noise attacks, including
Salt & Pepper noise; the average PSNR and NC values are maintained at 77 dB and
0.999 respectively for all frames, and the original and disturbed frames are
almost identical. Finally, a comparison of the proposed method with other
existing methods has been carried out. |
Keywords: |
DWT, LSB, Steganography, SVD, Wavelet decomposition |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
AN INTEGRATED MODEL FOR SECURE ON DEMAND RESOURCE PROVISIONING BASED ON
SERVICE LEVEL AGREEMENT (SLA) IN CLOUD COMPUTING |
Author: |
ADNAN ABDULWAHED HASSAN, BASMA MOHAMADHASHIM BAI, TAGHI JAVDANI GANDOMANI |
Abstract: |
Cloud computing, as a new paradigm of the new century, provides various
advantages and benefits for organizations and individuals. Among them, resource
provisioning is a major advantage through which cloud providers supply cloud
resources to their users. Providing a secure way of resource provisioning is a
critical issue; any neglect of it causes serious concerns for both providers
and users. A Service Level Agreement (SLA) defines the level of access to the
resources provided in a cloud environment. The main aim of this study is to
provide a model that integrates common security policies and mechanisms with
the SLA. It proposes a security infrastructure for on-demand service
provisioning. |
Keywords: |
Cloud Computing, Resource Provisioning, Service Level Agreement (SLA), Security,
Secure Model |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
A NOVEL R-SIP CACHE CONSISTENCY MECHANISM FOR MOBILE WIRELESS NETWORKS |
Author: |
LILLY SHEEBA S, YOGESH P |
Abstract: |
In Mobile Ad-hoc NETworks (MANETs), caching of data items among mobile nodes
has become an inevitable solution for fast information retrieval. However, most
research works focus on routing and security, with little research on
maintaining cache consistency within MANETs. Here we focus on providing cache
consistency in a mobile environment so as to increase the probability of
retrieving up-to-date data from nearby caching nodes instead of from the
server. To ensure cooperative cache consistency, we update the caching nodes
using a Registration-based Server-Initiated Push (R-SIP) mechanism, in which
the server maintains a registration table holding the details of all registered
clients and, whenever its data is refreshed, pushes the updated data to those
registered. Only registered clients are updated frequently, so unregistered
clients are not compelled to receive the recently updated data. This cache
consistency mechanism for cooperative caching ensures reduced bandwidth
utilization and lower query latency, besides decreasing network traffic and the
communication overhead at the data server. |
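The registration-table push can be sketched with two small classes (an illustrative data-structure sketch, not the paper's protocol implementation; class and field names are assumptions):

```python
class Client:
    """A caching node; its cache holds the last data pushed to it."""
    def __init__(self, cid):
        self.cid = cid
        self.cache = None

class RSIPServer:
    """Registration-based Server-Initiated Push sketch: the server keeps a
    registration table and pushes fresh data only to registered clients
    whenever its own copy is refreshed."""
    def __init__(self):
        self.table = set()        # registration table of client ids
        self.clients = {}         # cid -> Client
        self.data = None

    def register(self, client):
        self.table.add(client.cid)
        self.clients[client.cid] = client

    def refresh(self, data):
        self.data = data
        for cid in self.table:    # push only to registered clients
            self.clients[cid].cache = data

srv = RSIPServer()
c1, c2 = Client("c1"), Client("c2")
srv.register(c1)                  # c2 deliberately stays unregistered
srv.refresh("v2")                 # server data refreshed -> push to c1 only
```

This captures the abstract's point: unregistered clients are never pushed to, so bandwidth is spent only on caches that asked to stay consistent.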
Keywords: |
Cache Consistency, Cache Invalidation, Mobile Ad Hoc Networks, Registration
Table. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
A TDMA-BASED SMART CLUSTERING TECHNIQUE FOR VANETS |
Author: |
Mr. J.JAYAVEL, Dr. R.VENKATESAN, Mr. S. PONMUDI |
Abstract: |
A vehicle’s on-road time since its last pit stop has to be taken into account
when forming clusters in Vehicular Ad-hoc Networks (VANETs). This is important
for keeping vehicles in optimum driving condition and for giving the driver
rest at regular intervals. To the best of our knowledge, no clustering scheme
has considered this important practical issue when designing a routing protocol
for VANETs. In this paper, we present a dynamic and stable cluster-based MAC
protocol with a more realistic selection metric that includes the vehicle's
on-road time together with mobility information such as relative speed,
direction and connectivity among neighboring vehicles when prioritizing
vehicles joining a cluster. Our proposed Travel Time Based Clustering Approach
(TTCA) is based on a TDMA-based MAC scheme and has been simulated using OMNeT++
and SUMO. The results show a significant improvement in cluster stability and
throughput over existing approaches. |
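A composite cluster-head selection metric of the kind described, where lower on-road time and relative speed and higher connectivity make a vehicle a better candidate, can be sketched as a weighted score (the weight values and score shape are assumptions, not TTCA's actual metric):

```python
def ch_score(on_road_time, rel_speed, connectivity,
             w_time=0.5, w_speed=0.3, w_conn=0.2):
    """Cluster-head suitability: fresher vehicles (low on-road time),
    vehicles moving with the pack (low relative speed) and well-connected
    vehicles score higher. Weights are illustrative assumptions."""
    return (w_time * (1.0 / (1.0 + on_road_time))
            + w_speed * (1.0 / (1.0 + rel_speed))
            + w_conn * connectivity)

# two candidate vehicles with hypothetical mobility readings
vehicles = {
    "A": ch_score(on_road_time=5.0, rel_speed=2.0, connectivity=0.9),
    "B": ch_score(on_road_time=0.5, rel_speed=0.5, connectivity=0.8),
}
head = max(vehicles, key=vehicles.get)   # B: recently rested, stable mover
```

Folding on-road time into the score is what distinguishes this metric from speed-and-connectivity-only schemes: a vehicle about to take a pit stop is a poor cluster head even if it is currently well connected.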
Keywords: |
Vehicular Ad hoc networks, Cluster, TDMA, OMNET++, SUMO |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
DATA COLLECTION USING MOBILE ROBOT IN WSN:A REVIEW |
Author: |
RAJARAM.P, PRAKASAM.P |
Abstract: |
With the development of mass production, the cost of sensor nodes has decreased
significantly, and resource-rich sensor nodes are equipped with
high-technology devices such as GPS. However, since energy is the measure of
battery capacity for all sensor nodes, the energy depletion of a sensor node
results in a dormant node that is unable to communicate with other nodes, and
this creates a bottleneck in a WSN. To overcome this issue, a new data
collection approach uses a mobile robot to collect the sensed data from a
wireless sensor network. This paper reviews the present sensed-data collection
techniques, provides a summary of the use of mobile robots in wireless sensor
networks and the associated research in this area, and compares the various
schemes to explain their advantages and limitations. An experimental evaluation
covering various aspects shows the enhancement and deployment of the
sensed-data collection process in wireless sensor networks. |
Keywords: |
Data Aggregation, Data Collection, Mobile Computing, Mobile Robot, Wireless
Sensor Networks, Sensor Network, Target Model. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
OPTIMUM DESIGN OF MULTIPLE WINDING INDUCTION MOTOR |
Author: |
S.S.SIVARAJU, V.CHANDRASEKARAN |
Abstract: |
Of all electrical motors, 80% are three-phase squirrel-cage induction motors,
which are widely used in industrial and domestic applications because of their
relatively low cost and high reliability. When oversized, most of these motors
operate at low efficiency and poor power factor. Adjustable-flux motors with
multiple winding connections can be an energy-efficient solution in variable
and fixed load applications, improving efficiency and power factor by properly
adapting the motor flux to the actual load. In this paper, a design
optimization method is proposed: the optimal design of a multi-flux stator
winding to improve motor efficiency and power factor over a wide load range,
using a Genetic Algorithm. This population-based search algorithm is
conceptually simple, easy to implement and computationally efficient. A
parameter-less constraint-handling approach is incorporated in the proposed
algorithm to handle the constraints effectively. A comparison of the optimum
design with the conventional design for a 2.2-kW three-phase squirrel-cage
induction motor is presented. It is demonstrated that the optimal design
produces better efficiency over the entire load range, with energy savings that
make it most suitable for industrial applications. |
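A real-coded GA with parameter-less constraint handling (feasible solutions beat infeasible ones; otherwise lower loss wins) can be sketched on a stand-in objective (the objective, constraint and GA settings are assumptions, not the paper's motor design model):

```python
import random

def ga_minimize(loss, feasible, bounds, pop=30, gens=60, seed=7):
    """Real-coded GA; constraints are handled without penalty parameters:
    a feasible individual always beats an infeasible one."""
    rng = random.Random(seed)
    dim = len(bounds)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    def better(a, b):
        fa, fb = feasible(a), feasible(b)
        if fa != fb:
            return a if fa else b          # feasibility dominates
        return a if loss(a) < loss(b) else b
    P = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            p1 = better(rng.choice(P), rng.choice(P))  # binary tournament
            p2 = better(rng.choice(P), rng.choice(P))
            child = [(x + y) / 2 for x, y in zip(p1, p2)]  # arithmetic crossover
            k = rng.randrange(dim)                         # mutate one gene
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        P = [better(a, b) for a, b in zip(P, children)]    # elitist replacement
    best = P[0]
    for ind in P[1:]:
        best = better(best, ind)
    return best

# stand-in design problem: minimize "loss" (i.e. maximize efficiency) with a
# hypothetical feasibility constraint on the design variables
loss = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 3.0) ** 2
feasible = lambda x: x[0] + x[1] <= 6.0
best = ga_minimize(loss, feasible, bounds=[(0, 5), (0, 5)])
```

The feasibility-first comparison is the parameter-less idea: no penalty weight has to be tuned, which is what the abstract credits for effective constraint handling.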
Keywords: |
Adjustable Flux, Energy Efficient, Multiflux Stator Winding, Less Parameter
Approch, Population Algorithm. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
SIMULATED ANNEALING ALGORITHM USING ITERATIVE COMPONENT SCHEDULING
APPROACH FOR CHIP SHOOTER MACHINES |
Author: |
MANSOUR ALSSAGER , ZULAIHA ALI OTHMAN |
Abstract: |
A chip shooter placement machine in printed circuit board assembly has three
movable mechanisms: an X-Y table carrying a printed circuit board, a feeder
carrier with several feeders holding components, and a rotary turret with
multiple assembly heads to pick up and place components. To minimize the total
traveling time spent by the head in assembling all components, and to reach the
peak performance of the machine, all the components on the board should be
placed in a perfect sequence, and the assembly head should pick up each
component from a properly arranged feeder. There are two modeling approaches to
solving the component scheduling problem, integrated and iterative; the most
popular meta-heuristics used so far for the component scheduling problem are
population-based and use the integrated modeling approach. This work presents a
single-solution-based meta-heuristic, Simulated Annealing, adopted with an
iterative modeling approach. A computational study is carried out to compare it
with population-based algorithms that adopt the integrated approach. The
results demonstrate that the performance of the simulated annealing algorithm
using the iterative approach is comparable with that of population-based
algorithms using the integrated approach. |
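Simulated annealing over a placement sequence can be sketched with a pairwise-swap neighbourhood (an illustrative single-solution sketch; the cooling schedule, move operator and random point set are assumptions, not the paper's machine model):

```python
import math, random

def tour_len(seq, pts):
    """Total head travel for visiting placement points in the given order."""
    return sum(math.dist(pts[seq[i]], pts[seq[i + 1]])
               for i in range(len(seq) - 1))

def anneal(pts, T0=10.0, cooling=0.995, iters=20000, seed=5):
    """Simulated annealing over component placement sequences using a
    random pairwise swap as the neighbourhood move."""
    rng = random.Random(seed)
    seq = list(range(len(pts)))
    best, best_len = seq[:], tour_len(seq, pts)
    cur_len, T = best_len, T0
    for _ in range(iters):
        i, j = rng.sample(range(len(pts)), 2)
        seq[i], seq[j] = seq[j], seq[i]            # propose a swap
        new_len = tour_len(seq, pts)
        # accept improvements always; worsenings with Boltzmann probability
        if new_len < cur_len or rng.random() < math.exp((cur_len - new_len) / T):
            cur_len = new_len
            if new_len < best_len:
                best, best_len = seq[:], new_len
        else:
            seq[i], seq[j] = seq[j], seq[i]        # undo the swap
        T *= cooling                               # geometric cooling
    return best, best_len

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(15)]  # toy board
order, length = anneal(pts)
```

In an iterative approach, a run like this would fix the placement sequence first, with the feeder arrangement optimized in a separate pass, rather than encoding both decisions in one solution as integrated approaches do.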
Keywords: |
Printed Circuit Board, Chip Shooter Machine, Simulated Annealing. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
ENHANCED WATERMARK DETECTION MODEL BASED ECHO HIDING TECHNIQUE |
Author: |
MOHAMED TARHDA, RACHID EL GOURI, LAAMARI HLOU |
Abstract: |
Initially, Internet was a tool in the world of research and science. Information
was exchanged between universities and institutions and contributed to deepen
our knowledge. Today, the explosion of popularity of the World Wide Web has
accelerated the development of the Internet as an information and communication
resource for consumers and businesses. It has become a great way to
communicate, exchange, work, meet, learn and even trade. However, piracy has
also grown and causes great harm to intellectual property owners, who find
themselves losing the revenue that would have been gained had the legitimate
product been purchased.
To fight piracy and contribute to intellectual property protection, digital
watermarking researchers have developed many techniques to hide special
signals in original digital content in such a way that they can be used as
proof to support ownership claims. These techniques cover text, image, video
and audio content.
In this paper, we completed a non-blind audio watermarking design based on the
echo hiding technique. To achieve this, the audio signal of a music recording,
an audio book or a commercial is slightly modified by introducing echoes with
appropriate delays. In the detection process, we used a polynomial-based
scheme as an alternative to the cepstrum-based one commonly used in the
literature [1]. In our method, the quality of the watermarked audio tracks is
enhanced, as the echo is much more attenuated than in classical echo hiding
approaches. The bit rate of the embedded information is increased, as both
attenuations and delays can be used for embedding watermarks. Simulations show
good detection results. |
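Echo hiding, as the abstract describes, embeds information by adding an
attenuated, delayed copy of the signal, with the delay encoding the bit. A
minimal sketch follows; the delays `d0`/`d1` and attenuation `alpha` are
hypothetical parameters, and a simple autocorrelation comparison stands in for
the paper's polynomial-based detector, which is not reproduced here.

```python
def embed_echo(signal, bit, d0=50, d1=75, alpha=0.4):
    """Add one attenuated echo; the delay (d0 or d1) encodes the bit."""
    d = d1 if bit else d0
    out = list(signal)
    for n in range(d, len(signal)):
        out[n] += alpha * signal[n - d]
    return out

def detect_echo(signal, d0=50, d1=75):
    """Decide the bit by comparing autocorrelation at the two candidate lags
    (a stand-in for the cepstrum/polynomial detectors discussed above)."""
    def corr(lag):
        return sum(signal[n] * signal[n - lag]
                   for n in range(lag, len(signal)))
    return 1 if corr(d1) > corr(d0) else 0
```

For a noise-like carrier, the autocorrelation peak at the embedded lag
dominates, so the bit is recovered reliably.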
Keywords: |
Audio watermarking, Echo hiding technique, Cepstrum, Polynomial approach, Delay. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
A BACK PROPAGATION BASED BOTNET DETECTION ALGORITHM FOR ENHANCED NETWORK
SECURITY |
Author: |
M.KEMPANNA, DR.R.JAGADEESH KANNAN |
Abstract: |
The growing number of networked computers makes botnet detection challenging
and makes it easier for intruders and attackers to mount attacks. The
centralized propagation nature of a botnet floods worms through different
botnet clients and threatens network security. To overcome the challenges in
identifying botnets, we propose a new back propagation algorithm for botnet
detection. The proposed back propagation method is a learning method that
keeps track of the signature of each identified worm and the list of hops it
traversed. For each worm signature identified, it maintains a bot matrix in
which the set of hop addresses traversed by the data packet is stored.
Whenever a worm packet is identified, its traversal path is tracked and
compared with the list of hops in the bot matrix for occurrences of the hop
addresses present in the traversal path. |
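The bot-matrix bookkeeping the abstract describes — per worm signature, a
record of hop addresses, compared against each new packet's traversal path —
can be sketched as follows; the class and method names are illustrative, not
from the paper.

```python
class BotMatrix:
    """Per-signature record of hop addresses seen for identified worm
    packets (a simplified reading of the abstract's 'bot matrix')."""

    def __init__(self):
        self.hops = {}  # worm signature -> set of hop addresses

    def record(self, signature, path):
        """Store the hops traversed by an identified worm packet."""
        self.hops.setdefault(signature, set()).update(path)

    def match(self, signature, path):
        """Return the hops of `path` already associated with this worm
        signature -- overlap suggests the same propagation route."""
        return self.hops.get(signature, set()) & set(path)
```

A new packet whose path intersects the stored hops for a known signature is
flagged as following an already-observed propagation route.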
Keywords: |
Network Security, Botnet, Peer-Peer Networks, Bot Matrix, Hops. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
AN EFFECTIVE QUEUING MODEL BASED BANDWIDTH ALLOCATION SCHEME FOR FAIR QOS
PROVISIONING IN LTE SYSTEMS |
Author: |
M. KARPAGAM, N. NAGARAJAN |
Abstract: |
In this paper, we propose a novel bandwidth allocation scheme that will
guarantee quality-of-service (QoS) support in the downlink (DL) of a LTE-A
system. The proposed scheme is developed based on a queuing model that links
important performance metrics of DL service flows of LTE-A systems to a set of
tunable parameters. Based on the results of the queuing model, the
performance metrics are set. The bandwidth allocation scheme then uses the
pre-determined performance parameters in making scheduling decisions. In
order to achieve the best results, the proposed scheme is integrated with the
slot allocation process carried out in the physical layer. LTE-A systems use
an orthogonal frequency-division multiple access (OFDMA) slot allocation
mechanism that adapts to channel conditions at the destination
mobile stations (MSs). The proposed scheme performs better than the conventional
approach in terms of system throughput, delay and packet dropping ratio,
thereby improving the QoS of the overall system. Further, the proposed system
is simple in design, and its cross-layer approach guarantees improved resource
utilization, especially in the presence of link adaptation and channel fading.
Simulation results have demonstrated that the proposed scheme provides better
QoS support for both real-time and non-real-time applications in LTE-A systems. |
Keywords: |
LTE-A, OFDMA, Queuing Model, QoS, Cross Layer Approach. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
AUTONOMOUS RE-CONFIGURATION SYSTEM WITH ROUTE STABILITY IN WIRELESS
NETWORKS |
Author: |
K. MUTHULAKSHMI, Dr. K. BASKARAN |
Abstract: |
Multihop wireless networks experience frequent link failures caused by channel
interference, dynamic obstacles, and bandwidth demands. Because of this
degradation, wireless networks require expensive manual network management for
recovery. This paper presents an autonomous network reconfiguration system
(ARS) using the AOMDV protocol and acknowledgement reception. ARS enables
multi-radio wireless networks to autonomously recover from local link failures
to preserve network performance. ARS provides the necessary channel-assignment
changes with the help of the AOMDV protocol to improve performance. AOMDV uses
the channel's average non-fading duration as a routing metric to select stable
links for path discovery and applies a preemptive handoff strategy to maintain
a reliable connection by exploiting channel state information. After data
transmission, the receiver concerned provides an acknowledgement notification
to the transmitter node via an alternate path. The evaluation results show
that the network path becomes stable and no data loss occurs, and the overall
system performance efficiency improves by more than 90%. |
Keywords: |
Autonomous reconfiguration system, AODV, AOMDV, Wireless link failure and
Random Way point Model |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
ENERGY EFFICIENT MAC PROTOCOL WITH FAIR-SCHEDULING TECHNIQUE IN MANET |
Author: |
P. SIVANESAN, S. THANGAVEL |
Abstract: |
In Mobile Ad hoc Networks (MANETs), achieving fairness and increasing channel
utilization are important design goals of scheduling. However, these two goals
contradict each other. In this paper, a fair-scheduling technique for
inelastic traffic flows in MANETs is proposed. The network traffic is
differentiated into two categories: elastic and inelastic flows. In this
technique, data packets of inelastic flows are prioritized over data packets
of elastic flows. A utility function is estimated that considers channel
utilization and channel state information along with the delay of data
packets. When more than one data packet of the inelastic flows competes in
scheduling, the packet with the higher (upper-bound) delay field is
prioritized and scheduled. The proposed technique is validated through
simulation results, which show that it offers fairness in scheduling network
traffic. |
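The prioritization rule the abstract describes — inelastic packets before
elastic ones, and among inelastic packets the one closest to its delay bound
first — can be sketched with a priority queue. The packet fields used here are
assumptions for illustration, not the paper's actual packet format.

```python
import heapq

def schedule(packets):
    """Order packets: inelastic flows before elastic; among inelastic
    packets, the one with the larger accumulated delay (closer to its
    upper bound) goes first. Packets are (flow_type, delay, payload)."""
    heap = []
    for seq, (flow, delay, payload) in enumerate(packets):
        # Lower tuple sorts first: class 0 = inelastic, then by -delay
        # (larger delay first); seq breaks ties deterministically.
        prio = (0 if flow == "inelastic" else 1, -delay, seq)
        heapq.heappush(heap, (prio, payload))
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

Both inelastic packets are served before the elastic one, with the
most-delayed inelastic packet first.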
Keywords: |
Channel utilization, Fair-scheduling, MAC, MANET, Prioritized. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
IMAGE RESTORATION BASED ON UNSYMMETRIC FUZZY SWITCHED STEERING KERNEL |
Author: |
R. PRABHU, Lt. Dr. S. SANTHOSH BABOO |
Abstract: |
Digital image restoration is an image processing technique that deals with
methods used to recover an original image from a degraded image. The proposed
work addresses high-density impulse noise combined with Gaussian blur and
solves it with a hybrid method: fuzzy filtering techniques for denoising and a
Noise-Suppressed Steering Kernel (NSSK) for deblurring. In previous work, the
denoising outputs suffered from spurious edges. This disadvantage of the
steering kernel is overcome by using median filters: because the method is
iterative, spurious edges accumulate and become stronger if not removed, so
they are eliminated by applying a median filter at the end of each iteration.
The noise-suppressed steering kernel (NSSK) is used in the proposed method. |
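The per-iteration median filtering step can be illustrated in one dimension; a
sliding-window median replaces each sample with the median of its
neighborhood, which removes isolated impulse spikes (spurious edges) while
preserving step edges. This 1-D sketch stands in for the 2-D median filter an
image pipeline would use.

```python
def median_filter_1d(x, k=3):
    """Sliding-window median of odd size k; applied after each iteration
    to suppress impulse-like outliers while keeping genuine edges."""
    r = k // 2
    out = []
    for i in range(len(x)):
        window = sorted(x[max(0, i - r): i + r + 1])  # clipped at borders
        out.append(window[len(window) // 2])
    return out
```

A single impulse embedded in a flat signal is removed entirely by one pass.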
Keywords: |
Image restoration, Fuzzy filter, Steering kernel, Noise suppressed steering
kernel. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
OPTIMIZED DIGITAL FILTER ARCHITECTURES FOR MULTI-STANDARD RF TRANSCEIVERS |
Author: |
R.LATHA, Dr.P.T.VANATHI |
Abstract: |
This paper addresses two different architectures for the digital decimation
filter design of multi-standard Radio Frequency (RF) transceivers. Instead of
using a single-stage decimation filter network, the filters are implemented in
multiple stages on an FPGA to optimize area and power. The two proposed
decimation filter architectures achieve a considerable reduction in area and
power consumption without degrading performance. The filter coefficients are
derived from MATLAB, and the filter architectures are implemented and tested
using a Xilinx SPARTAN FPGA. The Xilinx ISE 9.2i tool is used for logic
synthesis, and the XPower analysis tool is used for estimating power
consumption. First, the decimation filter architectures are implemented and
tested using the conventional binary number system. Then two encoding schemes,
namely Canonic Signed Digit (CSD) and Minimum Signed Digit (MSD), are used for
the filter coefficients, and the architecture performance is tested again. The
results of the CSD- and MSD-based architectures show a considerable reduction
in area and power compared with the conventional number system based filter
design implementation. |
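CSD encoding, mentioned in the abstract, recodes a coefficient with digits in
{-1, 0, +1} so that no two nonzero digits are adjacent; fewer nonzero digits
means fewer adders in a multiplierless filter. Below is a sketch of the
standard recoding for positive integers (a textbook algorithm, not taken from
the paper).

```python
def csd(n):
    """Canonic Signed Digit encoding of a positive integer.
    Returns digits in {-1, 0, +1}, least significant first, with no two
    adjacent nonzero digits."""
    digits = []
    while n:
        if n % 2:
            d = 2 - (n % 4)  # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d           # clears a run of ones in one step
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def csd_value(digits):
    """Reconstruct the integer from its CSD digits."""
    return sum(d * (1 << i) for i, d in enumerate(digits))
```

For example, 7 = 111 in binary (three nonzero bits) becomes 8 - 1 in CSD
(two nonzero digits), saving one adder per such coefficient.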
Keywords: |
Digital Transceiver, Multi-rate Digital Filter, Multistage Decimation Filter,
FPGA, Area Reduction, Low Power Design. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
REAL TIME HEART AND LUNG SOUND SEPARATION USING ADAPTIVE LINE ENHANCER
WITH NLMS |
Author: |
K.SATHESH, Dr.N.J.R.MUNIRAJ |
Abstract: |
The heart sound signal (HSS) is separated from recorded real-time raw sound
signals (RSS) of human subjects. The signals are taken from different age
groups, for both males and females. In real-time sound-signal separation, the
desired sound signals are difficult to obtain; here, the real heart sound
signal is separated from the real-time raw lung sound signal. In the proposed
technique we introduce an adaptive line enhancer (ALE) with the normalized
least mean square (NLMS) algorithm, which is used to obtain the desired sound
signal from the real-time sound signal, while a linear predictive FIR filter
is used to detect the other sound signals and the interference. We use the
NLMS adaptive filter algorithm to obtain the desired real heart sound signal
because in NLMS the step-size factor µ is not constant and the weights are
updated at each iteration. Thus, we obtain the expected real heart signal,
without interference, from a real-time recorded sound signal. The proposed
system is implemented, and in the results the error rate of the desired sound
signal (DSS), the signal-to-noise ratio (SNR), and the computational time are
verified using MATLAB 2010. |
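The NLMS update the abstract alludes to — a step size normalized by the input
energy, so the effective µ varies per sample, with the FIR weights updated at
every iteration — can be sketched as follows. The filter order and step size
are illustrative choices, not the paper's settings.

```python
def nlms(desired, reference, order=4, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive FIR filter. The step size mu is divided by
    the energy of the current reference window, so the effective step is
    not constant; the weights are updated on every sample."""
    w = [0.0] * order
    out, err = [], []
    for n in range(len(desired)):
        # Tapped delay line of the reference input (zero-padded start).
        x = [reference[n - i] if n - i >= 0 else 0.0 for i in range(order)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
        e = desired[n] - y                          # error signal
        norm = sum(xi * xi for xi in x) + eps       # window energy
        w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, x)]
        out.append(y)
        err.append(e)
    return w, out, err
```

On a toy identification task (desired signal is a scaled copy of the
reference), the error collapses toward zero as the weights converge.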
Keywords: |
Heart sound signal (HSS), Lung sound signal (LSS), Adaptive line enhancer (ALE),
Normalized Least mean square (NLMS), Finite impulse response filter (FIR). |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
DESIGN OF AIRFOIL USING BACKPROPAGATION TRAINING WITH PENALTY TERM |
Author: |
K.THINAKARAN, DR.R.RAJASEKAR |
Abstract: |
Here, we investigate a method for the inverse design of airfoil sections using
artificial neural networks (ANNs). The aerodynamic force coefficients
corresponding to a series of airfoils are stored in a database along with the
airfoil coordinates. A feedforward neural network is created with the
aerodynamic coefficients as input and the airfoil coordinates as output.
Existing FNN training methods have limitations associated with local optima
and oscillation. In this paper, a new cost function is proposed for the hidden
layer separately. In the proposed algorithm, the output layer is incorporated
into a cost function having linear and nonlinear error terms, and a
functional-constraint penalty term at the hidden layer is likewise
incorporated into the cost function. Results indicate that optimally trained
artificial neural networks can accurately predict the airfoil profile. |
Keywords: |
Linear Error, Nonlinear Error, Steepest Descent Method, Airfoil Design, Neural
Networks, Backpropagation, Penalty Term. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
DEVELOPMENT OF BATTERY OPERATION MANAGEMENT TO MAINTAIN CONTINUITY OF
OPERATION OF MICROGRIDS IN ISLANDING CONDITION |
Author: |
HARTONO BUDI SANTOSO , RUDY SETIABUDY , BUDIYANTO |
Abstract: |
This research developed power-sharing methods between the inverters in a PV
microgrid. Regulation of the microgrid operation mode while using the back-up
battery in every distributed generator during islanding conditions is needed
in order to maintain continuity of power supply from every generator to meet
the load. The microgrid operation mode is arranged by adjusting the
power-sharing distribution between generators sourced from the battery and by
optimally utilizing solar radiation when the sun is present.
The study was conducted by comparing three microgrid operation-mode
simulations: stand-alone, equal output power, and equal battery level. The
simulation results show that the power-sharing method based on equal battery
level is better than the one based on equal output power: with equal battery
level, the Power Distribution Index (PDI) = 100%, while with equal output
power, PDI = 94.6%. Meanwhile, for the Battery Charging Index (BCI), both
methods give the same result of 100%. |
Keywords: |
Microgrid, Power Distribution Index, Battery Charging Index, Islanding Condition
And Continuity Operation |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
AN EFFICIENT BINARIZATION TECHNIQUE FOR HISTORICAL DOCUMENT IMAGES |
Author: |
J.RAMYA, B.PARVATHAVARTHINI |
Abstract: |
Binarization is an important preprocessing step in several document image
processing tasks. Binarization of historical documents with poor contrast,
strong noise, and non-uniform illumination is a challenging problem. A new
binarization algorithm has been developed to address this problem. In this
paper, we describe a new method that does not use the statistical properties
of the intensity histogram of a gray-scale image to determine a threshold. Our
method is based on a “Divide and Conquer” strategy for calculating a threshold
value from the list of gray levels in the image. It uses a low-pass Wiener
filter as a preprocessing step to enhance the image by deblurring it, and a
median filter as a postprocessing step to reduce noise. Our results show that
this new binarization method produces higher-quality binary images for
historical documents than global methods such as Otsu’s method. |
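The abstract does not give the exact divide-and-conquer rule, but a common
gray-level thresholding scheme in that spirit — iteratively splitting the gray
levels into two groups and moving the threshold to the midpoint of their
means — can serve as a hedged stand-in for comparison against global methods.

```python
def iterative_threshold(gray, tol=0.5):
    """Iteratively split the gray levels at the current threshold and move
    the threshold to the midpoint of the two group means (a common
    stand-in; not the paper's exact divide-and-conquer rule)."""
    t = sum(gray) / len(gray)  # start at the global mean
    while True:
        lo = [g for g in gray if g <= t]
        hi = [g for g in gray if g > t]
        if not lo or not hi:
            return t
        nt = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
        if abs(nt - t) < tol:
            return nt
        t = nt

def binarize(gray, t):
    """Map each pixel to 0 (ink) or 255 (background) at threshold t."""
    return [0 if g <= t else 255 for g in gray]
```

On a bimodal toy image the threshold lands between the two modes and the
binarization separates the classes exactly.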
Keywords: |
Binarization, Divide and Conquer, Historical Document images, Threshold. |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|
Title: |
EFFECT OF DUAL GATE MOSFET ON THE DECODER BASED DEMULTIPLEXER |
Author: |
ABDUL NAZAR C.A, KANNAN.V |
Abstract: |
In this work, a decoder based demultiplexer is designed using a DG MOSFET, and
its functional behavior is analyzed along with the characteristics of the
device. The equivalent circuit method is followed in this work, which is very
useful for analyzing the characteristics and performance of the dual gate
metal oxide semiconductor field effect transistor. The maximum drain current
obtained is 40 mA at a gate-source voltage of 5 V, and the variation of the
drain current with respect to the drain voltage is also obtained. The decoder
based demultiplexer is designed and simulated using PSPICE, and its functional
behavior is in accordance with the theoretical expectations. |
Keywords: |
Demultiplexer, DGMOSFET, Equivalent Circuit Method, Drain Current |
Source: |
Journal of Theoretical and Applied Information Technology
20 July 2014 -- Vol. 65. No. 2 -- 2014 |
Full
Text |
|