|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and will consider articles
from a wide range of Information Technology disciplines, encompassing the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review, and we will redact the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
|
|
Journal of Theoretical and Applied Information Technology
May 2017 | Vol. 95 No.10 |
Title: |
A RISK MITIGATION DECISION FRAMEWORK FOR INFORMATION TECHNOLOGY ORGANIZATIONS |
Author: |
NORAINI CHE PA, BOKOLO ANTHONY JNR., YUSMADI YAH JUSOH, ROZI NOR HAIZAN NOR, TEH
NORANIS MOHD ARIS |
Abstract: |
Information technology (IT) organizations are faced with various risks, such as
strategic, operational, and technical risks. These risks should be identified,
measured, and mitigated. Risk mitigation gives IT practitioners and management
an opportunity to assess risks and develop suitable strategies to treat them.
Risk mitigation in organizations provides a disciplined environment for
decision making, in which potential risk is measured and treated continuously.
Existing models and frameworks provide inadequate support to practitioners in
making risk decisions pertaining to risk mitigation, because they lack the
capabilities to support practitioners. To address this challenge, this research
identifies the processes and components of risk mitigation in organizations and
proposes a risk decision framework for mitigating both technical and
operational risk using software agents and knowledge mapping as techniques. A
qualitative research approach was adopted, using interviews to collect data. A
pilot study was carried out to validate the instrument. A case study was then
carried out to verify the risk mitigation processes and components. Lastly, the
framework was evaluated using iterative triangulation. |
Keywords: |
Risk Decision, Risk Mitigation, Software Agent, Knowledge Mapping, Iterative
Triangulation |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
IMPLEMENTATION OF AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA) METHODS FOR
FORECASTING MANY APPLICANTS MAKING DRIVERS LICENSE A WITH EVIEWS 7 IN PATI
INDONESIA |
Author: |
WARDONO, SCOLASTIKA MARIANI, YULIYANA FATHONAH |
Abstract: |
Drivers License A, or Surat Ijin Mengemudi A (SIM A), is the license issued by
the police to a person who has fulfilled all the requirements for driving a
motor vehicle. Past SIM A service data can be used to predict future data, for
example using Autoregressive Integrated Moving Average (ARIMA) methods with
Eviews 7. The purpose of this research is to find the best ARIMA model and to
use it to predict the average public demand for SIM A services in Pati Regency,
Indonesia for the coming period. The data used are monthly data from January
2010 until December 2015. The steps in the search for the best ARIMA model are:
a stationarity test of the data using a data plot, a correlogram, and a unit
root test; making the data stationary by differencing and logarithmic
transformation; estimating the model once the data are stationary; performing
diagnostic checking with a residual normality test, an autocorrelation test,
and a heteroskedasticity test; and selecting and determining the best model.
These steps resulted in the best model, ARIMA(0,2,2) with a logarithmic
transformation, which has SSR = 0.937246, AIC = 0.002447, and R2 = 0.755068. |
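The transformation step behind the selected ARIMA(0,2,2) model (a log transform followed by second-order differencing, i.e. d = 2) can be sketched as follows. This is a minimal NumPy illustration on synthetic data, not the authors' Eviews workflow:

```python
import numpy as np

def log_second_difference(series):
    """Log transformation plus second-order differencing: the step that
    makes the series stationary before fitting ARIMA(0,2,2)."""
    return np.diff(np.log(series), n=2)

# Illustrative data: a series whose log follows a quadratic trend
# becomes exactly constant after log + second differencing.
t = np.arange(72, dtype=float)                 # 72 months, Jan 2010 - Dec 2015
series = np.exp(0.001 * t**2 + 0.05 * t + 3.0)
stationary = log_second_difference(series)
print(np.allclose(stationary, stationary[0]))  # a constant (stationary) sequence
```

In practice the resulting stationary series is what the correlogram and unit root test are applied to before estimating the MA terms.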
Keywords: |
Drivers License A, Forecasting, ARIMA, Eviews 7 |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
PERFORMANCE ANALYSIS OF THROUGH SILICON VIAS (TSVs) |
Author: |
SANATH KUMAR TULASI , M. SIVA KUMAR , P. AMRUTHALAKSHMI , M. AKHILA |
Abstract: |
At present, 3-D IC integration utilizes Through Silicon Vias (TSVs), which
have attracted tremendous interest. The TSV structure is composed of Cu, an
isolating liner, and the silicon substrate. The isolating liner encompasses the
signal TSV to prevent signal leakage from the TSV to the silicon substrate. In
conventional TSV structures, SiO2 is utilized as the isolating liner on account
of its material compatibility with the silicon substrate. In contrast, several
researchers have reported issues with SiO2: its high dielectric constant brings
about an increase in delay. Consequently, SiO2 is not appropriate for
high-performance applications. In this paper, we use a polymer liner as the
isolating liner in place of SiO2. We simulated the performance of both the
conventional and the proposed TSV structures by varying the radius and height
of the TSV. The proposed TSV structure shows better simulation results compared
to the conventional TSV structure. |
Keywords: |
Delay, Height, Power, Radius, Through-Silicon Via (TSV). |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
A NOVEL LOW POWER AND LOW DELAY BUFFER USING DOMINO LOGIC DESIGN IN 32 NM
TECHNOLOGY |
Author: |
M.SIVA KUMAR, SANATH KUMAR TULASI, SRAVANI KARANAM, P.TEJDEEP, A.NAGARJUNA,
K.SRISAIRAJVENKAT |
Abstract: |
As device dimensions are miniaturized, propagation delay and power optimization
issues have been growing in circuit design while driving large capacitive
loads. Usually, large fan-out capacitive loads need to be driven by a single
gate without compromising high speed. Just as delay is scaled in on-chip
designs, a consistent system design is needed to scale down the delay in
off-chip designs as well. We therefore focus mainly on driving these large
capacitive loads, and to this end we introduce driving circuits known as
buffers. The main objective of this paper is to minimize the delay and power
consumption of the overall circuit while driving large capacitive loads using
buffers. The work is carried out in the Tanner tool in 32 nm technology. |
Keywords: |
Adaptive Exon Predictor, Computational Complexity, Deoxyribonucleic Acid,
Disease Identification, Exons, Three Base Periodicity |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
A THIRD ORDER SIGMA DELTA MODULATOR IN 45nm CMOS TECHNOLOGY |
Author: |
M.SIVA KUMAR, SANATH KUMAR TULASI, R.ARUNKANTH, M.NANDINI, G.SUDHEEER KUMAR |
Abstract: |
In present communication systems, low-power ADCs with high-speed
characteristics are the main building blocks. At present, these ADC
architectures are implemented in scaled VLSI technologies. This paper
delineates the design of a third-order single-loop switched-capacitor sigma
delta modulator designed in 45nm CMOS technology. The modulator is designed to
reduce power consumption in the low-voltage domain. The power consumption
depends on the OTA used, so a gain-enhancement OTA, which is more power
efficient than a two-stage OTA, is opted for. Simulation results are shown for
45nm CMOS technology with a ±1.2V supply voltage. The Σ-Δ modulator is designed
using the TANNER EDA tool: the schematic is drafted in S-Edit, transient
analysis is done in T-Spice, and waveforms are simulated in W-Edit. |
Keywords: |
VLSI Technologies, Low-Power Circuits, Sigma Delta Modulator,
Switched-Capacitor, Operational Transconductance, Rail-to-Rail Swing, Class-AB
Amplifier. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
FPGA IMPLEMENTATION OF DES ALGORITHM USING DNA CRYPTOGRAPHY |
Author: |
B.MURALI KRISHNA , HABIBULLA KHAN , G.L.MADHUMATI , K.PRAVEEN KUMAR ,
G.TEJASWINI , M.SRIKANTH , P.RAVALI |
Abstract: |
DNA cryptography is an evolving cryptographic technology in the field of
information security. Using DNA cryptography improves the security level and
protects information from attackers. However, the methods proposed earlier
remained theoretical concepts for enhancing security. In addition, traditional
cryptographic methods have demerits such as input size, computational speed,
and cost. To overcome these problems, this paper describes in detail the
advancements made to the DES (Data Encryption Standard) algorithm using DNA
cryptography. Moreover, it illustrates the DES algorithm's encryption and
decryption process, which follows a symmetric key system, followed by DNA
cryptography. The proposed technique has two stages: in the first stage, the
cipher is generated using the conventional DES algorithm; the key used to
produce the cipher is generated using partial reconfiguration, and the key is
then encrypted using a dummy key. In the second stage, the encrypted key and
cipher are subjected to DNA computing followed by the protein form, i.e., the
cipher is represented in the form of proteins, which is unbreakable. This
cryptographic technique is designed and simulated using Xilinx ISE and targeted
on the ZedBoard. The analysis of the results endorses that the proposed
algorithm is immune to attacks, reliable, and robust for the transmission of
information. |
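As a rough illustration of the DNA-computing stage, a DES cipher block can be mapped onto a nucleotide strand two bits at a time. The 2-bit encoding table below is hypothetical (the abstract does not state the paper's exact mapping), as is the sample cipher block:

```python
# Hypothetical 2-bit-per-base encoding; the paper's actual table may differ.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a nucleotide strand, 4 bases per byte."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    """Decode a nucleotide strand back to the original bytes."""
    bits = "".join(BITS_FOR_BASE[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

cipher = bytes.fromhex("85e813540f0ab405")  # an illustrative 64-bit cipher block
strand = to_dna(cipher)
assert from_dna(strand) == cipher           # the mapping is lossless
```

The subsequent codon-to-protein step would group such bases three at a time; the sketch stops at the nucleotide layer.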
Keywords: |
DNA Cryptography, Data Encryption Standard, RNA, Protein form, Zed Board FPGA |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
DYNAMICALLY EVOLVABLE HARDWARE-SOFTWARE CO-DESIGN BASED CRYPTO SYSTEM THROUGH
PARTIAL RECONFIGURATION |
Author: |
B.MURALI KRISHNA, G.L.MADHUMATI, HABIBULLA KHAN |
Abstract: |
Cryptography establishes a secure channel for data communication between sender
and receiver. Nowadays, millions of online transactions happen every second
throughout the world, in trading, banking, e-commerce, social networking,
etc., exchanging data among users. The evolution of the internet has led to an
increase in the number of hackers and cyber attacks over networks; network
security has become a major issue in the present era, and data protection has
become significant, such that an unbreakable encryption technology should be
designed in order to secure the data. VLSI technology has grown enormously in
the last two decades, extending its prominence to network security, where
cryptography for information processing has gained popularity. This paper
presents a cryptosystem in which a module is partially reconfigurable (PR) at
run time, serving two purposes: one module for dynamic key generation
mechanisms, and a second module for the inverse permutation block in the Data
Encryption Standard (DES) and the shift-rows block in the Advanced Encryption
Standard (AES), which play a vital role in data security. A new approach based
on the Deoxyribonucleic Acid (DNA) structure, whose four nucleotides are
A (Adenine), C (Cytosine), G (Guanine), and T (Thymine), is applied to both the
cipher and the key, which are merged and transmitted along a channel in protein
form, enhancing security. Run-time evolvable hardware such as the Field
Programmable Gate Array (FPGA), whose architecture and behavior change
dynamically with partial reconfiguration, is suitable for a wide variety of
applications and can be configured to implement custom designs and needs. The
encryption techniques are designed in Verilog HDL, synthesized in Xilinx,
simulated with the ISIM simulator, and implemented on a Virtex FPGA. Dynamic
keys and reconfigurable modules are generated by loading partial bitstreams
from a CF card, configured onto the FPGA by issuing commands in a serial
terminal through the MicroBlaze processor. |
Keywords: |
Cryptography, Partial Reconfiguration, AES, DES, DNA, FPGA. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
BEAM POINTING ACCURACY OF PHASED ARRAYS FOR SATELLITE COMMUNICATION |
Author: |
K CH SRI KAVYA, SARAT K KOTAMRAJU, B. NAVEEN KUMAR, M. D. N. S. MOUNIKA, SROTE
SINGH, AJAY SIDDA |
Abstract: |
Phased arrays are utilized in both radar and communication frameworks. They are
widely used in most applications because they can cover long distances. A
phased array generally means an electronically scanned array. More recently,
phased arrays have found use in communication systems such as satellites and
ground-based SATCOM. A phased array is a system that uses a large number of
individual antenna elements, each with phase control. A linear arrangement of
elements is considered for the array. The phase control permits the antenna
radiation pattern to be steered electronically to track targets or to maintain
communication links. The capacity to form multiple concurrent beams means that
the radar can track various targets at the same time. Beam pointing error
depends on several variables: when trying to point the beam in a particular
direction, a very small variation in the beam causes a pointing error. The
pointing error depends on the type of phase shifters used (digital or
analogue), the scanning angle, the number of bits used for phase shifting, and
the spacing between the elements. Here we try to reduce the pointing error in
order to steer the beam to the desired angle. The pointing error decreases with
an increase in the number of elements, an increase in the number of bits used
for producing phase states, and an increase in spacing. |
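The dependence on the number of phase-shifter bits can be made concrete: a B-bit digital phase shifter realizes phases in steps of 2π/2^B, so the worst-case per-element phase error halves with every added bit. The sketch below shows only this generic quantization arithmetic, not the paper's full array model:

```python
import math

def phase_step(bits):
    """Smallest phase increment realizable by a B-bit digital phase shifter."""
    return 2 * math.pi / (2 ** bits)

def max_quantization_error(bits):
    """Worst-case per-element phase error: half the quantization step.
    This per-element error is what drives the residual beam pointing error."""
    return phase_step(bits) / 2

for b in (3, 4, 5, 6):
    print(f"{b}-bit shifter: max phase error = {max_quantization_error(b):.4f} rad")
```

Averaging these per-element errors across more elements is also why the pointing error falls as the array grows.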
Keywords: |
Analog Phase Shifter, Beam Pointing Error, Digital Phase Shifter, Effects on
BPE, Effects of Steering Angle, Linear Antenna Array. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
IMPLEMENTATION OF LOW POWER SRAM CELL STRUCTURE AT DEEP SUBMICRON TECHNOLOGIES |
Author: |
YEDUKONDALA RAO VEERANKI, DAMARLA PARADHASARADHI, G MADAN SANKAR REDDY, KUPPA PM
SIVA KUMAR |
Abstract: |
SRAM (Static Random Access Memory) is a significant component among memory
devices, as it requires no refresh operation. To achieve high speed, SRAM is
used in most SoC chips. To obtain high reliability and low power consumption in
various applications, a low-power static RAM is needed. This paper concentrates
on reducing power dissipation during the write operation in a CMOS static RAM
cell at various frequencies. Different SRAM cell structures, such as the
single-bit SRAM cell and the stable SRAM cell, are implemented and compared
with the proposed SRAM cell construction. Generally, power dissipation occurs
during the write operation because of the charging and discharging of the SRAM
cell bit lines, which is the major problem in the static RAM cell. In this
work, the proposed static RAM cell operates at lower power compared to existing
models. The comparative analysis is performed in the DSCH and Microwind tools
at the 180nm technology node. |
Keywords: |
SRAM, STABLE SRAM Cell, Single bit SRAM Cell, 180nm |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
STATISTICAL ANALYSIS OF PROPAGATION PARAMETERS FOR FADE MITIGATION |
Author: |
K. CH. SRI KAVYA, SARAT K. KOTAMRAJU, B. S. S. S. D CHARAN, PHANINDRA K, BITRA
SRINIVAS, N. NARENDRA KUMAR |
Abstract: |
Signal attenuation is the major reason for the loss of signal in some regions.
Many atmospheric factors cause degradation of the signal. In summer the
attenuation is mostly due to scintillations in the troposphere. Fog is the
major reason for attenuation in the winter season, whereas in the monsoon it is
due to rain and cloud. The depth of this attenuation depends purely on the
frequency of the signal: if the signal frequency is greater than 10GHz,
degradation of the signal is greater. This work deals with the attenuation of
the signal due to rain. The analysis is useful for implementing a suitable fade
mitigation technique. Fade mitigation techniques help in receiving the signal
without loss; however, the suitability of a fade mitigation technique varies
from region to region. The analysis is done for the beacon data received at
K L University, Vaddeswaram, located 29.08m above sea level at latitude
16.46° N and longitude 80.54° E. |
Keywords: |
Attenuation, Fade Mitigation, Beacon, Ku band, Rainfall Data. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
COMPUTATION OF ATTENUATION DUE TO RAIN FOR KU BAND FREQUENCIES USING DSD FOR THE
TROPICAL REGION |
Author: |
GOVARDHANI.IMMADI, M.VENKATA NARAYANA,SARAT K KOTAMRAJU, T.S.S.P. SARVANI,T.
MANASA,CH. VAMSI YASWANT, J. AKSHAYA KALYAN |
Abstract: |
These days we are observing a rise in spectral congestion, mainly due to the
increased usage of wireless technologies. This has led to the occupation of
higher-band frequencies for efficient communication. As part of this, antennas
operating at gigahertz frequencies are being designed and implemented for
satellite communications. However, microwave signals experience a loss of
signal strength when they interact with the various layers of the atmosphere,
precipitation, clouds, etc. Here the major impairment is due to rain, so a
model which can estimate signal attenuation has to be developed. This can be
quantified using conventional methods such as physical modelling and empirical
modelling with regression techniques over years of data, but this raises the
complexity of calculating the attenuation caused by rain: handling such large
data is very difficult and time-consuming. Rain drop size distribution replaces
this hard work with a simplified analysis for any specific region. First, a
suitable distribution model is selected for the region, and attenuation is
calculated using Mie scattering over all spherical rain drops, considering that
a major part of the attenuation is due to drop size. This is followed by
equation modelling using MATLAB. The experiment is conducted at K L University,
located at 16.44° N and 80.60° E. |
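The attenuation model that comes out of such an analysis typically takes a power-law form in the rain rate. The sketch below uses the generic ITU-R style relation γ = k·R^α with placeholder coefficients; the actual k and α for Ku band in this region would come from the DSD and Mie-scattering analysis described above, not from these values:

```python
def specific_attenuation(rain_rate_mm_h, k, alpha):
    """ITU-R style power law: specific attenuation gamma (dB/km) = k * R^alpha.
    k and alpha depend on frequency and polarization; the values used
    below are placeholders for illustration, not the paper's results."""
    return k * rain_rate_mm_h ** alpha

k, alpha = 0.02, 1.2  # placeholder Ku-band-like coefficients
for rate in (5, 25, 50, 100):  # light rain up to a tropical downpour
    print(f"{rate:>3} mm/h -> {specific_attenuation(rate, k, alpha):.3f} dB/km")
```

Multiplying the specific attenuation by an effective path length through rain gives the total path attenuation used for link budgeting.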
Keywords: |
Spectral Congestion, Mie Scattering, Drop Size, Drop Size Distribution,
Precipitation, Attenuation Due To Rain. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
ADAPTIVE NOISE CANCELLERS FOR CARDIAC SIGNAL ENHANCEMENT FOR IOT BASED HEALTH
CARE SYSTEMS |
Author: |
MD NIZAMUDDIN SALMAN, P TRINATHA RAO, MD ZIA UR RAHMAN |
Abstract: |
Cardiac signals (CS) are affected by various artifacts during acquisition and
transmission, so these artifacts must be removed before the signal is presented
to a doctor. In this paper the Normalized Median Least Mean Square (NMLMS)
algorithm is introduced for the elimination of Power Line Interference (PLI),
Baseline Wander (BW), Muscle Artifacts (MA), and Electrode Motion (EM) from CS.
The NMLMS has many advantages over other conventional algorithms; i.e., it
tends to reject single occurrences of large noise spikes, which otherwise
introduce impulsive errors. Computational complexity can be reduced by
combining sign algorithms with the NMLMS algorithm, which results in three new
algorithms. Based on these algorithms, various Adaptive Noise Cancellers (ANCs)
have been developed to eliminate BW, MA, and EM from the CS. The algorithms
have been applied to real CS obtained from the MIT-BIH database. The simulation
results confirm that the NSRMLMS algorithm is better than the conventional LMS
algorithms in terms of Signal to Noise Ratio Improvement (SNRI), Excess Mean
Square Error (EMSE), and Misadjustment (MSD). From the simulation results it is
clear that NSRMLMS achieves a higher SNRI than the conventional LMS algorithms:
the average SNRI values are 11.2748dB, 9.4715dB, 10.6917dB, and 10.7076dB for
PLI, BW, MA, and EM respectively. Due to their reduced computational
complexity, these algorithms are useful for Internet of Things (IoT) based
remote health care monitoring systems. |
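The complexity reduction from combining a sign algorithm with an LMS-type update can be sketched in isolation: the weight update uses only the sign of the error, so no error multiplication is needed per tap. This is a generic sign-error LMS noise canceller on synthetic data, not the paper's NMLMS implementation:

```python
import numpy as np

def sign_error_lms(desired, reference, mu=0.005, taps=8):
    """Sign-error LMS adaptive noise canceller: the update uses sign(e)
    instead of e, replacing a multiplication per tap with a sign test -
    the kind of complexity reduction the abstract refers to."""
    w = np.zeros(taps)
    out = np.zeros(len(desired))
    for n in range(taps, len(desired)):
        x = reference[n - taps + 1:n + 1][::-1]  # current + past reference samples
        e = desired[n] - w @ x                   # canceller output = enhanced signal
        w += mu * np.sign(e) * x                 # sign-based weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
n = np.arange(4000)
clean = np.sin(2 * np.pi * n / 200)        # stand-in for a cardiac component
noise = rng.normal(0.0, 0.5, n.size)       # stand-in for EM/MA-type interference
primary = clean + noise                    # contaminated recording
enhanced = sign_error_lms(primary, noise)  # reference input = correlated noise
```

With the noise reference available, the canceller's error output converges toward the clean component, at lower arithmetic cost than plain LMS.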
Keywords: |
Adaptive Algorithms, Adaptive Noise Cancellers, Artifacts, Cardiac Signal,
health care systems. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
ADAPTIVE SPEECH ENHANCEMENT TECHNIQUES FOR COMPUTER BASED SPEAKER RECOGNITION |
Author: |
JYOSHNA GIRIKA, MD ZIA UR RAHMAN |
Abstract: |
Extraction of high resolution speech signals is an important task in all
practical applications. During transmission, the desired signals are
contaminated by many noises. The Least Mean Square (LMS) algorithm is a basic
adaptive algorithm that has been widely used in many applications as a
consequence of its simplicity and robustness. In practical applications of the
LMS algorithm, an important parameter is the step size. It is well known that
with a large step size the convergence of the LMS algorithm will be rapid, but
the steady-state mean square error (MSE) will rise. On the other side, with a
small step size, the steady-state MSE will be small, but the convergence rate
will be slow. Thus, the step size provides a tradeoff between the convergence
rate and the steady-state MSE of the LMS algorithm. Making the step size
variable rather than fixed enhances the performance of the LMS algorithm: large
step size values are used during the initial convergence of the LMS algorithm,
and small step size values are used when the system is close to its steady
state, which results in Normalized LMS (NLMS) algorithms. In this technique the
step size is not constant and varies according to the error signal at each
instant. In order to improve the quality of the speech signal, decrease the
mean square error, and increase the signal-to-noise ratio of the filtered
signal, the Weight Normalized LMS (WNLMS), Error Normalized LMS (ENLMS), and
Unbiased LMS (UBLMS) algorithms are introduced. These adaptive noise cancellers
are compared with respect to Signal to Noise Ratio Improvement (SNRI). |
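The variable-step-size idea can be seen in a minimal NLMS canceller: the raw step mu is divided by the instantaneous input power, so the effective step adapts automatically and adaptation stays stable. This is a generic sketch on synthetic data, not the paper's WNLMS/ENLMS/UBLMS variants:

```python
import numpy as np

def nlms(desired, reference, mu=0.2, taps=8, eps=1e-8):
    """NLMS canceller: the effective step size mu / (eps + ||x||^2)
    shrinks as input power grows, giving the fast-then-fine adaptation
    the abstract describes for variable step sizes."""
    w = np.zeros(taps)
    out = np.zeros(len(desired))
    for n in range(taps, len(desired)):
        x = reference[n - taps + 1:n + 1][::-1]  # current + past reference samples
        e = desired[n] - w @ x                   # enhanced output sample
        w += (mu / (eps + x @ x)) * e * x        # power-normalized update
        out[n] = e
    return out

rng = np.random.default_rng(1)
n = np.arange(4000)
speech = np.sin(2 * np.pi * n / 80) * np.sin(2 * np.pi * n / 1100)  # toy speech-like signal
noise = rng.normal(0.0, 0.4, n.size)
enhanced = nlms(speech + noise, noise)  # reference input = the contaminating noise
```

SNRI for such a canceller is then just the ratio of noise power before and after cancellation, expressed in dB.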
Keywords: |
Adaptive filtering, Noise cancellation, SNRI, Speech enhancement, Unbiased. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
AN EFFICIENT DELAY AWARE QUALITY RELATION BASED ROUTING TREE ON WIRELESS NETWORK |
Author: |
P.RAJU , DR C.CHANDRA SEKAR |
Abstract: |
Wireless networks combine different types of nodes and vector paths to perform
information processing with active node variance. Wireless nodes are
additionally positioned to carry out routing with multiple base stations, and
such routing remains an open research topic. Many reliable routing approaches
are widely used in resource-limited applications, since they improve the data
delivery rate and reduce the average delay. However, they lack routing quality
and increase the delay time. In this paper, we propose a Menger connectivity
graph based on minimum hop count, called the Delay Aware Quality Relation based
Routing Tree (DAQR-RT). DAQR-RT provides a quality routing tree for wireless
networks. DAQR utilizes the active window concept to record the quality
relationship using previous routing data in the wireless network. This previous
routing information (i.e., historical information) is used with the objective
of efficiently reducing the delay rate in the proposed DAQR-RT. The Delay Aware
Quality Relation constructs the recognized routing tree in the wireless
network. Besides, through the active window concept, the quality of previous
information is captured and estimated before constructing the recognized
routing decision tree for effective broadcasting of information. Finally, the
quality relationship in the DAQR-RT mechanism uses a Menger connectivity graph
based on a minimum hop count field, which avoids poor connectivity and reduces
the possibility of retransmissions while broadcasting. Simulations are
conducted to measure the efficiency of the proposed DAQR-RT in wireless
networks in terms of data transmission rate, size of the data block, delay
rate, throughput, and retransmissions during broadcasting. In the performance
evaluation, we show that the Delay Aware Routing Tree algorithm achieves
performance comparable to state-of-the-art methods. Experimental analysis shows
that DAQR-RT is able to reduce the delay time while broadcasting by 25.62% and
reduce the retransmission rate by 37.5% compared to state-of-the-art works. |
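The minimum-hop criterion itself can be illustrated with a plain breadth-first search, which always returns a route with the fewest hops. The topology below is a made-up stand-in for a wireless network, not the paper's simulation setup:

```python
from collections import deque

def min_hop_path(graph, src, dst):
    """Breadth-first search over an adjacency-list graph: the first time
    the destination is dequeued, the discovered parents trace back a
    minimum-hop route - the hop-count field DAQR-RT's graph relies on."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbour in graph[node]:
            if neighbour not in parent:
                parent[neighbour] = node
                queue.append(neighbour)
    return None  # destination unreachable

topology = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
            "D": ["B", "C", "E"], "E": ["D"]}
print(min_hop_path(topology, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

DAQR-RT layers link-quality history on top of this hop-count base; the sketch shows only the base.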
Keywords: |
Wireless Network, Menger Connectivity, Delay Aware Quality Relation, Routing
Tree, Active Window |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
CRITICAL SECURITY CHALLENGES IN CLOUD COMPUTING ENVIRONMENT: AN APPRAISAL |
Author: |
MOHAMMAD SHUAIB MIR, MOHD. ADAM BIN SUHAIMI, BURHAN UL ISLAM KHAN, M. MUEEN UL
ISLAM MATTOO, RASHIDAH F. OLANREWAJU |
Abstract: |
This paper contributes a comprehensive survey of the critical security
challenges imposed by cloud computing. It highlights the challenges and
loopholes existing in the cloud environment despite all the efforts adopted by
organizations, and offers recommendations for cloud providers as well as
users. More than 20 research papers pertaining to cloud security, spanning the
past 5 years, have been studied and analyzed, and the twelve most important
security threats to organizations have been identified; these need attention
from the research community to encourage cloud adoption. This review concludes
that further measures have to be adopted to accomplish complete security in all
aspects of the cloud environment. Unlike previous approaches, this effort is
directed towards providing a thorough and inclusive review of the security
vulnerabilities in the cloud. Furthermore, this paper suggests means by which
cloud federations can enhance security while mitigating risk and building
customer trust at the same time. |
Keywords: |
Cloud Computing, Cloud Services, Cloud Service Models, Security Challenges,
Threats |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
QUANTUM KEY DISTRIBUTION THROUGH AN ANISOTROPIC DEPOLARIZING QUANTUM CHANNEL |
Author: |
MUSTAPHA DEHMANI, EL MEHDI SALMANI, HAMID EZ-ZAHRAOUY, ABDELILAH BENYOUSSEF |
Abstract: |
Quantum cryptography is one of the major applications of quantum information
theory. The quantum key distribution (QKD) protocol introduced by Bennett and
Brassard in 1984, known as BB84, is used to obtain a secure random
cryptographic secret key between the sender Alice and the receiver Bob and to
detect the presence of eavesdroppers on the quantum channel. This channel is
not always perfect; it often behaves as a quantum depolarizing channel, which
is a model for noise in quantum systems. In this work we study the depolarizing
effect with anisotropic probabilities of bit flip, phase flip, and bit-phase
flip in the presence of an eavesdropper for two methods of attack, the cloning
attack and the intercept-and-resend attack; we also show that the phase flip
probability acts strongly on the safety of the exchanged information. |
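Why the phase flip probability matters can be seen from a simple independent-error model of the sifted-key error rate (not the paper's full attack analysis): a rectilinear (Z-basis) qubit is flipped by bit-flip and bit-phase-flip errors, while a diagonal (X-basis) qubit is flipped by phase-flip and bit-phase-flip errors, and BB84 uses the two bases with equal probability.

```python
def sifted_qber(p_bit, p_phase, p_bitphase):
    """Sifted-key quantum bit error rate of BB84 over an anisotropic
    channel with independent X (bit flip), Z (phase flip) and
    Y (bit-phase flip) error probabilities."""
    qber_z = p_bit + p_bitphase    # X and Y errors flip Z-basis (rectilinear) bits
    qber_x = p_phase + p_bitphase  # Z and Y errors flip X-basis (diagonal) bits
    return 0.5 * (qber_z + qber_x)

# Phase flips enter the averaged QBER with the same weight as bit flips.
print(sifted_qber(0.05, 0.10, 0.02))
```

In this simplified model the phase flip probability contributes to the QBER symmetrically with the bit flip probability, through the diagonal basis.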
Keywords: |
Noise, Depolarizing channel, Eavesdropper, Attack, Phase flip probability |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
COMMON SENSE BASED TEXT DOCUMENT CLUSTERING ALGORITHM BY COARSE AND FINE GRAINED
CLUSTERING TECHNIQUES |
Author: |
G. LOSHMA, DR. NAGARATNA P HEDGE |
Abstract: |
Text documents constitute a major source of data, and hence it is important to
keep the data organized. Clustering is one way of organizing data, as it tends
to group similar documents together. In spite of the presence of numerous
existing clustering algorithms, there is still a pressing need for accurate
clustering algorithms. Additionally, most clustering algorithms work with
distance-based measures, which is a reason for their lack of accuracy. To
overcome these issues, this work presents a double-layered text document
clustering algorithm. The entire system is divided into phases: document
pre-processing, representation, clustering, and cluster labelling. The document
pre-processing phase prepares the document so that it is suitable for the
forthcoming processes. The document representation phase standardizes the
structure of the document, which is done with the Document Index Graph (DIG)
model. The documents are then clustered by cosine similarity, and a rough set
of clusters is formed. A second level of cluster refinement is achieved with
ConceptNet, which works on the basis of common sense reasoning. Finally, the
clusters are labelled by picking the top-ranked key-phrase. This work is tested
on the BBCSport and 20 NewsGroup datasets, and the proposed approach shows
better results in terms of F-measure, purity, and entropy. |
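The first-level (coarse) similarity measure, cosine similarity, can be sketched on toy term-frequency vectors; the documents and vocabulary below are hypothetical, and the real system derives its vectors from the DIG representation rather than raw term counts:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two term-frequency vectors: 1.0 for
    identical direction (same word mix), 0.0 for no shared terms."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy term-frequency vectors over a shared 4-word vocabulary.
doc_sport_1 = np.array([3.0, 2.0, 0.0, 0.0])
doc_sport_2 = np.array([2.0, 3.0, 1.0, 0.0])
doc_politics = np.array([0.0, 0.0, 4.0, 3.0])

# Documents about the same topic point in similar directions.
assert cosine_similarity(doc_sport_1, doc_sport_2) > cosine_similarity(doc_sport_1, doc_politics)
```

The ConceptNet pass then refines these angle-based groupings with common-sense relations that pure term overlap cannot capture.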
Keywords: |
Document clustering, DIG model, Sense based clustering, Distance based
clustering |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
A NOVEL AUTOMATED METHOD FOR COCONUT GRADING BASED ON AUDIOCEPTION |
Author: |
PRASHANTH THOMAS, Dr. ANITA H B |
Abstract: |
The quality of the coconuts used for various purposes is of utmost importance.
Demand for better quality products is constantly on the rise due to
improvements in people's standard of living. There is a possibility that a bad
coconut goes unnoticed by traders, as it is hard to decide whether a coconut is
good or bad by relying only on its external appearance. Traditionally, quality
assessment is carried out manually with the help of three senses: sight,
hearing, and smell. In the proposed work, a sound processing technique is used
in an attempt to automate this process, overcoming the drawbacks of manual
processing so that it can be used in large godowns and warehouses. The proposed
method provides a quality assessment of the coconut purely based on
audioception. While creating the database, coconuts varying in size, shape,
color, and water content were taken from several places as a source for the
dataset. Features are extracted from the sound pattern produced by the dropped
coconut, which forms the basis for classification. Sequential Minimal
Optimization (SMO), Dagging, and Naive Bayes classifiers were used, and the
results obtained were found to be encouraging. |
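The feature-extraction step can be sketched with an FFT over a simulated drop sound; the 440 Hz decaying "knock" below is synthetic, and the two features shown (dominant frequency, spectral centroid) are generic choices, not necessarily the paper's feature set:

```python
import numpy as np

def spectral_features(signal, rate):
    """Toy FFT feature extraction for a drop sound: returns the dominant
    frequency and the spectral centroid of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    dominant = freqs[np.argmax(spectrum)]
    centroid = (freqs @ spectrum) / spectrum.sum()
    return dominant, centroid

rate = 8000                                      # samples per second
t = np.arange(0, 0.5, 1.0 / rate)
thud = np.sin(2 * np.pi * 440 * t) * np.exp(-8 * t)  # decaying 440 Hz "knock"
dominant, centroid = spectral_features(thud, rate)
print(round(dominant))  # → 440
```

Feature vectors of this kind are what the SMO, Dagging, and Naive Bayes classifiers would then be trained on.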
Keywords: |
Sound Processing, Coconut Grading, Sequential Minimal Optimization (SMO),
Dagging, Naive Bayes, Fast Fourier Transform |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
|
Title: |
MDAASI: MODEL DRIVEN ARCHITECTURE APPROACH FOR APPLICATION SECURITY INTEGRATION |
Author: |
LASBAHANI ABDELLATIF, MOSTAFA CHHIBA, ABDELMOUMEN TABYAOUI, OUSSAMA MJIHIL |
Abstract: |
There have been many research works suggesting Model-driven Architecture (MDA)
approaches for automatic application generation and personalization. MDA
approach allows code generation from platform-specific models (PSMs) by the
means of generators that automatically transform models into the source code for
a chosen platform, thereby automating the software engineering process. Previous
works have widely addressed code generation, but they do not consider
non-functional aspects such as application security. In this work, we propose
additional MDA mechanisms to generate secure applications based on a given set
of security policies. In this context, the approach is used to integrate
security properties, such as Authorization, Authentication, Communication
encryption, Message Integrity, and Confidentiality of critical data, so that
security properties are incorporated into the generated software throughout the
development process, starting from the early abstraction stages. In other words,
security models are merged with the system models at different abstraction
levels by applying a set of model-to-model transformations. As a result of this
process, the system's source code and configuration files will be generated
automatically from communication diagrams by applying a model-to-code
transformation. |
Keywords: |
Model Driven Architecture, Code Generation, Application Security, Communication
Diagram. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
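The model-to-code step with security weaving described above can be illustrated with a toy generator: a platform-specific model (a plain dict here; real MDA chains use UML/XMI models and dedicated transformation languages) is transformed into source text, and an authorization annotation on an operation becomes a guard in the generated code. The model schema, field names and roles are invented for this sketch.

```python
def generate_class(psm):
    """Toy model-to-code transformation in the spirit of MDA: emit source
    text from a platform-specific model, weaving an 'authorized_roles'
    security annotation into each operation as an authorization guard."""
    lines = ["class %s:" % psm["name"]]
    for op in psm["operations"]:
        lines.append("    def %s(self, role):" % op["name"])
        roles = op.get("authorized_roles")
        if roles:
            allowed = ", ".join(repr(r) for r in roles)
            # security property woven in at generation time
            lines.append("        if role not in (%s,):" % allowed)
            lines.append("            raise PermissionError(%r)" % op["name"])
        lines.append("        return '%s executed'" % op["name"])
    return "\n".join(lines)
```

The generated class enforces the policy at run time even though the developer never wrote the check by hand, which is the point of integrating security during generation rather than after it.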
|
Title: |
HSLA: HETEROGENEOUS STORAGE-TIER LOG ANALYZER OVER HADOOP |
Author: |
NAWAB MUHAMMAD FASEEH QURESHI, DONG RYEOL SHIN, ISMA FARAH SIDDIQUI, ASAD ABBAS |
Abstract: |
Hadoop ecosystem processes extremely large datasets in a parallel computing
environment. The Hadoop Distributed File System (HDFS) manages operational
aspects of processed, unprocessed and log archives. Recently, HDFS has adopted a
heterogeneous environment that enables the file system to cope with storage-tier
data processing. This increases the functional utilization of storage devices
but distributes node capacity unevenly across storage tiers. Thus, a
high-priority job suffers delay latency, and each storage device (i.e., disk,
SSD and RAM) incurs its own time overhead to release a non-priority job's data.
To analyze the complexity of the storage tier, we present the Heterogeneous
Storage-tier Log Analyzer (HSLA) strategy, which collects control-flow and
data-flow events into a central repository and performs analysis over the log
datasets. The analysis metrics consist of pre-emptive measures observed through
event traces. The experimental results show that HSLA captures a broad view of
the storage-tier contingency problem and proposes a node computing-capacity
sharing strategy to balance the processing of HDFS blocks across storage tiers. |
Keywords: |
Hadoop, HDFS, Log analysis, Storage-tier, heterogeneous node. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
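The core of HSLA's analysis step, aggregating events collected in the central repository into per-tier metrics, could look like the sketch below. The event schema (a tier name plus a per-job release duration) is an assumption for illustration; the paper does not publish its log format.

```python
from collections import defaultdict

def analyze_tier_latency(events):
    """Aggregate (tier, duration_ms) event records from a central log
    repository into the mean release latency per storage tier."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for e in events:
        sums[e["tier"]] += e["duration_ms"]
        counts[e["tier"]] += 1
    return {tier: sums[tier] / counts[tier] for tier in sums}
```

A scheduler could then use such per-tier latency profiles to decide where a high-priority job's blocks should land, which is the balancing decision the abstract describes.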
|
Title: |
ESMP: EXPLORATORY SCALE FOR MALWARE PERCEPTION THROUGH API CALL SEQUENCE
LEARNING |
Author: |
G. BALAKRISHNA, DR. V.RADHA, DR. K.VENU GOPALA RAO |
Abstract: |
One of the critical factors of computer-aided services and data security is
defending against malicious executables, known as malware. Because of zero-day
activity, sensing and preventing the malicious behavior of vulnerable
executables is a continuous process. The contemporary literature evinces many
malware detection approaches. Malware detection by dynamic assessment is
considered significant for exploring the behavioral information of malicious
executables. Recent malware analysis concludes that obfuscating malicious
executables is increasing the complexity of defending against such attacks.
This practice strongly demands more accurate malware defense approaches; hence
this manuscript contributes an exploratory scale that analyzes API call
sequences in order to estimate the scope of malicious activity by an
executable. The proposed model, called Exploratory Scale for Malware Perception
(ESMP), is a machine learning strategy that acquires knowledge from executables
labeled as either malicious or benevolent. This knowledge is then used to
define the proposed exploratory scale. ESMP is even capable of identifying
zero-day exploits of malware. The experimental study was carried out on a set
of executables labeled as either malicious or benevolent: 70% of the given
executables were used to train ESMP and define the exploratory scale, and the
remaining 30% were unlabeled and used to test the significance of ESMP's
malware detection accuracy. Statistical metrics such as accuracy, sensitivity
and specificity were assessed to report the scalability, robustness and
detection accuracy of the ESMP. |
Keywords: |
Executables, Malwares, Benevolent, Zero day activities, Exploratory. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
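A minimal sketch of classifying an executable by its API call sequence, in the spirit of ESMP: bigram counts stand in for the paper's (unspecified) sequence features, overlap with per-class profiles stands in for its learned exploratory scale, and the evaluation metrics match those named in the abstract. All API names and the profile scheme are illustrative.

```python
from collections import Counter

def bigram_features(api_calls):
    """Count adjacent API-call pairs in one executable's trace."""
    return Counter(zip(api_calls, api_calls[1:]))

def train_profiles(sequences, labels):
    """Accumulate bigram counts per class from labeled traces."""
    profiles = {"malicious": Counter(), "benign": Counter()}
    for seq, label in zip(sequences, labels):
        profiles[label] += bigram_features(seq)
    return profiles

def classify(seq, profiles):
    """Assign the class whose profile overlaps the trace's bigrams most."""
    feats = bigram_features(seq)
    return max(profiles,
               key=lambda lbl: sum(min(feats[b], profiles[lbl][b]) for b in feats))

def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall on malware) and specificity,
    the statistical metrics the abstract evaluates."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

Sensitivity captures how much malware slips through, specificity how many benign executables are falsely flagged; reporting both guards against a classifier that simply labels everything malicious.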
|
Title: |
USING COGNITIVE AGENT IN MANUFACTURING SYSTEMS |
Author: |
KHULOOD ABU MARIA , NAGHAM A. A. , TAREK KANAN, EMAN ABU MARIA |
Abstract: |
Cognitive agents for business and industrial environments are one of the most
dominant ideas in multi-agent systems today. We propose a general, flexible, and
powerful architecture to build software agents that embed artificial cognitive
factors. The proposed agent possesses sensible knowledge and reactive abilities,
and interacts with the external complex business environment, including other
agents. We examine if artificial cognitive states can improve the performance of
agents in some business and industrial conditions. Our agent model generates
responsive actions in reaction to certain stimuli. Cognitive Agent Model (CAM)
is proposed for this purpose. Sales and Production Planning (SPP) (as business
and manufacturing application) is chosen to demonstrate the use of our agent.
NetLogo, an agent-oriented programming language, is used to simulate and
implement our proposed models. |
Keywords: |
Agent, Agent Modeling, Cognitive Factors, Believable, Behavior, Decision Making,
Intelligent, Multi-Agent System.
|
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
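The stimulus-to-action loop the abstract describes, reactive responses modulated by an internal cognitive state, might be sketched as below. The paper's CAM is implemented in NetLogo, so this Python fragment is only an illustration; the "stress" state variable and the rule table are invented for the example.

```python
class CognitiveAgent:
    """Minimal stimulus -> cognitive state -> responsive action loop
    (illustrative stand-in for the paper's Cognitive Agent Model)."""
    def __init__(self, rules):
        self.rules = rules          # stimulus -> (state delta, reactive action)
        self.state = {"stress": 0}  # invented cognitive factor

    def perceive(self, stimulus):
        delta, action = self.rules.get(stimulus, (0, "idle"))
        self.state["stress"] = max(0, self.state["stress"] + delta)
        if self.state["stress"] > 2:
            return "escalate"       # cognitive state overrides the reactive rule
        return action
```

The point of the cognitive layer is visible in the override: the same stimulus can yield a different action depending on the agent's accumulated internal state, which a purely reactive agent cannot do.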
|
Title: |
OFFLINE ARABIC HANDWRITTEN ISOLATED CHARACTER RECOGNITION SYSTEM USING SUPPORT
VECTOR MACHINE AND NEURAL NETWORK |
Author: |
MOHAMED AL-JUBOURI, HESHAM ABUSAIMEH |
Abstract: |
The Arabic language has received little attention in this field compared with
other languages, due to the highly cursive nature of handwritten Arabic,
especially its dots. The difficulty lies in the complexity of locating the wavy
shapes in the characters, which is solved by combining several feature
extraction methods that work separately. The proposed isolated Arabic offline
handwritten recognition system is based on two classification stages (hybrid).
The first stage is a linear Support Vector Machine (SVM) that splits the
dataset characters into two groups, characters with dots and characters without
dots, assigning appropriate extraction features to each group. This division
reduces the recognition error rate for characters with similar-looking shapes.
The second stage feeds the first stage's result to a Neural Network (NN), which
achieves high correctness and accuracy through training. Finally, a fully
recognized character is acquired. This work is implemented on the (IFN/ENIT)
dataset; the SVM classifier significantly reduces the load of the NN stage, so
the system can be used for real-time applications. The total accuracy of the
proposed work reaches 92.2%. |
Keywords: |
Arabic Handwritten, Optical Character Recognition. Support Vector Machine,
Feature Extraction, Neural Network, IFN/ENIT.
|
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
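The two-stage hybrid described above can be sketched as a router plus a per-group classifier. This is an illustration only: a generic linear decision function stands in for the trained linear SVM, a nearest-centroid lookup stands in for the trained neural network, and the features, weights and centroids are invented.

```python
def two_stage_classify(features, w, b, dot_model, plain_model):
    """Hybrid pipeline sketch: stage 1 is a linear decision (SVM-like)
    that routes a character to the 'with dots' or 'without dots' group;
    stage 2 assigns the label within that group (nearest centroid here,
    a trained neural network in the paper)."""
    score = sum(wi * xi for wi, xi in zip(w, features)) + b
    model = dot_model if score > 0 else plain_model   # stage 1: group split
    # stage 2: nearest centroid within the selected group
    return min(model, key=lambda label:
               sum((xi - ci) ** 2 for xi, ci in zip(features, model[label])))
```

Splitting first means each second-stage classifier only has to discriminate within its group, which is why the abstract reports a lower error rate for similar-looking shapes that differ mainly in their dots.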
|
Title: |
A NEW COLOR IMAGE WATERMARKING TECHNIQUE USING MULTIPLE DECOMPOSITIONS |
Author: |
SALAM ABDULNABI THAJEEL, LAMYAA MOHAMMED KADHIM, SALLY ALI ABDLATEEF |
Abstract: |
Continuous and rapid development of network technologies has made communication
faster and simpler nowadays. This improvement has led to an increasing amount
and variety of data (i.e., texts, videos, images, speech, and audio), which are
distributed and exchanged through networks. However, many issues have emerged,
such as illegitimate copying and false claims of ownership. Watermarking is
widely used as a technique for copyright protection of digital images. In this
study, we propose a robust watermarking technique for color images using
multiple decompositions to protect the owner's copyright. The Arnold transform is
used to encrypt the watermark image to increase security. The given cover image
is subjected to slantlet transform, contourlet transform, Schur decomposition,
and discrete cosine transform. Eventually, the encrypted watermark is embedded.
Experimental results show that the proposed scheme achieves good
imperceptibility and high resistance against various attacks. |
Keywords: |
Image Watermark, SLT, Arnold Transform, Contourlet Transform, Schur
Decomposition. |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
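The Arnold transform step, scrambling the watermark before embedding, is concrete enough to sketch: the code below applies the standard Arnold cat map to an N x N image and its exact inverse for extraction. The rest of the pipeline (slantlet, contourlet, Schur and DCT stages) is not reproduced here.

```python
def arnold_transform(image, iterations=1):
    """Arnold cat map scrambling of an N x N image (list of lists):
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied `iterations` times."""
    n = len(image)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
        image = out
    return image

def arnold_inverse(image, iterations=1):
    """Exact inverse map used at extraction:
    (x, y) -> ((2x - y) mod N, (y - x) mod N)."""
    n = len(image)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(2 * x - y) % n][(y - x) % n] = image[x][y]
        image = out
    return image
```

Because the map is periodic, the iteration count acts as a key: without it, an attacker who extracts the embedded data still sees only a scrambled watermark, which is the security gain the abstract claims.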
|
Title: |
MLD-LEACH: AN ENHANCED LEACH PROTOCOL FOR MULTIMEDIA WIRELESS SENSOR NETWORK |
Author: |
M. T. BENNANI, M. AIT KBIR |
Abstract: |
Transferring multimedia content such as videos and images over a wireless sensor
network represents an open subject of research. It can be treated in different
ways. For instance, we can send images through the network without compression
using the spatial domain or the frequency domain. In this paper, we use
wireless sensor network routing protocols, such as GPSR, for image transmission
by processing images in the frequency domain using the DCT transform. The aim
of our work is to propose a new variant of the LEACH protocol, named MLD
(Multiple Level Delay)-LEACH, which is tested by studying the impact of image
compression
on energy consumption and the quantity of images received by the sink. For this
purpose, we used Omnet++/Castalia as a simulator. Two applications are used in
this simulation: the first one sends images without compression, and the second
application sends images compressed in the frequency domain. We are interested
in this approach in order to study the network's energy behavior over time. |
Keywords: |
Castalia, WMSN, LEACH protocol, OpenCV |
Source: |
Journal of Theoretical and Applied Information Technology
31st May 2017 -- Vol. 95. No. 10 -- 2017 |
Full
Text |
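The frequency-domain path, transforming a block with the DCT and keeping only low-frequency coefficients before transmission, can be sketched as below. This is a 1-D orthonormal DCT-II/DCT-III pair for illustration only; the paper works on images (2-D DCT) inside the Omnet++/Castalia simulator, and the `keep` cutoff is an invented parameter.

```python
import math

def dct(block):
    """Orthonormal 1-D DCT-II of a block of samples."""
    n = len(block)
    out = []
    for k in range(n):
        s = sum(block[t] * math.cos(math.pi * (t + 0.5) * k / n) for t in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse transform (DCT-III) used at the sink to reconstruct the block."""
    n = len(coeffs)
    return [coeffs[0] * math.sqrt(1 / n)
            + sum(coeffs[k] * math.sqrt(2 / n)
                  * math.cos(math.pi * (t + 0.5) * k / n) for k in range(1, n))
            for t in range(n)]

def compress(block, keep):
    """Zero all but the first `keep` DCT coefficients: fewer values to
    radio, at the cost of reconstruction error in high frequencies."""
    c = dct(block)
    return c[:keep] + [0.0] * (len(c) - keep)
```

Dropping high-frequency coefficients is what trades image fidelity for transmission energy, which is exactly the compression-versus-consumption tradeoff the simulation studies.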
|
|
|