|
Submit Paper / Call for Papers
The journal receives papers in a continuous flow and will consider articles
from a wide range of Information Technology disciplines, ranging from the most
basic research to the most innovative technologies. Please submit your papers
electronically to our submission system at http://jatit.org/submit_paper.php in
MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
please remember to include all your personally identifiable information in the
manuscript before submitting it for review; we will remove the necessary
information on our side. Submissions to JATIT should be full research / review
papers (properly indicated below the main title).
|
Journal of Theoretical and Applied Information Technology
October 2015 | Vol. 80 No.2 |
Title: |
IMPROVEMENT OF PERFORMANCE INTRUSION DETECTION SYSTEM (IDS) USING ARTIFICIAL
NEURAL NETWORK ENSEMBLE |
Author: |
WIHARTO, ABDUL AZIZ, UDHI PERMANA |
Abstract: |
Computer security has become a main focus in computer networking because of the
high threat of attacks from the internet in recent years. Therefore, an
Intrusion Detection System (IDS) that monitors computer network traffic and
watches for suspicious activities in a computer network is required. Research on
intrusion detection systems has been carried out, and several studies have used
artificial neural networks combined with a fuzzy clustering method to detect
attacks. However, an issue arises from the use of such algorithms: a single
artificial neural network can produce overfitting in the intrusion detection
system output. This research used two artificial neural network methods, namely
Levenberg-Marquardt and Quasi-Newton, to overcome that issue. Both algorithms
are used to detect attacks on computer networks. In addition, Possibilistic
Fuzzy C-Means (PFCM) clustering is applied before the data enter the neural
network ensemble, whose outputs are combined by simple averaging; the Naive
Bayes classification method is then applied to the ensemble output. The dataset
used in the research was the NSL-KDD dataset, an improvement of KDD Cup'99, with
KDDTrain+ used for training and KDDTest+ for testing. Evaluation results show
good precision in the detection of DoS (89.82%), R2L (75.78%), normal (72.25%)
and Probe (70.70%); however, U2R reaches only 14.62%. For recall, good results
are achieved for the normal class (91.44%), Probe (87.11%) and DoS (83.31%),
while low results occur for U2R (9.50%) and R2L (6.14%). Meanwhile, accuracy is
lowest for the normal category (81.18%) and highest for U2R (98.70%). The
results show that the neural network ensemble method produces a better average
accuracy than previous studies, amounting to 90.85%. |
Keywords: |
Accuracy, Anomaly Based, Intrusion Detection System, Neural Network Ensembles,
NSL-KDD |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
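The pipeline described in the abstract above (clustering, two neural networks, simple
averaging, then Naive Bayes) can be illustrated with a minimal scikit-learn sketch. This is
an assumption-laden stand-in, not the authors' implementation: synthetic data replaces
NSL-KDD, KMeans replaces PFCM, and scikit-learn's 'lbfgs' (itself a quasi-Newton method) and
'adam' solvers replace the Levenberg-Marquardt and Quasi-Newton training used in the paper.

```python
# Illustrative ensemble pipeline: cluster -> two neural nets -> simple average -> Naive Bayes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the NSL-KDD data (assumption).
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Clustering step (stand-in for PFCM): append cluster membership as an extra feature.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_tr)
X_tr_c = np.hstack([X_tr, km.predict(X_tr).reshape(-1, 1)])
X_te_c = np.hstack([X_te, km.predict(X_te).reshape(-1, 1)])

# Two neural networks trained with different optimizers.
net1 = MLPClassifier(hidden_layer_sizes=(32,), solver="lbfgs", max_iter=500, random_state=0)
net2 = MLPClassifier(hidden_layer_sizes=(32,), solver="adam", max_iter=500, random_state=0)
net1.fit(X_tr_c, y_tr)
net2.fit(X_tr_c, y_tr)

# Simple average of the class-probability outputs of the ensemble members.
avg_tr = (net1.predict_proba(X_tr_c) + net2.predict_proba(X_tr_c)) / 2.0
avg_te = (net1.predict_proba(X_te_c) + net2.predict_proba(X_te_c)) / 2.0

# Naive Bayes classifier applied to the averaged ensemble output.
nb = GaussianNB().fit(avg_tr, y_tr)
print("ensemble accuracy:", accuracy_score(y_te, nb.predict(avg_te)))
```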
|
Title: |
DATA COMPRESSION METHODS |
Author: |
ALEKSANDR BORISOVICH VAVRENYUK, ARKADY PAVLOVICH KLARIN, VIKTOR VALENTINOVICH
MAKAROV, VIKTOR ALEXANDROVICH SHURYGIN |
Abstract: |
Currently, there is a sharp worldwide increase in the volume of transmitted,
stored, and processed information. Despite a certain share of superficiality,
the term "information explosion" describes the existing situation rather
precisely. According to American research statistics, the volume of information
created by mankind by the year 2007 and stored on artificial carriers amounted
to 295 billion gigabytes (2.95×10^20 bytes). These volumes determine the great
importance of the "data compression" field of knowledge. The concept of "data
compression" (DC) is very broad. Therefore, this article attempts a
systematization (classification) of DC, which can facilitate the study of the
problem as a whole as well as the choice of an appropriate method for solving a
particular problem. The best-known areas and specific examples of DC
application are considered. |
Keywords: |
Data Compression, Application Areas, Information System, Physical Experiment,
Collider, Space Researches, Efficient Data Presentation. |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
|
Title: |
A CONCEPTUAL METAMODEL APPROACH TO ANALYSING RISKS IN BUSINESS PROCESS MODELS |
Author: |
HANANE LHANNAOUI, MOHAMMED ISSAM KABBAJ, ZOHRA BAKKOURY |
Abstract: |
Business processes perform ineffectively due to various risks. Several methods
have been proposed for the identification and management of risks in business
process environments. In this paper, we propose a technique for analysing risks
in business process models that originates from the safety domain. This
approach initiates the improvement of business process models in an early phase
of the business process lifecycle by translating risk analysis output into
design. Moreover, we introduce a conceptual meta-model for our integrated
approach, which describes specific concepts of business-related risk analysis
and facilitates the exploitation of risk analysis output for further process
improvement. |
Keywords: |
UML, HAZOP, Risk Analysis, Business Process Models, Design |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
|
Title: |
ENSURING SECURITY ON MOBILE DEVICE DATA WITH TWO PHASE RSA ALGORITHM OVER CLOUD
STORAGE |
Author: |
SUJITHRA. M, PADMAVATHI. G |
Abstract: |
Mobile devices are rapidly becoming a key computing platform and an essential
part of human life, as the most effective and convenient communication tools
not bounded by time and place. With the rapid growth of mobile devices and
mobile applications, the need for mobile security has also increased
dramatically. The increasing use of mobile devices has created a requirement
for cloud computing on these devices, which gave birth to Mobile Cloud
Computing. Mobile Cloud Computing refers to an infrastructure where data
storage can happen away from the mobile device, i.e., in the cloud. To ensure
the correctness of users' data in the cloud, the framework mainly focuses on
data security over the Cloud Computing paradigm by proposing a new
cryptographic technique named Two Phase RSA Encryption. A comparison has been
conducted by running several encryption settings on different sizes of data
blocks to evaluate the algorithms' encryption/decryption speed, and the results
are compared to choose the best data encryption algorithm for implementation in
future work. |
Keywords: |
Encryption, Decryption, Mobile Device, Cloud Storage, Data Storage |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
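The abstract does not spell out the two phases of the proposed scheme, so the sketch below
only mirrors the benchmarking setup it describes: timing plain RSA-OAEP encryption and
decryption over data blocks of different sizes with Python's cryptography package. The key
size, block sizes, and chunking scheme are illustrative assumptions.

```python
# Benchmark sketch: time RSA-OAEP over several data block sizes (not the paper's Two Phase RSA).
import os, time
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

for size in (1024, 4096, 16384):          # data block sizes in bytes (assumed)
    data = os.urandom(size)
    chunk = 190                           # max plaintext per 2048-bit RSA-OAEP(SHA-256) operation
    start = time.perf_counter()
    blocks = [pub.encrypt(data[i:i + chunk], oaep) for i in range(0, len(data), chunk)]
    enc_t = time.perf_counter() - start
    start = time.perf_counter()
    plain = b"".join(key.decrypt(b, oaep) for b in blocks)
    dec_t = time.perf_counter() - start
    assert plain == data                  # round-trip sanity check
    print(f"{size} bytes: encrypt {enc_t:.4f}s, decrypt {dec_t:.4f}s")
```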
|
Title: |
EFFICIENT REVERSE SKYLINE ALGORITHM FOR DISCOVERING TOP K-DOMINANT PRODUCTS |
Author: |
SHAILESH KHAPRE, M.S. SALEEM BASHA, A. MOHAMED ABBAS |
Abstract: |
The recent boom in internet growth and advances in internet security have led
to rapid growth in e-commerce and related services. In this context, capturing
the preferences of customers plays an important role in decisions about the
design and launch of new products in the market. The science that primarily
deals with the support of such decisions is Operational Research. Since many
research problems in Operational Research involve the analysis of large volumes
of data, there has been keen interest in data management methods for solving
them. In this work we develop new algorithms for two problems related to the
analysis of large volumes of consumer preferences, with practical applications
in market research. The first problem we consider is finding the potential
buyers of a product (potential customer identification); we formulate it as a
reverse skyline query and propose a new algorithm called ERS. Secondly,
practical applications often require simultaneous processing of multiple
queries. To address this, we formulate a new type of query, referred to as a
query for the k dominant candidates (k-dominant query). Our experimental
evaluation validates the efficiency of the proposed algorithm, which
outperforms BRS by a huge margin. |
Keywords: |
Skyline Algorithms, Market Analysis, Personalized Service Mining, Personalized
marketing, E-Commerce. |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
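The ERS and k-dominant algorithms themselves are not given in the abstract, so the sketch
below only shows the Pareto-dominance test and a naive skyline scan that reverse skyline and
dominance queries build on. The assumption that lower attribute values are better, and the
toy product tuples, are illustrative.

```python
# Naive skyline computation over a small set of product attribute vectors.
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly better in at least one
    (assuming lower values are better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Return the points not dominated by any other point (naive O(n^2) scan)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

products = [(3, 7), (5, 4), (2, 9), (4, 4), (6, 2)]   # e.g. (price, distance), illustrative
print(skyline(products))
```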
|
Title: |
A HYBRID OF SINGULAR VALUE DECOMPOSITION AND REGRESSION ANALYSIS COLLABORATIVE
FILTERING WITH LINEAR INCREMENTAL UPDATE METHOD |
Author: |
WIJAK SRISUJJALERTWAJA, DUSSADEE PRASERTTITIPONG |
Abstract: |
The collaborative filtering (CF) approach comprises several well-known
techniques that are successful in creating personalized recommendations.
Singular Value Decomposition (SVD)-based techniques are the dominant class of
CF techniques; techniques rooted in the SVD concept mostly return more accurate
recommendations than others. These SVD-based techniques follow the model-based
CF concept, in which the relationships among historical ratings between users
and items are learned. The learning process is usually performed off-line;
model parameters are estimated in this off-line process to construct the
knowledge models that are further used in on-line recommendation environments.
However, the accuracy of SVD-based techniques decreases as the number of
solutions they have to evaluate grows relative to the steady amount of
knowledge they have learned, which is known as the users-items sparsity
problem. On the other hand, memory-based CF techniques have also been suggested
in the literature. These techniques rely on collecting knowledge about the
ratings between users and items and re-computing over the entire knowledge
every time recommendations are requested. Thus, the users-items sparsity
problem is not an obstacle for memory-based CF techniques, because new
knowledge is always incorporated into the fundamental knowledge. Even though
memory-based CF techniques do not suffer from the users-items sparsity problem,
they are impractical for on-line environments, because they take a long time to
estimate even a single recommendation. Hence, this paper proposes a hybrid of
an SVD-based technique and a memory-based technique for CF. A regression
analysis algorithm is proposed as the memory-based CF technique. An incremental
update method with linear-time refreshment is also presented to make on-line
knowledge maintenance practical. An empirical experiment was conducted; the
accuracy results acquired from the hybrid of the RSVD technique and the
proposed linear regression analysis with the incremental update method showed
the highest accuracy, especially in users-items sparse situations. |
Keywords: |
Recommender Systems, Collaborative Filtering, Singular Value Decomposition,
Incremental Update, Users-Items Sparse Problem |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
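As a point of reference for the SVD side of the hybrid, the sketch below predicts a missing
rating on a toy user-item matrix with a truncated SVD in NumPy. The mean-filled imputation,
the rank, and the matrix are illustrative assumptions; the paper's RSVD/regression hybrid and
its incremental update are not reproduced.

```python
# Minimal truncated-SVD rating prediction on a toy user-item matrix.
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)   # 0 marks an unknown rating

mask = R > 0
mean = R[mask].mean()
filled = np.where(mask, R, mean)            # crude imputation so the decomposition can run

U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2                                       # number of latent factors kept (assumption)
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print("predicted rating for user 1, item 2:", round(approx[1, 2], 2))
```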
|
Title: |
FORMAL ANALYSIS OF WEIGHTED LONGITUDINAL ROUTERS NETWORK VIA SCALE-FREE NETWORK.
A CASE STUDY |
Author: |
FIDA HUSSAIN CHANDIO, ABDUL WAHEED MAHESAR, SAYED CHHATTAN SHAH, FOZIA ANWAR,
AKRAM ZEKI |
Abstract: |
Identifying the central nodes in complex networks has remained an interesting
and very important issue in network analysis. The identification of the main
nodes in a network can lead to answers for security and other problems,
depending on the type of complex network under analysis. Different topological
metrics of the network can be used to locate the major nodes, but the degree
and betweenness (load) centralities play a very important role in the evolution
and communication of nodes in growing networks. Unfortunately, these metrics
have been analyzed in different complex systems mainly on the basis of the
number of links to nodes in the network, or with a focus on the weights of
links. Locating the main nodes in a network therefore depends not only on the
links but largely on the weights of the links. The internet's router network is
an example of a scale-free network that follows a power-law distribution,
producing an inhomogeneous structure in which some nodes have a large number of
links while many have only a few; further, in this type of distribution a few
nodes become very important. In this paper, we analyze the behavior of a
routers network using two centrality metrics with weighted and un-weighted
links, based on a dataset of the PTCL routers network in Pakistan. Furthermore,
using these centrality measures we try to show that the weight of links is
important compared to the number of links, following the shift from "rich get
richer" to "fit get richer" in routers networks. Moreover, we show that the
weighted routers network is closer to a scale-free network than the un-weighted
one, and due to this phenomenon such networks sustain their robustness. |
Keywords: |
Scale-free networks; weighted networks analysis; load distribution; degree
distribution; shortest distance; graph theory |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
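A minimal sketch of the weighted versus un-weighted comparison is shown below using
networkx. The PTCL dataset is not available here, so a small hand-made graph stands in, and
the decision to invert link weights for the betweenness computation is an assumption.

```python
# Compare weighted and un-weighted degree and betweenness (load) centralities on a toy graph.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("r1", "r2", 10), ("r1", "r3", 1), ("r2", "r3", 4),
    ("r2", "r4", 8), ("r3", "r5", 2), ("r4", "r5", 6),
])

# Un-weighted degree vs weighted degree (node strength).
print("degree:  ", dict(G.degree()))
print("strength:", dict(G.degree(weight="weight")))

# Betweenness centrality: networkx treats 'weight' as a distance, so invert the link
# weights if heavier links should mean shorter (preferred) paths (assumption).
for u, v, d in G.edges(data=True):
    d["dist"] = 1.0 / d["weight"]
print("betweenness (un-weighted):", nx.betweenness_centrality(G))
print("betweenness (weighted):  ", nx.betweenness_centrality(G, weight="dist"))
```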
|
Title: |
A NEW PARALLEL AND DISTRIBUTED FRAMEWORK BASED ON MOBILE AGENTS FOR HPC: SPMD
APPLICATIONS |
Author: |
FATÉMA ZAHRA BENCHARA, MOHAMED YOUSSFI, OMAR BOUATTANE, HASSAN OUAJJI |
Abstract: |
This paper proposes a new distributed framework and its main components for HPC
(High Performance Computing). It is based on a cooperative mobile agent model
that implements teamwork strategies to execute parallel programs in a
distributed manner. The program and data to be executed are encapsulated in a
team leader agent, which deploys its worker agents, the AVPUs (Agent Virtual
Processing Units). Each AVPU moves to a specific node, performs its
computation, and provides its results. The large amount of data and the number
of AVPUs to be managed by the team leader agent can negatively affect HPC
performance. In this work we therefore focus on introducing a specific mobile
agent, the MPA (Mobile Provider Agent), which implements mechanisms for
managing the data, the tasks, and the AVPUs to ensure a load-balancing model.
It also applies additional strategies to maintain the other performance keys,
thanks to the mobile agents' several skills. |
Keywords: |
High Performance Computing, Distributed Computing Environment, SPMD
Applications, Mobile Agents, Big Data Processing |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
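The framework above is agent-based, so the following is only a loose, hedged illustration of
the SPMD leader/worker division of work it organizes: a "leader" splits the data and identical
"worker" programs process the chunks in parallel. Python's multiprocessing stands in for
mobile agents, and all names and the workload are illustrative.

```python
# SPMD-style sketch: a leader splits the data, workers run the Same Program on Multiple Data.
from multiprocessing import Pool

def worker_program(chunk):
    # Same program applied to each data chunk: here, a simple sum of squares.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunks = [data[i::n_workers] for i in range(n_workers)]  # leader splits the workload
    with Pool(n_workers) as pool:
        partials = pool.map(worker_program, chunks)          # workers compute in parallel
    print("total:", sum(partials))                           # leader aggregates the results
```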
|
Title: |
ANALYSIS OF SHIFT REGISTER USING INTEGRATED POWER AND CLOCK DISTRIBUTION NETWORK
BASED MASTER SLAVE FLIP FLOP |
Author: |
S.THOMAS NIBA |
Abstract: |
The increase in integrated circuit design complexity, density, and performance,
specifically in 3-D integration, requires more complicated power, ground, and
clock distribution networks. These networks consume a large portion of the
limited on-chip metal resources. In digital ICs, the clock distribution network
(CDN) distributes the clock signal, which acts as a timing reference within the
system. Since the clock signal is heavily loaded, has the highest capacitance,
and operates at high frequencies, the CDN consumes a large share of the total
power in synchronous systems. The integrated power and clock distribution
network (IPCDN) is used to reduce the metal requirements, routing complexity,
and power, and is proposed in order to eliminate the need for the global and
local clock distribution networks. In the IPCDN, a differential power clock
signal with a suitable dc voltage level and a sinusoidal voltage-swing clock is
formed from the differential positive and negative power clock signals. The
IPCDN does not require any change to conventional combinational and sequential
circuit design. The elements of the IPCDN, including the LC differential power
clock signal driver and the clock buffer, have been simulated using Taiwan
Semiconductor Manufacturing Company 65-nm CMOS technology with a power clock
signal having a 1-V dc component and a 400-mV sinusoidal swing at a frequency
of 5 GHz. The behaviour of a master-slave flip-flop with the IPCDN was analysed
at extreme corners. From these results, this master-slave flip-flop can be
applied in shift register applications and the results compared with a shift
register based on a conventional master-slave flip-flop. |
Keywords: |
Clock Buffer, Clocked Inverter (Domino Logic), Combinational, Pass Transistor,
Distribution Network, LC Differential Driver, Power–Clock, Sequential. |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
|
Title: |
MONITORING, INTROSPECTING AND PERFORMANCE EVALUATION OF SERVER VIRTUALIZATION IN
CLOUD ENVIRONMENT USING FEED BACK CONTROL SYSTEM DESIGN |
Author: |
VEDULA VENKATESWARA RAO, Dr. MANDAPATI VENKATESWARA RAO |
Abstract: |
Today's data centers are rapidly moving towards the use of server
virtualization as a preferred way of sharing a pool of server hardware
resources between multiple "guest domains" that host different applications.
The hypervisors of the virtualized servers, such as Xen, use fair schedulers to
schedule the guest domains according to priorities or weights assigned to the
domains by administrators. The hosted application's performance is sensitive to
the scheduling parameters of the domain on which the application runs. However,
the exact relationship between these domain parameters and application
performance measures such as response time or throughput is neither obvious nor
static. Furthermore, due to the dynamics present in the system, there is a need
for continuous tuning of the scheduling parameters. The main contribution of
our work is the design and implementation of a controller that optimizes the
performance of applications running on guest domains. We focus on a scenario
where a specific target for the response time of an application may not be
provided. The goal is to dynamically compute the CPU shares for the virtual
machines such that application throughput is maximized while keeping the
response time as low as possible, with the minimum possible allocation of CPU
share for the guest domain. The optimizing controller design is based on
feedback control theory. The controller computes the values of the scheduling
parameters for every guest domain in such a way that it minimizes CPU usage and
response time and maximizes application throughput. To evaluate our work, we
deployed a multi-tier application in virtual machines hosted on the Xen virtual
machine monitor. The performance evaluation results show that the controller
brings the cap value close to the expected optimal value. The optimizing
controller also responds rapidly to changes in the system when a disturbance
task is introduced or the load on the application is changed. |
Keywords: |
Cloud Computing, Data Center, Virtualization, hypervisor, Xen, virtual machine,
Green IT, scheduler, performance, response time, throughput, feedback control
theory. |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
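The abstract does not give the control law, so the sketch below is only a hedged illustration
of the feedback idea: a proportional controller nudges a guest domain's CPU cap until a toy
response-time model meets a target. The plant model, gain, and limits are all assumptions; a
real deployment would read measurements from the hypervisor's scheduler instead.

```python
# Toy feedback loop adjusting a CPU cap toward a response-time target.
def measure_response_time(cap):
    # Hypothetical plant: response time falls as the CPU cap grows, with diminishing returns.
    return 50.0 + 2000.0 / cap

target_rt = 80.0      # desired response time (ms), illustrative
cap = 20.0            # initial CPU cap (% of one core), illustrative
gain = 0.2            # proportional gain, illustrative

for step in range(15):
    rt = measure_response_time(cap)
    error = rt - target_rt
    cap = min(100.0, max(5.0, cap + gain * error))   # raise the cap when we are too slow
    print(f"step {step}: cap={cap:5.1f}%  response_time={rt:6.1f} ms")
```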
|
Title: |
A REPORT ON REDUCING DIMENSIONS FOR BIG DATA USING KERNEL METHODS |
Author: |
CH. RAJA RAMESH, K RAGHAVA RAO, G.JENA, C V SASTRY |
Abstract: |
Big Data is a very popular term for processing huge volumes of data; it brings
many opportunities to academia, industry, and society. Big Data holds great
promise for the discovery of patterns and heterogeneities that are not possible
with small data, but it also faces unique computational and statistical
challenges, including scalability and storage; among these are noise
accumulation, spurious correlation, incidental endogeneity, and measurement
errors. Most of the problems arise from the size of the data combined with a
large number of attributes. Irrelevant attributes add noise to the data and
increase the size of the model. Moreover, datasets with many attributes may
contain groups of attributes that are correlated, all of which may be measuring
the same feature. One way of dealing with this problem is to eliminate
attributes (dimensions) that do not exhibit large variance and hence do not
affect the clusters. Several techniques exist to ignore certain attributes or
dimensions, such as Principal Component Analysis (PCA) and Singular Value
Decomposition (SVD). We review these techniques in this paper with respect to
clustering. We plan to use principal component analysis and kernel methods for
dimensionality reduction, an essential preprocessing technique for large-scale
datasets that can be used to improve both the efficiency and effectiveness of
classifiers. |
Keywords: |
Big Data, Dimensionality Reduction, Feature Extraction, Fuzzy, Term Data |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
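A minimal sketch of linear PCA versus kernel PCA as a preprocessing step before clustering is
shown below. A small synthetic two-ring dataset stands in for the large-scale data discussed
above, and the kernel, gamma, and cluster count are illustrative assumptions.

```python
# Linear PCA vs RBF kernel PCA as dimensionality reduction before KMeans clustering.
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.cluster import KMeans

X, _ = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

X_pca = PCA(n_components=2).fit_transform(X)
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# Cluster in each reduced space; the RBF kernel map typically makes the two rings
# separable, which linear PCA on this data cannot do.
print("linear PCA labels:", KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_pca)[:10])
print("kernel PCA labels:", KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_kpca)[:10])
```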
|
Title: |
MODELING AND CONTROL OF A DOUBLY FED INDUCTION GENERATOR BASED WIND TURBINE
SYSTEM OPTIMIZATION OF THE POWER |
Author: |
MAROUANE EL AZZAOUI, HASSANE MAHMOUDI |
Abstract: |
In recent years, wind energy has become one of the most important and promising
sources of renewable energy. As a result, wind energy demands additional
transmission capacity and better means of maintaining system reliability. The
evolution of technology related to the wind system industry has led to the
development of a generation of variable-speed wind turbines that present
numerous advantages compared to fixed-speed wind turbines. This paper deals
with the modeling and control of a wind-turbine-driven doubly fed induction
generator (DFIG) that feeds AC power to the utility grid. Initially, a model of
the wind turbine and the maximum power point tracking (MPPT) control strategy
of the doubly-fed induction generator is presented. Thereafter,
stator-flux-oriented vector control is performed. Finally, the simulation
results of the wind system using a 3 MW doubly-fed induction generator are
presented in a Matlab/Simulink environment. |
Keywords: |
Converter, Doubly Fed Induction Generator (DFIG), MPPT, vector control, wind
turbine. |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
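The MPPT strategy mentioned above rests on the standard aerodynamic relations
P = 0.5·ρ·π·R²·Cp·v³ and ω = λ_opt·v/R. The sketch below evaluates them for a few wind speeds;
the turbine constants are generic illustrative assumptions, not the parameters of the 3 MW
machine studied in the paper.

```python
# MPPT reference computation from the standard wind-turbine power relations.
import math

rho = 1.225        # air density (kg/m^3)
R = 45.0           # rotor radius (m), illustrative
Cp_max = 0.48      # maximum power coefficient, illustrative
lambda_opt = 8.1   # optimal tip-speed ratio, illustrative

def mppt_reference(v_wind):
    """Return (rotor speed reference in rad/s, power reference in W) for a wind speed in m/s."""
    omega_ref = lambda_opt * v_wind / R
    p_ref = 0.5 * rho * math.pi * R**2 * Cp_max * v_wind**3
    return omega_ref, p_ref

for v in (6.0, 9.0, 12.0):
    omega, p = mppt_reference(v)
    print(f"v={v:4.1f} m/s -> omega_ref={omega:4.2f} rad/s, P_ref={p / 1e6:5.2f} MW")
```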
|
Title: |
USABILITY EVALUATION OF SOME POPULAR PAAS PROVIDERS IN CLOUD COMPUTING
ENVIRONMENT |
Author: |
SHARMISTHA ROY, BRATATI CHAKRABORTI, PRASANT KUMAR PATTNAIK, RAJIB MALL |
Abstract: |
This paper focuses on the usability evaluation of two interactive and popular
cloud PaaS (Platform as a Service) providers, namely Microsoft Azure and
Appharbor. The usability evaluation is carried out in two stages: performance
evaluation, which takes CPU utilization, memory usage, and disk seeking rate as
essential parameters, and, in the next stage, customer satisfaction, which is
measured through a user feedback mechanism based on the interview and
questionnaire method. |
Keywords: |
PaaS service, Usability evaluation, Cloud computing, Questionnaire method |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
|
Title: |
ADAPTIVE SLIDING WINDOW ALGORITHM FOR WEATHER DATA SEGMENTATION |
Author: |
Yahyia BenYahmed, Azuraliza Abu Bakar, Abdul Razak Hamdan, Almahdi Ahmed,
Sharifah Mastura Syed Abdullah |
Abstract: |
Data segmentation is one of the primary tasks of time series mining. It is
often used to generate interesting subsequences from a long time series, and it
is one of the essential components in extracting significant patterns from
weather time series data, which may be useful in identifying trends and changes
for weather prediction. The task uses interpolation to approximate the signal
with a best-fitting series and returns the last point of each segment as a
change point, or a sequence of time points as a window. The sliding window
algorithm (SWA) is a well-known time series segmentation method, in which a
segment with an error threshold and a fixed window size is created when the
change point is reached. For real data such as weather data, SWA is unsuitable
because an appropriate error threshold and change points are required to avoid
information loss. In this paper, we propose an adaptive sliding window
algorithm (ASWA) that categorizes weather time series data based on the change
point information. |
Keywords: |
Time Series, Segmentation, Change Points, Sliding Window, Adaptive Sliding
Window |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
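For reference, the classic fixed-threshold sliding-window segmentation that ASWA adapts can be
sketched as below. The linear interpolation error measure, the toy signal, and the threshold
are illustrative assumptions; the adaptive threshold selection of ASWA is not reproduced.

```python
# Fixed-threshold sliding-window segmentation of a toy "weather" series.
import numpy as np

def segment_error(y):
    """Max absolute deviation between the data and the straight line joining its endpoints."""
    x = np.arange(len(y))
    line = np.interp(x, [0, len(y) - 1], [y[0], y[-1]])
    return np.max(np.abs(y - line))

def sliding_window(series, max_error):
    """Grow each segment until the approximation error exceeds max_error, then cut."""
    series = np.asarray(series, dtype=float)
    change_points, anchor = [], 0
    while anchor < len(series) - 2:
        end = anchor + 2
        while end <= len(series) and segment_error(series[anchor:end]) <= max_error:
            end += 1
        change_points.append(end - 2)      # last point of the accepted segment
        anchor = end - 2
    return change_points

t = np.linspace(0, 6 * np.pi, 300)
temps = 25 + 5 * np.sin(t)                 # toy temperature signal
print("change points:", sliding_window(temps, max_error=0.5))
```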
|
Title: |
A CONCEPTUAL FRAMEWORK FOR A MULTIDIMENSIONAL MODEL OF TALENT MANAGEMENT DATA
WAREHOUSE |
Author: |
AZWA ABDUL AZIZ, MUHAMMAD SANI ILIYASU, IBRAHIM ABDUL KARIM MAHMOUD, RAJA
HASYIFAH RAJA BONGSU, JULAILY AIDA JUSOH |
Abstract: |
The modern business environment and the dynamic economic situation have made
the workforce and business ethics more versatile and sophisticated. This
development forges a complex business atmosphere that compels industries to
compete productively for sustainable growth. Thus, it becomes imperative for
organizations to manage their talents (workforce) effectively to sustain
themselves in the modern economic climate. Regrettably, it has been challenging
for HR managers to identify, select, and retain competent individuals that suit
their industrial needs (talent management). Therefore, this paper proposes a
conceptual framework for a multidimensional model of a Data Warehouse (DW) for
Talent Management (TM). A hybrid approach to multidimensional modeling will be
adopted in developing the DW model. In addition, student personal information
and academic performance from institutes of higher learning across Malaysia
will be used as the data sources, and the industrial needs (job vacancies)
outlined by the Multimedia Development Corporation (MDEC) Malaysia will be
considered as the user requirements. The proposed DW will provide information
that facilitates the direct mapping of candidates to industrial needs, among
other TM practices, and will support various TM-related analytics using an
appropriate Business Intelligence tool. |
Keywords: |
Data Warehouse, Multidimensional Modeling, Star Schema, Business Intelligence,
Talent Management. |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
|
Title: |
AN IMPROVED LSB IMAGE STEGANOGRAPHY TECHNIQUE USING BIT-INVERSE IN 24 BIT COLOUR
IMAGE |
Author: |
MOHAMMED ABDUL MAJEED, ROSSILAWATI SULAIMAN |
Abstract: |
Steganography is the art of disguising the fact that communication is taking
place by concealing information in other information. In general, the
communication carrier can be a file in many formats; however, digital images
are the most common due to their frequent use on the internet. This paper
introduces an improvement on the standard least significant bit (LSB)-based
image steganography technique and proposes a bit-inversion method that improves
stego-image quality in 24-bit colour images. A stego-image is the outcome of an
image (usually called the cover image) after a secret message has been hidden
in it. In this technique, the LSBs of some pixels of the cover image are
inverted when specific patterns of bits related to those pixels are found; in
this way, fewer pixels are modified compared to the standard LSB method. Our
focus is to obtain a high Peak Signal-to-Noise Ratio (PSNR) for the
stego-image, so that the stego-image and the original image are difficult to
distinguish by the human eye. The proposed bit-inversion method starts with the
last LSBs of both the green and blue colour planes, which are replaced by the
first and second most significant bits (MSBs) of the secret image. The proposed
method introduces two additional levels of security over standard LSB
steganography. The first level is that, because only the green and blue
channels are used instead of all three (red, green, and blue) as in standard
LSB, the red channel acts as noise data and thus increases the difficulty for
an attacker trying to retrieve the secret message. The second level exploits
the new bit-inversion technique, which reverses the bits of the image pixels
after applying the standard LSB. Experiments have been conducted using a
collection of standard images to evaluate the proposed technique, which gives
PSNR values of 72, 61, and 70 for Lena.jpg, Baboon.jpg, and Pepper.jpg
respectively. From the experiments, we also observed that using the
bit-inversion technique modifies fewer pixels than the standard LSB method. |
Keywords: |
Image Steganography, LSB, Bit-Inverse, Robustness, Colour Image |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
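The baseline that the paper improves on can be illustrated with a short NumPy sketch of plain
LSB embedding and extraction on the blue channel, together with the PSNR measure used in the
evaluation. A random cover image and message stand in for the test images, and the paper's
bit-inversion step and green/blue MSB scheme are not reproduced.

```python
# Plain LSB embedding/extraction on the blue channel of an RGB array, plus PSNR.
import numpy as np

def embed_lsb(cover, bits):
    """Write one message bit into the LSB of each blue-channel byte, row by row."""
    stego = cover.copy()
    blue = stego[:, :, 2].flatten()
    blue[:len(bits)] = (blue[:len(bits)] & 0xFE) | bits
    stego[:, :, 2] = blue.reshape(stego.shape[:2])
    return stego

def extract_lsb(stego, n_bits):
    return stego[:, :, 2].flatten()[:n_bits] & 1

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # random cover (assumption)
message = rng.integers(0, 2, size=256, dtype=np.uint8)           # random secret bits

stego = embed_lsb(cover, message)
assert np.array_equal(extract_lsb(stego, len(message)), message)
print("PSNR of stego vs cover: %.1f dB" % psnr(cover, stego))
```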
|
Title: |
STREAMING TWITTER DATA ANALYSIS USING SPARK FOR EFFECTIVE JOB SEARCH |
Author: |
LEKHA R. NAIR, DR. SUJALA D. SHETTY |
Abstract: |
Near-real-time Big Data from social network sites such as Twitter or Facebook
has been an interesting source for analytics in recent years, owing to various
factors including its timeliness, availability, and popularity, though there
may be a compromise in genuineness or accuracy. Apache Spark, the popular big
data processing engine that offers faster solutions than Hadoop, can be
effectively utilized to find relevant patterns in these sites that are useful
to the common person. Recently, many organizations have been advertising their
job vacancies through tweets, which saves time and cost in recruitment. This
paper addresses, using Spark, the real-time analysis and filtering of these
numerous job advertisements from among the millions of other streaming tweets
and their classification into various job categories to facilitate effective
job search. |
Keywords: |
Big Data Analytics, Tweet Stream Analysis, Spark Streaming, Social Network
Analysis, Streaming Big Data Processing |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
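A hedged sketch of the filtering and coarse categorization step using Spark Streaming's
DStream API is shown below. A local socket text stream stands in for the Twitter feed, and the
keywords, categories, host, and port are illustrative assumptions rather than the paper's
actual setup.

```python
# Filter job-related tweets from a text stream and tag them with a coarse category.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

CATEGORIES = {"developer": "IT", "nurse": "healthcare", "teacher": "education"}  # illustrative

def categorize(tweet):
    text = tweet.lower()
    for keyword, label in CATEGORIES.items():
        if keyword in text:
            return (label, tweet)
    return ("other", tweet)

sc = SparkContext(appName="JobTweetFilter")
ssc = StreamingContext(sc, batchDuration=5)          # 5-second micro-batches

tweets = ssc.socketTextStream("localhost", 9999)     # stand-in for the Twitter source
jobs = tweets.filter(lambda t: "hiring" in t.lower() or "vacancy" in t.lower())
jobs.map(categorize).pprint()                        # print (category, tweet) pairs per batch

ssc.start()
ssc.awaitTermination()
```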
|
Title: |
PHISHING EMAIL CLASSIFIERS EVALUATION: EMAIL BODY AND HEADER APPROACH |
Author: |
AMMAR YAHYA DAEEF, R. BADLISHAH AHMAD, YASMIN YACOB, NAIMAH YAAKOB, MOHD. NAZRI
BIN MOHD. WARIP |
Abstract: |
The internet is of great importance to millions of people in their daily social
and financial activities. This is not limited to individual users of the
internet, but extends to organizations for the purposes of trade and others. A
huge number of financial activities occur every day, with millions of dollars
transferred, and this large volume of financial events whets the appetite of
fraudsters for fraudulent activities. Thus, users are vulnerable to many
threats, including the theft of private information, banking information, and
much more. Recently, phishing has become a serious threat that steals users'
sensitive information and is regarded as the most profitable cybercrime.
Phishing mainly relies on an email that claims to originate from a trusted
source and contains an embedded link that redirects victims to a malicious
website in order to obtain their financial data. As the risk from phishing
emails increases progressively, detecting and countering this phenomenon has
become very urgent, especially for zero-day phishing campaigns, which are new
phishing emails not yet seen by anti-phishing tools. Although there are several
solutions for phishing detection, such as blacklists and heuristics, there is
no clear discussion of the required processing time and the complexity of the
designed solutions. This paper aims to provide such a dissection for
server-side solutions, which have proved to be the best choice to defeat
zero-day attacks. |
Keywords: |
Phishing, Emails, Body features, Header features, and Classifiers |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
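The abstract does not list the body and header features evaluated, so the sketch below is only
an illustrative stand-in: it parses a few toy emails, derives simple header and body features,
and fits a classifier. The features, messages, and model choice are all assumptions.

```python
# Toy body/header feature extraction and classification for phishing emails.
from email import message_from_string
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(raw):
    msg = message_from_string(raw)
    body = msg.get_payload()
    return [
        int("@" in (msg.get("From") or "")),                           # header: sender address present
        int((msg.get("Reply-To") or "") != (msg.get("From") or "")),   # header: Reply-To mismatch
        body.lower().count("http"),                                    # body: number of links
        int("verify your account" in body.lower()),                    # body: classic phishing phrase
    ]

emails = [
    ("From: bank@secure-pay.example\nReply-To: attacker@evil.example\n\n"
     "Please verify your account at http://evil.example", 1),
    ("From: alice@company.example\nReply-To: alice@company.example\n\n"
     "Meeting moved to 3pm, agenda attached.", 0),
    ("From: it@helpdesk.example\nReply-To: other@mail.example\n\n"
     "Urgent: verify your account here http://phish.example http://phish2.example", 1),
    ("From: bob@company.example\nReply-To: bob@company.example\n\n"
     "Here is the report you asked for.", 0),
]
X = np.array([features(raw) for raw, _ in emails])
y = np.array([label for _, label in emails])

clf = LogisticRegression().fit(X, y)
print("predicted:", clf.predict(X), "actual:", y)
```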
|
Title: |
A GENERIC VOBE FRAMEWORK TO MANAGE HOME HEALTHCARE COLLABORATION |
Author: |
HOGER MAHMUD, JOAN LU |
Abstract: |
In this paper we propose a conceptual framework to manage Home HealthCare (HHC)
provision and stakeholder collaboration using the concepts of the Virtual
Breeding Environment (VBE) and the Virtual Organisation (VO). Providing
healthcare at home is gaining popularity as more and more patients prefer to
receive care in the comfort of their homes. Providing care at home is complex
and involves many stakeholders; collaboration and resource sharing between
these stakeholders are essential for successful home healthcare provision. In
this paper we outline a framework to classify the different parties involved in
Home Healthcare (HHC) collaboration based on their roles. The framework
consists of two main components, which we call HHC-VBE and HHC-VO. The proposed
framework is applied to a simple home healthcare case study and is then
evaluated. The results show that the framework is simple and flexible and can
be applied to other scenarios. This research contributes towards addressing the
ongoing challenge of HHC collaboration management. |
Keywords: |
Virtual Breeding Environment (VBE), Virtual Organisation (VO), Home Healthcare (HHC) |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
|
Title: |
ELEARNING ENVIRONMENT AS A FACILITATOR FOR KNOWLEDGE CREATION USING SECI MODEL
IN THE CONTEXT OF BA |
Author: |
SALEH KASEM, SAMIR HAMMAMI, MANSOUR NASER ALRAJA |
Abstract: |
Using eLearning technology in education today is changing the way in which
knowledge is introduced, stored, and distributed; accordingly, knowledge
management technologies are used to rapidly capture, organize, and deliver
large amounts of new knowledge.
BA, as a shared context in an eLearning environment, requires students to
share, construct, and utilize knowledge through the knowledge creation
processes proposed in Nonaka's model of knowledge creation (SECI).
This research aims to show that eLearning environments support the knowledge
processes and create conditions that are consistent with Nonaka's model of
knowledge creation (SECI) and with the concept of BA as a shared context for
knowledge creation processes and activities. A research model was built and
hypotheses were formed to reflect this; a survey was then distributed among
students of the Syrian Virtual University (SVU) to collect primary data and
discover the strength of the relationships between the model variables. The
results demonstrate that BA as a shared context positively affects the
knowledge creation model (SECI), mediated by the eLearning environment.
This research is a consolidated contribution to the field of eLearning,
engaging knowledge management concepts throughout the process. |
Keywords: |
Knowledge, SECI, Context, BA, eLearning, SVU. |
Source: |
Journal of Theoretical and Applied Information Technology
20th October 2015 -- Vol. 80. No. 2 -- 2015 |
Full
Text |
|
|