Submit Paper / Call for Papers
The journal receives papers in continuous flow and will consider articles from a wide range of Information Technology disciplines, from the most basic research to the most innovative technologies. Please submit your papers electronically to our submission system at http://jatit.org/submit_paper.php in MS Word, PDF, or a compatible format so that they may be evaluated for publication in the upcoming issue. This journal uses a blinded review process; please remember to include all your personally identifiable information in the manuscript before submitting it for review, and we will remove the necessary information at our side. Submissions to JATIT should be full research / review papers (properly indicated below the main title).
Journal of Theoretical and Applied Information Technology
February 2015, Vol. 72 No. 1
Title:
PROPOSED MODELS OF ADAPTIVE KNOWLEDGE AGGREGATOR
Author:
ALI TAHA AL-OQAILY, ZAINUDDIN BIN HASSAN, NOR’ASHIKIN ALI, ZAINAB AMIN AL-SULAMI
Abstract:
Knowledge is an important and valuable resource for organizations. The right knowledge contributes to better decision making and thus improves competitiveness and organizational performance. It is therefore essential for organizations to manage their knowledge properly through knowledge management processes in order to survive in a competitive industry. Tacit knowledge, which is stored in employees’ minds and is hard to manage, is considered a crucial factor affecting organizational performance. Knowledge management therefore enables the tacit knowledge of employees to be converted into explicit knowledge, so that it can be retrieved by other organizational members who can use it to be more innovative. Retrieving the right knowledge is important for enabling employees to perform better in their work; however, it poses a major challenge, especially when knowledge is retrieved from a large variety of sources. Traditional knowledge retrieval methods share explicit knowledge without a proper evaluation of its quality (for example, without proper editing). The aim of this paper is therefore to develop efficient knowledge management methods that are able to (1) retrieve the right explicit knowledge from tacit knowledge based on responsible measurement variables, and (2) aggregate and formulate the retrieved knowledge effectively so that valuable and focused knowledge is shared. These methods will enable organizational members to deliver the right explicit knowledge to the right employees at the right time.
Keywords:
Explicit Knowledge; Knowledge Aggregation; Knowledge Management; Knowledge Measuring; Knowledge Retrieving; Tacit Knowledge.
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
OPTIMIZE 3D GRAPHIC FOR CULTURE GAME BY USING POLYGON REDUCTION
Author:
ANANG KUKUH ADISUSILO
Abstract:
These days, almost every major game release is made in 3D or uses a heavy amount of 3D graphics. The 3D graphics are made to resemble the original object or are designed according to the manufacturer's specification. In culture-based games, 3D graphics can be a means of introducing and propagating a particular culture. This research was applied to a game about Indonesian culture in the era of Majapahit. Optimization focused on the 3D graphics for characters, non-player characters (NPCs), and parts of the environment; applying polygon reduction algorithms to these parts of the game makes it run more quickly. Preliminary measurements without optimization were around 30 fps on a personal computer and around 5 fps on mobile devices. The initial game character mesh has 17,928 vertices and the NPC mesh 3,522 vertices; in addition, the environment objects, consisting of a gazebo, kentongan houses, palm trees, banana trees, and a cottage, have 15,897 vertices. After the optimization process, gaming performance roughly tripled, reaching around 62 fps on personal computers and more than 21 fps on mobile devices.
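The abstract does not state which polygon-reduction algorithm was used. As a rough, illustrative sketch of the general idea, the following Python/NumPy snippet reduces a triangle mesh by vertex clustering; the function name, grid size, and toy mesh are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def cluster_decimate(vertices, faces, cell_size=0.5):
    """Reduce a triangle mesh by snapping vertices to a uniform grid
    and merging all vertices that fall into the same cell (a sketch,
    not the paper's algorithm)."""
    # Map each vertex to the integer grid cell it falls into.
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # Keep one representative vertex per occupied cell.
    _, rep_idx, remap = np.unique(cells, axis=0,
                                  return_index=True, return_inverse=True)
    new_vertices = vertices[rep_idx]
    # Re-index faces, then drop triangles that collapsed (repeated corners).
    new_faces = remap[faces]
    keep = (new_faces[:, 0] != new_faces[:, 1]) & \
           (new_faces[:, 1] != new_faces[:, 2]) & \
           (new_faces[:, 0] != new_faces[:, 2])
    return new_vertices, new_faces[keep]

# Toy usage on a random mesh, just to show the call.
verts = np.random.rand(1000, 3) * 10
tris = np.random.randint(0, 1000, size=(2000, 3))
v2, f2 = cluster_decimate(verts, tris, cell_size=1.0)
print(len(verts), "->", len(v2), "vertices;", len(tris), "->", len(f2), "faces")
```

Coarser cells remove more vertices, which is the same vertex-count versus frame-rate trade-off the paper measures.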
Keywords:
Polygon Reduction, Optimize 3D, Game, Mobile Devices
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
ENHANCING ADVANCED ENCRYPTION STANDARD (AES) S-BOX GENERATION USING AFFINE TRANSFORMATION
Author:
NUR HAFIZA ZAKARIA, RAMLAN MAHMOD, NUR IZURA UDZIR, ZURIATI AHMAD ZUKARNAIN
Abstract:
The development of technology has resulted in a number of new proposals for block ciphers. Although block ciphers have evolved considerably, the industry still welcomes another block cipher as long as it is secure and meets all the security requirements. One of the critical parts is secure communication, which helps protect the confidentiality and integrity of data; secure communication can be attained by encrypting the data. In this research, we propose to enhance Advanced Encryption Standard (AES) S-Box generation using an affine transformation approach that meets the security requirements. AES is one of the best cryptographic algorithms available for protecting electronic information. However, researchers have found a weakness in the AES algorithm: they managed to devise a clever new attack that can recover the secret key four times more easily than experts had anticipated. In this research, we attempt to remove this weakness of AES by changing the S-Box and adding one new function inspired by the crossover and mutation processes. This improvement will strengthen the security of AES.
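For background, the standard AES S-Box is itself generated by taking the multiplicative inverse in GF(2^8) and then applying an affine transformation. The Python sketch below reproduces that textbook construction; the paper's modified S-Box and its crossover/mutation-inspired function are not specified in the abstract, so they are not shown.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B      # reduce modulo the AES polynomial
        b >>= 1
    return p

def gf_inv(a):
    """Multiplicative inverse in GF(2^8); 0 maps to 0 by convention."""
    if a == 0:
        return 0
    # Brute force is fine for a 256-element field.
    for x in range(1, 256):
        if gf_mul(a, x) == 1:
            return x

def affine(x):
    """AES affine transformation: bit i of the output is
    x_i ^ x_(i+4) ^ x_(i+5) ^ x_(i+6) ^ x_(i+7) ^ c_i, with c = 0x63."""
    c = 0x63
    out = 0
    for i in range(8):
        bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8)) ^
               (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8)) ^ (c >> i)) & 1
        out |= bit << i
    return out

sbox = [affine(gf_inv(v)) for v in range(256)]
# Spot checks against FIPS-197: S(0x00)=0x63, S(0x01)=0x7C, S(0x53)=0xED.
assert sbox[0x00] == 0x63 and sbox[0x01] == 0x7C and sbox[0x53] == 0xED
```

Changing either the field inversion step or the affine constants yields an alternative S-Box, which is the lever the paper's enhancement works on.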
Keywords:
AES, Affine Transformation, S-Box, Randomness, Cryptanalysis, Confusion, Diffusion
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
NEW APPROACH WITH ENSEMBLE METHOD TO ADDRESS CLASS IMBALANCE PROBLEM
Author:
SEYYEDALI FATTAHI, ZALINDA OTHMAN, ZULAIHA ALI OTHMAN
Abstract:
Solving the class imbalance problem in imbalanced datasets has been an attractive research topic in recent years. A class distribution is imbalanced when the number of samples in one class (the majority) is much larger than in another (the minority), and classifying such data leads to skewed distributions and poor predictive accuracy. This paper introduces a new ensemble-based method for imbalanced dataset classification that combines the Synthetic Minority Over-sampling Technique (SMOTE) with the Rotation Forest algorithm. Rotation Forest, applied as the ensemble classifier, is combined with the well-known re-sampling method SMOTE; it constructs its classifiers from features obtained by rotating subspaces of the original dataset. The advantage of Rotation Forest over other ensemble methods (Boosting, Bagging, Random Subspace) is that the same information as in the original dataset is retained, and no information is lost in the datasets used to construct the classifiers. Experimental results reveal the effectiveness of SMOTE with Rotation Forest at the data level in terms of overall accuracy, Cohen's kappa coefficient, false negative rate, AUC, and RMSE, compared with related ensemble classification methods (SMOTE-Boost, SMOTE-Bagging, SMOTE-Random Subspace) on twenty imbalanced datasets from the KEEL repository (binary, not multi-class), selected randomly across different imbalance ratios, using the Java-based WEKA and STATISTICA software. SMOTE was applied to the training data with N = 100, 200, 300, and 400. Kappa-error diagrams are plotted to analyze the behavior of the ensemble methods. The experimental results confirm the validity of the proposed ensemble classifier.
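As a reference point for the data-level step, here is a minimal NumPy sketch of SMOTE itself: each synthetic sample is an interpolation between a minority sample and one of its k nearest minority neighbours. The parameter names and toy data are illustrative; the paper's experiments use the WEKA implementation.

```python
import numpy as np

def smote(X_min, N=100, k=5, rng=np.random.default_rng(0)):
    """Generate synthetic minority samples, SMOTE-style.
    N is the over-sampling amount in percent (100 => one synthetic
    sample per minority sample); k is the number of nearest neighbours."""
    n, n_new = len(X_min), len(X_min) * N // 100
    # Pairwise distances among minority samples only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per sample
    base = rng.integers(0, n, n_new)           # which sample to start from
    nb = nn[base, rng.integers(0, k, n_new)]   # one of its neighbours
    gap = rng.random((n_new, 1))               # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[nb] - X_min[base])

# Toy usage: 20 minority points in 2-D, over-sampled at N = 300 %.
X_min = np.random.default_rng(1).normal(size=(20, 2))
X_syn = smote(X_min, N=300)
print(X_syn.shape)   # (60, 2)
```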
Keywords:
SMOTE, Rotation Forest, Random Subspace, Bagging, Boosting
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
MISSING VALUE IMPUTATION USING FUZZY POSSIBILISTIC C MEANS OPTIMIZED WITH SUPPORT VECTOR REGRESSION AND GENETIC ALGORITHM
Author:
P. SARAVANAN, P. SAILAKSHMI
Abstract:
Quality data mining results can be obtained only with high-quality input data, so missing values in datasets should be estimated to increase data quality; hence the importance of efficient methods for imputing missing values. If the values are Missing At Random (MAR), they can be estimated from the available data. For this estimation, the proposed system uses a combination of the fuzzy c-means and possibilistic c-means algorithms, combining the advantages of fuzzy c-means (data can belong to more than one cluster, which gives the best results for overlapping data) with those of possibilistic c-means (handling noisy data effectively). The proposed system considers both the membership function and the typicality of the data. The fuzzy-possibilistic c-means method is optimized using a Genetic Algorithm with Support Vector Regression (SVRGA), whose main purpose is to minimize the error. The Support Vector Regression model must be trained with complete records, and the Genetic Algorithm is used to select new parameters from the existing population. If the error is found to be minimal, the parameters are assumed to be optimized and the dataset is considered free of incomplete records; if not, the missing values are estimated again using fuzzy-possibilistic c-means clustering with the new parameters. The system is tested on two real-world datasets, Iris and marine db, with various standard missing ratios. The performance of the proposed method is measured using the Root Mean Square Error (RMSE) and compared with a competitor; the resulting graphs show that the proposed system performs well.
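To make the clustering-based imputation step concrete, here is a simplified fuzzy c-means imputation sketch in NumPy. It omits the possibilistic (typicality) term and the SVR/GA optimization loop, so it is a stand-in for the idea rather than the proposed SVRGA system; all names and the toy data are assumptions.

```python
import numpy as np

def fcm_impute(X, n_clusters=3, m=2.0, n_iter=100, rng=np.random.default_rng(0)):
    """Impute NaNs with fuzzy c-means: fill with column means, run FCM,
    then replace each missing entry by its membership-weighted centroid value."""
    X = X.copy()
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = np.take(col_mean, np.where(miss)[1])   # crude initial guess
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))               # fuzzy memberships
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        X[miss] = (u @ centers)[miss]                # re-impute missing cells only
    return X

# Toy usage: 4 samples, 2 features, one missing value.
X = np.array([[1.0, 2.0], [1.2, np.nan], [8.0, 9.0], [8.1, 9.2]])
print(fcm_impute(X, n_clusters=2))
```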
Keywords:
Missing Value Imputation, Fuzzy Possibilistic C Means, Support Vector Regression, Genetic Algorithm, Multiple Imputations.
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
NEW APPROACH FOR IMBALANCED BIOLOGICAL DATASET CLASSIFICATION
Author:
SEYYEDALI FATTAHI, ZALINDA OTHMAN, ZULAIHA ALI OTHMAN
Abstract:
This paper presents a new ensemble classifier for the class imbalance problem, with an emphasis on two-class (binary) classification. The novel method is a combination of the SMOTE (Synthetic Minority Over-sampling Technique), Rotation Forest, and AdaBoostM1 algorithms. SMOTE was employed to over-sample the minority samples at 100%, 200%, 300%, 400%, and 500% of the initial sample size, with attribute selection conducted to prevent the classifier from over-fitting. The ensemble method was designed to solve the problem of imbalanced biological dataset classification by obtaining a low prediction error and raising prediction performance: the Rotation Forest algorithm was used to produce an ensemble classifier with a lower prediction error, while the AdaBoostM1 algorithm was used to enhance the classifier's performance. All tests were carried out using the Java-based WEKA (Waikato Environment for Knowledge Analysis) and Orange Canvas data mining systems on the training datasets, and the performance of three types of classifiers on imbalanced biomedical datasets was assessed. The paper explores the efficiency of the new method in producing an accurate overall classifier and in lowering the overall error rate. Tests were carried out on three real imbalanced biomedical datasets obtained from the KEEL dataset repository; these were divided into ten categories according to their imbalance ratios (IR), which ranged from 1.86 to 41.40. The results indicate that the proposed method, assessed through a combination of three methods and various evaluation metrics, is effective. In practical terms, using SMOTE-RotBoost to classify biological datasets yields a low mean absolute error rate as well as high accuracy and precision. Kappa coefficient values close to 1 indicate strong agreement across classifications, while false negative rates close to 0 show the reliability of the measurements. SMOTE-RotBoost also produces useful AUC-ROC outputs, covering a wider area under the curve than the other classifiers, which makes it a valuable method for the assessment of diagnostic tests.
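Rotation Forest is not available in common Python libraries, but its flavour can be approximated: each base tree sees the data through its own PCA rotation fitted on a bootstrap sample. The class below is such an approximation (whole-feature PCA instead of the per-feature-subset rotations of the real algorithm, and without the AdaBoostM1 stage); all names are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

class SimpleRotationEnsemble:
    """Rotation-Forest-flavoured ensemble: each tree is trained on the data
    rotated by a PCA fitted on its own bootstrap sample."""
    def __init__(self, n_estimators=10, seed=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(seed)
        self.models = []

    def fit(self, X, y):
        for _ in range(self.n_estimators):
            idx = self.rng.choice(len(X), len(X), replace=True)  # bootstrap
            rot = PCA().fit(X[idx])            # this tree's rotation matrix
            tree = DecisionTreeClassifier().fit(rot.transform(X), y)
            self.models.append((rot, tree))
        return self

    def predict(self, X):
        # Majority vote across trees (assumes integer class labels).
        votes = np.stack([t.predict(r.transform(X)) for r, t in self.models])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Toy usage on synthetic data.
X = np.random.default_rng(1).normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
print(SimpleRotationEnsemble(5).fit(X, y).predict(X[:3]))
```

In the paper's pipeline, SMOTE balances the training data first and AdaBoostM1 then boosts the rotation ensemble; this sketch shows only the rotation idea.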
Keywords:
SMOTE, Rotation Forest, Random Subspace, Bagging, Boosting
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
FUZZY RULE BASED CLASSIFIER FOR SOFTWARE QUALITY DATA
Author:
JAYA PAL, VANDANA BHATTACHERJEE
Abstract:
Software quality estimation based on attributes measured from previous, similar products is an active field of research. Such estimation models must inevitably handle imprecision and uncertainty, and hence soft computing techniques are gaining popularity. This paper presents a fuzzy rule based classifier for software quality data and compares its performance with a Bayesian classifier. The fuzzy rules are generated using Fuzzy C-Means clustering. The objectives of this paper are threefold. First, the Fuzzy C-Means algorithm is applied to a set of software quality data to generate clusters; the data points nearest to each cluster centroid are used to generate an optimal set of fuzzy rules, which are refined with the help of the training data and used to label the clusters generated by the Fuzzy C-Means algorithm. Second, these fuzzy rules are used to classify the test data. Third, the naïve Bayes classification method is applied to classify the same test data. Confusion matrices are generated, and the results show that the performance of the two classification methods is comparable.
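The Bayesian side of the paper's comparison can be reproduced in a few lines with scikit-learn; the software-quality features and labels below are synthetic stand-ins, since the paper's dataset is not given in the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

# Hypothetical software-quality data: rows are modules, columns are metrics
# (e.g. size, complexity); the "faulty / clean" label is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)           # the Bayesian baseline
print(confusion_matrix(y_te, model.predict(X_te)))
```

The same confusion-matrix comparison would then be run against the fuzzy rule based classifier's predictions.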
Keywords:
Fuzzy Clustering, Fuzzy C-Means Algorithm, Fuzzy Rules, Software Quality, Bayes Theorem, Naive Bayes Classification, Laplacian Correction.
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
A SOFTWARE SERVICE MODEL USING SCHEDULE BASED FAIR QUEUE WEIGHT FOR DYNAMIC ADMISSION CONTROL ON CLOUD INFRASTRUCTURE
Author:
CHIDAMBARAM, CHANDRASEKAR
Abstract:
Software-as-a-Service (SaaS) with admission control has been considered the next-generation software delivery model, sharing services among different tenants to improve utilization. A major feature of admission control is that tenants' resources are usually shared on the cloud. While tenants enjoy the convenience of obtaining resources from the cloud, fear of undesired load can become a significant barrier with respect to time, and QoS parameters are increasingly becoming the deciding factor for many organizations; provisioning and scheduling the execution of tenants' resources, in addition to meeting QoS parameters, thus become major problems to be solved. To attain effective SaaS profits for single tenants, this paper proposes a software service model using a Schedule-based Fair Queue Weight (S-FQW) for dynamic admission control on cloud infrastructure. The work first focuses on reducing time complexity by using the schedule-based fair queue weight for each tenant: S-FQW schedules the software services based on weighted, approximated processor sharing. A weight is assigned based on the time at which the tenant made the request to the cloud, and the weighted value is then used to identify the tenant's positional point for delivering the software service over the cloud infrastructure with a higher flexibility rate. The dynamic admission control task in the S-FQW model helps derive the positions of several dynamic sets of tenants. Finally, even randomly distributed tenants reach a higher profit rate, with a linear interpolation method providing software service at flexible rates while reducing time complexity. Various statistical parameters are measured and compared with existing state-of-the-art work on cloud infrastructure using an Amazon Web Services dataset. Experiments were conducted on factors such as time complexity, total software profit rate for a single tenant, average response time to a tenant, and software delivery rate.
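The abstract does not give the S-FQW equations, but the underlying fair-queueing idea can be sketched: each tenant request receives a virtual finish time of max(arrival, tenant's last finish) + size/weight, and requests are served in virtual-finish order. The Python sketch below is a textbook-style simplification (real WFQ tracks a global virtual clock), with invented data, not the paper's model.

```python
import heapq

def wfq_schedule(requests):
    """Weighted-fair-queueing sketch: order tenant requests by virtual
    finish time F = max(arrival, last_F[tenant]) + size / weight.
    `requests` is a list of (arrival, tenant, size, weight) tuples."""
    last_finish = {}
    order = []
    for arrival, tenant, size, weight in sorted(requests):
        start = max(arrival, last_finish.get(tenant, 0.0))
        finish = start + size / weight
        last_finish[tenant] = finish
        heapq.heappush(order, (finish, tenant, arrival))
    return [heapq.heappop(order) for _ in range(len(order))]

# Toy usage: tenant "a" has twice tenant "b"'s weight, so its equal-size
# requests finish (virtually) earlier and are served first.
reqs = [(0.0, "a", 4.0, 2.0), (0.0, "b", 4.0, 1.0), (1.0, "a", 4.0, 2.0)]
for finish, tenant, arrival in wfq_schedule(reqs):
    print(f"arrival={arrival:.1f} tenant={tenant} vfinish={finish:.1f}")
```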
Keywords:
Semi-Customized Dynamic Admission Control, Schedule-Based Fair Queue Weight, Cloud Infrastructure, Linear Interpolation, Software-as-a-Service
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
USER EXPERIENCE EVALUATION OF MOBILE SPIRITUAL APPLICATIONS FOR OLDER PEOPLE: AN INTERVIEW AND OBSERVATION STUDY
Author:
NAHDATUL AKMA AHMAD, AZALIZA ZAINAL, FARIZA HANIS ABDUL RAZAK, WAN ADILAH WAN ADNAN, SALYANI OSMAN
Abstract:
Developments in religion and spirituality in the world of mobile technology are flourishing, with thousands of applications now assisting daily spiritual practice and experience. At the same time, the number of older people worldwide is increasing drastically. Given these intertwined trends, HCI researchers are urged to design mobile spiritual applications tailored to older people's needs to ensure the usefulness of such applications. This initial study addresses older people's spirituality experiences: it focuses on the use of interview and observation methods to capture older people's spiritual experience while they engage with a mobile spiritual application, and on mapping the spirituality elements to an emotion model. The study also investigates the suitability of interviews and observation for capturing older people's experiences in evaluation. Interviews were conducted with eight older people in the field (at home and in the workplace). Interviews and observation proved to be good tools for capturing user experience data; however, the research method should be improved to ensure that the results are reliable and that the interview questions are easy for older people to understand. The study also indicates that interviewing older people is a complex task that must be approached in a different and appropriate way; appropriate steps for conducting interview studies with older people are therefore suggested in this paper.
Keywords:
Techno Spiritual, Spirituality, Usability Evaluation, Older Adult, Mobile Applications
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
A SPARSE ENCODING SYMMETRIC MACHINES PRE-TRAINING FOR TEMPORAL DEEP BELIEF NETWORKS FOR MOTION ANALYSIS AND SYNTHESIS
Author:
MILYUN NI’MA SHOUMI, MOHAMAD IVAN FANANY
Abstract:
We present a modified Temporal Deep Belief Network (TDBN) for human motion analysis and synthesis that incorporates a Sparse Encoding Symmetric Machines (SESM) improvement in its pre-training. SESM contributes two important terms: regularization and sparsity. In this paper, we measure the effect of these two terms on the smoothness of the synthesized (generated) motion, where smoothness is measured as the standard deviation of five bone movements across three motion transitions. We also examine how these two terms influence the free energy and reconstruction error profiles during pre-training of the Restricted Boltzmann Machine (RBM) layers and the Conditional RBM (CRBM) layers. For this purpose, we compare gait transitions in bifurcation experiments using four TDBN settings: the original TDBN; modified-TDBN(R), with only the regularization constraint; modified-TDBN(S), with only the sparsity constraint; and modified-TDBN(R+S), with both regularization and sparsity constraints. These experiments show that modified-TDBN(R+S) reaches lower energy faster in RBM pre-training and a lower reconstruction error in CRBM training. Even though the motion synthesized by the modified-TDBN approaches is slightly less smooth than that of the original TDBN, it is more responsive to action commands to change a motion (from run to walk or vice versa), while preserving smoothness during motion transitions and without incurring much additional computation time.
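To illustrate where the two SESM-style terms enter pre-training, here is a single contrastive-divergence (CD-1) update for a binary RBM with an L2 weight-decay (regularization) term and a hidden-activation sparsity term. It is a generic simplification under assumed update rules and toy data, not the paper's exact SESM procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.01, weight_cost=1e-4, sparsity=0.05, s_cost=0.1):
    """One CD-1 update of a binary RBM with two SESM-style terms:
    L2 weight regularization and a hidden-unit sparsity penalty."""
    h0 = sigmoid(v0 @ W + c)                      # positive phase
    h0_s = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h0_s @ W.T + b)                  # reconstruction
    h1 = sigmoid(v1 @ W + c)                      # negative phase
    n = len(v0)
    dW = (v0.T @ h0 - v1.T @ h1) / n
    dW -= weight_cost * W                         # regularization term
    dc = (h0 - h1).mean(0) + s_cost * (sparsity - h0.mean(0))  # sparsity term
    db = (v0 - v1).mean(0)
    W += lr * dW; b += lr * db; c += lr * dc
    return np.mean((v0 - v1) ** 2)                # reconstruction error profile

# Toy usage: 8 visible and 4 hidden units, random binary batch.
W = rng.normal(0, 0.1, (8, 4)); b = np.zeros(8); c = np.zeros(4)
v = (rng.random((32, 8)) < 0.5).astype(float)
for epoch in range(5):
    print(round(cd1_step(W, b, c, v), 4))
```

Tracking the returned reconstruction error over epochs gives exactly the kind of profile the paper compares across its four TDBN settings.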
Keywords:
Temporal Deep Belief Network (TDBN), Sparse Encoding Symmetric Machines (SESM), Restricted Boltzmann Machine (RBM), Conditional RBM (CRBM)
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
INVESTIGATION OF BANDWIDTH ALLOCATION BASED ON RAT SELECTION IN A WIRELESS HETEROGENEOUS NETWORK FOR SMART HOME APPLICATION
Author:
SITI SARAH NIK ZULKIFLI, ROSDIADEE NORDIN, MAHAMOD ISMAIL, MARDINA ABDULLAH
Abstract:
Smart homes have gained a lot of attention in recent years. People look forward to an opportunistic network in which communication devices are gathered into a wireless heterogeneous network. The most important issue in a wireless heterogeneous network is bandwidth management, because this resource is scarce on wireless links compared with wired networks, and inefficient bandwidth management can result in unsatisfactory network performance. This paper investigates bandwidth allocation based on Radio Access Technology (RAT) selection in a wireless heterogeneous smart home environment that integrates the different RATs considered in the study: Bluetooth, cellular 3G, and Wi-Fi. The nodes/users in the smart home are served through the RAT that best fits the service requirements, with adequate bandwidth to guarantee the Quality of Service (QoS) requirements. We compare the bandwidth occupancy of each RAT under two types of service admission procedures. The results show that the bandwidth occupied by the different RATs varies in distribution depending on the service admission control procedure used. We advocate bandwidth allocation with adequate bandwidth for service application requests as a criterion for quantifying the state of balance in future heterogeneous multi-RAT environments.
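A minimal admission-control sketch conveys the RAT-selection idea: each service request is admitted on a RAT that can still supply its bandwidth demand. The capacities, the best-fit rule, and the requests below are invented for illustration, not the admission procedures evaluated in the paper.

```python
# Hypothetical capacities (Mbps) and a best-fit admission rule: serve each
# request on the suitable RAT with the least spare capacity that still fits.
CAPACITY = {"bluetooth": 2.0, "3g": 10.0, "wifi": 50.0}

def admit(requests):
    """requests: list of (demand_mbps, allowed_rats). Returns assignments
    plus the remaining free bandwidth per RAT."""
    free = dict(CAPACITY)
    placed = []
    for demand, allowed in requests:
        fits = [r for r in allowed if free[r] >= demand]
        if not fits:
            placed.append((demand, None))          # blocked: no RAT can serve it
            continue
        best = min(fits, key=lambda r: free[r])    # tightest fit keeps big RATs free
        free[best] -= demand
        placed.append((demand, best))
    return placed, free

reqs = [(0.5, ["bluetooth", "wifi"]), (8.0, ["3g", "wifi"]), (3.0, ["3g", "wifi"])]
print(admit(reqs))
```

Swapping the best-fit rule for, say, first-fit changes how occupancy spreads across RATs, which is the kind of variation the paper's two admission procedures exhibit.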
Keywords:
Radio Resource Management, Bandwidth, RAT Selection, Heterogeneous Wireless System, Smart Home
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
A NEW IMPROVED-MUSIC ALGORITHM FOR HIGH RESOLUTION DIRECTION OF ARRIVAL DETECTION
Author:
KUSAY F. AL-TABATABAIE
Abstract:
This paper presents a new Improved-MUSIC (I-MUSIC) algorithm to solve the problem of precise direction-of-arrival detection. It is based on the original MUSIC algorithm, modified to improve the resolution of Direction of Arrival (DOA) estimation. The original MUSIC algorithm suffers from a power function that reduces resolution accuracy; for that reason, I-MUSIC is proposed, which ignores the covariance steering vector. The results show that the algorithm achieves high resolution even when the wireless system operates at a very low Signal-to-Noise Ratio (SNR). Statistical comparisons with other algorithms (MUSIC, MUSIC-Like, and MI-MUSIC) show that I-MUSIC performs best.
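For reference, the baseline MUSIC pseudospectrum that I-MUSIC modifies can be computed as follows for a uniform linear array. This is standard textbook MUSIC, not the proposed I-MUSIC, and the array geometry and toy signals are assumptions.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """Classic MUSIC pseudospectrum for a uniform linear array.
    X: (n_antennas, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
    En = vecs[:, : M - n_sources]                # noise subspace
    m = np.arange(M)[:, None]
    A = np.exp(-2j * np.pi * d * m * np.sin(np.deg2rad(grid)))  # steering matrix
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return grid, 1.0 / denom                     # peaks at source angles

# Toy usage: 8-element array, two sources at -20 and 30 degrees plus noise.
rng = np.random.default_rng(0)
M, T, angles = 8, 200, np.deg2rad([-20, 30])
a = np.exp(-2j * np.pi * 0.5 * np.arange(M)[:, None] * np.sin(angles))
s = rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))
X = a @ s + 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
grid, P = music_spectrum(X, n_sources=2)
print(grid[np.argsort(P)[-2:]])                  # should be near [-20, 30]
```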
Keywords:
Interference, Direction of Arrival, Signal to Noise Ratio, MUSIC Algorithm, Resolution.
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
OPTIMUM TERRESTRIAL KA MICROWAVE LINK FOR MALAYSIA BASED ON RAIN DISTRIBUTION PROFILE EXTRACTED FROM RADAR DATA
Author:
KUSAY F. AL-TABATABAIE, LWAY F. ABDULRAZAK
Abstract:
This paper aims to identify the optimum link required for radio communication at different rain rates and to estimate rain attenuation for various path lengths at different time percentages for a 28 GHz terrestrial link. The cumulative rainfall rate and the rain-attenuation path lengths were obtained using radar reflectivity data from the Malaysian Meteorological Department, and the radar data were compared with rain gauge networks and ITU-R Study Group 3 data. The results support a link availability of 99.99% of the time, which leads to better Ka-band microwave link design in tropical regions and allows a proper mitigation technique for the microwave link to be identified.
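The standard engineering model behind such link budgets is the ITU-R power-law relation, in which the specific attenuation is gamma = k * R^alpha dB/km for rain rate R. The sketch below applies it over a path length; note that the k and alpha values here are placeholders for illustration only and are not the 28 GHz coefficients, which must be taken from ITU-R P.838 for the actual frequency and polarization.

```python
# ITU-R style rain attenuation sketch: specific attenuation gamma = k * R**alpha
# (dB/km) times the path length. k and alpha below are PLACEHOLDER values;
# real designs must use the ITU-R P.838 coefficients for the link in question.
def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.2, alpha=1.0):
    gamma = k * rain_rate_mm_h ** alpha      # specific attenuation, dB/km
    return gamma * path_km                   # flat path scaling (no effective
                                             # path-length reduction factor here)

# Example: a 100 mm/h tropical downpour over a 5 km link.
print(f"{rain_attenuation_db(100, 5):.1f} dB")
```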
Keywords:
Rain Intensity, Rain Length Distribution, Rain Attenuation, Terrestrial Path
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
GENERATING SERVICES SUPPORTING VARIABILITY FROM CONFIGURABLE PROCESS MODELS
Author:
HANAE SBAI, MOUNIA FREDJ, BOUTAINA CHAKIR
Abstract:
Today, Process-Aware Information Systems (PAIS) are the information systems most broadly adopted by enterprises; they define how services and business processes are used to achieve an enterprise's business goals. Research on PAIS has therefore been interested in aligning business processes and services, in order to achieve a good connection between business process models, represented in the process layer, and service models, represented in the application layer. In this context, to improve the reuse of business processes and services, the concepts of configurable process models and configurable services (which support variability) have emerged as leading-edge concepts for improving reuse. Variability refers to the ability of a system to adapt, specialize, and configure itself to the context of use. More recently, configurable process models have become the most popular solution for dealing with adaptability in a PAIS, which has increased research interest in how to extract services from configurable process models. Several approaches have developed solutions for generating services from business processes; however, generating configurable services from configurable process models has been neglected in the context of PAIS. In this paper, we propose an MDA approach for the automatic generation of configurable service models from configurable process models.
Keywords:
PAIS; Configurable Service; Configurable Process Model; Variability; MDA
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
MODELING AND GENERATING THE USER INTERFACE OF MOBILE DEVICES AND WEB DEVELOPMENT WITH DSL
Author:
MOHAMED LACHGAR, ABDELMOUNAÏM ABDALI
Abstract:
Due to the large number and variety of mobile technologies (Android, iOS, Windows Phone, etc.) and web component-based technologies (JavaServer Faces, ASP.NET, HTML5, etc.), developing the same application for these different platforms becomes a tedious task. The Model Driven Architecture (MDA) approach aims to provide an easy, efficient, and practical solution for developing cross-platform applications. In this work, we propose a new approach to designing the user interface of mobile and web applications, which we apply to the Android platform and the JavaServer Faces framework. The approach is later generalized to all mobile platforms and web component-based technologies by defining a language for developing graphical interfaces: a technology-neutral DSL (Domain-Specific Language) intended to be cross-compiled to generate native code for a diversity of platforms.
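The core of such a technology-neutral DSL is one abstract UI model plus one emitter per target platform. The toy sketch below is invented to illustrate that shape; the model format, widget names, and generated markup are assumptions, not the paper's DSL.

```python
# Minimal sketch of the neutral-DSL idea: one abstract UI description,
# one emitter per target platform. Everything here is illustrative.
ui = {"form": "login",
      "widgets": [{"type": "text",   "id": "user", "label": "User"},
                  {"type": "button", "id": "ok",   "label": "Sign in"}]}

def to_android(model):
    """Emit an Android-flavoured XML layout string from the neutral model."""
    rows = []
    for w in model["widgets"]:
        tag = {"text": "EditText", "button": "Button"}[w["type"]]
        rows.append(f'  <{tag} android:id="@+id/{w["id"]}" '
                    f'android:hint="{w["label"]}"/>')
    return "<LinearLayout>\n" + "\n".join(rows) + "\n</LinearLayout>"

def to_jsf(model):
    """Emit a JSF-flavoured facelet string from the same neutral model."""
    tag = {"text": "h:inputText", "button": "h:commandButton"}
    rows = [f'  <{tag[w["type"]]} id="{w["id"]}" value="{w["label"]}"/>'
            for w in model["widgets"]]
    return "<h:form>\n" + "\n".join(rows) + "\n</h:form>"

print(to_android(ui))
print(to_jsf(ui))
```

Adding a new platform then means adding a new emitter, while the UI model stays untouched, which is the cross-compilation benefit the paper targets.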
Keywords:
Model-Driven Engineering, Domain-Specific Language, Cross-Platforms, Code Generation, Templates
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
A STUDY ON HUMANIZING SOFTWARE TEST EFFORT AND QUALITY
Author:
Dr. N. SRINIVASAN
Abstract:
The Capability Maturity Model (CMM) has become a popular methodology for improving software development processes, with the goal of developing high-quality software within budget and planned cycle time. Prior investigations of CMM level 5 projects have identified many factors as determinants of software development effort, quality, and cycle time. Using a linear regression model based on data collected from different CMM level 5 projects of reputed organizations, it was found that high levels of process maturity, as indicated by a CMM level 5 rating, reduce the effects of most factors previously believed to impact software development effort, quality, and cycle time; the only factor found to be significant in determining effort, cycle time, and quality was software size. Testing is more than just debugging: its purpose can be quality assurance, verification and validation, or reliability estimation. Regression testing in particular is an expensive but important process, and there may be insufficient resources to allow the re-execution of all test cases, in which case the test cases need to be prioritized. Regression test prioritization improves the effectiveness of regression testing by ordering the test cases so that the most beneficial are executed first. Many studies on regression test case prioritization have focused mainly on Greedy Algorithms; however, these algorithms may produce suboptimal results because they may construct solutions that represent only local minima within the search space. By contrast, metaheuristic and evolutionary search algorithms aim to avoid such problems. This paper addresses the choice of fitness metric, the characterization of landscape modality, and the determination of the most suitable search technique to apply. The empirical results replicate previous findings concerning Genetic Algorithms (GA): GAs perform well, although Greedy approaches are surprisingly effective given the multimodal nature of the landscape.
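The "additional greedy" prioritization that such studies use as a baseline is easy to state in code: repeatedly pick the test case that covers the most not-yet-covered items. The sketch below, with hypothetical coverage data, also shows where greediness can stall in a local optimum (once nothing new can be gained, the remaining order is arbitrary).

```python
def greedy_prioritize(coverage):
    """Additional-greedy test prioritization: repeatedly pick the test that
    covers the most not-yet-covered items. coverage: {test: set(items)}."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:         # nothing new left to gain:
            order.extend(sorted(remaining))       # append the rest in any order
            break
        covered |= remaining.pop(best)
        order.append(best)
    return order

# Toy usage: statements covered by each test case (hypothetical data).
cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {4, 5, 6, 7}, "t4": {1}}
print(greedy_prioritize(cov))   # ['t3', 't1', 't2', 't4']
```

A genetic algorithm instead searches over whole orderings with a fitness metric such as APFD, which is why the choice of fitness metric and landscape modality matters in this comparison.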
Keywords:
Capability Maturity Model (CMM), Greedy Algorithms (GA), Kilo Source Lines of Code (KSLOC), Capability Maturity Model Integration (CMMI), Function Points (FP), Total Quality Management (TQM).
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
A NOVEL BINOMIAL TREE APPROACH TO CALCULATE COLLATERAL AMOUNT FOR AN OPTION WITH CREDIT RISK
Author:
SASTRY KR JAMMALAMADAKA, KVNM RAMESH, JVR MURTHY
Abstract:
Options traded over-the-counter carry credit risk and are called vulnerable options. Collateral can be taken to mitigate this credit risk; however, the collateral amount should not be driven by implausible states of the underlying, which, when considered, inflate the required amount. This paper proposes a methodology to calculate the optimum collateral amount required from the seller of a vulnerable option. The calculated collateral makes the vulnerable option only as risky as an exchange-traded option. The algorithm for calculating the optimum collateral uses a novel binomial decision tree built without any assumption on the distribution of the underlying. The study reveals that the price of a vulnerable option converges to that of an exchange-traded option as the collateral amount reaches a certain optimum value. The proposed methodology will be of interest to option sellers, since the excess collateral above the optimum can be used for other purposes.
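The paper's tree is built from data without distributional assumptions; for orientation, here is the standard Cox-Ross-Rubinstein binomial tree it generalizes, pricing a European call by backward induction (a textbook construction, not the proposed collateral algorithm).

```python
import math

def binomial_call(S, K, r, sigma, T, n=200):
    """Cox-Ross-Rubinstein binomial tree for a European call: build the
    terminal payoffs, then discount back through the tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))          # up factor
    d = 1 / u                                    # down factor
    p = (math.exp(r * dt) - d) / (u - d)         # risk-neutral up probability
    disc = math.exp(-r * dt)
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):                 # backward induction
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

# Toy usage: should sit close to the Black-Scholes value of about 10.45.
print(round(binomial_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0), 2))
```

In the paper's setting, the question becomes how much collateral must back the seller's node-by-node liabilities so that this tree price and the vulnerable-option price coincide.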
Keywords:
Binomial Tree, Credit Risk, Collateral Amount, Option Pricing, Vulnerable Options, Margin Computation
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text
Title:
POWER-AWARE SYSTEM DESIGN FOR MULTIPROCESSORS AND VOLTAGE SCALING/FREQUENCY
Author:
K. SURESH, M. RAJASEKHARABABU
Abstract:
The demand for power and energy is one of the most critical and fastest-growing problems in high-performance computing, and energy optimization is a key enabler of power management. Energy consumption should be assessable not only at the gate level or register-transfer (RT) level but also at the system level, and it should be reduced without degrading the overall performance of the system. Compiler optimization can help reduce power consumption at the software level: the software-level power management strategy is code optimization, identifying where optimization criteria can profitably be applied to minimize overall energy consumption. In this work, energy consumption and run time are computed for various compiler techniques on the XScale architecture using the XEEMU tool. The best-optimized code is selected and then tuned dynamically by varying the voltage and frequency.
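The reasoning behind voltage-frequency tuning can be made concrete with the usual first-order model: dynamic power scales as C·V²·f, while a CPU-bound run time scales as 1/f, so energy per run scales roughly as V². The operating points and capacitance below are invented for illustration and are not XScale/XEEMU measurements.

```python
# Back-of-the-envelope DVFS model: dynamic power ~ C * V^2 * f, and a
# CPU-bound run time ~ cycles / f, so energy per run ~ C * V^2 * cycles.
def energy_joules(cycles, volts, freq_hz, c_eff=1e-9):
    power = c_eff * volts**2 * freq_hz       # dynamic power, watts
    runtime = cycles / freq_hz               # seconds for a CPU-bound task
    return power * runtime, runtime

# Hypothetical voltage-frequency operating points.
for volts, mhz in [(1.3, 600), (1.0, 400), (0.8, 200)]:
    e, t = energy_joules(cycles=6e8, volts=volts, freq_hz=mhz * 1e6)
    print(f"{mhz} MHz @ {volts} V: {e:.3f} J in {t:.2f} s")
```

The print-out shows the trade-off the paper exploits: lowering voltage and frequency cuts energy quadratically in V but stretches run time, so the tuner must pick the point that minimizes energy without breaking performance constraints.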
Keywords:
Compiler Optimization, Performance Evaluation, Voltage-Frequency Scaling, XScale Architecture.
Source:
Journal of Theoretical and Applied Information Technology
10th February 2015 -- Vol. 72 No. 1 -- 2015
Full Text