Submit Paper / Call for Papers
The journal receives papers in continuous flow and considers articles from a wide range of Information Technology disciplines, from the most basic research to the most innovative technologies. Please submit your papers electronically to our submission system at http://jatit.org/submit_paper.php in MS Word, PDF or a compatible format so that they may be evaluated for publication in the upcoming issue. This journal uses a blinded review process; please include your personally identifiable information in the manuscript when submitting it for review, and we will remove the necessary information on our side. Submissions to JATIT should be full research or review papers (indicated below the main title).

Journal of Theoretical and Applied Information Technology
December 2017, Vol. 95, No. 23

Title: CLASSIFICATIONS, ASSESSMENTS AND CHARACTERISTICS AS FACTORS TOWARDS ANALYZING ORGANIZATIONAL KNOWLEDGE
Author: ZAID A. SABEEH, SMFD SYED MUSTAPHA, ROSHAYU MOHAMAD
Abstract: Knowledge is an intellectual property that is generated and circulated among members of a knowledge-based organization for the ultimate purpose of achieving business sustainability and growth. Both academia and industry have paid remarkable attention to the multifaceted field of organizational knowledge, and such efforts have been, and are still being, translated into an enormous volume of research work and business reports. Despite the originality of some of these research efforts, there is still a lack of concise roadmaps, models and frameworks that address organizational knowledge on the basis of adequate guidelines. This study explores and synthesizes previous literature on organizational knowledge, guiding the exploration with three factors: the classifications, assessments and characteristics of organizational knowledge, which are focal points of interest for academia and industry alike. The paper critically analyzes research work that was published and well received by the research and industrial communities. The analysis considered 70 scholarly papers published between September 2011 and September 2016 in order to uncover recent patterns and trends in the field of organizational knowledge. Guided by the three main factors, the analysis indicates gaps in these previous studies and can inform future research on organizational knowledge from the perspective of the three guiding factors.
Keywords: Organizational Knowledge, Classification Of Knowledge, Knowledge Assessment, Characteristics Of Knowledge, Organizational Knowledge Literature, Organizational Knowledge Sharing
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: AUGMENTED REALITY BASED SHOOTING SIMULATOR SYSTEM TO ANALYSIS OF VIRTUAL DISTANCE TO REAL DISTANCE USING UNITY 3D
Author: KURNIAWAN TEGUH MARTONO, OKY DWI NURHAYATI, CAHYANI GALUHPUTRI WULWIDA
Abstract: The development of computer technology, particularly computer graphics, has simplified engineering work in many fields. One such application is the construction of simulation systems. Simulation is an accurate way to study a very complex problem, or to reduce the cost incurred by using a real system. Shooting is a physical activity involving the brain and the responses of other parts of the body; it demands skill in terms of accuracy and speed. Shooting is mostly practiced by the army or by hunting communities. Shooting practice is costly to run and requires a well-secured location. Augmented Reality (AR) is one technology that can be used to develop a shooting simulator system: the system replaces the shooting target with a virtual object, so the activity can be carried out in a room without any additional security measures. This Augmented Reality Shooting Simulator application is built using the extreme programming method, which consists of planning, designing, coding and testing steps. The research aimed to produce a shooting simulator system that measures the estimated virtual distance against the real distance. In this research, the shooting simulation was created using Augmented Reality with three distinct black-and-white square markers measuring 20 cm x 20 cm, which are captured by a camera and displayed as three-dimensional virtual objects. The results show that light intensity and marker angle affect the accuracy of the virtual distance. The simulator's distance estimation gave the best results at light intensities of 10 lux to 30 lux and at a 0 degree angle. Other reported results are the estimated virtual range between the rifle and the target and the number of shots that hit it.
Keywords: Simulator, Shooting, Augmented Reality, Extreme Programming, Distance, Computer Graphics
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
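
As a rough illustration of the distance-estimation idea in the abstract above, the sketch below recovers the distance to a fixed-size square marker from its apparent width in the image under a pinhole camera model. This is not the authors' Unity 3D implementation; the focal length and pixel widths are invented, and Python is used for brevity.

# Minimal sketch (not the authors' implementation): estimating the distance to a
# square AR marker of known physical size from its apparent size in the image,
# using the pinhole camera model. Focal length and pixel widths are made up.

MARKER_SIZE_CM = 20.0  # physical marker edge, as in the abstract (20 cm x 20 cm)

def distance_to_marker(pixel_width: float, focal_length_px: float) -> float:
    """Pinhole model: distance = real_size * focal_length / apparent_size."""
    return MARKER_SIZE_CM * focal_length_px / pixel_width

if __name__ == "__main__":
    focal = 800.0  # hypothetical focal length in pixels (from camera calibration)
    for px in (160.0, 80.0, 40.0):  # marker shrinks as it moves away
        print(f"marker width {px:5.1f} px -> estimated distance "
              f"{distance_to_marker(px, focal):7.1f} cm")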

Title: A HYBRID APPROACH FOR UNSUPERVISED PATTERN CLASSIFICATION
Author: FADOUA GHANIMI, ABDLWAHED NAMIR, EL HOUSSIN LABRIJI
Abstract: In this paper, we present a new data classification approach for the unsupervised context, based on both numeric discretization and mathematical pretopology. Pretopological tools, especially the adherence mapping, are used in the mode extraction process. The first part of the proposed algorithm presents the set of multidimensional observations as a discrete numeric mathematical set; the second part detects clusters as separated subsets by means of pretopological transformations.
Keywords: Pretopology, Cluster Analysis, Adherence, Pretopological Closure, Unsupervised Classification
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
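
To make the adherence/closure vocabulary of the abstract concrete, here is a small sketch in which the adherence of a set adds every point whose r-ball touches it, the closure iterates that map to a fixed point, and clusters fall out as separated closed subsets. The radius-based adherence is an illustrative choice on my part, not the authors' exact pretopological construction.

# Minimal sketch of clustering via a pretopological adherence (pseudo-closure)
# map; the radius-based adherence is an assumption, not the paper's definition.
import numpy as np

def adherence(A: set, points: np.ndarray, r: float) -> set:
    """a(A) = A plus every point whose r-ball touches A (one adherence step)."""
    out = set(A)
    for i, p in enumerate(points):
        if i in out:
            continue
        if any(np.linalg.norm(p - points[j]) <= r for j in A):
            out.add(i)
    return out

def closure(A: set, points: np.ndarray, r: float) -> set:
    """Iterate the adherence map to its fixed point (pretopological closure)."""
    while True:
        nxt = adherence(A, points, r)
        if nxt == A:
            return A
        A = nxt

def clusters(points: np.ndarray, r: float) -> list:
    """Clusters = closures of singletons; points sharing a closure are grouped."""
    remaining, found = set(range(len(points))), []
    while remaining:
        seed = next(iter(remaining))
        c = closure({seed}, points, r)
        found.append(sorted(c))
        remaining -= c
    return found

if __name__ == "__main__":
    pts = np.array([[0, 0], [0.5, 0], [1, 0.2], [5, 5], [5.4, 5.1]])
    print(clusters(pts, r=1.0))  # two separated subsets -> two clusters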

Title: FACIAL EXPRESSION RECOGNITION USING MULTISTAGE HIDDEN MARKOV MODEL
Author: MAYUR RAHUL, NARENDRA KOHLI, RASHI AGARWAL
Abstract: Facial expression recognition is a biometric software application that identifies particular expressions in a digital image by comparing and analysing different patterns. Such software is popular for security purposes and is commonly used in applications such as home security, human-computer interfaces, credit card verification, surveillance systems and medicine. Recognizing faces becomes very difficult when facial expressions change. In this paper, a two-layer extension of the HMM is used to recognize continuous affective facial expressions. The Gabor wavelet technique is used for feature extraction. The two-layer extension of the HMM consists of a bottom layer, which represents the atomic expressions made by the eyes, nose and lips, and an upper layer, which represents combinations of these atomic expressions such as smiling or fear. In the HMM, the Baum-Welch method is used for parameter estimation, while the Viterbi method and the forward procedure are used to calculate the optimal state sequence and the probability of the observed sequence, respectively. The proposed system consists of three levels of classification: the output of the first level is used to train the second level, which in turn is used by the third level for testing. Six basic facial expressions are recognised: anger, disgust, fear, joy, sadness and surprise. Experimental results show that the proposed system performs better than a standard HMM, with an overall accuracy of 85% on the JAFFE database.
Keywords: JAFFE, Gabor Wavelets Transform, PCA, Local Binary Patterns, SVM, FAPs, Cohn-Kanade Database, Markov Process, Static Modelling
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
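
For readers unfamiliar with the decoding step the abstract mentions, the sketch below is the standard Viterbi algorithm for recovering the most likely hidden state sequence of an HMM. The toy two-state model is hypothetical and unrelated to the paper's trained expression models.

# Minimal sketch of the Viterbi algorithm (log-domain); toy parameters invented.
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: observation indices; pi: initial probs; A: transitions; B: emissions."""
    n_states, T = A.shape[0], len(obs)
    delta = np.zeros((T, n_states))           # best log-prob ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.9, 0.1], [0.2, 0.8]])    # two symbols per state
    print(viterbi([0, 0, 1, 1, 1], pi, A, B))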

Title: HEART RATE REGULATION BASED ON T-S FUZZY CONTROLLER
Author: FOROUGH PARHOODEH, HAMID MAHMOODIAN
Abstract: The control of human heart rate (HR) during exercise is an important issue for athletes, the assessment of physical fitness, weight control, cardiovascular patients and the prevention of heart failure. A T-S type fuzzy model is constructed for a nonlinear model of HR response that describes the central and peripheral local responses during and after treadmill exercise, followed by the design of a fuzzy controller based on the parallel distributed compensation (PDC) method. The state variable is reconstructed using a T-S fuzzy observer. A linear matrix inequality (LMI)-based, Takagi-Sugeno (T-S) model-based fuzzy approach is applied. Relevant simulations are performed to verify the effectiveness of the proposed fuzzy controller.
Keywords: Heart Rate (HR), T-S Fuzzy Model, T-S Fuzzy Controller, T-S Fuzzy Observer, Tracking Error
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
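
The sketch below illustrates the Takagi-Sugeno idea behind this kind of controller: local linear models are blended by normalized membership weights, and a PDC controller blends local feedback gains with the same weights. The local models, gains and memberships are invented stand-ins, not the paper's identified HR model or LMI-designed gains.

# Minimal sketch of T-S fuzzy blending with a PDC control law; all numbers made up.
import numpy as np

def memberships(x: float, centers, width: float):
    """Gaussian membership of the scheduling variable in each fuzzy rule."""
    w = np.exp(-((x - np.asarray(centers)) ** 2) / (2 * width ** 2))
    return w / w.sum()  # normalized firing strengths

def ts_dynamics(state, u, h, A_locals, B_locals):
    """x_dot = sum_i h_i (A_i x + B_i u): the fuzzy blend of local models."""
    return sum(hi * (Ai @ state + Bi * u) for hi, Ai, Bi in zip(h, A_locals, B_locals))

if __name__ == "__main__":
    # Two hypothetical local models of heart-rate deviation (rest vs. exercise).
    A_locals = [np.array([[-0.5]]), np.array([[-1.2]])]
    B_locals = [np.array([0.8]), np.array([0.5])]
    K_locals = [np.array([2.0]), np.array([3.0])]   # local PDC gains
    x, dt = np.array([20.0]), 0.05                  # HR error (bpm), time step (s)
    for _ in range(100):
        h = memberships(x[0], centers=[0.0, 40.0], width=20.0)
        u = -sum(hi * (Ki @ x) for hi, Ki in zip(h, K_locals))   # PDC law
        x = x + dt * ts_dynamics(x, u, h, A_locals, B_locals)
    print(f"HR error after 5 s of control: {x[0]:.3f} bpm")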

Title: TOWARD SECURE COMPUTATIONS IN DISTRIBUTED PROGRAMMING FRAMEWORKS: FINDING ROGUE NODES THROUGH HADOOP LOGS
Author: N. MADHUSUDHANA REDDY, C. NAGARAJU, A. ANANDA RAO
Abstract: The MapReduce programming paradigm is used to process big data on large numbers of commodity computers, where parallel processing is leveraged. In this programming model, users do not have control over the distributed computations, so it is quite natural for them to have privacy and security concerns. The nodes involved in MapReduce computations may become malicious and cause security issues, and different attacks are possible in a distributed environment. The nodes that mount such attacks are known as rogue nodes. In this paper, a methodology based on the analysis of Hadoop log files is proposed to find rogue nodes that maliciously launch attacks to disturb the normal functioning of the MapReduce framework. In other words, the methodology aims at integrity verification of computations in a MapReduce environment. This is achieved without any cryptographic primitives or other computational operations. Hadoop logs and low-level system calls are collected and correlated in order to reconstruct the operations performed by different nodes. This information is then matched against the system calls and invariants of the program to identify malicious operations and the rogue nodes that caused them. The proposed methodology is evaluated on a real Hadoop cluster as a proof of concept. The empirical results show the significance of this approach for secure computations in distributed programming frameworks while processing big data.
Keywords: Mapreduce, Hadoop, Detection Of Rogue Worker Nodes, Hadoop Logs, System Calls
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
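
A highly simplified sketch of the matching idea follows: group logged operations by node and flag any node whose operations fall outside an expected invariant set. The log format, the operation names and the invariant set are all hypothetical; real Hadoop logs and system-call traces would need proper parsers and a far richer invariant model.

# Minimal sketch of flagging rogue nodes from logs; format and invariants invented.
import re
from collections import defaultdict

# Operations a well-behaved worker is allowed to perform (an assumed invariant).
ALLOWED_OPS = {"READ_BLOCK", "WRITE_BLOCK", "MAP_TASK", "REDUCE_TASK"}

LOG_LINE = re.compile(r"^(?P<node>\S+)\s+(?P<op>\S+)")

def rogue_nodes(log_lines):
    """Group operations by node and flag any node performing a disallowed op."""
    ops_by_node = defaultdict(set)
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m:
            ops_by_node[m.group("node")].add(m.group("op"))
    return {node for node, ops in ops_by_node.items() if ops - ALLOWED_OPS}

if __name__ == "__main__":
    sample = [
        "worker-01 MAP_TASK job_0001",
        "worker-02 READ_BLOCK blk_42",
        "worker-02 EXFILTRATE blk_42",   # violates the invariant
    ]
    print(rogue_nodes(sample))           # -> {'worker-02'}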

Title: IMPROVED APPROACH TO IRIS SEGMENTATION BASED ON BRIGHTNESS CORRECTION FOR IRIS RECOGNITION SYSTEM
Author: ABDUL SALAM H. AB., TAHA M. H., NAJI M. S., ALI H. F.
Abstract: With the increasing need for security systems, the authentication of individuals has become an asset of great significance in almost every field. An iris recognition system automatically identifies and verifies an individual based on the characteristics and unique features of the iris structure. The correctness of an iris recognition system depends on the iris segmentation method and on how well the inner and outer boundaries of the iris are controlled, since they can be corrupted by irrelevant parts such as eyelashes and eyelids. To this end, this paper explains the proposed segmentation method, Iris Segmentation Based on Brightness Correction (ISBC), which adds two brightness corrections, FB (First Brightness) and SB (Second Brightness), applied to an eye image that has passed through preprocessing operations. The algorithm is implemented in the C# programming language, with new modifications in the iris segmentation stage. The proposed approach was tested on the CASIA (Chinese Academy of Sciences Institute of Automation) iris image databases (CASIA v1.0 and the CASIA v4.0 Interval subset), and the results indicate that the proposed approach achieves 100% accuracy on both CASIA v1.0 and CASIA v4.0 Interval.
Keywords: Iris Segmentation, Brightness Correction, First Brightness Correction, Second Brightness Correction, CASIA
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
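
As a loose illustration of a two-pass brightness correction followed by a dark-region threshold (the general shape of the ISBC idea, not the authors' C# algorithm or parameters), consider the sketch below; the gain values, the threshold and the synthetic eye image are all invented.

# Minimal sketch of brightness correction before a crude pupil segmentation.
import numpy as np

def brighten(img: np.ndarray, gain: float) -> np.ndarray:
    """Scale pixel intensities and clip back to the 8-bit range."""
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def segment_pupil(img: np.ndarray, gain1=1.2, gain2=1.1, thresh=60):
    """Two brightness passes, then a dark-region threshold as a crude pupil mask."""
    corrected = brighten(brighten(img, gain1), gain2)
    return corrected < thresh  # pupil pixels stay dark even after brightening

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eye = rng.integers(100, 200, size=(64, 64), dtype=np.uint8)  # fake eye image
    rr, cc = np.ogrid[:64, :64]
    eye[(rr - 32) ** 2 + (cc - 32) ** 2 < 100] = 20              # dark "pupil"
    mask = segment_pupil(eye)
    print("pupil pixels found:", int(mask.sum()))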

Title: GLOBAL FEATURES WITH IDENTICAL TWINS BIOMETRIC IDENTIFICATION SYSTEM
Author: BAYAN OMAR MOHAMMED, SITI MARIYAM SHAMSUDDIN
Abstract: Studies in the pattern recognition domain currently revolve around twins' biometric identification. A twins biometric identification system may lead to the discovery of a distinguishing pattern in an individual's biometrics. A significant improvement can also be seen in unimodal biometric identification, which allows accurate and reliable identification of identical twins with good performance on certain traits. However, since their similarity level is very high, identification of identical twins is much more difficult than that of non-twins. Hence, the use of more than one biometric trait with global features is proposed. Further, pattern recognition requires the extraction and selection of meaningful features, which leads to the key issue in identification from twin handwriting and fingerprints: how to acquire features from the many writing styles in twin handwriting-fingerprint samples so as to pick out the right person between twins. This study therefore proposes the Aspect United Moment Invariant for global feature extraction, applied to identical-twin multi-biometric identification with inter-class and intra-class analysis.
Keywords: Identical Twin, Global Features, Multi-Biometric, Identification, Unique Representation, Aspect United Moment Invariant, Similarity
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: EFFECTS OF ELECTRONIC COMMUNICATION TOOLS AS MODERATING VARIABLES ON TACIT KNOWLEDGE ELICITATION IN INTERVIEW TECHNIQUES FOR SMALL SOFTWARE DEVELOPMENTS
Author: NAUMAN AHMAD, JOAN LU, IBRAHIM DWEIB
Abstract: Interviewing is an important technique and initial step in the process of software development. Owing to its simplicity and familiarity to participants, it is widely used for detailed exploration during requirements elicitation; however, it is widely accepted that experts face problems in collecting tacit knowledge, which can interrupt the interviewing process. In this paper, we test a proposed framework that uses electronic communication tools (audio podcasts, e-mails, online chat sessions, and a hybrid combination of podcasts, e-mails and online chatting) to discuss the detailed interview agenda between interviewer and interviewee before the interview itself, for semi-structured interviews. The study used mixed methods. First, the research used the collected quantitative data to test the hypothesis and compare the effects of the different e-tools; second, after these e-tools were used, the difference in outcomes was evaluated through qualitative data collected via semi-structured interviews. The results suggest that using these electronic communication tools as moderating variables has a strong impact on effectiveness. The findings are sufficient to encourage further research, and the outcomes have proved to be of great interest to software specialists. Several recommendations are given for future research.
Keywords: Electronic Communications, Interviewing, Interview Agenda, Requirements Elicitation, Semi-structured Interviews, Tacit Knowledge
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: A NEW METHOD FOR ANALYZING CONGESTION LEVELS BASED ON ROAD DENSITY AND VEHICLE SPEED
Author: M.D. ENJAT MUNAJAT, DWI H. WIDYANTORO, RINALDI MUNIR
Abstract: Congestion remains an important issue to date, and experts continue to investigate it in order to find solutions for disseminating information to road users. Existing research has been more concerned with detecting vehicles and roads than with congestion itself, so no clear definition of congestion has been offered. In practice, any accumulation of vehicles tends to be labeled congestion, even though that is not necessarily the case, so the reported congestion information becomes inaccurate. Moreover, existing research has mainly used vehicle speed as the basis for estimating the congestion level. Other researchers have used GPS and probe detectors, but such equipment still has many limitations. On average, previous work has focused only on the number of vehicles in a given frame as the basis for the density/congestion calculation. In fact, congestion is also determined by road density and vehicle speed over a given period. It is therefore necessary to find a way to present information about road conditions and congestion in a factual and timely manner. In this paper, we present two novel methods for congestion detection that employ information about road density and vehicle speed. Road density defines the extent to which the road area is occupied by vehicles, while vehicle speed captures how fast vehicles pass through a given frame; together they define the level of vehicle congestion on a road. In an experiment conducted on an in-city road, the method was found to be accurate in identifying congestion levels, from light traffic through jam to heavy-jam traffic. To corroborate the assessment of congestion conditions, a fuzzy model was used, given that congestion levels (light (macet ringan), jam (macet sedang), heavy-jam (macet berat) and total gridlock (macet total)) cannot be measured exactly; this makes the resulting information more accurate. The method does not require high cost, yet it is quite effective in presenting congestion information in a quick, real-time and accurate way.
Keywords: Traffic Congestion, Vehicle Detection, Vehicle Tracking, Vehicle Density, Vehicle Speed Detection, Image Processing
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
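
To show how density and speed can be fused into the four congestion labels via fuzzy rules, here is a small sketch. The triangular membership breakpoints and the rule table are illustrative assumptions of mine, not the paper's calibrated model.

# Minimal sketch: fuzzy congestion level from road density and vehicle speed.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def congestion_level(density: float, speed_kmh: float) -> str:
    """density in [0,1] = fraction of road area occupied by vehicles."""
    dense = {"low": tri(density, -0.4, 0.0, 0.5),
             "mid": tri(density, 0.2, 0.5, 0.8),
             "high": tri(density, 0.5, 1.0, 1.4)}
    slow = {"fast": tri(speed_kmh, 30, 60, 200),
            "medium": tri(speed_kmh, 10, 25, 45),
            "slow": tri(speed_kmh, -20, 0, 20)}
    # Rule strengths: min of the antecedents (standard fuzzy AND).
    levels = {
        "light (macet ringan)": min(dense["low"], max(slow["fast"], slow["medium"])),
        "jam (macet sedang)": min(dense["mid"], slow["medium"]),
        "heavy-jam (macet berat)": min(dense["high"], slow["medium"]),
        "total gridlock (macet total)": min(dense["high"], slow["slow"]),
    }
    return max(levels, key=levels.get)

if __name__ == "__main__":
    print(congestion_level(density=0.3, speed_kmh=50))   # -> light traffic
    print(congestion_level(density=0.9, speed_kmh=3))    # -> total gridlock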

Title: DETERMINANT FACTORS OF CYBERBULLYING: AN APPLICATION OF THEORY OF PLANNED BEHAVIOR
Author: HOSEIN JAFARKARIMI, ROBAB SAADATDOOST, ALEX TZE HIANG SIM, JEE HEE MEI
Abstract: Employing the Theory of Planned Behavior (TPB), this study aims to find the determinant factors that influence individuals' intention to cyberbully others. Along with the TPB's main variables (attitude, subjective norms and perceived behavioral control), this research also studied the roles of moral obligation, perceived threat of legal punishment and overall gain. Using a scenario-based questionnaire, data were collected from 96 students at Universiti Teknologi Malaysia. According to the results, subjective norms and overall gain were significant predictors of cyberbullying intention, while the remaining variables showed no significance. The impact of these determinants was not the same for females and males, which suggests that gender differences matter in the intention to cyberbully.
Keywords: Cyberbullying, Cybercrimes, Theory of Planned Behavior, Computer Ethics
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: DESIGN AND VALIDATION OF INFOSTRUCTURE MATURITY MODEL SURVEY THROUGH RASCH TECHNIQUES
Author: ALIZA ABDUL LATIF, NOOR HABIBAH ARSHAD, NORJANSALIKA JANOM
Abstract: Disasters in Malaysia originate from either natural or man-made causes and can have a catastrophic impact on the nation in human, economic or environmental terms. Disaster management relies heavily on information: all agencies involved need to ensure that proper information is disseminated in the event of a disaster, and information is essential to effective disaster relief efforts. Hence, this paper aims to design and validate an instrument that measures the infostructure used in disaster management along three dimensions, namely the 3Cs: Coordination, Communication and Control. The instrument validation involved three (3) stages: (i) content validity, (ii) expert validation, and (iii) a pilot test using the Rasch model with the Winsteps software. The study showed that twenty-two (22) respondents were well matched, under the Rasch measurement model, to the fifty-seven (57) items.
Keywords: Coordination; Communication; Control; Rasch Analysis
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: TOWARD A SURVEY INSTRUMENT FOR INVESTIGATING CUSTOMER KNOWLEDGE MANAGEMENT IN SOFTWARE COMPANIES
Author: ARASH KHOSRAVI, AB RAZAK CHE HUSSIN, HALINA MOHAMED DAHLAN
Abstract: This paper presents a method for developing an instrument for Customer Knowledge Management (CKM) in Enterprise Software (ES) development. The Knowledge-Based View (KBV) and the Theory of Technology were used in a generic CKM framework to identify the organizational, human and technological factors that enable the CKM process. Human, organizational and technological CKM enablers were identified from the literature, and the weight and priority of these factors were determined by experts from ES development companies. Based on the high-priority factors, we hypothesized the constructs and developed measurement items to be validated, adopting the items from previously validated sources. The instrument was evaluated using content validity and a pilot study; a Content Validity Index (CVI) approach was used to validate the instruments in terms of relevancy and simplicity. During content validation, the number of measurement items was reduced from 50 to 46. The survey questionnaire of this study can serve as a foundation for developing policies and strategies that enhance the probability of successfully implementing CKM.
Keywords: Customer Knowledge Management (CKM), Customer Relationship Management, Knowledge Management, Software Quality, Content Validity Index, Pilot Study, Survey Questionnaire Development
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: NATIONAL CYBER SECURITY STRATEGIES FOR DIGITAL ECONOMY
Author: CHOOI SHI TEOH, AHMAD KAMIL MAHMOOD
Abstract: The digital economy is growing in prominence and relevance in an era of an increasingly connected cyberspace. The boom in the digital economy, however, is coupled with cyber threats and risks to nations in the form of malware, escalating organized cybercrime, personal information and data breaches, and Advanced Persistent Threats (APTs). Nations therefore need to prepare for cyber threats from new frontiers, namely the Internet of Things (IoT), mobile and cloud technologies. A national cybersecurity strategy (NCSS) is an essential element, as cybersecurity is needed to protect and enable the digital economy. This article seeks insight into the relationship between the development of an NCSS and the success of a nation's digital economy, based on journal articles, global reports, current industry events and market trends. The NCSS of nine nations were analyzed with respect to digital economy success. Interestingly, it was found that a nation's readiness to reap the digital economy is not correlated with the development and publication of its NCSS. For the digital economy to thrive, the digital confidence of stakeholders should be high, and nations with high digital confidence depend less on a national-level NCSS to strengthen trust and confidence in the digital space. Nevertheless, an NCSS is still needed as the foundation of a long-term strategy to cement a nation's cybersecurity, since cyber threats and risks keep evolving and will remain a challenge. A country like Singapore is intensifying its efforts and commitment to national cybersecurity. The existing top ten nations in the digital economy all have an NCSS providing the foundation for their digital economies to flourish further. Though an NCSS is not a requirement for a nation to start a digital economy, it is a requirement for a nation to continuously develop and succeed in the digital economy.
Keywords: Cybersecurity, Digital Economy, National Cybersecurity Strategy, Critical Infrastructure, Cybercrime
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: IMAGE ENCRYPTION USING ENHANCED FOUR STAGE ENCRYPTION
Author: SANGAPU VENKATA APPAJI, GOMATAM V S ACHARYULU
Abstract: Secure communication of information plays a very important role in the digital era. Several encryption techniques have been introduced to communicate data from one location to another, and in digital communication images are encrypted using various techniques. In this paper, a scheme to encrypt bitmap images using enhanced four-stage encryption is proposed. Different analytic studies are performed on the images to look for traces of the original image in the encrypted image, and comparative studies are performed on different encrypted versions of the same image to attempt to trace back the original. The results show that no patterns of the original image are found in the encrypted image.
Keywords: Plain Text, Cipher Text, Bitmap Images, Cipher Images, Four Stage Encryption
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: TEXTUAL ANALYSIS FOR DEVELOPING FUZZY COGNITIVE MAPS - THE CASE OF STRATEGY MAPS
Author: PETR HAJEK, PIOTR PACHURA, ONDREJ PROCHAZKA, JAN STEJSKAL
Abstract: Expert opinions have been applied to construct fuzzy cognitive maps (FCMs) to support the process of strategic planning. FCMs are a beneficial tool for representing strategy maps owing to their capacity to model cause-effect relationships among key strategy concepts. To overcome the rather subjective expert evaluation of these relationships, automatic knowledge acquisition is preferable; moreover, the cause-effect relationships evolve dynamically and are context-specific. Here, knowledge acquisition is performed to obtain knowledge about causal strategic concepts, extracted from strategic documents. The approach has two major steps. First, latent semantic analysis is employed to obtain an interpretable semantic model. Second, collocated causal concepts are used to model the relationships among strategic concepts. The approach also requires background literature or domain experts to determine the direction of the causalities. The generated FCMs can subsequently be used to simulate the effects of strategic management and thus provide an effective decision-support tool. Several regional innovation strategies over two periods are used as a case study. To verify the proposed approach, the generated FCMs are shown to be consistent with expert opinions and with the fuzzy ANP method. The analysis of the dynamic evolution of the FCMs also shows how strategic priorities change over time.
Keywords: Fuzzy Cognitive Map, Business Strategy, Textual Analysis, Innovation Strategy
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
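
The first step of the approach, latent semantic analysis over strategy documents, is commonly implemented as TF-IDF followed by a truncated SVD, as in the sketch below (using scikit-learn). The tiny corpus is invented, and the second step, extracting collocated causal concepts, is not shown.

# Minimal sketch of LSA over strategy texts; corpus invented, requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "increase innovation funding to improve regional competitiveness",
    "support research collaboration between universities and firms",
    "innovation funding supports research collaboration",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                 # documents x terms

lsa = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsa.fit_transform(X)             # documents in latent concept space

terms = tfidf.get_feature_names_out()
for k, comp in enumerate(lsa.components_):    # top terms per latent concept
    top = comp.argsort()[-3:][::-1]
    print(f"concept {k}:", [terms[i] for i in top])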

Title: HYBRID HAND-DIRECTIONAL GESTURES FOR BIOMETRIC BASED ON AREA FEATURE EXTRACTION AND EXPERT SYSTEM
Author: FAHAD LAYTH MALALLAH, BARAA T. SHAREF, ASO MOHAMMAD DARWESH, KHALED N. YASEN
Abstract: Nowadays, biometric authentication research is becoming a major focus among researchers because of the various fraud attempts taking place. Although several authentication schemes are available, they are not free of defects that negatively affect the authentication operation. Therefore, a novel technique is proposed that uses the index finger of a hand to point in random directions such as up, down, left or right. Accordingly, a new feature extraction based on the area of the index finger is proposed. The technique is a hybrid of static and dynamic hand directional gesture recognition, with the advantage that it is not forgettable like a password, because such a gesture is biologically stored in the brain as visual memory. The method starts by recording a video of around 2-10 seconds, and the frames are then processed one by one to output a four-direction sequence, which serves as an individual's password. Later on, the extracted gesture direction vector is matched against the stored one to output either an accept or a reject status. Experiments were conducted on 60 video recordings prepared for training and testing, collected from 10 individuals. The findings demonstrate a high recognition rate, with an accuracy of 98.4% for the proposed method.
Keywords: Biometrics, Hand Gesture, Pattern Recognition, Feature Extraction, Expert System, Computer Vision, Data Science
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: DATA-DEPENDENT IN ROLE-BASED GOAL MODELING
Author: ROHAYANTI HASSAN, NOR ASHILA ABDUL RAHMAN, RAZIB M. OTHMAN, WAFAA ABDULLAH
Abstract: Role-based goal modeling improves the representation of stakeholders' roles and their data elements when modeling a system-to-be. Since requirements may be contributed by multiple stakeholders, several goals may rely on the same sources of requirements; in another scenario, stakeholders may interact with the outcomes of other sub-goals, which demonstrates the occurrence of dependency. Dependency implies different feasibility and adequacy for each goal and sub-goal. Data dependency arises when data can be either an input or an output from one goal to another; consequently, the data is changed or intervened upon as it passes between goals. This paper discusses the integration of the data element into role-based goal modeling from two aspects: (i) how to form the data dependency in a role-based goal realization graph, and (ii) how to assess the new formation in terms of feasibility and adequacy. The conflict and priority of the data dependencies are determined in order to estimate the complexity and risk along the process. This improved goal modeling is validated using a real case study taken from a Plant Integrated System.
Keywords: Multi Stakeholders, Stakeholders Role, Data Dependency, Complexity, Risk
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: ANALYSIS OF MOVING AVERAGE AND HOLT-WINTERS OPTIMIZATION BY USING GOLDEN SECTION FOR RITASE FORECASTING
Author: MEDY WISNU PRIHATMONO, EMA UTAMI
Abstract: The moving average and Holt-Winters exponential smoothing are two of the existing statistical and computational forecasting methods. These two methods were selected based on ritase (trip-count) data from January 2013 to December 2015, in order to view the average, trend and seasonal patterns expected in one future forecasting period. However, a weakness of Holt-Winters exponential smoothing is that the initial parameter values (α, β and γ) are entered by trial from 0 to 1, which may not yield optimal results. The golden section method is added in this research to help determine optimal initial values (α, β and γ) for Holt-Winters exponential smoothing so that it produces accurate results. This research aims to find a forecasting model for the amount of ritase income at the Department of Transportation of Yogyakarta, UPT Giwangan terminal management, using Holt-Winters exponential smoothing and the moving average, and furthermore to compare the forecasting results of the two methods. To choose the right method, measures are needed to assess the accuracy of the predictions; those used in this research are the mean absolute percentage error (MAPE), the mean squared deviation (MSD) and the mean absolute deviation (MAD). The forecast is selected by finding the smallest MAPE, MSD and MAD across the two methods. The analysis shows that Holt-Winters exponential smoothing is the right method for the amount of ritase income at the Department of Transportation, city of Yogyakarta, UPT Management of Giwangan Terminal, because it produces the smallest values: MAPE = 4%, MSD = 446841 and MAD = 496.
Keywords: Forecasting, Golden Section, Exponential Smoothing Holt Winters, Moving Average
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
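
To make the optimization step concrete, here is a sketch of golden-section search minimizing MAPE over a smoothing parameter. For brevity it tunes only the alpha of simple exponential smoothing rather than the full Holt-Winters (alpha, beta, gamma), and the demo series is synthetic, not the ritase data.

# Minimal sketch: golden-section search for a smoothing parameter by MAPE.
import math

def ses_mape(series, alpha):
    """One-step-ahead MAPE of simple exponential smoothing with given alpha."""
    level, err = series[0], []
    for y in series[1:]:
        err.append(abs(y - level) / abs(y))
        level = alpha * y + (1 - alpha) * level
    return 100 * sum(err) / len(err)

def golden_section(f, lo=0.0, hi=1.0, tol=1e-4):
    """Minimize a unimodal f on [lo, hi] with the golden-section method."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                  # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d                  # minimum lies in [c, b]
            d = a + phi * (b - a)
    return (a + b) / 2

if __name__ == "__main__":
    ritase = [120, 130, 125, 140, 150, 149, 160, 170, 168, 180]  # invented data
    best = golden_section(lambda a: ses_mape(ritase, a), 0.01, 0.99)
    print(f"alpha = {best:.3f}, MAPE = {ses_mape(ritase, best):.2f}%")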

Title: AN EFFICIENT RADAR SIGNAL DENOISING FOR TARGET DETECTION USING EXTENDED KALMAN FILTER
Author: JAMI VENKATA SUMAN, JOSEPH BEATRICE SEVENTLINE
Abstract: Nowadays, target detection and tracking play a vital role in aeronautics, spacecraft, wilderness areas, the Marine Corps, underwater scenarios and so on. In target detection, a Radio Detection and Ranging (RADAR) signal is transmitted and the reflected signal carries the status of the target. This paper discusses the performance of radar signal generation and radar target detection (RTD) models simulated using MATLAB Simulink. It also presents an implementation of the Extended Kalman Filter (EKF) in Register Transfer Level (RTL) Verilog Hardware Description Language (HDL), its Field Programmable Gate Array (FPGA) implementation in the Xilinx tool, and its Application-Specific Integrated Circuit (ASIC) implementation in the Cadence Encounter tool with 180 nm and 45 nm library technologies. The Root Mean Square Error (RMSE) and Signal-to-Noise Ratio (SNR) values are evaluated using MATLAB Simulink. In the FPGA analysis, the LUTs, slices, flip-flops and frequency, and in the ASIC implementation, the area, power, delay, Area-Power Product (APP) and Area-Delay Product (ADP), are all improved in the proposed EKF-RTD method compared with conventional methods.
Keywords: RADAR, MATLAB Simulink, Extended Kalman Filter, Radar Target Detection, RTL, HDL, Xilinx, FPGA, Cadence, ASIC
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
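
For readers who want the EKF recursion itself, the sketch below tracks a target moving along a line from noisy slant-range measurements; the motion model, noise levels and radar geometry are invented for illustration (the paper's contribution is an RTL hardware implementation, which this does not reproduce).

# Minimal sketch of an Extended Kalman Filter on a range-only tracking toy problem.
import numpy as np

dt, H_RADAR = 0.1, 1000.0                     # time step (s), radar height (m)
F = np.array([[1, dt], [0, 1]])               # constant-velocity state model
Q = np.diag([1e-3, 1e-2])                     # process noise covariance
R = np.array([[25.0]])                        # range measurement noise (m^2)

def h(x):                                      # nonlinear measurement: slant range
    return np.sqrt(x[0] ** 2 + H_RADAR ** 2)

def H_jac(x):                                  # Jacobian of h at the state x
    return np.array([[x[0] / h(x), 0.0]])

def ekf_step(x, P, z):
    x = F @ x                                  # predict state
    P = F @ P @ F.T + Q                        # predict covariance
    Hk = H_jac(x)
    S = Hk @ P @ Hk.T + R                      # innovation covariance
    K = P @ Hk.T @ np.linalg.inv(S)            # Kalman gain
    x = x + (K * (z - h(x))).ravel()           # update with range residual
    P = (np.eye(2) - K @ Hk) @ P
    return x, P

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_x = np.array([2000.0, -50.0])         # true position (m) and velocity (m/s)
    est, P = np.array([1800.0, 0.0]), np.eye(2) * 100
    for _ in range(200):
        true_x = F @ true_x
        z = h(true_x) + rng.normal(0, 5)       # noisy slant-range measurement
        est, P = ekf_step(est, P, z)
    print(f"true pos {true_x[0]:.1f} m, estimated {est[0]:.1f} m")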

Title: MULTI-OBJECTIVE GENETIC ALGORITHMS FOR THE GREEN VEHICLE ROUTING PROBLEM: A COMPARATIVE STUDY
Author: JABER JEMAI, MANEL ZEKRI, KHALED MELLOULI
Abstract: The Green Vehicle Routing Problem (GVRP) is an extension of the standard VRP that takes into account the awareness of companies and governments of the harmful effects of gas emissions. The primary objective of the GVRP is to minimize the volume of emitted carbon dioxide (CO2) in addition to optimizing the traveled distance and other functional objectives. In this paper, we model the GVRP as a bi-objective optimization problem to which many solving algorithms can be adapted and applied, including different variants and extensions of Multi-Objective Genetic Algorithms (MOGAs). We select three elitist MOGAs, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), the Strength Pareto Evolutionary Algorithm II (SPEA-II) and the Indicator-Based Evolutionary Algorithm (IBEA), and evaluate the quality of the returned Pareto fronts using different metrics: computation time, traveled distance, emissions volume, generational distance, spacing, entropy and contribution. The comparison is performed on a set of standard benchmark problems. The experimental results show that IBEA outperforms the other algorithms on many metrics.
Keywords: Green Supply Chain, Green Multi-Objective VRP, Multi-Objective Genetic Algorithms
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
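
The Pareto machinery underlying all three algorithms reduces to a dominance test; the sketch below extracts the non-dominated set (the first front of NSGA-II's sorting) for the paper's two minimization objectives. The candidate routes and their (distance, CO2) values are fabricated.

# Minimal sketch of Pareto dominance and first-front extraction (minimization).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of the candidate solutions."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

if __name__ == "__main__":
    # (distance in km, CO2 in kg) for hypothetical GVRP solutions
    candidates = [(120, 40), (100, 55), (150, 30), (130, 45), (110, 50)]
    print(pareto_front(candidates))
    # -> [(120, 40), (100, 55), (150, 30), (110, 50)]; (130, 45) is dominated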

Title: LOCATION-AIDED LEVEL BASED DISJOINT MULTIPATH ROUTING (LLDMR) ALGORITHM
Author: V. MUTHUPRIYA, S. REVATHI
Abstract: Routing in a dense network causes more routing overhead and end-to-end delay because of the broadcast storm problem and frequent link failures. In mobile ad hoc networks, the broadcasting of Route Request (RREQ) packets among nodes increases with network size, which results in routing overhead. Similarly, the rate of link failure increases with node speed, so single-path routing protocols must reinitiate the route discovery phase to find a new path whenever a link breaks, which ultimately results in more end-to-end delay in the network. Though several reactive multipath routing protocols have been introduced to overcome this problem, they use even more RREQ broadcasting to identify multiple paths. More recently, location-based routing schemes have been proposed to minimize the broadcast problem, but they are not scalable to large ad hoc networks. In this paper, the Location-aided Level based Disjoint Multipath Routing (LLDMR) algorithm is proposed, which finds multiple node-disjoint paths between source and destination with minimal flooding of control messages. In the proposed algorithm, the RREQ broadcast occurs only within inter-link nodes, minimizing control packet overhead. The algorithm also predicts link failure by comparing the node's threshold value with the link's threshold value, and switches to an alternate path even before the link failure occurs. The performance of LLDMR is analyzed using the NS2 simulator with varying network size and node speed. The simulation results are compared with an existing non-location-based multipath routing protocol, AOMDV, and with location-based routing protocols that use a location-aided route discovery mechanism based on two-hop neighbor information (TN-CMAD and TN-CRDN). The simulation results show that the proposed system has a higher packet delivery ratio, lower control packet overhead and a reduced end-to-end delay compared with the existing protocols AOMDV, TN-CMAD and TN-CRDN.
Keywords: Location-Aided Routing (LAR), Location-aided Level based Disjoint Multipath Routing (LLDMR), Node-disjoint Paths, Inter-link Node, Nodes and Levels Threshold Value
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: PQ EVENT DETECTION AND CLASSIFICATION BASED ON DUAL TREE COMPLEX WAVELET AND COARSER-FINE TWO STAGE ANN CLASSIFIER
Author: PRATHIBHA, MANJUNATH, CYRIL PRASANNA RAJ
Abstract: The Dual Tree Complex Wavelet Transform (DTCWT) is shift-invariant and has a redundancy of 2^m compared with the Discrete Wavelet Transform. In this paper, the DTCWT is used to obtain sub-band energy levels that represent PQ (power quality) events in different and unique sub-bands. The sub-bands are quantized and thresholded to retain 95% of the information, which improves the classification process. A two-stage FFNN architecture is designed to classify six possible PQ events by performing coarse and then fine classification. The two-stage classifier, with 10 neurons in each FFNN and 4 neurons in the second stage, achieves a classification accuracy of 97.5%. The developed algorithm is suitable for real-time applications in smart meters.
Keywords: PQ Event, DTCWT, Neural Network, Two Step Classifier, Smart Meter
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
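
The thresholding step the abstract describes, keeping the largest coefficients until 95% of the energy is retained, can be sketched as below. Plain FFT magnitudes stand in for DTCWT sub-band coefficients here (a real DTCWT needs an extra library, e.g. the dtcwt package on PyPI), and the synthetic "sag" signal is invented.

# Minimal sketch: keep the largest transform coefficients covering 95% of energy.
import numpy as np

def retain_energy(coeffs: np.ndarray, fraction: float = 0.95) -> np.ndarray:
    """Zero the smallest coefficients, keeping `fraction` of total energy."""
    energy = np.abs(coeffs) ** 2
    order = np.argsort(energy)[::-1]                  # largest first
    cumulative = np.cumsum(energy[order]) / energy.sum()
    keep = order[: int(np.searchsorted(cumulative, fraction)) + 1]
    out = np.zeros_like(coeffs)
    out[keep] = coeffs[keep]
    return out

if __name__ == "__main__":
    t = np.linspace(0, 1, 512, endpoint=False)
    sag = np.sin(2 * np.pi * 50 * t) * (0.5 + 0.5 * (t < 0.4))  # a crude PQ "sag"
    c = np.fft.rfft(sag)
    kept = retain_energy(c)
    print(f"non-zero coefficients kept: {np.count_nonzero(kept)} of {c.size}")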

Title: BANN: A NOVEL INTEGRATION OF SECURITY WITH EFFICIENCY USING BLOWFISH AND ARTIFICIAL NEURAL NETWORKS ON CLOUD
Author: JOHN JEYA SINGH T., E. BABURAJ
Abstract: Multimedia data security and storage space allocation on cloud servers are a matter of concern for many cloud service providers (CSPs) and offer vast scope for research. Media files are generally encrypted before being stored on storage servers because of various security threats. Recent advances in mass storage density and high-speed processors have given CSPs a little relief, but with limited storage capacity and heavy global usage of their services, storage servers still end up allocating large amounts of memory to media files, resulting in a lack of server space. We argue that uncompressed media such as images take more storage space than compressed files and also consume more time in cipher operations, which results in poor performance and considerably higher bandwidth usage. We propose a combination of the Blowfish encryption algorithm with Artificial Neural Networks (BANN) to provide an efficient way to store and process media files on servers. We have evaluated the performance of the proposed work in terms of PSNR, compression ratio, mean squared error, average difference, maximum difference, normalized absolute error and time efficiency during cipher operations. These experimental results demonstrate the efficiency of the system, which increases dramatically using the BANN technique.
Keywords: Cloud Computing, Cloud Storing, Cloud Retrieval, Neural Networks, Compression
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
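
To illustrate the compress-then-encrypt idea in the abstract, here is a sketch using zlib as a stand-in for the paper's ANN-based compression, with Blowfish in CBC mode from the pycryptodome package. The key size, payload and pipeline shape are assumptions, not the authors' BANN design.

# Minimal sketch: compress a payload before Blowfish encryption (pycryptodome).
import zlib
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

def encrypt(payload: bytes, key: bytes) -> bytes:
    compressed = zlib.compress(payload, level=9)   # stand-in for ANN compression
    cipher = Blowfish.new(key, Blowfish.MODE_CBC)
    # prepend the IV so decryption can recover it
    return cipher.iv + cipher.encrypt(pad(compressed, Blowfish.block_size))

def decrypt(blob: bytes, key: bytes) -> bytes:
    iv, body = blob[:8], blob[8:]                  # Blowfish block size is 8 bytes
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv=iv)
    return zlib.decompress(unpad(cipher.decrypt(body), Blowfish.block_size))

if __name__ == "__main__":
    key = get_random_bytes(16)                     # Blowfish accepts 4-56 byte keys
    image_like = bytes(range(256)) * 400           # highly compressible fake media
    blob = encrypt(image_like, key)
    assert decrypt(blob, key) == image_like
    print(f"plain {len(image_like)} B -> stored {len(blob)} B")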

Title: LOAD BALANCING METRIC BASED ENERGY CONSUMPTION IN WIRELESS MESH NETWORKS
Author: NAIMA EL HAOUDAR, ABDELLAH ELHADRI, ABDELILAH MAACH
Abstract: Load balancing and energy consumption have become important criteria for evaluating the performance of Wireless Mesh Networks (WMNs). Technology and smart devices have progressed rapidly in recent years; despite the advantages, there are drawbacks, such as an increase in worldwide energy consumption. In this paper, we propose a load-balancing routing metric based on energy consumption. The aim of the metric is to minimize energy consumption and balance node traffic in the network by comparing the traffic load and energy value of each node against a standard threshold value. To that end, we evaluate the traffic load and energy value of each node and all of its neighbors. We evaluate the performance of our routing metric by comparing it with existing routing metrics using the NS2 simulator, and find that the new routing metric achieves the lowest energy consumption.
Keywords: WMNs; Energy Consumption; Load Balancing; Performance
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text

Title: TOWARDS AN AUTOMATIC GENERATION OF NEURAL NETWORKS
Author: MAHA MAHMOOD, BELAL AL-KHATEEB
Abstract: The automatic generation of neural network architectures is a useful concept, because in many application areas the optimal architecture is not known a priori, so trial and error is often performed before a satisfactory architecture is found. Construction-deconstruction algorithms can be used as an approach, but they have several drawbacks: they are usually restricted to a certain subset of network topologies and, as with all hill-climbing methods, they often get stuck at local optima and may therefore not reach the optimal solution. To overcome these limitations, evolutionary computation is used as an approach to generating neural network structures. The aim of this paper is to design an automatic generator of neural network architectures that performs random operations within the hidden layers. The operations include generating a layer, adding a node, deleting a layer, deleting a node and keeping the architecture unchanged, with all weights initialized randomly. These neural networks are able to adapt to reality, learn from training on various applications, and adapt their architecture to the uses of the application. The automatically obtained neural network architecture is much better than all other architectures found during the evolutionary process. The network is tested on the game of tic-tac-toe, playing against selected tic-tac-toe computer programs and selected human players; the obtained results are promising, suggesting many other research directions.
Keywords: Neural Network, Evolutionary Algorithms, Genetic Programming, Genetic Algorithms
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
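
The five architecture operations named in the abstract are easy to sketch if an architecture is represented as a list of hidden-layer widths; the fitness function below is a stand-in, since the paper evaluates architectures by tic-tac-toe play rather than by any closed-form score.

# Minimal sketch of random architecture mutations in a (1+1) evolution loop.
import random

def mutate(hidden: list) -> list:
    h = list(hidden)
    op = random.choice(["add_layer", "add_node", "del_layer", "del_node", "keep"])
    if op == "add_layer":
        h.insert(random.randrange(len(h) + 1), random.randint(2, 16))
    elif op == "add_node" and h:
        h[random.randrange(len(h))] += 1
    elif op == "del_layer" and len(h) > 1:
        h.pop(random.randrange(len(h)))
    elif op == "del_node" and h:
        i = random.randrange(len(h))
        h[i] = max(1, h[i] - 1)
    return h                                     # "keep" falls through unchanged

def fitness(hidden: list) -> float:
    """Hypothetical stand-in score: prefer about 3 layers of about 8 nodes."""
    return -abs(len(hidden) - 3) - sum(abs(w - 8) for w in hidden) / 10

if __name__ == "__main__":
    random.seed(42)
    best = [4]                                   # start from one small layer
    for _ in range(500):
        child = mutate(best)
        if fitness(child) >= fitness(best):
            best = child
    print("evolved hidden layers:", best)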

Title: APPLICATION OF FUZZY MULTI-OBJECTIVE METHOD FOR DISTRIBUTION NETWORK RECONFIGURATION WITH INTEGRATION OF DISTRIBUTED GENERATION
Author: RAMADONI SYAHPUTRA, INDAH SOESANTI
Abstract: This paper proposes an application of a fuzzy multi-objective method to distribution network reconfiguration with the integration of distributed generation (DG). The method transforms the multiple objectives of the configuration optimization problem into a single-objective problem using fuzzy set theory. In the fuzzy setting, each optimization objective is associated with a membership function that represents the level of satisfaction of that objective. Four objectives are expressed as fuzzy membership functions: minimum power loss, maximum branch current loading, maximum bus voltage magnitude and load balancing across all feeders of the electric power distribution network, while a radial structure must be maintained. In this work, the fuzzy multi-objective method is applied to optimize the configuration of an IEEE 77-bus distribution network with integrated DG. The results show that the efficiency of the network is improved significantly.
Keywords: Fuzzy Multi-Objective; Optimization; Distribution Network; Distributed Generation
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
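
The fuzzy scalarization the abstract describes can be sketched as follows: each objective gets a satisfaction membership in [0, 1] and a configuration is scored by its worst (minimum) membership. The membership bounds below are invented, not the paper's calibrated values for the IEEE 77-bus network.

# Minimal sketch: fuzzy memberships per objective, aggregated by the minimum.

def decreasing(x, good, bad):
    """1 when x <= good, 0 when x >= bad, linear in between (for losses)."""
    return max(0.0, min(1.0, (bad - x) / (bad - good)))

def increasing(x, bad, good):
    """0 when x <= bad, 1 when x >= good, linear in between (for voltages)."""
    return max(0.0, min(1.0, (x - bad) / (good - bad)))

def fuzzy_score(loss_kw, branch_load_pct, min_bus_voltage_pu, load_imbalance):
    mu = [
        decreasing(loss_kw, good=100, bad=300),               # minimum power loss
        decreasing(branch_load_pct, good=60, bad=100),        # branch current loading
        increasing(min_bus_voltage_pu, bad=0.90, good=0.98),  # bus voltage magnitude
        decreasing(load_imbalance, good=0.05, bad=0.30),      # feeder load balance
    ]
    return min(mu)   # overall satisfaction = the most-violated objective

if __name__ == "__main__":
    # Two hypothetical radial configurations; the higher score wins.
    before = fuzzy_score(250, 95, 0.91, 0.25)
    after = fuzzy_score(140, 70, 0.95, 0.10)
    print(f"before: {before:.2f}, after reconfiguration: {after:.2f}")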

Title: FAST TWO DIMENSIONAL DIGITAL FILTER DESIGN BASED ON FAST FOURIER TRANSFORM
Author: MUZHIR SHABAN AL-ANI
Abstract: Digital signal processing, in one or two dimensions, is one of the most powerful fields shaping 21st-century science, engineering and technology. Digital filter design has become an important topic because of its wide range of applications; consequently, two-dimensional (2D) filter design is an important field of processing with direct application to digital image processing. The main objective of this work is to design an efficient, fast 2D digital filter that can be implemented via the fast Fourier transform. The motivation of this approach is to achieve an efficient, high-speed digital filter design, and the 2D digital filter is implemented so as to avoid the problems that would otherwise arise. The implemented approach allows the filter type and size to be selected accurately according to the application. Good filtering performance and processing time are achieved in testing the implemented approach, and the processing-time gain grows rapidly as the filter size increases, compared with direct convolution.
Keywords: 2D Filters, Filter Design, Real Time Filters, FFT, and 2D Convolution
Source: Journal of Theoretical and Applied Information Technology, 15th December 2017, Vol. 95, No. 23
Full Text
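
The core equivalence behind the paper, filtering as an element-wise product in the frequency domain, is sketched below against a brute-force circular convolution; the image and kernel sizes are arbitrary, and the speed gap widens as the kernel grows.

# Minimal sketch: 2D circular convolution via FFT versus direct summation.
import numpy as np

def fft_filter2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Circular 2D convolution computed as an element-wise product of FFTs."""
    K = np.fft.fft2(kernel, s=img.shape)       # zero-pad kernel to image size
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def direct_filter2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Same circular convolution, computed by explicit summation (slow)."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            for u in range(kh):
                for v in range(kw):
                    out[(i + u) % h, (j + v) % w] += img[i, j] * kernel[u, v]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    kernel = np.ones((5, 5)) / 25.0            # simple averaging filter
    assert np.allclose(fft_filter2d(img, kernel), direct_filter2d(img, kernel))
    print("FFT-based and direct 2D convolution agree")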