|
Submit Paper / Call for Papers
The journal receives papers in continuous flow and will consider articles
from a wide range of Information Technology disciplines, from the most basic
research to the most innovative technologies. Please submit your papers
electronically through our submission system at http://jatit.org/submit_paper.php
in MS Word, PDF, or a compatible format so that they may be evaluated for
publication in the upcoming issue. This journal uses a blinded review process;
you may include your personally identifiable information in the manuscript when
submitting it for review, and we will redact the necessary information on our
side. Submissions to JATIT should be full research / review papers (properly
indicated below the main title).
|
|
|
Journal of
Theoretical and Applied Information Technology
August 2018 | Vol. 96
No.16 |
Title: |
A NOVEL EARLY WARNING SYSTEM USING FUZZY MULTIPLE ATTRIBUTE DECISION MAKING
ALGORITHM AND METEOROLOGICAL DATA |
Author: |
MUSLIKHIN, FATCHUL ARIFIN, PONCO WALIPRANOTO, ARIF ASHARI |
Abstract: |
An early warning system (EWS) can, in principle, predict conditions accurately
at the limited positions of its units using sensors and additional data inputs.
In practice this is not easy, as it requires many subsystems, cross-platform
integration, and several fields of science. This paper attempts to realize such
an EWS by configuring additional input data, where both sensor data and
meteorological data are required to predict floods accurately. The purpose of
the system is to make decisions and determine the flooding area. To achieve
this goal, Decision Support System (DSS) techniques with primary and secondary
data are applied, the primary and secondary data serving as input to a Fuzzy
Multiple Attribute Decision Making (FMADM) algorithm. The prediction is based
on a weighted, normalized model intended to give an optimal result. The EWS is
equipped with sirens, short messages, websites, and an Android app to provide
monitoring and prediction information. The experiment was carried out using EWS
hardware mounted on streams, and the results indicated good system performance
within acceptable error. |
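The weight-and-normalize step the abstract describes is, in many FMADM systems, realized with Simple Additive Weighting (SAW). The sketch below is only an illustration of that general technique; the flood-risk attribute names, weights, and values are invented, not taken from the paper:

```python
def saw_score(alternatives, weights, benefit):
    """Simple Additive Weighting, a common FMADM ranking step:
    normalize each attribute column, then take the weighted sum."""
    n_attrs = len(weights)
    cols = list(zip(*alternatives))           # attribute columns
    norm = []
    for j in range(n_attrs):
        col = cols[j]
        if benefit[j]:                        # larger is better
            m = max(col)
            norm.append([x / m for x in col])
        else:                                 # smaller is better (cost)
            m = min(col)
            norm.append([m / x for x in col])
    # weighted sum per alternative
    return [sum(weights[j] * norm[j][i] for j in range(n_attrs))
            for i in range(len(alternatives))]

# Hypothetical flood-risk attributes per area:
# [rainfall_mm, water_level_cm, drainage_quality]; drainage is a "cost".
areas = [[120.0, 80.0, 3.0], [60.0, 40.0, 8.0]]
scores = saw_score(areas, weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
```

Areas can then be ranked by score to decide where to raise the alarm first.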
Keywords: |
Flood area prediction, FMADM, DSS, meteorological data, early warning system. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
AN EFFICIENT AUDIO ENCRYPTION BASED ON CHAOTIC LOGISTIC MAP WITH 3D MATRIX |
Author: |
YUSSRA MAJID HAMEED, NADA HUSSIEN M. ALI |
Abstract: |
The widespread popularity of the Internet and the fast developments in computer
technologies influence the expansion of electronic data exchange and digital
communications. Consequently, digital audio communication is used in daily life
activities such as banking, commerce, e-learning, military, education and
politics. As a result, a huge amount of critical audio data is exchanged
every day over shared and open networks. In consequence of the rapid growth of
data communications and digital audio, providing a high level of audio security
becomes of foremost importance. Chaotic maps have recently been used in
cryptography for large-scale data encryption such as text, image, video, and
audio data, due to their strong properties such as sensitivity to changes in
system control parameters and initial conditions, pseudo-randomness, and
aperiodicity. This paper presents a chaos-based audio confusion and diffusion
system. A symmetric block audio encryption approach is introduced, based on
substitution and shuffling using a chaotic logistic map with a 3D matrix. The
confusion process searches for the positions of the audio symbols in a 3D
matrix generated by the chaotic logistic map system, and these symbols are then
replaced accordingly. A shuffle mechanism is then applied to the positions of
the matrix depending on the system key. Moreover, the resulting confused audio
is passed to a diffusion mechanism, achieved by an exclusive-or operation
between a random value and a chaotic logistic map array, so that all of the
cipher audio is affected even if only one bit of an audio sample is changed.
The control parameters and initial conditions are extracted from the encryption
key, so the system is key sensitive. Further, parity is added to ensure
integrity. The Shannon entropy test, NIST tests, and security analysis show
that the suggested scheme is secure and can be used in audio encryption. |
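The scheme rests on iterating the logistic map x_{k+1} = r·x_k·(1 − x_k). As a hedged sketch of the diffusion (XOR) stage only, assuming 8-bit samples and an invented key; the paper's 3D-matrix substitution and shuffling stages are not reproduced here:

```python
def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes by iterating the chaotic logistic map
    x_{k+1} = r * x_k * (1 - x_k); key sensitivity comes from x0 and r."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_diffuse(samples, key):
    """Diffusion step: XOR 8-bit audio samples with the chaotic keystream.
    XOR is its own inverse, so the same call decrypts."""
    ks = logistic_keystream(key[0], key[1], len(samples))
    return bytes(s ^ k for s, k in zip(samples, ks))

audio = bytes(range(16))                      # toy 8-bit audio samples
cipher = xor_diffuse(audio, (0.3579, 3.99))   # invented (x0, r) key
plain = xor_diffuse(cipher, (0.3579, 3.99))
```

Note that certain (x0, r) pairs drive the logistic map into fixed points or short cycles; a real implementation would restrict r to the chaotic regime near 4 and discard transient iterations.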
Keywords: |
Logistic Map, Chaotic Systems, Confusion, Diffusion, Block Cipher, 3D-Matrix,
Shuffle, Encryption. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
A SURVEY: CHALLENGES OF IMAGE SEGMENTATION BASED FUZZY C-MEANS CLUSTERING
ALGORITHM |
Author: |
WALEED ALOMOUSH, AYAT ALROSAN, NORITA NORWAWI, YAZAN ALOMARI, DHEEB ALBASHISH,
AMMAR ALMOMANI, MOHAMMED ALQAHTANI |
Abstract: |
Image segmentation is the process of dividing an image into many segments that
comprise groups of pixels. In many real applications of image segmentation
there are issues such as limited spatial resolution, poor contrast, overlapping
intensities, noise, and intensity inhomogeneities. Semi- and fully automatic
image segmentation is a difficult and complicated process for several reasons,
such as the varying appearance of intensity levels, the patterns of objects
inside the image, overlap among different regions (segments), and partial
volume effects (noise level). The fuzzy c-means (FCM) algorithm is the most
popular method used in image segmentation due to its robustness to ambiguity.
However, the conventional FCM algorithm suffers from weaknesses such as the
initialization of cluster centers, determining the optimal number of clusters,
and sensitivity to noise. This paper reviews the challenges of FCM-based image
segmentation and describes how these FCM problems can be solved. |
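For reference, the conventional FCM iteration the survey discusses alternates a membership update and a center update. A small one-dimensional sketch with toy data (a real segmentation would use pixel intensities or feature vectors, and the deterministic spread initialization below is one of several common choices):

```python
def fcm(points, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data.
    Membership: u[i][k] = 1 / sum_j (d_ik / d_jk) ** (2/(m-1))
    Centers:    center_i = sum_k u[i][k]**m * x_k / sum_k u[i][k]**m"""
    lo, hi = min(points), max(points)
    centers = [lo + (hi - lo) * i / (c - 1) for i in range(c)]  # spread init
    u = []
    for _ in range(iters):
        u = []
        for i in range(c):
            row = []
            for x in points:
                d_i = abs(x - centers[i]) + 1e-12   # avoid division by zero
                s = sum((d_i / (abs(x - centers[j]) + 1e-12)) ** (2.0 / (m - 1.0))
                        for j in range(c))
                row.append(1.0 / s)
            u.append(row)
        centers = [sum(u[i][k] ** m * points[k] for k in range(len(points))) /
                   sum(u[i][k] ** m for k in range(len(points)))
                   for i in range(c)]
    return centers, u

# Two well-separated groups of grey-level "intensities".
centers, u = fcm([0.1, 0.2, 0.15, 0.9, 0.95, 1.0], c=2)
```

The sensitivity to initialization and noise that the survey reviews shows up exactly in the `centers` starting values and in the distance terms of the membership update.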
Keywords: |
Image Segmentation, Fuzzy Clustering, FCM, Metaheuristic Search Algorithms,
Fitness Functions |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
A SELF-ORGANIZING MAP ALGORITHM USING ONLY A TESTING DATA SET WITH THE
ONE-DIMENSIONAL VECTORS AND AN ODDS RATIO COEFFICIENT FOR ENGLISH SENTIMENT
CLASSIFICATION IN A PARALLEL SYSTEM |
Author: |
DR. VO NGOC PHU, VO THI NGOC TRAN |
Abstract: |
Many different approaches have already been studied for sentiment classification
for many years because It has been significant in everyday life, such as in
political activities, commodity production, and commercial activities. A new
model using an unsupervised learning for big data sentiment classification has
been proposed in this survey. We have used a Self-Organizing Map Algorithm (SOM)
to cluster all sentences of one document of the testing data set comprising
8,500,000 documents, which are the 4,250,000 positive and the 4,250,000 negative
in English, into either the positive polarity or the negative polarity
certainly. In this survey, we do not use any data sets. We do not any
one-dimensional vectors based on a vector space modeling (VSM). We also do not
use any multi-dimensional vectors based on the VSM. We only use many
one-dimensional vectors based on many sentiment lexicons of our basis English
sentiment dictionary (bESD). The valences and the polarities of the sentiment
lexicons of the bESD are calculated by using An Odds Ratio Coefficient (ORC)
through a Google search engine with AND operator and OR operator. We also do not
use many multi-dimensional vectors based on the sentiment lexicons of the bESD.
With one document of the testing data set, the SOM is used to cluster all the
sentences of this document into either the positive or the negative on a map.
The sentiment classification of this document is identified based on this map
completely. We have tested the proposed model in both a sequential environment
and a distributed network system. We have achieved 88.14% accuracy of the
testing data set. The execution of the proposed model in the sequential system
is greater than that in the parallel network environment. Many applications and
research of the sentiment classification can widely use the results of the
proposed model. |
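The clustering step can be illustrated with a tiny one-dimensional SOM over per-sentence valence scores. This is only a sketch under invented data; the paper's bESD lexicon valences are not reproduced, and the training schedule (one neighborhood pass, then BMU-only fine-tuning) is a simplification:

```python
def train_som(data, n_nodes=2, epochs=30, lr0=0.5):
    """Minimal 1-D self-organizing map on scalar inputs: each node keeps one
    weight; the best-matching unit (and, in the first epoch, its neighbors)
    moves toward each sample, so nodes end up ordered along the input range."""
    nodes = [min(data) + (max(data) - min(data)) * i / (n_nodes - 1)
             for i in range(n_nodes)]
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)          # decaying learning rate
        radius = 1 if e == 0 else 0            # shrinking neighborhood
        for x in data:
            bmu = min(range(n_nodes), key=lambda i: abs(x - nodes[i]))
            for i in range(n_nodes):
                if abs(i - bmu) <= radius:
                    nodes[i] += lr * (x - nodes[i])
    return nodes

def cluster(x, nodes):
    """Assign a valence to the nearest map node."""
    return min(range(len(nodes)), key=lambda i: abs(x - nodes[i]))

# Hypothetical per-sentence valences: negative sentences below 0, positive above.
valences = [-0.8, -0.6, -0.7, 0.5, 0.7, 0.9]
nodes = train_som(valences)
labels = [cluster(v, valences and nodes) for v in valences]
```

A document's polarity would then follow from which node wins the majority of its sentences.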
Keywords: |
English Sentiment Classification; Distributed System; Parallel System; Odds
Ratio Similarity Coefficient; Cloudera; Hadoop Map And Hadoop Reduce; Clustering
Technology; Self-Organizing Map |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
A FUZZY LOGIC BASED HYBRID APPROACH FOR DISEASE INTERPRETATION AND PREDICTION |
Author: |
MR. RAVI AAVULA, DR. R. BHRAMARAMBA |
Abstract: |
Data mining and data exploration in databases are attracting a great deal of
attention from analytics, research, industry, and the media these days. Despite
the growing number of machine-learning algorithms that have been developed,
implementing them effectively and practically is still much desired. In order
to help medical experts suggest a proper and efficient medical plan using the
predicted output of the built model, it is important to determine which
attribute variables have the most influence on the final outcome of cancer
patients' patterns. This paper presents a novel fuzzy logic based hybrid
approach for cancer disease interpretation and prediction. Early prediction and
localization of diseased cells can be useful in curing the illness in medical
applications. We performed experiments on the Breast Cancer Wisconsin Data Set
using our proposed method. The experimental analysis in a later section
demonstrates the efficiency of our proposed method, which is computationally
more efficient than existing methods and therefore suited even for massive
data sets in the biomedical field. |
Keywords: |
Knowledge discovery, Data mining, Machine learning, Medical data, Cancer
prognosis. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
ENGLISH SENTIMENT CLASSIFICATION USING A BIRCH ALGORITHM AND THE SENTIMENT
LEXICONS-BASED ONE-DIMENSIONAL VECTORS OF A GOWER-2 COEFFICIENT |
Author: |
DR. VO NGOC PHU, VO THI NGOC TRAN |
Abstract: |
Sentiment classification is significant in everyday life, such as in political
activities, commodity production, and commercial activities. In this survey, we
have proposed a new model for Big Data sentiment classification. We use a
Balanced Interative Reducing and Clustering using Hierarchies algorithm (BIRCH)
and many one-dimensional vectors basd on many sentiment lexicons of our basis
English sentiment dictionary (bESD) to cluser one document of our English
testing data set, which is 8,500,000 documents including the 4,250,000 positive
and the 4,250,000 negative based on our English training data set which is
5,000,000 sentences comprising the 2,500,000 positive and the 2,500,000
negative. We calculate the sentiment scores of English terms (verbs, nouns,
adjectives, adverbs, etc.) by using a GOWER-2 coefficient (G2C) through a Google
search engine with AND operator and OR operator. We do not use any
multi-dimensional vector. We also do not use any one-dimensional vector based on
a vector space modeling (VSM). We do not use any similarity coefficient of a
data mining field. The BIRCH is used in clustering one sentence of one document
of the testing data set into either the 2,500,000 positive or the 2,500,000
negative of the training data set. We tested the proposed model in both a
sequential environment and a distributed network system. We achieved 87.82%
accuracy of the testing data set. The execution time of the model in the
parallel network environment is faster than the execution time of the model in
the sequential system. The results of this work can be widely used in
applications and research of the English sentiment classification. |
Keywords: |
English Sentiment Classification; Distributed System; Parallel System; GOWER-2
Similarity Coefficient; Cloudera; Hadoop Map And Hadoop Reduce; Clustering
Technology; Balanced Iterative Reducing And Clustering Using Hierarchies
Algorithm. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
NLP AND IR BASED SOLUTION FOR CONFIRMING CLASSIFICATION OF RESEARCH PAPERS |
Author: |
KHALID M.O. NAHAR, NOUH ALHINDAWI, OBAIDA M. AL-HAZAIMEH, RA'ED M. AL KHATIB,
ABDALLAH M. AL AKHRAS |
Abstract: |
In this paper, an approach is presented for classifying and categorizing
research papers in a very accurate manner. Typically, papers are classified
into clusters based on their concepts and contents, and this clustering process
mainly depends on the title of the paper. However, many papers have an
ambiguous or very short title. Therefore, the researcher needs to cluster and
classify papers based not only on the title but also on other parts of the
paper, such as the abstract, the keywords, and perhaps some key sections. This
process is time consuming, since researchers spend a lot of time deciding the
related cluster of the paper at hand. Our presented approach provides an
automatic, fast, and accurate solution, which mainly depends on Information
Retrieval (IR) as the core process along with some Natural Language Processing
(NLP) techniques. Latent Dirichlet Allocation (LDA) and Latent Semantic
Indexing (LSI) are the two IR algorithms used in the new approach: LDA for
classifying the papers using the concept of topic modeling, and LSI for
performing querying. The new approach uses the title of the paper, the
abstract, and the keywords to perform the classification. Two distinct
experiments were conducted over 600 papers in the field of computer science.
The results show the efficiency of the proposed approach in classifying and
mapping the papers accurately and efficiently. |
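The querying side of such a pipeline is, at its core, vector-space retrieval. A minimal sketch using plain TF-IDF and cosine similarity; LSI itself would additionally project these vectors through an SVD, which is omitted here, and the toy corpus below is invented:

```python
import math
from collections import Counter

def build_index(docs):
    """TF-IDF vectors for tokenized documents: the vector-space core behind
    querying a paper against existing clusters."""
    df = Counter()
    for d in docs:
        df.update(set(d))                       # document frequency per term
    idf = {t: math.log(len(docs) / df[t]) + 1.0 for t in df}
    return [vectorize(d, idf) for d in docs], idf

def vectorize(tokens, idf):
    # terms unseen in the corpus get zero weight
    return {t: c * idf.get(t, 0.0) for t, c in Counter(tokens).items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy "papers" standing in for title + abstract + keywords.
corpus = ["image segmentation clustering fuzzy pixels".split(),
          "network routing protocol energy wireless".split()]
vecs, idf = build_index(corpus)
query = vectorize("fuzzy clustering of image regions".split(), idf)
best = max(range(len(vecs)), key=lambda i: cosine(query, vecs[i]))
```

The paper to be classified plays the role of `query`; the highest-scoring cluster representative wins.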
Keywords: |
NLP and Information Retrieval (IR), Classification; Topic Modeling; Latent
Dirichlet Allocation; Latent Semantic Indexing; Gensim |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
A NOVEL STEGANOGRAPHY METHOD BASED ON 4 DOMINATIONS STANDARD CHAOTIC MAP IN
SPATIAL DOMAIN |
Author: |
Dr. ANWAR ABBAS HATTAB, Dr. SADIQ A. MEHDI |
Abstract: |
The goal of steganography is to embed secret information in data that serves as
a cover, in such a way that non-participating users are not able to discover
the content of this information by inspecting the data. In this paper, a new
steganography method is suggested to hide text in a cover image dynamically,
combining cryptography with information hiding to achieve a high level of
security. In this method, chaos theory is used to randomly choose pixels in
the image based on initial control parameters. Some operations from number
theory are used to generate keys from the text's characters together with the
coordinates of the chosen pixel; these keys are used to transform the
characters stored at that location, as four dominations, to create diffusion
and confusion in the text and hide it in the cover image randomly. The
experimental results show the strength of this method in terms of Mean Square
Error (MSE) and Peak Signal to Noise Ratio (PSNR) when compared with other
methods. |
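The key-driven chaotic pixel selection can be sketched with a logistic map standing in for the paper's standard chaotic map. The embedding below is a deliberate toy (it writes one character code per chosen pixel rather than the paper's four-domination transformation), with an invented key:

```python
def logistic_positions(x0, r, n, width, height):
    """Chaotically pick n distinct (col, row) pixel positions by iterating
    the logistic map; the same key (x0, r) regenerates the same sequence."""
    x, seen, pos = x0, set(), []
    while len(pos) < n:
        x = r * x * (1.0 - x)
        p = int(x * width * height) % (width * height)
        if p not in seen:                  # skip repeats so pixels are distinct
            seen.add(p)
            pos.append((p % width, p // width))
    return pos

def embed(image, text, key=(0.7, 3.99)):
    """Toy embedding: store one character code per chosen pixel.
    A real method would spread bits (e.g. LSBs) and transform the characters."""
    img = [row[:] for row in image]
    positions = logistic_positions(*key, len(text), len(image[0]), len(image))
    for (cx, cy), ch in zip(positions, text):
        img[cy][cx] = ord(ch)
    return img

def extract(image, length, key=(0.7, 3.99)):
    positions = logistic_positions(*key, length, len(image[0]), len(image))
    return "".join(chr(image[cy][cx]) for cx, cy in positions)

cover = [[0] * 8 for _ in range(8)]        # toy 8x8 greyscale cover
stego = embed(cover, "hi")
```

Only a holder of the key can regenerate the pixel sequence and recover the text.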
Keywords: |
Steganography Method, Standard Chaotic Map, PSNR, MSE, Histogram. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
ENHANCEMENT OF GEOMETRICAL PREDICTIVE HANDOVER PROBABILITY BASED ON COVERAGE
SECTORS |
Author: |
ALSUBAIE, SULAIMAN MOHAMMED S, NOR EFFENDY OTHMAN, ROSILAH HASSAN |
Abstract: |
Mobile Internet Protocol (IP) handover refers to the seamless change of a
node's communication link from one access point to another. It is useful for
preventing disruption in communication sessions in general, and it has a
significant impact on the performance of vehicular networks in particular
because of how frequently it occurs there. This matters for different networks
in general and vehicular networks in particular, due to the high dependency of
vehicular and intelligent transportation applications on the Internet. Thus,
there is strong motivation to develop an efficient handover system for
vehicular networks. A problematic aspect of vehicular handover is the
inaccurate location information that may be provided to the handover process,
owing to the inaccurate Global Positioning System (GPS) signal in urban
environments, especially those occupied by tall structures. Hence, it is
essential to approach vehicular handover from the perspective of location
prediction, to assist in correctly predicting the next access point (AP). The
two issues of mobile IP handover are latency and the possible loss of packets.
Most previous studies have concentrated on the architectural aspect of mobile
IP to resolve this problem; despite the effectiveness of such solutions, they
do not directly target the latency caused by the handover. In this research, a
probability-based geometrical model is developed to predict the next AP using
logged information about the history of vehicle mobility on the road with
respect to the current AP. The methodology divides the coverage area around
each AP into a set of sectors and builds a dynamic probability table of
vehicle movement from one AP, at a particular sector, to the predicted AP. For
further performance improvement, a Kalman filter is incorporated into each
vehicle for accurate prediction of the vehicle's location in the coverage zone.
Simulation results show that our model outperformed the previous models in all
evaluation measures of network performance: Packet Delivery Ratio (PDR),
End-to-End delay (E2E delay), and overhead. The improvement in PDR due to the
number of sectors is nearly 26.15%, while the delay enhancement is up to 84.21%
in the zone of transition from one AP to another. Moreover, the achieved
improvement in overhead is 34.67%. |
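The Kalman-filter refinement amounts to a constant-velocity filter over noisy position fixes. A self-contained one-dimensional sketch with invented measurements (the paper's model is not reproduced; this only illustrates the predict/update cycle used to anticipate the next sector):

```python
def kalman_track(measurements, dt=1.0, q=0.01, r_noise=1.0):
    """Constant-velocity Kalman filter over noisy 1-D positions.
    State [pos, vel]; per step: predict x = F x, P = F P F^T + Q with
    F = [[1, dt], [0, 1]], then update with a scalar position measurement
    (H = [1, 0]). Returns filtered positions and the final state."""
    x = [measurements[0], 0.0]            # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = []
    for z in measurements:
        # predict
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # update
        s = P[0][0] + r_noise             # innovation variance
        k = [P[0][0] / s, P[1][0] / s]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out, x

# A vehicle advancing ~5 m per step, with jittery GPS fixes.
zs = [0.0, 5.5, 9.8, 15.2, 20.1, 24.9]
filtered, state = kalman_track(zs)
```

The recovered velocity estimate is what lets the handover logic extrapolate the vehicle into the next coverage sector before the GPS fix confirms it.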
Keywords: |
Handover, Vehicular Network, Predictive Handover, Mobile IP, Coverage Sector |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
OPTIMAL CONTROL OF MULTI-CLASS MULTI-SERVER QUEUEING SYSTEM |
Author: |
ALI MADANKAN, ALI DELAVARKHALAFI |
Abstract: |
We consider Markovian multi-server queues with two classes of customers, high-
and low-priority, and present a framework for a control problem for such a
queueing system. Many authors have used Brownian control problems (BCPs) as
formal diffusion limits, and BCPs are also used for queueing network control
problems. In this paper, we likewise use a formal diffusion limit to control a
queueing system, so that our problem becomes a control problem with Brownian
motion dynamics. In a related but simpler problem, a minimum trajectory is
obtained as the solution of a one-dimensional stochastic differential equation,
and the multi-dimensional problem then follows. |
Keywords: |
Optimal Control, Brownian Control, Queuing System. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
FUTURE OF MODIFICATIONS ON THE HUMAN BODY ACCORDING TO SCIENCE FICTION: WETWARE
AND THE CYBORG ERA |
Author: |
DR SHAZIA ZAHEER, V. GNEVASHEVA, S. BUTT |
Abstract: |
Science Fiction (Sci-Fi) brings several examples of modifications made in the
human body, each having different goals in mind — it may be either improving or
compromising intellectual, physical, or psychological abilities. Lately, with
consistent advancements in the Health field, mostly brought about by e-Health
startups and the interdisciplinary combination of Biology, Medicine, Computer
Science, and Engineering, many of the modifications seen on the big screen have
become a reality, albeit from a weak-signal point of view and not yet as
mainstream solutions to health issues.
Sci-Fi works are abundant and take the form of movies, animes, mangas, and
books, filtering all of those would be a herculean job. Hence, for this paper,
only movies and animes were assessed, according to precepts established in the
Methodology section. Taking our society’s progress into consideration, the aims
of this work are twofold: (i) knowing to what extent there has already been real
scientific progress with regard to science fiction scenarios and predictions of
human body transformations; (ii) understanding the repercussions of humans
undergoing such modifications applied to several fields, such as Economics,
Sociology and Ethics, pinpointing scenarios that should be discussed in
preparation for future changes. |
Keywords: |
Wetware, Science Fiction, Human augmentation, Cyborgs, Future wheels |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
ALG CLUSTERING TO ANALYZE THE BEHAVIOURAL PATTERNS OF ONLINE LEARNING STUDENTS |
Author: |
AGUNG TRIAYUDI, ISKANDAR FITRI |
Abstract: |
In this paper, we describe searching for students' behavior patterns using a
new Agglomerative Hierarchical Clustering (AHC) method called ALG (Average
Linkage Dissimilarity Increment Distribution - Global Cumulative Score
Standard). The dataset was taken from 1,523 student posts. The calculation
yielded 8 student behavior patterns obtained from 12 primary clusters. Cluster
evaluation using the silhouette coefficient (S) produced a highest value of
0.9464, and evaluation using the cophenetic correlation coefficient (CPCC)
produced a highest value of 0.9925. |
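The silhouette coefficient used for the cluster evaluation can be computed as follows. The one-dimensional toy data below is invented, not the paper's 1,523-post dataset:

```python
def silhouette(points, labels):
    """Mean silhouette coefficient s = (b - a) / max(a, b), where a is the
    mean intra-cluster distance of a point and b its mean distance to the
    nearest other cluster; values near 1 mean tight, well-separated clusters."""
    def dist(p, q):
        return abs(p - q)
    scores = []
    for i, p in enumerate(points):
        own = [dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own) if own else 0.0
        bs = []
        for lab in set(labels) - {labels[i]}:
            other = [dist(p, q) for j, q in enumerate(points) if labels[j] == lab]
            bs.append(sum(other) / len(other))
        b = min(bs)                        # nearest other cluster
        scores.append(0.0 if max(a, b) == 0 else (b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two tight, well-separated groups of per-student activity counts (toy data).
pts = [1.0, 1.2, 1.1, 9.0, 9.2, 9.1]
good = silhouette(pts, [0, 0, 0, 1, 1, 1])   # matches the structure
bad = silhouette(pts, [0, 1, 0, 1, 0, 1])    # scrambles it
```

A high S, like the paper's 0.9464, indicates that the clustering respects the structure of the data.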
Keywords: |
AHC, ALG, Dataset, Cluster, CPCC. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
SYSTEMATIC LITERATURE REVIEW FOR MALWARE VISUALIZATION TECHNIQUES |
Author: |
PRITHEEGA MAGALINGAM, GANTHAN NARAYANA SAMY, WAFA MOHD KHAIRUDIN, MOHD FIRHAM
EFENDY MD SENAN, ASWAMI FADILLAH BIN MOHD ARIFFIN, ZAHRI HJ YUNOS |
Abstract: |
Analyzing the activities or behaviors of malicious scripts depends highly on
the extracted features. It is also important to know which features are more
effective for certain visualization types. Similarly, selecting an appropriate
visualization technique plays a key role in descriptive, diagnostic,
predictive, and prescriptive analytics. Thus, the visualization technique
should provide understandable information about the malicious code's
activities. This paper follows the systematic literature review method to
review the extracted features used to identify malware, the different types of
visualization techniques, and guidelines for selecting the right visualization
technique. An advanced search was performed in the most relevant digital
libraries to obtain potentially relevant articles. The results identify
significant resources and the types of features that are important for
analyzing malware activities, the visualization techniques currently in common
use, and methods for choosing the right visualization technique in order to
analyze security events effectively. |
Keywords: |
Visualization Technique, Malware, Features, Analytics, Security Event |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
EFFECTIVE IDEA MINING TECHNIQUE BASED ON MODELING LEXICAL SEMANTIC |
Author: |
MOSTAFA ALKSHER, AZREEN AZMAN, RAZALI YAAKOB, EISSA M. ALSHARI, RABIAH ABDUL
KADIR, ABDULMAJID MOHAMED |
Abstract: |
Automatic extraction of hidden ideas from texts is extremely important, as it
would help decision makers identify and retrieve significant information that
could be used to solve current problems. However, adequate measurements need
to be utilized to verify candidate ideas. In existing idea mining measurement
research, a well-balanced measurement is used to measure the distribution of
the numbers of known and unknown terms from the idea text and the context text,
in order to find useful ideas within a text pattern. The existing models do not
take into consideration the relationships between these terms, which may share
one or more semantic components; this leads to a limited characterization of
potential ideas. Therefore, this paper proposes an improvement to the idea
mining model by considering the semantic relationships among terms based on
synonyms, using WordNet. The effectiveness of the proposed model is evaluated
on a dataset consisting of 50 randomly selected abstracts of scientific
articles. Based on the results, the proposed model showed an improvement of
28.4% in the performance of the idea mining model. |
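The synonym-based extension can be sketched as a known/unknown term balance where synonym matches also count as "known". The scoring rule below is a simplification of the general idea, not the paper's exact measurement, and a tiny hand-written synonym map stands in for WordNet:

```python
def idea_score(idea_terms, context_terms, synonyms=None):
    """Toy idea-mining measure: count an idea term as 'known' if it, or any
    of its synonyms, appears in the context text; then score the balance of
    known vs. new terms (a useful idea mixes familiar and novel vocabulary)."""
    synonyms = synonyms or {}
    context = set(context_terms)
    known = 0
    for t in idea_terms:
        if t in context or context & set(synonyms.get(t, ())):
            known += 1
    new = len(idea_terms) - known
    # peaks at 1.0 when known and new terms are perfectly balanced
    return min(known, new) / max(known, new) if max(known, new) else 0.0

syn = {"automobile": ["car", "vehicle"]}   # illustrative map, not WordNet
ctx = "the car engine uses fuel".split()
score = idea_score(["automobile", "electric", "engine", "battery"], ctx, syn)
```

Without the synonym map, "automobile" would be counted as new and the balance (and thus the score) would shift, which is exactly the characterization gap the paper addresses.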
Keywords: |
Idea mining, Information retrieval, WordNet, text pattern, text mining. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
CONCEPTUALISING IT CONSULTING SERVICES: AN APPROACH FROM IT-BUSINESS ALIGNMENT
MODELS AND DESIGN SCIENCES |
Author: |
FRANCISCO MACIA PEREZ, JOSE VICENTE BERNA MARTÍNEZ, CARLOS RAMON LOPEZ PAZ,
JOSE MANUEL SANCHEZ BERNABEU |
Abstract: |
The constant integration of business and manufacturing processes is a difficult
task that can be facilitated through IT consulting services. However, if these
services do not adequately address the problems of alignment between IT and
business, efficiency in integration can be seriously compromised. This
article presents a methodology that systematises IT consulting services for the
acquisition, incorporation, and integration of IT elements in an organisation
in such a way that they are aligned with the business, while contemplating
their contribution to the value chain. The proposal is based on a set of rules,
methods, guidelines, patterns, and artefacts that define a flow of action and
implement a strategy that provides a consulting solution as a final result.
Likewise, a method is proposed to evaluate the applied methodology and the
solution obtained. To validate the method, a set of business processes based on
a case study of several Cuban companies in the food sector has been defined to
help adjust the parameters and corroborate the generalisability of the
proposal. This research helps ensure the alignment of business and IT,
avoiding failures in the incorporation of IT into companies. It also analyses
and establishes the ideal artefacts for IT consulting and generates an IT
consulting methodology that makes consultants' analyses more robust, in order
to guarantee the success of incorporating IT into companies. |
Keywords: |
IT Consulting Services, IT Alignment Models, Business Modelling, BSC-IT, BPM |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
EXPLORATION STUDY OF CERTIFICATE POLICY AND CERTIFICATION PRACTICE STATEMENT
DESIGN FOR CERTIFICATION AUTHORITIES IN INDONESIA |
Author: |
YOVA RULDEVIYANI, ARFIVE GANDHI, YUDHO GIRI SUCAHYO |
Abstract: |
A Certification Authority (CA) must unveil its Certificate Policy (CP) and
Certification Practice Statement (CPS) as obligatory and fundamental documents
describing its technical information security, business processes, and legal
compliance. Although initiated in 2014, the Indonesia National Public Key
Infrastructure (INPKI) still cannot be operated completely by the Root CA,
Sub-CAs, and other involved participants. This situation is affected by the
CAs' inability to produce adequate CPs and CPSs that cover the necessary
information described above. As the Root CA in the INPKI, the Ministry of
Communication and Information Technology (MCIT) should propose a CP and CPS
for itself and also provide a CP and CPS framework for its Sub-CAs.
Previously, Sub-CAs have confronted difficulties in proposing their CP and CPS
due to their low proficiency. Using the concept of knowledge management, MCIT
needs to regulate and educate the Sub-CAs, and itself as Root CA, by proposing
the CP and CPS as knowledge transfer and guidelines. The proposed CP and CPS
become empirical externalization and internalization, so that each CA can
compose its own CP and CPS with content that covers the required issues. This
research explores how CAs in the INPKI formulate their CPs and CPSs based on
Request for Comments (RFC) 3647, from a larger point of view. The exploration
aims to examine and critique whether the proposed CP and CPS are adequate to
support CA readiness and the preparation of the INPKI, and it contributes
significantly to the preparation of CPs and CPSs: the resulting documents will
be better qualified and enhanced in unveiling the information necessary to
obtain trustworthiness in three aspects: governance, technical, and human
resource requirements. |
Keywords: |
Certification authority; Certificate policy; CP; Certification practice
statement; CPS; Information security; Public Key Infrastructure |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
ROUTING PROTOCOL FOR SDN-CLUSTER BASED MANET |
Author: |
AHMED JAWAD KADHIM, SEYED AMIN HOSSEINI SENO, RANA ALI SHIHAB |
Abstract: |
Mobile Ad hoc NETwork (MANET) is a network of mobile nodes that connect with
each other through wireless interfaces without any infrastructure. These nodes
have limited energy and move freely from one location to another. Software
Defined Networking (SDN) is a new architecture consisting of data and control
planes. It was introduced to extend the capabilities of traditional network
architecture. Moreover, it plays a big role in saving energy by selecting the
optimal path with minimum energy consumption, or the path whose intermediate
nodes have the highest remaining energy. The mobility of MANET nodes makes the
routing process very difficult and sometimes requires all nodes to participate
in this process, which leads to high overhead and energy consumption.
Therefore, a special routing protocol is needed to resolve these problems. The
aim of this work is to design a routing protocol called SDN-Cluster Based
Routing Protocol (S-CBRP) to enhance the route building/rebuilding process and
increase the lifetime of the MANET by selecting the optimal path to the target
node with minimum energy consumption, taking into account the nodes' remaining
energy and delay constraints. The proposed architecture depends on implementing
an SDN agent in each cluster head node to work as a local SDN controller
managing one or more clusters. All the local controllers connect to the central
SDN controller that manages the entire network. Also, the full dump and
incremental transmission approaches are used to decrease the energy consumption
and overhead of sending the cluster information to the central SDN controller.
The results demonstrate that S-CBRP is better than FF-AOMDV in terms of energy
consumption, network overhead, average source-to-destination delay and packet
delivery ratio. |
Keywords: |
Cluster-Based-MANET, SDN, Local Controller, SDN Agent, Network Lifetime |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
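The energy-aware path selection that S-CBRP builds on can be illustrated with a
minimal sketch. This is not the authors' protocol: the graph, per-link energy
costs, residual-energy values and the `min_residual` threshold below are all
hypothetical, and a real deployment would run this logic on the SDN
controllers.

```python
import heapq

def min_energy_path(graph, energy, source, target, min_residual=0.2):
    """Dijkstra over per-link transmission energy, skipping relay nodes
    whose residual battery level is below `min_residual`.
    graph: {node: {neighbor: energy_cost}}; energy: {node: residual in 0..1}."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, cost in graph[u].items():
            # Intermediate relays must have enough remaining energy.
            if v != target and energy[v] < min_residual:
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None, float("inf")
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[target]

# Toy 4-node MANET: B is nearly depleted, so A->C->D is chosen even though
# the unconstrained minimum-energy path would relay through B.
graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1, "D": 5},
         "C": {"A": 4, "B": 1, "D": 1}, "D": {"B": 5, "C": 1}}
energy = {"A": 1.0, "B": 0.1, "C": 0.9, "D": 0.8}
path, cost = min_energy_path(graph, energy, "A", "D")
```

Dropping the residual-energy filter turns this back into a plain
minimum-energy route; the filter is what trades a little per-route energy for
longer network lifetime.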
Title: |
SECURITY REQUIREMENTS ELICITATION AND CONSISTENCY VALIDATION: A SYSTEMATIC
LITERATURE REVIEW |
Author: |
NURIDAWATI MUSTAFA, MASSILA KAMALRUDIN, SAFIAH SIDEK |
Abstract: |
Security requirements are important in developing secure software. Objectives:
This study aims to identify the properties of security requirements for
developing secure software and to analyse the existing works on requirements
validation. The gaps and limitations of each approach are also discussed.
Method: A systematic literature review was conducted to identify and analyse
related literature on the elicitation of security requirements for developing
secure software. Findings: There are four results: (1) the security properties
most highly considered for developing secure software are “Confidentiality”,
“Integrity”, “Identification & Authentication”, and “Availability”; (2) the
approaches to validating security requirements are controlled user
experiments, tools and manual checklists; (3) the security references used are
the NIST, the Common Criteria and the ISO/IEC standards; and (4) security
requirements templates and consistency checking are used. Finally, the gaps
and limitations of the existing works are discussed. Conclusion: The primary
challenge during elicitation is to write correct security requirements and to
validate their consistency. As such, requirements engineers should consider
the challenges posed by security requirements when eliciting and validating
them. |
Keywords: |
Security Requirements, Consistency Management, Security Requirements Validation,
Security Requirement Engineering, Secure Software |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
XEW 2.0: ESTABLISHMENT OF A NEW COMPETITIVE INTELLIGENCE SYSTEM FOR BIG DATA
ANALYTICS |
Author: |
AMINE EL HADDADI, ANASS EL HADDADI, ABDELHADI FENNAN, BERNARD DOUSSET |
Abstract: |
Competitive Intelligence is the strategic management of information, which
aims to support collaborative decision-making. In other words, competitive
intelligence is a mapping of the surrounding business environment. Nowadays,
every organization needs a competitive intelligence system in order to enhance
its position in the market, or simply to survive, as well as to be able to
track every single change and provide the right response to it in real time.
In this paper we propose a new Competitive Intelligence System for Big Data
Analytics (CIS-BG: XEW 2.0). |
Keywords: |
Competitive Intelligence System; Big Data Analytics, Big Data Visualisation; XEW
2.0 |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
MALWARE PREDICTION ALGORITHM: SYSTEMATIC REVIEW |
Author: |
MOHD NAZRI MAHRIN, SURIAYATI CHUPRAT, ANUSUYAH SUBBARAO, ASWAMI FADILLAH MOHD
ARIFFIN, MOHD ZABRI ADIL TALIB, MOHAMMAD ZAHARUDIN AHMAD DARUS, FAKHRUL AFIQ ABD
AZIZ |
Abstract: |
Malware poses a security threat that can harm networks and computers. Not only
can malware damage systems; it can also cripple a country when, for example,
its defense system is affected by malware. Even though many tools and methods
exist, breaches and compromises are in the news almost daily, showing that the
current state of the art can be improved. Hundreds of unique malware samples
are collected on a daily basis. Currently, the available information on
malware detection is abundant. Much of this information describes the tools
and techniques applied in the analysis and reports the results of malware
detection, but not much covers the prediction of malware development
activities. However, in combating malware, predicting malware behavior or
development is as crucial as removing the malware itself. This is because
malware prediction provides information about the rate of development of
malicious programs, which gives system administrators prior knowledge of the
vulnerabilities of their system or network and helps them determine the types
of malicious programs that are most likely to taint their system or network.
Thus, it is imperative that the techniques for predicting malware activities
be studied and their strengths and limitations understood. For that reason, a
systematic review (SR) was conducted through a search of 5 databases, and 89
articles on malware prediction were finally included. These 89 articles were
reviewed and then classified by the techniques proposed for detecting new
malware, the identified potential threats, the tools used for malware
prediction, and the malware datasets used. Consequently, the findings from the
systematic review can serve as the basis for a future malware prediction
algorithm, as malware prediction has become a critical topic in computer
security. |
Keywords: |
Malware Prediction Techniques, Computer Security, Potential Threats, Malware,
Malware Datasets |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
A SYSTEMATIC STAKEHOLDERS AND TECHNIQUES SELECTION FRAMEWORK FOR SOFTWARE
REQUIREMENTS ELICITATIONS |
Author: |
FARES ANWAR, ROZILAWATI RAZALI |
Abstract: |
Requirements elicitation is the most critical phase in software requirements
engineering. The process is resource intensive, as it involves a number of
dedicated stakeholders who are deliberately gathered to confer and stipulate
software requirements. The effectiveness of the process is greatly influenced
by the suitability of the stakeholders involved and the elicitation techniques
used to gather the requirements. Previous studies indicate that improper
stakeholder identification and technique selection normally lead to an
unsuccessful requirements elicitation process. Such failures later cause
serious impacts on projects, such as costly rework, schedule overruns and poor
quality software. Furthermore, the advancement of technology has introduced
various requirements elicitation techniques. The merits of the existing
technique options, however, are not always obvious, and it is unclear how to
select the right elicitation techniques for specific situations under certain
constraints. This study addresses this issue by proposing a framework for
selecting the suitable stakeholders and elicitation techniques to be used in
the requirements elicitation process of a particular project. The study adopts
qualitative data collection and analysis: the qualitative data were captured
through individual and focus group interviews with experts. Through the
analysis, the study formulates a set of criteria for choosing the right
stakeholders, which later act as the conditions for determining the suitable
elicitation techniques. In addition to the stakeholders’ characteristics, the
study also considers technique features, requirements sources and project
characteristics as conditions for choosing the elicitation techniques. The
criteria and conditions form the systematic stakeholder and elicitation
technique selection framework. The framework helps project managers decide the
appropriate stakeholders and elicitation techniques to be employed based on
the stakeholder characteristics and project constraints. |
Keywords: |
Stakeholders Selection; Techniques Selection; Requirements Elicitation. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
INTRUSION DETECTION MECHANISM USING FUZZY RULE INTERPOLATION |
Author: |
MOHAMMAD ALMSEIDIN, SZILVESZTER KOVACS |
Abstract: |
Fuzzy Rule Interpolation (FRI) methods can produce deducible (interpolated)
conclusions even in cases where some situations are not explicitly defined in
a fuzzy rule-based knowledge representation. This property can be beneficial
in partially heuristically solved applications, where the efficiency of expert
knowledge representation is combined with the precision of machine learning
methods. The goal of this paper is to introduce the benefits of FRI in the
Intrusion Detection System (IDS) application area, through the design and
implementation of a detection mechanism for Distributed Denial of Service
(DDoS) attacks. In the example of the paper, an open-source DDoS dataset and
the GNU General Public License FRI Toolbox were applied as the test-bed
environment. The performance of the FRI-IDS example application is compared to
other common classification algorithms used for detecting DDoS attacks on the
same open-source test-bed environment. According to the results, the overall
detection rate of the FRI-IDS is on par with the other methods. On the example
dataset it outperforms the detection rate of the support vector machine
algorithm, whereas the other algorithms (neural network, random forest and
decision tree) recorded slightly higher detection rates. Consequently, the FRI
inference system could be a suitable approach to implement as a detection
mechanism for IDS; it effectively decreases the false positive rate. Moreover,
because of its fuzzy rule-based knowledge representation, it can easily
accommodate expert knowledge and is also suitable for predicting the degree of
threat possibility. |
Keywords: |
Fuzzy Rule Interpolation, Inference System, Intrusion Detection System, DDOS
Attack, Detection Mechanism |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
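The core idea of fuzzy rule interpolation, deriving a conclusion for an
observation that no rule covers exactly, can be sketched in a drastically
simplified form. This is not the FRI Toolbox or the paper's inference system:
antecedents and conclusions are reduced to singleton values, and the
traffic-score rule base below is hypothetical.

```python
def interpolate_conclusion(x, rules):
    """Linearly interpolate a conclusion for an observation `x` that falls
    between the antecedents of two known rules.
    rules: list of (antecedent, conclusion) pairs with singleton values.
    Returns a threat level even where no rule fires exactly."""
    rules = sorted(rules)
    # Outside the covered range, clamp to the nearest rule's conclusion.
    if x <= rules[0][0]:
        return rules[0][1]
    if x >= rules[-1][0]:
        return rules[-1][1]
    for (a1, b1), (a2, b2) in zip(rules, rules[1:]):
        if a1 <= x <= a2:
            t = (x - a1) / (a2 - a1)  # relative distance to the flanking rules
            return b1 + t * (b2 - b1)

# Hypothetical rule base mapping a traffic anomaly score to a threat level.
rules = [(0.0, 0.0), (0.5, 0.2), (1.0, 1.0)]
level = interpolate_conclusion(0.75, rules)  # approximately 0.6
```

A full FRI method (e.g. KH interpolation) works with fuzzy sets rather than
singletons, but the sparse-rule-base behaviour, a graded conclusion between
the two nearest rules, is the same.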
Title: |
PREDICTION OF BREAST CANCER RECURRENCE USING MODIFIED KERNEL BASED DATA
INTEGRATION MODEL |
Author: |
ARIDA FERTI SYAFIANDINI, ITO WASITO, RATNA MUFIDAH, IONIA VERITAWATI, INDRA BUDI |
Abstract: |
Analysis of early cancer prognosis is necessary to determine the proper
treatment for each patient. However, since microarray DNA data are high
dimensional, this analysis is a challenging task. Several studies on high
dimensionality reduction have been conducted to determine significant genes
with the least error in cancer classification. Some of those studies implement
a mining process such as feature selection using parametric and non-parametric
statistical tests. Besides feature selection, data integration is also
believed to be an effective way of increasing cancer classification
performance. In this paper, a dataset containing gene expression values and
clinical parameters observed from 60 breast cancer patients is used for the
experiment. The experiment consists of integrating the data using an early
kernel-based data integration model with a modification in its dimensionality
reduction step. Existing related research uses kernel dimensionality
reduction; in this paper, a mining process using several parametric and
non-parametric statistical tests is used as the replacement for kernel
dimensionality reduction. The last step in kernel-based data integration is
classification using a Support Vector Machine (SVM). A ten-fold
cross-validation scheme is used in the experiment. SVM with a linear kernel
gives the best accuracy rate compared to the other kernels. |
Keywords: |
Recurrent Cancer, Data Integration, Kernel Method, Kernel Dimensionality, Gene
Expressions |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
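The early kernel-based integration step described above can be sketched as
follows: each data source yields its own Gram matrix, and a weighted sum of
those matrices is again a valid kernel that a kernel classifier such as an SVM
can consume. The toy patient vectors and the uniform weights are hypothetical;
the paper's actual pipeline additionally applies statistical feature selection
and ten-fold cross-validation.

```python
def linear_kernel(X):
    """Gram matrix K[i][j] = <x_i, x_j> for one data source."""
    return [[sum(a * b for a, b in zip(xi, xj)) for xj in X] for xi in X]

def integrate_kernels(kernels, weights=None):
    """Early kernel-based integration: a weighted sum of per-source Gram
    matrices. The sum of positive semidefinite kernels is itself a kernel."""
    n = len(kernels[0])
    if weights is None:
        weights = [1.0 / len(kernels)] * len(kernels)
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

# Toy patients: two gene-expression features and one clinical feature each.
gene = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
clinical = [[2.0], [2.0], [0.0]]
K = integrate_kernels([linear_kernel(gene), linear_kernel(clinical)])
```

The integrated matrix `K` could then be handed to any SVM implementation that
accepts a precomputed kernel, which is the classification step the abstract
describes.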
Title: |
REAL-TIME WAREHOUSE ARCHITECTURE PROPOSAL FOR OIL AND GAS INDUSTRY |
Author: |
AYAH F. LEERI, MOHAMMAD A. ALKANDARI |
Abstract: |
The oil, gas and resources industry plays a dominant role as an income source
for many major companies and countries. To increase income by raising
production, the large volumes of real-time data captured need to be analyzed,
and decisions need to be taken rapidly, to optimize well performance and
increase production. Currently, the captured data are stored in various
systems which forward different tags to multiple applications. In this paper,
we study the requirements and nature of the oil, gas and resources industry.
We then propose an architecture for a centralized real-time warehouse that
consolidates all data (real-time and static) gathered from different resources
such as wells, pipelines and other production-related information. The aim of
this paper is to provide a green, scalable and secure infrastructure for data
storage and data analysis, taking into consideration the nature of the oil,
gas and resources environment. |
Keywords: |
Software Architecture, Real-Time Warehouse Architecture, Data Warehouse
Architecture, Software Requirements, service oriented architecture |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
RECENT PROGRESS OF FACTORS INFLUENCING INFORMATION TECHNOLOGY ADOPTION IN LOCAL
GOVERNMENT CONTEXT |
Author: |
FAIZURA HANEEM, NAZRI KAMA |
Abstract: |
Information Technology (IT) adoption is increasingly being studied in many
different contexts, in both the public and private sectors. However, not many
review papers have been published on IT adoption specifically in the local
government context. Local governments have unique characteristics in terms of
organizational structure, the power of authority, norms and culture. Hence,
the primary aim of this study was to review recent literature, from 2013 to
2017, on IT adoption at the organizational level in a local government
context. We conducted our review using relevant keyword searches in the
Scopus, Web of Science, Emerald and Springer Link databases, covering
journals, proceedings, books and book chapters. The search identified 715
publications during the initial stage using the snowballing technique.
Thereafter, 22 relevant publications were selected during the quality
assessment stage. Within the context of local government, this review presents
analyses of the progress of IT adoption research, the research domains, the
research methodology and the factors influencing IT adoption. This study
identified 37 factors of IT adoption in the local government context,
categorized into four main dimensions: Technological, Organizational,
Individual and Environmental (T-O-I-E). Surprisingly, policy and regulations,
top management support, relative advantage, cost, governance, personnel skills
and citizen demand emerged as among the most influential factors for IT
adoption in the context of local governments. The results from this study will
help other researchers understand the current state of IT adoption in the
local government context in terms of research domains, research methodology,
and the factors influencing IT adoption. |
Keywords: |
IT Adoption, Local Governments, Local Authorities, Review |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
THE APPLICATION OF ENTROPY-ROV METHODS TO FORMULATE GLOBAL PERFORMANCE FOR
SELECTING THE AUTOMOTIVE SUPPLIERS IN MOROCCO |
Author: |
MOUNA EL MKHALET, SOULHI AZIZ, RABIAE SAIDI |
Abstract: |
In Morocco, the automotive sector is constantly improving because it occupies
an important place in the Moroccan economy; this is why our research is set in
this context, in order to add scientific value. The study we are conducting
proposes, firstly, the selection of the best AKPIs (Appropriate Key
Performance Indicators) in an objective and utilitarian way, by applying the
combined ENTROPY-ROV (Range Of Value) method to the Moroccan automotive
sector, and then derives a formula for Global Performance. Secondly, this
formula is used to select the best supplier in the Moroccan automotive
industry. The results obtained by applying Entropy-ROV show that the
highest-weighted AKPIs are Machine Availability and the Number of Occupational
Injuries, which respectively correspond to the key success factors Efficiency
of Production Systems, and Health and Safety; these require improvement on the
part of Moroccan companies in the automotive sector. We also find that the
calculation of Global Performance for the suppliers shows that the best
supplier is supplier 2, which ranks first among the others. In addition, when
we compare the results obtained for the priority of the AKPIs and the choice
of suppliers by the Entropy-ROV method in our research to those of Chahid et
al. [28], who used the AHP method, we note that they are not the same.
Finally, our contribution is to use for the first time a scientific,
quantitative and mathematical multi-criteria evaluation method, the combined
ENTROPY-ROV method, in the automotive industry in Morocco, in order to select
the best key success factors and to evaluate the suppliers in an objective and
utilitarian way. |
Keywords: |
Entropy, ROV, Supplier Selection, Appropriate Key Performance Indicator (AKPI),
Moroccan Automotive Industry |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
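A minimal sketch of the combined Entropy-ROV computation may help: entropy
yields objective criterion weights from the decision matrix itself, and ROV
scores each alternative from min-max normalized values (inverted for cost
criteria). The supplier matrix below is hypothetical, not the paper's AKPI
data, and the single weighted utility used here ranks alternatives in the same
order as the usual ROV midpoint of benefit and cost utilities.

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from Shannon entropy.
    matrix[i][j]: performance of alternative i on criterion j (positive)."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergence.append(1.0 - e)  # more spread => more weight
    s = sum(divergence)
    return [d / s for d in divergence]

def rov_utilities(matrix, weights, benefit):
    """Range-of-Value scoring: min-max normalise each criterion (inverted
    for cost criteria), then take the weighted sum per alternative."""
    n = len(matrix[0])
    cols = [[row[j] for row in matrix] for j in range(n)]
    bounds = [(min(c), max(c)) for c in cols]
    utils = []
    for row in matrix:
        u = 0.0
        for j, (lo, hi) in enumerate(bounds):
            if hi == lo:
                norm = 0.0
            elif benefit[j]:
                norm = (row[j] - lo) / (hi - lo)
            else:
                norm = (hi - row[j]) / (hi - lo)
            u += weights[j] * norm
        utils.append(u)
    return utils

# Hypothetical suppliers x AKPIs: machine availability (%) is a benefit
# criterion, occupational injuries a cost criterion.
matrix = [[90, 5], [95, 2], [80, 8]]
benefit = [True, False]
weights = entropy_weights(matrix)
utils = rov_utilities(matrix, weights, benefit)
```

Here supplier 2 (index 1) dominates on both criteria, so its utility is 1.0
and it ranks first, mirroring the kind of conclusion the abstract reports.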
Title: |
APPROACH USING INTERPRETIVE STRUCTURAL MODEL (ISM) TO DETERMINE KEY SUB-FACTORS
AT FACTORS: BENEFITS, RISK REDUCTIONS, OPPORTUNITIES AND OBSTACLES IN AWARENESS
IT GOVERNANCE |
Author: |
UKY YUDATAMA, ACHMAD NIZAR HIDAYANTO, BOBBY A.A NAZIEF |
Abstract: |
This study aims to identify the important factors and sub-factors in IT
Governance awareness. This is necessary because awareness has a major
influence on the successful implementation of IT Governance within an
organization. The data were collected through interviews with 3 competent
experts in the field of IT Governance and were then processed using the
Interpretive Structural Model (ISM). This method is considered very effective
for obtaining the hierarchical structure and the relationships between the
factors and sub-factors. The final results of this study identify several
important factors in IT Governance awareness: benefits, risk reduction,
opportunities, and obstacles. These four factors are divided into 14
sub-factors, and the ISM analysis identified (a) differences in viewpoint
about business and IT objectives, (b) data ownership that is still tied to
individual sections, and (c) lack of technical knowledge as the three key
sub-factors that may affect success in the implementation of IT Governance.
These sub-factors therefore need serious attention so that the implementation
of IT Governance can run well and improve the quality and performance of the
organization in the future. |
Keywords: |
Factor, Sub-factor, Awareness, IT Governance, Interpretive Structural Model |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
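The two core ISM steps the study relies on, building the final reachability
matrix from the experts' pairwise judgments and partitioning factors into
hierarchy levels, can be sketched as follows. The 3-factor adjacency matrix in
the usage example is hypothetical, not the paper's 14 sub-factors.

```python
def reachability(adj):
    """Final reachability matrix: Warshall transitive closure of the
    initial interaction matrix, with the diagonal set to 1."""
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def level_partition(r):
    """ISM level partitioning: a factor is placed at the current level when
    its reachability set (within the remaining factors) is contained in its
    antecedent set; levelled factors are removed and the step repeats."""
    n = len(r)
    remaining, levels = set(range(n)), []
    while remaining:
        level = []
        for i in remaining:
            reach = {j for j in remaining if r[i][j]}
            ante = {j for j in remaining if r[j][i]}
            if reach <= ante:
                level.append(i)
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

# Hypothetical interaction matrix: factor 0 drives factor 1, which drives
# factor 2, so factor 2 sits at the top level and factor 0 at the base.
adj = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
levels = level_partition(reachability(adj))
```

Driver factors (like the three key sub-factors the study highlights) surface
at the lower levels of the resulting hierarchy, which is what makes ISM useful
for prioritising them.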
Title: |
A STUDY OF THE DIGITIZATION PROCESS TO PRESERVE THE CULTURE AND HERITAGE OF A
CIVILIZATION USING NATURAL LANGUAGE PROCESSING AND ITS IMPACT ON THE SOCIAL,
ECONOMIC AND SCIENTIFIC ASPECTS |
Author: |
MUKESH MADANAN, NORLAILA HUSSAIN, ADEEL AHMAD KHALIQ |
Abstract: |
Venturing into the arena of digitization, technology has been innovative in
establishing digitized manuscripts and e-books. Digitizing and archiving
manuscripts and documents, covering both texts and images, using digital
cameras and scanners is viewed as a stepping stone in the use of technology.
To be part of Generation Next, the focus has to be on up-to-the-minute
digitization technology. This entails preserving not only manuscripts but also
the culture and heritage they depict. Physically preserving artefacts,
manuscripts and paintings through primitive techniques in historical societies
and libraries has limitations. Highlighted constraints are limited access for
the general public and a lack of facilities to safeguard valuables and reduce
their vulnerability. Manuscripts and documents archived through chemical
methods often do not retain their original appearance. Digitization of rich
cultural and heritage documents and manuscripts would address these
constraints and iron out most of these limitations. The paper focuses on the
digitization process to preserve artifacts using natural language processing
techniques and indexing, and also highlights the future of digitization in
enhancing the cultural heritage of a nation. |
Keywords: |
Digitization, Cultural Heritage, Metadata, Indexing, Database Management |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
CURRENT TRENDS AND FUTURE RESEARCH DIRECTIONS FOR INTERACTIVE MUSIC |
Author: |
MAURICIO TORO |
Abstract: |
This review explains and compares different software and formalisms used in
music interaction: sequencers, computer-assisted improvisation,
meta-instruments, score following, asynchronous dataflow languages,
synchronous dataflow languages, process calculi, temporal constraints and
interactive scores. Formal approaches have the advantage of providing rigorous
semantics for the behavior of the model and of proving correctness during
execution. The main disadvantage of formal approaches is the lack of
commercial tools. |
Keywords: |
Interactive Scores, Process Calculi, Temporal Constraints,
Score-Following, Meta-Instruments. |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
FLIPPED CLASSROOM INSTRUCTIONAL MODEL WITH MOBILE LEARNING BASED ON
CONSTRUCTIVIST LEARNING THEORY TO ENHANCE CRITICAL THINKING (FCMOC MODEL) |
Author: |
THADA JANTAKOON, PALLOP PIRIYASURAWONG |
Abstract: |
This study reports the findings of a Research and Development (R&D) project
aiming to develop a model of the flipped classroom with mobile learning, based
on constructivist learning theory, to enhance critical thinking (the FCMOC
model). The sample consisted of 10 experts in the field of computer education
during the FCMOC model development stage. The research procedure included 2
phases: (1) development of the FCMOC model and (2) evaluation of the FCMOC
model. The results show that the FCMOC model consists of three components: (1)
flipped classroom learning activities on mobile learning technology, (2) the
major methods of constructivist theory, and (3) a 5-step model to move
students toward critical thinking. The experts also evaluated which step of
the FCMOC model was most suitable for developing the respective aspects of
critical thinking skills. |
Keywords: |
Flipped Classroom, Mobile Learning, Constructivist, Critical Thinking, FCMOC
Model |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|
Title: |
PROFILING FRAMEWORK IN IDENTIFYING CYBER VIOLENT EXTREMISM (CYBER-VE) ATTACK |
Author: |
NURHASHIKIN MOHD SALEH, SITI RAHAYU SELAMAT, ZURINA SAAYA |
Abstract: |
Cyber Violent Extremism (Cyber-VE) attacks have reached the top of the
international agenda and remain a significant concern for many governments in
Southeast Asia and beyond. Cyber-VE has become a threat to countries due to
the ongoing increase in online activities by violent extremist groups. The
threat of Cyber-VE is still on the rise, and the existing countermeasures do
not seem to be reducing these attacks. Hence, the aim of this paper is to
propose a new framework for profiling Cyber-VE activities. This paper
integrates Cyber-VE trace classification with the components of criminology
theory. The trace classification is generated through the process of
identifying, extracting, and classifying traces. Two criminology theories,
namely social learning theory and space transition theory, are used to explain
and identify the criminal behavior. Then, the trace classification and the
criminology theories are integrated in order to develop the profiling
framework. The proposed Cyber-VE profiling framework consists of three main
processes: data extraction and classification, Cyber-VE behavior
identification, and Cyber-VE profile construction. The proposed profiling
framework was evaluated and validated to verify its capabilities in profiling
Cyber-VE activities. In the experimental approach, the results from the
dataset showed that the profiling framework is capable of profiling Cyber-VE
activities. In the expert review, the results showed that the proposed
profiling framework is able to identify activities related to Cyber-VE
attacks. These findings will help investigators identify any activities
related to Cyber-VE attacks and assist in profiling them. |
Keywords: |
Cyber Violent Extremism (Cyber-VE), Dark Web, Profiling Framework, Traces
Classification, Criminology Theory |
Source: |
Journal of Theoretical and Applied Information Technology
31st August 2018 -- Vol. 96. No. 16 -- 2018 |
Full
Text |
|