Research Article | Open Access
Volume 3 | Issue 5 | Year 2012 | Article Id. IJCTT-V3I5P111 | DOI : https://doi.org/10.14445/22312803/IJCTT-V3I5P111
Feature Extraction from web data using Artificial Neural Networks
Manoj Kumar Sharma, Vishal Shrivastav
Citation:
Manoj Kumar Sharma, Vishal Shrivastav, "Feature Extraction from web data using Artificial Neural Networks," International Journal of Computer Trends and Technology (IJCTT), vol. 3, no. 5, pp. 723-732, 2012. Crossref, https://doi.org/10.14445/22312803/IJCTT-V3I5P111
Abstract
The principal ability of a neural network is to learn from its environment and to improve its performance through that learning. One form is supervised (active) learning, in which an external 'teacher' or supervisor presents a training set to the network. Another form also exists: unsupervised learning [1]. Unsupervised learning is self-organized learning and requires no external teacher. During the training session the network receives a number of input patterns, discovers significant features in these patterns, and learns how to classify the input data into appropriate categories. It follows the neurobiological organization of the brain. These algorithms aim to learn rapidly, so they learn much faster than back-propagation networks and can therefore be used in real time. Unsupervised neural networks are also effective in dealing with unexpected and changing conditions [3]. There are two major forms of self-organizing learning: Hebbian learning and competitive learning. In this paper we use Hebbian learning to show how it can help extract features from any data. We work with input-vector and weight-matrix examples, denoting the presence of a feature by 1 and its absence by 0. With this method we show how features are identified and how patterns in the given data can be discovered; the method can also be used for classification and clustering. We then apply this learning rule to web data (content) to discover patterns and extract features. A weight increases when the same pattern repeats and decreases when it does not. The network associates an input xi with outputs yi and yj because the inputs xi and xj were coupled during training, but it cannot associate an input x with an output y when that input did not appear during training, since the network then has no ability to recognize it.
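As a rough illustration of this scheme, the sketch below (Python with NumPy) builds a Hebbian association matrix from binary feature vectors in which 1 marks a present feature and 0 an absent one. The toy patterns, learning rate, and decay factor are illustrative assumptions, not values taken from the paper.

import numpy as np

def hebbian_associate(patterns, lr=0.5, decay=0.05, epochs=10):
    """Build a weight matrix whose entry W[i, j] grows each time
    features i and j are active together, and decays otherwise."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for x in patterns:
            W += lr * np.outer(x, x)   # strengthen co-occurring features
            W -= decay * W             # weaken associations that do not repeat
    np.fill_diagonal(W, 0.0)           # ignore self-associations
    return W

# Toy web-content feature vectors: features 0-1 always co-occur, so do 2-3.
patterns = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
], dtype=float)

W = hebbian_associate(patterns)
print(np.round(W, 2))

# Recall: a cue containing only feature 0 activates its learned partner,
# feature 1, while features never seen together with it stay silent.
cue = np.array([1, 0, 0, 0], dtype=float)
print(np.round(W @ cue, 2))

Entries of W that grow large mark features that repeatedly occur together, which mirrors the pattern-discovery behaviour described above; a cue that appeared during training recalls its associated features, while one that never appeared produces no response.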
Keywords
Neural networks, unsupervised learning, Hebbian learning, feature extraction, web data, pattern discovery.
References
[1] S. Becker and M. Plumbley, "Unsupervised neural network learning procedures for feature extraction and classification," International Journal of Applied Intelligence, vol. 6, pp. 185-203, 1996.
[2] D. Mumford, "Neuronal architectures for pattern-theoretic problems," in C. Koch and J. Davis (eds.), Large-Scale Theories of the Cortex, MIT Press, Cambridge, MA, pp. 125-152, 1994.
[3] B. Yegnanarayana, "Artificial neural networks for pattern recognition," Sadhana, vol. 19, part 2, pp. 189-238, April 1994.
[4] M. Negnevitsky, Artificial Intelligence: A Guide to Intelligent Systems, 2nd ed., Addison-Wesley. ISBN-10: 0321204662, ISBN-13: 9780321204660.
[5] I. Guyon and A. Elisseeff, "An Introduction to Feature Extraction."
[6] S. Haykin, Neural Networks and Learning Machines, PHI, 2010.
[7] Xianjun Ni, "Research of Data Mining Based on Neural Networks," World Academy of Science, Engineering and Technology, vol. 39, pp. 381-38, 2008.
[8] S. Kasderidis, Networks of Neural Computation, WK7 – Hebbian Learning, Dept. of Computer Science, University of Crete, Spring Semester 2009.
[9] T. K. Leen, "Dynamics of learning in linear feature-discovery networks," Network, vol. 2, pp. 85-105, 1991.
[10] P. Baldi and K. Hornik, "Neural networks and principal component analysis: Learning from examples without local minima," Neural Networks, vol. 2, pp. 53-58, 1989.
[11] P. Dayan and L. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, Cambridge, MA, 2001.
[12] S. Srihari, Artificial Neural Networks.
[13] E. Rich, K. Knight, and S. B. Nair, Artificial Intelligence, 3rd ed.
[14] H. Chang, D. Cohn, and A. K. McCallum, "Learning to create customized authority lists," in Proceedings of the 17th International Conference on Machine Learning, Morgan Kaufmann, pp. 127-134, 2000.