Deep learning techniques have recently been applied extensively to large-scale image classification, including wildlife prediction. This research adopts a deep convolutional neural network (CNN) and proposes a deep scalable CNN. Our approach dynamically modifies the network layers (scalability) in a multitasking system and enables real-time operation with minimal performance loss. It offers a straightforward technique for assessing the performance gains of the network as its layers are enlarged, which reduces redundancy in the network layers and boosts network efficiency. The architecture was implemented in software using the Keras framework with TensorFlow as the backend on the CPU. To corroborate the universality and robustness of the proposed approach, we train the model on a GPU with a newly created dataset named “Zedataset”, preprocessed for performance evaluation. Results from our experiments show that the proposed architecture performs better with more data at the chosen optimum parameters.
Keywords: GPU, Keras, deep CNN, CNN, Scalability, TensorFlow, image classification, optimum parameters, backend.
Background of the study
No single method has yet provided a robust and efficient solution to identifying and recognising animals from photos in all situations. Several researchers have applied long-standing traditional approaches, yet the problem remains unresolved: the task involves collecting a large volume of images, a process that is predominantly manual, and imperfect image quality can degrade both the speed and the accuracy of classification, even for domain experts. Moreover, processing these image sets is time-consuming, effort-demanding, and very costly, as an overwhelming amount of data must be collected.
In recent years, much attention has focused on deep neural network-based techniques in image processing, particularly animal recognition and identification. However, improvements in the performance characteristics of a network depend on how scalable its design is. In machine learning, scalability is often defined as the effect that even a slight change in the size of network parameters, such as the number of network layers or the size of the training set, has on the computational performance of an algorithm (accuracy, memory allocation, processing speed). The question, then, is to find a balance: in other words, to obtain a suitable solution quickly and effectively. This is a serious concern in demanding circumstances where temporal or resource constraints exist, such as real-time applications dealing with large datasets, computationally intractable problems that require learning, or first prototypes that need quickly implemented results.
To handle a large dataset, it is expedient to minimize training time and allotted memory space while preserving accuracy; however, to date, most proposed deep learning algorithms do not offer a proper trade-off among them. To address these issues, we aim to convert floating-point representations to fixed-point representations, reducing memory complexity and yielding faster processing in the network. This research uses the convolutional neural network framework for animal identification and prediction, while stochastic gradient descent is used to optimize the network's parameters (i.e., weights and biases) through error backpropagation with momentum and an adaptive learning rate. Network layers and the nodes in each hidden layer will be added through systematic experimentation and intuition, with robust testing throughout.
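The floating-point-to-fixed-point conversion described above can be sketched in plain Python. The 16-bit word width and 8 fractional bits (a Q8.8 format) used here are illustrative assumptions, not the values adopted in the final network:

```python
def to_fixed_point(x, frac_bits=8, total_bits=16):
    """Quantize a float to a signed fixed-point code with frac_bits fractional bits."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))        # smallest representable code
    hi = (1 << (total_bits - 1)) - 1     # largest representable code (saturation)
    return max(lo, min(hi, round(x * scale)))

def from_fixed_point(q, frac_bits=8):
    """Recover the (approximate) float value from its fixed-point code."""
    return q / (1 << frac_bits)

# A weight of 0.123 stored in Q8.8 costs 16 bits instead of 32,
# at the price of a rounding error of at most 2**-9.
w = 0.123
q = to_fixed_point(w)
assert abs(from_fixed_point(q) - w) <= 2 ** -9
```

Halving the word width roughly halves the memory footprint of the stored weights; the design question is whether the bounded quantization error degrades accuracy noticeably.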
1.1 Concept of Deep learning
Deep learning is an offshoot of machine learning, which is not new to informatics and predictive analysis. Recently, however, it has drawn much attention as neuroscientists, psychologists, engineers, economists, and AI researchers explore its learning potential. Deep learning approaches are a set of algorithms that strive to model data at high levels of abstraction using architectures with many intricately connected layers. It is one of the many branches of machine learning based on learning representations of raw data, such as the per-pixel intensity values of an image or sections of a specific figure, in a progressively more abstract way.
As a subset of machine learning techniques, deep learning has been characterised in several ways:
i. Uses multiple layers of nonlinear processing units cascaded for feature extraction.
ii. Is based on the (unsupervised) learning of multiple data representations, where a hierarchical representation is formed as higher-level features are derived from lower-level features.
iii. Learns multiple levels of representation corresponding to different levels of abstraction.
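To make point (i) concrete, the sketch below cascades two layers of nonlinear (ReLU) processing units in plain Python; the weights and input are arbitrary illustrative values, not learned parameters:

```python
def relu(v):
    """Elementwise nonlinearity: negative activations are clamped to zero."""
    return [max(0.0, x) for x in v]

def dense(v, weights, biases):
    """Fully connected layer; weights holds one row per output unit."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, biases)]

x = [0.5, -1.0, 2.0]
# Lower-level features extracted by the first nonlinear layer...
h = relu(dense(x, [[1, 0, 0], [0, 1, 1]], [0.0, 0.0]))
# ...feed the second layer, which forms a higher-level feature from them.
y = relu(dense(h, [[1, -1]], [0.1]))
```

Stacking more such layers yields the hierarchy of representations described in points (ii) and (iii).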
1.2 Definition of learning
One challenging aspect of setting out the objectives of deep learning is the definition of learning itself. Learning is rather conceptual, and those who have tried to define it (psychologists, philosophers, etc.) have each succeeded in uncovering only one of the many faces of this complex process.
Nevertheless, some views of learning are widely accepted among those who have made sustained efforts to elucidate the concept, and these often provide a reasonable interpretation of the process. Some are the following:
i. There exists a system that manipulates the information provided by its environment and is capable of improving itself.
ii. The system has numerous ways of altering its current state, and the information provided can take many forms.
iii. The system can remember and recall things it has experienced.
1.3 Concept of scalability in machine learning
Scalability has increasingly been integrated into deep learning over the years. This is because performance characteristics are likely to be affected as most deep neural networks now deal with overwhelmingly large datasets.
Scalability, as defined in machine learning, is the effect that a change in system parameters has on the performance characteristics of an algorithm. Typical scaling methods include increasing the number of nodes, network layers, and hidden units through systematic experimentation and/or intuition. This is done to ensure faster processing of huge datasets while preserving performance characteristics (accuracy and memory allocation) and reducing network complexity.
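The effect that scaling the number of nodes or layers has on memory can be illustrated with a simple parameter counter for fully connected layers (a sketch only; the counts for convolutional layers follow a different formula, and the layer sizes below are arbitrary examples):

```python
def dense_param_count(layer_sizes):
    """Trainable parameters (weights + biases) of a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Doubling one hidden layer roughly doubles the parameters feeding into it:
small = dense_param_count([784, 128, 10])   # 101,770 parameters
large = dense_param_count([784, 256, 10])   # 203,530 parameters
```

Tracking such counts while layers are added systematically makes the memory cost of each scaling step explicit before any training is run.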
1.4 Problem statement
There has been a rise in cases of human-animal attacks and animal-vehicle collisions, with the latter prevalent in Nigeria. About 500–1,000 vehicle collisions with large animals each year result in more than 1 billion Naira in damages (Federal Road Safety annual report, 2017).
To cope with this problem, machine learning-based techniques could be deployed, for example on CCTV cameras connected to the relevant response team, to monitor animals in both remote and urban areas and save lives.
1.5 Aim of the research
The aim of this thesis is to provide a scalable, more generic, and optimized network capable of processing huge datasets in real time, even with images of imperfect quality or with varied deformations, while preserving good test accuracy.
1.6 Objectives of the research
Given the different views of what learning is and how it is attained, one can see how challenging it is to interpret deep learning, or even to set out clear objectives for it, since the approach to deep learning differs from one practitioner to another. The objectives of this research are as follows:
i. Develop an artificial learning system capable of being adaptive and self-improving
ii. Develop a neural network with optimized parameters whose computational performance is not degraded by scaling.
iii. Develop a neural network system architecture with reduced complexity for large-scale image classification or prediction.
1.7 Structure of the research
Chapter 1 presents a brief introduction to the research concept, primarily deep learning, together with the aim and objectives of the project.
Chapter 2 presents the theories supporting the research concept, beginning with a brief introduction to deep learning and to learning itself, linking them to the classification problem, and then gives a brief account of the different classification approaches, ranging from statistical methods to genetic algorithms. The two best learning approaches will be examined, followed by a brief account of related work.
Chapter 3 will present the theoretical analysis of the adopted algorithm with the proposed layers. The following information is provided:
i. A detailed description of the algorithm focusing on its peculiarities
ii. The design of the algorithm with a detailed explanation of its layers.
Chapter 4 will describe the experiments and present the results, which will be statistically analyzed to check for relative performance and validate the theoretical estimates presented in the previous chapter.
Chapter 5 summarises the results presented in the thesis and concludes on their importance in the context of recognition and identification.