
Influence of number of image patches on board identification accuracy

Processing one board takes a long time, even when using architectures with a smaller number of parameters. A comparison was made to determine how the number of patches per board affects the identification accuracy for the entire board. The experiment was conducted on the same test set as in the previous experiments, consisting of 177 boards. Figure 30 shows the mean accuracy along the y axis and the number of used image patches along the x axis. The total number of image patches extracted per board is 250 on average. The experiment was performed for 50 different values of the number of used patches. For each value, the experiment was repeated 100 times and the mean of the obtained accuracies was computed.

The experiment was performed on the dataset containing all image patches. As a result, the experiment showed that only 25 image patches per board are needed for each architecture to obtain high accuracy. Table 8 presents the accuracy and the inference time of board identification when only 25 image patches per board are used.
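The subsampling procedure described above can be sketched as follows. The data layout and the majority-vote decision rule are illustrative assumptions, not necessarily the exact rule used in the work:

```python
import random
from collections import Counter

def mean_board_accuracy(patch_preds, board_labels, n_patches, n_trials=100):
    """Estimate the mean board-level accuracy when only n_patches
    randomly sampled patch predictions are used per board.

    patch_preds  : dict mapping board id -> list of per-patch predicted labels
    board_labels : dict mapping board id -> true species label
    """
    accuracies = []
    for _ in range(n_trials):
        correct = 0
        for board, preds in patch_preds.items():
            # draw a random subset of this board's patch predictions
            sample = random.sample(preds, min(n_patches, len(preds)))
            # combine the patch-level decisions by majority vote
            decision = Counter(sample).most_common(1)[0][0]
            correct += decision == board_labels[board]
        accuracies.append(correct / len(patch_preds))
    return sum(accuracies) / len(accuracies)
```

Repeating the trial 100 times and averaging, as in the experiment, smooths out the randomness of the patch subset selection.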

Figure 30. Influence of the number of used image patches on the board identification accuracy. The figure shows the mean accuracy that can be obtained. The total number of image patches extracted from one board is 250 on average.

Table 8. The accuracy (ACC) and the inference time per board (τB, in seconds) when only 25 image patches per board are used.

Architecture   ACC     τB (s)
AlexNet        0.986   0.26
VGG-16         0.721   0.72
GoogLeNet      0.994   0.47
ResNet-50      0.780   0.95
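Assuming the per-board time τB scales roughly linearly with the number of patches, the per-patch cost and the projected time for the full set of about 250 patches can be estimated. These projections are illustrative and were not measured in the work:

```python
def per_patch_time(t_board, n_patches=25):
    """Approximate per-patch inference time, assuming the board
    time is dominated by the per-patch CNN forward passes."""
    return t_board / n_patches

def projected_board_time(t_board, n_total=250, n_used=25):
    """Projected board time if all n_total patches were used,
    under the same linearity assumption."""
    return t_board * n_total / n_used
```

For GoogLeNet (0.47 s with 25 patches) this gives roughly 0.019 s per patch and about 4.7 s if all 250 patches were used, which illustrates the speed-up obtained by subsampling.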

6 DISCUSSION

Wood species identification is an important part of the sawmilling process. Accurately identified species of wooden material allow the technological process of the sawmill to be performed properly. According to the literature review, manual determination of wood species is a laborious and slow procedure requiring an experienced specialist. This serves as a strong motivation for the development of an automated approach. In addition, modern approaches based on computer vision, including the application of convolutional neural networks, were examined.

In this work, a method for wood species identification was proposed. The method includes the following steps: image patch extraction, wood species identification of each image patch, and combination of the patch-level identification results into the final decision about the wood species of the whole board.
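The combination step can be illustrated with a minimal sketch. Here the per-patch class probabilities are combined by the sum rule, which is one common choice for combining classifier outputs; the function and data layout are hypothetical stand-ins for the actual pipeline:

```python
def identify_board(patch_probs, species):
    """Combine per-patch class probability vectors into one
    board-level decision by summing the scores over all patches
    and picking the class with the highest total (sum rule).

    patch_probs : list of probability vectors, one per patch
    species     : class names, in the same order as the vectors
    """
    n_classes = len(species)
    totals = [sum(p[i] for p in patch_probs) for i in range(n_classes)]
    return species[totals.index(max(totals))]
```

The sum rule lets many moderately confident patches outvote a few confidently wrong ones, which is why board-level accuracy exceeds single-patch accuracy.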

Based on the results of the work, it can be concluded that despite the large number of different parameters in CNN architectures, architecture templates can be selected and their parameters differentiated, highlighting the most significant ones.

Therefore, guided by the highlighted features of the CNN architectures, the most accurate model should be chosen for classification, one that is robust to changes in the data being processed.

Based on the experimental results, it can be highlighted that the best performance is shown by simpler networks with a small number of layers and parameters, such as AlexNet and GoogLeNet. The accuracies of these models were 91.5% and 96.1%, respectively, in identifying single high-quality image patches. The highest accuracies in board-level wood species identification were also achieved by the same CNN architectures, AlexNet and GoogLeNet: 98.8% and 99.4%, respectively. It was shown that for this particular type of task, simple architectures give more accurate results and can capture the differences between images of various wood species. It can be concluded that simpler networks such as AlexNet and GoogLeNet are more suitable for the wood species identification task. The reason for this is presumably that a small number of parameters allows the network to reveal the significant differences between the wood species classes in a smaller amount of time, which, in turn, helps to avoid overfitting.

Also, the experiments show that the inference time for one board can be shortened by decreasing the number of patches used for identification without affecting the identification accuracy. An accuracy of 99.4% can be obtained using GoogLeNet with only 25 image patches, which means that only 10% of the total number of image patches is needed for one board identification. In that case, the inference time for one board was 0.47 seconds.

6.1 Future Work

Despite the accurate identification performance, only three wood species were used in this work. In future work, it makes sense to evaluate the model in the presence of other wood species. In addition, the inference speed of recognition has a strong influence, since it is an important parameter for application in an industrial environment. It is worth considering the possibility of accelerating the system up to real-time recognition, for example, by parallelizing the computations. Another important factor is the quality of the input images. It should be studied how the resolution of the image patches affects the identification accuracy. Such measurements can show which camera parameters and lighting conditions are most suitable for accurate wood species identification.

7 CONCLUSION

In this thesis, a method for wood species identification was proposed. The proposed method includes the steps of patch extraction, their subsequent identification, and application of a decision rule to decide the wood species of the whole board. In the experimental part of the work, the following convolutional neural network architectures were implemented and tested: AlexNet, VGG-16, GoogLeNet, and ResNet-50. The experiments were conducted on two datasets: all image patches and only high-quality ones. The experiments showed that sorting for high-quality patches has little impact in the case of single-patch identification.

The highest accuracies in the identification of individual patches were shown by the AlexNet and GoogLeNet architectures: 92.3% and 96.1%, respectively. The highest accuracy in board-level wood species identification, 99.4%, was achieved by GoogLeNet. Also, the experiments showed that the inference time for board identification with an accuracy of 99.4% can be shortened to 0.47 seconds by using only 10% of the total number of image patches for each board.
