
This project is ongoing, and the focus remains on improving the performance of the proposed VQA tool. The first plan is to invest in the blurriness metric and assess blurriness more accurately; adding extra measures focused specifically on motion blur and out-of-focus blur is the primary approach to pursue.

Furthermore, examining the VQA tool against videos recorded with digital zoom, and testing them in particular situations, is another part of the future plan. For this purpose, adding a blockiness measure would help to recognize and rate the blockiness introduced by digital zoom. Moreover, assessing the stability of the camera in zoom mode is another challenge that needs more effort.

Another possible extension could be improving the integration module. Using a machine learning approach such as Support Vector Regression (SVR) instead of the current weighted average could improve the performance of the tool. Also, for future work, a more comprehensive dataset will be prepared, and the labeling will be done using crowdsourcing techniques. The new dataset would also enable employing a proper machine learning approach for the integration module.
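As a hedged illustration of this idea, the sketch below replaces a weighted average with scikit-learn's SVR for the integration step. The module scores, their ranges, and the subjective training labels are invented for illustration and are not the thesis's actual data.

```python
# Sketch: SVR-based integration module (illustrative data, assumed
# score ranges in [0, 1]; not the thesis's implementation).
import numpy as np
from sklearn.svm import SVR

# Each row: per-module quality scores [pixel, bit-stream, sensor]
X_train = np.array([
    [0.8, 0.7, 0.9],
    [0.4, 0.5, 0.3],
    [0.6, 0.6, 0.7],
    [0.2, 0.3, 0.4],
])
# Target: subjective overall quality from a labeled dataset
y_train = np.array([0.85, 0.40, 0.65, 0.25])

model = SVR(kernel="rbf", C=1.0, epsilon=0.05)
model.fit(X_train, y_train)

# Overall quality predicted for a new video's module scores
overall = float(model.predict(np.array([[0.7, 0.6, 0.8]]))[0])
```

Unlike a fixed weighted average, the regressor can learn non-linear interactions between module scores, which is one reason SVR is a plausible upgrade once a larger labeled dataset exists.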

7 CONCLUSION

Using smartphones and tablets for filming is becoming increasingly popular. In addition, some camera applications capture device sensor data during filming and create sensor-rich videos. Assessing the quality of these videos has many applications for streaming services, video-sharing companies, and end users, who can categorize and search videos based on their quality.

Filming with hand-held devices by non-professional, ordinary users, in uncontrolled imaging and environmental conditions, may produce poor-quality videos that are, for example, blurred, low in contrast, or shaky. This thesis started with a comprehensive literature review to identify the most annoying degradations that occur in this context and the available measures to assess those degradations.

As the practical part of the thesis, a Video Quality Assessment (VQA) tool was designed and implemented to assess the quality of sensor-rich videos. The tool follows an objective, no-reference approach, which means that no information regarding the imaging device, environmental conditions, or imaging subject is required.

The proposed VQA tool comprises four modules: the pixel, bit-stream, sensor, and integration modules. Several measures are employed in each module to assess the quality of a given video from one specific aspect. In the pixel module, contrast, blurriness, and naturalness are assessed. In the bit-stream module, the video motion, scene complexity, and bit-stream quality of the video file are rated. Furthermore, the tool utilizes sensor data to assess the stability of the video. Finally, the results of all measures are combined in the integration module to produce one overall score as the quality of the video.
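The integration step described above can be sketched as a simple weighted average. The measure names, weights, and scores below are assumptions for illustration; the thesis's actual weights are not reproduced here.

```python
# Minimal sketch of weighted-average integration of per-measure scores.
# Measure names, weights, and scores are illustrative assumptions.
def integrate(scores, weights):
    """Combine per-measure quality scores into one overall score."""
    total_w = sum(weights.values())
    return sum(scores[m] * w for m, w in weights.items()) / total_w

scores = {"contrast": 0.7, "blurriness": 0.6, "naturalness": 0.8,
          "motion": 0.5, "stability": 0.9}
weights = {"contrast": 1.0, "blurriness": 2.0, "naturalness": 1.0,
           "motion": 1.0, "stability": 1.5}

overall = integrate(scores, weights)  # weighted mean, here 0.7
```

Dividing by the weight sum keeps the overall score in the same range as the inputs, so measures on a common scale (e.g. [0, 1]) combine into a score on that same scale.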

For all measures except stability, one or more approaches from the literature were selected as candidates. To assess the candidates, a new database containing 54 videos was prepared, and all measures were examined against it. For the stability measure, two novel algorithms were suggested, and one specific database was designed and prepared to assess the performance of the proposed methods. The measures that correlated best with the benchmarks were selected for the final VQA tool.

The performance of the proposed VQA tool was assessed against CVD2014 [1]. The results showed that using a combination of the pixel and bit-stream modules improves the performance regardless of the content of the videos.

REFERENCES

[1] Mikko Nuutinen, Toni Virtanen, Mikko Vaahteranoksa, Tero Vuori, Pirkko Oittinen, and Jukka Häkkinen. CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms. IEEE Transactions on Image Processing, 25(7):3073–3086, July 2016.

[2] Deepti Ghadiyaram, Janice Pan, Alan C. Bovik, Anush K. Moorthy, Prasanjit Panda, and Kai-Chieh Yang. In-capture mobile video distortions: A study of subjective behavior and objective algorithms. IEEE Transactions on Circuits and Systems for Video Technology, 2018. (under review).

[3] Kjell Brunnstrom, Sergio Ariel Beker, Katrien De Moor, Ann Dooms, and others. Qualinet white paper on definitions of quality of experience. European Network on Quality of Experience in Multimedia Systems and Services, COST Action IC1003, March 2013.

[4] David S. Hands and S. E. Avons. Recency and duration neglect in subjective assessment of television picture quality. Applied Cognitive Psychology, 15(6):639–657, 2001.

[5] Maria Torres Vega, Vittorio Sguazzo, Decebal Constantin Mocanu, and Antonio Liotta. An experimental survey of no-reference video quality assessment methods. International Journal of Pervasive Computing and Communications, 12(1):66–86, April 2016.

[6] International Telecommunication Union Telecommunication Standardization Sector (ITU-T). Subjective video quality assessment methods for multimedia applications. Recommendation P.910, April 2008.

[7] Sharath Chandra Guntuku, Michael James Scott, Gheorghita Ghinea, and Weisi Lin. Personality, culture, and system factors: impact on affective response to multimedia. Computing Research Repository (CoRR), abs/1606.06873, June 2016.

[8] Damon M. Chandler. Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1):011006, January 2010.

[9] Hiray Yogita V. and Hemprasad Y. Patil. A survey on image quality assessment techniques, challenges and databases. IJCA Proceedings on National Conference on Advances in Computing (NCAC), 2015(7):34–38, December 2015.

[10] Silvio Borer. A model of jerkiness for temporal impairments in video transmission. In Second International Workshop on Quality of Multimedia Experience (QoMEX), pages 218–223. IEEE, June 2010.

[11] Michele A. Saad and Alan C. Bovik. Blind quality assessment of videos using a model of natural scene statistics and motion coherency. In Forty-Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pages 332–336. IEEE, November 2012.

[12] Christos G. Bampis and Alan C. Bovik. Learning to predict streaming video QoE: Distortions, rebuffering and memory. Computing Research Repository (CoRR), abs/1703.00633, March 2017.

[13] Kalpana Seshadrinathan and Alan C. Bovik. Motion tuned spatio-temporal quality assessment of natural videos. IEEE Transactions on Image Processing, 19(2):335–350, February 2010.

[14] Phong V. Vu, Cuong T. Vu, and Damon M. Chandler. A spatiotemporal most-apparent-distortion model for video quality assessment. In 18th IEEE International Conference on Image Processing (ICIP), pages 2505–2508. IEEE, September 2011.

[15] Zheng Zhang, Yu Liu, Zhihui Xiong, Jing Li, and Maojun Zhang. Focus and blurriness measure using reorganized DCT coefficients for autofocus application. IEEE Transactions on Circuits and Systems for Video Technology, 28(1):15–30, Jan 2018.

[16] Video Quality Experts Group. Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment, Phase II (FR_tv2). VQEG, 2008.

[17] Michael Yuen and H. R. Wu. A survey of hybrid MC/DPCM/DCT video coding distortions. Signal processing, 70(3):247–278, November 1998.

[18] Ni Pengpeng. Multimedia quality assessment. https://goo.gl/Ef72sc, 2010. [Online; Accessed October 2017].

[19] Perry Sprawls. The Physical principles of medical imaging. Medical Physics Publishing, Madison, Wisconsin, 2 edition, 1995.

[20] Rania Hassen, Zhou Wang, and Magdy Salama. No-reference image sharpness assessment based on local phase coherence measurement. In IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pages 2434–2437. IEEE, March 2010.

[21] Hantao Liu and Ingrid Heynderickx. A perceptually relevant no-reference blockiness metric based on local image characteristics. EURASIP Journal on Advances in Signal Processing, 2009(1):263540, March 2009.

[22] Muhammad Shahid, Andreas Rossholm, Benny Lovstrom, and Hans-Jürgen Zepernick. No-reference image and video quality assessment: a classification and review of recent approaches. EURASIP Journal on Image and Video Processing, 2014(1):40, Aug 2014.

[23] Min Goo Choi, Jung Hoon Jung, and Jae Wook Jeon. No-reference image quality assessment using blur and noise. International Journal of Computer Science and Engineering, 3(2):76–80, February 2009.

[24] Alessandro Rizzi, Thomas Algeri, Giuseppe Medeghini, and Daniele Marini. A proposal for contrast measure in digital images. In Conference on Colour in Graphics, Imaging, and Vision (CGIV), volume 6, pages 187–192. Society for Imaging Science and Technology, 2004.

[25] Albert Abraham Michelson. Studies in Optics. University of Chicago Press, Chicago, Illinois, United States, 1927.

[26] Deepti Ghadiyaram and Alan C. Bovik. Perceptual quality prediction on authentically distorted images using a bag of features approach. Journal of Vision, 17(1):32, January 2017.

[27] Chaofeng Li, Alan Conrad Bovik, and Xiaojun Wu. Blind image quality assessment using a general regression neural network. IEEE Transactions on Neural Networks, 22(5):793–799, May 2011.

[28] Hyung-Ju Park and Dong-Hwan Har. Digital Image Quality Assessment Based on Standard Normal Deviation. International Journal of Contents, 11(2):20–30, June 2015.

[29] Decebal Constantin Mocanu, Jeevan Pokhrel, Juan Pablo Garella, Janne Seppanen, Eirini Liotou, and Manish Narwaria. No-reference video quality measurement: added value of machine learning. Journal of Electronic Imaging, 24(6):061208, December 2015.

[30] Hong Zhang, Fan Li, and Na Li. Compressed-domain-based no-reference video quality assessment model considering fast motion and scene change. Multimedia Tools and Applications, 76(7):9485–9502, April 2017.

[31] Savvas Argyropoulos, Alexander Raake, Marie-Neige Garcia, and Peter List. No-reference bit stream model for video quality assessment of H.264/AVC video based on packet loss visibility. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1169–1172. IEEE, May 2011.

[32] Bowen Wei and Yuan Zhang. No-reference video quality assessment with frame-level hybrid parameters for mobile video services. In 2nd IEEE International Conference on Computer and Communications (ICCC), pages 490–494. IEEE, October 2016.

[33] Alexandre Ciancio, Andre Luiz N. Targino da Costa, Eduardo A. B. da Silva, Amir Said, Ramin Samadani, and Pere Obrador. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE Transactions on Image Processing, 20(1):64–75, January 2011.

[34] Deepa Maria Thomas. A novel hybrid approach to quality assessment using no reference blur and blockiness measure in JPEG images. International Journal of Innovative Research and Development, 2(13), December 2013.

[35] Kanjar De and V. Masilamani. Fast no-reference image sharpness measure for blurred images in discrete cosine transform domain. In IEEE Students on Technology Symposium (TechSym), pages 256–261. IEEE, September 2016.

[36] Halime Boztoprak. An alternative image quality assessment method for blurred images. Balkan Journal of Electrical and Computer Engineering, 4(1), March 2016.

[37] Eftichia Mavridaki and Vasileios Mezaris. No-reference blur assessment in natural images using fourier transform and spatial pyramids. In IEEE International Conference on Image Processing (ICIP), pages 566–570. IEEE, Oct 2014.

[38] Min Goo Choi, Jung Hoon Jung, and Jae Wook Jeon. No-reference image quality assessment using blur and noise. International Journal of Computer Science and Engineering, 3(2):76–80, 2009.

[39] Luhong Liang, Shiqi Wang, Jianhua Chen, Siwei Ma, Debin Zhao, and Wen Gao. No-reference perceptual image quality metric using gradient profiles for JPEG2000. Signal Processing: Image Communication, 25(7):502–516, Aug 2010. Special Issue on Image and Video Quality Assessment.

[40] Shiqian Wu, Weisi Lin, Shoulie Xie, Zhongkang Lu, Ee Ping Ong, and Susu Yao. Blind blur assessment for vision-based applications. Visual Communication and Image Representation, 20(4):231–241, May 2009.

[41] Cuong T. Vu, Thien D. Phan, and Damon M. Chandler. S3: A Spectral and Spatial Measure of Local Perceived Sharpness in Natural Images. IEEE Transactions on Image Processing, 21(3):934–945, March 2012.

[42] Heng-Jun Zhao. No-reference image sharpness assessment based on wavelet transform and image saliency map. In International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), pages 43–48, Jeju, South Korea, July 2016. IEEE.

[43] J. L. Pech-Pacheco, Gabriel Cristobal, Jesus Chamorro-Martinez, and J. Fernandez-Valdivia. Diatom autofocusing in brightfield microscopy: a comparative study. In 15th International Conference on Pattern Recognition (ICPR), volume 3, pages 314–317. IEEE, Aug 2000.

[44] Kanjar De and V. Masilamani. Image sharpness measure for blurred images in frequency domain. Procedia Engineering, 64(Supplement C):149–158, September 2013.

[45] Kanjar De and V. Masilamani. Image Quality Assessment for Blurred Images Using Nonsubsampled Contourlet Transform Features. Journal of Computers, 12(2):156–164, March 2017.

[46] Taegeun Oh, Jincheol Park, Kalpana Seshadrinathan, Sanghoon Lee, and Alan Conrad Bovik. No-Reference Sharpness Assessment of Camera-Shaken Images by Analysis of Spectral Structure. IEEE Transactions on Image Processing, 23(12):5428–5439, December 2014.

[47] Taegeun Oh and Sanghoon Lee. Blind sharpness prediction based on image-based motion blur analysis. IEEE Transactions on Broadcasting, 61(1):1–15, March 2015.

[48] Hashim Mir, Peter Xu, and Peter Van Beek. An extensive empirical evaluation of focus measures for digital photography. In Proceedings of Society of Photo-Optical Instrumentation Engineers (SPIE) Electronic Imaging, volume 9023, San Francisco, California, United States, March 2014. SPIE.

[49] Said Pertuz, Domenec Puig, and Miguel Angel Garcia. Analysis of focus measure operators for shape-from-focus. Pattern Recognition, 46(5):1415–1432, May 2013.

[50] Said Pertuz, Domenec Puig, and Miguel Angel Garcia. Reliability measure for shape-from-focus. Image and Vision Computing, 31(10):725–734, October 2013.

[51] Ping Hsu and Bing-Yu Chen. Blurred image detection and classification. In Advances in Multimedia Modeling: 14th International Multimedia Modeling Conference (MMM), pages 277–286. Springer, January 2008.

[52] Phong V. Vu and Damon M. Chandler. A fast wavelet-based algorithm for global and local image sharpness estimation. IEEE Signal Processing Letters, 19(7):423–426, July 2012.

[53] Niranjan D. Narvekar and Lina J. Karam. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE Transactions on Image Processing, 20(9):2678–2683, Sept 2011.

[54] Rony Ferzli and Lina J. Karam. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Transactions on Image Processing, 18(4):717–728, April 2009.

[55] Hamid Rahim Sheikh, Zhou Wang, and Alan Conrad Bovik. LIVE image quality assessment database release 2. http://live.ece.utexas.edu/research/quality. [Online; Accessed June 2018].

[56] Kongfeng Zhu. No-reference Video Quality Assessment and Applications. PhD thesis, Department of Computer and Information Science, University of Konstanz, Konstanz, Germany, July 2014.

[57] Chunhua Chen and Jeffrey A. Bloom. A blind reference-free blockiness measure. In Guoping Qiu, Kin Man Lam, Hitoshi Kiya, Xiang-Yang Xue, C.-C. Jay Kuo, and Michael S. Lew, editors, Advances in Multimedia Information Processing - PCM 2010, pages 112–123, Berlin, Heidelberg, September 2010. Springer Berlin Heidelberg.

[58] Guangtao Zhai, Wenjun Zhang, Xiaokang Yang, Weisi Lin, and Yi Xu. No-reference noticeable blockiness estimation in images. Signal Processing: Image Communication, 23(6):417–432, 2008.

[59] Cristian Perra. A low computational complexity blockiness estimation based on spatial analysis. In 22nd Telecommunications Forum Telfor (TELFOR), pages 1130–1133. IEEE, November 2014.

[60] Maria Torres Vega, Decebal Constantin Mocanu, Jeroen Famaey, Stavros Stavrou, and Antonio Liotta. Deep learning for quality assessment in live video streaming. IEEE Signal Processing Letters, 24(6):736–740, June 2017.

[61] Gabriele Simone, Marius Pedersen, and Jon Yngve Hardeberg. Measuring perceptual contrast in digital images. Journal of Visual Communication and Image Representation, 23(3):491–506, April 2012.

[62] M. Pavel, George Sperling, Thomas Riedl, and August Vanderbeek. Limits of visual communication: the effect of signal-to-noise ratio on the intelligibility of American Sign Language. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 4(12):2355–2365, December 1987.

[63] Andrew M. Haun and Eli Peli. Complexities of complex contrast. In Color Imaging XVII: Displaying, Processing, Hardcopy, and Applications, page 82920E. Society of Photo Optical Instrumentation Engineers (SPIE), January 2012.

[64] Alessandro Rizzi, Thomas Algeri, Giuseppe Medeghini, and Daniele Marini. A proposal for contrast measure in digital images. Second European Conference on Color in Graphics, Imaging and Vision, 2004(1):187–192, January 2004.

[65] Alessandro Rizzi, Gabriele Simone, and Roberto Cordone. A modified algorithm for perceived contrast measure in digital images. In Second European Conference on Color in Graphics, Imaging and Vision, volume 2008, pages 249–252. Society for Imaging Science and Technology, 2008.

[66] Kanjar De and V. Masilamani. No-reference image contrast measure using image statistics and random forest. Multimedia Tools and Applications, 76(18):18641–18656, September 2017.

[67] Yuming Fang, Kede Ma, Zhou Wang, Weisi Lin, Zhijun Fang, and Guangtao Zhai. No-reference quality assessment of contrast-distorted images based on natural scene statistics. IEEE Signal Processing Letters, 22(7):838–842, July 2015.

[68] Gabriele Simone, Marius Pedersen, Jon Yngve Hardeberg, and Alessandro Rizzi. Measuring perceptual contrast in a multilevel framework. In Human Vision and Electronic Imaging XIV, pages 72400Q–72400Q-9. Society of Photo Optical Instrumentation Engineers (SPIE), February 2009.

[69] Jean Baptiste Thomas, Jon Yngve Hardeberg, and Gabriele Simone. Image contrast measure as a gloss material descriptor. In Computational Color Imaging: 6th International Workshop (CCIW), pages 233–245. Springer, March 2017.

[70] Omar Alaql, Kambiz Ghazinour, and Cheng Chang Lu. No-reference image quality metric based on features evaluation. In IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC), pages 1–7, Las Vegas, NV, USA, January 2017. IEEE.

[71] Michele A. Saad, Alan Conrad Bovik, and Christophe Charrier. Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain. IEEE Transactions on Image Processing, 21(8):3339–3352, August 2012.

[72] Anish Mittal, Rajiv Soundararajan, and Alan C. Bovik. Making a completely blind image quality analyzer. IEEE Signal Processing Letters, 20(3):209–212, November 2013.

[73] Tuomas Eerola, Lasse T. Lensu, Heikki Kälviäinen, and Alan C. Bovik. Study of no-reference image quality assessment algorithms on printed images. Journal of Electronic Imaging, 23(6):061106, August 2014.

[74] Michele A. Saad and Alan Conrad Bovik. Blind quality assessment of videos using a model of natural scene statistics and motion coherency. In 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pages 332–336, Nov 2012.

[75] Anish Mittal, Michele A. Saad, and Alan Conrad Bovik. A completely blind video integrity oracle. IEEE Transactions on Image Processing, 25(1):289–300, Jan 2016.

[76] P Muhammed Shabeer, Saurabhchand Bhati, and Sumohana S. Channappayya. Modeling sparse spatio-temporal representations for no-reference video quality assessment. In IEEE Global Conference on Signal and Information Processing (GlobalSIP), pages 1220–1224. IEEE, Nov 2017.

[77] Mohammed A. Aabed. Perceptual Video Quality Assessment and Analysis Using Adaptive Content Dynamics. PhD thesis, Georgia Institute of Technology, May 2017.

[78] International Telecommunication Union Telecommunication Standardization Sector (ITU-T). Parametric bitstream-based quality assessment of progressive download and adaptive audiovisual streaming services over reliable transport. Recommendation P.1203, December 2016.

[79] Jing Hu and Herb Wildfeuer. Use of content complexity factors in video over IP quality monitoring. In International Workshop on Quality of Multimedia Experience, pages 216–221. IEEE, July 2009.

[80] Greg Milette and Adam Stroud. Professional Android sensor programming. John Wiley & Sons, Inc, Indianapolis, Indiana, 1 edition, 2012. OCLC: ocn779864098.

[81] Thibaud Michel, Pierre Genevès, Hassen Fourati, and Nabil Layaïda. On Attitude Estimation with Smartphones. In IEEE International Conference on Pervasive Computing and Communications, Kona, United States, March 2017.

[82] Jay Esfandyari, Roberto De Nuccio, and Gang Xu. Solutions for MEMS sensor fusion. Solid State Technology, 54(7):18–21, July 2011.

[83] Paul Lawitzki. Android Sensor Fusion Tutorial. https://www.codeproject.com/Articles/729759/Android-Sensor-Fusion-Tutorial, February 2014. [Online; Accessed October 2017].

[84] William John Freeman. Digital Video Stabilization with Inertial Fusion. PhD Thesis, Virginia Tech, April 2013.

[85] Gustav Hanning, Nicklas Forslow, Per-Erik Forssen, Erik Ringaby, David Tornqvist, and Jonas Callmer. Stabilizing cell phone video using inertial measurement sensors. In IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pages 1–8. IEEE, November 2011.

[86] Sophocles J. Orfanidis. Introduction to Signal Processing. Prentice Hall, Englewood Cliffs, N.J., 1996.

[87] Nikolay Ponomarenko, Lina Jin, Oleg Ieremeiev, Vladimir Lukin, Karen Egiazarian, Jaakko Astola, Benoit Vozel, and Kacem Chehdi. Image database TID2013: Peculiarities, results and perspectives. Signal Processing: Image Communication, 30:57–77, 2015.