

Code 9. Retrieving wall data from model

Code 10. Function for pipe creation

object.rotateX(angle), object.rotateY(angle) and object.rotateZ(angle) can be used. These functions rotate the object around its local axes.
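The effect of these local-axis rotations can be illustrated without Three.js. The sketch below reproduces what a rotation about the local Z axis does to a point; the function name and the plain-object vectors are illustrative, not the Three.js API:

```javascript
// Sketch of the math behind Three.js object.rotateZ(angle): a rotation
// of a point about the local Z axis. Illustrative only; the application
// uses the real Three.js methods.
function rotateZ(p, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return {
    x: c * p.x - s * p.y, // standard 2D rotation in the XY plane
    y: s * p.x + c * p.y,
    z: p.z                // the Z coordinate is unchanged
  };
}

// Rotating the point (1, 0, 0) by 90 degrees about Z gives (0, 1, 0).
const q = rotateZ({ x: 1, y: 0, z: 0 }, Math.PI / 2);
console.log(q.x.toFixed(3), q.y.toFixed(3), q.z.toFixed(3)); // 0.000 1.000 0.000
```

rotateX and rotateY follow the same pattern with the roles of the axes exchanged.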

Once the scene is built, the application includes a button that, when pressed, makes the intersections between the pipes. As in the pipe detection each pipe was first segmented as an object and the cylinders were then searched inside that object, it is known which cylinders are part of the same object. There are many types of intersections depending on the position of one pipe with respect to another. The possible intersections were calculated based on the cases found in this work's sample file:

1. Two perpendicular pipes that cross forming a T. In this case one pipe is extended to the centre of the other pipe.

2. Two perpendicular pipes forming an L, whose ends need to be joined. A torus is added joining the ends of the two pipes.

3. Two parallel pipes whose axes are further apart than a certain value, bypassing a column. A perpendicular cylinder is added between the two pipes and two tori are added to join this cylinder to the two pipes.

4. Two parallel pipes whose axes are closer together than a certain value. A tube following a spline curve fitted to the ends of the pipes is added.
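The four cases above could be told apart from the pipe axes alone. The sketch below is one hedged way to do it; the thresholds, the helper names and the endsMeet flag are assumptions for illustration, not the values used in the application:

```javascript
// Sketch: classify which of the four intersection cases applies, given
// the (unit) axis directions of two pipes, the distance between their
// axes, and whether their ends meet. Thresholds are assumed values.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

function classifyIntersection(dirA, dirB, axisDistance, endsMeet, maxGap) {
  const cosAngle = Math.abs(dot(dirA, dirB)); // 0 = perpendicular, 1 = parallel
  if (cosAngle < 0.1) {                       // roughly perpendicular
    return endsMeet ? "L" : "T";              // case 2 : case 1
  }
  if (cosAngle > 0.9) {                       // roughly parallel
    return axisDistance > maxGap ? "bypass" : "spline"; // case 3 : case 4
  }
  return "none";
}

console.log(classifyIntersection({x:1,y:0,z:0}, {x:0,y:1,z:0}, 0, false, 0.5)); // T
```

Each label would then select the geometry to add: an extended cylinder, a torus, a cylinder plus two tori, or a spline tube.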

When clicking on a pipe, the information of the pipe, the product that matches the pipe and its suppliers are shown in a panel. Providing accurate information requires reasoning. The selected pipe is compared with the product catalogue, selecting a product if the radii are equal and the length is similar. Each product's information is obtained from the Fuseki server, where the OWL file is loaded.
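The matching rule described above can be sketched as follows; the field names radius and length and the tolerance value are assumptions about the catalogue format:

```javascript
// Sketch of the product matching rule: select a catalogue product whose
// radius equals the detected pipe's radius and whose length is similar
// within a tolerance. Field names and the tolerance are assumed.
function matchProduct(pipe, products, lengthTolerance) {
  return products.find(p =>
    p.radius === pipe.radius &&
    Math.abs(p.length - pipe.length) <= lengthTolerance
  ) || null;
}

const catalogue = [
  { id: "prod1", radius: 0.05, length: 2.0 },
  { id: "prod2", radius: 0.05, length: 3.0 }
];
const pipe = { radius: 0.05, length: 2.1 };
console.log(matchProduct(pipe, catalogue, 0.2).id); // prod1
```

In practice the comparison of radii would also use a small tolerance, since measured values are rarely exactly equal.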

A product class has an attribute called provided_by that links a product with its suppliers.

When the product has been matched with a pipe, the JavaScript program gets the suppliers' information by querying Fuseki and prints it in the web application's info panel.
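The supplier lookup could be issued as a SPARQL query over HTTP along these lines. The endpoint URL, the prefix and the resource naming are assumptions; only the provided_by property comes from the ontology described above:

```javascript
// Sketch: build the request URL for a SPARQL-over-HTTP supplier lookup
// against a Fuseki query endpoint. The namespace and endpoint are
// assumed placeholders, not the real ontology IRIs.
function buildSupplierQueryUrl(endpoint, productId) {
  const query = `
    PREFIX ex: <http://example.org/ontology#>
    SELECT ?supplier WHERE {
      ex:${productId} ex:provided_by ?supplier .
    }`;
  return `${endpoint}?query=${encodeURIComponent(query)}`;
}

const url = buildSupplierQueryUrl("http://localhost:3030/dataset/sparql", "prod1");
console.log(url.startsWith("http://localhost:3030/dataset/sparql?query=")); // true
```

The resulting URL would then be fetched and the JSON result set printed in the info panel.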

Figure 27. Cylinder's info panel

The system is divided into several parts: the C++ program, the Fuseki server and the web application. Fig. 28 shows the architecture of the system. The C++ program processes the scanned data and obtains the information of each element of the scene. The ontological model is loaded in Fuseki, and the model is updated from the C++ program using SPARQL Update over HTTP. The web application queries information from the model in Fuseki using SPARQL for the model creation.

The next activity diagram explains step by step the process to build the web application.

In the C++ program the steps to obtain the scene's information are: filtering, plane and cylinder searching and finally populating the ontological model. The ontological model is created in Olingvo and uploaded to Fuseki. The Fuseki application takes the update and query requests and responds to the client. The web application requests data from Fuseki and receives an answer, which is processed to build the 3D model and to provide replacement information from the market.


Figure 28. Architecture of the system

Figure 29. Activity diagram

5. RESULTS

The goal of the cleaning process is to remove all the elements that are not structural, like machines or tables. The shape of the structure and the pipes could be detected without cleaning the point cloud of unnecessary points: the RANSAC algorithm could be applied directly, but the execution time would be higher and unwanted elements would be detected. Some conditions must be established to accept or reject the detected items.
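The kind of acceptance conditions mentioned above can be sketched as a simple predicate; the thresholds and field names are illustrative assumptions, not the values used in the C++ program:

```javascript
// Sketch: accept a detected plane only if it has enough inlier points
// and its (unit) normal is close to horizontal (a wall) or close to
// vertical (a floor or ceiling). Thresholds are assumed values.
function acceptPlane(plane, minInliers) {
  if (plane.inliers < minInliers) return false;
  const nz = Math.abs(plane.normal.z);
  const isWall = nz < 0.1;           // normal roughly horizontal
  const isFloorOrCeiling = nz > 0.9; // normal roughly vertical
  return isWall || isFloorOrCeiling;
}

console.log(acceptPlane({ inliers: 5000, normal: { x: 1, y: 0, z: 0 } }, 1000)); // true
console.log(acceptPlane({ inliers: 200,  normal: { x: 1, y: 0, z: 0 } }, 1000)); // false
```

Planes failing such tests would be discarded as furniture, machine surfaces or spurious fits.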

Fig. 30 shows the point cloud after the cleaning process. Comparing it to Fig. 11, pipes and other items have been removed, facilitating the recognition of walls.

Figure 30. Clean point cloud


The remaining point cloud contains the pipes and the elements that have been extracted from the cleaned cloud. The cloud in Fig. 31 is more suitable for searching for pipes, improving the search time and obtaining better results. The RANSAC algorithm can be applied directly to this cloud, but there is a risk of obtaining false positives and recognizing cylinders that do not exist in reality. For this reason, the pipes are first recognized as objects by segmenting the cloud with the region-growing algorithm. After applying RANSAC it is known which cylinder belongs to which tube.
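The idea of segmenting objects before fitting cylinders can be illustrated with a toy region-growing pass. The real PCL implementation also uses point normals and curvature; this sketch grows regions by Euclidean distance only:

```javascript
// Toy region growing: group nearby points into regions so that each
// cylinder fitted later can be attributed to one object. Grows by
// Euclidean distance only; maxDist is an assumed threshold.
function regionGrow(points, maxDist) {
  const labels = new Array(points.length).fill(-1);
  let region = 0;
  for (let i = 0; i < points.length; i++) {
    if (labels[i] !== -1) continue;   // already assigned to a region
    const stack = [i];
    labels[i] = region;
    while (stack.length) {
      const a = points[stack.pop()];
      for (let j = 0; j < points.length; j++) {
        if (labels[j] !== -1) continue;
        const b = points[j];
        const d = Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
        if (d <= maxDist) { labels[j] = region; stack.push(j); }
      }
    }
    region++;
  }
  return labels;
}

const pts = [{x: 0, y: 0, z: 0}, {x: 0.1, y: 0, z: 0}, {x: 5, y: 0, z: 0}];
console.log(regionGrow(pts, 0.5)); // first two points share region 0, the far point gets region 1
```

A production version would use a spatial index (e.g. a k-d tree) instead of the quadratic neighbour scan.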

Figure 31. Point cloud for cylinder search

In the cleaning process, first region growing and then RANSAC is applied to search for planes. These planes could be used to determine the shape of the structure, but too many planes are detected and most of them are unnecessary. By establishing some conditions, planes could be accepted or rejected. This approach is not followed; instead, a plane search is carried out using the RANSAC algorithm. Fig. 32 shows the results after searching for the planes. Each detected plane is shown in a different colour.

Figure 32. Segmented planes

For an accurate recognition, some parameters must be adjusted, like the weight given to the angular distance of the point normals and the distance of each point to the model. Depending on the density of the point cloud and the depth of the points, these values will change. In a point cloud with similar features, the same method with the same configuration could be applied.

In the cylinder detection, the program is not able to identify cylinders with a small radius. This is because cylinders with small radii are represented by a low number of points compared to the big ones. The RANSAC algorithm parameters should be adjusted to identify them. Fig. 33 shows the identified cylinders.

Figure 33. Detected cylinders

A negative point of the RANSAC cylinder search algorithm is that it cannot detect the curvatures of a pipe. To search for the curvatures of a pipe, the volumes of these parts should be approximated to a mathematical model, which is not easy and is not implemented in the PCL library used in this work. It would be possible to consider the points remaining after the cylinder search as intersections and convert them to meshes. However, if the points only represent the part of the pipe visible to the scanner, there would be missing points and reconstruction of these meshes would be required.

The Three.js library provides facilities to build a web application with 3D content. The 3D model could be created in different ways, directly using the Three.js library or by creating a model file, such as .obj. Saving the model in a certain file format would make it possible to open the file in many software packages for visualization or modification.

Three.js is suitable for creating animated 3D scenes; it is lightweight and smooth. In this case, when the button to intersect the cylinders is pressed, the scene is updated by modifying pipes or adding new objects.

The scene created from the data obtained by scanning and processing is quite accurate. The shape of the structure is perfectly digitized, representing all the walls and columns.

Pipes are represented as cylinders and comparing the virtualized pipes to the real ones, the conclusion is that digital pipes have a precise position, length and radius.

Fig. 34 shows the web application. The interface contains the digitized 3D model, an info panel in the top left corner to print the information of the pipes and a button in the top right corner for making the intersections between the cylinders.

Figure 34. Web application interface

Fig. 35 compares the real scene and the virtual one. All the machines have been removed and some elements are missing, like small pipes and grids, but the main structure and the main pipes are shown.

Figure 35. Real scene and 3D model

In Fig. 34 the model is shown without the intersections of the pipes. After the button is clicked, these are made and represented in the scene. The four cases explained in the implementation part are shown in the following figures.

Figure 36. Intersection type T

Fig. 36 shows two perpendicular pipes forming a T shape. The vertical pipe is extended to the centre line of the horizontal pipe.

Figure 37. Intersection type L

Two perpendicular pipes forming an L are shown in Fig. 37. A torus is added for joining the two tubes.

Figure 38. Intersection of two parallel pipes, bypassing a column

Fig. 38 shows the intersection of pipes that are bypassing a column. The intersection is made by adding two perpendicular pipes in the normal direction of the wall and joining these pipes to the others using torus geometry.

Figure 39. Intersection of parallel pipes, spline

The intersection between two parallel pipes is shown in Fig. 39. The intersecting tube is approximated by a spline curve that goes from the end of one pipe to the end of the other. The real intersection is not exactly like the one in the web application: the joint in the 3D model is an approximation of the real one, which consists of two elbows.
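The case-4 joint can be sketched with a simple curve between the two pipe ends. Here a quadratic Bezier stands in for the spline used in the application, and the placement of the control point halfway between the ends is an assumption:

```javascript
// Sketch: evaluate a quadratic Bezier between the ends of two parallel
// pipes; sampling it at several t values would give the points defining
// the tube geometry of the joint. The control point is assumed.
function bezierPoint(p0, p1, p2, t) {
  const u = 1 - t;
  return {
    x: u*u*p0.x + 2*u*t*p1.x + t*t*p2.x,
    y: u*u*p0.y + 2*u*t*p1.y + t*t*p2.y,
    z: u*u*p0.z + 2*u*t*p1.z + t*t*p2.z
  };
}

const endA = { x: 0, y: 0, z: 0 };     // end of the first pipe
const endB = { x: 0, y: 0.2, z: 1 };   // end of the second, parallel pipe
const ctrl = { x: 0, y: 0.1, z: 0.5 }; // assumed control point between them
const mid = bezierPoint(endA, ctrl, endB, 0.5);
console.log(mid); // the curve midpoint lies between the two pipe ends
```

The real joint with its two elbows would need a curve with more control points, which is why the single spline is only an approximation.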

The information for purchasing a replacement for a pipe benefits the user, showing its price and the information of the supplier, such as the distance and the delivery time of the supplier, as well as the product's radius and length. Fig. 40 and Fig. 41 show the purchase information of two pipes. These two pipes have the same radius but different lengths, so the matching products are different, as are the suppliers.

Figure 40. Cylinder 4 purchase information

Figure 41. Cylinder 3 purchase information

6. CONCLUSION

A knowledge-driven method for 3D reconstruction of technical installations in building rehabilitation is presented in this thesis. This chapter discusses the completion of the aspects that were presented in the introductory chapter, explaining the factors that can affect the final result.

Digitizing a scene can be done in many different ways using many techniques. In this thesis, a 3D laser scanner was used to capture the shape of the FASTory Lab. It is a consistent and fast method to capture the scene, but big objects like machines can create interferences in the point cloud data. When scanning a scene, the position of the scanner must be taken into account to obtain the most accurate data possible. In this case, the scanner must focus on all the pipes and walls. The obtained point cloud data was missing points that were behind the machines, which affected the computation of a pipe's length.

The method presented in this thesis can be applied to other scenes or point clouds. Depending on the features of the point cloud, it may be necessary to adjust some parameters, like the distance threshold of the points when searching for planes or cylinders. These parameters affect the model-fitting segmentations, over- or under-segmenting the scene.

The detection of pipes with a small radius has not been achieved. These pipes are represented by a low number of points, making it difficult to fit the points to a cylinder model.

The 3D model built from the data obtained by processing the point cloud was quite accurate, representing all the walls and pipes as in the real scene. However, as the points that were behind the machines were not captured perfectly, the length of one pipe is shorter than in reality.

Furthermore, the current proposal developed in this work has many advantages. It allows the user to access the application from any platform without installing, updating or maintaining any software. A simple click to open the web browser allows them to run the application.

The data stored in the ontological model can be reused for future development or for other applications. The architecture of the application allows modifications and updates to be made easily, as the program is divided into many modules.

The final application can be very useful for reconstruction works, providing the user with information about the pipes that must be replaced.

6.1 Future work

The present thesis has explained to the reader the potential of digitizing a real scene.

Nowadays, digitizing and connecting every system to the cloud is becoming very popular.

Having a virtual model allows working remotely without visiting the scene, simplifying the tasks. The proposal suggested in this thesis offers many possibilities for further development and expansion.

First of all, the scene recognition and modelling should be improved to capture the entire room and not just one wall. A whole room could be virtualized just by scanning it, without building the model manually. All the pipes should be detected, including the small ones and regardless of their direction. The intersections could be fitted to geometrical mathematical models for the RANSAC search.

The texture and colour of the scene can be used for the identification of the elements in the scene. The visualization can be done with the original textures, approximating the model to the reality.

Furthermore, the database of products and suppliers could be updated with real products and suppliers, showing the user a wide variety of products. Having real-time information about each supplier would be a further step, allowing the user to decide immediately and avoiding having to ask the supplier whether a product is available or not.

The web application could be executed on a server, where the scanned data would be uploaded. The program would do all the recognition of the elements and the modelling.

The program would return the 3D model to the user, to navigate in it, together with the information about the pipes.

Finally, the 3D model could be saved as a CAD file, making it possible to open the model in CAD programs, where it could be modified and the desired information extracted.
