

7.3 Trajectory Replication by a Robot

7.3.3 Temporal and Space Invariance Tests

Spatial Invariance. To verify the spatial invariance property of dynamic movement primitives (DMPs), a series of tests was performed in which the goal position was changed. The system was expected to generate a trajectory close in shape to the original one but converging to the new goal position. The motion parameters are given in Table 11.

In test 2 (Figure 43), the duration of the motion is the same as for the motion generated in Matlab, but the goal position was changed. Figure 43 demonstrates how the change in the goal position affected the trajectory.


Figure 43. Generated Trajectories, Change of the Goal Position

Figure 43 shows the motion generated in Matlab (red line) and the motion generated online for the robot with the changed goal position (blue line). As shown, the generation program adjusted the dynamic movement primitives to produce a trajectory that converged to the new goal position.

Temporal invariance was checked by recalculating the parameter τ according to the desired duration of the motion. The motion parameters are as follows:

Table 11. Parameters of the Motion

Parameter              Value
Move Time (s)          5
Number of Gaussians M  9
Width of Gaussians w   10
Spring Constant K      150
Damping Factor D       17
Temporal Scaling τ     4.75
Runge-Kutta Step (s)   0.007

Figure 44 demonstrates the trajectories with different durations.


Figure 44. Generated Trajectories, Change of the Duration of the Motion

In Figure 44 the red-line motion lasts 5 seconds and the blue-line motion 3.5 seconds. The shape of the motion remained close to the original, although the trajectory was stretched in time. To obtain a motion close to the original one but with a different duration, the spring and damping factors would also need to be adjusted.

8 CONCLUSION

During the research the first objective, studying a way of programming a robot with visual input, was accomplished. A demonstrated motion was recorded by a stereo camera, the sequence of images was processed, the learning phase provided the weight coefficients of the motion, and the generation procedure produced the velocities for the robot joints.

The dynamic movement primitives concept proved to be an efficient tool for motion learning and generation. One advantage of DMPs is that a motion is stored as a set of weight coefficients, which saves memory. Another benefit is that DMPs make it possible to generate a motion with a different target position and a different duration; these are the spatial and temporal invariance properties of DMPs.

The second objective, analysing the influence of different parameter values on the generated trajectory, was also fulfilled. The experiments concerning visual sensing showed that the methods for pointer detection and 3D reconstruction can be used in the system and give predictable results.

In the second part of the experiments the different parameters used in motion learning and generation were studied. It was shown that the number of Gaussians should be large enough to describe the whole motion without loss of information about the path, but not so large that excess weight coefficients are stored that bring no new information.

The centers of Gaussian basis functions should be equally distributed in time, and it is important to avoid unnecessary overlapping, which is regulated by the width parameter.

Damping and spring constants should be chosen for each application separately, since there seem to be no unique values that would satisfy all types of motions. On the other hand, it might be possible to choose suitable parameters for movements within one type. It is important to consider the damping factor and the spring coefficient together.

The parabolic trajectories were learnt and performed by the robot successfully, whereas for the line motions there were problems related to the difficulty of choosing one set of damping and spring parameters for all three dimensions. As a solution, different coefficients for different dimensions were suggested.

The third part of the experiments confirmed some of the properties of DMPs, namely spatial and temporal invariance. The former means that the system is capable of converging to a changed goal position; the latter means that the duration of the motion can be varied by changing the temporal coefficient.

The designed system includes the Matlab image analysis and learning program and the C++ motion generation software, both of which can be extended to add new functionality to the system.

As one improvement, obstacle avoidance could be implemented; it was not considered in the scope of this work. Another idea would be to build a library of motions, so that the robot could not only perform the demonstrated motions but also recognize movements that are shown to it.
