Trajectory Learning and Generation

Figure 5 presents the diagram of the learning process.

[Diagram: the canonical system feeds the phase variable to the Gaussian basis functions and to the error-minimization block, which receives f_target(s) computed from x, ẋ, ẍ and outputs the weights.]

Figure 5. Learning Process

The canonical system generates the phase variable, for which the target non-linear function f_target and the Gaussian basis functions are evaluated, given the coordinates x, velocities ẋ and accelerations ẍ of the demonstrated trajectory. As its output, the learning block provides the set of weight coefficients produced by the error-criterion minimization block.
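The weight-fitting step described above can be sketched for a one-dimensional trajectory as follows. The gains K and D, the basis layout and the exact form of the transformation system are illustrative assumptions, not necessarily the thesis's own formulation:

```python
import numpy as np

def learn_dmp_weights(x, dt, n_basis=20, K=100.0, D=20.0, alpha_s=4.0):
    """Fit forcing-term weights from one demonstrated 1-D trajectory.
    K, D, alpha_s and the transformation-system form below are
    illustrative choices."""
    T = len(x)
    tau = (T - 1) * dt                       # movement duration
    t = np.arange(T) * dt
    s = np.exp(-alpha_s * t / tau)           # canonical system: tau*s' = -alpha_s*s

    # numerical velocities and accelerations of the demonstration
    xd = np.gradient(x, dt)
    xdd = np.gradient(xd, dt)

    g = x[-1]
    # target forcing term from the transformation system
    # tau*v' = K*(g - x) - D*v + f,  with v = tau*x'
    f_target = tau**2 * xdd - K * (g - x) + D * tau * xd

    # Gaussian basis functions spaced along the phase variable
    c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))  # centers
    h = n_basis / c**2                                     # widths
    psi = np.exp(-h * (s[:, None] - c)**2)                 # shape (T, n_basis)

    # locally weighted regression: one weight per basis function,
    # fitting the model f(s) ~ w_i * s in the region of basis i
    num = np.sum(psi * (s * f_target)[:, None], axis=0)
    den = np.sum(psi * (s**2)[:, None], axis=0)
    return num / (den + 1e-10)
```

The regression is solved independently per basis function, which is what makes this learning step fast compared with a global least-squares fit.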

Figure 6 demonstrates the process of motion generation.

[Diagram: the canonical system feeds the phase variable to the transformation system f(s, w), which combines the learned weight coefficients with the start and goal positions x0, g and outputs x, v, v̇.]

Figure 6. Motion Generation

Knowing the weights of the non-linear function describing the motion, and given the task-specific start and goal positions, the program outputs the expected trajectory coordinates, velocities and accelerations.

Similarly to the learning stage, the canonical system generates the phase variable for the whole duration of the motion, from which the non-linear function is calculated. The transformation system is then employed to compute the next position, velocity and acceleration. The implementation of learning and generation is discussed in Section 5.
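A rough sketch of this rollout, assuming an illustrative DMP transformation system integrated with Euler steps (the gains, basis layout and step scheme are assumptions, not the thesis's exact implementation):

```python
import numpy as np

def generate_dmp_motion(w, x0, g, tau, dt, K=100.0, D=20.0, alpha_s=4.0):
    """Roll out a 1-D movement: the canonical system supplies the
    phase s, the learned weights w give the forcing term f(s), and
    the transformation system is integrated step by step."""
    n_basis = len(w)
    c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))  # basis centers
    h = n_basis / c**2                                     # basis widths

    n_steps = int(tau / dt) + 1
    x, v, s = x0, 0.0, 1.0
    xs, vs, accs = [], [], []
    for _ in range(n_steps):
        psi = np.exp(-h * (s - c)**2)
        # forcing term: weighted basis mixture, gated by the phase
        f = s * (psi @ w) / (psi.sum() + 1e-10)
        vdot = (K * (g - x) - D * v + f) / tau
        xs.append(x); vs.append(v / tau); accs.append(vdot / tau)
        v += vdot * dt
        x += (v / tau) * dt
        s += (-alpha_s * s / tau) * dt     # canonical system
    return np.array(xs), np.array(vs), np.array(accs)
```

With zero weights the forcing term vanishes and the system reduces to a damped spring that converges to the goal g, which is the usual sanity check for such a generator.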

4 VISUAL SENSING

The importance of visual sensing, the first stage in the system of learning robot motion, is difficult to overestimate, since the accuracy of trajectory replication depends strongly on the accuracy of the measured path. This section presents the fundamentals of stereo geometry as well as a description of pointer tracking, normalization of frame coordinates and 3D reconstruction.

4.1 Pointer Detection

4.1.1 Target Selection

One part of the problem was to choose the pointer for motion demonstration. A tutor used a red ball to demonstrate the movement, while a stereo camera with known calibration parameters recorded left and right images that together represent the whole motion. The frame rate of the saved images might not be constant (i.e., the intervals between images are expected to correspond to 15 Hz most of the time, but can sometimes be longer). The color of the pointer was chosen to be easily detectable against the background, so it is red. A round object was chosen so that its appearance stays almost the same regardless of the projection, which differs at different points of the trajectory. The size of the ball is convenient for demonstrating the motion and at the same time large enough for its center of mass to be detected correctly, so that the obtained trajectory is close to the original one. The material of the pointer should have as little specular reflection as possible, so as not to produce holes in the image of the ball.

4.1.2 Detection Algorithm

Pointer localization and tracking is the first part of trajectory detection, and it must not be too time- or resource-consuming. That is why color segmentation was employed to detect the pointer. Since the lighting conditions and the position of the camera might change from one experiment to another, the user is asked to mark the position of the ball in the first image, and the program then uses these color samples to localize the pointer in subsequent images. Colors are considered in the HSV color space. The minimum and maximum HSV values of the pixels specified by the user are used as the thresholds for detecting the area of points belonging to the ball. In addition, the system memorizes the previous location of the ball and subsequently considers not the whole image but only the part that is expected to contain the pointer.
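The thresholding and search-window idea might be sketched as follows with NumPy, assuming frames already converted to HSV; the function name, array layout and window size are illustrative, not the thesis's implementation:

```python
import numpy as np

def hsv_threshold_mask(frame_hsv, sample_pixels, prev_center=None, win=60):
    """Segment the pointer using the per-channel min/max of the
    user-marked sample pixels, optionally restricted to a window
    around the previous center of mass (win is an assumed size)."""
    lo = sample_pixels.min(axis=0)          # per-channel HSV minimum
    hi = sample_pixels.max(axis=0)          # per-channel HSV maximum
    mask = np.all((frame_hsv >= lo) & (frame_hsv <= hi), axis=-1)

    if prev_center is not None:             # search only near the last detection
        r, c = prev_center
        keep = np.zeros_like(mask)
        keep[max(r - win, 0):r + win, max(c - win, 0):c + win] = True
        mask &= keep
    return mask
```

Restricting the search to a window around the previous detection both speeds up the segmentation and suppresses distant false positives such as the shoe in Figure 7.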

Blue points in Figure 7 represent the pixels assumed to belong to the area of the ball according to the thresholds:

Figure 7. Localizing the Pointer, Color Segmentation

There might be several regions containing pixels of the sample color; as Figure 7 shows, part of a shoe is also detected as the pointer. To resolve this problem, morphological closing is first performed on the binary image to make the regions more homogeneous. A disk structuring element is used to preserve the circular shape of the object. Blob detection then reveals the remaining areas, and the center of mass is calculated for the region with the largest area. Figure 8 shows the region of the ball as a binary image together with the detected center of mass.

Figure 8. Pointer Localization

Once the color thresholds have been obtained, the pointer detection algorithm proceeds as follows:

1. determine the window, containing the pointer, according to the previous center of mass detected;

2. detect the clusters of red points according to the thresholds;

3. produce a binary image in which red areas are white and all other points are black;

4. perform image closing;

5. perform blob detection;

6. choose the blob with the largest area;

7. calculate the center of mass of the largest blob.
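Steps 4-7 above can be sketched with `scipy.ndimage`, assuming a binary mask from the color-thresholding step; the disk radius is an illustrative choice, not a value given in the text:

```python
import numpy as np
from scipy import ndimage

def locate_pointer(mask):
    """Close the binary mask, label the blobs, pick the largest one
    and return its center of mass (row, col), or None if empty."""
    # disk structuring element (assumed radius 3) preserves the round shape
    r = 3
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (yy**2 + xx**2) <= r**2

    closed = ndimage.binary_closing(mask, structure=disk)   # step 4
    labels, n = ndimage.label(closed)                       # step 5
    if n == 0:
        return None
    sizes = ndimage.sum(closed, labels, index=np.arange(1, n + 1))
    largest = int(np.argmax(sizes)) + 1                     # step 6
    return ndimage.center_of_mass(closed, labels, largest)  # step 7
```

Choosing the largest blob implicitly assumes the ball is the biggest red region in the search window, which is why the windowing of the previous step matters.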

The pointer is localized in the same way in all images of the trajectory, in both the left and right camera views. After this stage the system has obtained the trajectory in the left and right camera frames, but the question remains how to recover the motion in 3D, and for this stereo geometry needs to be employed.