582631 Introduction to Machine Learning, Autumn 2014 Exercise set 7


To be discussed in the exercise session on 11 December. Attending the session is completely voluntary.

Credit is given based on the written solutions turned in to the course assistant.

The deadline for turning in your solutions is Friday, 12 December, at 23:59.

Send your solutions to the course assistant (johannes.verwijnen@cs.helsinki.fi) by e-mail. You should send one PDF file including your solutions to the pen-and-paper problems and the report for the programming problem, and a single compressed file including the code for the programming problem.

Pen-and-paper problems

Problem 1 (3 points)

The K-medoids algorithm is similar to K-means but can work with any data objects for which a pairwise similarity measure is defined, including cases for which taking the mean of a set of objects does not make sense.

Instead of selecting the mean of the objects belonging to a cluster as the prototype for the cluster, the K-medoids algorithm selects the most representative of the objects belonging to the cluster as the prototype. Thus the K-medoids algorithm is exactly like the K-means algorithm (Algorithm 8.1 in the textbook, also presented in the slides for lecture 10), except for line 4 of the algorithm: in K-medoids, the cluster prototype is selected to equal one of the objects assigned to that cluster, namely the one for which the sum of the similarities between the prototype and the objects in the cluster is maximized.
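In symbols: writing s(·, ·) for the pairwise similarity and C_k for the set of objects assigned to cluster k, line 4 chooses the medoid

$$m_k = \operatorname*{arg\,max}_{x \in C_k} \sum_{x' \in C_k} s(x, x'),$$

that is, the member object with the largest total similarity to the objects in its cluster.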

(a) Is this algorithm guaranteed to converge after a finite number of steps? If so, show this. (Hint: what is the corresponding objective function that the algorithm optimizes?)

(b) Suppose the data objects are points in Euclidean space, and the similarity measure for the K-medoids algorithm is taken to be the negative of the squared Euclidean distance. Which algorithm (K-means or K-medoids) would in general result in a lower sum of squared errors?

Problem 2 (3 points)

One practical problem with K-means is that it is possible for clusters to become empty. That is, in some cases the centroids c_j may end up in such locations that no points are assigned to one (or several) of the clusters! Such an example is trivial to construct with random initial locations for the centroids (since a centroid can simply be initialized very far from the data), but it can also happen when centroids are initialized to equal randomly chosen datapoints (even when no two centroids are initialized to the exact same location). A minimal check for detecting this situation in code is sketched after part (b) below.

(a) Manually construct such an example. (Hint: This problem does not happen in the first iteration, since at least the datapoint which is equal to the centroid will be assigned to that centroid at first. But it can happen in the second iteration. It is probably simplest to construct with one-dimensional samples x_i ∈ R. You will need at least K = 3. Six datapoints are sufficient if suitably placed, but feel free to use more if you wish.) If you find it hard to manually construct such a case, try to find such an example in numerical simulations (using the K-means algorithm that you construct in Problem 4 below).

(b) What are some strategies that you could employ to handle this problem when it occurs in the K-means algorithm?
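To make the failure mode concrete, here is a minimal NumPy check for empty clusters after an assignment step (the variable names are illustrative, not from the course materials):

```python
import numpy as np

K = 3
# Example assignment after an update step: no point is closest to centroid 2.
assign = np.array([0, 0, 1, 1, 0, 1])       # cluster index of each datapoint

counts = np.bincount(assign, minlength=K)   # points per cluster -> [3, 3, 0]
empty = np.flatnonzero(counts == 0)         # clusters that received no points
if empty.size > 0:
    print("empty clusters:", empty)         # a repair strategy (part (b)) would act here
```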


Problem 3 (3 points)

We consider hierarchical clustering on a toy data set consisting of seven data points on the Euclidean plane.

The data points are p1 = (0.5, 2.0), p2 = (2.5, 3.0), p3 = (4.2, 0.7), p4 = (5.5, 0.3), p5 = (4.8, 3.5), p6 = (7.0, 2.5) and p7 = (8.5, 2.8), or as a picture:

[Figure: the seven data points plotted on the Euclidean plane]

The matrix of Euclidean distances between the data points is then as follows:

     p1    p2    p3    p4    p5    p6    p7
p1   0     2.24  3.92  5.28  4.55  6.52  8.04
p2   2.24  0     2.86  4.04  2.35  4.53  6.00
p3   3.92  2.86  0     1.36  2.86  3.33  4.79
p4   5.28  4.04  1.36  0     3.28  2.66  3.91
p5   4.55  2.35  2.86  3.28  0     2.42  3.77
p6   6.52  4.53  3.33  2.66  2.42  0     1.53
p7   8.04  6.00  4.79  3.91  3.77  1.53  0

(a) Apply basic agglomerative hierarchical clustering to this data using the single link notion of dissimilarity between clusters. Visualize the result as a dendrogram.

(b) Repeat the clustering, now using complete link dissimilarity. Compare the results.
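If you want to sanity-check your hand-drawn dendrograms afterwards, SciPy implements both linkage criteria. A sketch (an optional cross-check, not part of the required pen-and-paper solution):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

P = np.array([[0.5, 2.0], [2.5, 3.0], [4.2, 0.7], [5.5, 0.3],
              [4.8, 3.5], [7.0, 2.5], [8.5, 2.8]])
D = pdist(P)                              # pairwise Euclidean distances
for method in ("single", "complete"):     # parts (a) and (b)
    Z = linkage(D, method=method)         # agglomerative clustering
    plt.figure()
    plt.title(method + " link")
    dendrogram(Z, labels=["p1", "p2", "p3", "p4", "p5", "p6", "p7"])
plt.show()
```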

Computer problem

General instructions:

• Return a brief written report (as PDF, included in the same file as your pen-and-paper solutions) and one directory (as a zip or tar.gz file) including all your code.

• Do not include any of the data files in your solution file.

• Always mention your name and student ID in the report.

• We use the report as the main basis for grading: all your results should be in the report. We also look at the code, but we will not go fishing for results in your code.

• The code needs to be submitted as a runnable file or set of files (command to run it given in the report).

• Your code should be reasonably efficient. You are not expected to heavily optimize the code, but your solution should not take excessive amounts of time on data sets relevant to the problem. A typical cause of failure here is using for-loops instead of vector operations in MATLAB; a sketch of the vectorization idea is given after this list. If your solution is grossly inefficient, your score may be reduced accordingly.


• In your report, the results will mostly be in the form of figures or program output. In both cases, add a few sentences explaining what you are showing and why the results answer the question.

• If we ask you to test whether an algorithm works, always give a few examples showing that the algorithm indeed works.
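To illustrate the efficiency point above: the nearest-centroid assignment can be computed for all points at once, with no loop over the datapoints. The course examples use MATLAB, but the same vectorization idea in NumPy looks like this (a sketch; the names are illustrative):

```python
import numpy as np

def assign_to_centroids(X, C):
    # X: (n, d) data matrix, C: (K, d) centroid matrix.
    # Squared distances of all n points to all K centroids as one (n, K) array,
    # using ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2 instead of an explicit loop.
    d2 = (X**2).sum(1)[:, None] - 2 * X @ C.T + (C**2).sum(1)[None, :]
    return d2.argmin(axis=1)                # index of the closest centroid per point
```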

Problem 4 (15 points)

In this problem we will implement the K-means algorithm and the K-medoids algorithm (see Problem 1) and test them on the MNIST handwritten digit data.

(a) Write your own implementation of the K-means algorithm (Algorithm 8.1 in the textbook, also presented in the slides of lecture 10). This should be a function that takes as inputs the data matrix and the initial centroids, and outputs the final centroids and the assignments specifying which data vectors are assigned to which centroids after convergence of the algorithm. (Use matrix operations wherever possible, avoiding explicit loops, to speed up the algorithm sufficiently for running it on the MNIST data below.)

(b) Load the first 500 of the MNIST digits. Run your K-means algorithm using K = 10 clusters, with the initial centroids equal to the first 10 images in the dataset. After convergence, for each cluster, visualize the centroid of that cluster as well as all images assigned to that cluster (using the provided visual() function). To what extent do the 10 clusters correspond to the 10 different digits? Re-run K-means, but selecting the first instance of each class as the initial centroids (so that the initial centroids all represent distinct digits), and compare with the previous results. (Hint: There should be some correspondence, but some digits are definitely clustered together, and it should be clear from the visualization that the clustered images are 'similar' even if they do not necessarily represent the same digits.)
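A minimal NumPy sketch of the function requested in part (a) might look as follows (illustrative only: the course materials target MATLAB, and none of these names are prescribed):

```python
import numpy as np

def kmeans(X, C0, max_iter=100):
    # X: (n, d) data matrix, C0: (K, d) initial centroids.
    # Returns the final centroids and the cluster index of each data vector.
    C = np.array(C0, dtype=float)
    assign = None
    for _ in range(max_iter):
        # vectorized squared distances of all points to all centroids, (n, K)
        d2 = (X**2).sum(1)[:, None] - 2 * X @ C.T + (C**2).sum(1)[None, :]
        new_assign = d2.argmin(axis=1)
        if assign is not None and np.array_equal(new_assign, assign):
            break                            # assignments stable: converged
        assign = new_assign
        for k in range(len(C)):              # update step: recompute centroids
            members = X[assign == k]
            if len(members) > 0:             # keep old centroid if cluster empty (cf. Problem 2)
                C[k] = members.mean(axis=0)
    return C, assign
```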

(c) Write your own implementation of the K-medoids algorithm (see Problem 1 above). This should be a function that takes as inputs a similarity matrix and the initial medoids, and outputs the final medoids and the assignments specifying which data vectors are assigned to which clusters after convergence of the algorithm. (As above, use matrix operations to avoid explicit loops as much as possible.)
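Correspondingly, a minimal K-medoids sketch for part (c), again with illustrative names and assuming a similarity matrix in which larger values mean more similar:

```python
import numpy as np

def kmedoids(S, m0, max_iter=100):
    # S: (n, n) pairwise similarity matrix, m0: (K,) indices of initial medoids.
    # Returns the final medoid indices and the cluster index of each object.
    medoids = np.array(m0, dtype=int)
    assign = None
    for _ in range(max_iter):
        new_assign = S[:, medoids].argmax(axis=1)   # most similar medoid per object
        if assign is not None and np.array_equal(new_assign, assign):
            break                                   # assignments stable: converged
        assign = new_assign
        for k in range(len(medoids)):               # update step (line 4 variant)
            members = np.flatnonzero(assign == k)
            if members.size == 0:                   # keep old medoid if cluster empty
                continue
            totals = S[np.ix_(members, members)].sum(axis=1)
            medoids[k] = members[np.argmax(totals)] # most representative member
    return medoids, assign
```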

(d) Run your K-medoids implementation on the first 500 digits of the MNIST dataset, mirroring what you did in part (b) above. For simplicity, use the negative of the Euclidean distance as the similarity measure, but you can also try other similarity measures if you wish. Just as in part (b), try running the algorithm with the first 10 images as the initial medoids, and compare with selecting the initial medoids so that they all represent distinct digits. As in (b), visualize your results by, for each cluster, showing the cluster prototype (the medoid), as well as all images belonging to the cluster.

(e) What do your results say about the sensitivity of K-means and K-medoids to the specification of the initial cluster prototypes? What would be one way of dealing with this issue?

