
Degree Program in Computer Science

Master’s Thesis

Sahar Zafari

SEGMENTATION OF OVERLAPPING CONVEX OBJECTS

Examiners: Prof. Heikki Kälviäinen and Prof. Heikki Haario

Supervisors: D.Sc. (Eng.) Tuomas Eerola and D.Sc. (Eng.) Jouni Sampo


ABSTRACT

Lappeenranta University of Technology

School of Industrial Engineering and Management
Degree Program in Computer Science

Sahar Zafari

Segmentation of Overlapping Convex Objects

Master’s Thesis 2014

90 pages, 34 figures, 8 tables, and 1 appendix.

Examiners: Prof. Heikki Kälviäinen and Prof. Heikki Haario

Keywords: segmentation, overlapping objects, convex objects, image processing

This thesis presents a framework for the segmentation of clustered overlapping convex objects. The proposed approach is based on a three-step framework in which the tasks of seed point extraction, contour evidence extraction, and contour estimation are addressed. The state-of-the-art techniques for each step were studied and evaluated using synthetic and real microscopic image data. Based on the obtained evaluation results, a method combining the best performers in each step was presented. In the proposed method, the Fast Radial Symmetry transform, the edge-to-marker association algorithm, and ellipse fitting are employed for seed point extraction, contour evidence extraction, and contour estimation, respectively. Using synthetic and real image data, the proposed method was evaluated and compared with two competing methods, and the results showed a promising improvement over the competing methods, with high segmentation and size distribution estimation accuracy.


PREFACE

I would like to express my deepest gratitude to my supervisors, Dr. Tuomas Eerola and Dr. Jouni Sampo, for their excellent guidance, caring, and patience during this thesis work. I am also grateful to Professor Heikki Kälviäinen and Professor Heikki Haario for their insightful comments and continued support.

Finally, I would like to thank my loved ones, who have supported me throughout the entire process. I will be forever grateful for your love.

Lappeenranta, August 31st, 2014

Sahar Zafari


Contents

1 Introduction
 1.1 Background
 1.2 Objectives and Restrictions
 1.3 Structure of the Thesis

2 Machine Vision and Image Segmentation
 2.1 Segmentation Methods
  2.1.1 Threshold-Based Segmentation
  2.1.2 Edge-Based Segmentation
  2.1.3 Region-Based Segmentation
  2.1.4 Watershed Transform
  2.1.5 Active Contour
  2.1.6 Graph Cut
 2.2 Segmentation with Shape Prior
 2.3 Segmentation of Overlapping Objects

3 Framework for Overlapping Convex Object Segmentation
 3.1 General Framework
 3.2 Seed Point Extraction
  3.2.1 Local Convergence Filters
  3.2.2 Ultimate Erosion for Convex Sets
  3.2.3 Distance Transform
  3.2.4 Fast Radial Symmetry
 3.3 Contour Evidence Extraction
  3.3.1 Edge-to-Marker Association
  3.3.2 Concave-point Based Contour Segmentation
 3.4 Contour Estimation
  3.4.1 Ellipse Fitting
  3.4.2 B-spline
  3.4.3 Active Contour

4 Experiments
 4.1 Data
 4.2 Performance Measures
 4.3 Sub-method Evaluation
  4.3.1 Seed Point Extraction
  4.3.2 Contour Evidence Extraction
  4.3.3 Contour Estimation
 4.4 Proposed Method
  4.4.1 Parameter Selection
  4.4.2 Overall Performance and Comparison
  4.4.3 Size Distribution Comparison

5 Discussion
 5.1 Methods and Results
 5.2 Future Work

6 Conclusions

References

Appendices

Appendix 1: Segmentation Results and Size Distribution Comparison (real dataset)


Abbreviations

AD Average Distance
ADD Average Distance Deviation
ARF Adaptive Ring Filter
ASM Active Shape Model
BD Bhattacharyya Distance
CBCS Concave-point Based Contour Segmentation
CF Coin Filter
CI Convergence Index
DT Distance Transform
ECM Expectation Conditional Maximization
EM Expectation Maximization
FN False Negative
FP False Positive
FRS Fast Radial Symmetry
GoF Goodness of Fit
GT Ground Truth
IF Iris Filter
JSC Jaccard Similarity Coefficient
LCF Local Convergence Filter
LoG Laplacian of Gaussian
MSER Maximally Stable Extremal Regions
NPS Nano Particle Segmentation
PCA Principal Component Analysis
PDE Partial Differential Equation
PPV Positive Predictive Value
SBF Sliding Band Filter
SCE Segment Contour Evidences
SSD Sum of Squared Errors
TP True Positive
TPR True Positive Rate
UECS Ultimate Erosion for Convex Sets


1 Introduction

1.1 Background

In the field of computer vision, image segmentation has been the subject of intensive research for many years [1, 2]. Formally, image segmentation is the process of simplifying a digital image by decomposing it into parts that are meaningful for analysis. Put simply and briefly, segmentation is the task of partitioning an image into multiple unique regions based on specified criteria, including but not limited to texture, color, and intensity.

A broad range of applications in the field of computer vision, such as medical imaging and object detection, relies strictly on image segmentation. In medical imaging, segmentation is used to provide more in-depth information that supports medical experts in tedious tasks such as locating abnormalities, surgery planning, and cell counting [3]. In object detection, such as face and pedestrian detection, segmentation is a key tool for identifying the regions of interest. Given the vast range of segmentation applications in computer vision, it is no surprise that, despite years of active research, image segmentation techniques still constitute an active research area [4].

Segmentation of overlapping objects is a hot topic in the field of computer vision, since traditional segmentation methods are not capable of producing accurate and complete results in this setting. The complexity arises from the fact that objects with occluded or overlapped parts are not informative enough to be used in the segmentation process. For example, in terms of contour estimation, the visible segments of object contours do not usually contain enough geometrical data points to yield satisfactory results. Several attempts have been made to address the complexity observed in the segmentation of overlapping objects [5, 6, 7, 8, 9]. In general, the existing methods split the visual scene of the image into meaningful parts that are then used for separating and segmenting the overlapping objects.

This Master's thesis project focused on the segmentation of images in which the objects to be detected are not completely visible but partially occluded. Given an image of multiple overlapping objects, the segmentation framework should be able to detect the objects and complete their missing parts.


1.2 Objectives and Restrictions

The main goal of this thesis was to develop a method for the segmentation of overlapping convex-shape objects that is capable of distinguishing individual particles and estimating their contours from particle bundles. First, the state-of-the-art techniques for the segmentation of overlapping objects were explored and studied. Second, the existing methods were applied to synthetic and real datasets, and the obtained results were evaluated. Third, considering the existing segmentation methods and the evaluations performed, a method for the segmentation of overlapping convex-shape objects comprising seed point extraction, contour evidence extraction, and contour estimation was proposed. Finally, the proposed method was evaluated on synthetic and real datasets.

To be specific, only convex-shape objects are subject to this study (see Figure 1), in which the segmentation of individual objects from a group of overlapping objects, followed by the inference of the occluded boundaries, is addressed. In particular, the objects to be segmented are clearly distinguishable from the background and their shape is known. Therefore, background segmentation and shape classification are not considered.

Figure 1. Examples of real images.


1.3 Structure of the Thesis

This thesis is organized in six chapters as follows. Chapter 2 reviews the traditional segmentation methods, segmentation methods with shape priors, and state-of-the-art methods for overlapping object segmentation. Chapter 3 describes the general framework adopted for the segmentation of overlapping objects and the existing methods for each framework step. In Chapter 4, each module of the framework is explored and evaluated, and based on the obtained results, a method for the segmentation of overlapping convex objects is proposed. The proposed method is applied to synthetic and real datasets and the results are presented alongside two existing methods. In Chapter 5, a discussion on future research directions is presented. Finally, in Chapter 6, the conclusions are drawn.


2 Machine Vision and Image Segmentation

This chapter provides introductory information about different segmentation techniques. In Section 2.1, popular segmentation methods are reviewed. Section 2.2 provides basic information about the segmentation of objects with known shapes. Section 2.3 investigates several existing methods for overlapping object segmentation.

2.1 Segmentation Methods

Segmentation is one of the fundamental steps in machine vision and image analysis. The main objective of image segmentation is to divide an image into its constituent parts and to extract the important details of the image. Image segmentation partitions an image into several regions such that the pixels within each region are similar with respect to homogeneous characteristics such as gray level, color, intensity, or texture. There are several approaches available for image segmentation. In the following sections, popular segmentation methods are briefly reviewed.

2.1.1 Threshold-Based Segmentation

Threshold-based segmentation is considered to be the simplest technique for extracting a foreground region from the image background. The main idea is based on intensity differences: the areas of an image whose pixel values fall within a certain range correspond to a particular segment. The characteristics of the regions of interest are set by a predefined threshold value.

Threshold-based segmentation methods depend solely on pixel values and do not consider the spatial properties of the image. Global and local thresholding are the two most commonly used approaches for threshold-based segmentation. In global thresholding, all pixels in the image are segmented with the same threshold value, while in local thresholding the threshold value varies locally. The key issue in threshold-based segmentation is choosing the threshold value. Otsu's [10] and Kittler's [11] methods are the two most commonly used methods for choosing the threshold value.

Otsu's method aims to find the optimal value for the global threshold by minimizing the within-class variance of the resulting foreground and background classes. Kittler's method presents segmentation as a two-class classification problem based on the gray-level histogram of the image, and finds a threshold that minimizes the number of misclassified pixels using Bayesian classification rules. An example of image segmentation by the thresholding method is shown in Figure 2. A comprehensive survey can be found in [12].
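To make the thresholding step concrete, the following sketch applies Otsu's method with scikit-image. The file name is a placeholder, not part of the data used in this thesis.

```python
import numpy as np
from skimage import io, filters

# Load a grayscale image (the path is a placeholder).
image = io.imread("cells.png", as_gray=True)

# Otsu's method selects the global threshold that minimizes the
# within-class intensity variance of the foreground/background split.
t = filters.threshold_otsu(image)
binary = image > t  # boolean foreground mask

print(f"Otsu threshold: {t:.3f}, foreground fraction: {binary.mean():.2%}")
```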

Figure 2. An example of threshold-based segmentation: a) Original image; b) Segmentation result. [13]

2.1.2 Edge-Based Segmentation

Edge-based detection techniques are a common approach to image segmentation in which the edges obtained from the image are exploited to recognize the region boundaries. The main motivation is that every single object in an image is surrounded by a closed border that can be detected by means of the image intensity values. Edge points are the discontinuities in image intensity, which are typically estimated using the first- and second-order image derivatives. The most commonly used detectors are the Sobel [14], Canny [15], Prewitt [16], and Roberts [17] edge detectors.

Edge-based segmentation basically applies an edge filter to the image, by which each pixel is classified as either edge or non-edge. Complementary processing, e.g. thresholding, is required to combine the edges into edge chains that form better boundaries and divide the image into different segments. Figure 3 shows an example of edge-based segmentation with thresholding.
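The pipeline described above can be sketched with OpenCV as below. Canny internally performs smoothing, non-maximum suppression, and hysteresis thresholding, so it stands in for the "edge filter followed by thresholding" combination; the file name and thresholds are illustrative.

```python
import cv2
import numpy as np

# Load an 8-bit grayscale image (the path is a placeholder).
img = cv2.imread("objects.png", cv2.IMREAD_GRAYSCALE)

# First-order derivatives (Sobel) give the gradient magnitude,
# the raw material for edge-based segmentation.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.hypot(gx, gy)

# Canny classifies each pixel as edge / non-edge via hysteresis
# thresholding, producing thin chains that can be linked into boundaries.
edges = cv2.Canny(img, 50, 150)
```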


Figure 3. An example of edge-based segmentation. [18]

2.1.3 Region-Based Segmentation

Region-based segmentation algorithms are based on the idea that the pixels in the neighborhood of a particular region share similar intensity values. In general, region-based segmentation is a pixel-wise algorithm in which each pixel is compared to its neighboring pixels, and if a predefined homogeneity condition is satisfied, the pixel and its neighbors are classified as belonging to a particular object. Region-based techniques can be divided into two categories: region growing [19] and region splitting-merging [20] algorithms.

Region growing segmentation starts with a set of seed points as the initial region hypothesis and then proceeds to grow the regions by adding adjacent pixels to the regions based on a predefined criterion. This procedure continues until there are no more adjacent pixels that satisfy the criterion. The seed point initialization and the similarity criterion are the key elements affecting the performance of region growing algorithms. A minimal sketch of the growing loop is shown below.
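The sketch uses one possible homogeneity condition: a pixel joins the region if it is 4-connected to it and its intensity is within `tol` of the seed intensity. The tolerance-to-seed criterion is an illustrative assumption; other criteria, e.g. comparison to the running region mean, are equally valid.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed` (row, col): a pixel joins the region if it
    is 4-connected to it and within `tol` of the seed intensity."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not region[rr, cc] \
                    and abs(float(image[rr, cc]) - seed_val) <= tol:
                region[rr, cc] = True
                queue.append((rr, cc))
    return region
```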

The region splitting-merging algorithm is an iterative implementation of region-based segmentation that follows a top-down approach. The splitting-merging algorithm compares adjacent regions and merges them if they are homogeneous (see Figure 4). Specifically, it starts with the whole image as a single region and splits it into four disjoint quadrants. It iteratively continues this process for each resulting region until each region satisfies a certain similarity constraint, e.g. grey-level homogeneity. The splitting-merging algorithm produces adjacent regions that share very similar properties. The most serious disadvantage of this method is that the algorithm is sensitive to image translation [21].

2.1.4 Watershed Transform

The watershed transform [22] can be seen as region-based segmentation applied to a gradient image. It is basically a morphological algorithm in which image edges are processed in order to extract regions of similar and homogeneous pattern or color. The watershed algorithm defines the regions by pixels with local intensity minima, and the region boundaries, the watershed lines, by pixels that have the highest gradient magnitude intensities. In the watershed transformation, the gradient image is considered as a topographical surface in which the edges resemble the region boundaries (watershed lines) and the areas with local intensity minima resemble the regions (catchment basins).

The watershed transform is simple, computationally efficient, and reliable, and it can reach acceptable results even on low-contrast images. However, there are limits to the extent to which the watershed transform can be used for segmentation. As the watershed transform explores the image gradient, it is highly sensitive to local minima, since an object is allocated at each local minimum. Watershed shows low performance when applied to images with thin structures [23]. In [24], three approaches, namely filtering and preprocessing, marker watershed, and hierarchical watershed, are proposed to solve the problem of over-segmentation. Figure 5 shows an example of watershed segmentation where region markers were adopted to eliminate the over-segmentation.

Figure 4. Examples of region-based segmentation: a) Original image; b) Segmentation result. [25]


Figure 5. An example of watershed segmentation: a) Original image; b) Gradient image; c) Watershed segmentation without using region markers (oversegmentation); d) Watershed segmentation using region markers. [26]
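The marker-based variant shown in Figure 5 can be sketched as follows with SciPy and scikit-image, assuming `binary` is a foreground mask from an earlier thresholding step; the local maxima of the distance map act as the region markers that suppress over-segmentation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# `binary` is a boolean foreground mask (e.g. from Otsu thresholding).
distance = ndi.distance_transform_edt(binary)

# Local maxima of the distance map serve as region markers.
coords = peak_local_max(distance, min_distance=10, labels=binary)
markers = np.zeros_like(binary, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# Flood the inverted distance map starting from the markers.
labels = watershed(-distance, markers, mask=binary)
```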

2.1.5 Active Contour

Active contour, or snake, is one of the most widely used methods in image processing [27]. Active contour aims to divide an image into multiple sub-regions with continuous boundaries. Given an initial boundary shape in the form of a closed contour, active contour models seek the region of interest by modifying the shape boundaries through shrinking and expansion operations subject to a particular set of constraints, a so-called contour evolution. The underlying constraints that control the contour evolution process are modeled by the minimization of certain energy functionals, as in region-based segmentation methods. The energy functionals in active contour models encode an energy criterion that, e.g. through the Euler-Lagrange equations, leads to a parabolic Partial Differential Equation (PDE) whose solution drives the evolution of the active contour.

The original active contour model introduced by Kass et al. [28] is a boundary-based method that is governed by two energy components: internal and external energy functionals. The internal energy functional controls the smoothness of the curves and reduces tension and stiffness. The external energy functional forces the curve to move towards the boundaries of the objects in the image. Region-based active contour models have also been proposed, in which the energy functionals are based on regional statistical features, e.g. the mean intensity value, rather than boundary-driven forces. The Mumford-Shah model [29] is the commonly used approach that divides an image into a set of disjoint homogeneous regions. Figure 6 shows an example of active contour segmentation.

Figure 6. Active contour segmentation: a) Original image with initial contour; b) Segmentation result. [30]
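For a region-based active contour in the Mumford-Shah spirit, scikit-image provides a morphological Chan-Vese implementation; the sketch below assumes a grayscale input image and an illustrative iteration count.

```python
from skimage import io, img_as_float
from skimage.segmentation import morphological_chan_vese

# Load a grayscale image (the path is a placeholder).
image = img_as_float(io.imread("cell.png", as_gray=True))

# The level set evolves so that the mean intensities inside and outside
# the contour best explain the image; the second argument is the number
# of evolution iterations (passed positionally for API-version safety).
segmentation = morphological_chan_vese(image, 60, init_level_set="checkerboard")
```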

2.1.6 Graph Cut

In recent years, there has been increasing interest in graph cut segmentation methods. Graph cut methods are energy minimization segmentation models in which the image is interpreted as a discrete graph: the nodes represent the pixels, and the edges connect pixels based on similarity measurements. Figure 7 shows an example of graph cut segmentation on an image, where the background pixels are blue, the foreground pixels red, and the unlabeled pixels green.

Several graph cut methods with different energy functions have been proposed. Graph cut segmentation methods can be categorized into speed-up-based, interactive-based, and shape prior-based graph cut methods [31]. In general, the energy function to be minimized is constructed by summing up the edge weights. The weight of an edge in the image graph model is basically a measure of the similarity between the pixels, and its exact form depends on the design of the algorithm. Graph cut is similar to active contour, but it is non-iterative and can reach the globally optimal result for certain classes of energy functions. A more advanced discussion is available in [31].

Figure 7. Graph cut segmentation: a) Original image; b) Graph cut segmentation result. [32]

2.2 Segmentation with Shape Prior

The segmentation methods presented in Section 2.1 usually rely on image-derived features and do not employ any external information about the objects in the image. Such schemes, when applied to the segmentation of images with noisy objects, cluttered backgrounds, or occluded parts, may fail to yield acceptable results. In a wide variety of image segmentation applications [33, 34, 35, 36], prior knowledge about the shape of interest can significantly improve the segmentation results. Considering information about the shape of the objects in the image, even in the form of a very basic shape template, can help in image segmentation problems. These methods usually require defining the shape model through a statistical approach such as Principal Component Analysis (PCA) [37]. Several methods have been proposed to formulate image segmentation with shape prior knowledge. Such methods can be classified into three categories [38]: template matching, landmark-based, and variational methods.

Template matching [39, 40, 41] is the basic form of image segmentation with a shape prior. In template matching, the shape prior is defined through a parametric geometric model. The Hough transform applied to contour-based image segmentation also falls under this class, where object shapes are described with analytic equations [42]. Due to the complexity of high-dimensional parametric shape models, template matching segmentation methods experience considerable difficulty in reaching an optimal solution when applied to the segmentation of complex shapes modeled by a large number of parameters. In addition, template matching is computationally expensive because multiple templates are required to address the segmentation of objects with highly variable shapes.

In landmark-based segmentation methods, the prior information regarding the shape of interest is represented through certain landmark features. Landmark-based segmentation is basically supervised shape modeling in which a set of aligned training shape samples is used to build the reference shape model. The two most popular examples are the Active Shape Model (ASM) [43] and the Active Appearance Model (AAM) [44], where the reference landmark shape model is obtained through PCA. The landmark-based segmentation methods are simple and efficient but, like any other supervised method relying on input training data, suffer from being semi-automatic and dependent on manually provided prior shape information.

Variational methods solve the segmentation problem through energy function minimization. There are several variational approaches that use level set contour models to perform segmentation on objects with complex boundaries that are not easily formulated through analytical equations [33, 45, 46]. Within the level set-based variational methods, the shape prior model is essentially a level set representation that introduces the reference model into the energy functional controlling the shape constraint. In [45], a level set based variational framework was proposed that uses an energy function to govern the shape constraint; the shape information is defined by a probability distribution encapsulating the variation of a set of training data. In [47], the shape knowledge is implemented through a two-stage procedure: the segmentation results are first obtained by a data-driven energy term and then improved by exploiting a level set shape prior model constructed by PCA. The same work presents a variational framework with a shape prior in which the similarity measure between the reference and the evolving shape is determined by the Euclidean distance.

2.3 Segmentation of Overlapping Objects

In image processing, the segmentation of overlapping objects is considered a challenging task that addresses the segmentation of multiple objects with only partial views. The presence of multiple occluded objects is the main difficulty in segmenting overlapping objects. Several methods have been proposed to resolve the complexity observed in the segmentation of overlapping objects. In the following, these methods are briefly reviewed.

In [9], automated morphology analysis coupled with a statistical model for contour inference is applied in order to separate partially overlapping nano particles; this method is hereafter referred to as Nano Particle Segmentation (NPS). NPS is based on a two-stage processing model: 1) image segmentation and 2) contour inference along with shape classification. In the first stage, a modified ultimate erosion model specific to convex-shape objects, namely Ultimate Erosion for Convex Sets (UECS), followed by an edge-to-marker association method, is used for separating individual particles from an agglomerate of overlapping nano-objects. In the second stage, the proposed model solves the problem of contour inference and shape classification simultaneously by a Gaussian mixture model on B-splines, where the unknown parameters of the model are estimated using the Expectation Conditional Maximization (ECM) algorithm.

In [8], an approach for the segmentation of overlapping cell nuclei in digital histopathology images is presented. This method is a composition of foreground extraction, seed point extraction, and seed point watershed segmentation. The method follows a four-component processing pipeline. First, global-local thresholding is applied to extract the foreground regions from the background, after which seed point detection proceeds. Seed point detection is performed by combining morphological filtering and intensity-based region growing. The obtained seed points are then introduced to the seed point watershed algorithm, by which the clustered nuclei are segmented. Finally, the method outputs the final result through post-processing in which the off-spec cell nuclei are eliminated.

[48] introduces a semi-automatic approach for the detection and segmentation of cell nuclei. The approach follows a two-stage procedure in which the cell nuclei are first determined by an automatic segmentation model and then improved via expert interaction. The automatic segmentation of the nuclei consists of three specific steps: 1) image binarization, 2) seed detection, and 3) segmentation. Through the process of binarization, performed by the graph-cut binarization algorithm, the foreground is separated from the image background. The seed points in an image are detected using a compound model of multi-scale Laplacian-of-Gaussian filtering along with distance-map-based scale-adaptive selection. Upon detection of the seed points, and in order to achieve a better delineation of the true edges between the cell nuclei, the segmentation results are obtained by a second graph-cut-based algorithm incorporating the method of alpha expansions and graph coloring to reduce the computational complexity.

A method for the estimation of the bubble size distribution is introduced in [7], hereafter referred to as Concave-point Based Contour Segmentation (CBCS). The method solves the problem of overlapping bubbles in three steps: 1) image pre-processing and contour extraction, 2) contour segmentation, and 3) contour segment grouping. In the pre-processing step, Otsu's method [10] is used to convert the image into a binary image, and then the boundaries are extracted and smoothed. After extracting the boundaries, polygonal approximation is used for detecting the dominant points. The obtained dominant points support the contour segmentation based on the assumption that the concave points in the sequence of dominant points are the points connecting the contour segments. In the third step, for the purpose of grouping the contour segments, ellipse fitting combined with the average distance deviation criterion is used. Additionally, two constraints are applied to prevent misgrouping. Although the proposed framework is an efficient segmentation method for overlapping objects, it does not work with object shapes that deviate largely from a perfect ellipse.

The authors in [49] present a generative supervised learning approach for segmenting overlapping objects. In this method, it is assumed that the input image, which may contain multiple instances of a certain object class, can be processed by a region-based detection method. For the detection, a set of candidate regions is detected using the Maximally Stable Extremal Regions (MSER) algorithm. A set of classifiers is used to evaluate the similarity of those regions to each of the classes indicating the number of object instances in the regions. By this means, a value is assigned to each region that implies the number of objects in that region. The learning is done within a structured output SVM framework [50], and dynamic programming selects the subset of regions that have the highest similarity to the cell class.


3 Framework for Overlapping Convex Object Segmentation

There is no universal methodology that can be used for the segmentation of overlapping convex objects. However, the majority of works in this domain fall under a three-step framework in which three particular sub-tasks are addressed, either in order or in parallel. In this framework, through the detection of each individual object and the relevant visible contour parts, the overlapping objects are segmented using the deduced complete contours. This chapter presents the details of this scheme and the steps involved.

3.1 General Framework

A typical framework for segmenting the overlapping convex objects consists of three specific steps (see Figure 8): 1) seed region/point extraction, 2) contour evidence extraction, and 3) contour estimation [9, 7, 8].

Figure 8. The general framework.


The first step is to extract the seed points or regions corresponding to each overlapping object. The seed points usually refer to the geometric central points, or a point in the interior of the object, which conceptually provide a basic cue to separate the overlapping objects. The goal is to recognize the presence and the number of the individual objects in an image, as identified by the seed points. The seed points detected within the segmentation of occluded objects are considered prior information by which the performance of the final segmentation results can be improved.

The second step is determining the contour evidence. The contour evidence consists of the visible parts of the object boundaries that can be used to infer the occluded parts of the overlapped objects. Contour evidence extraction aims to group the edge points that belong to each object using the seed points or seed regions.

The last step in segmentation of overlapping objects is dedicated to contour estimation, where, by means of the visual information produced from the previous two steps, the missing parts of the object contours are estimated.

Several approaches have been proposed to address each step of the framework. In Sections 3.2, 3.3, and 3.4, common methods used for seed point extraction, contour evidence extraction, and contour estimation are reviewed.

3.2 Seed Point Extraction

For seed point extraction, several methods can be applied to cover an adequate range of detectable objects. This section introduces the methods that were applied or implemented for the task of seed point extraction.

3.2.1 Local Convergence Filters

When the general convexity of cell-shaped objects can be assumed, the Local Convergence Filter (LCF) [51] is one of the commonly used methods for nuclei detection and shape estimation. LCF is based on the direction of the image gradient vectors and not on the corresponding magnitudes.



Figure 9. Four different LCF filters represented by their support regions, highlighted in gray. The support regions are based on 8 radial directions and the vector r points to a sample point in the support region: (a) Coin filter; (b) IRIS filter; (c) Adaptive ring filter; (d) Sliding band filter. [51]

This makes LCF robust to illumination variations. LCF examines the degree of convergence of the gradient vectors within a predefined support region toward a pixel of interest. The local convergence of the image gradient vector at a given pixel is formulated as the cosine of the angle between the gradient orientation and the line connecting that pixel to the center of the support region.

A broad range of LCF methods can be found in the computer vision literature. They share the same definition of the convergence degree but differ in their approach to defining the support regions. Figure 9 shows four different LCF methods with their corresponding support regions: the Coin Filter (CF) [52], the IRIS Filter (IF) [52], the Adaptive Ring Filter (ARF) [53], and the Sliding Band Filter (SBF) [54].

CF, or the Convergence Index Filter, is the simplest LCF method; its support region is defined by a circular shape of variable radius. The IRIS filter is not limited to a circular shape, and the size and the shape of the support region can change around the pixel of interest through N directions. Since the support region of IF varies, it can handle more complex shapes than CF. In the Adaptive Ring Filter, the support region is defined by a ring-shaped convergence region, inspired by the idea that the convergence of convex-shaped objects is rooted in their edges. The Sliding Band Filter integrates the ideas of IF and ARF by defining a support region formed by a convergence band of fixed width whose radius can vary in each direction. More information about LCF methods for cell nuclei detection and shape estimation can be found in [5] and [51]. In LCF filters, the support region of a pixel of interest is defined by dividing its neighborhood into N equally spaced radial directions.


Figure 10. Seed point extraction by the Sliding Band Filter: (a) Original grayscale image; (b) Binary image; (c) Binary image after morphological opening; (d) Support region by SBF with radial size [15-25]; (e) Seed points identified by SBF with radial size [15-25]; (f) Seed points identified by SBF with radial size [5-25].

Each point (x_o, y_o) on the support region is given in Cartesian coordinates, resolved by the coordinates of the pixel of interest (x, y), the orientation \(\theta_i = 2\pi(i-1)/N\) of the i-th radial direction with respect to the x-axis, and the sample position m along the i-th direction, as

$$x_o = x + m\sin(\theta_i), \qquad y_o = y + m\cos(\theta_i). \tag{1}$$

Accordingly, in the 2-dimensional discrete space of the image I, the convergence degree of the sample point (x_o, y_o) within the defined support region is determined by the cosine of the angle between the gradient orientation α and the radial direction θ_i. The convergence index (CI) of an individual sample point is given by

$$\mathrm{CI}(x, y, i, m) = \cos\bigl(\theta_i - \alpha(x, y, \theta_i, m)\bigr), \tag{2}$$

$$\alpha(x, y, \theta_i, m) = \tan^{-1}\left(\frac{\partial I(x_o, y_o)/\partial x}{\partial I(x_o, y_o)/\partial y}\right), \tag{3}$$

where \(\partial I(x_o, y_o)/\partial x\) and \(\partial I(x_o, y_o)/\partial y\) are the row-wise and column-wise image derivatives.

The SBF estimates the overall convergence by combining the individual convergence degrees of the sample points such that the convergence at the pixel of interest is maximized along each radial direction. Given the allowable radius range of convergence and the ring width, the overall convergence degree at the point of interest (x, y) is given by

$$\mathrm{SBF}(x, y) = \frac{1}{N} \sum_{i=0}^{N-1} \max_{R_{min} \le r \le R_{max}} \left[ \sum_{m=r-d/2}^{r+d/2} \mathrm{CI}(x, y, i, m) \right], \tag{4}$$

where R_min is the radius of minimum convergence, R_max is the radius of maximum convergence, d is the ring width, and N is the number of radial directions for which the convergence is evaluated.

Similarly, the shape radius per radial direction is given by

$$R_{shape}(x, y, i) = \operatorname*{argmax}_{R_{min} \le r \le R_{max}} \left[ \frac{1}{r} \sum_{m=r-d/2}^{r+d/2} \mathrm{CI}(x, y, i, m) \right]. \tag{5}$$

Figure 10 demonstrates the SBF filter employed for seed point extraction. The original image, a slice from a real dataset, is converted to a binary image through global thresholding. To eliminate redundant noisy and deformed contours, the image boundaries are smoothed by means of a simple morphological opening. The final results of SBF are the points with maximal overall convergence, marked with red circles. The radius range of the support regions is the key parameter of the SBF seed point extraction method: the more realistic the radius range, the more accurate the results. The SBF method is summarized in Algorithm 1.
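A direct, unoptimized numpy sketch of Equations 1-4 follows; it evaluates the convergence index along each radial direction and keeps the best band of width d per direction. The parameter values are illustrative, and no attempt is made at the speed-ups a practical implementation would need.

```python
import numpy as np
from scipy import ndimage as ndi

def sliding_band_filter(image, n_dirs=8, r_min=15, r_max=25, d=4, sigma=2.0):
    """SBF sketch (Equation 4): per pixel, average over N radial directions
    the maximal band sum of convergence indices CI(x, y, i, m)."""
    gy, gx = np.gradient(ndi.gaussian_filter(image.astype(float), sigma))
    # gradient orientation measured like theta in Equation 1 (from the y-axis)
    alpha = np.arctan2(gx, gy)
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w))
    for i in range(n_dirs):
        theta = 2.0 * np.pi * i / n_dirs
        # CI profile along this direction for all candidate radii (Eqs. 1-2)
        band = []
        for m in range(r_min - d // 2, r_max + d // 2 + 1):
            xo = np.clip(np.round(xs + m * np.sin(theta)).astype(int), 0, w - 1)
            yo = np.clip(np.round(ys + m * np.cos(theta)).astype(int), 0, h - 1)
            band.append(np.cos(theta - alpha[yo, xo]))
        band = np.stack(band)                     # shape: (radii, h, w)
        # slide a band of width d over the radii and keep the maximal sum
        best = band[0:d + 1].sum(axis=0)
        for r0 in range(1, band.shape[0] - d):
            best = np.maximum(best, band[r0:r0 + d + 1].sum(axis=0))
        out += best
    return out / n_dirs  # seed points are the local maxima of this map
```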

3.2.2 Ultimate Erosion for Convex Sets

Ultimate Erosion for Convex Sets (UECS) [9] is an iterative morphological algorithm that extracts the seed regions from overlapping objects. UECS is an extension of the Ultimate Erosion (UE) method with a modified stopping criterion [9]. The early stopping of the erosion process in UECS is the key feature that overcomes the problem of over-segmentation, which is the most common problem with ordinary UE.


In mathematical morphology, given a binary silhouette I consisting of a set of convex objects, \(I = \cup_{i=1}^{n} C_i\), UECS applies a recursive erosion operation to extract a subset of every C_i, called a marker, such that the markers are pairwise disconnected. That is, at the t-th iteration of the erosion process, each connected component A_i of the image silhouette I^(t−1) undergoes Minkowski subtraction [55] with respect to a closed disc structuring element B(0,1) of radius 1:

$$R_i = A_i^{(t)} \ominus B(0, 1), \tag{6}$$

where \(\ominus\) stands for the Minkowski subtraction defined by

$$A_i^{(t)} \ominus B(0, 1) = \bigcap_{\beta \in B} \bigl(A_i^{(t)} + \beta\bigr), \tag{7}$$

and R_i is the result of the erosion of each connected component, by which the image silhouette I^(t) evolves:

$$I^{(t+1)} = \bigcup_{i} R_i. \tag{8}$$

The criterion to stop the erosion process is determined by the convexity of the connected component A_i obtained from the erosion process. It is shown in [56] that the most robust convexity (concavity) measure with regard to the coarseness of a digitized image is equivalent to the maximum distance from a chord of the region boundary to the boundary itself. For the image silhouette I and its convex hull and concavity regions, denoted as O = conv(I) and V = O \ I respectively, the concavity measure c(V) is defined as

$$c(V) = \max_{j=1,\ldots,m} \frac{d(V_j \cap \partial O,\; V_j \cap \partial I)}{l(V_j \cap \partial O)}, \tag{9}$$

where

$$d(X, Y) = \max_{x \in X} \min_{y \in Y} \lVert x - y \rVert \tag{10}$$

and

$$l(L) = \text{length of line segment } L. \tag{11}$$


Figure 11 demonstrates UECS employed for seed point extraction. The original image, a slice from a real dataset, is converted to a binary image through global thresholding. To eliminate redundant noisy and deformed contours, the image boundaries are smoothed by means of a simple morphological opening. The final result of UECS is a set of pairwise disconnected seed regions obtained from the erosion process with the concavity measure as the stopping criterion. The key parameter of the UECS seed point extraction method is the threshold of the concavity measure. The UECS method is summarized in Algorithm 2.

Algorithm 1: Sliding Band Filter (SBF) [51]

Input: a grayscale image I
Output: seed interest points S, the shape radii with maximal convergence R_shape
Parameters: number of radial directions N, radius of minimum convergence R_min, radius of maximum convergence R_max, ring width d

1. Construct the Gaussian smoothed derivatives for each pixel of the image.
2. For each pixel in the image space (excluding the border regions), estimate the overall convergence by summing up all the individual convergences through Equation 4.
3. Smooth the object boundaries via morphological opening.
4. Collect the pixels with maximal SBF response as the seed interest points S.
5. Determine the corresponding shape radius for each seed interest point through Equation 5.

Algorithm 2: Ultimate Erosion for Convex Sets (UECS) [9]

Input: binary silhouette image I
Output: object marker set M, seed interest points S (centers of mass of the markers M)
Parameters: concavity measure threshold ρ

1. Initialize I^(0) by I.
2. For each connected component A_i^(t−1) of the image silhouette I^(t−1), compute the Minkowski subtraction of A_i^(t−1) with respect to a closed ball structuring element if the corresponding concavity measure (Equation 9) is not less than the threshold value ρ; otherwise keep A_i^(t−1) unchanged as a marker.
3. Update the image I^(t) with the obtained R_i.
4. Stop the erosion if I^(t) = I^(t−1).


Figure 11. Seed point extraction by Ultimate Erosion for Convex Sets: (a) Original grayscale image; (b) Binary image; (c) Binary image after morphological opening; (d) Seed points/regions identified by UECS with concavity threshold 0.1; (e) Seed points/regions identified by UECS with concavity threshold 0.2; and (f) Seed points/regions identified by UECS with concavity threshold 0.3.
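The UECS loop can be sketched as below, reading the stopping rule as: a component is frozen as a marker once its concavity measure drops below the threshold, and is otherwise eroded further. The concavity function here is a simplified stand-in (deepest convex-hull pocket depth in pixels) rather than the exact chord-ratio measure of Equation 9, so the threshold is on a pixel scale.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import convex_hull_image

def concavity(component):
    """Simplified concavity proxy for Equation 9: the maximal depth of the
    pockets V = conv(I) \\ I, measured as distance to the object in pixels."""
    hull = convex_hull_image(component)
    pockets = hull & ~component
    if not pockets.any():
        return 0.0
    depth = ndi.distance_transform_edt(~component)  # distance to the object
    return float(depth[pockets].max())

def uecs(silhouette, rho=3.0):
    """UECS sketch: erode each connected component of a boolean silhouette
    until its concavity falls below `rho`, then freeze it as a marker."""
    markers = []
    current = silhouette.copy()
    while current.any():
        labeled, n = ndi.label(current)
        next_img = np.zeros_like(current)
        for k in range(1, n + 1):
            comp = labeled == k
            if concavity(comp) < rho:
                markers.append(comp)  # convex enough: stop eroding
            else:
                # cross-shaped structuring element stands in for the unit disc
                next_img |= ndi.binary_erosion(comp)
        current = next_img
    return markers
```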

3.2.3 Distance Transform

The Distance Transform (DT) [57] is a simple operator that is often applied in image segmentation. It is calculated such that the value of each pixel is replaced by its distance to the nearest background (zero-valued) pixel. The medial axis of an object in the image can be seen as the ridges of the DT, from which the distance from the object boundary to the medial axis is estimated. In this way, the local maxima of the DT may correspond to the seed regions of the objects in the image, by which the number of objects can be counted.

Provided that the cores of the objects are separable, i.e. there is only a single local maximum in each object region, the DT combined with a global thresholding scheme can be used to separate and count the overlapping objects. In particular, in the watershed transformation, the local maxima regions of the DT serve as watershed markers, which eventually yield the segmentation of the objects. The main steps to extract the seed regions by means of the DT are presented in Algorithm 3.

Algorithm 3: Seed region extraction with the Distance Transform (DT)

Input: binary silhouette image I
Output: object marker set M, seed interest points S (centers of mass of the markers M)
Parameters: distance transform threshold ρ

1. Perform morphological opening/erosion to smooth the image boundaries.
2. Compute the distance transform image D and normalize it to [0, 1].
3. Obtain the seed regions M by thresholding the normalized D with a threshold value ρ.

Figure 12 demonstrates DT as employed for seed region extraction. The original grayscale image is converted to a binary image through global thresholding. To eliminate the redundant noisy and deformed contours, image boundaries are smoothed by means of morphological opening. The DT is computed from the binary image. The seed regions (local maxima regions of DT) are obtained by global thresholding.
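Algorithm 3 translates almost directly into SciPy; the sketch below assumes a boolean silhouette and returns both the labeled seed regions and their centroids. The threshold value is illustrative.

```python
import numpy as np
from scipy import ndimage as ndi

def dt_seed_regions(binary, rho=0.6):
    """Algorithm 3 sketch: seed regions as the thresholded, normalized
    distance transform of a boolean silhouette."""
    # 1. smooth the boundaries with a morphological opening
    smooth = ndi.binary_opening(binary, structure=np.ones((3, 3)))
    # 2. distance to the nearest background pixel, normalized to [0, 1]
    dist = ndi.distance_transform_edt(smooth)
    dist = dist / max(dist.max(), 1e-9)
    # 3. the local-maxima regions that survive the global threshold rho
    markers, n = ndi.label(dist > rho)
    centroids = ndi.center_of_mass(dist, markers, range(1, n + 1))
    return markers, centroids
```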

3.2.4 Fast Radial Symmetry

Fast Radial Symmetry (FRS) [58] is a feature extraction technique that transforms the original image into a new representation highlighting the local radial symmetry of the image gradient. In other words, FRS is a spatial transformation that explores the image gradient to mark the central points of symmetry. Due to the symmetric characteristics of various convex objects, FRS is a potential tool for determining the object seed points. An overview of the method is shown in Algorithm 4.

The main idea behind the FRS transformation is that every pixel in the image space gives a vote for the plausible radial symmetry at some specific distance from that point. Given a distance value m in the range [R_min, R_max], for every pixel (x, y) of the gradient image g, FRS determines the positively-affected pixels p_+ve and the negatively-affected pixels p_−ve, and sequentially constructs the orientation projection image O_m and the magnitude projection image M_m (Equations 12-14):


Figure 12. Seed point extraction by the Distance Transform: (a) Original grayscale image; (b) Binary image; (c) Binary image after morphological opening; (d) Result of the DT; (e) Seed points/regions identified by DT with threshold 0.5; (f) Seed points/regions identified by DT with threshold 0.7.

$$p_{+ve}(x, y) = (x, y) + \mathrm{round}\left(\frac{g(x, y)}{\lVert g(x, y)\rVert}\, m\right), \qquad p_{-ve}(x, y) = (x, y) - \mathrm{round}\left(\frac{g(x, y)}{\lVert g(x, y)\rVert}\, m\right) \tag{12}$$

$$O_m(p_{+ve}(x, y)) = O_m(p_{+ve}(x, y)) + 1, \qquad O_m(p_{-ve}(x, y)) = O_m(p_{-ve}(x, y)) - 1 \tag{13}$$

$$M_m(p_{+ve}(x, y)) = M_m(p_{+ve}(x, y)) + \lVert g(x, y)\rVert, \qquad M_m(p_{-ve}(x, y)) = M_m(p_{-ve}(x, y)) - \lVert g(x, y)\rVert \tag{14}$$

Upon constructing the orientation and magnitude images, the radial symmetry contribution S_m for the radius m ∈ [R_min, R_max] is calculated by the convolution of F_m with a two-dimensional Gaussian A_m:


Algorithm 4: Fast Radial Symmetry (FRS) [58]

Input: a grayscale image I
Output: seed interest points S as the average of the symmetry contributions
Parameters: minimum radial symmetry R_min, maximum radial symmetry R_max, radial strictness coefficient α

1. Construct the Gaussian smoothed image gradient g.
2. For every radius m ∈ [R_min, R_max]:
   (a) Build the orientation O_m and magnitude M_m projection images (Equations 13 and 14).
   (b) Estimate F_m as per Equation 16.
   (c) Estimate the radial symmetry contribution S_m as per Equation 15.
3. Estimate the full transform S as the average of the symmetry contributions over all the radii considered (Equation 18).

$$S_m = F_m * A_m, \tag{15}$$

where F_m is formulated as

$$F_m(x, y) = \frac{M_m(x, y)}{k_m} \left( \frac{|\tilde{O}_m(x, y)|}{k_m} \right)^{\alpha}, \tag{16}$$

where α and k_m are respectively the radial strictness and the scaling factor that normalizes M_m and O_m across different radii. \(\tilde{O}_m\) is defined as

$$\tilde{O}_m(x, y) = \begin{cases} O_m(x, y), & \text{if } O_m(x, y) < k_m, \\ k_m, & \text{otherwise.} \end{cases} \tag{17}$$

The full FRS transform S, by which the symmetric regions of interest are defined, is given by the average of the symmetry contributions over all the radii m ∈ [R_min, R_max] considered:

$$S = \frac{1}{|N|} \sum_{m \in [R_{min}, R_{max}]} S_m. \tag{18}$$


Figure 13 demonstrates FRS applied to seed region extraction. The original grayscale image is converted to a binary image through global thresholding. To eliminate redundant noisy and deformed contours, the image boundaries are smoothed by means of a simple morphological opening. The result of FRS is the average of the symmetry contributions over the whole radius range.

Figure 13. Seed point extraction by Fast Radial Symmetry: (a) Original grayscale image; (b) Binary image; (c) Binary image after morphological opening; (d) Seed points identified by FRS with radii range [5-10]; (e) Seed points identified by FRS with radii range [13-18]; and (f) Seed points identified by FRS with radii range [5-25].
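A compact numpy sketch of Equations 12-18 follows. For brevity it accumulates only the positively-affected pixels (appropriate for bright blobs on a dark background), and it uses a simple per-radius normalizer in place of a fixed k_m constant; both simplifications are assumptions of this sketch, as is the Gaussian width.

```python
import numpy as np
from scipy import ndimage as ndi

def fast_radial_symmetry(image, radii=(5, 10), alpha=2.0, sigma=2.0):
    """FRS sketch (Equations 12-18), positively-affected pixels only."""
    gy, gx = np.gradient(ndi.gaussian_filter(image.astype(float), sigma))
    mag = np.hypot(gx, gy)
    mag_safe = np.where(mag > 0, mag, 1.0)
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    S = np.zeros((h, w))
    for m in range(radii[0], radii[1] + 1):
        O = np.zeros((h, w))
        M = np.zeros((h, w))
        # positively-affected pixels p+ve (Equation 12)
        px = np.clip(np.round(xs + m * gx / mag_safe).astype(int), 0, w - 1)
        py = np.clip(np.round(ys + m * gy / mag_safe).astype(int), 0, h - 1)
        np.add.at(O, (py, px), 1.0)   # orientation projection (Equation 13)
        np.add.at(M, (py, px), mag)   # magnitude projection (Equation 14)
        k_m = max(O.max(), 1.0)       # simple per-radius normalizer (assumed)
        O_t = np.minimum(O, k_m)      # Equation 17
        F = (M / k_m) * (np.abs(O_t) / k_m) ** alpha          # Equation 16
        S += ndi.gaussian_filter(F, sigma=0.25 * m)           # S_m = F_m * A_m
    return S / (radii[1] - radii[0] + 1)                      # Equation 18
```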

3.3 Contour Evidence Extraction

In this section, two existing approaches for contour evidence extraction are reviewed: the marker-based and the curvature-based methods, adapted from [9] and [7] respectively. The contour evidence extraction task may utilize either the object seed points or the contour breaking points: the marker-based contour evidence methods rely directly on the object seed points, while the curvature-based methods manipulate break points to extract the contour evidence.

3.3.1 Edge-to-Marker Association

The edge-to-marker association method [9] relies on the relationship between the visible contours of the overlapped objects and the object seed points. Specifically, by combining distance and divergence metrics, the method assigns the edge pixels obtained from the image gradient to the seed points.

Given the set of object seed points M = {M^(1), M^(2), ..., M^(n)}, either in the form of single seed points or point sets building up seed regions, every edge point e_k in E = {e_1, e_2, ..., e_m} is linked to the detected object seed points based on the relevance metric rel(e_k, M_j^(i)) defined as

$$\mathrm{rel}(e_k, M_j^{(i)}) = \frac{1 - \lambda}{1 + g(e_k, M_j^{(i)})/n} + \lambda\, \frac{\mathrm{div}(e_k, M_j^{(i)}) + 1}{2}, \tag{19}$$

where g(·, ·) and div(·, ·) are respectively the Euclidean distance and divergence functions, combined with the weight λ, and n is the number of iterations.

The distance function g(e_k, M_j^(i)) is defined as the distance from the edge point e_k to the nearest point of the object marker M^(i), under the assumption that all the pixels on the line connecting the edge point and the marker point, l(e_k, M_j^(i)), reside in the foreground region of the image silhouette:

$$g(e_k, M^{(i)}) = \begin{cases} \min_j\bigl(|e_k - M_j^{(i)}|\bigr), & \text{if } \overrightarrow{e_k M_j^{(i)}} \subset M, \\ \infty, & \text{otherwise.} \end{cases} \tag{20}$$

The divergence function div(e_k, M_j^(i)) measures the agreement between the gradient direction at the edge point e_k and the direction of the line connecting e_k to the marker point M_j^(i). That is,

$$\mathrm{div}(e_k, M_j^{(i)}) = \min_j \frac{\vec{g}(e_k, M_j^{(i)}) \cdot \vec{l}(e_k, M_j^{(i)})}{\lVert \vec{g}(e_k, M_j^{(i)}) \rVert\, \lVert \vec{l}(e_k, M_j^{(i)}) \rVert}. \tag{21}$$


Figure 14 demonstrates edge-to-marker association applied to extract contour evidence. The edge-to-marker method is summarized in Algorithm 5.

Figure 14. The whole process of contour evidence extraction performed by edge-to-marker association: (a) Original grayscale image; (b) Converted binary image; (c) Binary image after morphological opening; (d) Gradient image; (e) Edge-to-marker association with divergence 0; (f) Edge-to-marker association with divergence 0.02.

3.3.2 Concave-point Based Contour Segmentation

The concave-point based method of [7], applied to overlapping object segmentation, performs the task of contour evidence extraction by combining a contour splitting method with ellipse fitting. Here, the key feature is to extract the concave points by which the contour points are segmented. The concave-point based algorithm used in this thesis work is from [7] and includes five specific steps; see Algorithm 6.

The first step is to estimate and generate the contours for each connected object.


Algorithm 5: Edge-to-Marker Association [9]

Input: binary silhouette image I, the set of object markers M = {M^(1), M^(2), ..., M^(n)}
Output: indexed contour set C = {C^(1), C^(2), ..., C^(m)}
Parameters: divergence weight factor λ, minimum number of edge points ρ_n, maximum distance from edge points to potential markers ρ_d

1. Compute the set of edge points E = {e_1, e_2, ..., e_k, ...}.
2. For each edge point e_k ∈ E:
   (a) Collect the nearest seed points, those not farther than ρ_d, into the search region M_SR.
   (b) Estimate the relevance metric with respect to the marker points M_j^(i) in the search region M_SR (Equation 19).
   (c) Assign e_k to the marker with the highest degree of relevance and construct C, a contour evidence set referenced by marker indices.
3. Discard any marker whose number of assigned edge points is less than ρ_n.
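The relevance computation of Equations 19-21 for one edge point against one marker's point set can be sketched as follows. Interpreting the vector g⃗ of Equation 21 as the image gradient at the edge point is an assumption of this sketch, and the foreground-visibility test of Equation 20 is omitted for brevity.

```python
import numpy as np

def relevance(edge_pt, marker_pts, grad_at_edge, lam=0.5, n=10.0):
    """Relevance (Equation 19) of one edge point (2-vector) to one marker,
    given as an (k, 2) array of its points."""
    diffs = marker_pts - edge_pt                      # vectors e_k -> M_j
    dists = np.linalg.norm(diffs, axis=1)
    g = dists.min()                                   # Eq. 20, occlusion test omitted
    # divergence: cosine between the gradient at e_k and the e_k -> M_j lines
    lines = diffs / np.maximum(dists[:, None], 1e-9)
    gdir = grad_at_edge / max(np.linalg.norm(grad_at_edge), 1e-9)
    div = (lines @ gdir).min()                        # Equation 21
    return (1 - lam) / (1 + g / n) + lam * (div + 1) / 2.0
```

As in Algorithm 5, each edge point would then be assigned to the marker with the highest relevance, and markers attracting fewer than ρ_n edge points discarded.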

Algorithm 6: Concave-point based contour segmentation [7]

Input: grayscale image I
Output: indexed contour set C = {C^(1), C^(2), ..., C^(m)}

1. Transform the input image into a binary image silhouette I_BW and estimate the smoothed contour point set C = {c_1, c_2, ...}.
2. Determine the contour breaking point set C_break = {c_b,1, c_b,2, ...} by co-linear point suppression.
3. Determine the contour dominant point set C_dom = {c_d,1, c_d,2, ...} by polygonal approximation (Algorithm 7).
4. Segment the obtained C_dom at the concave points C_con = {c_c,1, c_c,2, ...} obtained by Equation 24.
5. Perform contour segment grouping.


Either edge detection or image morphology may be used to build up the contour set for each connected object in the image.

The second step is to determine the breaking points C_break = {c_b,1, c_b,2, ...}. Given the image silhouette I_BW and the sequence of extracted contour points C = {c_1, c_2, ...}, the breaking points are determined by co-linear suppression. To be specific, every contour point c_i(x_i, y_i) is examined for co-linearity by comparing it to the previous and the next successive contour points: c_i(x_i, y_i) is considered a breaking point if it is not located on the line connecting c_{i−1}(x_{i−1}, y_{i−1}) and c_{i+1}(x_{i+1}, y_{i+1}).

The third step is to filter the extracted breaking points for the dominant points C_dom = {c_d,1, c_d,2, ...}. Dominant points are the points with extreme local curvature. Technically, the dominant point set C_dom is a subset of C_break whose redundant members are eliminated through the process of polygonal approximation. The redundant points are detected and discarded by means of break point suppression, a typical approach to polygonal approximation. In this way, the dominant set C_dom, initialized with C_break, is pruned by removing redundant points while preserving a minimum error value. That is, dominant points are iteratively discarded until the minimum error condition is exceeded.

In polygonal approximation, there are several metrics to measure the error of the approximate model with respect to the discrete true contour model. Here, the sum of squared errors (SSD) is adopted:

$$\mathrm{SSD} = \sum_{i=1}^{n} d_i^2, \tag{22}$$

where d_i is the distance from c_{d,i}(x_i, y_i) to the chord connecting c_{d,i−1}(x_{i−1}, y_{i−1}) to c_{d,i+1}(x_{i+1}, y_{i+1}):

$$d_i = \sqrt{\frac{\bigl((x_i - x_{i-1})(y_{i+1} - y_{i-1}) - (y_i - y_{i-1})(x_{i+1} - x_{i-1})\bigr)^2}{(x_{i-1} - x_{i+1})^2 + (y_{i-1} - y_{i+1})^2}}. \tag{23}$$

The steps to find the dominant points are presented in Algorithm 7.

The fourth step of Algorithm 6 involves the extraction of the concave points that delimit the contour segments. The connecting points are resolved through the concave points, as formalized in Equation 24 below.


Algorithm 7: Dominant point estimation [7]

Input: contour break point set C_break
Output: contour dominant point set C_dom
Parameters: distance threshold value ρ_dist

1. Initialize C_dom by C_break.
2. For each c_{d,i}(x_i, y_i) ∈ C_dom, estimate the distance d_i of c_{d,i} to the chord connecting c_{d,i−1}(x_{i−1}, y_{i−1}) to c_{d,i+1}(x_{i+1}, y_{i+1}) (Equation 23).
3. Eliminate any c_{d,i} from C_dom with d_i less than ρ_dist.
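Equation 23 and the pruning loop of Algorithm 7 can be sketched as below; removing the most redundant point first, rather than scanning in index order, is an implementation choice of this sketch.

```python
import numpy as np

def chord_distance(p_prev, p, p_next):
    """Distance from point p to the chord p_prev -> p_next (Equation 23)."""
    (x0, y0), (x1, y1), (x2, y2) = p_prev, p, p_next
    num = abs((x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0))
    den = np.hypot(x0 - x2, y0 - y2)
    return num / den if den > 0 else 0.0

def dominant_points(points, rho_dist=1.5):
    """Algorithm 7 sketch for a closed contour given as a list of (x, y):
    iteratively drop the break point with the smallest chord distance
    until every remaining point exceeds rho_dist."""
    pts = list(points)
    changed = True
    while changed and len(pts) > 3:
        d = [chord_distance(pts[i - 1], pts[i], pts[(i + 1) % len(pts)])
             for i in range(len(pts))]
        i_min = int(np.argmin(d))
        changed = d[i_min] < rho_dist
        if changed:
            pts.pop(i_min)  # remove the most redundant point first
    return pts
```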

A dominant point c_{d,i} ∈ C_dom is considered to be a connecting point if the cross product \(\overrightarrow{c_{d,i-1} c_{d,i}} \times \overrightarrow{c_{d,i} c_{d,i+1}}\) is positive:

$$C_{con} = \{\, c_{d,i} \in C_{dom} : \overrightarrow{c_{d,i-1} c_{d,i}} \times \overrightarrow{c_{d,i} c_{d,i+1}} > 0 \,\}. \tag{24}$$

The final step in concave-point contour segmentation is to group the contour segments. The contour segments S = {s_1, s_2, ..., s_{n_s}} are divided into target and candidate segments and grouped.

The criteria for dividing the contour segments are their length and whether they have already been grouped. The main idea is to distinguish the short and noisy contour segments from those that can potentially constitute an individual object contour. A segment that has not yet been assigned to any segment group is considered a candidate segment s_{c,i} ∈ S_c, and a segment whose length is greater than a specified limit is a target segment s_{t,i} ∈ S_t.

The contour segment grouping is carried out by applying ellipse fitting to the target, candidate, and joint target-candidate contour segments and examining the goodness of fit (GoF) of the fitted ellipse with respect to the actual segments. The main objective is to find the group of contour segments that together form a particular ellipse-shaped object. That is, the candidate segment s_{c,j} is merged into the target segment s_{t,i} provided that the goodness of fit of the ellipse fitted to the joint target-candidate segment improves over the ellipses fitted to the individual target and candidate segments alone:

$$\mathrm{GoF}(s_{t,i} \cup s_{c,j}) \le \mathrm{GoF}(s_{t,i}) + \mathrm{GoF}(s_{c,j}). \tag{25}$$


The goodness of fit is described by the Average Distance Deviation (ADD), which measures the discrepancy between the fitted curve and the contour segment under the ellipse model in question. For a given contour segment

$$s_k = \{ c_{k,1}(x_1, y_1), c_{k,2}(x_2, y_2), ..., c_{k,n}(x_n, y_n) \} \tag{26}$$

and the corresponding fitted ellipse points

$$s_{k,f} = \{ c_{k,f1}(x_{f1}, y_{f1}), c_{k,f2}(x_{f2}, y_{f2}), ..., c_{k,fn}(x_{fn}, y_{fn}) \}, \tag{27}$$

ADD is defined as

$$\mathrm{ADD}_k = \frac{1}{n} \sum_{i=1}^{n} \sqrt{(x_i - x_{fi})^2 + (y_i - y_{fi})^2}. \tag{28}$$

Within the transformed coordinate system

$$\begin{bmatrix} x_i' \\ y_i' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_i - x_{eo} \\ y_i - y_{eo} \end{bmatrix}, \tag{29}$$

Equation 28 can be simplified to

$$\mathrm{ADD}_k = \frac{1}{n} \left[ \sum_{i=1}^{n} \sqrt{x_i'^2 + y_i'^2} \left( 1 - \frac{1}{|D_i|} \right) \right], \tag{30}$$

where D_i is defined by

$$D_i^2 = \frac{x_i'^2}{a^2} + \frac{y_i'^2}{b^2}, \tag{31}$$

and a, b, (x_eo, y_eo), and θ are the standard ellipse parameters: the semi-major axis length, the semi-minor axis length, the ellipse center point, and the ellipse orientation angle with respect to the x-axis, respectively. The detailed pseudo code of the contour segment grouping is shown in Algorithm 8.


Algorithm 8: Contour segment grouping [7]

Input: contour segment set S = {s_1, s_2, ..., s_{n_s}}
Output: grouped contour segments S_t = {s_{t,1}, s_{t,2}, ..., s_{t,n_t}}

1. Initialization: candidate segment set S_c ← {s ; s ∈ S, |s| ≥ 6}; target segment set S_t ← S.
2. Perform segment grouping:
   i ← 1
   while |S_t| ≥ 1 do
       j ← 1
       while |S_c| ≥ 1 do
           if s_{c,i} ⊄ s_{c,j} then
               compute ADD_t of s_{t,i} by Equation 28
               compute ADD_c of s_{c,j} by Equation 28
               compute ADD_tc of s_{t,i} ∪ s_{c,j} by Equation 28
               if (ADD_tc < ADD_t + ADD_c) ∧ (s_{t,i} nearest to s_{t,i} ∪ s_{c,j}) then
                   s_{t,i} ← s_{t,i} ∪ s_{c,j}
                   S_c ← S_c \ s_{c,j}
                   if s_{c,j} ⊂ S_t then
                       S_t ← S_t \ s_{c,j}
           j ← j + 1
       i ← i + 1
3. Eliminate s_{t,i} if its enclosed angle is less than 90°.
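Given a fitted ellipse, the ADD of Equations 29-31 reduces to a few lines. The (xc, yc, a, b, theta) parameterization is assumed here, and the absolute value makes the deviation unsigned for points inside the ellipse; in Algorithm 8 this function would be evaluated for the target, the candidate, and their union to test the merging condition of Equation 25.

```python
import numpy as np

def add_measure(points, ellipse):
    """Average Distance Deviation (Equation 30) of contour points
    (n x 2 array of x, y) from an ellipse (xc, yc, a, b, theta)."""
    xc, yc, a, b, theta = ellipse
    x, y = points[:, 0] - xc, points[:, 1] - yc
    c, s = np.cos(theta), np.sin(theta)
    xp = c * x - s * y        # rotate into the ellipse frame (Equation 29)
    yp = s * x + c * y
    D = np.sqrt((xp / a) ** 2 + (yp / b) ** 2)   # so D_i^2 matches Equation 31
    r = np.hypot(xp, yp)
    # Equation 30; abs() yields an unsigned deviation for interior points
    return float(np.mean(np.abs(r * (1.0 - 1.0 / np.maximum(D, 1e-9)))))
```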

3.4 Contour Estimation

In this section, three classes of methods for the contour estimation task are presented: ellipse fitting, B-splines, and active contours with shape priors. The main ideas and practical considerations are elaborated.


3.4.1 Ellipse Fitting

Ellipse fitting is a very common approach in the area of overlapping object segmentation, as found in medical and industrial systems [59]. Ellipse fitting techniques are basically used in the contour estimation task to infer the missing parts of a partially observed object. The methods generally depend exclusively on the available object boundary points, to which an ellipse is fitted using a parametric model.

The most efficient recent ellipse fitting methods based on shape boundary points are generally addressed through the classic least squares fitting problem: given the data points, an objective cost function characterizing the goodness of the ellipse is optimized. Based on the objective cost function used in the optimization problem, ellipse fitting falls into two classes of least squares problems: algebraic and geometric fitting. While in algebraic ellipse fitting the objective function relies on the algebraic deviation, the geometric method minimizes the sum of squared distances to the data.

Algebraic methods:

In algebraic ellipse fitting [60], an ellipse, as a special case of a generic conic, is formulated as the zero set of a second-order polynomial equation. For a given point (x, y), an ellipse with a parameter vector a is defined as

$$F(\mathbf{a}, (x, y)) = a_0 x^2 + a_1 xy + a_2 y^2 + a_3 x + a_4 y + a_5 = \mathbf{a}^T \mathbf{x} = 0, \tag{32}$$

where

$$\mathbf{a} = \begin{bmatrix} a_0 & a_1 & a_2 & a_3 & a_4 & a_5 \end{bmatrix}^T, \qquad \mathbf{x} = \begin{bmatrix} x^2 & xy & y^2 & x & y & 1 \end{bmatrix}^T. \tag{33}$$

Equation 32 defines an ellipse provided that the quadratic condition Δ < 0,

$$\Delta = a_1^2 - 4 a_0 a_2 < 0, \tag{34}$$

is satisfied. The goodness of the ellipse fit is modeled as the sum of the squared algebraic distances of all the points involved. Given the ellipse parameter vector a, the algebraic distance d_i(a, (x_i, y_i)) is the deviation of the point (x_i, y_i) when substituted into the implicit polynomial conic equation:

$$d_i(\mathbf{a}, (x_i, y_i)) = F(\mathbf{a}, (x_i, y_i)). \tag{35}$$

For a given set of points {(x_i, y_i) | i = 1, 2, ..., n}, the objective cost function is

$$\hat{\mathbf{a}} = \operatorname*{argmin}_{\mathbf{a}} \sum_{i=1}^{n} d_i^2(\mathbf{a}, (x_i, y_i)). \tag{36}$$

Equivalently, collecting the data points into the design matrix D ∈ R^{n×6},

$$D = \begin{bmatrix} x_1^2 & x_1 y_1 & y_1^2 & x_1 & y_1 & 1 \\ x_2^2 & x_2 y_2 & y_2^2 & x_2 & y_2 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_n^2 & x_n y_n & y_n^2 & x_n & y_n & 1 \end{bmatrix}, \tag{37}$$

the objective function can be re-formulated in matrix representation as

$$d(\mathbf{a}) = \lVert D\mathbf{a} \rVert^2 = \mathbf{a}^T D^T D \mathbf{a} = \mathbf{a}^T S \mathbf{a}, \tag{38}$$

where S = D^T D is the scatter matrix.

In essence, the matrix form of the ellipse fitting problem in Equation 38, together with the quadratic condition in matrix form,

$$\mathbf{a}^T C \mathbf{a} < 0, \tag{39}$$

constitutes a quadratically constrained least squares problem.
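A problem of this form is classically solved as a generalized eigenvalue problem; the sketch below follows the direct least squares idea of Fitzgibbon et al., scaling the constraint to 4a_0 a_2 − a_1² = 1 so that the ellipse condition of Equation 34 holds up to scale. This is a numerically naive sketch; stabilized implementations usually use a block decomposition of the scatter matrix.

```python
import numpy as np

def fit_ellipse_algebraic(x, y):
    """Direct algebraic ellipse fit: minimize ||D a||^2 (Equation 38)
    subject to 4*a0*a2 - a1^2 = 1. x, y: 1-D arrays of boundary points."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])  # Eq. 37
    S = D.T @ D                        # scatter matrix (Equation 38)
    C = np.zeros((6, 6))               # constraint matrix of Equation 39
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem S a = mu C a, solved via eig(inv(S) C);
    # the ellipse solution is the eigenvector of the single positive eigenvalue.
    w, v = np.linalg.eig(np.linalg.solve(S, C))
    a = v[:, np.argmax(w.real)].real
    return a / np.linalg.norm(a)       # conic coefficients a0..a5 (Equation 32)
```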
