
The concept of crowdsourcing was first introduced by Jeff Howe in Wired magazine in 2006. Howe mentioned four examples of crowdsourcing models in his article: Threadless.com, InnoCentive.com, Amazon’s Mechanical Turk and iStockphoto.com (Brabham, 2013). In his weblog (www.crowdsourcing.com) Howe uses two definitions for crowdsourcing:

“The White Paper Version: Crowdsourcing is the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” (Howe, 2010)

“The Soundbyte Version: The application of Open Source principles to fields outside of software.” (Howe, 2010)

Brabham (2013) describes crowdsourcing as follows:

“An online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organizational goals. Online communities, also called crowds, are given the opportunity to respond to crowdsourcing activities promoted by the organization, and they are motivated to respond for a variety of reasons.” (Brabham, 2013)

Doan, Ramakrishnan and Halevy (Doan et al., 2011) proposed the following definition for crowdsourcing:

“We say that a system is a CS system if it enlists a crowd of humans to help solve a problem defined by the system owners, and if in doing so, it addresses the following four fundamental challenges: How to recruit and retain users? What contributions can users make? How to combine user contributions to solve the target problem? How to evaluate users and their contributions?” (Doan et al., 2011)
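The third of Doan et al.'s challenges, combining user contributions, is often met with simple aggregation rules such as majority voting. The following is an illustrative Python sketch, not taken from any of the cited systems; the item names and labels are hypothetical:

```python
from collections import Counter

def majority_vote(contributions):
    """Combine crowd contributions per item by keeping the most common answer."""
    combined = {}
    for item, answers in contributions.items():
        # Counter.most_common(1) returns the single most frequent answer
        combined[item] = Counter(answers).most_common(1)[0][0]
    return combined

# Hypothetical labels from three crowd workers for two images
contributions = {
    "image_1": ["cat", "cat", "dog"],
    "image_2": ["dog", "dog", "dog"],
}
print(majority_vote(contributions))  # → {'image_1': 'cat', 'image_2': 'dog'}
```

Real crowdsourcing systems typically refine this with worker-quality weighting, which also speaks to the fourth challenge of evaluating users and their contributions.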

Crowdsourcing is sometimes confused with the concept of open source. Opensource.com offers the following definition:

“The term open source refers to something people can modify and share because its design is publicly accessible. The term originated in the context of software development to designate a specific approach to creating computer programs.” This definition makes it clear that crowdsourcing is different from open source: an open call to a crowd does not imply that the resulting work is publicly accessible or modifiable.

Crowdsourcing is a Web 2.0 phenomenon, and the word itself is a compound of crowd and sourcing. Crowdsourcing can be described as outsourcing a task to a group of anonymous individuals through an open invitation, mostly issued over the Internet. The anonymous individuals who perform the task are called the crowd; they can be experts and professionals in different fields, or beginners and laypeople. In the crowdsourcing process many individuals may work on a task at the same time, and the crowdsourcing organization, which can be called the crowdsourcer, finally selects the best outputs (Schenk and Guittard, 2011).

The third definition, presented by Doan, Ramakrishnan and Halevy (Doan et al., 2011), is the closest to what is done in this study: a task is defined, and randomly selected anonymous individuals (the crowd) are asked to perform it in an online environment.

2.2.1 Examples of crowdsourcing businesses

Threadless (Threadless.com), which was mentioned as an example in Jeff Howe’s article in Wired magazine, is an online clothing company that sells silk-screened graphic T-shirts. Members of the Threadless online community can create their own designs using templates available on the website and upload them to a gallery. Other members of the community can score the designs on a 0 to 5 rating scale. The designs with the highest scores are printed at the company’s Chicago headquarters and sold to the community in the website’s online store. The winners receive $2,000 in cash and a $500 gift certificate as a reward. The company profits from this procedure since it only prints shirts for which customer demand already exists (Brabham, 2013).
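The selection step in a rating-based model like Threadless's, averaging community scores and keeping only the top designs, can be sketched as follows. This is a hypothetical illustration, not Threadless's actual algorithm; the design names and scores are invented:

```python
def top_designs(scores, k=2):
    """Rank designs by average community score (0-5 scale) and keep the top k."""
    averages = {design: sum(s) / len(s) for design, s in scores.items()}
    return sorted(averages, key=averages.get, reverse=True)[:k]

# Hypothetical community scores for four submitted designs
scores = {
    "robot_tee": [5, 4, 5],
    "cat_tee":   [3, 2, 4],
    "wave_tee":  [5, 5, 5],
    "plain_tee": [1, 2, 2],
}
print(top_designs(scores))  # → ['wave_tee', 'robot_tee']
```

The business logic described above follows from this: only designs that the community has already rated highly reach production.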

InnoCentive (innocentive.com) is another example of crowdsourcing. Companies can define challenging scientific research and development problems on the InnoCentive website and offer cash prizes to people who solve them. Companies benefit from the fast and low-cost solutions proposed by members of InnoCentive’s online community. Amazon’s Mechanical Turk (mturk.com) is a service that helps organizations crowdsource tasks to an online community of workers at low cost; these are typically tasks that human beings can do more efficiently than computers (Brabham, 2013).

2.2.2 Benefits

There are several benefits to crowdsourcing. It simplifies access to talent that cannot be found in any other way; in other words, crowdsourcing eases the process of finding individuals with particular skills. It also makes it possible to find people who can perform tasks that are difficult or impossible for machines.

It is also possible to have different crowd groups perform different tasks. Crowdsourcing helps the crowdsourcer follow market trends and simplifies organisational processes. Different groups of crowdsourcers, including small businesses, large businesses, non-profit organizations, scientists and researchers, artists and even single individuals, can benefit from crowdsourcing (Grier et al., 2013).

2.2.3 Challenges

Scholars have addressed several problems in the crowdsourcing process. Everyone who takes part in the crowd has some kind of motivation for participating, and knowing the crowd's intentions can be challenging. It is important for the crowdsourcer to understand what motivates the crowd before launching a crowdsourcing effort. The motivations vary, but in most studies the common ones are mentioned as follows: growing creative skills, improving a resume for future employment, earning money, experiencing the challenge of solving a problem, communicating with other creative individuals, finding new friends, keeping oneself busy when bored, or just having fun (Brabham, 2013).

Another difficulty with crowdsourcing is legal issues, which can arise for several reasons. One is that in the crowdsourcing process there is no clear boundary between a professional and an amateur. In a business context, common concerns about crowdsourcing are copyright and intellectual property. To protect both the crowdsourcing organization and the crowd from legal problems, every website that contains user-generated content needs terms of use and Digital Millennium Copyright Act (DMCA) statements, and all these statements and policies should be easy to find and understand. The crowdsourcing firm should always have clear rules that prevent the crowd from submitting content originating from another party (Brabham, 2013).

2.2.4 Crowdsourcing in scientific research

Collecting large data sets for scientific research can be challenging. Crowdsourcing can be used as a data collection method in scientific research, and the target crowd can consist of non-professional individuals as well as professionals and scientists.

The tasks or questions can be distributed remotely through electronic channels such as online communities and email, and the collection of results can also be managed remotely (Buecheler et al., 2010). Several research cases have utilized crowdsourcing for large-scale data collection. Rudoy, Goldman, Shechtman and Zelnik-Manor (Rudoy et al., 2012) proposed a method for crowdsourcing gaze data collection. In their study they acquired gaze direction data from a large crowd using a self-reporting mechanism. They applied their technique to a data set of videos and demonstrated that the outputs are similar to those of traditional gaze tracking. Eskenazi (2013), in Crowdsourcing for Speech Processing, discussed crowdsourcing applications in data collection for speech processing and described different web-based technologies for developing online applications to collect audio.

Leifman, Swedish, Roesch and Raskar (2015) proposed a technique for labelling medical images using crowdsourcing. Their approach uses a web application with two different user interfaces for different labelling tasks and involves two types of crowd: experts and crowd-workers. They also illustrated a validation approach designed to cope with noisy ground-truth data and with inconsistent input from both experts and crowd-workers (Leifman et al., 2015).