WCSE 2022 Spring
ISBN: 978-981-18-5852-9 DOI: 10.18178/wcse.2022.04.179

Design Task Representation and Automated Evaluation Method in the Context of Crowdsourcing

Zhenchong Mo, Lin Gong, Ziyao Huang, Mingren Zhu, Jian Xie, Junde Lan, Shicheng Zhang

Abstract— With the development of the Internet, the number of user requirements has exploded, personalized requirements have diversified, and new design models such as crowdsourcing design have emerged. The design task is the core element that distinguishes these new design models from traditional ones, and design task evaluation is the core driving force behind downstream design stages such as the matching of resources on design platforms, the evolution of design teams, and the self-iteration of user requirements. However, most evaluation methods are based on expert scoring or traditional computation theory: they not only suffer from subjective bias, but also require manual identification of some necessary algorithm parameters, which makes them difficult to integrate into automated crowdsourcing platforms. This article therefore proposes a design task representation and automated evaluation method for the crowdsourcing context. First, to represent crowdsourcing design tasks, a multi-domain representation method based on requirements mapping is proposed, relying on the crowdsourcing design platform. Second, to evaluate functional and technical tasks in the crowdsourcing design context, a task evaluation system is constructed, comprising functional task scale measurement based on design entropy, functional and technical contradiction analysis based on TRIZ and axiomatic design, and automated technology maturity analysis based on a domain knowledge graph. Third, to meet the diverse needs of multiple agents for task evaluation in crowdsourcing design, a multi-index integrated evaluation method for crowdsourcing design tasks is proposed. Finally, the degree of automation of the evaluation method is verified through the development of task evaluation application tools and a case study from a crowdsourcing platform.
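To make the evaluation pipeline concrete, the sketch below illustrates one plausible reading of two of the indices named in the abstract: a Shannon-entropy measure of functional task scale and a weighted-sum multi-index integration. The function names, the normalization scheme, and all weights and index values are illustrative assumptions for this sketch, not the paper's actual formulation.

    import math

    def functional_scale_entropy(subtask_weights):
        # Entropy-style scale measure: Shannon entropy of the distribution of
        # relative complexity across decomposed sub-functions. Illustrative only;
        # the paper's design-entropy formulation may differ.
        total = sum(subtask_weights)
        probs = [w / total for w in subtask_weights if w > 0]
        return -sum(p * math.log2(p) for p in probs)

    def integrated_score(indices, weights):
        # Weighted-sum aggregation of normalized task indices, a common
        # multi-index integration scheme (assumed here, not taken from the paper).
        assert set(indices) == set(weights)
        return sum(weights[k] * indices[k] for k in indices)

    # Hypothetical task: three sub-functions with relative complexity 5, 3, 2.
    scale = functional_scale_entropy([5, 3, 2])   # about 1.485 bits
    indices = {
        "scale": scale / math.log2(3),            # normalize by the maximum entropy of 3 sub-functions
        "contradiction": 0.4,                     # e.g. a TRIZ/axiomatic-design contradiction index
        "maturity": 0.7,                          # e.g. a technology-maturity score from a knowledge graph
    }
    weights = {"scale": 0.3, "contradiction": 0.3, "maturity": 0.4}
    print(f"integrated task score: {integrated_score(indices, weights):.3f}")

Under these assumed weights the integrated score is about 0.681; any real deployment would calibrate the weights to the platform's agents and their evaluation needs.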

Index Terms— Crowdsourcing design, design task evaluation, functional task evaluation, technical task evaluation.

Zhenchong Mo
Beijing Institute of Technology, China
Lin Gong
Beijing Institute of Technology, China; Yangtze Delta Region Academy of Beijing Institute of Technology, China
Ziyao Huang
Beijing Institute of Technology, China
Mingren Zhu
Beijing Institute of Technology, China
Jian Xie
Beijing Institute of Technology, China
Junde Lan
Beijing Institute of Technology, China
Shicheng Zhang
Xianhe Oil Production Plant, Sinopec Shengli Oilfield Co., Ltd, China



Cite: Zhenchong Mo, Lin Gong, Ziyao Huang, Mingren Zhu, Jian Xie, Junde Lan, Shicheng Zhang, "Design Task Representation and Automated Evaluation Method in the Context of Crowdsourcing," WCSE 2022 Spring Event: 2022 9th International Conference on Industrial Engineering and Applications, pp. 1547-1557, Sanya, China, April 15-18, 2022.