Liu, Tong.
Human-in-the-Loop Learning from Crowdsourcing and Social Media.
Record type: Bibliographic--electronic resource : Monograph/item
Title/Author: Human-in-the-Loop Learning from Crowdsourcing and Social Media.
Author: Liu, Tong.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2020
Description: 148 p.
Note: Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
Note: Advisor: Homan, Christopher; Phillips, Dan.
Contained by: Dissertations Abstracts International, 82-03B.
Subject: Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28025038
ISBN: 9798662572287
Thesis (Ph.D.)--Rochester Institute of Technology, 2020.
This item must not be sold to any third-party vendors.

Abstract: Computational social studies using public social media data have become increasingly popular because of the large amount of user-generated data available. The richness of social media data, coupled with its noise and subjectivity, raises significant challenges for computationally studying social issues in a feasible and scalable manner. Machine learning problems are, as a result, often subjective or ambiguous when humans are involved: humans solving the same problem might come to legitimate but completely different conclusions, based on their personal experiences and beliefs. When building supervised learning models, particularly with crowdsourced training data, multiple annotations per data item are usually reduced to a single label representing the ground truth. This inevitably hides a rich source of diversity and subjectivity of opinion about the labels. Label distribution learning associates with each data item a probability distribution over the labels for that item, and can thus preserve the diversity of opinions and beliefs that conventional learning hides or ignores. We propose a human-in-the-loop learning framework to model and study large volumes of unlabeled, subjective social media data with less human effort. We study various annotation tasks given to crowdsourced annotators, and methods for aggregating their contributions in a manner that preserves subjectivity and disagreement. We introduce a strategy for learning label distributions with only five to ten labels per item by aggregating human-annotated labels over multiple, semantically related data items. We conduct experiments using our learning framework on data related to two subjective social issues (work and employment, and suicide prevention) that touch many people worldwide. Our methods can be applied to a broad variety of problems, particularly social problems. Our experimental results suggest that specific label aggregation methods can help provide reliable representative semantics at the population level.
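The aggregation strategy the abstract describes — pooling the five-to-ten crowd labels per item across semantically related items to recover a per-item label distribution — can be sketched in a few lines. This is a minimal illustration under assumed inputs, not the dissertation's actual method: the `annotations`/`clusters` structures, the label names, and the cluster assignments are all hypothetical.

```python
from collections import Counter

def label_distribution(labels):
    """Normalize a list of raw crowd labels into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def pooled_distributions(annotations, clusters):
    """Aggregate sparse per-item labels over semantically related items.

    annotations: dict item_id -> list of crowd labels (e.g. 5-10 per item)
    clusters:    dict item_id -> cluster_id (a semantic grouping of items)
    Each item inherits the distribution pooled over its whole cluster,
    so opinions and disagreement are preserved rather than collapsed
    to a single majority label.
    """
    pooled = {}
    for item, labels in annotations.items():
        pooled.setdefault(clusters[item], []).extend(labels)
    return {item: label_distribution(pooled[clusters[item]])
            for item in annotations}

# Toy example: two semantically related posts, five crowd labels each
annotations = {
    "post1": ["lost_job", "lost_job", "searching", "lost_job", "other"],
    "post2": ["lost_job", "searching", "searching", "lost_job", "lost_job"],
}
clusters = {"post1": 0, "post2": 0}  # both posts fall in one semantic cluster
dists = pooled_distributions(annotations, clusters)
# Both posts share one distribution pooled over all ten labels
```

With only five labels per item, each item's own distribution is noisy; pooling across the cluster stabilizes it while keeping the minority "searching" and "other" opinions visible.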
Subjects--Topical Terms: Computer science.
Subjects--Index Terms: Human-in-the-loop
MARC record:

LDR    03154nmm a2200361 4500
001    594592
005    20210521101703.5
008    210917s2020 ||||||||||||||||| ||eng d
020    $a 9798662572287
035    $a (MiAaPQ)AAI28025038
035    $a AAI28025038
040    $a MiAaPQ $c MiAaPQ
100 1  $a Liu, Tong. $3 886624
245 10 $a Human-in-the-Loop Learning from Crowdsourcing and Social Media.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 148 p.
500    $a Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
500    $a Advisor: Homan, Christopher; Phillips, Dan.
502    $a Thesis (Ph.D.)--Rochester Institute of Technology, 2020.
506    $a This item must not be sold to any third-party vendors.
520    $a Computational social studies using public social media data have become increasingly popular because of the large amount of user-generated data available. The richness of social media data, coupled with its noise and subjectivity, raises significant challenges for computationally studying social issues in a feasible and scalable manner. Machine learning problems are, as a result, often subjective or ambiguous when humans are involved: humans solving the same problem might come to legitimate but completely different conclusions, based on their personal experiences and beliefs. When building supervised learning models, particularly with crowdsourced training data, multiple annotations per data item are usually reduced to a single label representing the ground truth. This inevitably hides a rich source of diversity and subjectivity of opinion about the labels. Label distribution learning associates with each data item a probability distribution over the labels for that item, and can thus preserve the diversity of opinions and beliefs that conventional learning hides or ignores. We propose a human-in-the-loop learning framework to model and study large volumes of unlabeled, subjective social media data with less human effort. We study various annotation tasks given to crowdsourced annotators, and methods for aggregating their contributions in a manner that preserves subjectivity and disagreement. We introduce a strategy for learning label distributions with only five to ten labels per item by aggregating human-annotated labels over multiple, semantically related data items. We conduct experiments using our learning framework on data related to two subjective social issues (work and employment, and suicide prevention) that touch many people worldwide. Our methods can be applied to a broad variety of problems, particularly social problems. Our experimental results suggest that specific label aggregation methods can help provide reliable representative semantics at the population level.
590    $a School code: 0465.
650  4 $a Computer science. $3 199325
650  4 $a Artificial intelligence. $3 194058
650  4 $a Web studies. $3 708690
653    $a Human-in-the-loop
653    $a Learning
653    $a Crowdsourcing
653    $a Social media
690    $a 0984
690    $a 0800
690    $a 0646
710 2  $a Rochester Institute of Technology. $b Computing and Information Sciences. $3 730232
773 0  $t Dissertations Abstracts International $g 82-03B.
790    $a 0465
791    $a Ph.D.
792    $a 2020
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28025038
Holdings:
Barcode: 000000193552
Location: Electronic collection
Circulation category: Book
Material type: E-book
Call number: EB 2020
Use type: Normal
Loan status: In cataloging
Holds: 0