Human-in-the-Loop Learning from Crowdsourcing and Social Media.
Record Type: Electronic resources : Monograph/item
Title/Author: Human-in-the-Loop Learning from Crowdsourcing and Social Media.
Author: Liu, Tong.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2020
Description: 148 p.
Notes: Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
Notes: Advisor: Homan, Christopher; Phillips, Dan.
Contained By: Dissertations Abstracts International, 82-03B.
Subject: Computer science.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28025038
ISBN: 9798662572287
Notes: Thesis (Ph.D.)--Rochester Institute of Technology, 2020.
Notes: This item must not be sold to any third party vendors.
Abstract: Computational social studies using public social media data have become increasingly popular because of the large amount of user-generated data available. The richness of social media data, coupled with its noise and subjectivity, raises significant challenges for computationally studying social issues in a feasible and scalable manner. Machine learning problems are, as a result, often subjective or ambiguous when humans are involved: humans solving the same problem may come to legitimate but completely different conclusions based on their personal experiences and beliefs. When building supervised learning models, particularly from crowdsourced training data, multiple annotations per data item are usually reduced to a single label representing ground truth. This inevitably hides a rich source of diversity and subjectivity in opinions about the labels. Label distribution learning associates with each data item a probability distribution over that item's labels; it can therefore preserve the diversity of opinions and beliefs that conventional learning hides or ignores. We propose a human-in-the-loop learning framework to model and study large volumes of unlabeled, subjective social media data with less human effort. We study various annotation tasks given to crowdsourced annotators and methods for aggregating their contributions in a manner that preserves subjectivity and disagreement. We introduce a strategy for learning label distributions from only five to ten labels per item by aggregating human-annotated labels over multiple, semantically related data items. We conduct experiments using our learning framework on data related to two subjective social issues (work and employment, and suicide prevention) that touch many people worldwide. Our methods can be applied to a broad variety of problems, particularly social problems. Our experimental results suggest that specific label aggregation methods can help provide reliable, representative semantics at the population level.
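The dissertation itself is not included in this record. As a rough, hypothetical illustration of the aggregation idea described in the abstract (pooling a handful of crowdsourced labels across semantically related items to estimate a label distribution rather than a single ground-truth label), a minimal sketch follows; all names and data are invented for illustration and are not drawn from the thesis:

```python
from collections import Counter

def label_distribution(labels):
    """Normalize a list of discrete labels into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def pooled_distribution(item_labels, related_items):
    """Aggregate labels over a cluster of semantically related items,
    preserving disagreement instead of collapsing to a majority label."""
    pooled = [lab for item in related_items for lab in item_labels[item]]
    return label_distribution(pooled)

# Two related items, each with only five crowdsourced labels
# (e.g. annotator answers to a subjective question about a post).
item_labels = {
    "post_a": ["yes", "yes", "no", "yes", "unsure"],
    "post_b": ["yes", "no", "no", "yes", "yes"],
}
dist = pooled_distribution(item_labels, ["post_a", "post_b"])
# dist == {"yes": 0.6, "no": 0.3, "unsure": 0.1}
```

A majority-vote reduction of either item would keep only "yes"; the pooled distribution retains the minority and uncertain responses, which is the kind of disagreement the abstract argues should be preserved.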
LDR    03154nmm a2200361 4500
001    594592
005    20210521101703.5
008    210917s2020 ||||||||||||||||| ||eng d
020    $a 9798662572287
035    $a (MiAaPQ)AAI28025038
035    $a AAI28025038
040    $a MiAaPQ $c MiAaPQ
100 1  $a Liu, Tong. $3 886624
245 10 $a Human-in-the-Loop Learning from Crowdsourcing and Social Media.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 148 p.
500    $a Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
500    $a Advisor: Homan, Christopher; Phillips, Dan.
502    $a Thesis (Ph.D.)--Rochester Institute of Technology, 2020.
506    $a This item must not be sold to any third party vendors.
520    $a Computational social studies using public social media data have become increasingly popular because of the large amount of user-generated data available. The richness of social media data, coupled with its noise and subjectivity, raises significant challenges for computationally studying social issues in a feasible and scalable manner. Machine learning problems are, as a result, often subjective or ambiguous when humans are involved: humans solving the same problem may come to legitimate but completely different conclusions based on their personal experiences and beliefs. When building supervised learning models, particularly from crowdsourced training data, multiple annotations per data item are usually reduced to a single label representing ground truth. This inevitably hides a rich source of diversity and subjectivity in opinions about the labels. Label distribution learning associates with each data item a probability distribution over that item's labels; it can therefore preserve the diversity of opinions and beliefs that conventional learning hides or ignores. We propose a human-in-the-loop learning framework to model and study large volumes of unlabeled, subjective social media data with less human effort. We study various annotation tasks given to crowdsourced annotators and methods for aggregating their contributions in a manner that preserves subjectivity and disagreement. We introduce a strategy for learning label distributions from only five to ten labels per item by aggregating human-annotated labels over multiple, semantically related data items. We conduct experiments using our learning framework on data related to two subjective social issues (work and employment, and suicide prevention) that touch many people worldwide. Our methods can be applied to a broad variety of problems, particularly social problems. Our experimental results suggest that specific label aggregation methods can help provide reliable, representative semantics at the population level.
590    $a School code: 0465.
650  4 $a Computer science. $3 199325
650  4 $a Artificial intelligence. $3 194058
650  4 $a Web studies. $3 708690
653    $a Human-in-the-loop
653    $a Learning
653    $a Crowdsourcing
653    $a Social media
690    $a 0984
690    $a 0800
690    $a 0646
710 2  $a Rochester Institute of Technology. $b Computing and Information Sciences. $3 730232
773 0  $t Dissertations Abstracts International $g 82-03B.
790    $a 0465
791    $a Ph.D.
792    $a 2020
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28025038
Items
Inventory Number: 000000193552
Location Name: Electronic Collection (電子館藏)
Item Class: Book (圖書)
Material type: E-book (電子書)
Call number: EB 2020
Usage Class: Normal (一般使用)
Loan Status: in cat dept.
No. of reservations: 0