Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication.
Author:
Jakesch, Maurice.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2022
Extent:
199 p.
Note:
Source: Dissertations Abstracts International, Volume: 84-08, Section: B.
Note:
Advisor: Naaman, Mor.
Contained By:
Dissertations Abstracts International, 84-08B.
Subject:
Information science.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29400004
ISBN:
9798368460024
Jakesch, Maurice.
Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication. - Ann Arbor : ProQuest Dissertations & Theses, 2022 - 199 p.
Source: Dissertations Abstracts International, Volume: 84-08, Section: B.
Thesis (Ph.D.)--Cornell University, 2022.
This item must not be sold to any third party vendors.
Large language models like GPT-3 are increasingly becoming part of human communication. Through writing suggestions, grammatical assistance, and machine translation, the models enable people to communicate more efficiently. Yet, we have a limited understanding of how integrating them into communication will change culture and society. For example, a language model that preferably generates a particular view may influence people's opinions when integrated into widely used applications. This dissertation empirically demonstrates that embedding large language models into human communication poses systemic societal risks. In a series of experiments, I show that humans cannot detect language produced by GPT-3, that using large language models in communication may undermine interpersonal trust, and that interactions with opinionated language models change users' attitudes. I introduce the concept of AI-Mediated Communication, where AI technologies modify, augment, or generate what people say, to theorize how the use of large language models in communication presents a paradigm shift from previous forms of computer-mediated communication. I conclude by discussing how my findings highlight the need to manage the risks of AI technologies like large language models in ways that are more systematic, democratic, and empirically grounded.
ISBN: 9798368460024
Subjects--Topical Terms: Information science.
Subjects--Index Terms: AI ethics
LDR     02573nmm a2200397 4500
001     636189
005     20230501063923.5
006     m        o  d
007     cr#unu||||||||
008     230724s2022    ||||||||||||||||| ||eng d
020     $a 9798368460024
035     $a (MiAaPQ)AAI29400004
035     $a AAI29400004
040     $a MiAaPQ $c MiAaPQ
100 1   $a Jakesch, Maurice. $0 (orcid)0000-0002-2642-3322 $3 942604
245 10  $a Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication.
260  1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2022
300     $a 199 p.
500     $a Source: Dissertations Abstracts International, Volume: 84-08, Section: B.
500     $a Advisor: Naaman, Mor.
502     $a Thesis (Ph.D.)--Cornell University, 2022.
506     $a This item must not be sold to any third party vendors.
520     $a Large language models like GPT-3 are increasingly becoming part of human communication. Through writing suggestions, grammatical assistance, and machine translation, the models enable people to communicate more efficiently. Yet, we have a limited understanding of how integrating them into communication will change culture and society. For example, a language model that preferably generates a particular view may influence people's opinions when integrated into widely used applications. This dissertation empirically demonstrates that embedding large language models into human communication poses systemic societal risks. In a series of experiments, I show that humans cannot detect language produced by GPT-3, that using large language models in communication may undermine interpersonal trust, and that interactions with opinionated language models change users' attitudes. I introduce the concept of AI-Mediated Communication, where AI technologies modify, augment, or generate what people say, to theorize how the use of large language models in communication presents a paradigm shift from previous forms of computer-mediated communication. I conclude by discussing how my findings highlight the need to manage the risks of AI technologies like large language models in ways that are more systematic, democratic, and empirically grounded.
590     $a School code: 0058.
650  4  $a Information science. $3 190425
650  4  $a Computer science. $3 199325
650  4  $a Psychology. $3 181533
653     $a AI ethics
653     $a Human-AI interaction
653     $a Large language models
653     $a Risk assessment
653     $a Social influence
690     $a 0723
690     $a 0984
690     $a 0621
710 2   $a Cornell University. $b Information Science. $3 942355
773 0   $t Dissertations Abstracts International $g 84-08B.
790     $a 0058
791     $a Ph.D.
792     $a 2022
793     $a English
856 40  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29400004
Holdings:
Barcode: 000000223093
Location: Electronic collection
Circulation category: 1 Book
Material type: E-book
Call number: EB 2022
Use type: Normal
Loan status: On shelf
Hold status: 0
Multimedia file:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29400004