Human-Guided Generation of Sketches and Prototypes.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Human-Guided Generation of Sketches and Prototypes.
Author:
Huang, Zifeng Forrest.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2022
Physical description:
157 p.
Note:
Source: Dissertations Abstracts International, Volume: 84-04, Section: B.
Note:
Advisor: Canny, John.
Contained By:
Dissertations Abstracts International, 84-04B.
Subject:
Computer science.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29261830
ISBN:
9798352951781
Huang, Zifeng Forrest.
Human-Guided Generation of Sketches and Prototypes.
- Ann Arbor : ProQuest Dissertations & Theses, 2022 - 157 p.
Source: Dissertations Abstracts International, Volume: 84-04, Section: B.
Thesis (Ph.D.)--University of California, Berkeley, 2022.
This item must not be sold to any third party vendors.
Sketching and prototyping are central to creative activities that improve and advance many aspects of human lives. They enable non-experts to express themselves through drawing, or help User Interface (UI) designers explore diverse alternatives through low-fidelity prototyping. Generating these sketches and prototypes, however, typically requires significant expertise that casual users might not possess, and may be effortful and time-consuming even for professional users. In this dissertation, I will introduce multiple deep-learning methods and systems that can generate sketches and prototypes. The generation of these artifacts is designed to be guided by annotations in familiar modalities (e.g., generating user interfaces from text descriptions). The presented generation systems and methods include Sketchforme, a system that generates individual sketched scenes from text descriptions; Scones, a system that iteratively generates and refines sketched scenes based on users' multiple text instructions; and Words2ui, a collection of methods that can create UI prototypes from high-level text descriptions. This research creates unique affordances, advances the state of the art of creativity support tools, contributes benchmark metrics, and explores novel interaction paradigms in diverse domains from non-expert sketching to professional UI design. These research contributions can serve as important building blocks towards future multi-modal systems that enable more effective and efficient sketching and prototyping for all.
ISBN: 9798352951781
Subjects--Topical Terms:
Computer science.
Subjects--Index Terms:
Deep learning
LDR  02775nmm a2200409 4500
001  636148
005  20230501063911.5
006  m o d
007  cr#unu||||||||
008  230724s2022 ||||||||||||||||| ||eng d
020     $a 9798352951781
035     $a (MiAaPQ)AAI29261830
035     $a AAI29261830
040     $a MiAaPQ $c MiAaPQ
100  1  $a Huang, Zifeng Forrest. $3 942494
245  10 $a Human-Guided Generation of Sketches and Prototypes.
260  1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2022
300     $a 157 p.
500     $a Source: Dissertations Abstracts International, Volume: 84-04, Section: B.
500     $a Advisor: Canny, John.
502     $a Thesis (Ph.D.)--University of California, Berkeley, 2022.
506     $a This item must not be sold to any third party vendors.
520     $a Sketching and prototyping are central to creative activities that improve and advance many aspects of human lives. They enable non-experts to express themselves through drawing, or help User Interface (UI) designers explore diverse alternatives through low-fidelity prototyping. Generating these sketches and prototypes, however, typically requires significant expertise that casual users might not possess, and may be effortful and time-consuming even for professional users. In this dissertation, I will introduce multiple deep-learning methods and systems that can generate sketches and prototypes. The generation of these artifacts is designed to be guided by annotations in familiar modalities (e.g., generating user interfaces from text descriptions). The presented generation systems and methods include Sketchforme, a system that generates individual sketched scenes from text descriptions; Scones, a system that iteratively generates and refines sketched scenes based on users' multiple text instructions; and Words2ui, a collection of methods that can create UI prototypes from high-level text descriptions. This research creates unique affordances, advances the state of the art of creativity support tools, contributes benchmark metrics, and explores novel interaction paradigms in diverse domains from non-expert sketching to professional UI design. These research contributions can serve as important building blocks towards future multi-modal systems that enable more effective and efficient sketching and prototyping for all.
590     $a School code: 0028.
650   4 $a Computer science. $3 199325
650   4 $a Design. $3 204000
653     $a Deep learning
653     $a Human-computer interaction
653     $a Intelligent user interface
653     $a Sketching
653     $a Transformer
653     $a UI design
690     $a 0984
690     $a 0800
690     $a 0389
710  2  $a University of California, Berkeley. $b Computer Science. $3 942495
773  0  $t Dissertations Abstracts International $g 84-04B.
790     $a 0028
791     $a Ph.D.
792     $a 2022
793     $a English
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29261830
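The fixed-length leader (LDR) at the head of the record packs structural metadata into fixed character positions. A minimal sketch decoding a few of them in Python (position meanings follow the MARC 21 bibliographic standard; this helper is illustrative, not part of the catalog system):

```python
# Decode fixed positions of the MARC 21 leader shown in the record above.
leader = "02775nmm a2200409 4500"

record_length = int(leader[0:5])    # positions 00-04: record length in bytes
record_status = leader[5]           # position 05: 'n' = new record
type_of_record = leader[6]          # position 06: 'm' = computer file
bibliographic_level = leader[7]     # position 07: 'm' = monograph/item
base_address = int(leader[12:17])   # positions 12-16: start of variable fields

print(record_length, record_status, type_of_record,
      bibliographic_level, base_address)
```

The record length (2775) matches the byte count declared when the record was exported, and the base address (409) is where the variable data fields begin in the raw transmission format.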
Holdings (1 item):
Barcode: 000000223052
Location: Electronic collection
Circulation category: Book
Material type: E-book
Call number: EB 2022
Use type: Normal
Loan status: On shelf
Holds: 0