Leone, Stephen.
Towards the Exploration and Improvement of Generative Adversarial Attacks.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Towards the Exploration and Improvement of Generative Adversarial Attacks.
Author:
Leone, Stephen.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020
Physical description:
130 p.
Note:
Source: Masters Abstracts International, Volume: 82-05.
Note:
Advisor: Fontaine, Fred L.
Contained By:
Masters Abstracts International, 82-05.
Subject:
Artificial intelligence.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27994576
ISBN:
9798684653148
Leone, Stephen.
Towards the Exploration and Improvement of Generative Adversarial Attacks.
- Ann Arbor : ProQuest Dissertations & Theses, 2020 - 130 p.
Source: Masters Abstracts International, Volume: 82-05.
Thesis (M.E.)--The Cooper Union for the Advancement of Science and Art, 2020.
This item must not be sold to any third party vendors.
Adversarial examples represent a major security threat for emerging technologies that use machine learning models to make important decisions. Adversarial examples are typically crafted by performing gradient descent on the target model, but recently, researchers have started to look at methods to generate adversarial examples without requiring access to the target model at decision time. This thesis investigates the place that generative attacks have in the machine learning security threat landscape and improvements that can be made on existing attacks. By evaluating attacks and defenses on image classification problems, we found that generative attackers are most relevant in black box settings where query access is given to the attacker before test time. We evaluate generative methods in these settings as standalone attacks and we give examples of how generative attacks can reduce the number of queries or amount of time required to produce a successful adversarial example by narrowing the search space of existing optimization algorithms.
ISBN: 9798684653148
Subjects--Topical Terms:
Artificial intelligence.
Subjects--Index Terms:
Adversarial Examples
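For context on the abstract above: it notes that adversarial examples are "typically crafted by performing gradient descent on the target model." The snippet below is a minimal sketch of that gradient-based baseline, a single-step signed-gradient (FGSM-style) attack in PyTorch. It is not taken from the thesis; the toy model, epsilon value, and input shapes are assumptions made purely for illustration.

# Illustrative sketch only (not from the thesis): a single-step signed-gradient
# (FGSM-style) attack, the standard gradient-based way to craft an adversarial
# example when the attacker can backpropagate through the target model.
# The model, epsilon, and input shapes below are assumptions for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return x perturbed by one epsilon-sized step along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    with torch.no_grad():
        # Step in the direction that increases the classification loss,
        # then clip back to the valid image range [0, 1].
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Hypothetical usage: a toy classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # one 28x28 grayscale image in [0, 1]
label = torch.tensor([3])      # the true class the attack tries to make the model miss
x_adv = fgsm_attack(model, x, label)

The generative attacks the thesis evaluates replace this per-example gradient loop with a trained generator, which is why, as the abstract states, they are most relevant in black-box settings with prior query access and can narrow the search space of existing query-based optimization attacks.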
LDR    02228nmm a2200373 4500
001    594572
005    20210521101657.5
008    210917s2020 ||||||||||||||||| ||eng d
020    $a 9798684653148
035    $a (MiAaPQ)AAI27994576
035    $a AAI27994576
040    $a MiAaPQ $c MiAaPQ
100 1  $a Leone, Stephen. $3 886587
245 10 $a Towards the Exploration and Improvement of Generative Adversarial Attacks.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 130 p.
500    $a Source: Masters Abstracts International, Volume: 82-05.
500    $a Advisor: Fontaine, Fred L.
502    $a Thesis (M.E.)--The Cooper Union for the Advancement of Science and Art, 2020.
506    $a This item must not be sold to any third party vendors.
520    $a Adversarial examples represent a major security threat for emerging technologies that use machine learning models to make important decisions. Adversarial examples are typically crafted by performing gradient descent on the target model, but recently, researchers have started to look at methods to generate adversarial examples without requiring access to the target model at decision time. This thesis investigates the place that generative attacks have in the machine learning security threat landscape and improvements that can be made on existing attacks. By evaluating attacks and defenses on image classification problems, we found that generative attackers are most relevant in black box settings where query access is given to the attacker before test time. We evaluate generative methods in these settings as standalone attacks and we give examples of how generative attacks can reduce the number of queries or amount of time required to produce a successful adversarial example by narrowing the search space of existing optimization algorithms.
590    $a School code: 0057.
650  4 $a Artificial intelligence. $3 194058
650  4 $a Electrical engineering. $3 454503
650  4 $a Computer science. $3 199325
653    $a Adversarial Examples
653    $a Computer Security
653    $a Deep Learning
653    $a Generative Attacks
653    $a Machine Learning
690    $a 0800
690    $a 0544
690    $a 0984
710 2  $a The Cooper Union for the Advancement of Science and Art. $b Electrical Engineering. $3 886588
773 0  $t Masters Abstracts International $g 82-05.
790    $a 0057
791    $a M.E.
792    $a 2020
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27994576
Holdings (1 record)
Barcode:               000000193532
Location:              Electronic collection
Circulation category:  1 Book
Material type:         E-book
Call number:           EB 2020
Use type:              Normal
Loan status:           In cataloging process
Hold status:
Remarks:
Attachments:           0

Multimedia
Multimedia file: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27994576