Towards the Exploration and Improvement of Generative Adversarial Attacks.

Record Type: Electronic resources : Monograph/item
Title/Author: Towards the Exploration and Improvement of Generative Adversarial Attacks.
Author: Leone, Stephen.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2020
Description: 130 p.
Notes: Source: Masters Abstracts International, Volume: 82-05.
Notes: Advisor: Fontaine, Fred L.
Contained By: Masters Abstracts International, 82-05.
Subject: Artificial intelligence.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27994576
ISBN: 9798684653148
Dissertation Note: Thesis (M.E.)--The Cooper Union for the Advancement of Science and Art, 2020.
Restrictions on Access: This item must not be sold to any third party vendors.
Abstract: Adversarial examples represent a major security threat for emerging technologies that use machine learning models to make important decisions. Adversarial examples are typically crafted by performing gradient descent on the target model, but recently, researchers have started to look at methods to generate adversarial examples without requiring access to the target model at decision time. This thesis investigates the place that generative attacks have in the machine learning security threat landscape and improvements that can be made on existing attacks. By evaluating attacks and defenses on image classification problems, we found that generative attackers are most relevant in black box settings where query access is given to the attacker before test time. We evaluate generative methods in these settings as standalone attacks and we give examples of how generative attacks can reduce the number of queries or amount of time required to produce a successful adversarial example by narrowing the search space of existing optimization algorithms.
Subjects--Index Terms: Adversarial Examples
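The abstract notes that adversarial examples are typically crafted by performing gradient descent on the target model. A minimal sketch of that idea, an FGSM-style single gradient-sign step against a toy logistic classifier; the model, weights, input, and epsilon here are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps):
    """Craft an adversarial input for a toy logistic model p = sigmoid(w.x + b).

    The input is pushed one eps-sized step along the sign of the loss
    gradient; for binary cross-entropy on this model that gradient with
    respect to x is (p - y_true) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's predicted probability
    grad = (p - y_true) * w                 # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad)          # fast gradient sign step

# Toy white-box example: x is correctly classified as class 1 (score > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.9)
print(x_adv)                # [0.1 1.4]
print(x_adv @ w + b > 0)    # False: the decision flips on this toy example
```

Black-box attacks of the kind the thesis studies cannot compute this gradient directly and must instead estimate it from queries or train a generator to produce perturbations, which is why reducing the query budget matters.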
MARC Record:
LDR  02228nmm a2200373 4500
001  594572
005  20210521101657.5
008  210917s2020 ||||||||||||||||| ||eng d
020    $a 9798684653148
035    $a (MiAaPQ)AAI27994576
035    $a AAI27994576
040    $a MiAaPQ $c MiAaPQ
100 1  $a Leone, Stephen. $3 886587
245 10 $a Towards the Exploration and Improvement of Generative Adversarial Attacks.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 130 p.
500    $a Source: Masters Abstracts International, Volume: 82-05.
500    $a Advisor: Fontaine, Fred L.
502    $a Thesis (M.E.)--The Cooper Union for the Advancement of Science and Art, 2020.
506    $a This item must not be sold to any third party vendors.
520    $a Adversarial examples represent a major security threat for emerging technologies that use machine learning models to make important decisions. Adversarial examples are typically crafted by performing gradient descent on the target model, but recently, researchers have started to look at methods to generate adversarial examples without requiring access to the target model at decision time. This thesis investigates the place that generative attacks have in the machine learning security threat landscape and improvements that can be made on existing attacks. By evaluating attacks and defenses on image classification problems, we found that generative attackers are most relevant in black box settings where query access is given to the attacker before test time. We evaluate generative methods in these settings as standalone attacks and we give examples of how generative attacks can reduce the number of queries or amount of time required to produce a successful adversarial example by narrowing the search space of existing optimization algorithms.
590    $a School code: 0057.
650  4 $a Artificial intelligence. $3 194058
650  4 $a Electrical engineering. $3 454503
650  4 $a Computer science. $3 199325
653    $a Adversarial Examples
653    $a Computer Security
653    $a Deep Learning
653    $a Generative Attacks
653    $a Machine Learning
690    $a 0800
690    $a 0544
690    $a 0984
710 2  $a The Cooper Union for the Advancement of Science and Art. $b Electrical Engineering. $3 886588
773 0  $t Masters Abstracts International $g 82-05.
790    $a 0057
791    $a M.E.
792    $a 2020
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27994576
Items
Inventory Number: 000000193532
Location Name: Electronic Collection (電子館藏)
Item Class: Book (圖書)
Material Type: E-book (電子書)
Call Number: EB 2020
Usage Class: Normal (一般使用)
Loan Status: in cat dept.
No. of Reservations: 0