Adversarial Robustness

Description

Adversarial examples are inputs to deep learning models that have been maliciously crafted to induce incorrect outputs. Even state-of-the-art models are vulnerable to such attacks, which raises serious concerns for security-critical applications of artificial intelligence. This repo investigates techniques for training adversarially robust models.
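
As a minimal illustration of how such inputs can be crafted, the sketch below implements the Fast Gradient Sign Method (FGSM) in PyTorch: it perturbs an input in the direction of the sign of the loss gradient. This is a generic example under assumed names and an illustrative epsilon budget, not code from this repository.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Illustrative FGSM sketch; not this repository's implementation."""
    # Work on a fresh leaf tensor so gradients accumulate on the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the input gradient, which locally maximises the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp to keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()
```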

Examples of adversarial perturbations:

Repo structure

  • data/ training data and adversarial perturbations
  • notebooks/
  • results/ collected results and plots
    • images/
  • src/ implementations
    • RandomProjections/ methods based on random projections (a minimal sketch follows after this section)
    • BayesianSGD/ implementation of Bayesian SGD from Blei et al. (2017)
    • BayesianInference/ Bayesian neural network (BNN) training using variational inference (VI) and Hamiltonian Monte Carlo (HMC)
  • trained_models/
    • baseline/
    • randens/
    • randreg/
    • bnn/
  • tensorboard/

Scripts should be executed from the src/ directory.
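
As context for the RandomProjections/ code, here is a generic sketch of projecting flattened inputs onto a random low-dimensional subspace, the basic building block behind random-projection defences. The function name, shapes, and Gaussian projection matrix are illustrative assumptions, not this repository's API.

```python
import numpy as np

def random_projection(x_flat, k, seed=0):
    """Illustrative sketch: project inputs of shape (n, d) onto a random
    k-dimensional subspace. Not this repository's implementation."""
    rng = np.random.default_rng(seed)
    d = x_flat.shape[1]
    # Gaussian projection matrix, scaled so that norms are roughly
    # preserved (Johnson-Lindenstrauss style).
    P = rng.normal(loc=0.0, scale=1.0 / np.sqrt(k), size=(d, k))
    return x_flat @ P
```

A classifier, or an ensemble of classifiers each trained on a different random projection, can then operate on the projected features instead of the raw pixels.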

About

Random Projections for improved Adversarial Robustness
