13 Matching Annotations
  1. Feb 2019
    1. On Evaluating Adversarial Robustness

      How could I not bookmark a paper that the great Goodfellow is on? Adversarial examples remain a really tough nut to crack...

      It turns out that correctly evaluating defenses against adversarial examples is extremely difficult. Although a large amount of recent work has tried to design defenses that withstand adaptive attacks, few have succeeded; most papers proposing defenses are quickly shown to be broken. The authors argue that a major factor is the sheer difficulty of performing a security evaluation. In this paper they discuss the methodological foundations, review commonly accepted best practices, and propose new methods for evaluating defenses against adversarial examples. It is an open, living document whose contributors include Ian Goodfellow, Nicholas Carlini, and others.

    2. Towards a Deeper Understanding of Adversarial Losses

      Studies the various losses used in adversarial (generative) training, and tries to work out what makes one adversarial loss better than another.

  2. Jan 2019
    1. Image Transformation can make Neural Networks more robust against Adversarial Examples

      This little paper basically wants to tell us that to make a model more robust to adversarial examples, just give the samples in your hands a "transformation" (e.g. a rotation) first... A minimal sketch of the idea follows.
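
      Not the paper's exact recipe, just a minimal sketch of the "transform the input before classifying" idea in PyTorch; model is a hypothetical classifier that returns logits for an (N, C, H, W) batch.

        import torch
        import torchvision.transforms.functional as TF

        def predict_with_random_rotation(model, x, max_deg=15.0):
            """Rotate the batch by a small random angle before classifying it."""
            angle = float(torch.empty(1).uniform_(-max_deg, max_deg))
            x_rot = TF.rotate(x, angle)  # x is assumed to be an (N, C, H, W) tensor
            with torch.no_grad():
                return model(x_rot).argmax(dim=1)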

  3. Nov 2018
    1. Analyzing the Noise Robustness of Deep Neural Networks

      This paper from Tsinghua seems to have some very nice visualizations, attempting to explain a model's adversarial behavior visually. I wonder whether they also looked at non-deep-learning models to see how adversarial examples get misclassified there? After all, it is not only complex models that have an adversarial problem...

    2. Interpreting Adversarial Robustness: A View from Decision Surface in Input Space

      People usually believe that the flatter the loss surface around a local minimum in parameter space, the better the generalization. This paper, by visualizing a certain kind of decision surface, argues that hints of adversarial robustness can already be spotted in the raw input space. (That conclusion still needs to be replicated widely; I would not trust it without trying it myself, since it lacks a theoretical foundation.) A rough sketch of such an input-space loss cross-section follows.
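
      A rough sketch (hypothetical code, not the paper's) of what probing the loss surface in input space can look like: evaluate the loss at x + r * d for a unit direction d (e.g. the gradient-sign direction) and a range of radii r.

        import torch
        import torch.nn.functional as F

        def input_space_loss_profile(model, x, y, direction, radii):
            """Cross-section of the loss surface in input space along one direction."""
            d = direction / (direction.norm() + 1e-12)
            with torch.no_grad():
                return [F.cross_entropy(model(x + r * d), y).item() for r in radii]

        # Example: profile along the gradient-sign (FGSM) direction.
        # x.requires_grad_(True)
        # g, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
        # profile = input_space_loss_profile(model, x.detach(), y, g.sign(), [i / 100 for i in range(11)])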

    3. Spurious samples in deep generative models: bug or feature?

      The introduction is actually fairly engaging. The whole paper seems to exist to make a single point: spurious samples are not simply errors but a feature of deep generative nets. But why does that sound like a truism to me? What else did you think a generative model generates its samples from?

    4. Adversarial Attacks and Defences: A Survey

      A survey of adversarial attacks and defenses, written by an Indian group.

    5. Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

      This paper is impressive: information-rich, with a huge number of figures. Quite eye-opening.

      It examines the robustness and accuracy of 18 models. There are many conclusions, e.g.: model architecture is an important factor for both robustness and accuracy (which sounds rather obvious); adding "depth" on top of a similar architecture improves robustness only marginally; and some model families (the VGG family) show very strong adversarial example transferability...

    6. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
      SUMMARY

      The researchers found that defenses against adversarial examples commonly use obfuscated gradients, which create a false sense of security but, in fact, can be easily circumvented. The study describes three ways in which defenses obfuscate gradients and shows which techniques can circumvent the defenses. The findings can help organizations that use defenses relying on obfuscated gradients to fortify their current methods.

      WHAT’S THE CORE IDEA OF THIS PAPER?
      • There are three common ways in which defenses obfuscate gradients:
        • shattered gradients are nonexistent or incorrect gradients, caused by the defense either intentionally (through non-differentiable operations) or unintentionally (through numerical instability);
        • stochastic gradients are caused by randomized defenses;
        • vanishing/exploding gradients are caused by extremely deep neural network evaluation.

      • There are a number of clues that something is wrong with the gradients (a minimal sanity check for one of them is sketched right after this list), including:

        • one-step attacks performing better than iterative attacks;
        • black-box attacks working better than white-box attacks;
        • unbounded attacks not reaching 100% success;
        • random sampling finding adversarial examples;
        • increasing distortion bound not leading to increased success.
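
      As a minimal illustration of the "unbounded attacks should reach 100% success" clue (a sketch assuming a differentiable PyTorch model and inputs in [0, 1], not the paper's code):

        import torch
        import torch.nn.functional as F

        def pgd_attack(model, x, y, eps, alpha, steps):
            """Plain L-infinity PGD on the cross-entropy loss."""
            x_adv = x.clone().detach()
            for _ in range(steps):
                x_adv.requires_grad_(True)
                loss = F.cross_entropy(model(x_adv), y)
                grad, = torch.autograd.grad(loss, x_adv)
                x_adv = x_adv.detach() + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
            return x_adv.detach()

        def unbounded_attack_sanity_check(model, x, y):
            # eps = 1.0 covers the whole [0, 1] box, i.e. an effectively unbounded attack.
            x_adv = pgd_attack(model, x, y, eps=1.0, alpha=0.05, steps=100)
            with torch.no_grad():
                success = (model(x_adv).argmax(dim=1) != y).float().mean().item()
            return success  # a value well below 1.0 hints at obfuscated gradients
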
      WHAT’S THE KEY ACHIEVEMENT?
      • Demonstrating that most of the defense techniques used these days are vulnerable to attacks, namely:
        • 7 out of 9 defense techniques accepted at ICLR 2018 cause obfuscated gradients;
        • new attack techniques developed by the researchers were able to successfully circumvent 6 defenses completely and 1 partially (one of these techniques, BPDA, is sketched at the end of this summary).
      WHAT DOES THE AI COMMUNITY THINK?
      • The paper won the Best Paper Award at ICML 2018, one of the key machine learning conferences.
      • The paper highlights the strengths and weaknesses of current technology.
      WHAT ARE FUTURE RESEARCH AREAS?
      • To construct defenses with careful and thorough evaluation so that they can defend against not only existing attacks but also future attacks that may be developed.
      WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
      • By using the guidance provided in the research paper, organizations can identify if their defenses rely on obfuscated gradients, and if necessary, switch to more robust methods.
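
      For reference, a minimal sketch (not the authors' code) of the BPDA idea (Backward Pass Differentiable Approximation) from this paper: run the non-differentiable defensive transform g(x) in the forward pass, but back-propagate as if g were the identity. The quantize function below is a hypothetical stand-in for such a defense.

        import torch

        class BPDAIdentity(torch.autograd.Function):
            """Backward Pass Differentiable Approximation with g(x) ~= x."""

            @staticmethod
            def forward(ctx, x, transform):
                # Apply the (possibly non-differentiable) defensive transform.
                return transform(x.detach())

            @staticmethod
            def backward(ctx, grad_output):
                # Pretend d(transform)/dx is the identity; no gradient for transform itself.
                return grad_output, None

        def quantize(x, levels=8):
            # Hypothetical shattered-gradient defense: hard quantization of pixel values.
            return torch.round(x * (levels - 1)) / (levels - 1)

        # Inside an attack loop, gradients now flow "through" the defense:
        #   logits = model(BPDAIdentity.apply(x_adv, quantize))
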
    7. Seamless Nudity Censorship: an Image-to-Image Translation Approach based on Adversarial Training

      [Using a GAN to automatically "dress" nude women in bikinis]

      So many people have expressed amazement and curiosity at this paper's experimental results; I'll go and have a look for myself too...

    8. Generating Natural Adversarial Examples

      Discusses the generation of adversarial examples.

    9. Are adversarial examples inevitable?

      Judging from the conclusion, it analyzes the causes of the "problem", the categories of the "problem", the properties of the "problem", and the nature of the "problem"; in short, the "problem" is unavoidable and the road ahead is long, but it never gets around to how the "problem" might be solved...

      These days, any paper on adversarial examples inevitably has to be treated with caution.

    10. Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

      It appears to simply use dropout to defend against adversarial examples. However, just the other day I watched Goodfellow's guest lecture for cs231n (Spring 2017), in which he said that every attempted defense based on traditional regularization tricks fails, and that of course includes dropout; see, for example, Nicholas Carlini's papers. A minimal sketch of the test-time dropout idea is included below for reference.

      Slides and notes from Goodfellow's lecture: https://iphysresearch.github.io/cs231n/cs231n_Guest%20Lecture.%20Adversarial%20Examples%20and%20Adversarial%20Training.html
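
      Just for reference, a sketch of the generic "dropout at inference" idea (not necessarily the paper's exact scheme): put the model in eval mode but keep the dropout layers active, so predictions stay stochastic at test time.

        import torch.nn as nn

        def enable_test_time_dropout(model):
            """Eval mode overall, but dropout layers stay in train mode (still dropping)."""
            model.eval()
            for m in model.modules():
                if isinstance(m, (nn.Dropout, nn.Dropout2d)):
                    m.train()
            return model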