
Overcoming the Convex Relaxation Barrier for Neural Network Verification via Nonconvex Low-Rank Semidefinite Relaxations


Authors: Hong-Ming Chiu, Richard Y. Zhang


To rigorously certify the robustness of neural networks to adversarial perturbations, most state-of-the-art techniques rely on a triangle-shaped linear programming (LP) relaxation of the ReLU activation. While the LP relaxation is exact for a single neuron, recent results suggest that it faces an inherent "convex relaxation barrier" as additional activations are added and as the attack budget is increased. In this paper, we propose a nonconvex relaxation of the ReLU activation, based on a low-rank restriction of a semidefinite programming (SDP) relaxation. We show that the nonconvex relaxation has a complexity similar to that of the LP relaxation, but enjoys improved tightness comparable to the much more expensive SDP relaxation. Despite nonconvexity, we prove that the verification problem satisfies constraint qualification, and therefore a Riemannian staircase approach is guaranteed to compute a near-globally optimal solution in polynomial time. Our experiments provide evidence that our nonconvex relaxation almost completely overcomes the "convex relaxation barrier" faced by the LP relaxation.
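To make the "triangle-shaped" LP relaxation mentioned above concrete, the following is a minimal sketch (not the authors' code): for a single neuron y = ReLU(x) with known preactivation bounds l ≤ x ≤ u where l < 0 < u, the standard triangle relaxation replaces the nonconvex ReLU graph with three linear inequalities: y ≥ 0, y ≥ x, and y ≤ u(x − l)/(u − l), the last being the chord from (l, 0) to (u, u).

```python
def in_triangle_relaxation(x, y, l, u, tol=1e-9):
    """Check whether a point (x, y) lies in the triangle LP relaxation
    of y = ReLU(x), given preactivation bounds l <= x <= u with l < 0 < u.
    (Illustrative helper; the function name is ours, not from the paper.)
    """
    assert l < 0 < u, "the relaxation is only nontrivial when l < 0 < u"
    # Upper bound: chord connecting (l, 0) to (u, u).
    upper = u * (x - l) / (u - l)
    return (y >= -tol) and (y >= x - tol) and (y <= upper + tol)

# Every point on the true ReLU graph satisfies the relaxation:
assert in_triangle_relaxation(0.5, max(0.5, 0.0), l=-1.0, u=1.0)
assert in_triangle_relaxation(-0.5, max(-0.5, 0.0), l=-1.0, u=1.0)

# But the relaxation also admits points strictly above the graph,
# which is the slack the "convex relaxation barrier" refers to:
assert in_triangle_relaxation(0.0, 0.4, l=-1.0, u=1.0)      # not on ReLU graph
assert not in_triangle_relaxation(0.0, 0.6, l=-1.0, u=1.0)  # above the chord
```

The exactness claim in the abstract refers to this single-neuron picture: the triangle is the convex hull of the ReLU graph over [l, u], so no convex relaxation of one neuron can be tighter. The looseness only compounds when many relaxed neurons are composed, which is what the proposed low-rank SDP restriction targets.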



This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. When reposting, please include a link to the original source and this notice.
Permalink: https://flyai.com/paper_detail/12435