Trustworthy Artificial Intelligence
NASAC/FMAC
As artificial intelligence permeates everyday life and public acceptance of AI grows, the trustworthiness of AI has begun to attract wide attention. The "trustworthiness" of AI is a new concept that builds on notions such as reliability, safety, robustness, interpretability, and controllability. For an intelligent system, being "trustworthy" means that, as the intelligent entity works toward its goals in an open and dynamic environment, its behavior and the results it produces always meet people's expectations. The concept emphasizes that realization matches intent, and that behavior and results are interpretable, predictable, and controllable. Beyond this, trustworthy intelligent computing also requires that the system keep delivering the expected service when disturbed by a changing environment, for example by environmental influences or external attacks. Research on the trustworthiness of AI, both in China and abroad, is still at an exploratory stage. This forum will survey the latest trustworthiness techniques from the perspectives of theory, methods, technology, and applications, aiming to offer ideas for improving the development efficiency of intelligent systems, improving the quality of hardware and software design, strengthening the trustworthiness of intelligent systems, optimizing the development process, and controlling cost. The forum will invite experts from academia to present high-quality research results and to hold related technical discussions.
Forum Organizing Committee:
Qin Li (East China Normal University)
Yixiang Chen (East China Normal University)
Schedule:
Time: 14:00-17:30, Saturday, November 21, 2020
Venue: Meeting Room 1, Chongqing Fuli Holiday Inn
Forum Agenda:
| Time | Topic | Speaker |
| --- | --- | --- |
| 14:00-14:30 | Deep Neural Networks: From Verification to Generalisation | Lijun Zhang, Institute of Software, Chinese Academy of Sciences |
| 14:30-15:00 | Gradient-Free Optimization Based Adversarial Example Generation for Neural Networks | Lei Bu, Nanjing University |
| 15:00-15:30 | Accelerating Robustness Verification of Deep Neural Networks with Lazy & Eager Falsification | Min Zhang, East China Normal University |
| 15:30-16:00 | Coffee Break | |
| 16:00-16:30 | Verifying ReLU Neural Networks from a Model Checking Perspective | Wanwei Liu, National University of Defense Technology |
| 16:30-17:00 | PAC Model Checking of Black-Box Continuous-Time Dynamical Systems | Bai Xue, Institute of Software, Chinese Academy of Sciences |
| 17:00-17:30 | Security Threats to Deep Neural Networks and Attack Practices | Guozhu Meng, Institute of Information Engineering, Chinese Academy of Sciences |
Talks and Speaker Bios:
1. Lijun Zhang: Deep Neural Networks: From Verification to Generalisation
Abstract: In this talk, we survey major techniques and tools for verifying deep neural networks, in particular the notion of robustness for specifying safety properties. Moreover, we study the novel concept of weight correlation in deep neural networks and discuss its impact on the networks' generalisation ability. We argue that weight correlation can improve the generalisation of neural networks. Finally, we discuss how it can be intertwined with robustness to obtain reliable networks.
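For context, the robustness notion mentioned above is usually formalised as local robustness; the following standard formulation is stated here for the reader's convenience, not quoted from the talk:

\[
\forall x.\; \|x - x_0\|_p \le \varepsilon \;\Rightarrow\; \arg\max_i f_i(x) = \arg\max_i f_i(x_0),
\]

i.e., every input within an \(\varepsilon\)-ball around \(x_0\) receives the same predicted label as \(x_0\) itself.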
Bio: Lijun Zhang is a research professor at the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences (ISCAS). He is also the director of the Sino-Europe Joint Institute of Dependable and Smart Software at the Institute of Intelligent Software in Guangzhou. Before this, he was a postdoctoral researcher at the University of Oxford, and then an associate professor in the Language-Based Technology section, DTU Compute, Technical University of Denmark. He gained a Diploma degree and a PhD (Dr.-Ing.) at Saarland University. His research interests include probabilistic model checking, simulation reduction and decision algorithms, abstraction and model checking, learning algorithms, and verification of deep neural networks. He is leading the development of several tools, including PRODeep for verifying neural networks, ROLL for learning automata, and the model checker ePMC, previously known as IscasMC.
2. Lei Bu: Gradient-Free Optimization Based Adversarial Example Generation for Neural Networks
Abstract: In recent years, AI systems represented by image recognition have been widely deployed across many domains. However, because of their inherent lack of interpretability, the behavior of such systems and models is highly susceptible to data perturbations. Generating adversarial examples against these systems is therefore of great value for understanding model behavior and avoiding risks. We explore and innovate on adversarial example generation for black-box image recognition models, proposing a gradient-free optimization based method: adversarial example generation is cast as an optimization problem and solved with intelligent, derivative-free search. Based on this idea, we designed and implemented the tool DFA and demonstrated its capability on widely used benchmark sets.
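The abstract does not describe the internals of DFA, so the sketch below only illustrates the general idea of casting adversarial example generation as an optimization problem solved without gradients; `query_model`, the parameters, and the acceptance rule are illustrative assumptions, not the tool's actual design.

```python
import numpy as np

def random_search_attack(query_model, x0, true_label, eps=0.05,
                         n_iters=1000, step=0.01, rng=None):
    """Gradient-free black-box attack by simple random search.

    query_model(x) -> vector of class scores (black-box access only).
    Searches inside the L_inf ball of radius eps around x0 for an
    input whose predicted label differs from true_label.
    """
    rng = np.random.default_rng(rng)
    x = x0.copy()

    def margin(z):
        scores = query_model(z)
        # Gap between the true-class score and the best other class;
        # a negative margin means the sample is already misclassified.
        other = np.delete(scores, true_label)
        return scores[true_label] - other.max()

    best = margin(x)
    for _ in range(n_iters):
        candidate = x + rng.uniform(-step, step, size=x.shape)
        candidate = np.clip(candidate, x0 - eps, x0 + eps)  # stay in the eps-ball
        candidate = np.clip(candidate, 0.0, 1.0)            # keep pixels valid
        m = margin(candidate)
        if m < best:                 # greedily accept moves that shrink the margin
            x, best = candidate, m
        if best < 0:                 # misclassified: adversarial example found
            return x
    return None                      # no adversarial example within the budget
```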
Bio: Lei Bu is a professor and Ph.D. supervisor in the Department of Computer Science and Technology at Nanjing University. His main research areas are software engineering and formal methods, including model checking, real-time and hybrid systems, and cyber-physical systems. He received his Ph.D. in computer science from Nanjing University in 2010 and has conducted visiting and collaborative research at institutions such as CMU, MSRA, UTD, and FBK. He has published more than fifty papers in leading journals and conferences in these areas, such as TCAD, TC, TCPS, TPDS, RTSS, and CAV. He was selected for the CCF Young Talent Development Program and the Microsoft Research Asia StarTrack Program, and received the NASAC Young Software Innovation Award.
3. Min Zhang: Accelerating Robustness Verification of Deep Neural Networks with Lazy & Eager Falsification
Abstract: Poor scalability is one of the main challenges in the robustness verification of neural networks. In this talk, we will introduce a simple yet effective approach to accelerate robustness verification by lazy & eager falsification. In the approach, we divide the robustness verification problem into sub-problems with respect to target labels. Once the property is falsified for a specific label, we can safely conclude that the neural network is non-robust. The falsification is both lazy and eager. Being lazy means that a sub-problem is not falsified unless it has to be, and being eager means that the sub-problems that are more likely to be falsifiable have higher priority for falsification. We leverage symbolic interval propagation and linear relaxation techniques to determine the order of the sub-problems for falsification. Our approach is orthogonal to, and can be integrated with, existing verification techniques. We integrate it with four state-of-the-art verification tools, i.e., MipVerify, Neurify, DeepZ, and DeepPoly. Experimental results show that our approach can significantly improve these tools, with up to 200x speedup, when the perturbation distance is set in a reasonable range.
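As one plausible reading of the workflow described above (not the authors' implementation), the sketch below splits a robustness query into per-target-label sub-problems, orders them by an estimated "how falsifiable" score such as one derived from symbolic interval propagation, and only invokes a complete verifier when cheap falsification fails; all interfaces (`estimate_bound`, `try_falsify`, `verify`) are hypothetical placeholders.

```python
def check_robustness(labels, true_label, estimate_bound, try_falsify, verify):
    """Sketch of lazy & eager falsification over per-label sub-problems.

    estimate_bound(target) -> score; larger means the sub-problem
        "a perturbation makes `target` beat `true_label`" looks easier to falsify
        (e.g. from symbolic interval propagation / linear relaxation).
    try_falsify(target)    -> counterexample input or None (cheap, incomplete).
    verify(target)         -> True iff the sub-problem is proved robust (expensive).
    """
    targets = [c for c in labels if c != true_label]
    # Eager: the most promising sub-problems are attempted first.
    targets.sort(key=estimate_bound, reverse=True)

    for target in targets:
        # Lazy: try cheap falsification before invoking the full verifier.
        cex = try_falsify(target)
        if cex is not None:
            return "non-robust", cex      # one falsified label suffices
        if not verify(target):
            return "non-robust", None     # complete verifier refuted robustness
    return "robust", None                 # every sub-problem was verified
```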
Bio: Min Zhang received his Ph.D. from the Japan Advanced Institute of Science and Technology (JAIST) in 2011, worked as a postdoctoral researcher at the JAIST Research Center for Software Verification from 2011 to 2014, and joined the School of Software Engineering at East China Normal University in 2014. His main research interests include formal methods and trusted computing theory, applying formal methods to the trustworthiness verification and analysis of intelligent systems, the Internet of Things, and embedded systems. His work has appeared in venues such as ETAPS, DAC, DATE, TCAD, and the Journal of Software, and received the sole Best Paper Award at APSEC 2019 and the Best Paper Award at DCIT 2015. He served as program committee chair of TASE 2017 and FM4AI 2019, as a program committee member of international conferences such as DATE 2021 and ICFEM 2016, and as a guest editor of SCP. He has led NSFC general and young scientist projects as well as the Sino-French "Cai Yuanpei" international cooperation project, and is a member of the CCF Technical Committee on Formal Methods.
4. Wanwei Liu: Verifying ReLU Neural Networks from a Model Checking Perspective
Abstract: Neural networks, as an important computing model, have wide application in the artificial intelligence (AI) domain. From the perspective of computer science, such a computing model requires a formal description of its behaviors, particularly the relation between input and output. In addition, such specifications ought to be verified automatically. ReLU (rectified linear unit) neural networks are intensively used in practice. In this talk, we present ReLU Temporal Logic (ReTL), whose semantics is defined with respect to ReLU neural networks and which can specify value-related properties of the network. We show that model checking for the Σ2∪Π2 fragment of ReTL, which can express properties such as output reachability, is decidable in EXPSPACE. We have also implemented our algorithm in a prototype tool, and experimental results demonstrate the feasibility of the presented model checking approach.
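As one concrete example of the kind of value-related property mentioned above (a paraphrase for context, not a formula from the talk), an output-reachability query for a ReLU network \(N\) asks:

\[
\exists x \in [l, u].\; N(x) \in T,
\]

i.e., whether some input in the box \([l, u]\) drives the network's output into a target set \(T\).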
Bio: Wanwei Liu received his Ph.D. degree from the National University of Defense Technology in 2009. He is an associate professor in the College of Computer Science, National University of Defense Technology, and a senior member of CCF. His research interests include theoretical computer science (particularly automata theory and temporal logic), formal methods (particularly verification), and software engineering. His work has been published in TSE, ICSE, ASE, TACAS, and IJCAI, and has won the gold medal (1st prize) in the TACAS SV-COMP verification tool competition multiple times.
5. Bai Xue: PAC Model Checking of Black-Box Continuous-Time Dynamical Systems
Abstract: In this talk I will present a model checking approach to finite-time safety verification of black-box continuous-time dynamical systems within the framework of probably approximately correct (PAC) learning. The black-box dynamical systems are those for which no model is given, but whose states, evolving continuously through time within a finite time interval, can be observed at some discrete time instants for a given input. The new approach is termed PAC model checking because it incorporates learned models with correctness guarantees expressed in terms of error probability and confidence. Based on the error probability and confidence level, our approach provides statistically formal guarantees that the time-evolving trajectories of the black-box dynamical system over finite time horizons fall within the range of the learned model plus a bounded interval, contributing to insights on the reachability of the black-box system and thus on the satisfiability of its safety requirements. The learned model together with the bounded interval is obtained by scenario optimization, which boils down to a linear programming problem.
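To make the final sentence concrete, here is a minimal, illustrative scenario-optimization LP in Python (not the talk's implementation): from sampled observations of a black-box trajectory it fits a model together with the smallest bound gamma covering every observed scenario, using `scipy.optimize.linprog`. The polynomial model class and the variable names are assumptions made for this example.

```python
import numpy as np
from scipy.optimize import linprog

def fit_model_with_bound(times, observations, degree=2):
    """Scenario-style LP sketch: fit a polynomial p(t) and the smallest gamma
    such that |x_i - p(t_i)| <= gamma for every observed scenario (t_i, x_i).

    Decision variables: polynomial coefficients c_0..c_degree and gamma.
    """
    times = np.asarray(times, dtype=float)
    observations = np.asarray(observations, dtype=float)
    n, d = len(times), degree + 1
    V = np.vander(times, d, increasing=True)       # p(t_i) = V[i] @ c

    cost = np.zeros(d + 1)
    cost[-1] = 1.0                                 # minimize gamma only

    # p(t_i) - x_i <= gamma   and   x_i - p(t_i) <= gamma
    A_ub = np.vstack([np.hstack([V, -np.ones((n, 1))]),
                      np.hstack([-V, -np.ones((n, 1))])])
    b_ub = np.concatenate([observations, -observations])
    bounds = [(None, None)] * d + [(0, None)]      # free coefficients, gamma >= 0

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:d], res.x[-1]                    # (coefficients, gamma)
```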
Bio: Dr. Bai Xue has been an associate research professor at the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences since November 2017. He received the B.Sc. degree in information and computing science from Tianjin University of Technology and Education in 2008, and the Ph.D. degree in applied mathematics from Beihang University in 2014. Prior to joining the Institute of Software, he worked as a research fellow in the Centre for High Performance Embedded Systems at Nanyang Technological University from May 2014 to September 2015, and as a postdoc in the Department für Informatik at Carl von Ossietzky Universität Oldenburg from November 2015 to October 2017.
6. Guozhu Meng: Security Threats to Deep Neural Networks and Attack Practices
Abstract: In recent years, AI technologies represented by deep learning have developed and spread rapidly, but the security flaws of neural networks pose significant risks and threats to society and individuals. To pin down the weaknesses of deep neural networks, we conducted a comprehensive survey of security threats to neural networks, examining common attack techniques in four areas, namely model extraction, model inversion, model poisoning, and adversarial attacks, and comparing them across several quantitative metrics. Our empirical analysis shows that, due to the information asymmetry between attacker and defender, black-box attacks usually require the attacker to construct a large number of samples and issue a large number of queries to the target model, which limits their efficiency. We therefore applied four data reduction techniques to reduce data redundancy and improve the efficiency of black-box attacks. Experiments on multiple datasets and complex networks achieved better results than the state of the art. Finally, based on the experimental data, we further discuss the interpretability of model training.
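The four data reduction techniques are not detailed in the abstract, so the sketch below shows just one generic reduction strategy that fits the stated goal of cutting redundant queries to the target model: clustering the candidate pool and querying only one representative per cluster. The function and parameter names are illustrative assumptions, not the talk's method.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_query_set(candidates, n_queries, random_state=0):
    """Illustrative data-reduction step for query-limited black-box attacks.

    Cluster the candidate inputs and keep only the sample closest to each
    cluster center, so the target model is queried on representatives
    instead of on the whole redundant pool.
    """
    X = candidates.reshape(len(candidates), -1)
    km = KMeans(n_clusters=n_queries, n_init=10, random_state=random_state).fit(X)

    representatives = []
    for c in range(n_queries):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        representatives.append(members[np.argmin(dists)])   # closest to the center
    return candidates[np.array(representatives)]
```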
Bio: Guozhu Meng received his Ph.D. from Nanyang Technological University, Singapore, in 2017 and joined the Institute of Information Engineering, Chinese Academy of Sciences, as an associate professor in September 2018. He received the 2019 ACM SIGSAC China Rising Star Award and the Best Paper Award at ICSE 2018 (a CCF-A conference). He has published more than 30 papers at top international conferences and in journals in software engineering and information security, such as ICSE, FSE, ASE, and ISSTA. His research interests include security and privacy of AI systems, software vulnerability analysis and detection, and mobile application security.