Reflections on Applying Federated Learning: A Need or a Method?
Source: AI数据派
IV. Summary: A Need or a Method?
To summarize: the term "federated learning" is currently surrounded by considerable misunderstanding and confusion in the market, mainly because it is used in two senses. In the broad sense it expresses a need: jointly training models on multiple parties' data while keeping that data protected. In the narrow sense it denotes a class of methods that improve training performance by exposing some information derived from the data. Interestingly, as a broad need it emphasizes that some accuracy may be sacrificed in order to protect data security; as a narrow method it emphasizes the opposite trade, sacrificing some security in exchange for better performance.
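The "narrow" method referred to above is the familiar federated training loop, in which parties exchange model updates rather than raw records. A minimal sketch, assuming a FedAvg-style protocol with a linear least-squares model and synthetic data (all names and parameters here are illustrative, not from the article):

```python
import random

random.seed(0)

def local_gradient(w, data):
    """Least-squares gradient computed on one party's private (x, y) pairs."""
    n = len(data)
    g = [0.0, 0.0]
    for (x0, x1), y in data:
        err = w[0] * x0 + w[1] * x1 - y
        g[0] += 2 * err * x0 / n
        g[1] += 2 * err * x1 / n
    return g

# Two parties, each holding private samples drawn from the same true model.
w_true = (1.0, -2.0)
parties = []
for _ in range(2):
    data = []
    for _ in range(50):
        x0, x1 = random.gauss(0, 1), random.gauss(0, 1)
        y = w_true[0] * x0 + w_true[1] * x1 + random.gauss(0, 0.01)
        data.append(((x0, x1), y))
    parties.append(data)

# Federated training: the server only ever sees gradients, never raw data.
# Those gradients are exactly the "exposed information" the article warns
# about -- gradient-inversion attacks can partially reconstruct inputs.
w = [0.0, 0.0]
for _ in range(300):
    grads = [local_gradient(w, d) for d in parties]
    w = [w[i] - 0.05 * sum(g[i] for g in grads) / len(grads)
         for i in range(2)]
```

The design point is that raw data never leaves each party, yet the aggregated gradients still carry information about it, which is why the narrow method sits on a security-versus-performance trade-off rather than eliminating it.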
Therefore, for an industry user, deciding whether a "federated learning" need exists (also described as a need for data-fusion computation, or for circulating data value) is a purely business question, and the criterion is whether circulating data value creates business value. Given that need, deciding whether to adopt the narrow "federated learning" methods and systems to satisfy it is a purely IT and security-compliance question, which requires weighing the sensitivity of the data, the cost of a leak, and the technical cost of protecting the data.