Artificial Intelligence Security Lab · Shanghai Jiao Tong University
PersonaTalk: Preserving Personalized Dynamic Speech Style in Talking Face Generation
Recent visual speaker authentication methods have claimed effectiveness against deepfake attacks. However, the success is attributed …
陆千禧, 何怡, 王士林
How Large Language Models Encode Context Knowledge? A Layer-Wise Probing Study
Previous work has showcased the intriguing capability of large language models (LLMs) in retrieving facts and processing context …
鞠天杰, 杜巍, 刘功申
Backdoor NLP Models via AI-Generated Text
Backdoor attacks pose a critical security threat to natural language processing (NLP) models by establishing covert associations …
杜巍, 鞠天杰, 刘功申
Multi-Grained Multimodal Interaction Network for Sentiment Analysis
Multimodal sentiment analysis aims to utilize different modalities including language, visual, and audio to identify human emotions in …
方岭永, 刘功申
Speaker-Adaptive Lipreading via Spatio-Temporal Information Learning
Lipreading has been rapidly developed recently with the help of large-scale datasets and big models. Despite the significant progress …
何怡, 杨磊, 王晗亦, 王士林
Data-Free Watermark for Deep Neural Networks by Truncated Adversarial Distillation
Model watermarking secures ownership verification and copyright protection of deep neural networks. In the black-box scenario, …
闫超博, 李方圻, 王士林
Revisiting the Information Capacity of Neural Network Watermarks: Upper Bound Estimation and Beyond
To trace the copyright of deep neural networks, an owner can embed its identity information into its model as a watermark. The capacity …
李方圻, 赵皓东, 杜巍, 王士林
NWS: Natural Textual Backdoor Attacks via Word Substitution
Backdoor attacks pose a serious security threat for natural language processing (NLP). Backdoored NLP models perform normally on clean …
杜巍, 袁童鑫, 赵皓东, 刘功申
SDPSAT: Syntactic Dependency Parsing Structure-Guided Semi-Autoregressive Machine Translation
The advent of non-autoregressive machine translation (NAT) accelerates decoding beyond autoregressive machine translation (AT) …
陈欣然, 赵彧然, 郭建铭, 段苏峰, 刘功申
Is Continuous Prompt a Combination of Discrete Prompts? Towards a Novel View for Interpreting Continuous Prompts
The broad adoption of continuous prompts has brought state-of-the-art results on a diverse array of downstream natural language …
鞠天杰, 王晗亦, 赵皓东, 刘功申