Protecting Your LLMs with Information Bottleneck

Zichuan Liu1,2   Zefan Wang3   Linjie Xu2,4   Jinyu Wang2  
Lei Song2   Tianchun Wang5   Chunlin Chen1   Wei Cheng6   Jiang Bian2  

1Nanjing University   2Microsoft Research Asia   3Tsinghua University   4Queen Mary University of London   5Pennsylvania State University   6NEC Laboratories America  

Overview

The advent of large language models (LLMs) has revolutionized the field of natural language processing, yet they can be attacked into producing harmful content. Despite efforts to ethically align LLMs, these alignments are often fragile and can be circumvented by jailbreaking attacks that use optimized or manually crafted adversarial prompts. To address this, we introduce the Information Bottleneck Protector (IBProtector), a defense mechanism grounded in the information bottleneck principle, with the objective modified to avoid trivial solutions. IBProtector selectively compresses and perturbs prompts via a lightweight, trainable extractor, preserving only the information the target LLM needs to produce the expected answer. We also consider the setting where gradients of the target LLM are not accessible, so that IBProtector remains compatible with any LLM. Our empirical evaluations show that IBProtector outperforms current defense methods in mitigating jailbreak attempts, without overly affecting response quality or inference speed. Its adaptability across various attack methods and target LLMs underscores the potential of IBProtector as a novel, transferable defense that bolsters the security of LLMs without requiring modifications to the underlying models.
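As a rough illustration of this mechanism (not the authors' released code), the sketch below shows how a lightweight, trainable extractor could score prompt tokens and keep only the informative ones before the prompt reaches the target LLM. All class and function names here are hypothetical assumptions for illustration.

import torch
import torch.nn as nn

class PromptExtractor(nn.Module):
    """Hypothetical lightweight extractor: scores each prompt token so that
    only the informative sub-prompt is passed on to the (frozen) target LLM."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        # Small scorer over token embeddings; the target LLM itself is never modified.
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden_dim) -> per-token keep scores in [0, 1]
        return torch.sigmoid(self.scorer(token_embeddings)).squeeze(-1)

def compress_prompt(tokens: list[str], keep_scores: torch.Tensor, threshold: float = 0.5) -> str:
    """Drop (perturb) tokens whose keep score falls below the threshold."""
    kept = [tok for tok, s in zip(tokens, keep_scores.tolist()) if s >= threshold]
    return " ".join(kept)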


Current random perturbations may fail to trigger the target LLM's defenses.
Can we perturb adversarial prompts more effectively than with mere randomness?

Highlights

We propose IBProtector, the first defense against LLM jailbreaks based on the Information Bottleneck principle, viewed from the perspective of information compression, and we derive a tractable objective function. The method is lightweight and requires no modifications to the target LLMs. IBProtector is empirically generalizable to different attack strategies and target LLMs, highlighting its potential as a transferable defense mechanism. We evaluate IBProtector on token-level and prompt-level jailbreaking datasets. The results show that IBProtector successfully defends against adversarial prompts without substantially affecting the LLMs' responsiveness or inference cost.
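For reference, the generic information bottleneck objective that IBProtector builds on can be written as follows (a background sketch only; the paper modifies this objective to avoid trivial solutions, so the exact loss differs):

\min_{\tilde{X}} \; I(X; \tilde{X}) \;-\; \beta \, I(\tilde{X}; Y)

where X is the original prompt, \tilde{X} is the compressed sub-prompt produced by the extractor, Y is the expected answer, and \beta trades off compression of the prompt against preserving answer-relevant information.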

Comparison

We present a detailed comparison between our method and other baselines.

Below, we show the main defense results and the transferability settings we consider.
IBProtector only adds a forward pass through a small extractor model, so it introduces little additional overhead at inference time, as sketched after this paragraph.
For more experiments, see the paper.
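Continuing the hypothetical sketch from the Overview (the extractor, compress_prompt, and the generate interface are all assumptions, not the released code), inference could look roughly like this:

import torch

def protected_generate(target_llm, extractor, embed_fn, prompt_tokens):
    # One extra forward pass through the small extractor; the target LLM is untouched.
    with torch.no_grad():
        keep_scores = extractor(embed_fn(prompt_tokens))   # (1, seq_len) keep scores
    safe_prompt = compress_prompt(prompt_tokens, keep_scores[0])
    return target_llm.generate(safe_prompt)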

Citation

@article{liu2024protecting,
  title={Protecting Your LLMs with Information Bottleneck},
  author={Zichuan Liu and Zefan Wang and Linjie Xu and Jinyu Wang and Lei Song and Tianchun Wang and Chunlin Chen and Wei Cheng and Jiang Bian},
  journal={arXiv preprint arXiv:2404.13968},
  year={2024}
}