What do your logits know? (The answer may surprise you!)
Masha Fedzechkina, Eleonora Gualdoni, Rita Ramos, Sinead Williamson
Tags
- cs.AI
Abstract
arXiv:2604.09885v1

Recent work has shown that probing model internals can reveal a wealth of information not apparent from the model's generations. This poses a risk of unintentional or malicious information leakage, where model users can learn information that the model owner assumed was inaccessible. Using vision-language models as a testbed, we present the first systematic comparison of the information retained at different "representational levels" as it is compressed from the rich information encoded in the residual stream through two natural bottlenecks: low-dimensional projections of the residual stream obtained using the tuned lens, and the final top-k logits most likely to impact the model's answer. We show that even easily accessible bottlenecks defined by the model's top logit values can leak task-irrelevant information present in an image-based query, in some cases revealing as much information as direct projections of the full residual stream.
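To make the "top-k logits" bottleneck concrete, here is a minimal sketch (not from the paper; the vocabulary, logit values, and function names are hypothetical) of how an observer who only sees the model's k largest logits still receives a low-dimensional signal that a probe could be trained on:

```python
import numpy as np

def topk_bottleneck(logits, k):
    """Keep only the k largest logits and their token indices.

    This mimics the bottleneck described in the abstract: everything
    below the top k is discarded, yet the surviving (index, value)
    pairs can still carry task-irrelevant information about the input.
    """
    idx = np.argsort(logits)[::-1][:k]  # indices of the k largest logits
    return idx, logits[idx]

# Hypothetical next-token logits over a tiny 5-word vocabulary
logits = np.array([2.0, -1.0, 0.5, 3.0, 0.0])
idx, vals = topk_bottleneck(logits, 2)
print(idx, vals)  # -> [3 0] [3. 2.]
```

A leakage probe in the paper's sense would be a classifier trained on many such `(idx, vals)` pairs to predict an attribute of the query image that the task answer itself does not mention.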