Hi, I'm Lucile đź‘‹

I'm a PhD candidate in ECE at NYU

Herd Accountability from Auditing Privacy-Preserving Algorithms

Paper (this prior work was published in Decision and Game Theory for Security, GameSec 2023): https://link.springer.com/chapter/10.1007/978-3-031-50670-3_18

Motivation

Privacy-preserving AI algorithms are widely adopted across domains, but their lack of transparency can create accountability issues. Auditing can address this problem, but machine-based audit approaches are often costly and time-consuming.

Herd audit offers an alternative by harnessing collective intelligence. However, epistemic disparity among auditors, who have varying levels of expertise and access to knowledge, may impair audit performance. An effective herd audit establishes a credible accountability threat for algorithm developers, incentivizing them to uphold their claims.

Our approach

We develop a systematic framework that examines the impact of herd audit on algorithm developers using a Stackelberg game approach, in which the developer moves first and the auditor responds.

Figure: the herd audit framework with the third-party organization providing rewards and penalties. In auditing privacy-preserving algorithms, there is a publicly known privacy protection agreement for a budget $\epsilon^\prime$. The developer, who is either responsible ($\omega=g$) or negligent ($\omega=b$), takes the first step by selecting the executed privacy budget $\epsilon$ according to his strategy $q(\epsilon|\omega)$. Then the auditor, characterized by an epistemic factor $\lambda$, determines how to gather information $s$, which leads to her audit confidence $r(\omega|s)$ regarding compliance or non-compliance.
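To make this interaction concrete, here is a minimal Python sketch of one audit round. The developer's pure strategies, the Gaussian signal channel, the uniform prior, and the 0.5 decision threshold are all illustrative assumptions, not the paper's specification; only the roles and symbols ($\omega$, $\epsilon^\prime$, $\epsilon$, $\lambda$, $s$, $r(\omega|s)$) follow the framework above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of one round of the herd-audit Stackelberg game.
# All functional forms below (strategies, signal model, threshold)
# are illustrative assumptions, not the paper's specification.

EPS_AGREED = 1.0  # publicly known agreed privacy budget epsilon'

def developer_budget(omega: str) -> float:
    """Leader's move: executed budget epsilon given type omega.
    Assumed pure strategies: a responsible developer (omega = 'g')
    honors the agreement; a negligent one (omega = 'b') overspends."""
    return EPS_AGREED if omega == "g" else 2.5 * EPS_AGREED

def auditor_signal(eps: float, lam: float) -> float:
    """Follower's information gathering: a noisy observation s of the
    executed budget. A smaller epistemic factor lam means a noisier
    signal (assumed Gaussian channel)."""
    return eps + rng.normal(scale=1.0 / lam)

def audit_confidence(s: float, lam: float) -> float:
    """Audit confidence r(omega = b | s): posterior belief that the
    developer is negligent, via Bayes' rule under a uniform prior and
    the Gaussian signal model (assumes the auditor knows both
    candidate budgets)."""
    def lik(eps: float) -> float:  # likelihood of s given budget eps
        return np.exp(-0.5 * (lam * (s - eps)) ** 2)
    lg, lb = lik(developer_budget("g")), lik(developer_budget("b"))
    return lb / (lg + lb)

def play_round(omega: str, lam: float, threshold: float = 0.5) -> bool:
    """One round: developer moves first, auditor then observes and
    decides. Returns True if the auditor flags non-compliance."""
    eps = developer_budget(omega)
    s = auditor_signal(eps, lam)
    return audit_confidence(s, lam) > threshold

# Detection of a negligent developer improves with auditor expertise.
for lam in (0.5, 1.0, 2.0):
    flagged = np.mean([play_round("b", lam) for _ in range(10_000)])
    print(f"lambda = {lam}: flag rate against negligent developer = {flagged:.2f}")
```

Under this toy signal model, the flag rate against a negligent developer rises with $\lambda$, illustrating how epistemic disparity shapes the credibility of the accountability threat.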

Main Results

Rational inattention