Abstract

Privacy protection has become a core issue in machine learning, and differential privacy is widely regarded as an effective mechanism for preserving privacy in stochastic gradient descent. We propose GPriS (Gradient Privacy Sparse), a novel method that enhances differential privacy in deep learning by incorporating sparsity-based pruning. Leveraging the Lottery Ticket Hypothesis, GPriS identifies and preserves a “winning” sub-network of critical parameters, reducing the number of trainable parameters and improving both efficiency and privacy. Experiments on the MNIST dataset demonstrate that GPriS achieves 98.83% test accuracy with only 60% of the gradient updates under a privacy budget of ε = 4, a performance level comparable to the non-private setting. Compared with traditional DP-SGD, GPriS offers a better trade-off among privacy, model accuracy, and training efficiency. Our results show that GPriS allocates the privacy budget effectively, maintaining strong performance even under tight privacy constraints.