Deploying binary classifiers on constrained platforms requires models that are both compact and accurate. This chapter explores the miniaturisation of binary classifiers through sparse neural networks, showing how sparsification techniques can substantially reduce memory and computational costs while retaining high predictive accuracy. The authors present comparative analyses across sparsity levels, quantifying the trade-off between efficiency and accuracy, and discuss sparse models as a practical solution for resource-limited edge and embedded AI applications.
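The abstract does not commit to a particular sparsification technique, but magnitude pruning is a common baseline for the kind of memory reduction described: the smallest-magnitude weights are set to zero and can then be stored in a compressed sparse format. The sketch below is purely illustrative and not taken from the chapter; the function name `magnitude_prune`, the target sparsity of 50%, and the example weight values are all hypothetical.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute values (one-shot magnitude pruning, illustrative sketch)."""
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, removed = [], 0
    for w in weights:
        if abs(w) <= threshold and removed < k:
            pruned.append(0.0)  # pruned connection: contributes no storage
            removed += 1        # in a sparse format and no multiply at inference
        else:
            pruned.append(w)
    return pruned

# Hypothetical layer weights; prune half of them.
weights = [0.8, -0.05, 0.3, 0.01, -0.6, 0.02, 0.9, -0.4]
sparse = magnitude_prune(weights, 0.5)
# → [0.8, 0.0, 0.0, 0.0, -0.6, 0.0, 0.9, -0.4]
```

In practice the surviving nonzeros and their indices are what get stored and computed on, which is where the memory and compute savings at high sparsity levels come from; the accuracy cost of removing small weights is what the chapter's comparative analyses measure.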