Neural networks with high levels of sparsity are increasingly studied as a means of reducing computational and memory requirements while retaining predictive accuracy. This chapter investigates high-sparsity training strategies for binary classification tasks, focusing on the trade-off between efficiency and predictive performance. The authors analyze sparsity thresholds and their impact on accuracy, demonstrating that carefully chosen sparsification techniques can preserve classification quality while significantly improving model compactness and execution efficiency. The findings are relevant for deploying AI in resource-constrained environments, such as embedded systems and edge computing.
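To make the idea of threshold-based sparsification concrete, the following is a minimal Python/NumPy sketch of magnitude pruning, one common way such sparsity targets are enforced. The function name, the pruning rule, and the example dimensions are illustrative assumptions and are not taken from the chapter.

```python
import numpy as np

def sparsify_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights to reach a target sparsity level.

    sparsity is the fraction of weights to remove, e.g. 0.9 keeps only ~10%.
    """
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    k = int(weights.size * sparsity)              # number of weights to prune
    if k == 0:
        return weights.copy()
    magnitudes = np.abs(weights).ravel()
    threshold = np.partition(magnitudes, k - 1)[k - 1]  # k-th smallest magnitude
    # Keep only weights strictly above the threshold; ties at the threshold
    # may prune slightly more than k weights.
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune a dense layer's weight matrix to ~90% sparsity
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_sparse = sparsify_by_magnitude(W, sparsity=0.9)
print("achieved sparsity:", 1.0 - np.count_nonzero(W_sparse) / W_sparse.size)
```

In practice, a sketch like this would be applied per layer (or globally across all layers) during or after training, with the sparsity level swept to locate the threshold beyond which classification accuracy degrades.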