Sparse neural networks are increasingly recognized for their ability to deliver compact yet accurate models. While much prior work has focused on binary classification, this chapter investigates the role of high-sparsity strategies in multi-class classification tasks. The authors evaluate the trade-offs among sparsity level, classification accuracy, and model efficiency across diverse datasets. The results indicate that, at appropriate sparsity thresholds, multi-class classifiers not only maintain performance but in some cases outperform their dense counterparts. These findings support sparsification as a practical method for building efficient AI models deployable in real-world, resource-constrained environments.
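
To make the notion of a sparsity threshold concrete, the sketch below shows one common sparsification technique, magnitude-based weight pruning, applied to a single weight matrix at a target sparsity level. It is an illustrative assumption, not the specific strategy evaluated in the chapter; the function name and NumPy-based setup are hypothetical.

```python
# Minimal sketch (illustrative only): magnitude-based pruning of a weight
# matrix to a chosen sparsity level. This is a generic example, not the
# authors' method.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that roughly `sparsity`
    fraction of entries become zero."""
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    k = int(weights.size * sparsity)              # number of weights to remove
    if k == 0:
        return weights.copy()
    magnitudes = np.abs(weights).ravel()
    threshold = np.partition(magnitudes, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold                   # keep only larger weights
    return weights * mask

# Example: prune a randomly initialized dense layer to ~90% sparsity.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_sparse = prune_by_magnitude(W, sparsity=0.9)
print(f"achieved sparsity: {np.mean(W_sparse == 0):.2%}")
```

In practice the pruned layers are typically fine-tuned afterwards, and the chapter's results concern how accuracy and efficiency respond as this sparsity level is pushed higher in multi-class settings.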