Federated Learning has attracted increasing interest in recent years, as it enables training machine learning models across a large number of devices by exchanging only the weights of the trained neural networks. Since the training data never needs to be uploaded to a central server, privacy concerns are mitigated and potential communication bottlenecks are reduced, as less data is transmitted. However, current state-of-the-art solutions are typically centralized and lack suitable coordination mechanisms accounting for the spatial distribution of devices and for local communication, which can play a crucial role in some scenarios. We therefore propose a field-based coordination approach to federated learning, in which devices coordinate with one another through computational fields. We show that this approach can train models in a completely peer-to-peer fashion. Moreover, it allows zones of interest to emerge and produces a specialized model for each zone, enabling each agent to refine its model for the task at hand. We evaluate our approach in a simulated environment leveraging aggregate computing, the reference global-to-local field-based coordination programming paradigm. The results show that our approach is comparable to state-of-the-art centralized solutions, while enabling a more flexible and scalable approach to federated learning.
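As a rough illustration (not the paper's actual aggregate-computing implementation), the following self-contained Python sketch shows the kind of peer-to-peer, neighbourhood-based model averaging that a computational-field approach enables: each device repeatedly replaces its weights with the average of those observed within its communication range. All names and parameters (`N_DEVICES`, `COMM_RANGE`, and so on) are hypothetical choices for this sketch.

```python
import random

# Toy peer-to-peer federated averaging over a spatial neighbourhood,
# in the spirit of a computational field. Local training is elided;
# only the field-based aggregation step is shown.

N_DEVICES = 20     # number of simulated devices (illustrative)
COMM_RANGE = 0.3   # communication radius in the unit square (illustrative)
DIM = 4            # size of the toy weight vector
ROUNDS = 50

random.seed(0)
positions = [(random.random(), random.random()) for _ in range(N_DEVICES)]
weights = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(N_DEVICES)]

def neighbours(i):
    """Indices of devices within COMM_RANGE of device i (including i itself)."""
    xi, yi = positions[i]
    return [j for j, (xj, yj) in enumerate(positions)
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= COMM_RANGE ** 2]

for _ in range(ROUNDS):
    # Each device averages the weight vectors visible in its neighbourhood;
    # devices that are connected (directly or transitively) drift toward a
    # shared model, while isolated clusters keep specialized models.
    new_weights = []
    for i in range(N_DEVICES):
        nbrs = neighbours(i)
        avg = [sum(weights[j][k] for j in nbrs) / len(nbrs) for k in range(DIM)]
        new_weights.append(avg)
    weights = new_weights

print(weights[0])  # weights of device 0 after the gossip rounds
```

In this sketch, the connectivity induced by `COMM_RANGE` is what lets spatially separated groups of devices converge to distinct models, loosely mirroring the emergent zones of interest described above.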