Self-organizing systems (SOS) can perform complex tasks in unforeseen situations by adapting their behavior. Previous work has introduced field-based approaches and rule-based social structuring that allow individual agents not only to comprehend task situations but also to exploit rule-governed social relations among agents, so that the overall task is accomplished without a centralized controller. Although task fields and social rules can be predefined for relatively simple task situations, acquiring such a priori knowledge may not be feasible as task complexity grows and the task environment changes. In this paper, we propose a multi-agent reinforcement learning based model as a design approach to the rule-generation problem for complex SOS tasks. A deep multi-agent reinforcement learning algorithm is devised to train SOS agents to acquire task-field and social-rule knowledge, and the scalability of this learning approach is investigated with respect to changing team sizes and environmental noise. Through a set of simulation studies on a box-pushing problem, the results show that an SOS design based on deep multi-agent reinforcement learning generalizes across different individual settings when training starts with a larger number of agents; however, if an SOS is trained with smaller team sizes, the learned neural network does not scale up to larger teams. SOS design with a deep reinforcement learning model should take this into account, and training should be carried out with larger team sizes.
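The abstract does not give the paper's algorithmic details, but the core idea it describes — homogeneous agents sharing one learned policy, so that a policy trained with one team size can be evaluated with another — can be illustrated with a deliberately minimal sketch. The code below is a hypothetical toy, not the paper's method: it replaces the deep network with a tabular Q-table shared by all agents (independent Q-learning with parameter sharing) on an invented one-dimensional box-pushing world, where a box on a line of cells moves one cell in the direction a majority of agents push. All names, the environment, and the hyperparameters are assumptions for illustration only.

```python
import random
from collections import defaultdict

def train_shared_policy(n_agents=5, length=6, episodes=3000, alpha=0.2, gamma=0.9):
    """Independent Q-learning with one Q-table shared by all agents.

    Toy 1-D box-pushing world (an illustrative stand-in, not the paper's
    environment): the box starts mid-line and must reach the rightmost
    cell; each agent pushes left (action 0) or right (action 1), and the
    box moves one cell in the majority direction.
    """
    Q = defaultdict(lambda: [0.0, 0.0])  # shared table: state -> [Q(left), Q(right)]
    for _ in range(episodes):
        pos = length // 2
        for _ in range(4 * length):
            # Random exploration; Q-learning is off-policy, so the greedy
            # policy can still be recovered from the learned table.
            acts = [random.randrange(2) for _ in range(n_agents)]
            step = 1 if 2 * sum(acts) > n_agents else -1  # majority vote
            nxt = min(max(pos + step, 0), length - 1)
            reward = 1.0 if nxt == length - 1 else 0.0
            # Every agent updates the same table (parameter sharing), which is
            # what makes the learned policy transferable across team sizes.
            for a in acts:
                Q[pos][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[pos][a])
            pos = nxt
            if reward:
                break
    return Q

def rollout(Q, n_agents, length=6, max_steps=20):
    """Greedy team play with the shared policy; returns steps to goal or None.

    Use odd team sizes: with an even team, a tied vote defaults to 'left'.
    """
    pos = length // 2
    for t in range(1, max_steps + 1):
        acts = [max((0, 1), key=lambda a: Q[pos][a]) for _ in range(n_agents)]
        pos = min(max(pos + (1 if 2 * sum(acts) > n_agents else -1), 0), length - 1)
        if pos == length - 1:
            return t
    return None
```

A table trained with `n_agents=5` can then be rolled out with teams of 3 or 7, mirroring (in miniature) the scalability question the paper studies, although this tabular toy cannot reproduce the paper's finding that transfer depends on the training team size.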
