Learning for Simulated Soccer

One of our main scientific goals in the soccer simulation domain has been, and continues to be, to apply methods of distributed artificial intelligence and machine learning to the largest possible degree. The competitive character of the robotic soccer domain ensures that AI approaches are not employed merely “for their own sake”, but in such a manner that their use contributes significantly to the success of the team.

Reinforcement learning describes the situation of a machine learning system in which the only training signal provided by the environment is the success or failure of the agent after it has acted over a sequence of decision cycles. This learning problem can be formulated as a Markov Decision Process (MDP) within the framework of dynamic programming. A driving motivation for our effort in the soccer domain is therefore to investigate reinforcement learning methods in complex domains and to develop new variants and practical algorithms. As pointed out, we consider it important not only to demonstrate the principal feasibility of learning approaches, but to actually apply the learned behaviors in the team that enters a competition.
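
The core of such delayed-reward learning can be sketched with tabular Q-learning on a toy task. The `ChainEnv` corridor below and all parameters are illustrative assumptions for exposition, not part of our soccer setup: the agent receives a reward only upon eventual success, exactly the kind of sparse training signal described above.

```python
import random

class ChainEnv:
    """Toy corridor with states 0..3; the only reward is +1 for
    reaching the goal state 3, i.e. a pure success signal."""
    actions = ("left", "right")

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state = max(0, self.state + (1 if action == "right" else -1))
        done = self.state == 3
        return self.state, (1.0 if done else 0.0), done

def q_learning(env, episodes=500, alpha=0.2, gamma=0.9, epsilon=0.2):
    """Learn action values purely from the delayed success signal."""
    q = {}  # (state, action) -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy exploration
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            # one-step temporal-difference update toward the target
            best_next = max(q.get((next_state, a), 0.0) for a in env.actions)
            target = reward + (0.0 if done else gamma * best_next)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (target - old)
            state = next_state
    return q
```

After training, the greedy policy derived from the learned Q-values moves toward the goal even though no intermediate step was ever rewarded.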

Use Case: Learning an Aggressive Defense Behavior

Reinforcement learning is a suitable approach for learning both individual player skills and cooperative multi-player behaviors. In this case study, we considered a defense scenario of crucial importance: situations in which one of our players must interfere with and disturb an opponent player leading the ball, in order to stop the opponent team’s attack at an early stage and, even better, to eventually win the ball and initiate a counter-attack.
We developed a comprehensive set of training scenarios and pursued a reinforcement learning method based on neural value-function approximation to obtain a good hassling policy for our players. Using reinforcement learning, we enabled our players to autonomously acquire such an aggressive duelling behavior, and we embedded it into our soccer simulation team’s defensive strategy. Employing the learned NeuroHassle policy in our competition team, we were able to clearly improve the capabilities of our defense and thus to increase the performance of our team as a whole.
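
As a generic illustration of neural value-function approximation (not the actual NeuroHassle architecture; the network size, feature encoding, and learning rate are assumptions), a one-hidden-layer network can be trained with semi-gradient TD(0) updates:

```python
import numpy as np

class TinyValueNet:
    """One-hidden-layer network V(s; w) trained by TD(0).

    A generic sketch of neural value-function approximation, not the
    actual NeuroHassle model; sizes and hyperparameters are illustrative.
    """
    def __init__(self, n_features, n_hidden=8, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (n_features, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, n_hidden)
        self.lr = lr

    def value(self, s):
        self._h = np.tanh(s @ self.w1)  # hidden activations (cached)
        return self._h @ self.w2

    def td_update(self, s, reward, v_next, gamma=0.95):
        """Move V(s) toward the one-step TD target r + gamma * V(s')."""
        delta = reward + gamma * v_next - self.value(s)
        # backpropagate the TD error through both layers
        grad_w2 = self._h
        grad_w1 = np.outer(s, (1.0 - self._h ** 2) * self.w2)
        self.w2 += self.lr * delta * grad_w2
        self.w1 += self.lr * delta * grad_w1
        return delta
```

Repeated updates on states encountered during training episodes drive the network's value estimates toward the observed returns, which is the mechanism underlying the learned hassling policy.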
More details on our approach and the results obtained can be found in the following publications.

Use Case: Opponent Modeling and Action Prediction

The focus of this case study is an investigation of case-based opponent player modeling in the domain of simulated robotic soccer. While previous and related work has frequently claimed that predicting the low-level actions of an opponent agent in this application domain is infeasible, we show that, at least in certain settings, the opponent's actions can be predicted online with high accuracy. We also explain why the ability to anticipate the opponent's next low-level move can be of enormous utility to one's own playing strategy.
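
The retrieval step behind a case-based predictor of this kind can be sketched as a k-nearest-neighbour lookup over observed situation/action pairs. The feature encoding and the Euclidean similarity measure below are illustrative assumptions, not the exact model used in the study:

```python
import math
from collections import Counter

class CaseBase:
    """Case-based action prediction: store observed (situation, action)
    pairs and predict the next action by retrieving the most similar
    past situations and taking a majority vote among their actions."""
    def __init__(self, k=3):
        self.cases = []  # list of (feature_vector, action)
        self.k = k

    def observe(self, features, action):
        """Record one observed situation and the action the opponent took."""
        self.cases.append((list(features), action))

    def predict(self, features):
        """Predict the opponent's action for an unseen situation."""
        if not self.cases:
            return None
        nearest = sorted(self.cases,
                         key=lambda c: math.dist(c[0], features))[: self.k]
        # majority vote among the k retrieved cases
        return Counter(action for _, action in nearest).most_common(1)[0][0]
```

In an online setting, the case base is filled while the match is running, so predictions improve as more of the opponent's behavior is observed.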