Road Building, Year of Plenty, and Monopoly are nearly always useful, except perhaps for a late-game Road Building.
Soldiers (the most common development card) are versatile: playing one gains you a resource, deprives an opponent of one, frees a number of yours that the robber is blocking, blocks an opponent's production, and can count toward the 2 VP Largest Army award. Placing the robber on a player who holds an unplayed development card might not be the best move, since that card is relatively likely to be a soldier he can use to move the robber away. But if that player has settlements on 8-wood, 6-wood, and 4-wood, leaving his production unblocked might give him too much of an advantage. Trading a card to a player so that he can build a settlement on a wood port before his neighbor can might be a fine move, even if it upsets his neighbor a bit. Sometimes a trade that would be relatively even in other situations will give one player a large advantage. Make sure you consider the consequences of your trades.
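The 8/6/4 wood example above can be checked with a quick expected-value calculation. The snippet below is an illustrative sketch (not part of the guide) using the standard two-dice distribution:

```python
# Probability that two dice sum to n is (6 - |n - 7|) / 36 for n in 2..12.
def roll_prob(n):
    return (6 - abs(n - 7)) / 36

# A player with settlements on 8-wood, 6-wood, and 4-wood collects wood
# whenever any of those three numbers is rolled.
wood_numbers = [8, 6, 4]
expected_wood_per_roll = sum(roll_prob(n) for n in wood_numbers)
print(f"{expected_wood_per_roll:.3f}")  # 13/36, about 0.361 wood per roll
```

Roughly one wood every three rolls from a single resource type is a strong engine, which is why blocking it can justify the risk that the target plays a soldier.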
The placement of your initial settlements is very important. Your initial settlements (along with the first couple rounds of dice rolls) have a large impact on your game, affecting whether you get off to a quick start or lag behind. Some things to keep in mind when placing your initial settlements include: Probability of numbers: All else equal, a space bordering more dots is better than a space bordering fewer dots.
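The dots on a number token encode roll probability, so comparing candidate spots reduces to summing dots. This is a small sketch with hypothetical hex numbers, not an example from the guide:

```python
# A number token's dots (pips) equal 6 - |n - 7|, proportional to the
# chance that two dice roll n.
def dots(n):
    return 6 - abs(n - 7)

# Hypothetical comparison: a spot bordering 6/8/5 vs. one bordering 2/3/11.
spot_a = [6, 8, 5]
spot_b = [2, 3, 11]
print(sum(dots(n) for n in spot_a))  # 14 dots
print(sum(dots(n) for n in spot_b))  # 5 dots
```

All else equal, the 14-dot spot produces nearly three times as often as the 5-dot spot.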