Coordinated hunting is widely observed in animals, and sharing rewards is often considered a major incentive for its success. While current theories about the role played by sharing in coordinated hunting are based on correlational evidence, we reveal the causal roles of sharing rewards through computational modeling with a state-of-the-art Multi-agent Reinforcement Learning (MARL) algorithm. We show that, counterintuitively, while selfish agents reach robust coordination, sharing rewards undermines coordination. Hunting coordination modeled through sharing rewards (1) suffers from the free-rider problem, (2) plateaus at a small group size, and (3) is not a Nash equilibrium. Moreover, individually rewarded predators outperform predators that share rewards, especially when the hunting is difficult, the group size is large, and the action cost is high. Our results shed new light on the actual importance of prosocial motives for successful coordination in nonhuman animals and suggest that sharing rewards might simply be a byproduct of hunting, instead of a design strategy aimed at facilitating group coordination. This also highlights that current artificial intelligence modeling of human-like coordination in a group setting that assumes reward sharing as a motivator (e.g., MARL) might not be adequately capturing what is truly necessary for successful coordination.