Recent studies indicate that humans distrust artificial agents and are less willing to cooperate with them, even when such cooperation leads to better results. Other literature offers contradictory evidence, suggesting that humans overtrust algorithms. However, the reasons for this trust or distrust in cooperation remain unclear. My PhD research aims to investigate trust in human-AI cooperation by developing and testing models of the cognitive processes underlying cooperative behaviours between humans and AI. Whether cooperation in a social setting succeeds depends on two decisions: whether to grant trust to the cooperant, and whether to honour the trust granted by that cooperant in return. These decisions lie at the core of social interaction. As such, investigating the mechanisms underlying social decisions in human-AI cooperation can help clarify cooperative trust towards artificial agents.