Contrary to what was reported by many media outlets, I did not say that I felt 'lost' over my life's work. Here I explain my own inner searching regarding the potential horror of catastrophes following our progress in AI, and tie it to a possible understanding of the pronounced disagreements among top AI researchers about major AI risks, particularly existential ones. We disagree strongly despite being generally rational colleagues who share humanist values: how is that possible? I will argue that we need more humility, acceptance that we might be wrong, awareness that we are all human and hold cognitive biases, and recognition that we nonetheless have to make important decisions in the context of such high uncertainty and lack of consensus.