One of the most important developments that make it possible to implement all iterative algorithms with the help of recursion instead of explicit loop constructs is tail call optimization. It was first made mandatory in the programming language Scheme, a dialect of LISP.
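To illustrate the shape a function must have for tail call optimization to apply, here is a sketch of a tail-recursive factorial in Python. The name rec_fact_tail and the accumulator parameter are assumptions for this illustration, and note that CPython itself does not perform tail call optimization; a Scheme implementation, by contrast, would be required to run the equivalent code in constant stack space.

```python
def rec_fact_tail(x, acc=1):
    """Tail-recursive factorial: acc carries the partial product."""
    if x <= 0:
        return acc
    # The recursive call is the very last action; nothing remains
    # to be done in this frame after it returns. A TCO-capable
    # implementation can therefore reuse the current stack frame.
    return rec_fact_tail(x - 1, acc * x)
```

Because all pending work is folded into the accumulator before the call, no chain of returns with pending multiplications builds up.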
The first part of the call sequence for the factorial calculation consists of recursive calls into RecFact only: for as long as x is still greater than 0, there is never any return out of the function. When the unwinding process starts, the sequence is traversed in reverse order by the return statements, and each return takes execution back to the previous instance of a RecFact call. This return is important because every time a value is returned by one of the inner calls, it must first be multiplied by the value of the current x before the result of that calculation can be returned yet another level up the hierarchy.
For instance, at some point x is 2, and RecFact is called with a parameter of 1. Only when the result of the RecFact(1) call is known can the calculation of 2 * (that result) be performed. If the algorithm is implemented as shown, this precise execution order, which recursion guarantees, is what makes the function return the correct value in the end.
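A plain recursive implementation with exactly this behavior might look like the following sketch (the original RecFact is not shown here, so this Python version, under the assumed name rec_fact, is an illustrative equivalent):

```python
def rec_fact(x):
    """Naive recursive factorial."""
    if x <= 0:
        # Base case: 0! is 1; the unwinding starts from here.
        return 1
    # The multiplication happens *after* the inner call returns,
    # so this call is not in tail position: each frame must wait
    # for the inner result before computing its own.
    return x * rec_fact(x - 1)
```

For x = 2 this means rec_fact(2) calls rec_fact(1), which calls rec_fact(0); only then do the pending multiplications 1 * 1 and then 2 * 1 run, in reverse order of the calls.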
The downside of having to navigate out of the entire call hierarchy step by step is that there must be a record of the place each return must go back to, as well as the state of all local variables ...