Process Reward Models (PRMs) provide step-level verification for LLM reasoning. However, PRM training data in prior work relies on human-annotated labels or on costly automatic methods that generate noisy labels. We propose FoVer, an efficient method for creating accurate PRM training data from formal reasoning tasks, where step-level labels are obtained with formal verification tools such as Z3 and Isabelle. Experimental results demonstrate that PRMs trained on FoVer data improve performance on widely used benchmarks for informal reasoning tasks.
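To make the labeling idea concrete, the following is a minimal sketch, not the paper's actual pipeline, of how a formal verifier can produce a binary step-level label. It uses Z3's Python API (`z3-solver`) and checks entailment the standard way: a step is correct exactly when its premises conjoined with the negation of its conclusion are unsatisfiable. The toy formalization and the helper name `label_step` are hypothetical, introduced only for illustration.

```python
from z3 import Ints, Solver, And, Not, unsat

def label_step(premises, conclusion):
    """Label a reasoning step as correct iff the premises entail the conclusion.

    Entailment check: premises AND NOT(conclusion) is unsatisfiable
    exactly when the step is logically valid.
    """
    s = Solver()
    s.add(And(*premises), Not(conclusion))
    return s.check() == unsat  # True -> correct step, False -> flawed step

x, y = Ints("x y")
# Hypothetical step: "given x > 2 and y = x + 3, conclude y > 5"
print(label_step([x > 2, y == x + 3], y > 5))  # True: over integers, x >= 3 so y >= 6
print(label_step([x > 2, y == x + 3], y > 6))  # False: x = 3 gives y = 6, a counterexample
```

Because the verifier returns a definite verdict rather than a heuristic score, labels produced this way are accurate by construction, which is the property FoVer exploits to avoid the noise of automatic labeling.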