Process Reward Models (PRMs) provide step-level verification for LLM reasoning. However, prior work relies on training data with human-annotated or noisy labels. We propose FoVer, a novel method for creating PRM training data with formal verification tools such as Z3 and Isabelle. FoVer is the first method to create accurate PRM training data without relying on human annotation. Although FoVer creates training data only from symbolic tasks compatible with formal verification tools, it improves LLM-based PRMs across broad reasoning tasks.
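
To make the idea concrete, below is a minimal sketch (not the paper's actual pipeline) of the kind of automatic step-level labeling that a formal verification tool enables. It uses the Z3 Python bindings to check whether a single hypothetical reasoning step is entailed by its premises; the formulas and the label names are illustrative assumptions.

```python
# Minimal sketch: label one formal-logic reasoning step with Z3.
# Premises and the step's conclusion are hypothetical example formulas,
# not taken from the FoVer dataset.
from z3 import Bools, Implies, And, Not, Solver, unsat

p, q, r = Bools("p q r")
premises = [Implies(p, q), Implies(q, r), p]   # context available to the step
step_conclusion = r                            # claim made by the LLM's step

# The step is entailed iff (premises AND NOT conclusion) is unsatisfiable.
solver = Solver()
solver.add(And(*premises), Not(step_conclusion))
label = "correct" if solver.check() == unsat else "incorrect"
print(label)  # -> "correct"; such step labels can serve as PRM training signal
```

In this sketch, the binary outcome of the solver check stands in for the human step-level annotation that prior PRM training data required.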