Process Reward Models (PRMs) provide step-level verification of LLM reasoning. However, collecting accurate step-level labels for training PRMs is a major bottleneck. We propose FoVer, an approach that uses formal verification tools such as Z3 and Isabelle to automatically annotate step-level error labels on LLM responses, without relying on human annotation. Although this data synthesis is feasible only for tasks compatible with these tools, training on the resulting data improves LLM-based PRMs across a broad range of reasoning tasks.
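
The following is a minimal sketch of the kind of automatic step-level labeling an SMT solver such as Z3 enables; the function name and the toy example are hypothetical and not taken from the FoVer pipeline. A step is labeled correct if its conclusion follows from its premises, i.e., the premises together with the negated conclusion are unsatisfiable.

```python
# Hypothetical sketch: label one reasoning step with Z3 (z3-solver package).
from z3 import Int, Solver, Not, unsat

def label_step(premises, conclusion) -> int:
    """Return 1 if the step is logically valid, 0 otherwise."""
    solver = Solver()
    solver.add(*premises)        # facts established by earlier steps
    solver.add(Not(conclusion))  # search for a counterexample to this step
    return 1 if solver.check() == unsat else 0

x = Int("x")
print(label_step([x > 2], x + 1 > 3))  # valid step  -> 1
print(label_step([x > 2], x > 5))      # invalid step -> 0
```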