A measurement of planetary occurrence rates based on a planet catalog should be robust against the details of how initial detections were classified as planets or false positives. This robustness is achieved by supplying the catalog's rate of missed planets (completeness) and rate of non-planets incorrectly called planets (reliability). The final Kepler data release (DR25) includes products that can be used with the DR25 planet candidate catalog to correct for completeness and reliability in occurrence rate estimates. This is made possible by the Kepler Robovetter, which algorithmically and uniformly selects planets based on a variety of metrics and thresholds. Completeness, reliability, and occurrence rates potentially depend on these Robovetter thresholds. We study the impact of varying the vetting thresholds using the techniques of Bryson et al. 2019 (arXiv:1906.03575). We explore sets of thresholds that yield more or fewer planets (trading completeness against reliability), as well as thresholds tuned to pass DR25 false positives identified as possible planets by the Kepler False Positive Working Group. We find that when correcting only for completeness, and not for reliability, the resulting occurrence rates depend strongly on the choice of threshold set. For example, the value of SAG13 eta-Earth varies by more than a factor of 4 when not corrected for reliability. When correcting for both completeness and reliability, however, the occurrence rates from our threshold sets are statistically indistinguishable, with differences well inside the 1-sigma error bars. We present occurrence rates integrated over several period-radius ranges. For example, SAG13 eta-Earth is consistent with 0.127 (+0.094/-0.054) (from Bryson et al. 2019) for all Robovetter threshold sets. This result emphasizes the importance of correcting occurrence rates for both completeness and reliability.
This suggests that inconsistent correction for completeness and reliability may be a significant contributor to the large variation in occurrence rates found in the recent literature. We plan to make the Robovetter results for our threshold sets publicly available, and we encourage the community to use them to examine whether other occurrence rate methods yield similarly robust results.
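The completeness and reliability correction described above can be illustrated with a minimal sketch for a single period-radius bin. This is not the inference method of Bryson et al. 2019 (which treats completeness per target and uses Bayesian rate models); the function name and all numbers below are hypothetical, chosen only to show how omitting the reliability factor biases a rate upward:

```python
def occurrence_rate(n_candidates, completeness, reliability, n_stars):
    """Toy estimate of planets per star in one period-radius bin.

    n_candidates: vetted planet candidates observed in the bin
    completeness: fraction of true planets recovered by pipeline + vetting
    reliability:  fraction of cataloged candidates that are true planets
    n_stars:      number of searched stars
    """
    true_detections = n_candidates * reliability   # remove false alarms
    true_planets = true_detections / completeness  # restore missed planets
    return true_planets / n_stars

# Hypothetical bin: 30 candidates, 25% completeness, 1000 stars.
# Setting reliability = 1.0 (i.e., ignoring it) doubles the inferred rate
# relative to a 50%-reliable catalog:
biased = occurrence_rate(30, 0.25, 1.0, 1000)     # 0.12 planets per star
corrected = occurrence_rate(30, 0.25, 0.5, 1000)  # 0.06 planets per star
```

Because a lower (stricter) Robovetter threshold raises reliability while lowering completeness, the two factors move in opposite directions, which is why rates corrected for both remain stable across threshold sets while completeness-only corrections do not.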