1: "Go for effective compression."

ECR = sqrt((boost+14.7)/14.7) * CR. Some people use the formula ECR = [(Boost ÷ 14.7) + 1] x CR instead, which I believe is wrong, because the numbers it produces end up in the 15-25:1 range, and that isn't very realistic.
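To make the difference concrete, here's a quick Python sketch of both formulas, using the 8:1 / 20 psi combination from the example further down (the numbers are just illustrations, not claims about any specific engine):

```python
import math

ATM = 14.7  # atmospheric pressure in psi at sea level

def ecr_sqrt(boost_psi, static_cr):
    """Square-root formula: ECR = sqrt((boost + 14.7) / 14.7) * CR."""
    return math.sqrt((boost_psi + ATM) / ATM) * static_cr

def ecr_linear(boost_psi, static_cr):
    """Linear formula: ECR = ((boost / 14.7) + 1) * CR."""
    return ((boost_psi / ATM) + 1) * static_cr

# 8:1 static compression at 20 psi of boost:
print(round(ecr_sqrt(20, 8.0), 1))    # ~12.3:1 -- plausible
print(round(ecr_linear(20, 8.0), 1))  # ~18.9:1 -- lands in that 15-25:1 range
```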

2: "1PSI from a big turbo is not the same as 1PSI from a small turbo."

3: "PSI means nothing, aim for a HP goal"

These rules of thumb conflict with each other.

Let's try rule #1. An engine with 8:1 compression running 20 psi will have an effective compression of 12.3:1 (using the square-root formula). An engine with 10:1 compression and 7.5 psi will have the same 12.3:1 effective compression.

Now rule #2. Let's use the high-compression engine from above... We'll say it's using a GT3076R (I know, not efficient at 7.5 psi). According to rule #2, if you use a smaller turbo you can run more boost. So now let's use a 16G and run 12 psi. Your effective compression just went up from 12.3:1 to 13.5:1. So following rule #2 contradicts rule #1, because now you're not at a "safe" compression on the same octane.
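That jump can be checked with the same square-root formula; the boost values are the ones quoted in the example above:

```python
import math

ATM = 14.7  # atmospheric pressure, psi

def effective_cr(boost_psi, static_cr):
    """Square-root effective compression: sqrt((boost + 14.7) / 14.7) * CR."""
    return math.sqrt((boost_psi + ATM) / ATM) * static_cr

# The 10:1 engine, before and after swapping to the smaller turbo:
print(round(effective_cr(7.5, 10.0), 1))   # big turbo at 7.5 psi -> ~12.3:1
print(round(effective_cr(12.0, 10.0), 1))  # 16G at 12 psi       -> ~13.5:1
```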

Lastly, rule #3. Using the same example engines, let's aim for a HP target and just keep upping the boost until we get there. In this case I'm going to say both engines are using a 16G and we're aiming for 250 hp. The 8:1 compression engine would need 17.3 psi to make 250 hp, while the 10:1 compression engine (assuming +4% hp per 1.0 increase in compression ratio) would need only 15.2 psi to make the same 250 hp. Now when we go back to rule #1, we get effective compression ratios of 11.8:1 for one engine and 14.26:1 for the other. This again breaks rule #1. Both engines are making the same power on the same turbo at similar boost, but will they both hold up?
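Plugging those two boost figures back into the rule #1 formula confirms the mismatch (note the 17.3 psi and 15.2 psi values, and the +4%-per-point scaling behind them, are the assumptions stated above, not measured data):

```python
import math

ATM = 14.7  # atmospheric pressure, psi

def effective_cr(boost_psi, static_cr):
    """Square-root effective compression: sqrt((boost + 14.7) / 14.7) * CR."""
    return math.sqrt((boost_psi + ATM) / ATM) * static_cr

# Both engines targeting 250 hp on the same 16G:
print(round(effective_cr(17.3, 8.0), 2))   # 8:1 engine  -> ~11.80:1
print(round(effective_cr(15.2, 10.0), 2))  # 10:1 engine -> ~14.26:1
```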

So the question is... What's the real way to figure out whether a turbo setup is safe, before you actually build it? Should we rely on effective compression, psi, horsepower, cylinder pressure, etc.? I know this may seem dumb to most people, but I like figuring out how well something will work in theory before dumping thousands of dollars into it.