Your calculations are correct. You can easily check them in any statistical software.
To get a Cramér's V of 0.5, your table could look like this:
|            | Fail | Pass |
|------------|------|------|
| Material A | 1    | 999  |
| Material B | 403  | 597  |
That is a 0.1% failure rate with Material A, and about 40% with Material B. Is this something you would have expected? (This table is just an example; many other tables give a Cramér's V of 0.5, and you'll find a completely different one a few paragraphs below.)
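If you want to check that number yourself, here is a minimal Python sketch; it assumes SciPy ≥ 1.7, where `scipy.stats.contingency.association` computes Cramér's V directly:

```python
# Minimal check of the Cramér's V for the table above.
import numpy as np
from scipy.stats import contingency  # association() needs SciPy >= 1.7

table = np.array([[1, 999],     # Material A: Fail, Pass
                  [403, 597]])  # Material B: Fail, Pass

v = contingency.association(table, method="cramer")
print(f"Cramér's V = {v:.3f}")  # ~0.501, i.e. about 0.5
```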
Your problem seems rather to be with interpreting the result. In short, you may want to stay away from Cramér's V and simply report the relative risk instead. Indeed, I see at least two problems with the approach you describe:
- You rely too much on an arbitrary benchmark when you say that V is "strong" only when $\geq0.5$.
As you found out, it may be meaningless or confusing to use someone else's benchmark without asking whether it makes sense in your specific situation. There are many different benchmarks out there for Cramér's V, so it makes little sense to pick one of them without first asking "why this benchmark?".
What counts as "strong" depends on the specific study under consideration, and it may be better to use your own customized benchmark, as suggested by Jacob Cohen - see Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences, p. 224: "The best guide here, as always, is the development of some sense of magnitude ad hoc, for a particular problem or a particular field.".
- Cramér's V might not be the appropriate effect size to look at in the first place.
Indeed, you say "Material B shows 10X higher fail rate compare to Material A and I expect strong association of Material type to Fail/Pass frequency". It sounds like you're interested in the relative risk, that is, "how much more or less likely is one material to fail, compared to the other?". If that's what you're interested in, you could simply report the relative risk (10), ideally along with its confidence interval. (Here, the 95% confidence interval is [6.4, 15.7]; a sketch of this computation follows this list.)
From the information you give, I don't see the need to involve Cramér's V, unless you're interested in something other than the difference between Material A and Material B.
Indeed, two tables with the same relative risk may have very different Cramér's V values, so Cramér's V alone won't tell you much about that difference. Another problem is that Cramér's V won't tell you the direction of the difference (is it Material B that is more likely to fail, or Material A?).
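Here is the promised sketch. The counts are an assumption on my part (they match Table Z below, and happen to reproduce both the relative risk of 10 and the [6.4, 15.7] interval); the interval is a standard Wald CI on the log scale:

```python
# A sketch of a relative risk with a 95% Wald CI on the log scale.
# The counts are assumed (they match Table Z below), not taken from a
# table shown in the question.
import numpy as np

fail_b, pass_b = 200, 800   # Material B (assumed counts)
fail_a, pass_a = 20, 980    # Material A (assumed counts)

p_b = fail_b / (fail_b + pass_b)   # failure rate of Material B
p_a = fail_a / (fail_a + pass_a)   # failure rate of Material A
rr = p_b / p_a                     # relative risk of failure, B vs A

# Standard error of log(RR), then back-transform the CI bounds.
se = np.sqrt(1/fail_b - 1/(fail_b + pass_b) + 1/fail_a - 1/(fail_a + pass_a))
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se)
print(f"RR = {rr:.0f}, 95% CI [{lo:.1f}, {hi:.1f}]")  # RR = 10, CI [6.4, 15.7]
```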
For a more concrete illustration of the problem with using Cramér's V in your situation, consider the following three tables:
Table X
|            | Fail | Pass |
|------------|------|------|
| Material A | 20   | 980  |
| Material B | 200  | 800  |
Relative risk: 0.1. Cramér's V: 0.288
Table Y
|            | Fail | Pass |
|------------|------|------|
| Material A | 60   | 940  |
| Material B | 600  | 400  |
Relative risk: 0.1. Cramér's V: 0.574
Table Z
|            | Fail | Pass |
|------------|------|------|
| Material A | 200  | 800  |
| Material B | 20   | 980  |
Relative risk: 10. Cramér's V: 0.288
As you can see, Tables X and Y have the same relative risk but very different Cramér's V values. Tables X and Z have exactly the same Cramér's V, but in Table X it is Material B that is more likely to fail, while in Table Z it is Material A.
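If you want to reproduce these numbers, here is a short Python sketch (again assuming SciPy ≥ 1.7 for `contingency.association`):

```python
# Reproduce the relative risks and Cramér's V values for Tables X, Y, Z.
import numpy as np
from scipy.stats import contingency

tables = {
    "X": np.array([[20, 980], [200, 800]]),   # rows: Material A, Material B
    "Y": np.array([[60, 940], [600, 400]]),
    "Z": np.array([[200, 800], [20, 980]]),
}

for name, t in tables.items():
    rate_a = t[0, 0] / t[0].sum()   # failure rate of Material A
    rate_b = t[1, 0] / t[1].sum()   # failure rate of Material B
    rr = rate_a / rate_b            # relative risk, A vs B
    v = contingency.association(t, method="cramer")
    print(f"Table {name}: RR = {rr:.1f}, Cramér's V = {v:.3f}")
```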
So relative risk may well be what you're after, much more than Cramér's V. Another effect size you may be interested in is the risk difference (e.g. if the failure rate is 1% for Material A and 10% for Material B, the risk difference is -9 percentage points). Depending on the purpose of your study, it might be even more appropriate than relative risk.
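For completeness, a risk difference is just as easy to compute; the counts below are again hypothetical, chosen to give the 1% vs 10% rates in that example:

```python
# Risk difference with a Wald 95% CI, using hypothetical counts that
# give the 1% vs 10% failure rates mentioned above.
import numpy as np

fail_a, n_a = 10, 1000    # Material A (hypothetical)
fail_b, n_b = 100, 1000   # Material B (hypothetical)

p_a, p_b = fail_a / n_a, fail_b / n_b
rd = p_a - p_b   # risk difference: -9 percentage points
se = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
lo, hi = rd - 1.96 * se, rd + 1.96 * se
print(f"risk difference = {rd:+.1%}, 95% CI [{lo:+.1%}, {hi:+.1%}]")
```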
In any case, the point is that you should ask yourself what you want to learn from your table, what you want to communicate to others, and which effect size is appropriate for that. Here, from the information you give, Cramér's V doesn't seem to be it.