Not that I would instinctively agree or anything, but I was really curious (a) exactly how bad or not-so-bad Java would be for something very natural to code in ML, and (b) exactly how trivial it would be to turn the algorithmic typechecker from Frank and Bob's Canonical Forms in LF paper into code. Oh, the other main issue is that proofs in PCA are fairly small compared to PCC, so optimizing the hell out of the proof checker isn't so important; the naive algorithm (the one I'm trying to implement) should be okay.
After a few hours of hacking and remembering Java, I have a little parser, translation into de Bruijn indices, and substitution apparently working. The one major bug that came up involved, of course, side effects and pointer sharing: two integer container objects were actually the same object, so a shift substitution incremented the same index twice when trying to compute (\x \y x x)[^]. Hooray for non-functional languages! Grr. Well, that's only been a mild obstacle. I think I'll soon have a feel for how painful it is to implement the real stuff.
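The shared-box failure mode boils down to a few lines. This is a hypothetical reconstruction, not my actual code (the class names IntBox, Var, Lam, App, and ShiftDemo are made up for illustration, and I use a free variable rather than the closed term above): Var nodes hold a mutable IntBox for their de Bruijn index, and a shift that mutates in place bumps an aliased box once per occurrence instead of once per variable.

```java
// Hypothetical sketch of the aliasing bug: de Bruijn terms whose Var
// nodes hold a *mutable* index box. Shifting in place goes wrong when
// two occurrences of the same variable share one box.
class IntBox { int n; IntBox(int n) { this.n = n; } }

abstract class Term {}
class Var extends Term { IntBox idx; Var(IntBox idx) { this.idx = idx; } }
class Lam extends Term { Term body; Lam(Term body) { this.body = body; } }
class App extends Term { Term f, a; App(Term f, Term a) { this.f = f; this.a = a; } }

public class ShiftDemo {
    // Shift free variables (index >= cutoff) up by one, mutating in place.
    static void shift(Term t, int cutoff) {
        if (t instanceof Var) {
            Var v = (Var) t;
            if (v.idx.n >= cutoff) v.idx.n++;   // mutates whatever box this Var points at
        } else if (t instanceof Lam) {
            shift(((Lam) t).body, cutoff + 1);  // one more binder in scope
        } else {
            App a = (App) t;
            shift(a.f, cutoff);
            shift(a.a, cutoff);
        }
    }

    public static void main(String[] args) {
        // Aliased: both occurrences of the free variable (x x) share one IntBox,
        // so the in-place shift hits the same box twice.
        IntBox shared = new IntBox(0);
        Term bad = new App(new Var(shared), new Var(shared));
        shift(bad, 0);
        System.out.println("aliased: " + shared.n);            // prints 2, should be 1

        // Fresh box per occurrence: each gets shifted exactly once.
        IntBox b1 = new IntBox(0), b2 = new IntBox(0);
        Term ok = new App(new Var(b1), new Var(b2));
        shift(ok, 0);
        System.out.println("fresh: " + b1.n + " " + b2.n);     // prints 1 1
    }
}
```

In ML the term would be an immutable datatype and shift would rebuild the term, so sharing is harmless; in Java the cheap fix is either never sharing index boxes between nodes or making shift return a new term instead of mutating.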