Both of the tests you describe are equivalent.
If I have two hypotheses:
$$H_0:\mu=0$$
$$H_1:\mu\neq 0$$
then they are equivalent to
$$H_0:\mu^2=0$$
$$H_1:\mu^2>0.$$
If the data are known to be normal, then the sample mean $\bar X$ will also be normal, with mean $\mu$ and variance $\sigma^2/n$ (which might be known or unknown).
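A quick simulation makes this concrete (a sketch using numpy, not part of the original argument; the values of $\mu$, $\sigma$, and $n$ are arbitrary):

```python
import numpy as np

# Draw 100,000 samples of size n from N(mu, sigma^2) and look at the
# empirical distribution of the sample mean X-bar.
rng = np.random.default_rng(0)
mu, sigma, n = 2.0, 3.0, 50
means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print(means.mean())       # close to mu = 2.0
print(means.std(ddof=1))  # close to sigma / sqrt(n) = 3 / sqrt(50) ~ 0.424
```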
If the data aren't known to be normal, then you can appeal to the central limit theorem and the above holds asymptotically. You claim that $\bar X^2$ will converge to a chi-squared variable "faster" than $\bar X$ will converge to a normal one. This is true in the sense that, as $n$ tends to infinity,
$$P(|\bar X-\mu|>|\bar X^2-\mu^2|)\to 1$$
(under $H_0$ we have $\mu=0$, so the inequality reads $|\bar X|>\bar X^2$, which holds whenever $0<|\bar X|<1$, an event whose probability tends to one),
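This convergence is easy to check by simulation under $H_0$, where $\mu=0$ (a sketch using numpy, not part of the original answer; the standard normal population is an arbitrary choice):

```python
import numpy as np

# With mu = 0, the event |X-bar - mu| > |X-bar^2 - mu^2| is just
# |X-bar| > X-bar^2, i.e. 0 < |X-bar| < 1, which becomes ever more
# likely as n grows and X-bar concentrates around 0.
rng = np.random.default_rng(1)
fractions = {}
for n in (2, 10, 100):
    xbar = rng.normal(0.0, 1.0, size=(50_000, n)).mean(axis=1)
    fractions[n] = np.mean(np.abs(xbar) > xbar**2)
    print(n, fractions[n])  # fraction rises towards 1 as n grows
```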
but that is not the whole story. We are performing a likelihood ratio test, or at least an approximate one, and the ratio comes out the same whether we frame it as a chi-squared test or a normal test. (Recall that the square of a standard normal random variable follows a $\chi^2_1$ distribution.) If the sample mean $\bar X$ comes out at the 97.5th percentile of the relevant normal or $t$-distribution (a two-sided p-value of 0.05), then the squared statistic comes out at the 95th percentile of the $\chi^2$ distribution (the critical values differ, $1.96^2\approx 3.84$, but the p-values agree).
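The agreement of the two p-values can be verified directly (a sketch using scipy; the data-generating values are arbitrary and $\sigma$ is taken as known so that a z-test applies):

```python
import numpy as np
from scipy import stats

# Two-sided z-test on X-bar vs. upper-tail chi-squared test on the
# squared z statistic: the p-values are identical.
rng = np.random.default_rng(2)
sigma, n = 1.0, 40
x = rng.normal(0.3, sigma, size=n)      # data with a small true mean

z = x.mean() / (sigma / np.sqrt(n))     # z statistic for H0: mu = 0
p_normal = 2 * stats.norm.sf(abs(z))    # two-sided normal p-value
p_chisq = stats.chi2.sf(z**2, df=1)     # upper-tail chi-squared p-value

print(p_normal, p_chisq)                # the same number

# The critical values differ, but match up under squaring:
print(stats.norm.ppf(0.975) ** 2)       # ~3.8415
print(stats.chi2.ppf(0.95, df=1))       # ~3.8415
```

The identity $P(\chi^2_1 > z^2) = P(|Z| > |z|)$ is exactly why the two tests always reject together.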