The two most common settings are $\min(20, T-1)$ and $\ln T$, where $T$ is the length of the series, as you correctly noted.
The first one is often attributed to the authoritative book by Box, Jenkins, and Reinsel, Time Series Analysis: Forecasting and Control, 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994. However, here's all they say about the lags on p. 314:
It's not a strong argument or suggestion by any means, yet people keep repeating it from one place to another.
The second setting for the lag is from Tsay, R. S., Analysis of Financial Time Series, 2nd ed. Hoboken, NJ: John Wiley & Sons, 2005; here's what he wrote on p. 33:
Several values of m are often used. Simulation studies suggest that the choice of m ≈ ln(T) provides better power performance.
This is a somewhat stronger argument, but there's no description of what kind of study was done. So, I wouldn't take it at face value. He also warns about seasonality:
This general rule needs modification in analysis of seasonal time series for which autocorrelations with lags at multiples of the seasonality are more important.
Summarizing: if you just need to plug some lag into the test and move on, then you can use either of these settings, and that's fine, because that's what most practitioners do. We're either lazy or, more likely, don't have time for this stuff. Otherwise, you'd have to conduct your own study of the power and properties of the statistic for the kind of series you deal with.
UPDATE.
Here's my answer to Richard Hardy's comment and his answer, which refers to another thread on CV that he started. You can see that the exposition in the accepted answer in that thread (accepted by Richard Hardy himself) is clearly based on an ARMAX model, i.e. a model with exogenous regressors $x_t$:
$$y_t = x_t'\beta + \phi(L)y_t + u_t$$
However, the OP did not indicate that he's doing ARMAX; to the contrary, he explicitly mentions ARMA:
After an ARMA model is fit to a time series, it is common to check the residuals via the Ljung-Box portmanteau test
One of the first papers that pointed to a potential issue with the LB test was Dezhbaksh, Hashem (1990), "The Inappropriate Use of Serial Correlation Tests in Dynamic Linear Models," Review of Economics and Statistics, 72, 126–132. Here's an excerpt from the paper:
As you can see, he doesn't object to using the LB test for pure time series models such as ARMA. See also the discussion in the manual of a standard econometrics package, EViews:
If the series represents the residuals from ARIMA estimation, the
appropriate degrees of freedom should be adjusted to represent the
number of autocorrelations less the number of AR and MA terms
previously estimated. Note also that some care should be taken in
interpreting the results of a Ljung-Box test applied to the residuals
from an ARMAX specification (see Dezhbaksh, 1990, for simulation
evidence on the finite sample performance of the test in this setting)
Yes, you have to be careful with ARMAX models and the LB test, but you can't make a blanket statement that the LB test is always wrong for all autoregressive series.
UPDATE 2
Alecos Papadopoulos's answer shows why the Ljung-Box test requires the strict exogeneity assumption. He doesn't show it in his post, but the Breusch-Godfrey test (an alternative test) requires only weak exogeneity, which is better, of course. This is what Greene, Econometric Analysis, 7th ed., says on the differences between the tests, p. 923:
The essential difference between the Godfrey–Breusch and the
Box–Pierce tests is the use of partial correlations (controlling for X
and the other variables) in the former and simple correlations in the
latter. Under the null hypothesis, there is no autocorrelation in εt ,
and no correlation between xt and εs in any event, so the two tests
are asymptotically equivalent. On the other hand, because it does not
condition on xt , the Box–Pierce test is less powerful than the LM
test when the null hypothesis is false, as intuition might suggest.