First, you specify a single LED manufacturer and part number. The part-to-part range in Vf will not be as great as you suggest (nowhere near 0.5V).
Second, small variations in brightness are not readily detectable by eye, so you don't have to worry about minor unit-to-unit differences.
Third, when possible, you power the LEDs from a regulated voltage rather than the battery, so that you remove one source of variation.
Fourth, when the only power source available is variable (such as a battery), you drive the LED with a current source instead of a voltage source with a current-limiting resistor. If at least one regulated voltage is available (even a low one), it is pretty easy to make a satisfactory current source for driving an LED indicator using only one transistor and a few resistors. This is cheap, but it does take up room on highly space-constrained designs.
If there is not even one single regulated voltage available, you can still make a decent current source using two diodes in series as a voltage reference.
I am not sure if I am a real engineer, but I have had to do all this stuff while designing consumer products, and that is how I dealt with it. One other thing that can really get you with LED indicators is when heavy loads cause the battery voltage to sag. For example, a vibration motor or speaker may cause battery voltage to droop on some products. That droop may cause a noticeable flicker or variation in the LED brightness when the LED is driven from the battery. This is another reason to use a current source instead.
Here is a current source for when the LED is powered from the battery, but you have a GPIO signal available which is derived from a regulated voltage:
[CircuitLab schematic: single-transistor current source with a base divider, driven from a regulated GPIO]
In the above schematic, it doesn't matter if the LED is powered from 3.3V or VBATT or whatever, as long as the GPIO is powered from a regulated source. I copied this from another answer. You would want to tweak the emitter resistor to get the specific current you are looking for. When there is not much overhead available, you can also reduce R2 so that the base voltage is less than 1V.
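To make the design math concrete, here is a rough sketch of how the current gets set in this kind of divider-plus-emitter-resistor circuit. The component names (R1, R2, RE) and all of the values below are my assumptions for illustration, not the values from the schematic:

```python
# Design math for a GPIO-driven single-transistor current source.
# All component values are assumptions chosen to land near 9mA.

V_GPIO = 3.3            # regulated GPIO high level (V)
V_BE = 0.65             # assumed base-emitter drop (V)
R1, R2 = 10e3, 4.7e3    # assumed base divider, R2 on the bottom (ohms)
RE = 47.0               # assumed emitter resistor (ohms)

# Divider sets the base voltage (ignoring base current loading).
V_base = V_GPIO * R2 / (R1 + R2)

# The emitter sits one VBE below the base; RE then sets the
# emitter current, which is approximately the LED (collector) current.
I_led = (V_base - V_BE) / RE

print(f"V_base = {V_base:.2f} V, I_LED = {I_led * 1000:.1f} mA")
```

With these assumed values the base sits near 1V, which is why reducing R2 is the knob to use when there isn't much overhead: it lowers the base voltage and therefore the voltage burned across the emitter resistor.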
Here is a circuit for when there is no regulated voltage available:
[CircuitLab schematic: current source using two series diodes (D1, D2) as the base voltage reference for Q1, with emitter resistor R3]
In the above circuit, D1 and D2 act as a voltage reference. Their combined forward drop will vary somewhat, but far less than the battery voltage. This roughly constant voltage at the base of Q1 appears, minus one VBE drop, across R3, giving a roughly constant collector current (the transistor will not saturate unless VBATT is very low). I haven't actually used this in a production design, but I believe it would work OK.
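The arithmetic behind that circuit fits in a few lines. The diode drop, VBE, and R3 value below are assumptions chosen to land near 9mA, not the schematic's actual values:

```python
# Design math for the two-diode-reference current source.
# All values are assumptions for illustration.

V_D = 0.65     # assumed forward drop of each reference diode (V)
V_BE = 0.65    # assumed base-emitter drop of Q1 (V)
R3 = 72.0      # assumed emitter resistor (ohms)

V_base = 2 * V_D         # two series diodes hold the base at ~1.3V
V_R3 = V_base - V_BE     # emitter sits one VBE below the base
I_led = V_R3 / R3        # emitter current ~= collector (LED) current

print(f"I_LED ~= {I_led * 1000:.1f} mA")
```

A nice property falls out of the math: if the diodes and the transistor's base-emitter junction have similar drops and track each other over temperature, the voltage across R3 is approximately one diode drop, which partially cancels the VBE variation.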
Compared with a simple saturated switch, both circuits do a good job of maintaining the desired current even when there is barely enough voltage available to keep the LED illuminated.
Here are some simulation results comparing the simple saturated switch with current-limiting resistor (D1), the voltage-divider reference circuit (D2), and the two-diode reference (D5). This is with a 3V LED. Note that the resistor values have been tweaked so each circuit draws around 9mA at VBATT = 4.2V.
As you can see, the current source with the voltage-divider reference maintained good regulation down to roughly 3.35V, so it needs only about 350mV of overhead.
The two-diode reference circuit maintained good regulation down to around 3.45V, which is about 450mV of overhead.
The standard circuit really doesn't maintain a regulated current at all. Current drops linearly with battery voltage.
Also note that both the two-diode reference circuit and the voltage-divider reference circuit carry higher current than the standard circuit at every battery voltage except the maximum, where all three were tuned to match.
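As a rough numerical illustration of that comparison (not a reproduction of the actual simulation), here is a sketch with assumed values: a 3V LED, circuits tuned for about 9mA at 4.2V, and an assumed 350mV dropout for the current source:

```python
# Rough comparison of LED current vs. battery voltage for the simple
# resistor-limited switch and an idealized current source.
# All values are assumptions chosen to match the ~9mA @ 4.2V tuning.

V_LED = 3.0                       # assumed LED forward voltage (V)
I_SET = 0.009                     # current-source set point (A)
R_LIMIT = (4.2 - V_LED) / I_SET   # simple circuit: ~133 ohms
V_OVERHEAD = 0.35                 # assumed current-source dropout (V)

def i_simple(vbatt):
    # Current drops linearly with battery voltage.
    return max(vbatt - V_LED, 0.0) / R_LIMIT

def i_source(vbatt):
    # Regulates at I_SET until it runs out of overhead, then falls
    # off roughly linearly (a crude model of dropout behavior).
    if vbatt >= V_LED + V_OVERHEAD:
        return I_SET
    return max(vbatt - V_LED, 0.0) / (V_OVERHEAD / I_SET)

for vbatt in (4.2, 3.9, 3.6, 3.35, 3.1):
    print(f"{vbatt:.2f} V: simple {i_simple(vbatt) * 1000:4.1f} mA, "
          f"source {i_source(vbatt) * 1000:4.1f} mA")
```

Even this crude model shows the two behaviors described above: the resistor-limited circuit's current sags steadily as the battery discharges, while the current source holds its set point until the battery nears the LED's forward voltage plus the dropout.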