Aside from the silliness of "can I ping you?", someone did make the
valuable observation that ping might not be the best way to measure
content delivery performance.
A better way would be to measure actual delivery performance directly,
by noting the size and delivery time of a real, bona fide request.
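As a rough sketch of that measurement (the names `RequestSample` and `throughput_bps` are mine, purely illustrative):

```python
# Hypothetical sketch: derive effective throughput from one served request.
from dataclasses import dataclass

@dataclass
class RequestSample:
    bytes_sent: int      # response size in bytes
    elapsed_ms: float    # wall-clock delivery time in milliseconds

def throughput_bps(sample: RequestSample) -> float:
    """Effective throughput in bytes per second for a single request."""
    return sample.bytes_sent / (sample.elapsed_ms / 1000.0)

# e.g. a 512 KB response delivered in 2000 ms
print(throughput_bps(RequestSample(bytes_sent=524288, elapsed_ms=2000.0)))
```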
The traffic-directing system could default to random
source servers (perhaps with coarse regional constraints)
until enough data had been gathered to guide future server decisions.
As more client requests come in, the system could occasionally test
other source servers to make sure it is still picking the best one.
A significant drop in throughput could also trigger some re-testing.
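The whole policy could be sketched roughly like this; every name and threshold below is an assumption of mine, not something from the original suggestion:

```python
# Illustrative selection policy: default to a random server until enough
# samples exist, occasionally re-test other servers, and flag a server for
# re-testing after a sharp throughput drop. Thresholds are made up.
import random

MIN_SAMPLES = 10      # samples needed before trusting a server's average
EXPLORE_RATE = 0.05   # fraction of requests used to re-test other servers
DROP_FACTOR = 0.5     # re-test if throughput falls below half the average

class ServerPicker:
    def __init__(self, servers):
        self.stats = {s: [] for s in servers}   # server -> throughput samples

    def record(self, server, bps):
        self.stats[server].append(bps)

    def pick(self):
        seasoned = {s: v for s, v in self.stats.items() if len(v) >= MIN_SAMPLES}
        if not seasoned or random.random() < EXPLORE_RATE:
            return random.choice(list(self.stats))        # explore
        # exploit: best average throughput observed so far
        return max(seasoned, key=lambda s: sum(seasoned[s]) / len(seasoned[s]))

    def needs_retest(self, server, latest_bps):
        hist = self.stats[server]
        if len(hist) < MIN_SAMPLES:
            return False
        avg = sum(hist) / len(hist)
        return latest_bps < DROP_FACTOR * avg
```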
An additional benefit would be a reasonable guess at the end user's
bandwidth (dialup, broadband, etc.).
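Something as crude as this would probably do for that guess; the cut-offs below are invented for illustration:

```python
# Guess the user's connection class from measured throughput.
# The thresholds are assumptions, not from the original post.
def connection_class(bytes_per_sec: float) -> str:
    if bytes_per_sec < 8_000:        # under ~64 kbit/s: dialup territory
        return "dialup"
    if bytes_per_sec < 125_000:      # under ~1 Mbit/s
        return "low broadband"
    return "broadband"
```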
I see that Apache already has an optional custom-log format directive for
transfer time in whole seconds (%T). It would be trivial to modify
that to give milliseconds.
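For what it's worth, a custom log along these lines would capture the needed fields per request; note that Apache 2.x's mod_log_config also offers %D (time taken in microseconds), which would avoid patching %T at all. The format name "delivery" is just an example:

```apache
# Sketch of a custom log capturing size and delivery time per request.
# %b = response bytes sent, %T = time in whole seconds,
# %D = time in microseconds (Apache 2.x).
LogFormat "%h %t \"%r\" %>s %b %T %D" delivery
CustomLog logs/delivery_log delivery
```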
If you could get this to work right it would be completely non-invasive
and should produce better overall results.