To expand on what others have said here, I find it helpful to think of BGP as a policy enforcement protocol, rather than as a distance vector routing protocol.
To that end, there’s a generally expected hierarchy of routes, and then a lot of individuality between networks. Having done traffic engineering for some global CDNs, I’d say there’s a fair amount of inbound traffic control you can get just by letting an understanding of how most other providers think about this guide your transit and peering policies — and a remaining portion that generally has to be solved through discussions, negotiations, or commercial arrangements with the sending party or their upstreams.
As for the general rules: local-preference trumps everything else, and AS path length comes next. Other things being equal, networks usually like to hand traffic off on a short AS path, and at the closest point to its origination (there are valid performance reasons for both), but local-preference policies will override both of those.
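To make that ordering concrete, here’s a deliberately simplified sketch of the comparison — local-preference first, then AS path length. Real BGP best-path selection has more tie-breakers after these (origin, MED, eBGP vs. iBGP, IGP metric, router ID), and the route data here is made up:

```python
def best_path(routes):
    # Highest local-pref wins; among equals, shortest AS path wins.
    # All later BGP tie-breakers are omitted for clarity.
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

routes = [
    {"via": "transit A",  "local_pref": 100, "as_path": [3356, 65010]},
    {"via": "peer B",     "local_pref": 200, "as_path": [65020, 65021, 65010]},
    {"via": "customer C", "local_pref": 300, "as_path": [65030, 65031, 65032, 65010]},
]

# The customer route wins despite having the *longest* AS path,
# because local-pref is compared before path length.
print(best_path(routes)["via"])  # customer C
```

That’s why prepending alone often disappoints: it only matters once local-preference has already tied.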
Local-preferences usually have three default tiers — customer, peering, and transit. In other words: get paid, hand off for free, and pay. There are often some additional preferences that can be selected for traffic engineering reasons, either internally or by customers using BGP communities. BUT, those communities generally don’t carry beyond the AS you signal (they’re often stripped or ignored), so even if you manage to influence one hop upstream, you may still find your upstream provider announcing your routes to networks that have different ideas.
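A sketch of what that tiering plus community signaling looks like on ingress. The tier values (300/200/100) are a common convention, not a standard, and the community strings and AS number here are entirely hypothetical:

```python
# Conventional default tiers: customer > peer > transit.
DEFAULT_LOCAL_PREF = {"customer": 300, "peer": 200, "transit": 100}

# Hypothetical TE communities, e.g. "65000:80" meaning
# "inside AS 65000, set local-pref to 80 on this route".
TE_COMMUNITIES = {"65000:80": 80, "65000:120": 120}

def ingress_local_pref(session_type, communities):
    lp = DEFAULT_LOCAL_PREF[session_type]
    for c in communities:
        lp = TE_COMMUNITIES.get(c, lp)  # recognized community overrides the tier
    return lp

print(ingress_local_pref("transit", []))             # 100
print(ingress_local_pref("customer", ["65000:80"]))  # 80 — customer deprefs its own route
```

Note the override only happens inside the AS that defines those communities — the next AS along the path applies its own policy, which is exactly the limitation described above.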
One example of this from the early days of anycasted DNS root servers involved k.root-servers.net installing a node in Delhi, which pulled 60% of its traffic from North America. This was clearly non-optimal. They had attempted to get routing diversity by getting transit from different providers in different parts of the world, but their Delhi node was, if I recall correctly, a customer of a customer of a customer of Level3. Oops.
So, what do you do about this?
If you’re a global network operator, you probably attempt to maintain consistent peering/transit relationships across sites. That way, AS paths and local-preferences should be fairly even, and you can let nearest exit routing do its thing.
If you have a smaller network, but have multiple interconnection locations that are far enough apart to make a performance difference, make the same transit and peering relationships at each one. Make exceptions only for peers (not transit providers) whose customers or services only exist in one of the areas, and make sure they don’t announce your routes to their upstreams. That way you won’t trombone traffic.
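The “same relationships at each site” advice is checkable mechanically. A toy consistency check, with made-up site and AS data — it flags any site missing a transit provider that exists elsewhere (local-only *peers* are fine, per the exception above):

```python
sites = {
    "sjc": {"transit": {3356, 1299}, "peers": {65100, 65200}},
    "sin": {"transit": {3356},       "peers": {65100, 65200, 65300}},
}

def missing_transit(sites):
    # Every transit AS seen anywhere should be present at every site;
    # otherwise inbound traffic can trombone through the site that has it.
    all_transit = set().union(*(s["transit"] for s in sites.values()))
    return {name: sorted(all_transit - s["transit"])
            for name, s in sites.items() if all_transit - s["transit"]}

print(missing_transit(sites))  # {'sin': [1299]}
```

Here Singapore lacks AS1299, so anything that AS1299 carries toward you will land in San Jose regardless of where it originated.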
If you’ve done all that, and traffic is still coming in the wrong place, then you start talking to people. “Hey, I’m buying transit from you in both Asia and the Western US, and all my traffic from asian-country-x is coming into San Jose. Why?” “Well, they only have a 100 Mb/s interconnection to us in Asia. We have to traffic engineer around it.” And then you have to figure out how to convince some national telco to want to talk to you more than they want to talk to your transit provider.
I think in your case, I would be asking why you have a 5,000 mile, five-prepend loop to get to a provider ten miles away. It suggests that your network is doing things 5,000 miles away that are inconsistent with what you’re doing locally, or that you have upstreams who aren’t interconnecting locally, or aren’t maintaining sufficient capacity or sufficient political relationships on those paths. All of those would predictably produce this result. The solution is likely to take a look at your transit relationships, ask your transit providers about their transit relationships, and either supplement or switch to a set of transit providers who can deliver the routing you want.