Interesting AliExpress web server behavior...

So I was having trouble connecting to the AliExpress web server this evening and decided to investigate a little.

What I found surprised me…

owen@odmbpro3-3 ~ % openssl s_client -connect www.aliexpress.com:443

CONNECTED(00000005)

depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA

verify return:1

depth=1 C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1

verify return:1

depth=0 C = CN, ST = \E6\B5\99\E6\B1\9F\E7\9C\81 (Zhejiang), L = \E6\9D\AD\E5\B7\9E\E5\B8\82 (Hangzhou), O = Alibaba Cloud Computing Ltd., CN = ae01.alicdn.com

verify return:1

… certificate stuff, blah blah from Akamai, routine…

SSL-Session:

Protocol : TLSv1.3

Cipher : AEAD-CHACHA20-POLY1305-SHA256

Session-ID:

Session-ID-ctx:

Master-Key:

Start Time: 1702187128

Timeout : 7200 (sec)

Verify return code: 0 (ok)
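
If you'd rather script the same check, here's a minimal sketch using Python's standard ssl module. It disables hostname matching (much as openssl s_client effectively does above) while still verifying the chain; the certificate served today may of course differ from the one shown:

    import socket
    import ssl

    # Pull the subject of the leaf certificate presented for
    # www.aliexpress.com, to compare against the depth=0 line above.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # mimic s_client: verify the chain, not the name

    with socket.create_connection(("www.aliexpress.com", 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.aliexpress.com") as tls:
            subject = dict(rdn[0] for rdn in tls.getpeercert()["subject"])
            print(subject.get("commonName"))  # e.g. ae01.alicdn.com at the time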

On Sat, Dec 09, 2023 at 09:55:31PM -0800, a message of 1136 lines which said:

But why would AliExpress be redirecting to DDN space? Is this
legitimate? Ali hoping to get away with squatting, or something
else?

No idea. The IP address does not reply to HTTP requests, anyway. A
practical joke?

Note that this redirection takes place only when there is no
User-Agent field. If you say 'User-Agent: Mozilla', you get a proper
redirection, in my case to https://fr.aliexpress.com/.
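
That User-Agent-dependent redirect is easy to check for yourself. Here's a minimal sketch using Python's standard http.client, which sends no User-Agent header unless you add one; the behavior (and the redirect target) may well have changed since this exchange:

    import http.client

    # Request / with and without a User-Agent and compare the Location
    # headers in the redirect responses. http.client adds no User-Agent
    # of its own, so the first request genuinely omits the field.
    for headers in ({}, {"User-Agent": "Mozilla"}):
        conn = http.client.HTTPSConnection("www.aliexpress.com", timeout=10)
        conn.request("GET", "/", headers=headers)
        resp = conn.getresponse()
        print(headers or "(no User-Agent)", resp.status, resp.getheader("Location"))
        conn.close()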

Hi,

Location: http://33.3.37.57/

But why would AliExpress be redirecting to DDN space? Is this legitimate? Ali
hoping to get away with squatting, or something else?

Not very long ago I worked for a well-known e-commerce platform where we nearly
ran out of RFC1918 space. We seriously considered using what was then
unadvertised DOD space to supplement RFC1918 space inside our data centers.

Perhaps AliExpress did get to that level of desperation?

Thanks,

Sabri

I notice a weird issue like this with Alibaba when I try to use my Comcast connection. Turn my wifi off and now it works flawlessly.

Are you using your Comcast connection?

-Mike

Starting to digress here for a minute…

How big would a network need to get, in order to come close to exhausting RFC1918 address space? There are a total of 17,891,328 IP addresses across the 10/8, 172.16/12, and 192.168/16 prefixes. If one were to allocate 10 addresses to each host, that means it would require 1,789,132 hosts to exhaust the space.

  • Christopher H.
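
That arithmetic checks out. A quick sanity check using Python's standard ipaddress module:

    import ipaddress

    # Sum the sizes of the three RFC1918 blocks, then divide by an
    # assumed 10 addresses per host.
    blocks = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
    total = sum(ipaddress.ip_network(b).num_addresses for b in blocks)
    print(total)        # 17891328
    print(total // 10)  # 1789132 hosts at 10 addresses each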

AWS. They exhausted it, broke up the regions reusing the address
space, then exhausted it again.

Exhaustion was one of the motivations for Facebook going IPv6-only internally.

Regards,
Bill Herrin

How big would a network need to get, in order to come close to exhausting RFC1918 address space? […] If one were to allocate 10 addresses to each host, that means it would require 1,789,132 hosts to exhaust the space.

Total availability is not usually the problem; poor allocation of space done in the 80s is.

A while ago I worked with a telco which had ‘run out of 10/8’ by allocating multiple /16s to its largest sites for lan/mgmt/control. The plan to ‘free up IP space’ included resetting practically every 20-year-old air conditioner they had in the country and putting it in a different subnet, and the same for the fire and access control systems (the air conditioners and fire control systems specifically didn’t support changing IP address; you had to drop the entire config).

If you think about the scale of that operation, suddenly 33/8 (another 16.7 million addresses) becomes very, very appealing.

On Sat, Dec 09, 2023 at 09:55:31PM -0800,
a message of 1136 lines which said:

But why would AliExpress be redirecting to DDN space? Is this
legitimate? Ali hoping to get away with squatting, or something
else?

No idea. The IP address does not reply to HTTP requests, anyway. A
practical joke?

Note that this redirection takes place only when there is no
User-Agent field. If you say 'User-Agent: Mozilla', you get a proper
redirection, in my case to https://fr.aliexpress.com/.

My guess would be they’re doing this to redirect unwanted / non-legitimate traffic away.

But why would AliExpress be redirecting to DDN space? Is this legitimate? Ali hoping to get away with squatting, or something else?

I’ve seen a large number of cases where a company was using someone else’s non-RFC1918 space for some reason, and that was accidentally exposed via application communication when some process/procedure they were using to fix it up didn’t work. This feels like that to me.

No, in this case I was using an HE uplink from the cabinet in FMT2 for testing, using my AS1734 space 192.159.10.0/24 as the source address.

Owen

Given microservices and VM architectures these days, it’s not even difficult to imagine a company as large as Alibaba burning through more than 17 million hosts.

Owen

Hi,

Starting to digress here for a minute...
How big would a network need to get, in order to come close to exhausting RFC1918
address space? There are a total of 17,891,328 IP addresses across the 10/8,
172.16/12, and 192.168/16 prefixes. If one were to allocate 10 addresses to each
host, that means it would require 1,789,132 hosts to exhaust the space.

Imagine a 20-year-old platform originally built in the late 90s/early 2000s,
gradually evolving into what it is today. You'll have several versions of design,
several versions of applications, several versions of networking, firewalls, and
other infrastructure. It is so old that, when it was first built, each HTTPS
site required its own IP address (this predates SNI).

What you end up with is your typical pod design with 40-some TORs where you
allocate a /24 per IRB, not knowing how many hosts are going to end up on the
hypervisor. And due to PCI-DSS restrictions, you may need multiple IRBs per TOR.
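
To put rough numbers on that kind of design (the per-pod figures below are illustrative assumptions, not Sabri's actual numbers):

    # Illustrative pod sizing: 40 TORs per pod, 3 IRBs per TOR (PCI-DSS
    # segmentation), a /24 (256 addresses) reserved per IRB.
    tors_per_pod = 40
    irbs_per_tor = 3
    addrs_per_irb = 256

    per_pod = tors_per_pod * irbs_per_tor * addrs_per_irb
    print(per_pod)                  # 30720 addresses reserved per pod
    print(17_891_328 // per_pod)    # 582 -- roughly how many such pods fit in RFC1918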

And all of this in an environment where datacenters and pods are scaled based on
the amount of power available, not the amount of space.

Now factor in "legacy" pods and datacenters that were never properly migrated out
of, an address-guzzling corporate network administered by a separate team that
for some reason also needs to talk to prod and thus demands unique RFC1918 space
out of the same pool, and all of a sudden that DOD space looks awfully appealing.

This is how you end up with projects named "Save The Bacon".

Even after very rigorous reclaiming we still ended up using close to 60% of
RFC1918 space.

Thanks,

Sabri

And all of this in an environment where datacenters and pods are scaled based on
the amount of power available, not the amount of space.

In my experience, back then, most DCs ran out of cooling well before they ran out of power.

YMMV

Owen