[NANOG] Comcast latency

Let's think smaller. /16 shall we say?

Like the /16 here. Originally the SRI / ARPANET SF Bay Packet Radio
network that started back in 1977. Now controlled by a shell company
belonging to a shell company belonging to a "high volume email
deployer" :)

http://blog.washingtonpost.com/securityfix/2008/04/a_case_of_network_identity_the_1.html

srs

nanog@daork.net (Nathan Ward) writes:

> That also doesn't take into account how many /8's are being hoarded by
> organizations that don't need even 25% of that space.

Unless you're expecting those organisations to be really nice and make
that address space available to other organisations (i.e. their RIR/
LIR, or the highest bidder on ebay), ...

first, a parable:

in datacenters, it used to be that the scarce resource was rack space, but
then it was connectivity, and now it's power/heat/cooling. there are fallow
fields of empty racks too far from fiber routes or power grids to be filled,
all because the scarcity selector has moved over time. some folks who were
previously close to fiber routes and/or power grids found that they could
do greenfield construction and that the customers would naturally move in,
since too much older datacenter capacity was unusable by modern standards.

then, a recounting:

michael dillon asked a while back what could happen if MIT (holding 18/8)
were to go into the ISP business, offering dialup and/or tunnel/VPN access,
and bundling a /24 with each connection, and allowing each customer to
multihome if they so chose. nobody could think of an RIR rule, or an ISP
rule, or indeed anything else that could prevent this from occurring. now,
i don't think that MIT would do this, since it would be a distraction for
them, and they probably don't need the money, and they're good guys, anyway.

now, a prediction:

but if the bottom feeding scumsuckers who saw the opportunity now known as
spam, or the ones who saw the opportunity now known as NXDOMAIN remapping,
or the ones who saw the opportunity now known as DDoS for hire, realize that
the next great weakness in the internet's design and protocols is explosive
deaggregation by virtual shill networking, then we can expect business plans
whereby well suited shysters march into MIT, and HP, and so on, offering to
outsource this monetization. "you get half the money but none of the
distraction, all you have to do is renumber or use NAT or IPv6, we'll do
the rest." nothing in recorded human history argues against this occurring.

I'm not sure that I would tar everyone who does NXDOMAIN remapping with
the same brush as SPAM and DDOS. Handled the way OpenDNS does, on an
opt-in basis, it's a "good thing" IMO.

I would also say that disaggregating and remarketing dark address space,
assuming it's handled above board and in a way that doesn't break the
'net, could be a "very good thing". The artifact of MIT and others
having /8s while the entire Indian subcontinent scrapes for /29s, can
hardly be considered optimal or right. It's time for the supposedly
altruistic good guys to do the right thing, and give back the resources
they are not using, that are sorely needed. How about they resell it and
use the money to make getting an education affordable?

The routing prefix problem, OTOH, is an artificial shortage caused by
(mostly one) commercial entities maximizing their bottom line by
producing products that were obviously underpowered at the time they
were designed, so as to minimize component costs, and ensure users
upgraded due to planned obsolescence.

Can you give me a good technical reason, in this day of 128 bit network
processors that can handle 10GigE, why remapping the entire IPv4 address
space into /27s and propagating all the prefixes is a real engineering
problem? Especially if those end-points are relatively stable as to
connectivity, the allocations are non-portable, and you aggregate.
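For scale, here's the back-of-envelope arithmetic behind that question (the 64-bytes-per-FIB-entry figure is an assumption; real per-entry cost varies widely by platform):

```python
# What "remap all of IPv4 into /27s" would mean for the global table.
TOTAL_V4 = 2 ** 32                            # all IPv4 addresses
PREFIX_LEN = 27
ADDRS_PER_PREFIX = 2 ** (32 - PREFIX_LEN)     # 32 addresses per /27
num_prefixes = TOTAL_V4 // ADDRS_PER_PREFIX   # 2**27 prefixes

BYTES_PER_ENTRY = 64                          # assumed per-FIB-entry cost
fib_bytes = num_prefixes * BYTES_PER_ENTRY

print(f"{num_prefixes:,} prefixes")           # 134,217,728 prefixes
print(f"~{fib_bytes / 2**30:.1f} GiB of FIB") # ~8.0 GiB of FIB
```

Even without aggregation politics, that's roughly 500x the size of the 2008 default-free zone.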

How is fork-lifting the existing garbage for better IPv4 routers any
worse than migrating to IPv6? At least with an IPv4 infrastructure
overhaul, it's relatively transparent to the end user. It's not
either/or anyway. Ideally you would have an IPv6 capable router that
could do IPv4 without being babied as to prefix table size or update
rate.

IPv4 has enough addresses for every computer on Earth, and then some.

That having been said, I think going to IPv6 has a lot of other benefits
that make it worthwhile.

YMMV, IANAL, yadda yadda yadda

I'm not sure that I would tar everyone who does NXDOMAIN remapping with
the same brush as SPAM and DDOS. Handled the way OpenDNS does, on an
opt-in basis, it's a "good thing" IMO.

i agree, and i'm on record as saying that since opendns doesn't affect the
people who do not knowingly sign up for it, and that it's free even to folks
who opt out of the remapping, it is not an example of inappropriate trust
monetization (as it would be if your hotel or ISP did it to you without your
consent, or, offered you no alternative, or, offered you no opt-out.)

I would also say that disaggregating and remarketing dark address space,
assuming it's handled above board and in a way that doesn't break the
'net, could be a "very good thing".

that's a "very big if".

The routing prefix problem, OTOH, is an artificial shortage caused by
(mostly one) commercial entities maximizing their bottom line by
producing products that were obviously underpowered at the time they
were designed, so as to minimize component costs, and ensure users
upgraded due to planned obsolescence.

i completely disagree, but, assuming you were right, what do you propose to
do about it, or propose that we all do about it, to avoid having it lead
to some kind of global meltdown if new prefixes start appearing "too fast"?

Can you give me a good technical reason, in this day of 128 bit network
processors that can handle 10GigE, why remapping the entire IPv4 address
space into /27s and propagating all the prefixes is a real engineering
problem? Especially if those end-points are relatively stable as to
connectivity, the allocations are non-portable, and you aggregate.

you almost had me there. i was going to quote some stuff i remember tony li
saying about routing physics at the denver ARIN meeting, and i was going to
explain three year depreciation cycles, global footprints, training, release
trains, and some graph theory stuff like number of edges, number of nodes,
size of edge, natural instability. could've been fun, especially since many
people on this mailing list know the topic better than i do and we could've
gone all week with folks correcting each other in the ways they corrected me.

but the endpoints aren't "stable" at all, not even "relatively." and the
allocations are naturally "portable". and "aggregation" won't be occurring.
so, rather than answer your "technical reason" question, i'll say, we're in
a same planet different worlds scenario here. we don't share assumptions
that would make a joint knowledge quest fruitful.

How is fork-lifting the existing garbage for better IPv4 routers any
worse than migrating to IPv6? At least with an IPv4 infrastructure
overhaul, it's relatively transparent to the end user. It's not
either/or anyway. Ideally you would have an IPv6 capable router that
could do IPv4 without being babied as to prefix table size or update
rate.

forklifting in routers that can speak ipv6 means that when we're done, the
new best-known limiting factor to internet growth will be something other
than the size of the address space. and noting that the lesser-known factor
that's actually much more real and much more important is number of prefixes,
there is some hope that the resulting ipv6 table won't have quite as much
nearly-pure crap in it as the current ipv4 table has. eventually we will of course
fill it with TE, but by the time that can happen, routing physics will have
improved some. my hope is that by the time a midlevel third tier multihomed
ISP needs a dozen two-megaroute dual stack 500Gbit/sec routers to keep up
with other people's TE routes, then, such things will be available on e-bay.

everything about IP is transparent to the end user. they just want to click
on stuff and get action at a distance. dual stack ipv4/ipv6 does that pretty
well already, for those running macos, vista, linux, or bsd, whose providers
and SOHO boxes are offering dual-stack. there's reason to expect that end
users will continue to neither know nor care what kind of IP they are using,
whether ipv6 takes off, or doesn't.

IPv4 has enough addresses for every computer on Earth, and then some.

if only we didn't need IP addresses for every coffee cup, light switch,
door knob, power outlet, TV remote control, cell phone, and so on, then we
could almost certainly live with IPv4 and NAT. however, i'd like to stay
on track toward digitizing everything, wiring most stuff, unwiring the rest,
and otherwise making a true internet of everything in the real world, and
not just the world's computers.

That having been said, I think going to IPv6 has a lot of other benefits
that make it worthwhile.

me too.

Tomas L. Byrnes wrote:

IPv4 has enough addresses for every computer on Earth, and then some.

There are approximately 3.4 billion (or a little fewer) usable IP addresses. There are 3.3 billion mobile phone users buying approximately 400,000 IP-capable devices a day. That's a single industry. Notwithstanding how they are presently employed, what do you think those deployments are going to look like in 5 years? In 10?

How many IP addresses do you need to NAT 100 million customers? How much state do you have to carry to do port demux for their traffic?

I guess making it all scale is someone else's problem...
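To put rough numbers on the NAT question (sessions-per-customer and per-flow state size below are illustrative assumptions, not measurements):

```python
import math

# Rough sizing for NATting a large subscriber base.
CUSTOMERS = 100_000_000
SESSIONS_PER_CUSTOMER = 100      # assumed concurrent flows per subscriber
USABLE_PORTS_PER_IP = 64_512     # ports 1024-65535 on one public address
BYTES_PER_SESSION = 64           # assumed per-flow state (tuple + timers)

total_sessions = CUSTOMERS * SESSIONS_PER_CUSTOMER
public_ips = math.ceil(total_sessions / USABLE_PORTS_PER_IP)
state_bytes = total_sessions * BYTES_PER_SESSION

print(f"{public_ips:,} public IPs")                     # 155,010 public IPs
print(f"~{state_bytes / 10**9:.0f} GB of session state")
```

So even with aggressive port multiplexing you still burn a couple of /8-scale pools of public addresses, and the translator carries hundreds of gigabytes of flow state.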

but if the bottom feeding scumsuckers who saw the opportunity now known as
spam, or the ones who saw the opportunity now known as NXDOMAIN remapping,
or the ones who saw the opportunity now known as DDoS for hire, realize that
the next great weakness in the internet's design and protocols is explosive
deaggregation by virtual shill networking, then we can expect business plans
whereby well suited shysters march into MIT, and HP, and so on, offering to
outsource this monetization. "you get half the money but none of the
distraction, all you have to do is renumber or use NAT or IPv6, we'll do
the rest." nothing in recorded human history argues against this occurring.

paul, this is not the spanish inquisition or the great crusades.
nothing in human history argues against a lot of fantasies and black
helicopters. and yes, some of them actually come true, cf. iraq. but
i have a business to run, not a religious crusade. there is no news at
eleven, just more work to do.

some time back what we now call legacy space was given out under
policies which seemed like a good idea at the time. [ interestingly,
these policies were similar to the policies being used or considered for
ipv6 allocations today, what we later think of as large chunks that may
or may not be really well utilized. have you seen the proposal in ripe
to give everyone with v4 space a big chunk of v6 space whether they want
it or not? ] the people who gave those allocations and the people (or
organizations) who received them were not evil, stupid, or greedy. they
were just early adopters, incurring the risks and occasional benefits.

maybe it benefits arin's desperate search for a post-ipv4-free-pool era
business model to cast these allocation holders as evil (see the video
of arin's lawyer at nanog and some silly messages on the arin ppml
list), with the fantasy that there is enough legacy space that arin can
survive with its old business model for an extra year or two. i think
of this as analogous to the record companies sending the lawyers out
instead of joining the 21st century and getting on the front of the
wave. i hope that the result in arin's case is not analogously tragic.

arin's legacy registration agreement is quite lopsided, as has been
pointed out multiple times. the holder grants and gives up rights and
gains little they do not already have. but i am sure there will be some
who will sign it. heck, some people click on phishing links.

i suggest we focus on how to roll out v6 or give up and get massive
natting to work well (yuchhh!) and not waste our time rearranging the
deck chairs [0] or characterizing those with chairs as evil.

randy

William Warren wrote:

That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.

which ones would those be?

While I wouldn't call it hoarding, can any single (non-ISP) organization actually justify a /8? How many students does MIT have again?

legacy class A address space just isn't that big...

There is more legacy space (IANA_Registry + VARIOUS, using Geoff's labels) than all space allocated by the RIRs combined.

Regards,
-drc

The artifact of MIT and others
having /8s while the entire Indian subcontinent scrapes for /29s, can
hardly be considered optimal or right.

While perhaps intended as hyperbole, this sort of statement annoys me as it demonstrates an ignorance of how address allocation mechanisms work. It may be the case that organizations in India (usually people cite China, but whatever) might "scrape for /29s", but that is not because of a lack of address space at APNIC, but rather policies imposed by the carrier(s)/PTT/government.

It's time for the supposedly
altruistic good guys to do the right thing, and give back the resources
they are not using, that are sorely needed.

"For the good of the Internet" died some while back. There is currently no incentive for anyone with more address space than they need to return that address space.

How about they resell it and
use the money to make getting an education affordable?

If you believe this appropriate, I suggest you raise it on ppml@arin.net and see what sort of reaction you get.

The routing prefix problem, OTOH, is an artificial shortage caused by
(mostly one) commercial entities maximizing their bottom line
[...]
Especially if those end-points are relatively stable as to
connectivity, the allocations are non-portable, and you aggregate.

A free market doesn't work like that, prefixes aren't stable, and the problem is that you can't aggregate. If you're actually interested in this topic, I might suggest looking at the IRTF RRG working group archives.

IPv4 has enough addresses for every computer on Earth, and then some.

Unless you NAT out every bodily orifice, not even close.

Regards,
-drc

William Warren wrote:

That also doesn't take into account how many /8's are being hoarded by
organizations that don't need even 25% of that space.

which ones would those be?

While I wouldn't call it hoarding, can any single (non-ISP)
organization actually justify a /8? How many students does MIT have
again?

From the Wikipedia article on the Massachusetts Institute of Technology:

<quote>
MIT enrolls more graduate students (approximately 6,000 in total) than undergraduates (approximately 4,000).
</quote>

Let's assume 2 staff/faculty per student (don't we wish :). So that would be 30K total. Let's further assume 10 IP addresses per person to deal with laptops, servers, other computers, routers, etc. On top of the 30K hosts themselves, we're now at 330K.

That's nowhere near 25% of the /8 they have. Good thing they are announcing a /15, /16, and a /24* originated from their ASN too.

Just so we are clear, I have no idea how many servers, computers, or other things MIT might have to justify a /8, /15, /16, and /24. I'm just pointing out the number of students alone clearly doesn't justify their IP space.

UCLA, where the Internet was invented, only has 5x/16 + 2x/24. Obviously they're so much smarter they can utilize IP space better. (No, I'm not saying that just 'cause I went to UCLA. :)
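The estimate above, as a quick script (head counts and the ten-extra-addresses-per-person multiplier are assumptions from the post, not measured figures):

```python
# Sanity-checking the MIT head-count estimate against the size of a /8.
students = 10_000                  # ~6K grad + ~4K undergrad
people = students * 3              # students plus 2 staff/faculty each
addresses = people + people * 10   # a host per person plus 10 extras each
slash8 = 2 ** 24                   # 16,777,216 addresses in a /8

print(f"{addresses:,} addresses")          # 330,000 addresses
print(f"{addresses / slash8:.1%} of a /8") # 2.0% of a /8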

Since nobody mentioned it yet, there are now less than 1000 days projected
until IPv4 exhaustion:

IPv4 Address Report

Unfortunately that won't load for me over IPv6, path MTU black hole...

ps. 1000 days assumes no rush, speculation, or hoarding. Do people do
that?

Since the only people who can get really large blocks of IP addresses are the people who already have really large blocks of IP addresses, the eventual distribution of large blocks won't differ much depending on whether there will be a rush or not. Obviously the 99% of requests that use up only 17% of the space each year are of no importance in the grand scheme of things.

I was about to write that 1000 days is too optimistic/pessimistic, but (after trying to compensate for ARIN's strange bookkeeping practices) it looks like in 2006, 163 million addresses were given out, and 196 million in 2007. If the next few years also see an increase of 20% in yearly address use, then 1000 days sounds about right.

That means we'd have to use up 235 million addresses this year, while so far we're at 73 million, which puts us on track for 219 million. So maybe it will be 1050 days (which leaves us exactly a million addresses per day).
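The run-rate arithmetic in the last two paragraphs, spelled out (the allocation figures are the ones quoted above; the assumption that four months of 2008 had elapsed is mine):

```python
# Projecting 2008 address consumption from the posted figures.
used_2006 = 163_000_000
used_2007 = 196_000_000
growth = used_2007 / used_2006           # year-on-year growth factor
needed_2008 = used_2007 * 1.20           # ~235M to stay on the 20% curve
used_so_far = 73_000_000                 # assumed ~4 months into 2008
on_track_for = used_so_far * 3           # annualized run rate: ~219M

print(f"growth {growth - 1:.0%}, 2008 target {needed_2008 / 1e6:.0f}M, "
      f"run rate {on_track_for / 1e6:.0f}M")
```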

BTW, about the India thing: they should take their cue from China, which only had a few million addresses at the turn of the century but is now in the number two spot at ~ 150 million addresses. (Comparison: the US holds 1.4 billion, India 15 million, just behind Sweden which has 17 million.) China is now the biggest user of new address space.

http://www.bgpexpert.com/addressespercountry.php
http://www.bgpexpert.com/ianaglobalpool.php
http://www.bgpexpert.com/addrspace2007.php

(Make it "www.ipv4.bgpexpert..." if you have trouble reaching the site over v6.)

Randy Bush <randy@psg.com> writes:

back office software
ip and dns management software
provisioning tools
cpe
measurement and monitoring and billing

and, of course, backbone and aggregation equipment that can actually
handle real ipv6 traffic flows with acls and chocolate syrup.

chiming in late here... the situation on the edge (been looking at a
lot of gpon gear lately) is pretty dismal.

i won't bother mentioning the vendor who claimed their igmp
implementation supported ipv6 "just fine - we're a layer 2 device;
it's plug-and-play". srsly.

                                        ---rob

Suresh Ramasubramanian wrote:

Let's think smaller. /16 shall we say?

Like the /16 here. Originally the SRI / ARPANET SF Bay Packet Radio
network that started back in 1977. Now controlled by a shell company
belonging to a shell company belonging to a "high volume email
deployer" :)

http://blog.washingtonpost.com/securityfix/2008/04/a_case_of_network_identity_the_1.html

Which leads me to ask an OT but slightly related question. How do other SPs handle the blacklisting of ASNs (not prefixes but entire ASNs). The /16 that Suresh mentioned here is being originated by a well-known spam factory. All prefixes originating from that AS could safely be assumed to be undesirable IMHO and can be dropped. A little Googling for that /16 brings up a lot of good info including:

http://groups.google.com/group/news.admin.net-abuse.email/msg/5d3e3f89bb148a4c

Does anyone have any good tricks for filtering on AS path that they'd like to share? I already have my RTBH set up, so setting the next-hop for all routes originating from a given ASN to one of my blackhole routes (to null0, a sinkhole, or a scrubber) would be ideal. Not accepting the route at all and letting uRPF drop the traffic would be OK too.
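For concreteness, the blackhole-by-origin-AS idea might look like this on IOS-style gear (a sketch only; AS 64496/64500 and the 192.0.2.1 discard next-hop are documentation placeholders, not the actual networks in question):

```
! Match any route whose AS path ends in (i.e. originates from) AS 64496
ip as-path access-list 66 permit _64496$
!
! Point matching routes at a discard next-hop; permit everything else
route-map RTBH-BY-ORIGIN permit 10
 match as-path 66
 set ip next-hop 192.0.2.1
route-map RTBH-BY-ORIGIN permit 20
!
! Apply inbound on the relevant BGP sessions
router bgp 64500
 neighbor 198.51.100.1 route-map RTBH-BY-ORIGIN in
!
! The discard next-hop is statically routed to null0
ip route 192.0.2.1 255.255.255.255 null0
```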

Justin

Tim Yocum wrote:

Patrick is correct - the subscriber count is just north of 10k; likely
far greater readership considering web archives, remailers, etc.

However... subscribership != readership. There are always many subscribers who don't actively read every post, or every day. (I'm just now catching up on 10 days of unread posts - it would be easy to declare email bankruptcy and just mark it all read...) All you have to do is post an important announcement (such as "the mailing list is moving to a new server") and then notice how few read it. :) Also, note how few read the list's welcome message, read the FAQ, etc.

jc

And there are "subscribers" which are actually exploders with many people (mailboxes?) behind them.

Either way, it's not a small number of people / mailboxes.