Just got this dropped on my desk an hour ago, and I'm not finding as much
material online as I might have hoped for...
It looks like the easiest solution is to just hang a router/firewall at
Equinix Ashburn, run AWS Direct Connect to that, and then peer it to
carriers for both IP and MPLS; is there a "native" way to do that from an
AWS VPC instead?
Any public or private replies cheerfully accepted; will summarize what I
can to the list.
If you're asking whether one can get a provider's router to handle the outside physical part of a Direct Connect connection... as an ISP service, so you don't need your own router hardware...
I was working on this for a recent ex-client and asked Level 3 exactly that question. I believe I had the right network guy on the phone, and it was a firm no.
I was going to check all the other Direct Connect providers, but the client ran out of $$.
If anyone does offer that, I would like to know so I can pass it along to the ex-client for their information.
George William Herbert
Not sure about AWS, but if you are a client of Dimension Data cloud, you
don't need to do anything; everything is taken care of from the
provider's side. DiData will peer with your tier 1/MPLS carriers, act as
the CPE, etc. I am pretty sure AWS does that for you as well.
Else you could spin up a CSR1000v inside AWS and ask them to connect to it;
VPC connectivity is supported over IPsec if your public path into AWS is
sufficient.
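For the IPsec-into-a-VPC option, AWS's managed VPN consists of a virtual private gateway on the AWS side, a customer gateway object describing your router, and a VPN connection joining them. A rough sketch of the parameters involved, with the IP address and ASN as made-up placeholders (the boto3 calls themselves are shown commented out, since they need real credentials):

```python
# Sketch of the managed IPsec VPN into a VPC: a virtual private gateway
# (AWS side), a customer gateway (your router), and a VPN connection
# between them. The public IP and ASN below are placeholders.

def vpn_connection_params(customer_public_ip, customer_asn):
    """Assemble parameters for the three EC2 calls that set up the VPN."""
    return {
        "vpn_gateway": {"Type": "ipsec.1"},
        "customer_gateway": {
            "Type": "ipsec.1",
            "PublicIp": customer_public_ip,  # your router's public address
            "BgpAsn": customer_asn,          # your side of the BGP session
        },
        "vpn_connection": {"Type": "ipsec.1"},  # plus the two gateway IDs
    }

params = vpn_connection_params("198.51.100.10", 65010)

# With real credentials this maps onto, e.g.:
# import boto3
# ec2 = boto3.client("ec2")
# vgw = ec2.create_vpn_gateway(**params["vpn_gateway"])
# cgw = ec2.create_customer_gateway(**params["customer_gateway"])
# ec2.create_vpn_connection(
#     Type="ipsec.1",
#     CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
#     VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])
```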
AWS shortens Direct Connect to DX, not DC, for some reason.
The AWS Direct Connect service is built on 10G infrastructure, so using
potentially larger interconnects over public peerings with IPsec could be
an option.
DX requires fiber cross connects in addition to any other AWS peerings that
you may have at a particular location.
I haven't heard it from the horse's mouth, but I've heard that the only way to have customers share an AWS DX cross connect is (apparently) through Equinix's Cloud Exchange service. Can anyone confirm that? It doesn't seem right that I could transport people to AWS all day long if they buy their own cross connect, but once we share one, I have to go through someone offering a competing service.
I can confirm that AWS (and Equinix, by extension, from a facility operator
perspective) permits carriers to have multiple end users share a physical
interface into the AWS gateway. The key is whether the providers that are
permitted into the DX environment (I believe AWS has limited the list to
only 7 or 8 in total - anyone else is reselling capacity off of those
carriers) are willing to deal with the constraints of that configuration -
essentially that the carrier needs to take responsibility for engaging
directly with AWS to associate the EVC on the provider interface with the
VPC on the AWS interface. I can confirm that at least one provider other
than Equinix will do this. Point being, it's not an AWS restriction as much
as whether the provider is willing to get its hands a bit dirtier. My $.02
If anyone has connections at Amazon in those areas, could you pass them my way? My IP peering contact (MMC) seems to have fallen off the face of the earth, and I'm not sure that's his jurisdiction anyway. Their web site seems largely useless so far, catering more to the consultant and software dev guys than the infrastructure/transport guys.
***disclaimer - info on subject from a shareholder***
Yeah. In addition to Equinix and a few others, Megaport is expanding pretty quickly in the US at present: 30+ locations across 7 US markets. Worth a look if you are trying to get your Azure and AWS fix from a single provider via a 100% SDN, API-driven platform (plus other services such as AMS-IX peering). There are interesting differences, such as a flat-rate Virtual X-Connect regardless of speed and of where the other end of the circuit is in the metro, on day/month/year terms from 1 Mbps to 10 Gbps. They have been doing elastic interconnects since 2013.
Well known in Asia but less so in the US/on NANOG, hence this first (and last) public post about it.
Anyway, maybe worth a look.
I work for a Direct Connect provider, albeit in the UK. We have
fibre links to the AWS edge routers, and we have multiple customers
separated by VLANs over a fibre link, each terminating into different
VRFs on our edge and the AWS edge. For each customer we have an eBGP
session with a virtual gateway that lives inside the customer's VPC.
Each customer also has backup tunnels using IPsec over the
Internet; again we run eBGP over the IPsec tunnels to the virtual
gateway inside each customer's VPC domain.
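The per-customer provisioning described above (one VLAN per customer on the shared fibre, with a BGP session into the customer's VPC) corresponds to a partner allocating a private virtual interface on the shared DX connection. A minimal sketch, where the VIF name, VLAN, ASN, key, and addresses are all invented placeholders (the actual API submission is shown commented out):

```python
# Sketch of how a DX provider might allocate a per-customer private VIF:
# one VLAN per customer on the shared physical link, and an eBGP session
# toward the virtual gateway in that customer's VPC. All identifiers and
# addresses here are made-up placeholders.

def build_private_vif_allocation(name, vlan, customer_asn, auth_key,
                                 amazon_ip, customer_ip):
    """Assemble parameters for DirectConnect AllocatePrivateVirtualInterface."""
    return {
        "virtualInterfaceName": name,
        "vlan": vlan,                # separates this customer on the shared link
        "asn": customer_asn,         # customer side of the eBGP session
        "authKey": auth_key,         # MD5 key for the BGP session
        "amazonAddress": amazon_ip,  # AWS end of the point-to-point /30
        "customerAddress": customer_ip,
    }

alloc = build_private_vif_allocation(
    "customer-a-vif", vlan=101, customer_asn=65001,
    auth_key="example-md5-key",
    amazon_ip="169.254.10.1/30", customer_ip="169.254.10.2/30")

# With real credentials this would be submitted as, e.g.:
# import boto3
# dx = boto3.client("directconnect")
# dx.allocate_private_virtual_interface(
#     connectionId="dxcon-xxxxxxxx",   # the shared physical connection
#     ownerAccount="111122223333",     # the customer's AWS account
#     newPrivateVirtualInterfaceAllocation=alloc)
```

The customer then accepts the allocated VIF in their own account and attaches it to the virtual gateway in their VPC.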
ESnet employs MPLS virtual circuits from our customer sites to VLANs
connecting over DX cross connects in the US-EAST and US-WEST regions.
Exploring the DX provider paradigm, we have demonstrated that the DX
network service can be billed to the provider while the compute costs are
billed directly to the customer. In this way a network provider can cover
the shared network resource cost, if desired.
While the carrier does provision eBGP, in our use case it was only used for
monitoring, not for exchanging routes. Each of our customers provisions both
a public and a private/VPC eBGP peering, i.e. the public and private DX
services.
This gets interesting when you realize that the routes AWS advertises in
the public Internet case differ by geographic region, while when peering
over DX, AWS advertises a much larger table and recommends that end sites
build policies based on the information in this link:
At some point your DX customers will need to decide whether to prefer
the public AWS route prefixes that you export to them or those received
directly over their own public DX eBGP peering.
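That preference is usually expressed as a local-preference set on import, so the customer's router deterministically favors one copy of each AWS prefix. A toy illustration of the decision (the prefix, sources, and values are all invented, and real BGP best-path selection has several further tie-breaks beyond this first one):

```python
# Toy illustration of the choice DX customers face: the same AWS public
# prefix may be learned both via the carrier's export and directly over
# the customer's own public DX peering. The higher local-preference set
# on import wins the first BGP tie-break. All values here are invented.

def best_route(routes):
    """Pick the route with the highest local-preference."""
    return max(routes, key=lambda r: r["local_pref"])

routes_for_prefix = [
    {"prefix": "203.0.113.0/24", "learned_from": "carrier-export",
     "local_pref": 100},
    {"prefix": "203.0.113.0/24", "learned_from": "direct-dx-public",
     "local_pref": 200},
]

chosen = best_route(routes_for_prefix)
# Here the directly received DX copy wins, because it was imported
# with local-preference 200 versus the carrier export's 100.
```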