Intradomain DNS Anycast revisited

Hi,

I'm trying to set up an anycast DNS server farm for
customer service. In order to improve availability, we
plan to install the servers in one LAN with a structure
similar to this:

server-(1,3)---switch1---router-1---(outside)
                 |
                 |
server-(2,4)---switch2---router-2---(outside)

All four servers are Unix boxes; switch-1 and switch-2 are
interconnected to guarantee availability. BIND is used as
the DNS cache server software, and Quagga ospfd as the
routing software.

With the above configuration, both routers will know
multiple paths to each DNS cache server, while each DNS
cache server should know two paths to the outside network.
Here are my questions:

1) Should each DNS cache server be configured with a
static default route (0.0.0.0/0)? If server-(1,3) is
statically configured to use router-1 as its default
router, will Quagga make it use router-2 when router-1 is
not reachable?

2) If each server is configured with two default routers
(router-1 and router-2), or each server learns 0.0.0.0/0
via OSPF (our border router injects a default route into
OSPF), there should be two equal-cost paths to 0.0.0.0/0
on each DNS server, and the server will spread outgoing
packets across the two paths. Will that do any harm to the
DNS service?

3) Is there any requirement on BIND to fit such a
multipath routing situation?

Joe

    > 1) Should each DNS cache server be configured with a
    > static default route (0.0.0.0/0)? If server-(1,3) is
    > statically configured to use router-1 as its default
    > router, will Quagga make it use router-2 when router-1 is
    > not reachable?

No, because both routers are reached through the same L1/L2 medium, so
Quagga can't use link-state to determine reachability of the next-hop.
You could fix that by getting rid of the switches, and just having a bunch
of router interfaces facing two Ethernet interfaces on each server, which
would remove some points of failure, and would be a good idea if you can
spare the router interfaces... or you could use the OSPF you're
already going to be running to advertise a default from both routers to
each of the servers.
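
Roughly, with Quagga ospfd and purely hypothetical addressing
(server LAN 10.1.1.0/24, area 0.0.0.0), that would look
something like:

  ! ospfd.conf fragment on router-1 and router-2
  router ospf
   network 10.1.1.0/24 area 0.0.0.0
   ! advertise 0.0.0.0/0 into the area; drop "always" if the default
   ! should only be advertised while the router itself has one
   default-information originate always

  ! ospfd.conf fragment on each DNS server: listen on the LAN and
  ! let zebra install whichever default(s) ospfd learns
  router ospf
   network 10.1.1.0/24 area 0.0.0.0

With both routers originating the default at equal cost, each
server ends up with 0.0.0.0/0 via two next-hops, and the one
whose adjacency dies is withdrawn automatically.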

    > 2) If each server is configured with two default routers
    > (router-1 and router-2), or each server learns 0.0.0.0/0
    > via OSPF (our border router injects a default route into
    > OSPF), there should be two equal-cost paths to 0.0.0.0/0
    > on each DNS server, and the server will spread outgoing
    > packets across the two paths. Will that do any harm to the
    > DNS service?

Nope, no problem, particularly so long as the two routers are iBGP peers,
so they'll both (for the most part) have the same idea of what selected
paths are.
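
For completeness, the iBGP session between the two routers is
nothing exotic; a hypothetical Quagga bgpd fragment on router-1
(AS number and addresses invented) would be:

  ! bgpd.conf fragment on router-1
  router bgp 65000
   bgp router-id 10.0.0.1
   ! iBGP: router-2 sits in the same AS
   neighbor 10.0.0.2 remote-as 65000
   neighbor 10.0.0.2 next-hop-self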

    > 3) Is there any requirement on BIND to fit such a
    > multipath routing situation?

Nope. BIND doesn't know what's going on that far below it.

                                -Bill

Hmh. Thinking about it, for anycast to work I would rather run
Quagga's BGP daemon and announce the /32 (or whatever it is
in your setup) of the DNS server, and maybe receive a 0/0
from the router (not strictly needed: when the router no
longer sees your /32, you will simply stop receiving requests).
All BGP with aggressive timers (a few seconds only), and if
you want you can also put some simple scripts on the box that
further check whether things are working.

All that because fewer and fewer real routers[tm] have FE
interfaces on which you could rely on link state... you need a
'dynamic' component to tell you that the connection over some
GE aggregation gear is still there, and if you combine this
with monitoring of the service itself it should pretty much
work out. Mind you, this is theory. Looks quite sound to me
though.

I like BGP more as I could transport that /32 with no-export
right away.
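
A rough sketch of what that could look like with Quagga bgpd on
one of the DNS servers, eBGP-peering with both routers (the AS
numbers, the 192.0.2.53/32 anycast address, and the peer
addresses are all made up for illustration):

  ! bgpd.conf on a DNS server; the anycast /32 already sits on the
  ! loopback, so the network statement has something to announce
  router bgp 65010
   bgp router-id 10.1.1.11
   network 192.0.2.53/32
   neighbor 10.1.1.1 remote-as 65000
   ! aggressive keepalive/holdtime so a dead router is noticed quickly
   neighbor 10.1.1.1 timers 2 6
   neighbor 10.1.1.1 route-map NO-EXPORT out
   neighbor 10.1.2.1 remote-as 65000
   neighbor 10.1.2.1 timers 2 6
   neighbor 10.1.2.1 route-map NO-EXPORT out
  !
  route-map NO-EXPORT permit 10
   set community no-export

A health-check script can then simply stop bgpd, so the /32 is
withdrawn, whenever the resolver stops answering.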

Regards,
Alexander

Speaking of which, whatever happened to MARP?

http://www.watersprings.org/pub/id/draft-retana-marp-01.txt

This in combination with UDLD sounded good to me when I heard about it a couple of years ago.

On the subject of UDLD, does anyone know why no mechanism was included in 10GE for UDLD? In 1GE there is autoneg which fixes this, but it's gone in 10GE. I just cannot figure out why anyone would actually remove such an operationally important feature, and now try to implement it in software instead of on the link layer.

It has been my experience in the deployment of such anycasted dns
server pods that pushing ospf from the dns server hosts introduces
complexity and reduces reliability to the point that other, simpler
solutions become much more attractive.

You should also take a moment to look at your spanning tree
configuration, depending on how you are configuring your switches.

matto

  1) Should each DNS cache server be configured with a
  static default route (0.0.0.0/0)? If server-(1,3) is
  statically configured to use router-1 as its default
  router, will Quagga make it use router-2 when router-1 is
  not reachable?

Configure a loopback interface on your DNS servers and advertise a route to that loopback address to your connected routers... Why the hell do you want to use static routes when you don't need to? Yuck.
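
A minimal sketch of that, with a purely hypothetical
192.0.2.53/32 anycast address and a 10.1.1.0/24 server LAN:

  ! zebra.conf: put the anycast address on the loopback (the same
  ! thing can be done at the OS level, e.g.
  ! "ip addr add 192.0.2.53/32 dev lo" on Linux)
  interface lo
   ip address 192.0.2.53/32

  ! ospfd.conf on the server: advertise the /32 so both routers
  ! learn a host route to the anycast address, alongside the LAN
  router ospf
   network 10.1.1.0/24 area 0.0.0.0
   network 192.0.2.53/32 area 0.0.0.0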

  2) If each server is configured with two default routers
  (router-1 and router-2), or each server learns 0.0.0.0/0
  via OSPF (our border router injects a default route into
  OSPF), there should be two equal-cost paths to 0.0.0.0/0
  on each DNS server, and the server will spread outgoing
  packets across the two paths. Will that do any harm to the
  DNS service?

Servers are cheaper than good routers... Why don't you just have each DNS server connected to one router? If you want DNS servers on multiple subnets, spend a couple of grand and get a cheap 1U Dell server for your other rack... Why do you need to worry about default routes? You are running a dynamic routing protocol on your DNS server. Anycast DNS pretty much relies on OSPF working... Hell, you don't need any default gateway on your host... Let Quagga worry about routing...

Set up your OSPF area as an NSSA... Your DNS server only needs to LEARN the default from the connected routers... You will be doing a type-7-to-type-5 translation at the ABRs.
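
Roughly, again with invented addressing (server LAN 10.1.1.0/24
as NSSA area 0.0.0.1); how the default actually gets originated
into the NSSA as a Type-7 at the ABR is platform-dependent, so
check your routers for the right knob:

  ! ospfd.conf fragment on router-1 and router-2 (the NSSA ABRs)
  router ospf
   network 10.1.1.0/24 area 0.0.0.1
   network 10.0.0.0/30 area 0.0.0.0
   area 0.0.0.1 nssa

  ! ospfd.conf fragment on each DNS server inside the NSSA
  router ospf
   network 10.1.1.0/24 area 0.0.0.1
   network 192.0.2.53/32 area 0.0.0.1
   area 0.0.0.1 nssa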

  3) Is there any requirement on BIND to fit such a
  multipath routing situation?

Why would BIND care? We are talking plain unicast packets at that point... If the packet can get to your DNS server, BIND can probably get the answer back. You are making this much more difficult than it needs to be...

Check out this presentation:

http://www.net.cmu.edu/pres/anycast

Peter

Date: Sun, 27 Mar 2005 08:44:34 -0800
From: Peter John Hill

Configure a loopback interface on your DNS servers and advertise a
route to that loopback address to your connected routers.

We've used this approach for several years. It works very well.

Eddy

    > I like BGP more as I could transport that /32 with no-export
    > right away.

Yes, in a simple hub-and-spoke anycast topology, iBGP is simplest. In a
wagon-wheel or mesh topology, having an IGP makes some things simpler,
though you can still use iBGP in that role.

                                -Bill

Yep, "use the routing protocol you already have running on the connecting routers" might be a decent rule of thumb... That is why enterprises (where I have worked) tend to use OSPF... I can understand service providers using iBGP...

Peter