RE: Facility wide DR/Continuity

On the subject of DNS GSLB, there's a fairly well-known article that anyone considering implementing it should read at least once... :slight_smile:

http://www.tenereillo.com/GSLBPageOfShame.htm
and part 2
http://www.tenereillo.com/GSLBPageOfShameII.htm

Yes, it was written in 2004, but all the "food for thought" it provides is still very much applicable today.
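
One of the recurring points in those articles is that if you actually care about availability, you usually end up publishing multiple A records and letting the client retry across them, rather than trusting the GSLB to hand back the one "right" answer. As a rough illustration of what that looks like from the client side, here's a minimal Python sketch (the hostname is a placeholder and the retry logic is deliberately simplistic):

    # Rough sketch: what a client sees when a name resolves to more than
    # one A record, and the simple "try the next address" failover that
    # browsers and many stub resolvers already do for you.
    import socket

    def resolve_all(name, port=443):
        """Return every address the resolver hands back for a name."""
        infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})

    def connect_with_failover(name, port=443, timeout=3.0):
        """Try each address in turn until one accepts a connection."""
        last_err = None
        for addr in resolve_all(name, port):
            try:
                return addr, socket.create_connection((addr, port), timeout=timeout)
            except OSError as err:
                last_err = err
        raise last_err or OSError("no addresses returned for " + name)

    if __name__ == "__main__":
        # "www.example.com" is a placeholder; substitute a name that
        # actually publishes more than one A record.
        print(resolve_all("www.example.com"))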

As with all things, there's no "right answer"; a lot of it depends on three things:

- what you are hoping to achieve
- what your budget is
- what you have at your disposal in terms of numbers of qualified staff available to both implement and support the chosen solution

Those are the main business-level factors. At the technical level, two key factors (although, of course, there are many others to consider) are:

- whether you are after an active/active or active/passive solution
- what the underlying application(s) are (e.g. you might have other options such as anycast with DNS)

Anyway, there's a lot to consider. And despite all the expertise on NANOG, I would still suggest the original poster do their fair share of their own homework. :slight_smile:

In practice, active/passive DR solutions often fail. You rarely need
to fail over to the passive system. When you finally do need to fail
over, there are a dozen configuration changes that didn't make it from
the active system, so the passive system isn't in a runnable state.
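
That drift is at least partly detectable before the day you actually need the passive site. A minimal sketch of the idea, assuming you already copy each site's config tree to a local directory (the paths below are hypothetical, and the collection step itself is out of scope here):

    # Minimal config-drift check between an "active" and a "passive" copy
    # of the same config tree. The directory layout is hypothetical;
    # adapt it to however you actually collect configs (rsync, git, a
    # backup job, etc.).
    import hashlib
    from pathlib import Path

    def digest_tree(root):
        """Map each relative file path under root to its SHA-256 digest."""
        root = Path(root)
        return {
            str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*"))
            if p.is_file()
        }

    def report_drift(active_dir, passive_dir):
        """Print every file that is missing, extra, or different on the passive copy."""
        active, passive = digest_tree(active_dir), digest_tree(passive_dir)
        for name in sorted(set(active) | set(passive)):
            if name not in passive:
                print("MISSING on passive:", name)
            elif name not in active:
                print("EXTRA on passive:  ", name)
            elif active[name] != passive[name]:
                print("DIFFERS:           ", name)

    if __name__ == "__main__":
        # Hypothetical local paths where each site's configs were collected.
        report_drift("/srv/drcheck/active", "/srv/drcheck/passive")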

Regards,
Bill Herrin

Tell me about it... "Failover test? What failover test?" :wink:

And management will never, ever let you do a full-up test, nor will they allow you to spend the money to build a scaled-up system which can handle the full load, because they can't stand the thought of hardware sitting there gathering dust.

Concur 100%.

Active/passive is an obsolete 35-year-old mainframe paradigm, and it deserves to die the death. With modern technology, there's just really no excuse not to go active/active, IMHO.

Roland,

Sometimes you're limited by the need to use applications which aren't
capable of running on more than one server at a time. In other cases,
it's obscenely expensive to run an application on more than one server
at a time. Nor is the split-brain problem in active/active systems a
trivial one.
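
For what it's worth, the usual mitigation for split-brain is some form of quorum: a node only takes (or keeps) the active role while it can see a strict majority of the voters. Here's a toy sketch of just that rule, not any particular cluster stack's implementation; how peers are actually probed is left out because it depends entirely on the application:

    # Toy quorum rule: a node may only serve while it can reach a strict
    # majority of the voters. How "reachable" is determined is deliberately
    # left abstract here.
    def have_quorum(reachable_voters: int, total_voters: int) -> bool:
        """Strict majority: with 3 voters you need 2, with 5 you need 3."""
        return reachable_voters > total_voters // 2

    def should_serve(node_healthy: bool, reachable_voters: int, total_voters: int) -> bool:
        """A node that loses quorum must stop serving even if it feels healthy,
        otherwise both sides of a partition keep accepting writes (split-brain)."""
        return node_healthy and have_quorum(reachable_voters, total_voters)

    if __name__ == "__main__":
        # Two-node cluster, network partition: each side sees only itself,
        # so neither has a majority of 2 and neither serves. This is why
        # two-node clusters usually add a third witness/tiebreaker.
        print(should_serve(True, reachable_voters=1, total_voters=2))  # False
        print(should_serve(True, reachable_voters=2, total_voters=3))  # True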

There are still reasons for using active/passive configurations, but
be advised that active/active solutions have a noticeably better
success rate than active/passive ones.

Regards,
Bill Herrin

Roland Dobbins wrote:

Active/passive is an obsolete 35-year-old mainframe paradigm, and it
deserves to die the death. With modern technology, there's just really
no excuse not to go active/active, IMHO.

There's always one good reason: money. Some things just don't
active/active nicely on a budget. Then you're trying to explain why you
want to spend money on a SAN when they really want to spend the money on
new "green" refrigerators. (That's not a joke, it really happened.)

~Seth

All understood - which is why it's important that app devs/database folks/sysadmins are all part of the virtual team working to uplift legacy siloed OS/app stacks into more modern and flexible architectures.

;>

Sure, because of inefficient legacy design choices.

Distribution and scale are ultimately application architecture issues, with networking and ancillary technologies playing an important supporting role.