Exploits start against flaw that could hamstring huge swaths of Internet | Ars Technica

because:
  1) historical results matter here? (who looked at which products
over what period of time, with what attention to detail(s) and which
sets of goals?)
  2) the single person doing a code review is likely to see all of the
problems in each of the products selected?

nothing against any of the software in question here, but really this
is all quite a crapshoot and past transgression research doesn't make
for a great tool to plan for the future.

Joe's right: "all software has bugs, find the software and strategy
that makes sense for your organization." That MIGHT mean 2 platforms
(seems sensible to me!), it might mean automation for management of
configs (from an abstraction, so you can generate the right data for
each target implementation), or it might mean more monkeys on keyboards
if you don't believe in automation.
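
(A toy sketch of that "generate from an abstraction" idea, written as
Lua since that's what the dnsdist configs later in this thread use; the
zone names and file-naming scheme are invented for illustration, not
anyone's real setup.)

-- one abstract zone list, two target implementations
local zones = { "example.com", "example.net" }

-- emit a BIND-style stanza for a zone
local function bind_conf(z)
  return string.format('zone "%s" { type master; file "db.%s"; };', z, z)
end

-- emit an NSD-style stanza for the same zone
local function nsd_conf(z)
  return string.format('zone:\n\tname: "%s"\n\tzonefile: "db.%s"', z, z)
end

for _, z in ipairs(zones) do
  print(bind_conf(z))  -- append to named.conf
  print(nsd_conf(z))   -- append to nsd.conf
end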

-chris

hi ya

>> > With the (large) caveat that heterogeneous networks are more subject to
>> > human error in many cases.
>>
>> <cough>automate!</cough>
>>

...

> Automation just means your mistake goes many more places more quickly.
>

and letting people keep poking at things that computers should be
doing is... much worse. people do not have reliability and
repeatability over time.

ditto ...
computers are experts at listening and repeatedly doing what they're
told to do ..

If you fear 'many more places' problems, improve your testing.

i prefer automation .. even if it's wrong, you can look at the script,
see what bad things it did, know what to do to fix the problem, and
fix the script to prevent it from spreading that mistake again

<person's standard excuse>
if you ask a person, "what did you do to create this mess?", you get "duh... i dunno"
btw, it's my kid's birthday, i needed to be home an hr ago with the cake :)

hummm... :)
</standard>

Hi Jared,

> I recommend using DNSDIST to balance traffic at a protocol level as you can have implementation diversity on the backside.
>
> I can send an example config out later for people. You can balance to bind, NSD and others all at the same time :) just move your SPoF

As someone who once hosted TLD zones in a way that a query to a particular nameserver could be answered by either NSD or BIND9, my advice would be "don't do that". You're setting yourself up for troubleshooting hell.

You can include different nameservers in the set for a single zone. Using different software for different nameservers can be sensible. Using different software for the same nameserver can be a nightmare.

Joe

> I recommend using DNSDIST to balance traffic at a protocol level as you can
> have implementation diversity on the backside.
>
> I can send an example config out later for people. You can balance to bind,
> NSD and others all at the same time :) just move your SPoF
>
> Jared Mauch

Unless the same client hits the same server all the time, this is a
bad idea.

  Software that can't handle the remote side having an
upgrade/downgrade/capability change is broken.

Resolvers actually track the capabilities of servers, as that is the
only way to get answers given firewalls dropping legitimate packets
and protocol misimplementations. Add to that different vendors /
versions supporting different extensions, and randomly flipping
between vendors / versions is fraught with danger unless you take
extreme care.
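
(One way to get "the same client hits the same server" behaviour from
dnsdist itself, sketched and untested: a custom Lua server policy that
hashes the client address, in the style of the documented luaroundrobin
example. The "stickyclient" name is made up; servers and dq are the
arguments dnsdist passes to Lua policies.)

-- pin each client IP to one backend so a resolver keeps talking to the
-- same implementation, and keeps seeing the same capabilities
setServerPolicyLua("stickyclient", function(servers, dq)
  local s = dq.remoteaddr:toString()
  local h = 0
  for i = 1, #s do
    h = (h * 31 + s:byte(i)) % 2147483647
  end
  -- a production policy should also skip backends that are marked down
  return servers[1 + (h % #servers)]
end)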

  I've come to use DNSDist to work around the problems
that BIND has with outstanding queries which don't get a response.

  You might be surprised how poorly BIND performs if you
use something else to take a look at it from the exterior.

  http://puck.nether.net/~jared/dnsdist.png

  The first two are BIND, the 3rd is not, and the 4th is BIND.

  The last 3 get the same types of queries; notice how BIND
drops lots of queries. I don't have time to report all the DNS-related
issues on bind-users/dev, but you may find it helpful to use a tool
like this to at least identify what is going on.
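
(If you don't want to graph it, the dnsdist control console can show
roughly the same thing; a rough sketch, assuming a controlSocket/setKey
pair like the one in the config later in this thread, connected to with
"dnsdist -c".)

showServers()    -- per-backend counters, including queries, drops and latency
topQueries(10)   -- the 10 most common query names seen recently
grepq("100ms")   -- recent queries that took longer than 100 ms to answer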

  The last 3 servers get only domains like arpa and a few well-known
domains, e.g. gmail.

  - Jared

> Hi Jared,

>> I recommend using DNSDIST to balance traffic at a protocol level as you
>> can have implementation diversity on the backside.
>>
>> I can send an example config out later for people. You can balance to bind,
>> NSD and others all at the same time :) just move your SPoF

> As someone who once hosted TLD zones in a way that a query to a particular
> nameserver could be answered by either NSD or BIND9, my advice would be
> "don't do that". You're setting yourself up for troubleshooting hell.

  I'm not suggesting you have an unpredictable set of
things you route queries to. I have a very simple config I'll share
with you off-list. One should route things in a predictable manner. This
is why people want operators who can code and operate a service vs just
operate it, or just code. Those are the people in the highest demand
in my narrow experience.

> You can include different nameservers in the set for a single zone. Using
> different software for different nameservers can be sensible. Using
> different software for the same nameserver can be a nightmare.

  Proper logging and instrumentation are essential. DNSDIST
can be configured to fail over to something else while one server
or daemon is offline and being serviced or restarted. This can also
be done with other tools like "stupid routing tricks", aka anycast.

  For a resolver that I want to "just work" for servers that need
to do e-mail etc., this works well for me. The fact that I can have it
point to a BIND process on localhost on a different port, or nsd, etc.,
provides flexibility that other setups don't offer as easily.
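
(A minimal sketch of that failover arrangement; the localhost ports
are invented. dnsdist's built-in health checks mark a backend down,
and the firstAvailable policy then moves queries to the next backend
by order.)

newServer{address="127.0.0.1:5300", order=1}  -- BIND on an alternate port, tried first
newServer{address="127.0.0.1:5301", order=2}  -- nsd, used while BIND is down or restarting
setServerPolicy(firstAvailable)               -- lowest order that is up and within its QPS limit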

  - Jared

Wow this thread went off-track in nanoseconds.

So which bind versions are ok?

  -b

This week's.

9.10.2-P3 is marked "current stable", and 9.9.7-P2 is marked "current-stable ESV" at:

   https://www.isc.org/downloads/

The bind-users list is probably a place where this kind of thread would at least go off-track in a different set of ways:

   https://lists.isc.org/mailman/listinfo/bind-users

Joe

>> However, the original point was that switching from BIND to Unbound
>> or other options is silly, because you're just trading one codebase
>> for another, and they all have bugs.
>
> It is equally silly to assume that all codebases are the same quality and
> have equally many bugs. Maybe we should be looking at the track record of
> those two products and maybe we should let someone do a code review. And
> then choose based on that.

> because:
>   1) historical results matter here? (who looked at which products
> over what period of time, with what attention to detail(s) and which
> sets of goals?)
>   2) the single person doing a code review is likely to see all of the
> problems in each of the products selected?

Maybe not, but a code review can tell you what methods are used to safeguard
against security bugs, the general quality of the code, the level of
automated testing, etc. History can give hints to the same. If a product has
had a lot of bugs discovered, it is likely not good quality from a security
perspective, and more bugs can be expected.

It is called due diligence. The aim is not to find the bugs but to evaluate
the product.

Regards

Baldur

> As someone who once hosted TLD zones in a way that a query to a
> particular nameserver could be answered by either NSD or BIND9, my
> advice would be "don't do that". You're setting yourself up for
> troubleshooting hell.

for some folk, complexity is a career. i worked for circuitzilla
for 15 months; it's embedded in their culture.

randy

>> Automation just means your mistake goes many more places more
>> quickly.
>
> and letting people keep poking at things that computers should be
> doing is... much worse. people do not have reliability and
> repeatability over time.

i love the devops movement; operators discover that those computers can
be programmed. wowzers!

maybe in a decade or two, we will discover mathematics. nah.

randy

Maybe we can give them a new title. I'm thinking, "System Programmer."

Here's an example dnsdist config you might find helpful:

  This sends queries to the first two servers unless they
are for domains in the "nether" pool list; those go to the servers
in that pool instead.

  You can restrict access based on the ACL.

newServer("x.x.223.10")
newServer("x.x.223.20")
;setServerPolicy(firstAvailable) -- first server within its QPS limit
setServerPolicy(leastOutstanding)
webserver("0.0.0.0:8083", "AskMe")
addACL("192.168.0.0/22")
addACL("10.0.0.0/16")
addACL("172.16.22.0/24")
setKey("AskMe")
controlSocket("127.0.0.1:1099")
newServer{address="129.250.35.250", pool="nether"}
newServer{address="129.250.35.251", pool="nether"}
newServer{address="8.8.8.8", pool="nether"}
addPoolRule({"ntt.net.", "nether.net."}, "nether")
addPoolRule({"arpa.", "google.", "gmail.com.", "google.com.", "googlemail.com."}, "nether")

Guys, Red Hat has a release with the patch in the CR repository. Should we update using the rpm from CR or using the source provided by ISC?

The release on CR is: 9.8.2rc1-RedHat-9.8.2-0.37.rc1.el6_7.2

-----Original Message-----