There are lots of interesting tools to collect and analyze
network traffic information from protocols like Netflow,
sFlow, LFAP, etc. Few of the collectors/analyzers share
common data repositories, so one sets up a machine for each
with its own export from each router.
How are others handling this problem, particularly after
hitting the limit of one or two export flows supported by
most routers today?
Apparently there are a number of packet reflectors (usually
running on a Unix machine) that work in theory. How do these
work in actual practice, what kind of performance and
reliability do they have, and which work well for traffic
collection?
Has anyone tried using multicast or broadcast addresses or
other tricks (shared IP address or MAC address on multiple
machines) to share a single accounting stream across
multiple collectors?
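For the multicast variant, the collector side could be as simple as every
machine joining the same group. This is only a sketch of the receive side,
assuming the router (or a relay) can be pointed at a multicast destination;
the group and port numbers below are made up:

```python
import socket
import struct

def join_collector_group(group, port):
    """Open a UDP socket that receives a multicast export stream.

    Several collectors can each call this with the same group/port
    and all see one copy of the stream; SO_REUSEADDR lets multiple
    sockets (even on the same host) bind the shared port.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Tell the kernel to join the group on the default interface.
    mreq = struct.pack("!4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

The catch, of course, is that the exporter has to be willing to send to a
multicast address in the first place; otherwise you still need a relay box
in front.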
Any other methods that are working for scaling multiple
collectors without exceeding router limitations or burdening
routers unnecessarily?
Pete.
> Apparently there are a number of packet reflectors (usually
> running on a Unix machine) that work in theory. How do these
> work in actual practice, what kind of performance and
> reliability do they have, and which work well for traffic
> collection?
I have had good results with samplicator:
http://www.switch.ch/tf-tant/floma/sw/samplicator/
I haven't had any problems with it handling fairly heavy loads and the CPU
impact is minimal. Under very heavy loads you might want to bump up the
receive buffer (the -b command-line option).
Bradley
> Any other methods that are working for scaling multiple
> collectors without exceeding router limitations or burdening
> routers unnecessarily?
I know that at some point we had several machines in the network gathering the data, as the volumes simply became too much to process at one single point. We then had some homegrown software that did pre-processing on the data, and this was then fed to the main processing server. However, in general I think it's pretty hard to make Netflow data scale...ideally you would be able to pick which fields you wanted exported...
If you find some "standard" tool set for this, I think it would be interesting though. But I personally still think the problem lies with the protocol as such...
- kurtis -
> data and this was then fed to the main processing server. However, in
> general I think it's pretty hard to make Netflow data scale...ideally you
> would be able to pick which fields you wanted exported...
The NetFlow v9 format looks like it would support this kind of thing.
Whether cisco will actually implement it that way I don't know.
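The v9 export format is template-based, so in principle the exporter only
ships the fields a template advertises. A rough sketch of building such a
template FlowSet, using a few field type IDs from the v9 spec (the helper
name is mine, and real exporters have more to worry about, e.g. template
refresh):

```python
import struct

# A few NetFlow v9 field types, as (type_id, length_in_bytes).
IN_BYTES      = (1, 4)
IN_PKTS       = (2, 4)
IPV4_SRC_ADDR = (8, 4)
IPV4_DST_ADDR = (12, 4)

def v9_template_flowset(template_id, fields):
    """Build a template FlowSet advertising exactly the chosen fields.

    Data records sent under this template then carry only those
    fields, in this order -- which is the "pick which fields you
    want exported" idea.
    """
    body = struct.pack("!HH", template_id, len(fields))
    for type_id, length in fields:
        body += struct.pack("!HH", type_id, length)
    # FlowSet ID 0 marks a template FlowSet; the length field counts
    # the 4-byte FlowSet header as well.
    return struct.pack("!HH", 0, 4 + len(body)) + body
```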
Bradley
flow-tools has flow-fanout which does what you are describing for
netflow. (replicates arriving flows to many destinations)
man flow-fanout
http://www.splintered.net/sw/flow-tools/docs/flow-fanout.html
flow-tools
http://www.splintered.net/sw/flow-tools/
flow-tools seems to be a good bet for handling/munging flow data in
general.
-joel
>> data and this was then fed to the main processing server. However, in
>> general I think it's pretty hard to make Netflow data scale...ideally you
>> would be able to pick which fields you wanted exported...
> The NetFlow v9 format looks like it would support this kind of thing.
> Whether cisco will actually implement it that way I don't know.
Netflow v9 is also only sampled data, right? Although pretty close to the real numbers, it's not really useful for, for example, billing.
- kurtis -