Could anyone help with the following scenario and associated questions...
Imagine you have a network consisting of 10,000 elements split into 1,000 devices and 9,000 interfaces.
For argument's sake, assume the following:
1. The maximum number of traps that the management platform will receive is 200 per second, and the typical number is 10 per second.
2. For syslog, assume we have 4 syslog servers (250 devices per server), each receiving a maximum of 10 messages per second and typically 1 message per second.
3. The devices are using 'out of the box' trap and syslog settings in terms of what they send.
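To put the assumptions above in perspective, here is a back-of-envelope sketch of the event volumes they imply (the constants are taken straight from the scenario; the 86,400-seconds-per-day conversion is the only thing added):

```python
# Event-rate budget implied by the scenario's assumptions.
DEVICES = 1000
SYSLOG_SERVERS = 4
SECONDS_PER_DAY = 86_400

TRAPS_PEAK_PER_SEC = 200       # platform-wide maximum
TRAPS_TYPICAL_PER_SEC = 10     # platform-wide typical
SYSLOG_PEAK_PER_SERVER = 10    # per syslog server, maximum
SYSLOG_TYPICAL_PER_SERVER = 1  # per syslog server, typical

# Daily volumes under typical load.
traps_per_day = TRAPS_TYPICAL_PER_SEC * SECONDS_PER_DAY            # 864,000
syslog_per_day = SYSLOG_SERVERS * SYSLOG_TYPICAL_PER_SERVER * SECONDS_PER_DAY  # 345,600

# Per-device rates (averaged across all 1,000 devices).
trap_rate_per_device_peak = TRAPS_PEAK_PER_SEC / DEVICES           # 0.2 traps/s
trap_rate_per_device_typical = TRAPS_TYPICAL_PER_SEC / DEVICES     # 0.01 traps/s

print(f"Typical trap volume:   {traps_per_day:,} traps/day")
print(f"Typical syslog volume: {syslog_per_day:,} messages/day")
print(f"Peak trap rate per device: {trap_rate_per_device_peak} traps/s")
```

So even at "typical" rates the platform sees close to a million traps a day, which is part of why the percentage of genuinely useful events matters so much.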
Q1. What percentage of traps do you think will be 'useful' from a fault management perspective? Of course it all depends on what you are interested in and what the network is doing, but any thoughts on the volume of useful traps, and which traps those are likely to be, would be really helpful.
Q2. Same question as Q1 but for syslog.
Q3. What do you expect the real figures to be when the network is operating normally, and what, from your experience, are they likely to be under fault conditions?
Q4. Which devices, again from your experience, send the most traps and syslog messages? Is a particular manufacturer especially trap-heavy, for example?
Any thoughts or advice would be most appreciated.