Gb Ethernet interface keeps dropping packets on ingress

Hi,

I'm using a Harbour 10G layer-3 switch which interconnects
a Catalyst 6509 and a Foundry switch. The interconnecting
links are all 1Gbps Ethernet (1000Base-LX).

Catalyst6509----Harbour 10G switch----Foundry
Switch---Firewall

The firewall and the Harbour switch interconnect at layer 3.

We noticed something abnormal on the link between the
10G switch and the Foundry switch:

1. The load on that 1Gbps link sometimes drops sharply
from around 110Mbps to 20Mbps and then recovers; the
frequency and timing of these drops are random.

2. We monitor the connection to the firewall interface by
pinging it with twenty 50-byte packets every 20 seconds,
and we see packet loss and increased end-to-end latency
around the times when the link load drops sharply. (A
rough sketch of this check follows the list.)

3. On the Harbour switch, the interface counters always
show "Dropped Packets on Ingress" on the interfaces
connecting to the Foundry and the Cisco, and the numbers
increase continuously.
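For reference, the ping check in item 2 is conceptually something like the sketch below. This is just an illustration, not the exact script we run; the firewall address is a placeholder, and it assumes a Linux host with the standard iputils ping.

#!/usr/bin/env python3
# Sketch of the probe from item 2: twenty 50-byte pings toward
# the firewall every 20 seconds, logging loss and average RTT.
import re
import subprocess
import time

TARGET = "192.0.2.1"   # placeholder firewall address
COUNT = 20             # twenty probes per cycle
SIZE = 50              # 50-byte payload
INTERVAL = 20          # seconds between cycles

while True:
    # iputils ping: -c count, -s payload size, -q summary only
    out = subprocess.run(
        ["ping", "-q", "-c", str(COUNT), "-s", str(SIZE), TARGET],
        capture_output=True, text=True
    ).stdout

    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    rtt = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)", out)

    print(time.strftime("%Y-%m-%d %H:%M:%S"),
          "loss=%s%%" % (loss.group(1) if loss else "?"),
          "avg_rtt=%sms" % (rtt.group(2) if rtt else "?"))

    time.sleep(INTERVAL)

The timestamps from this log are what we correlate against the link-load drops and the ingress-drop counters.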

I've disabled auto-negotiation and flow control on the
links between the 10G switch and the Foundry switch, and
on the firewall interface.

What might be the possible reasons for the problem above?

thanks in advance.

Joe

If you're sniffing one gigabit port from a switch with much higher bandwidth, you're going to lose something. Our primary sensor sits on an aggregation switch just prior to hitting the net, and we have a 2Gb EtherChannel SPAN port defined and lose relatively little in terms of packets. Of course, the more aggregate traffic you have, the higher the probability you will max out the SPAN port and its buffers. Unless you're just drilling the heck out of the server farm(s) on that switch, you won't lose all that much with an EtherChannel of two Gig ports. We have 2Gb EtherChannel uplinks back to the core, and the most the switch could throw at us would be 2Gb of EtherChannel traffic, so we span the uplinks there.

Just as your switches/routers can be "oversubscribed", the 4506 backplane is only 6Gb/slot, yet we don't lose that much, and some of that loss is due to buffer constraints on the switch. Not perfect, but it works. In less critical environments, we can sniff with a 100Mb interface and still do well.

The only caution here is that you can seldom catch local traffic. If there's a local scanner (like Blaster started out to be), it doesn't show up except for excessive ARPs. We have some cron'ed scripts that periodically look at connection counts in the PIX; if a host's counts are out of range, we quarantine it to the Perfigo dungeon. Similarly, there is a script that counts ARP requests (just the dorms specifically right now); for every 1000 requests it forks itself to start anew and analyzes the number of ARPs per station. Local scanners get caught here really quickly and are also quarantined.
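I can't paste the actual scripts here, but the ARP-counting piece is conceptually along these lines. The sketch below is not what we run; the interface name and threshold are placeholders, and it reads tcpdump output rather than however the real script collects its data.

#!/usr/bin/env python3
# Sketch: count ARP who-has requests per requesting station and,
# after every 1000 requests, flag stations asking far more than
# their share (typical of a local scanner). Needs root for tcpdump.
import subprocess
from collections import Counter

IFACE = "eth0"      # monitoring interface (placeholder)
BATCH = 1000        # analyze after every 1000 requests
THRESHOLD = 100     # per-batch limit before a station is flagged (placeholder)

# tcpdump prints one line per ARP packet, e.g.
# "12:00:00.000 ARP, Request who-has 10.0.0.5 tell 10.0.0.9, length 46"
proc = subprocess.Popen(
    ["tcpdump", "-l", "-n", "-i", IFACE, "arp"],
    stdout=subprocess.PIPE, text=True
)

counts = Counter()
seen = 0
for line in proc.stdout:
    if "who-has" not in line:
        continue
    # the address after "tell" is the station doing the asking
    try:
        sender = line.split("tell")[1].split(",")[0].strip()
    except IndexError:
        continue
    counts[sender] += 1
    seen += 1
    if seen >= BATCH:
        for host, n in counts.most_common():
            if n > THRESHOLD:
                print(f"possible scanner: {host} sent {n} ARP requests")
        counts.clear()
        seen = 0

In practice the flagged stations are what get handed off to the quarantine step.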

Not sure how this fits into NANOG; this is more of a local ISP/university setting. I don't know that an ISP can do that much; they're too busy keeping the packets flowing and are only minimally intrusive on your traffic without special arrangements, at least as a usual case. For special cases like Slammer, Blaster, and the initial Bagle/MyDoom mix, some may have initiated ingress/egress filters temporarily.

You should be able to handle an OC-12 with a gig interface or two on the sensor. I wouldn't make any claims for an OC-48 or above. These things don't scale well into the central peering points (MAE, Abilene, etc.).
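The arithmetic behind that is simple enough; here is a back-of-the-envelope comparison using the standard SONET line rates, with the sensor capacities from the examples above.

# Why a gig sensor copes with OC-12 but not OC-48 (rough numbers).
rates_mbps = {"OC-12": 622.08, "OC-48": 2488.32}   # SONET line rates
sensor_mbps = {"single GigE": 1000, "2Gb EtherChannel": 2000}

for link, rate in rates_mbps.items():
    for sensor, cap in sensor_mbps.items():
        verdict = "fits on" if rate <= cap else "oversubscribes"
        print(f"{link} ({rate} Mb/s) {verdict} a {sensor} ({cap} Mb/s)")

So an OC-12 fits under even a single gig port, while an OC-48 exceeds a 2Gb EtherChannel before you even account for SPAN and buffer overhead.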

Jeff

Hi,

We are not sniffing the Gbps Ethernet link, and the box
I mentioned in my previous message is not oversubscribed
at all. In fact, the 10Gbps switch is newly installed
and has only two links connected (one to the Catalyst 6509,
one to the firewall).

Anyway, thanks for your analysis. Could you tell me the
name of the scripts that check ARP on the switch?

thanks.

Joe

--- Jeff Kell <jeff-kell@utc.edu> wrote: