Ophir,
Thank you! This is great information! Very useful!
I was looking at errors ( ifconfig | grep err ), saw "0", and thought there was nothing to worry about at the NIC level!
It turns out I have lost 37 million packets to "vport_rx_dropped" and some more to "vport_rx_filtered". Would it be possible to dig deeper into these?
My application is getting a lot of UDP packets.
Does "vport_rx_dropped" indicate that the NIC was unable to deliver the data to the UDP buffers (for example, due to a lost IRQ or a full buffer)?
I am monitoring the UDP error counter (with netstat -us | grep "packet receive errors") and I see some growth; I assume this indicates that the application was unable to process the data in the UDP buffers fast enough.
Does this mean my application can't keep up with the flow of incoming data?
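To watch that counter without eyeballing netstat output, here is a small sketch (my own helper, not from this thread) that parses the "Udp:" rows of /proc/net/snmp, which is where netstat -us gets its numbers. InErrors is the "packet receive errors" total; RcvbufErrors specifically counts datagrams dropped because a socket's receive buffer was full, so growth there points at the application not draining its UDP buffers fast enough.

```python
def parse_udp_counters(snmp_text):
    """Parse the two 'Udp:' lines of /proc/net/snmp into a name -> value dict.

    The first 'Udp:' line is the column header, the second holds the values.
    Typical keys: InDatagrams, NoPorts, InErrors, RcvbufErrors, ...
    """
    header = values = None
    for line in snmp_text.splitlines():
        if line.startswith("Udp:"):
            if header is None:
                header = line.split()[1:]        # column names
            else:
                values = [int(v) for v in line.split()[1:]]  # counter values
    return dict(zip(header, values)) if header and values else {}


if __name__ == "__main__":
    # On a live box: sample twice and compare InErrors / RcvbufErrors deltas.
    with open("/proc/net/snmp") as f:
        counters = parse_udp_counters(f.read())
    print(counters.get("InErrors"), counters.get("RcvbufErrors"))
```

Sampling this once a second and looking at the deltas tells you whether the drops are ongoing or historical.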
[root@npectlsqpwfh16 ~]# ethtool -S eth4 | egrep "rx_errors|rx_dropped|rx_over_errors|rx_crc_errors|rx_jabbers|rx_jabbers|vport_rx_filtered"
rx_errors: 0
rx_dropped: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_jabbers: 0
vport_rx_dropped: 37364563
vport_rx_filtered: 5673020
[root@npectlsqpwfh16 ~]#
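A cumulative 37M in vport_rx_dropped doesn't say whether the drops are still happening. A quick sketch (hypothetical helper names, assuming the ethtool -S "name: value" format shown above) for diffing two samples of the counters:

```python
def parse_ethtool_stats(text):
    """Parse 'ethtool -S <iface>' output ('  name: value' lines) into a dict."""
    stats = {}
    for line in text.splitlines():
        name, sep, value = line.partition(":")
        value = value.strip()
        if sep and value.isdigit():          # skip headers / non-counter lines
            stats[name.strip()] = int(value)
    return stats


def counter_delta(before, after, key):
    """Increase of one counter between two parsed samples."""
    return after.get(key, 0) - before.get(key, 0)
```

Taking a sample, sleeping, taking another, and printing counter_delta(a, b, "vport_rx_dropped") shows the current drop rate; a nonzero rate means the problem is live, not historical.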
PS: rx_lro_aggregated appears to be very low (the doc says it "should be equal to rx_packets in good/normal condition").
[root@npectlsqpwfh16 ~]# ethtool -S eth4 | egrep "rx_lro_aggregated|rx_lro_flushed|rx_lro_no_desc|rx_csum_good|rx_csum_none"
rx_lro_aggregated: 6737
rx_lro_flushed: 6518
rx_lro_no_desc: 0
rx_csum_good: 2607405347
rx_csum_none: 10553941
[root@npectlsqpwfh16 ~]#
Thank you,
Aleksandr