Thursday, February 12, 2009

NetApp 3040a Clustered Link Aggregation - vif

I've got two NetApp 3040a clustered systems, both running LACP-aggregated vifs (NICs) for my NFS VMware connections. One cluster is connected to Cisco Catalyst 3750s and the other to a Cisco Catalyst 4507. Both switch setups are redundant. The failover / load balancing is excellent. Here's how I set it up:

My switches are set to IP load balancing (a global switch setting).
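For reference, on a Catalyst 3750 or 4507 running IOS that global setting looks roughly like the sketch below (the available hash options vary by platform and IOS version, so treat this as an example rather than my exact running config):

! global config mode - hash EtherChannel traffic on source/destination IP
port-channel load-balance src-dst-ip
!
! exec mode - confirm which hash the switch is using
show etherchannel load-balance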

These are the commands I used to set up the NICs on the NetApp. The first puts onboard ports e0c and e0d and add-on card ports e4c and e4d into an aggregated LACP vif called SANAprivate, which I use for private NFS traffic to my VMware ESX hosts. The second sets the IP address information and names the partner vif used for cluster failovers / non-disruptive SAN upgrades.
> vif create lacp SANAprivate -b ip e0c e0d e4c e4d

> ifconfig SANAprivate 192.168.217.11 up netmask 255.255.255.0 broadcast 192.168.217.255 -wins mediatype auto trusted partner SANBprivate
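Worth noting: on Data ONTAP 7-mode, vif and ifconfig commands entered at the prompt don't persist across a reboot, so the same lines need to be added to /etc/rc on the root volume. A minimal sketch of those entries, mirroring the commands above (a real /etc/rc will also have hostname, routing, and other lines):

# /etc/rc excerpt - recreate the LACP vif and its address at boot
vif create lacp SANAprivate -b ip e0c e0d e4c e4d
ifconfig SANAprivate 192.168.217.11 up netmask 255.255.255.0 broadcast 192.168.217.255 -wins mediatype auto trusted partner SANBprivate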


> vif status SANAprivate
default: transmit 'IP Load balancing', VIF Type 'multi_mode', fail 'log'
SANAprivate: 4 links, transmit 'IP Load balancing', VIF Type 'lacp' fail 'default'
VIF Status Up Addr_set
up:
e4d: state up, since 30Jan2009 07:47:56 (7+08:17:02)
mediatype: auto-1000t-fd-up
flags: enabled
active aggr, aggr port: e0d
input packets 8106183, input bytes 9157734620
input lacp packets 22869, output lacp packets 21163
output packets 502026, output bytes 229370476
up indications 2, broken indications 0
drops (if) 0, drops (link) 0
indication: up at 30Jan2009 07:47:56
consecutive 0, transitions 2
e4c: state up, since 30Jan2009 07:47:54 (7+08:17:04)
mediatype: auto-1000t-fd-up
flags: enabled
active aggr, aggr port: e0d
input packets 912352, input bytes 82064164
input lacp packets 22874, output lacp packets 21162
output packets 4173173, output bytes 1334844804
up indications 2, broken indications 0
drops (if) 0, drops (link) 0
indication: up at 30Jan2009 07:47:54
consecutive 0, transitions 2
e0c: state up, since 30Jan2009 07:47:53 (7+08:17:05)
mediatype: auto-1000t-fd-up
flags: enabled
active aggr, aggr port: e0d
input packets 2356250, input bytes 569112124
input lacp packets 22857, output lacp packets 21160
output packets 873913, output bytes 121767134
up indications 2, broken indications 0
drops (if) 0, drops (link) 0
indication: up at 30Jan2009 07:47:53
consecutive 0, transitions 2
e0d: state up, since 30Jan2009 07:47:53 (7+08:17:05)
mediatype: auto-1000t-fd-up
flags: enabled
active aggr, aggr port: e0d
input packets 3886952, input bytes 2231755682
input lacp packets 22877, output lacp packets 21160
output packets 1772975, output bytes 1653703494
up indications 2, broken indications 0
drops (if) 0, drops (link) 0
indication: up at 30Jan2009 07:47:53
consecutive 0, transitions 2
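For takeover to work in both directions, the partner head needs a matching vif that names SANAprivate as its partner. Here's a minimal sketch of the second controller's side, assuming it uses the same four ports and 192.168.217.12 as its address (both assumptions, not taken from the output above):

> vif create lacp SANBprivate -b ip e0c e0d e4c e4d

> ifconfig SANBprivate 192.168.217.12 up netmask 255.255.255.0 broadcast 192.168.217.255 -wins mediatype auto trusted partner SANAprivate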

Cisco Switch Config
We tested this by pulling cables from each of the four NICs, up to three at a time, so each NIC was exercised both on its own and alongside the others while data was being pulled across the aggregate. We set up multiple connections so we were pulling more than one NIC's worth of bandwidth. I have had very good results with this configuration and have not seen any issues teaming the onboard NICs with the add-on NICs. (A few switch-side show commands that are handy during this kind of test are listed after the config below.)

interface Port-channel10
description NetApp Filer Public Links
switchport
switchport access vlan 463
switchport mode access
!
interface GigabitEthernet1/1
description stfSan-e0a
switchport access vlan 463
switchport mode access
channel-group 10 mode active
!
interface GigabitEthernet1/2
description stfSan-e4a
switchport access vlan 463
switchport mode access
channel-group 10 mode active
!
interface GigabitEthernet2/1
description stfSan-e0b
switchport access vlan 463
switchport mode access
channel-group 10 mode active
!
interface GigabitEthernet2/2
description stfSan-e4b
switchport access vlan 463
switchport mode access
channel-group 10 mode active
!
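While pulling cables, a few IOS show commands make it easy to watch the bundle react on the switch side (a general sketch, not captured from these switches):

! per-member state of Port-channel10 - bundled members show the P flag
show etherchannel 10 summary
! LACP partner details learned from the filer ports
show lacp neighbor
! confirm the src-dst-ip hash is in effect
show etherchannel load-balance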

2 comments:

  1. Good post, will be really helpful with your switch configuration.

  2. I've added my NetApp LACP Cisco switch configuration.
