Best Practices for Network Configuration with NetApp Storage Systems
Applies to
- ONTAP 9
- Data ONTAP 7
Answer
These best practices address symptoms such as the following:
- Clients experience slow response from the storage system.
- NFS server not responding during peak usage times.
- CIFS overdue requests during peak usage times.
One of the limiting factors in NAS (network-attached storage) is the throughput of the network to which the NAS storage system is connected. There are several best practices that should be considered when determining the network requirements for a NetApp storage system.
- NETWORK CONNECTIVITY
Network connectivity can be divided into two classifications: public and private.
- Public Network Configuration
- A public network configuration is one that is used by CIFS and/or NFS clients for file sharing.
- This network would be used for file sharing such as home directories.
- In public networks, VLANs are not required to separate traffic.
- Private Network Configuration (IPSAN)
- A private network configuration is used for iSCSI applications (such as Exchange), NFS applications (such as Oracle or ClearCase), and/or CIFS (such as a web server share). It is a best practice to segregate database/iSCSI network traffic from public user traffic to ensure that the required bandwidth is available.
If virus scanning is used with the NetApp storage system, the AV (anti-virus) scanners should reside on a separate private network containing only the storage system and the AV server. The connections for both should be at least Gigabit Ethernet. For more information on enabling virus scanning on the storage system, see TR-3107: Antivirus Scanning Best Practices Guide.
- VLANs (virtual LANs) can be employed to separate these protocols from public network traffic. In a private network configuration, multiple VLANs can be defined to ensure the separation of management, backup, and database traffic (iSCSI/NFS/CIFS). When configuring the switch for multiple VLANs, it is important to also consider the switch architecture to ensure that it has the necessary bandwidth to process the switching load placed on it. This is discussed in greater detail in the Recommended Switch Architecture section of this document.
- STORAGE SYSTEM NETWORK INTERFACE CONFIGURATION:
The configuration of the storage system's network interfaces depends on several factors such as the type of traffic being sent, bandwidth requirements, and subnet requirements. Storage system interfaces can be configured as physical interfaces, virtual interfaces (VIFs) or interface groups (IFGRPs), or as members of VLANs.
- Physical Interfaces
When configuring the physical interface (NIC) of the NetApp storage system, the following settings should be specified:
- The IP address, network (subnet) mask, and broadcast address
- Hardware-dependent values such as media-type, MTU size, and Ethernet flowcontrol
- Whether the NIC will register with WINS (Windows Internet Name Services) in a CIFS environment
- The partner IP address in a NetApp Active/Active configuration to ensure a successful cf takeover
Information on the specific commands used to configure network interfaces can be found in the Data ONTAP 7-Mode Network Management Guide or Network Management for ONTAP 9 depending on the version of Data ONTAP operating on the storage system.
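For illustration only, the settings above map onto the 7-Mode ifconfig command. This sketch assumes a hypothetical interface e0a, example addresses, and a partner interface of the same name on the HA partner; in 7-Mode such commands are typically placed in /etc/rc so that they persist across reboots:

    # hypothetical interface and addresses; adjust for your environment
    ifconfig e0a 192.168.10.5 netmask 255.255.255.0 broadcast 192.168.10.255
    ifconfig e0a mediatype auto flowcontrol full
    ifconfig e0a wins
    ifconfig e0a partner e0a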
When configuring 10/100Base-T NICs, it is best to set the speed and duplex to the desired value instead of allowing these to remain at auto-negotiate. These values should also be set on the associated switch ports. This will ensure that the network connection consistently operates with the desired parameters.
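For example, on a 7-Mode system the mediatype option of ifconfig can pin a 10/100 port to 100 Mb/s full duplex (the interface name is hypothetical; the attached switch port should be set to the same fixed values):

    # fix speed/duplex on a hypothetical 10/100 port
    ifconfig e0a mediatype 100tx-fd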
When configuring Gigabit Ethernet NICs, auto-negotiate should always be used. Additionally, Ethernet flowcontrol settings should be considered. Ethernet flowcontrol is a method of allowing pause frames to be sent and received when congestion is detected on a point-to-point Ethernet connection.
NetApp's current recommendation is to disable flow control on the cluster network; no specific recommendation is made for the data network.
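In ONTAP 9, the administrative flow control setting of a port can be changed with the network port modify command. A minimal sketch, assuming a hypothetical node name node1 and port e0a:

    network port modify -node node1 -port e0a -flowcontrol-admin none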
- Virtual Interface (VIF) or Interface Group (IFGRP)
VIFs or IFGRPs, also known as port channeling or aggregation, allow for high availability and redundant network configurations on NetApp storage systems. VIFs are used in Data ONTAP 7G, and IFGRPs are used in Data ONTAP 8.0 7-Mode and later. Both VIFs and IFGRPs can be configured as either single-mode or multi-mode. A storage system can have multiple VIFs or IFGRPs configured.
- Single-mode VIFs or IFGRPs
Single-mode VIFs allow for failover across network switches. A single-mode VIF should be used to provide redundancy with switch connections. Single-mode VIFs can contain two or more multi-mode VIFs to allow for increased network bandwidth on a given IP segment. When a single-mode VIF is brought online, one of the interfaces (or multi-mode VIFs) is favored. This preferred interface can be specified by the system administrator. Information on the configuration of single-mode VIFs can be found in the Data ONTAP 7-Mode Network Management Guide or Network Management for ONTAP 9.
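For illustration, a minimal 7-Mode sketch that creates a single-mode VIF across two hypothetical ports, designates one of them as the favored link, and assigns an example address:

    # hypothetical VIF, ports, and address
    vif create single svif1 e0a e0b
    vif favor e0a
    ifconfig svif1 192.168.10.5 netmask 255.255.255.0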
- Multi-mode VIFs or IFGRPs
Multi-mode VIFs are used with port channeling (or Cisco EtherChannel) configurations. A NetApp storage system allows up to 16 Gigabit Ethernet ports to be channeled together. A multi-mode VIF should be used to provide increased network throughput for data access. Multi-mode VIFs provide load balancing of outgoing traffic from the storage system. The storage system supports the following load-balancing methods: IP address based, MAC address based, port based, or round robin.
- Note: Data ONTAP 7.3.1 does not include support for round-robin load balancing.
- When determining the number of interfaces to include in a multi-mode VIF, the storage system administrator must understand the network load that will be applied to the storage system on a given IP segment. Most multi-mode VIFs contain 2 - 3 network interfaces.
The individual interfaces in a multi-mode VIF can be separated across multiple network switch blades to provide redundancy; however, this requires additional switch-side configuration, for example, vPC on Cisco switches. If a single interface in a multi-mode VIF fails, the VIF continues to send traffic on the remaining interface(s).
- Data ONTAP 7.2.1 and later include two types of multi-mode VIFs: dynamic and static. Dynamic multi-mode VIFs comply with the LACP (IEEE 802.3ad) standard. As such, they can detect loss of link status and loss of data flow. Static multi-mode VIFs do not support LACP functionality.
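As a sketch, the first two 7-Mode commands below are alternatives (a VIF is either static or LACP) that create a multi-mode VIF with IP-based load balancing; the ONTAP 9 commands that follow create a comparable LACP interface group. All node, VIF, IFGRP, and port names are hypothetical, and the connected switch ports must be configured for a matching channel mode:

    # 7-Mode: static multi-mode VIF, or its dynamic (LACP) equivalent
    vif create multi mvif1 -b ip e0a e0b
    vif create lacp mvif1 -b ip e0a e0b

ONTAP 9:

    network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
    network port ifgrp add-port -node node1 -ifgrp a0a -port e0c
    network port ifgrp add-port -node node1 -ifgrp a0a -port e0d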
- Detailed information on the configuration of multi-mode VIFs can be found in the Data ONTAP 7-Mode Network Management Guide or Network Management for ONTAP 9.
- Additional information on the use of VIFs can be found in TR-3802: Ethernet Storage Best Practices.
- VLAN Interfaces
The storage system also supports port-based VLAN membership. To configure a NetApp storage system for VLAN membership, the following conditions must be met:
- The network switches must be IEEE 802.1Q compliant or have a vendor-specific implementation of VLANs.
- The end station must be able to dynamically register VLAN membership using GVRP, or the switch ports must be statically configured to allow multiple VLANs.
- VLANs must be allowed on the trunked switch port.
- Additional information on configuring the storage system for VLANs can be found in Network Management for ONTAP 9.
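For illustration, a hypothetical VLAN with ID 100 on port e0a, first in 7-Mode and then in ONTAP 9; in both cases the attached switch port must be configured as a trunk that carries that VLAN:

7-Mode:

    # creates the VLAN interface e0a-100 and assigns an example address
    vlan create e0a 100
    ifconfig e0a-100 192.168.100.5 netmask 255.255.255.0

ONTAP 9:

    network port vlan create -node node1 -vlan-name e0a-100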
- RECOMMENDED SWITCH ARCHITECTURE:
- For the Data Center
To ensure high availability, Data Center switch architectures should always include redundant core switches. If private networks are to be used in the environment, it is recommended to dedicate switches to the private network. However, non-routed VLANs can also be used for private networks, provided that the proper blade architecture is employed (see the Recommended Switch Blade Types section below).
- For the Remote Office
Remote office switches should be Gigabit Ethernet switches to provide sufficient bandwidth. If high availability is needed, then redundant switches should be deployed. For protocols such as iSCSI that should be run over a private network, the remote office switch should also support VLANs.
- RECOMMENDED SWITCH BLADE TYPES:
Not all Gigabit switches are equal. Switch blades can be classified as non-blocking or blocking architecture. It is important to know which architecture the switch blade uses when determining whether it is appropriate for the Data Center or Remote Office environment.
- For the Data Center
As the Data Center switches will carry a heavy traffic load, the switch blades should provide a non-blocking architecture that guarantees dedicated gigabit bandwidth and memory for each port (1:1). Switch blades that employ blocking architectures (8:1) use shared memory and cannot provide the performance required in a Data Center environment.
- For the Remote Office
A quality Gigabit Ethernet switch in the remote office is often sufficient, provided that high performance is not expected. Switch blades with blocking architectures can often handle the performance requirements of the network in a remote office. For bandwidth- or performance-intensive requirements, however, a switch blade with a non-blocking architecture will be required.

The proper network configuration can positively impact the performance of the NetApp storage system. Questions on specific network requirements or configurations in your specific environment can be directed to NetApp.
*Note: The term VIF is deprecated in Data ONTAP 8.0 and later; the equivalent functionality is referred to as an IFGRP.