Careful with that dual-hop FCoE setup with HP Virtual Connect and Nexus 5500/5000

HP recently launched a new firmware release, 4.01, for its blade interconnect darling, Virtual Connect. You can find the release notes here. Also noteworthy are the cookbooks; HP just released its “Dual-Hop FCoE with HP Virtual Connect modules Cookbook”.

One of the new features is dual-hop FCoE. Until now you only had a single FCoE hop, from the Virtual Connect module down to the server’s Converged Network Adapter (CNA). That simplifies FCoE implementation considerably, since you don’t need any of the DCB protocols for a single hop. Starting with version 4.01 you can now carry FCoE traffic on the VC uplinks as well.

VCFF-Dual-hop-Nexus

The VC thus serves as a FIP-snooping device and the upstream switch as the Fibre Channel Forwarder (FCF), in case you want backward compatibility with existing FC environments. Please note that only a dual-hop topology is supported. You cannot keep hopping FCoE traffic across your network. Well, you can, but it’s just not supported.

Dual-hop-FIP-snooping

Some important considerations:

  • On the HP side, only the VC FlexFabric (FF) and Flex-10/10D are able to do dual-hop FCoE. On the Cisco side, only the Nexus 5500 and 5000 (NX-OS version 5.2(1)N1(3)) are supported.
  • Only Nexus FCF mode is supported (NPV mode is not supported), and the Nexus ports must be configured as trunk ports and STP edge ports (see the config sketch after this list);
  • On the VC FF, only ports X1 to X4 support FCoE; the VC Flex-10/10D supports FCoE on all of its uplinks;
  • Only the VC uplink Active/Active design is supported; the Active/Standby design is not supported, because a misconfigured server profile could force an extra hop across the VC stacking links;
  • Cross-connecting VC uplinks to both Nexus switches is not supported. Even if you’re using vPC, it would violate the design principle of not mixing traffic between SAN A (say, connected to Nexus 01) and SAN B (say, connected to Nexus 02).
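For reference, here is a minimal sketch of what the Nexus side might look like for one FCoE uplink, assuming hypothetical IDs and names (VLAN 100 mapped to VSAN 100, Ethernet1/1 as the VC-facing port, vfc1 as the virtual Fibre Channel interface). Always confirm the exact syntax against the Dual-Hop cookbook and your NX-OS version:

  feature fcoe
  vsan database
    vsan 100
  vlan 100
    fcoe vsan 100
  interface Ethernet1/1
    description To-VC-FlexFabric-X1
    switchport mode trunk
    switchport trunk allowed vlan 10,100
    spanning-tree port type edge trunk
  interface vfc1
    bind interface Ethernet1/1
    no shutdown
  vsan database
    vsan 100 interface vfc1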

VCFF-Dual-hop-constraints

  • Moreover, if you want to converge all traffic on the same uplink LAG ports, you will need a single Shared Uplink Set (SUS) per VC module. Note that a SUS can only have one FCoE network.

Dual-hop SUS

  • However, if you have more uplink ports available on your VCs, then consider separating Ethernet traffic from FCoE traffic, using a SUS for Ethernet and a vNet for the FCoE traffic. In this scenario, and when using vPC on the Nexus, it is still advisable to cross-connect the VC SUS uplinks from the same VC module to different Nexus switches for the pure Ethernet traffic.

VCFF-Dual-hop-Uplinks

  • Note: Remember the VC rules. You assign uplinks from one VC module to one SUS and uplinks from the other VC module to the other SUS. It is very important that you get this right, as VC modules are not capable of doing IRF among themselves (yet). More on how to set up VC with IRF here.

VC-uplinks

  • When using port-channels on the VC FCoE uplinks, you will need to configure the Nexus ports so that they belong to the same Unified Port Controller (UPC) ASIC (a port-channel sketch follows the figure below). This limits an FCoE port-channel to a maximum of 4 ports on the Nexus 5010/5020 and 8 ports on the 5548/5596.

Nexus-UPC-Ports
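If you do bundle the VC FCoE uplinks, the Nexus side would look roughly like the sketch below, with the vfc interface bound to the port-channel instead of a single port. Port-channel 10 and the member ports are hypothetical, and picking members from the same UPC ASIC is a physical port-selection constraint, not a command:

  interface Ethernet1/1-2
    switchport mode trunk
    switchport trunk allowed vlan 10,100
    channel-group 10 mode active
  interface port-channel10
    spanning-tree port type edge trunk
  interface vfc10
    bind interface port-channel10
    no shutdown
  vsan database
    vsan 100 interface vfc10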

More on the VC setup and installation in the “HP Virtual Connect for c-Class BladeSystem Version 4.01 User Guide”, here.

Disclaimer: note that these are my own notes. HP is not responsible for any of the content provided here.

Setup for VMware, HP Virtual Connect, and HP IRF all together

Trying to set up your ESXi hosts’ networking with HP Virtual Connect (VC) to work with your HP IRF-clustered ToR switches in Active-Active mode?

VC-IRF

This setup improves performance, as you are able to load balance traffic from several VMs across both NICs, both Virtual Connect modules, and both ToR switches, while also gaining resilience. Convergence on your network will be much faster whether a VC uplink fails, a whole VC module fails, or a whole ToR fails.

Here are a couple of docs that might help you get started. First take a look at the IRF configuration manual of your corresponding HP ToR switches. Just as an example, here’s the config guide for HP’s 58xx series. Here are some important things you should consider when setting up an IRF cluster:

  1. If you have more than two switches, choose the cluster topology. HP IRF allows you to set up a daisy-chained cluster or a ring topology. Naturally the second is recommended, for HA reasons.
  2. Choose which ports to assign to the cluster. IRF does not need specific stacking modules and interfaces to build the cluster. For the majority of HP switches, the only requirement is to use 10GbE ports, so you can flexibly choose whether to use cheaper CX4 connections, or native SFP+ ports (or modules) with copper Direct Attach Cables or optical transceivers. Your choice. You can assign one or more physical ports to one single IRF port. Note that the maximum number of ports you can assign to an IRF port varies according to the switch model.
  3. Specify a unique IRF domain for that cluster, and take into consideration that nodes will reboot during the process, so this should be an offline intervention.
  4. Prevent potential split-brain conditions. IRF works by electing a single switch in the cluster as the master node (which owns the management and control planes, while the data plane works Active-Active on all nodes for performance). If for any reason the stacking links were to fail, the slave nodes by default initiate a new master election, which could lead to a split-brain condition. There are several mechanisms to prevent this from happening (called Multi-Active Detection (MAD) protection), such as LACP MAD or BFD MAD. A minimal config sketch follows this list.
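To make these steps concrete, here is a minimal Comware sketch for member 1 of a two-node IRF cluster, plus LACP MAD on an existing link aggregation. The domain ID, priority and port numbers are made-up examples, the lines starting with # are annotations rather than commands, and the peer member gets the mirror-image config with its own member ID; follow the IRF configuration guide of your exact switch model:

  irf domain 10
  irf member 1 priority 32
  # stacking links must be shut down before they are bound to an IRF port
  interface ten-gigabitethernet 1/0/25
   shutdown
  irf-port 1/1
   port group interface ten-gigabitethernet 1/0/25
  interface ten-gigabitethernet 1/0/25
   undo shutdown
  irf-port-configuration active
  # LACP MAD on an aggregation towards another (non-IRF) device
  interface bridge-aggregation 1
   link-aggregation mode dynamic
   mad enable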

Next hop is your VC. Whether you have a FlexFabric or a Flex-10, in terms of Ethernet configuration everything’s pretty much the same. You set up two logically identical Shared Uplink Sets (SUS) – note that the VLAN names need to be different inside the VC, though. Next you assign uplinks from one VC module to one SUS and uplinks from the other VC module to the other SUS. It is very important that you get this right, as VC modules are not capable of doing IRF among themselves (yet). A rough VC CLI sketch follows.
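If you drive the VC Manager from its CLI rather than the GUI, creating the two SUSs looks roughly like this. Treat it strictly as a sketch: the SUS and network names, the enclosure:bay:port notation and the parameter names are from memory and should be checked against the VC Manager CLI guide:

  add uplinkset SUS-A
  add uplinkport enc0:1:X5 UplinkSet=SUS-A
  add uplinkport enc0:1:X6 UplinkSet=SUS-A
  add network Prod-VLAN10-A UplinkSet=SUS-A VLanID=10

  add uplinkset SUS-B
  add uplinkport enc0:2:X5 UplinkSet=SUS-B
  add uplinkport enc0:2:X6 UplinkSet=SUS-B
  add network Prod-VLAN10-B UplinkSet=SUS-B VLanID=10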

If you have doubts about how VC works, well, nothing beats the VC Cookbook. As with any other documentation that I provide direct links to, I strongly recommend that you google it just to make sure you use the latest version available online.

If you click on the figure above with the physical diagram you will be redirected to HP’s Virtual Connect and IRF Integration guide (where I got the figure in the first place).

Last hop is your VMware hosts. Though a somewhat outdated document (VMware has already released newer ESX versions since 4.0), for configuration purposes the “Deploying a VMware vSphere HA Cluster with HP Virtual Connect FlexFabric” guide lets you get started. What you should take into consideration is that you will not be able to use the Distributed vSwitch environment with VMware’s new LACP capabilities, which I’ll cover in a while. This guide gives indications on how to configure a standard vSwitch or distributed setup on your ESX boxes (without using VMware’s new dynamic LAG feature).

Note that I am skipping some steps of the actual configuration, such as the profile configuration on the VC that you need to apply to the ESX hosts. Manuals are always your best friends in such cases, and the ones provided should cover the majority of the steps.

Make sure that you use redundant Flex-NICs if you want to configure your vSwitches with redundant uplinks for HA reasons. Flex-NICs present themselves to the OS (in this case the ESXi kernel) as several distinct NICs, when in reality they are simply virtual NICs of a single physical port. Usually each half-height HP blade comes with a single card (the LOM), which has two physical ports, each of which can (to date) virtualize four virtual NICs. So just make sure that you don’t choose randomly which Flex-NIC is used for each vSwitch.

Now comes the tricky part. Note that you are configuring Active-Active NICs for one vSwitch.

vSwitch-Vmware

However, in reality these NICs are not connected to the same device. Each of them is mapped statically to a different Virtual Connect module (port 01 to VC module 01, and port 02 to VC module 02). Therefore, in a Virtual Connect environment you cannot choose an IP-hash-based load balancing algorithm for the NICs. Though you can choose any other algorithm, HP’s recommendation is to use Virtual Port ID, as you can see in the guide (this is the little piece of information missing from the VC Cookbook on Active-Active configs with VMware).
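For reference, this is one way to apply that teaming policy from the ESXi shell; a sketch assuming a standard vSwitch named vSwitch0 with vmnic0 and vmnic1 as its uplinks (the same setting is available in the vSphere Client under the vSwitch NIC teaming properties):

  # both uplinks active, load balancing based on the originating virtual port ID
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
      --active-uplinks=vmnic0,vmnic1 --load-balancing=portid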

Finally, why the heck can’t you use IP-hash or VMware’s new LACP capabilities? Well, IP-hash is intended for link aggregation to a single switch (or a switch cluster like IRF, which behaves and is seen as a single switch). As a matter of fact, IP-SRC-DST is the only algorithm supported for NIC teaming on the ESX box, as well as on the switch, when using port trunks/EtherChannels:

” […]

  • Supported switch Aggregation algorithm: IP-SRC-DST (short for IP-Source-Destination)
  • Supported Virtual Switch NIC Teaming mode: IP HASH
  • Note: The only load balancing option for vSwitches or vDistributed Switches that can be used with EtherChannel is IP HASH:
    • Do not use beacon probing with IP HASH load balancing.
    • Do not configure standby or unused uplinks with IP HASH load balancing.
    • VMware only supports one EtherChannel per vSwitch or vNetwork Distributed Switch (vDS).

Moreover, VMware explains in another article: “What does this IP hash configuration have to do with this link aggregation setup? The IP hash algorithm is used to decide which packets get sent over which physical NICs of the link aggregation group (LAG). For example, if you have two NICs in the LAG, then the IP hash output of the Source/Destination IP and TCP port fields (in the packet) will decide if NIC1 or NIC2 will be used to send the traffic. Thus, we are relying on the variations in the source/destination IP address fields of the packets and the hashing algorithm to provide us better distribution of the packets and hence better utilization of the links.

It is clear that if you don’t have any variations in the packet headers you won’t get better distribution. An example of this would be storage access to an NFS server from the virtual machines. On the other hand, if you have web servers as virtual machines, you might get better distribution across the LAG. Understanding the workloads or type of traffic in your environment is an important factor to consider while you use the link aggregation approach, either statically or through the LACP feature.”

So, given that you are connecting your NICs to different VC modules (in an HP c7000 enclosure), the only way to have active-active teaming is with plain NIC teaming – without an EtherChannel and without IP-hash, hence the Virtual Port ID recommendation.

Nonetheless, please note that if you were configuring, for instance, a non-blade server such as an HP DL360 and connecting it to an IRF ToR cluster, then you would be able to use the vDS LACP feature and thus IP-hash. The same would apply if you were using an HP c3000 enclosure, which is usually positioned for smaller environments – although it does not make that much sense to be using a c3000 while also having the required vSphere Enterprise Plus licensing.

That’s it, I suppose. Hopefully this overview will get your ESX farm up and running.

Cheers.

Disclaimer: note that these are my own notes. HP is not responsible for any of the content provided here.

Architectural overview of HP’s Vertica DB

If you start digging for more info on HP Vertica, chances are high that you’ll stumble upon this image, which summarizes its key selling points:

Vertica

The first thing one should talk about is the origin of Vertica’s name. Instead of storing data horizontally, row by row, Vertica stores it in a vertical disposition – columnar storage of data (imagine a 90-degree turn on your DB, where the properties (columns) of each row are now the main agents to consider). The goal is to benefit both query and storage capabilities. Columnar allocation allows data to be sorted (no need for indexing, since data is already sorted), which is then followed by an encoding process that stores each unique value only once, along with how many times that value repeats.

Vertica prefers encoding over compressing data. You can still query data while it is in its encoded state. Vertica applies different encoding mechanisms to different columns in the same table, according to the type of data stored, and you can also apply both encoding and compression. Besides using encoding to provide additional query speed, storage reduction is also at stake: Vertica claims a 4:1 reduction in storage footprint.
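As a rough illustration of what that looks like at table-definition time (hypothetical table and columns; in practice you would usually let the Database Designer pick the encodings, but you can declare them explicitly):

  CREATE TABLE web_hits (
      hit_date  DATE          ENCODING RLE,      -- low-cardinality, sorted column: run-length encoding
      user_id   INTEGER       ENCODING DELTAVAL, -- mostly increasing IDs compress well as deltas
      url       VARCHAR(2048)                    -- let the designer choose an encoding here
  )
  ORDER BY hit_date, user_id;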

Another important differentiator is Vertica’s distributed, grid-like architecture. As in Big Data architectures, instead of having one herculean, muscled machine, you gain the advantages of multiple smaller and, most importantly, less expensive computing nodes working in parallel. In other words, you use a Massively Parallel Processing (MPP), grid-based architecture clustering a bunch of x86-64 servers. It is a shared-nothing architecture – each node functions as if it were the only node, and uses its own processing capabilities to return the part of the answer to a query that involves the data it owns.

How do you design such a system? Each node has a minimum hardware recommendation of two quad-core CPUs, 16GB of RAM, 1TB of disk space and two VLANs assigned – one for client connections, another for private inter-node communication. The private inter-node communication VLAN is where nodes distribute queries among themselves, obtaining parallel processing.

You might be wondering by now what would happen if you were to lose one or more of those commodity hardware servers. The DB provides all the HA mechanisms you need by storing data redundantly across several nodes. We are talking about a RAID-like storage of the same data at least twice on different nodes, thus eliminating SPOFs and tolerating node failures. To do so, Vertica does table segmentation, distributing tables evenly across the nodes. The result: you get clean HA, without any need for manual log-based recovery.
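In Vertica terms this redundancy level is called K-safety. A minimal sketch of asking for one extra copy of every segment (K=1, the usual setting for clusters of three or more nodes) and then checking what the cluster actually provides:

  -- request K-safety of 1: each segment also lives as a buddy projection on another node
  SELECT MARK_DESIGN_KSAFE(1);
  -- verify the fault tolerance the cluster currently provides
  SELECT designed_fault_tolerance, current_fault_tolerance FROM system;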

Queries are indeed run on all nodes, so if one node is down you will take a performance hit (compared to full multi-node performance, not to other DBs!), since at least one node will have to do double work during the failure. If you lose enough nodes to lose at least one segment of data, Vertica automatically shuts down so as not to corrupt the DB.

Also quite neat is the auto-recovery process: when a failed node comes back online, the other nodes automatically sync its data.

As you can see, the answer to the obvious question is yes: you can forget about your centralized SAN environment and your big-ass storage array. Each node uses local disk, which also allows better performance, as redundancy is guaranteed at the DB level.

Another important aspect is that you can load your DB schema into Vertica, and it uses what they call the “Automatic DB Designer” to facilitate the whole process of DB design. You can run the designer repeatedly at different points in time, adjusting as you evolve.

Finally Vertica has a standard SQL interface, and supports SQL, ODBC, JDBC and the majority of ETL and BI reporting products.

Disclaimer: note that these are my own notes. HP is not responsible for any of the content provided here.

HP’s new Switching platform 11900

HP showcased one of its soon-to-be-released big-and-mean switching platforms at Interop 2013, the 11900, and today you can already get prices for this piece. Although it was considered one of the hottest products at Interop 2013, if you ask me this might not be the game changer for HP’s portfolio that I believe the 12900 will be when it releases. But it is interesting enough to be noted.

This box is positioned for the aggregation layer in the datacenter (HP’s FlexFabric portfolio), right next to today’s main datacenter core platform, the 12500. That said, many of us work in countries with companies of a “slightly” different size than in the US; for big companies there that are considering substituting their Cisco 6500, an 11900 might be more than enough and could surely suffice as a core.

So we are talking about another platform with a CLOS architecture, designed to be non-blocking with 4 fabric modules (each providing 3x 40Gbps per I/O line card), performance figures of 7.7 Tbps and 5.7 Bpps, and latency levels of around 3 μs.

As for scalability, we are talking about being able to scale up to 64x 40GbE ports (non-blocking) and 384x 10GbE ports (also non-blocking) today. As a matter of fact, each I/O line card technically has 480 Gbps (4x3x40) of total backplane bandwidth.

Anyway, as you may already suspect, it does indeed run HP’s new network OS version, Comware 7 (CW7), and will therefore also contain almost all of the cool “next-generation” datacenter features, namely:

  • TRILL
  • DCB and FCoE
  • EVB/VEPA
  • OpenFlow 1.3 (thus standards-based SDN-ready)
  • Though not available in this first release, MDC (contexts/virtual switches inside your chassis) and SPB (a sort of alternative to TRILL) will also become available

You might be wondering why the hell we need a new box when HP has just released new management modules for the 10500, which also bring Comware 7 and some of its features to that platform. Note that the 10500 is positioned for the campus core, not the DC. As such, it will not support features such as DCB/FCoE and EVB/VEPA, and those are the major differences you can count on in the mid-term.

As for positioning it as a core platform, please note that when it comes to scalability numbers in terms of table sizes for FIB, ARP, MAC, etc., the 11900 does not scale anywhere near HP’s 12500. Along with high buffering capability, these may be important features in DC environments with a high density and number of VMs.

Network Test also published its test results on the 11900. The tests consist essentially of a Spirent TestCenter running an L2 and L3 bombardment simultaneously on all 384 10Gbps ports (all 8 I/O slots filled with 48-port 10GbE line cards), and another bombardment simultaneously on all 64 40Gbps ports (all 8 I/O slots filled with 8-port 40GbE line cards).

Disclaimer: note that these are my own notes. HP is not responsible for any of the content provided here.

HP 10500 Campus-Core switch now supports new OS

HP just released a new hardware model of the Management Processing Unit (MPU) for its 10500 core switch platform. (For those of you who are more Cisco-oriented, we are talking about roughly the equivalent of the supervisors on a Nexus 7K platform, since the 10500 has a CLOS architecture with dedicated switching fabric modules that offload all the switching between the line cards.) Here are some specs of the new MPU, along with a 10508 mapping (as an example) of where they fit:

10500 MPU v2

Now, though the hardware specs show some enhancements (compared with the previous model, which had a 1GHz single-core CPU and 128MB of flash), the most interesting part is definitely support for the new OS version: Comware 7.

Until now the only platforms that supported this OS were the muscled ToR 5900 (yes, the 5920 as well, but for simplicity’s sake I won’t complicate things) and the 12500 high-end switching core platform. Here are some of the enhanced and exclusive features that the boxes running CW7 were blessed with:

  • ISSU (In-Service Software Upgrade, which means an online software update without forwarding interruption)
  • RBAC
  • DCB Protocols (PFC, DCBX, and ETS) + FCoE (exclusive to the 5900 on this first release)
  • EVB (naturally exclusive to the 5900, as this is a feature intended for ToR)
  • MDC (Multi-Device Context – supports completely isolated Contexts/vSwitches, exclusive to 12500)
  • TRILL (until so far was exclusive to 5900)

What this means is that HP is bringing some of this juice to its campus-core platform. If you upgrade your existing MPUs to the new ones, you will be able to add ISSU (without needing a second box clustered with IRF), as well as TRILL.

And, as you can imagine, there might be some updates in the near future on which Comware 7 features are supported on this box. Just be aware that the 10500 is and will remain (at least for quite a while) a platform positioned for the campus core, and as such it does not make sense to add datacenter functionalities such as EVB and DCB/FCoE.

Cheers.

HP SDN beta-testing

HP is beta-testing its SDN-based security solution: Sentinel.

The article describes a school implementing HP’s still-to-come SDN security solution. The school implemented a hybrid OpenFlow solution – likely the usual implementation in this initial SDN phase – where “intelligent” switches run all the usual old-school networking protocols simultaneously with OpenFlow enabled. OpenFlow is used to forward all DNS requests to HP’s security controller, Sentinel. Sentinel uses HP’s IPS database – TippingPoint’s reputation DB – which is updated every 2 hours with newly spotted suspicious Internet threats. Sentinel accepts or rejects forwarding traffic to a certain IP, based on what the network administrator chooses to do. The network admin can configure Sentinel to follow all TippingPoint recommendations, or specify his preferred alternatives. Thus, when an OpenFlow switch asks what to do with a certain DNS query, the controller simply “tells” it what to do with the related packets by populating its OpenFlow table.
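Just to make the mechanics tangible (this is not HP’s actual Sentinel implementation), a hybrid-mode rule that punts DNS to the controller while leaving everything else to the normal pipeline could look like this in Open vSwitch’s ovs-ofctl syntax, with a hypothetical bridge br0:

  # send UDP traffic to port 53 (DNS queries) up to the OpenFlow controller for a verdict
  ovs-ofctl add-flow br0 "priority=100,udp,tp_dst=53,actions=CONTROLLER:65535"
  # everything else keeps using the switch's traditional L2/L3 forwarding
  ovs-ofctl add-flow br0 "priority=0,actions=NORMAL"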

This might be a very simplistic security implementation. However, the most interesting part is the promising margin for development. As this solution gains intelligence, it may well start to serve as a low-cost IPS/firewall, using a distributed computing model with already existing OpenFlow switches. I find this model very appealing, for instance, for ROBO sites.

Another use case is HP’s beta-testing example in the article: BYOD. Securing devices at the edge of the perimeter greatly simplifies network security.

SDN might be a simple reengineering of the way things are done. Still, it’s a cool one indeed…

Disclaimer: note that these are my own notes. HP is not responsible for any of the content provided here.

Wondering about the differences between HP’s MSM460 and MSM466 APs?

Easy:

  • Both radios in the MSM466 can support 5GHz simultaneously;
  • The MSM466’s antennas are external, the MSM460’s are internal.

Key features in common:

  • Dual 802.11n radios;
  • 450 Mbps Max performance per radio;
  • Number of transmitters/receivers: 3×3
  • 3 Spatial Streams;
  • Lifetime warranty.

Cheers.

Disclaimer: note that these are my own notes. HP is not responsible for any of the content provided here.

Some thoughts on HP MSM765zl Wireless Controller

The MSM765zl controller has its ports facing the internal switch fabric of the 5400zl and 8200zl chassis switches.

MSM765zl Ports

Here are my bullet points on the consequences of this fact:

  • It has two ports – a LAN port and an Internet port – each 10GbE;
  • “Slot 1” is the Internet Port
  • “Slot 2” is the LAN Port
  • If the MSM765zl is installed, for instance, in slot “D”, then Slot 1 is “d1” CLI-wise and Slot 2 is “d2” CLI-wise.
  • Tagging a VLAN (VLAN trunking, in Cisco terms) on slot 1 via the CLI, for example: Switch(config)# vlan 5 tagged d1
  • Since the ports are permanently connected, to simulate disconnecting a port you disable it.
  • Note: the LAN port should not be disabled on the MSM765zl controller, since it carries Services Management Agent (SMA) communications between the controller and the switch, which delivers the clock to the controller. If you plan to use only the Internet port, then you can assign the slot 2 port an unused VLAN ID, untagged.
  • Configuring the controller’s IP address on the CLI if managing through the Internet port: Switch(config)# ip interface wan, then Switch(config)# ip address mode [static | dhcp], then Switch(config)# ip address <IP address/prefix length>

Disclaimer: note that these are my own notes. HP is not responsible for any of the content provided here.

HP MSM Controllers Initial Setup Considerations

HP MSM controllers have the following services: DHCP (tied essentially to the LAN port), Access Controller, RADIUS (client and server), Router (tied to the Access Controller), VPN (tied to the Internet port/IP interfaces except the LAN port), Bandwidth control (tied to the Internet port/IP interfaces except the LAN port), NAT (tied to the Internet port/IP interfaces except the LAN port), and stateful Firewall (tied to the Internet port/IP interfaces except the LAN port). The fact that the services are tied to physical ports, or have some dependencies, has some consequences that you should be aware of:

  • About the Router: it is tied to the Access Controller (AC), meaning it only routes traffic to or from access-controlled clients, and does not route traffic for other devices. Also note that it does not do any routing on its own behalf, so you must guarantee it has a default gateway.
  • About the AC: Responsible for redirecting access-controlled users to a login portal and for controlling the user’s traffic before and after Auth.
  • About the DHCP server: though the DHCP server service is tied to the LAN port, it is also intended to provide IP addresses for guest traffic tunneled to the Access Controller. Thus, you may be using the Internet port to receive tunneled traffic and still use the internal DHCP service to provide IPs to guests. Nonetheless, you can also relay DHCP requests from tunneled client traffic to your network DHCP server, if you do not want to use the controller’s DHCP server. You might also be wondering how to choose which port is used to tunnel access-controlled client traffic. Easy – the MSM solution always tunnels client traffic to the same interface where the AP management traffic tunnels are established.
  • About the Internet Port: when traffic is routed out the Internet port the controller can apply bandwidth control, NAT services, and stateful firewall services.
  • About the LAN port: the untagged LAN port interface supports DHCP services. When DHCP is enabled, the controller responds to all untagged DHCP discovery requests on the LAN port. AC traffic can also be routed out the LAN port; however, you lose the bandwidth control, NAT and FW services. For this reason, one should not design solutions that route access-controlled traffic out this interface. The controller treats devices on the untagged LAN port as access-controlled clients.

Architecture Planning

The HP MSM solution allows you to have APs forward data directly to its destination, avoiding sending all traffic through the controllers. This way you can considerably enhance traffic-forwarding performance. You basically make this decision when creating what HP calls a Virtual Service Community (VSC), which is a kind of virtual profile of the service for traffic forwarding (most of the time it is simply the settings associated with an SSID).

In the creation of a VSC you choose whether you want to use the controller for authentication, and whether you additionally want to use the controller for access control. When you enable “use the controller for authentication”, only authentication traffic is tunneled to the controller, whether using pre-shared keys or 802.1X; data traffic, however, is forwarded directly to its destination. You might centralize authentication on the controller even when you are using an external RADIUS server, in order to simplify the setup by having a single entity talking to the RADIUS server.

Only when using the Access Controller is all traffic forwarded to the controller.

VSC

This solution is best suited for providing guests with a portal for web authentication, while simultaneously applying bandwidth control and firewall services, for example, to such clients. However, when you use a NAC solution such as HP IMC-UAM 5.2, you should not enable Access Control on the controller – rather, you use the NAC solution as the controller’s RADIUS server.

Also note that the egress VLAN (the VLAN to which traffic is forwarded) is different in this case than in a distributed architecture. In a distributed architecture, the egress VLAN is simply the VLAN to which the AP forwards traffic. So in the case of a distributed architecture (read: non-access-controlled traffic) you do this by specifying a VSC binding:

  • Select the group of APs to which you want to apply the settings.
  • Select “VSC bindings”, then “Add a new binding…”.
  • Associate the configured VSC with an egress network.

VSC binding

If you do not specify a binding, the VSC simply forwards traffic to the AP’s default VLAN, if there is no user-defined VLAN to forward to.

However, access-controlled VSCs are a little trickier. As far as I understand, you have two main options. In the first option, clients in an access-controlled VSC can be routed to essentially any subnet the controller has an interface on, and beyond that via the controller’s gateway.

You can achieve this either by specifying a subnet for the clients using the controller’s DHCP server on the LAN port or on tunneled traffic (hence the recommendation to always tunnel traffic, to simplify the setup on the rest of the network infrastructure), or by using DHCP relay for the clients to acquire an IP. For the same kind of reasoning that you are advised to check the tunnel-traffic box, you are also advised to activate the NAT service here, so that after the controller has routed traffic out to its gateway, the gateway knows how to return the traffic. Alternatively, you can configure that gateway and add a route back.

Option two focuses on restricting access-controlled clients’ reachability. You do it by using the egress VLAN. This makes sense, for instance, when you want guests to be routed directly to the Internet without accessing any corporate resources. Considering that the controller only routes traffic, the guest will not have an IP on the egress VLAN, because the controller does not simply bridge traffic as APs do for VSC bindings. Instead, the guest will still have an IP on the subnet specified for these clients using the internal DHCP, but the traffic will be routed to the controller’s gateway. So be aware that it is mandatory to guarantee the following conditions when implementing an egress network for access-controlled VSCs:

  • Specify a tagged VLAN to be the egress VLAN; it cannot be any of the untagged VLANs. If you want traffic bandwidth control, then you must specify a tagged VLAN on the Internet port.
  • The VLAN must have an IP interface, as well as a gateway.
  • It is recommended to specify NAT for the IP interface. This is because you have to guarantee that the default gateway for that VLAN can route traffic back to the clients’ subnet, which is greatly simplified if you use NAT.

Finally, there is another variant for access-controlled VSCs, which involves adjusting the solution so that clients receive their IP in the egress VLAN they are ejected to (thus resembling the egress VLAN concept for non-access-controlled traffic, where APs bridge traffic to a certain VLAN). This solution involves using DHCP relay. Here are the guidelines:

  • Apply the egress VLAN for unauthenticated and authenticated clients in the VSC

VSC egress mapping AC traffic

  • In the global DHCP relay settings, select the checkbox for extending the ingress interface to the egress interface. After this is enabled, you will no longer be able to specify the IP address and subnet mask settings on VSCs.
  • Disable NAT on the egress VLAN IP interface.
  • Set the MSM controller’s IP address as the default gateway and DNS server (it will forward them to the correct ones).

In sum, the egress VLAN here simply specifies a specific default route for the controller to use for the intended clients, thus limiting the clients’ access to corporate resources.

Basic info about MSM Controllers:

  • HTTP/HTTPS are supported for accessing the web management page, by default on the standard ports 80 and 443; these can be changed later. If changed, use: http://<controller IP address>:port or https://<controller IP address>:port
  • Default Controller IP Address: 192.168.1.1/24.
  • Default Credentials: User: admin  Password: admin
  • SNMP (UDP 161) and SOAP (TCP 448) only on the LAN port.
  • AP discovery (UDP 38212): only on the Access network (for the MSM720), and on the LAN port and IP interfaces on the LAN port (for the rest of the MSM controllers).
  • The internal login pages are accessible from any interface, except if you use NOC-based authentication, in which case you should enable access on the Internet port.
  • For security reasons, you can and should restrict management access to a single interface. This can be done through the web management interface.

To add VLANs to controller ports, you configure a “network profile”, which entails a VLAN ID and can entail an IP interface when assigned to a port. A network profile does not have any effect until you apply it to a port. Furthermore, you can only apply an IP interface to a VLAN ID after you have associated the network profile with a specific interface. Note also that though you can apply several profiles to each physical interface, the services tied to that particular interface do not change.

A network profile assigned to a port with an IP interface is called a non-default IP interface, to distinguish it from the (untagged) Internet port. Like the (untagged) Internet port, non-default IP interfaces do not map to the Access Controller – unless special tunneling is involved. However, the controller can route traffic between access-controlled clients and any IP interface (independently of which physical port the interface is assigned to). Again, note that the controller will only route traffic related to access-controlled devices.

A port can only have one untagged profile, but supports multiple tagged profiles. On the MSM760 and 765 you can only add new profiles for tagged traffic. Take into consideration that the physical ports are routed ports only, and thus you cannot switch traffic in a VLAN between ports. You cannot assign the same network profile to two different ports, but you can assign the same VLAN ID on two different ports. So be aware of what this means, and avoid doing so unless you have different subnets on the two ports.

By default, management (of the controller) is enabled on the Internet network profile, which is thus usually used for that purpose to make setup easier. You can enable it on a different interface, but for security reasons make sure that you only enable it on one interface. Be sure to enable protocols like SOAP (essential for the HP NAC solution with the IMC-UAM module), SNMP, DNS and SNTP on the appropriate management interface.

MSM720 technicalities

Though until now I have been treating all MSM controllers equally, the MSM720 does not function exactly like the other controllers. To start with, it has a total of six 10/100/1000 Mbps ports (unlike the others, which have only two), and they act like switch ports (as opposed to the other controllers’ ports, which act as routed ports).

MSM720

You can aggregate ports with static port trunks or dynamic LACP, and assign network profiles as untagged or tagged to multiple ports or trunks.

Like other Controllers it uses global Network profiles. However, every profile that is mapped to a port requires a VLAN ID. When you assign the profile to a port, you select whether the associated VLAN is tagged or untagged on that port. The MSM720 only accepts tagged traffic for which the ID is configured as a tagged VLAN on the port.

The MSM720 has two network profiles at factory defaults, called the Access network and the Internet network. These profiles roughly correspond to the other controllers’ LAN port network (untagged) and Internet port network (untagged) profiles, respectively. However, the profiles have associated VLAN IDs, the Access network’s default ID being 1 and the Internet network’s being 10.

Similar to the other controllers, you can also associate more network profiles with the controller’s ports. However, on the MSM720 these can be either tagged or untagged. Note that the basic rule still applies: a port can only have one untagged VLAN. So by assigning an untagged profile, you will consequently be removing the former untagged profile from that port.

AP Deployment Considerations

There are essentially three main ways of deploying APs on the network: two where the APs discover the controller by L2 broadcast, and one by L3 discovery.

The first option is to deploy a dedicated subnet or VLAN for AP management, separate from the dedicated VLAN for controller management, for security reasons. It also allows you to isolate the MSM AP management traffic from other network traffic. To make things easier, ensure that the controller also carries the APs’ dedicated VLAN, tagged on one (or more, in the case of the MSM720) of its ports, in order to support L2 discovery. This is the best option to consider.

The second option consists of deploying both Controller and APs on the same VLAN. Usually this happens when companies do not want to change their network infrastructure by adding a new VLAN.

Though companies normally prefer to use their own DHCP server to centralize all scope management, you can also use the controller’s own DHCP service to provide IPs for the APs. In option 1, the DHCP server admin should create a new scope (or pool) for the MSM APs. The network admin should also configure DHCP relay on the routing switch that acts as the gateway for the new VLAN, for example as sketched below.
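For example, on an HP ProVision routing switch acting as the gateway for a hypothetical AP management VLAN 60, the relay piece would look roughly like this (addresses are made up):

  ip routing
  vlan 60
     name "MSM-AP-Mgmt"
     ip address 10.60.0.1 255.255.255.0
     ip helper-address 10.1.1.10
     exit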

Note: The DHCP service can only be used when controllers are not teamed (teaming being the clustering mode for HA). Also, be careful not to enable the DHCP service on a subnet where another DHCP server is already working.

The third and last option is usually used when APs are deployed at remote sites. Communication between the APs and the controller is done at L3, so you must either use the DHCP option 43 attribute, use DNS resolution (which implies having the default controller name in your DNS), or manually configure the controller’s IP address on the AP.

Please note that these options can be combined.

Also, for security reasons you might prefer having the MSM APs connect to tagged ports on the switch, as well as implementing 802.1X on the switch port, since MSM APs can act as supplicants.

Disclaimer: note that these are my own notes. HP is not responsible for any of the content provided here.