Clustered OnTap Basics – Carry-overs from the GX lifestyle

Many of these concepts can be traced back to the old days of the GX system, although the new clustered OnTap in no way looks like the old GX clusters. There have also been many changes across the c-mode major releases. The first release of c-mode OnTap 8 did not have support for SnapMirror and a host of other useful (valuable) 7G features, nor did it really support CIFS and SAN. It did have striped volumes and Stretch Mirror. Those were the days when the “vifs” got a new name and were called “lifs”.

I just put together an info page, and it may be more aligned to the first version of c-mode OnTap. I am checking the newer versions and will update if there are any changes.

The two things I like about the configuration are that tab completion and wildcards work!
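
For example, here is a rough sketch of what that looks like at the ngsh prompt (the cluster, volume and virtual server names are made up for illustration):

  cluster1::> vol<Tab>
  (tab completes the word to “volume”)
  cluster1::> volume show -volume ns*
  (the wildcard lists every volume whose name starts with “ns”)
  cluster1::> network interface show -vserver vs*
  (lists the lifs of every virtual server whose name starts with “vs”)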

The system can be split into the following components:

  • N-Blade
  • D-Blade
  • CSM – Cluster Session Manager
  • Nodes, Clusters, Virtual Servers

N-Blade – Network Blade

It manages the following:

  1. Network
  2. Protocols

D-Blade – Disk Blade

It manages the following:

  1. WAFL
  2. RAID
  3. Storage

CSM – Cluster Session Manager
It provides the communication between the N-blade and the D-blade. The N-blade uses the CIFS and NFS protocols on the front end, and the D-blade uses the FC protocol on the back end.
Their common language is a proprietary protocol called SpinNP, which is the protocol that the CSM uses. A write request flows from a client, through the N-blade, the CSM, and the D-blade, and on to the disks.
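
You can get a feel for the path this SpinNP traffic takes by looking at the cluster-role lifs on each node; this is only an illustrative sketch, and the node and port names are made up:

  cluster1::> network interface show -role cluster
  (lists the cluster lifs, e.g. node1_clus1 on port e0a of node1, that carry the
  CSM/SpinNP traffic between the N-blades and D-blades over the cluster network)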

Another component is the M-host, also known as the management host. It provides for the overall management of a node through a command-line interface called the ng shell, or ngsh. It also provides browser-based management through its Element Manager user interface.
The whole set of software components runs within the scaffolding of a FreeBSD Unix OS.
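
As a quick illustration (the address and names here are made up), you land in ngsh when you ssh to the cluster management lif, and “?” lists the available command directories:

  $ ssh admin@10.10.10.10
  cluster1::> ?
  (lists the top-level command directories, such as cluster, network, storage,
  volume and vserver)
  cluster1::> cluster show
  (lists each node in the cluster along with its health and eligibility)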

Clustering
When two or more nodes are clustered over a dedicated IP network, an N-blade request can be directed to any D-blade. This decoupling of the network and protocol functionality from the actual data storage lets an entire cluster of nodes work together, so a distributed group of nodes looks and acts like one single storage system.
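
One way to see that decoupling (the volume, lif and node names are made up, and the field names may vary by release) is that a client can come in on a lif currently hosted on one node while the volume it reads lives on an aggregate owned by another node:

  cluster1::> volume show -volume vol1 -fields aggregate
  (shows the aggregate, and therefore the node, that holds vol1)
  cluster1::> network interface show -lif vs1_data1 -fields curr-node,curr-port
  (shows the node and port the lif currently sits on, which can be a different node)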

Nodes, Clusters, and Virtual Servers
A cluster can contain from 2 to 24 nodes.

A cluster can be made up of a number of nodes. Each node should have an active/active partner for storage failover (SFO), but that partner is also a peer node in the larger cluster. The storage of a node can fail over to its active/active partner, and its virtual interfaces can fail over to any node in the cluster.
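
A couple of commands that show this relationship; the node and lif names are made up, and the exact parameter names can differ between releases:

  cluster1::> storage failover show
  (shows each node, its SFO partner and whether takeover is currently possible)
  cluster1::> network interface migrate -vserver vs1 -lif vs1_data1 -destination-node node3 -destination-port e0c
  (moves a data lif to any node in the cluster, not just the SFO partner)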

Virtual Servers
They can use any or all of the physical cluster resources to provide a file system, or “namespace”. Multiple virtual servers can exist in a single cluster, and each namespace is disjoint from the namespaces of any other virtual servers in the cluster. The same cluster resources can be used simultaneously by multiple virtual servers. Virtual servers do not have node boundaries; they are bound only by the cluster on which they are built.
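
For instance (the names in the output are made up):

  cluster1::> vserver show
  (lists every virtual server in the cluster along with its type and root volume)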

Virtual Server components
Each volume is associated with exactly one virtual server, and all snapshot copies of that volume are also associated with that server. Each virtual server can also have its own client-facing virtual interfaces, as well as one NFS service and one CIFS service.
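
A rough sketch of tying those pieces to a virtual server; the names, addresses and sizes are made up, and the exact parameters differ between releases:

  cluster1::> volume create -vserver vs1 -volume vol1 -aggregate aggr1 -size 100g -junction-path /vol1
  cluster1::> network interface create -vserver vs1 -lif vs1_data1 -role data -home-node node1 -home-port e0c -address 10.10.10.20 -netmask 255.255.255.0
  cluster1::> vserver nfs create -vserver vs1

In this sketch the volume, its snapshot copies, the data lif and the NFS service all belong to vs1 and to no other virtual server.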

 
