VNX Unified Storage Management Student Guide

Education Services November 2015


Welcome to VNX Unified Storage Management. Copyright ©2015 EMC Corporation. All Rights Reserved. Published in the USA. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The trademarks, logos, and service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation and other parties. Nothing contained in this publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party that owns the Trademark. EMC, EMC² AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker. CIO Connect, ClaimPack, ClaimsEditor, Claralert ,cLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge , Data Protection Suite. Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences, Documentum, DR Anywhere, ECS, elnput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender , EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM. eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator , InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS,Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor , Metro, MetroPoint, MirrorView, Multi-Band Deduplication,Navisphere, Netstorage, NetWorker, nLayers, EMC OnCourse, OnAlert, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO Smarts, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.

Revision Date: November 2015
Revision Number: MR-1CP-VNXUNIDM.10

Copyright 2015 EMC Corporation. All rights reserved.


Course Introduction

1

This course focuses on the key activities to manage EMC VNX series storage systems in a Block and File environment. Key topics include initial array configuration, domain management, and SAN host configuration with Windows, Linux, and VMware ESXi. The course also covers VNX File storage configuration and management for both NFS and CIFS environments, and illustrates configuring file content and storage efficiency features. Management of VNX local protection features is also covered. This course emphasizes the Unisphere GUI to manage a VNX storage system.


Course Introduction

2

The course agenda is shown here.


Course Introduction

3

The course agenda is shown here.


Course Introduction

4

This module focuses on explaining where this course fits into your VNX curriculum. To gain the most benefit from this course, certain prerequisite knowledge is required. These prerequisites are briefly reviewed, along with cross-references to the prerequisite courses to help you obtain this knowledge.


Module: Introduction to VNX Management

1

Understanding where this management course fits into your VNX curriculum will help you find the additional training you require and set realistic expectations of what is covered in this course.

Shown here is a depiction of the VNX training options available to you. The prerequisite courses for this class are shown above the management courses, while the expert-level classes are below. This VNX Unified Storage Management course also has two derivative courses: VNX Block Storage Management and VNX File Storage Management. Each of these courses is a subset of the ‘Unified’ course, focusing specifically on its own storage service. Technical certification through the EMC Proven™ Professional VNX Solutions Specialist Exam for Storage Administrators (E20-547) is based on the prerequisite courses and VNX Unified Storage Management (or a combination of VNX Block Storage Management and VNX File Storage Management).


Module: Introduction to VNX Management

2

The prerequisite eLearning, VNX Fundamentals, provides an effective overview of the VNX storage system. The VNX Series unifies EMC’s file-based and block-based offerings into a single product that can be managed with one easy-to-use GUI. VNX is a storage solution designed for a wide range of environments, from midtier to enterprise. Back-end storage connectivity is via Serial Attached SCSI (SAS), which provides up to 6 Gb/s connections. The VNX Unified Storage platform supports NAS protocols (CIFS for Windows and NFS for UNIX/Linux, including pNFS, and FTP/SFTP for all clients), as well as native block protocols (Fibre Channel, iSCSI, and FCoE). VNX is built upon a fully redundant architecture for high availability and performance.


Module: Introduction to VNX Management

3

VNX Fundamentals introduces the available models in the VNX family. The VNX2 Series (VNX with MCx) consists of five Unified and two Gateway models. VNX2 can scale up to 6 PB of storage. Each model is built on multi-core CPU technology, which means that each CPU consists of two or more independent central processing units. The multiple CPU cores provide better performance. The key to such performance is that the EMC VNX Operating Environment (OE) is designed for Multi-Core Optimization (MCx). With MCx, all services are spread across all cores. Shown here are the current VNX models. There are also derivative models, VNX-F and VNXCA, which are specialized for particular requirements.


Module: Introduction to VNX Management

4

VNX storage systems consist of the following hardware components. The components included in your system depend on the model.

All VNX storage systems have two Storage Processors (SPs). SPs carry out the tasks of saving and retrieving block data. SPs use I/O modules to provide connectivity to hosts, and 6 Gb/s Serial Attached SCSI (SAS) to connect to the disks in Disk Array Enclosures. SPs manage RAID Groups and Storage Pools, and are accessed and managed through the SP Ethernet ports using either a CLI or EMC Unisphere, the web-based management software. SPs are the major components contained inside both Disk Processor Enclosures and Storage Processor Enclosures.

Storage Processor Enclosures (SPEs) house two Storage Processors and I/O interface modules. SPEs are used in the high-end enterprise VNX models and connect to external Disk Array Enclosures.

Disk Processor Enclosures (DPEs) house two Storage Processors and the first tray of disks. DPEs are used in the midsize-to-high-end VNX models.

A Data Mover Enclosure (DME) houses the File CPU modules, called Data Mover X-Blades. Data Movers provide file host access to a VNX storage array. This access is achieved by connecting the DMs to the SPs for back-end (block) connectivity to the disk enclosures. DMEs are used in all File and Unified VNX models and act as a gateway between the file and block storage environments.

A Control Station (CS) allows management of File storage and acts (in File or Unified systems) as a gateway to the Storage Processors. Only Storage Processors can manage Block storage. Control Stations also provide Data Mover failover capabilities.

Disk Array Enclosures (DAEs) house the non-volatile hard and Flash drives used in the VNX storage systems.

Standby Power Supplies (SPSs) provide power to the SPs and the first Disk Array Enclosure to ensure that any data in transit is saved if a power failure occurs.


Module: Introduction to VNX Management

5



A brief description of the core VNX features is discussed in VNX Fundamentals.

VNX high availability and redundancy features provide five-nines (99.999%) availability of access to data.

• All hardware components are redundant, or have the option to be redundant. Redundant components include: dual Storage Processors with mirrored cache, Data Movers, Control Stations, storage media via RAID and sparing, and so on.

• High availability to LUNs is provided by trespassing LUNs from one SP to another, Asymmetric Logical Unit Access (ALUA), and Symmetric Active-Active.

• Paths to Block data are also redundant within the array. Each drive has two ports connected to redundant SAS paths. (Outside of the array, path redundancy can be provided at both the host and network levels.)

• The network features LACP (Link Aggregation Control Protocol) and Ethernet Channel protect against an Ethernet link failure, while Fail Safe Networking protects against the failure of an Ethernet switch.


Module: Introduction to VNX Management

6

VNX Virtual Provisioning improves storage capacity utilization by only allocating storage as needed. File systems as well as LUNs can be logically sized to required capacities, and physically provisioned with less capacity.

Deduplication and compression are available for both Block and File services. While compression for Block and File uses the same underlying technology, File-level deduplication uses EMC Avamar technology. VNX File Level Retention is a capability available to VNX File that protects files in a NAS environment from modification and deletion until a user-specified retention date. With quotas, a limit can be specified on the number of allocated disk blocks and/or files that a user, group, or tree can have on a VNX file system, controlling the amount of disk space and the number of files that a user, group, or tree can consume.


Module: Introduction to VNX Management

7

While VNX Fundamentals introduces FAST and FAST VP, this technology is discussed in depth in the VNX FAST Suite Fundamentals prerequisite eLearning. FAST VP automates movement of data across media types based on the level of the data’s activity. This optimizes the use of high performance and high capacity drives according to their strongest attributes. FAST VP improves performance and cost efficiency. FAST Cache uses Flash drives to add an extra cache tier. This extends the array’s read-write cache and ensures that unpredictable I/O spikes are serviced at Flash speeds.


Module: Introduction to VNX Management

8

The VNX Local Protection Suite Fundamentals prerequisite eLearning expands upon the general overview of the features shown here that is provided in VNX Fundamentals. VNX SnapSure is a feature for File data services. SnapSure provides a read-only or read/write, point-in-time view of VNX file systems. SnapSure is used primarily for low-activity applications such as backup and user-level access to previous file versions. SnapSure uses Copy on First Write technology. VNX Snapshot is a Block feature that integrates with VNX Pools to provide a point-in-time copy of a source LUN using a redirect-on-first-write methodology. VNX SnapView Snapshot is also a Block feature that provides point-in-time copies of source LUNs. SnapView integrates with Classic LUNs and uses Copy on First Write technology. SnapView Clone provides a full copy of a LUN.

The appliance-based RecoverPoint/SE local protection replicates all block data for local operational recovery, providing DVR-like rollback of production applications to any point in time. It tracks all data changes to every protected LUN in a Journal volume.


Module: Introduction to VNX Management

9

While VNX Fundamentals introduces the remote protection features of VNX, the VNX Remote Protection Suite Fundamentals prerequisite eLearning explains these features in detail.

VNX Remote Protection features include SAN Copy, MirrorView, Replicator, and RecoverPoint/SE CRR. SAN Copy copies LUN data between VNX storage systems and any other storage array. SAN Copy is software-based and provides full or incremental copies, utilizing SAN protocols (FC or iSCSI) for data transfer. MirrorView is a feature of VNX for Block used for remote disaster recovery solutions. MirrorView is available in both synchronous (MirrorView/S) and asynchronous (MirrorView/A) modes. Replicator is a VNX File feature that produces a read-only copy of a source file system. The copy can be local or remote. VNX Replicator transfers file system data over an IP network. Changes to the source file system are tracked and transmitted on a time interval. VNX Replicator can be used as an asynchronous disaster recovery solution for both NFS and CIFS. RecoverPoint/SE Remote Replication is a comprehensive data protection solution that provides bi-directional synchronous and asynchronous replication of block data between VNX systems over a WAN. RecoverPoint/SE CRR allows users to recover applications remotely to any significant point in time without impact to production operations.


Module: Introduction to VNX Management

10

VPLEX Continuous Operations allows administrators to have exactly the same information in two separate locations, accessible at the same time from both locations. Companies can achieve improved levels of continuous operations, non-disruptive migrations and technology refresh, and higher availability of their infrastructure. VPLEX enables an active/active VNX data access strategy over distance, load balancing across two VNX arrays, and automated failover. VNX supports NDMP (Network Data Management Protocol), an open standard backup protocol designed for NAS environments. During NDMP operations, backup software is installed on a third-party host, while the Data Mover is connected to the backup media and serves as the NDMP server. NDMP provides the ability to back up multi-protocol (CIFS and NFS) file systems. EMC Common Event Enabler is a File-level alerting framework for CIFS and NFS. It notifies antivirus servers of potentially virulent client files and uses third-party antivirus software to resolve virus issues. VNX Controller-Based Encryption encrypts all data at the Storage Processor. All data on disk is encrypted such that it is unreadable if removed from the system and attached to a different VNX.


Module: Introduction to VNX Management

11

Unisphere for VNX, Navisphere Secure CLI, and the VNX CLI via the Control Station are the core configuration and management interfaces of the VNX. Using these interfaces, users can fully administer a VNX system. The focus of this course is management via Unisphere for VNX. Unisphere Central is software that provides centralized multi-box monitoring of hundreds of VNX systems, whether they reside in a data center or are deployed in remote and branch offices. Unisphere Analyzer is the VNX performance analysis tool that helps identify bottlenecks and hotspots in VNX storage systems and enables users to evaluate and fine-tune the performance of their VNX system. Unisphere Quality of Service Manager (UQM) measures, monitors, and controls application performance on the VNX storage system.

VNX Family Monitoring and Reporting automatically collects block and file storage statistics along with configuration data, and stores them into a database that can be viewed from dashboards and reports.


Module: Introduction to VNX Management

12

As the focus of this course is on the native VNX management tools, our next module will provide further detail on various aspects of VNX Unisphere security and basic management usage.


Module: Introduction to VNX Management

13

This module provided a very brief summary of the prerequisite knowledge that helps you gain the most value from this course. We covered a large number of VNX concepts and directed you to the prerequisite eLearnings to help you acquire a fuller understanding of each.

If you have not yet completed the eLearnings listed, you can register and enroll in them during your off hours this week. In addition to the prerequisite courses, we also identified additional training that you may find valuable after this management course. These courses are VNX Unified Storage Performance Workshop, VNX Block Storage Remote Protection with MirrorView, and VNX File Storage Remote Protection with Replicator.


Module: Introduction to VNX Management

14

This module focuses on the basics of VNX Unisphere security and basic management. It discusses the user interface options, management security, notifications and event monitoring, and Storage Domains.

Module: Unisphere Security and Basic Management

1

This lesson covers the management interfaces for the VNX Unified Storage system. It provides an overview of the Unisphere GUI, the aspects of management it provides, its layout, and its access methods. An overview of the CLI management interfaces for both file and block is also presented.

Module: Unisphere Security and Basic Management

2

There are three interface options available to manage the VNX Unified Storage system: the Unisphere Graphical User Interface (GUI), the File Command Line Interface (CLI), and the Block Command Line Interface (CLI). Management is performed from an administrator PC or workstation to the VNX. The Unisphere GUI is the primary management interface for the system. From it, both the block and file aspects of the system are managed. It is a web-based application that resides on the VNX and is accessed using a browser, such as Internet Explorer, addressed to the VNX. Unisphere Client software is also available as an installable application for Windows platforms. Management is performed over a secure network connection to the VNX system. The File CLI option is available for file administrative tasks. The tasks are performed over a secure network connection to the VNX Control Station using Secure Shell (SSH), or over a direct serial connection to the Control Station. The File CLI option is useful for scripting administrative tasks for file.

The Block CLI option is available as an installable application and is used for block administrative tasks. The tasks are performed over a secure network connection to the VNX Storage Processors, A or B. The Block CLI can be used to automate management functions through scripts and batch files.
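As a minimal sketch of what each connection method looks like from an administrator workstation (the addresses, account names, and password below are placeholders, not values from this course):

    # File CLI: SSH session to the VNX Control Station
    ssh nasadmin@192.168.1.100

    # Block CLI: Secure CLI request sent to a Storage Processor
    naviseccli -h 192.168.1.101 -User sysadmin -Password mypassword -Scope 0 getagent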

Module: Unisphere Security and Basic Management

3

With Unisphere, all aspects of the VNX can be managed. Global system management tasks are available, as well as the tasks that are unique to file storage and block storage. Some of the system management tasks relate to settings on the system such as network addressing, services, and caching. System hardware can be viewed and configured. Security relating to management is also available, such as management accounts and storage domain configuration. The system software is also managed from Unisphere. Reports can also be generated about the system configuration, status, and availability. System monitoring and alert notification can also be managed within Unisphere. File storage related tasks are also available in Unisphere, such as Data Mover networking and services settings. Management of storage space for file relating to pools and volumes is provided. File systems and all their features are managed. CIFS shares and servers are managed as well as NFS exports. Unisphere also manages both local and remote VNX file replication features.

Unisphere provides block storage management tasks, such as network and Fibre Channel connectivity settings. Storage provisioning for Storage Pools and RAID Groups is available. LUNs and all their features are also managed. Host access to storage is managed within VNX Storage Groups with Unisphere. Unisphere also manages both local and remote VNX block replication features.

Module: Unisphere Security and Basic Management

4

Unisphere is easily accessed for managing the VNX. The Unisphere Server software runs natively on the Control Station and on both Storage Processors, SPA and SPB. An optional Unisphere Server executable is available for installation on a Windows server, which allows centralized management of multiple VNX systems through a Unisphere Storage Domain. Simply open a browser and input the IP address or DNS name of the device running the Unisphere Server software: the VNX Control Station, either Storage Processor, or the Windows server. Or, if it is installed, open the Unisphere Client software and provide it with the name or IP address of the Unisphere Server (VNX Control Station, either Storage Processor, or Windows server). Next, input credentials for the VNX at the logon screen and Unisphere will open. It is important to note that Unisphere is a Java-based application; the system running Unisphere therefore requires that the Java Runtime Environment (JRE) software is installed and running to support the Unisphere application.
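For example, a browser session might be addressed to any of the following (hypothetical addresses and host name, shown for illustration only):

    https://192.168.1.100          # VNX Control Station
    https://192.168.1.101          # Storage Processor A
    https://mgmt01.example.com     # Windows server running Unisphere Server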

Module: Unisphere Security and Basic Management

5

The Unisphere interface has three main areas: the top navigation bar, the task pane, and the main pane.

The top navigation bar consists of:
• Previous and Next icons: The left and right arrows allow users to go back and forth.
• Home icon: Shows the Dashboard screen.
• System drop-down menu: Allows the user to switch between VNX storage systems registered in the domain.
• Context-sensitive menu bar: Presents the main options for VNX for File/Unified and VNX for Block. It varies depending on the system being managed.

Task pane: Provides task-based navigation, meaning common tasks are grouped together to make them easier to access. Depending on the menu selected, different tasks will appear.

Main pane: This is where the pertinent information for the selected menu is displayed. The division between the task pane and the main pane can be resized by clicking the division bar and dragging it to a new position. The task pane can also be hidden by clicking the right arrow on the division bar, which expands the main pane; clicking the left arrow on the division bar expands the task pane again and resizes the main pane. This course includes a lab exercise that provides the learner hands-on experience accessing, operating, and navigating the Unisphere interface.

Module: Unisphere Security and Basic Management

6

Unisphere also provides a setup page for the SP Management Server. It is used mostly for modifying initial settings, restarting the Management Server, and other activities relating to maintenance. The setup page is accessed from a browser addressed to the IP address of either SPA or SPB with /setup appended to it, as in this example: https://<SP_IP_address>/setup. The page requires you to input credentials to access it. Some operations available from the setup page are: changing the SP host name, creating a new Global Administrator account, managing the SSL/TLS certificate, updating parameters for agent communication, restarting the Management Server, recovering the domain, setting RemotelyAnywhere access restrictions, and many other functions.

Module: Unisphere Security and Basic Management

7

The File CLI is accessed from the Control Station through either a secure network connection using Secure Shell (SSH) or a direct serial connection. It consists of a series of Linux-like commands for managing file-related tasks on the VNX system. There are over 100 unique commands, formed from five prefix command sets. The prefix sets are used for managing different elements of VNX file storage and are shown below:
• cel_ commands execute against a remotely-linked VNX for File system
• cs_ commands execute against the local Control Station
• fs_ commands execute against the specified file system
• nas_ commands execute directly against the Control Station database
• server_ commands require a “movername” entry and execute directly on a Data Mover (for example, server_ifconfig server_2…)
The Control Station also includes the full command set for the Block CLI.
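A few representative File CLI commands, shown as a sketch (server_2 is the typical first Data Mover name; the file system name ufs1 is a hypothetical example):

    nas_server -list                   # query the Control Station database for Data Movers
    nas_fs -list                       # list file systems known to the Control Station
    server_ifconfig server_2 -all      # display the network interfaces on Data Mover server_2
    server_df server_2 ufs1            # report capacity usage of file system ufs1 from server_2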

Module: Unisphere Security and Basic Management

8

The Block CLI is provided through the naviseccli command, also known as Secure CLI, and has a secure command structure. It includes a rich set of command options and sub-options for all block related management, configuration and maintenance operations. With it, all aspects of VNX block storage and its features can be configured and managed. Host connectivity to storage can also be configured and managed. The status of the system can be checked. Maintenance tasks can also be performed such as SP reboots and software updates. With the CLI, repetitive administrative tasks for block can be scripted. The Block CLI is installed on supported Windows, Linux and UNIX-based systems. It is also included on the VNX Control Station in its /nas/sbin directory.
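A few representative Secure CLI requests, shown as a sketch (the SP address is a placeholder):

    naviseccli -h 192.168.1.101 getagent            # basic system and agent information
    naviseccli -h 192.168.1.101 getlun 0            # properties of LUN 0
    naviseccli -h 192.168.1.101 storagegroup -list  # list Storage Groups and their host connections
    naviseccli -h 192.168.1.101 faults -list        # report current system faults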

Module: Unisphere Security and Basic Management

9

Some VNX features are not manageable through the Unisphere GUI and can only be managed using File CLI. An example is event notifications that specify individual event identifiers.

The GUI does offer an option from its Task Pane for running File CLI commands. The Control Station CLI option within Unisphere allows you to enter commands one at a time and view their output.

Module: Unisphere Security and Basic Management

10

Please read the Pre-Lab Exercises section of the lab guide for information about the lab layout and access methods. This Lab covers VNX management with Unisphere. System login and Unisphere general navigation is performed along with Unisphere navigation to specific File and Block functions. The File command line interface will be invoked from within Unisphere.

Module: Unisphere Security and Basic Management

11

This lab covered VNX management with Unisphere. System login and general Unisphere navigation were performed, and Unisphere navigation to specific File and Block functions was done. The CLI was invoked from within Unisphere.

Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some possible concerns relating to the lab subject?

Module: Unisphere Security and Basic Management

12

This lesson covers the different strategies used by Unisphere to prevent unauthorized access to VNX systems. The lesson will also discuss the different authentication scopes and how to assign privileges associated with tasks an administrative user can perform on particular VNX objects.

Module: Unisphere Security and Basic Management

13

A key capability of VNX is its secure management. VNX implements key tenets of security to ensure that only limited, authorized users and applications have management access to the system. The key tenets that VNX management security is built upon are authentication, authorization, privacy, trust, and audit. Each provides the following:

• Authentication: Identify who is making a request, and only grant access to authorized users. VNX systems will not permit any actions without validation of the authentication.

• Authorization: Determine if the requestor has the right to exercise the request. The Storage Management Server authorizes user activity based on the role of the user.

• Privacy: Protect against snooping of data. Security settings enable the definition of controls that prevent data stored in the VNX system from being disclosed in an unauthorized manner. VNX systems use several proprietary data integrity features to protect user data with encryption and secure connections.

• Trust: Verify the identity of the communicating parties. VNX systems use certificates for securing network operations associated with managing the system. Certificates provide a mechanism for establishing a trusted identity on the network.

• Audit: Keep a record of who did what, and when. VNX event logs contain messages related to user management actions, activities performed by service personnel, and internal events.

VNX storage systems can be accessed by different management applications for configuration, maintenance, and administration: Unisphere, File and Block CLI, Unisphere Service Manager (USM), Unisphere Host Agent (or Server Utility), Unisphere Initialization Utility, VNX Installation Assistant (VIA), SNMP management software, Admsnap, Admhost, snapcli, ESRS, Unisphere Central, and Unisphere Client.

Module: Unisphere Security and Basic Management

14

Secure management access to the VNX is accomplished through the management interface login, the connection to the VNX, and its management user accounts. A secure network connection is established between the management interface and the VNX using industry-standard protocols: Secure Sockets Layer (SSL), Transport Layer Security (TLS), or Secure Shell (SSH). These industry-standard protocols use certificates that establish trust and authentication between the management interface and the VNX. They then encrypt communication with each other to establish the privacy required for secure communications. Note: If using the File CLI via a serial connection, physical security of the VNX is required to assure management access security. The administrative user then supplies login credentials to the management interface, which are passed over the secure connection to the VNX. The VNX examines the user credentials against its user accounts for user authentication and authorization. The VNX then maintains an audit log of the user’s management activities.

The result is that only authenticated users perform authorized management activities on the VNX over a private, trusted connection.

Module: Unisphere Security and Basic Management

15

Auditing is a specialized form of logging whose purpose is to record the security-relevant events that happen on a system and provide sufficient information about who initiated each event and its effect on the system. Unisphere provides audit logging capabilities for both VNX for Block and VNX for File system configurations by capturing the system activities surrounding or leading to an operation, procedure, or event. Audit information on VNX for Block systems is contained within the event log on each SP. The log contains a time-stamped record for each event, with information about the storage system, the affected SP, and the associated host. An audit record is also created every time a user logs in, enters a request through Unisphere, or issues a Secure CLI command. On VNX for File systems, the auditing feature used is native to the Control Station Linux kernel and is enabled by default. The feature is configured to record management user authentications and captures the management activities initiated from the Control Station. Events are logged when specified sensitive file systems and system configurations are modified.

Module: Unisphere Security and Basic Management

16

Another benefit is the management flexibility provided by the VNX schema of management accounts. VNX provides the capability to have local management accounts for File and Block, as well as Global accounts. The local accounts focus on specific management tasks for a specific VNX. For example, on a specified VNX, the File Local accounts are for file management tasks and the Block Local accounts focus on block management tasks. VNX also provides the capability of having Global accounts that can manage both file and block management tasks. The system comes from the factory with a set of default management accounts configured, which are listed in the table. It is also possible to create additional Global, File Local, and Block Local management accounts. All management accounts are associated with management roles. It is a best practice to create additional accounts and use those for VNX management rather than the default management accounts. This is especially important for auditing purposes in environments where multiple people may be managing the VNX.

Module: Unisphere Security and Basic Management

17

VNX role-based management is a key capability for flexible, easy system management. Roles are a combination of VNX management objects and privileges to those objects. Roles define an authority for managing an object and apply to all VNX management operations. Using roles, management tasks can be focused on specific areas of system management such as networking, data protection, or storage. Roles are directly associated with VNX management groups that are associated with VNX management user accounts. The VNX has a number of system-defined roles that cannot be modified or deleted. It also provides the capability of defining custom configured roles. Roles apply to Unisphere GUI and CLI management operations. This course includes a lab exercise that provides the learner hands-on experience creating local user accounts and assigning a role to the user.

Module: Unisphere Security and Basic Management

18

The VNX provides three different management user authentication scopes for flexible management options. The LDAP authentication scope is used when the VNX is configured to bind to an LDAP domain. The VNX performs an LDAP query to the domain to authenticate administrative users. LDAP domain users and groups are mapped to user and group IDs on the VNX. When the “Use LDAP” option is selected during user login, the Global or Local scope setting is disregarded. The Global authentication scope is used when the VNX is configured to be a member of a Storage Domain. All the systems within the domain can be managed using a single sign-on with a global account. If a user selects the “Global” scope during login to a VNX that is not a Storage Domain member, Unisphere will use local authentication for the user. The Local authentication scope is used to manage a specific system only. Logging into a system using a local user account is recommended when there are a large number of systems in the domain and you want to restrict visibility to a single system and/or certain features on a given system. When you start a session, Unisphere prompts you for a username, password, and scope. These credentials are encrypted and sent to the storage management server. The storage management server then attempts to find a match within the user account information. If a match is found, you are identified as an authenticated user. All subsequent requests that the applet sends contain the cached digest in the authentication header.
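The same three scopes apply to Secure CLI logins. As a sketch, the -Scope switch selects the scope (0 = Global, 1 = Local, 2 = LDAP); the address and credentials below are placeholders:

    naviseccli -h 192.168.1.101 -User sysadmin -Password mypassword -Scope 0 getagent     # Global scope
    naviseccli -h 192.168.1.101 -User localadmin -Password mypassword -Scope 1 getagent   # Local scope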

Module: Unisphere Security and Basic Management

19

Another key management capability of VNX is its ability to integrate with LDAP-based domains. Using this capability allows LDAP users to login and perform VNX management tasks using their existing LDAP user credentials.

To achieve this integration, the VNX is configured to bind to the LDAP domain to form an authentication channel with the domain. When an LDAP login is performed, the VNX passes the LDAP user credentials to the User Search Path of the LDAP server over the authentication channel. Role-based management is also configured for the user based on membership in an LDAP group. A management Role is defined for the LDAP group. The VNX automatically creates an identically named VNX group and the role is assigned to the VNX group. A mapping between the LDAP and VNX groups provides the management role to the LDAP user. The Use LDAP option must be selected for the Unisphere login to be authenticated by the LDAP domain. The user will be able to perform management tasks based on the management role configured for the LDAP group of which the user is a member. LDAP users are also able to use File CLI management. The CLI login to the VNX Control Station requires that the user input the username in the <username>@<domain name> format.
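As a sketch, an SSH login to the Control Station using LDAP credentials might look like the following, where jsmith, corp.example.com, and the address are hypothetical:

    ssh -l jsmith@corp.example.com 192.168.1.100    # -l passes the <username>@<domain name> login name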

Module: Unisphere Security and Basic Management

20

This demo/lab covers the configuration steps for binding a VNX to a Windows Active Directory domain, configuring a role for an LDAP user, and logging into Unisphere with LDAP credentials.

To launch the video use the following URL: https://edutube.emc.com/Player.aspx?vno=DBCxwOWRe44ULoQg5YRSRw==&autoplay=true

Module: Unisphere Security and Basic Management

21

This lesson covers the monitoring features provided by Unisphere, how to check alerts and event logs associated with VNX system activities, and how to enable notifications for both File and Block systems.

Module: Unisphere Security and Basic Management

22

Within the Unisphere System monitoring page, there are several areas where the system can be monitored, including:
• Alerts for various system conditions
• SP Event Logs for monitoring block-related activities
• Background Tasks for File
• Event Logs for File
• Notification Logs for File
• Notifications for Block
• Statistics for File
• Statistics for Block
• QoS Manager

Module: Unisphere Security and Basic Management

23

In the “Alerts” section, the user can see whether there are any critical errors, warnings, or errors. To obtain details for an alert, simply double-click on the alert of interest to retrieve its properties. The “Alert Details” will provide further information on the status of the alert and how to resolve it. Alerts may come from the Block side or the back end, or from the File side of the VNX system.

Module: Unisphere Security and Basic Management

24

In the “Background Tasks for File” area, File-related tasks are logged and can be monitored. This page reports the tasks with the following information:

ID - Unique identifier for the task
State - Status of the task: Succeeded, Failed, Running, or Recovering
Originator - User and host that initiated the task
Start Time - Time the administrator initiated the task, in the format month/date/year hours:minutes
Description - Brief task description
Schedule - Frequency of occurrence and type of task
Systems - Name of the remote system involved in the task

The properties of a logged task can be viewed by double-clicking the selection or by clicking the Properties button.

Module: Unisphere Security and Basic Management

25

In the “Event Logs for File” area, File-related events can be monitored. The page can be configured to display log messages from the Control Station or the Data Movers, based on a selected time interval and severity level:

Severity - Severity of the event. The severity is converted from a numerical value (0-6) in the log file to one of four named values.
Time - Date and time of the event
Facility - Component that generated the event
Description - Description of the event

To view details about an event, right-click the record and select Details.

Module: Unisphere Security and Basic Management

26

In the “SP Event Logs” section, logs for each of the SPs can be retrieved for viewing, filtered by type of event, saved to a local file on the client machine, and printed. The displayed report fields are:

Date - Date that the event occurred
Time - Time that the event occurred
Event Code - Numerical code that pertains to the particular event
Description - Brief description of the event
Storage System - Name of the storage system that generated the event. Displays N/A for non-device event types
Device - Name of the device within the storage system on which the event occurred. Displays N/A for non-device event types
SP - SP to which the event belongs: SP A or SP B
Host - Name of the currently running Agent: SP Agent or Host Agent
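The same SP event log can also be retrieved with Secure CLI, as in this sketch (placeholder SP address):

    naviseccli -h 192.168.1.101 getlog    # print the event log entries recorded on that SP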

Module: Unisphere Security and Basic Management

27

“Notifications for File” are actions that the Control Station takes in response to a particular system condition. These features are configurable notifications based on system events and system resource utilization.

The system “Event” notifications are based on pre-defined system events, such as a temperature being too high. As displayed in this table, these notifications are configured based on the Facility affected and the Severity level (Critical, Error, Warning, Info). The user can set the action to be taken when the defined criteria are met, and the destination of the notification: the path of a Control Station log file, an SNMP trap destination, or a list of e-mail addresses separated by commas. The other tabs of Notifications for File are Storage Usage, Storage Projection, and Data Mover Load. These refer to notifications based on resource utilization. The user can also configure the conditions or thresholds that trigger these notifications.

Module: Unisphere Security and Basic Management

28

“Event Notifications for Block Storage Systems” allows the configuration of either Centralized Monitoring or Distributed Monitoring. With Centralized Monitoring, a single Unisphere Agent monitors selected storage systems. With Distributed Monitoring, each Unisphere Agent monitors its own storage systems. When creating a template, the user is able to define a Severity level and Category for general events, or configure notifications for explicit events. The severity levels are Info, Warning, Error, and Critical. The Categories relate to events pertaining to the Basic Array feature, MirrorView, SnapView, SAN Copy, VNX Snapshots, etc. Some of the actions that can be configured for a notification include the following:
• Logging the event in an event log file
• Sending an email message for single or multiple system events to a specific email address
• Generating an SNMP trap
• Calling home to the service provider
• Running a script

Module: Unisphere Security and Basic Management

29

“Statistics for File” provides information about file system utilization and storage and network performance. Graphs are configurable and given in real time. The “Statistics” page displays a live graph of the statistics for components of the VNX. The legend under the graphic explains the chart data. The graph can display a maximum of 14 statistics at any one time. The top line on the page includes two arrows that allow the user to navigate backward and forward in the accumulated data, and text stating the time period covered by the visible graph. To manipulate the graph, the user can right-click the graph and select:
• Export Data: export the data in the graph into a comma-separated values file
• Print: print the graph, rotated or scaled to fit a page as needed
• Time Interval: change the time period displayed by the graph
• Select Stats: add or remove types of statistical data displayed in the graph
• Polling Control: change the polling interval for statistical update queries, and disable or enable statistical update polling
• Polling Interval: the rate at which an object is polled
The default polling interval for updated statistics is five minutes for Data Mover and storage system data. File system data is polled at a fixed interval of 10 minutes.

Module: Unisphere Security and Basic Management

30

Statistics for Block are provided by the Unisphere Analyzer feature. Unisphere Analyzer lets the user monitor the performance of the storage-system components: LUNs, the storage processors (SPs) that own them, and their disk modules. Unisphere Analyzer gathers block storage-system performance statistics and presents them in various types of charts. This information allows the administrator to find and anticipate bottlenecks in the disk storage component utilization. Analyzer can display the performance data in real time or from an archive file containing past performance data. The user can capture the performance data in an archive file at any time and store it on the host where Unisphere was launched. The statistics are displayed in various types of charts, including the Performance Survey chart, Performance Summary, Performance Detail, Performance Overview (for RAID Group LUNs and metaLUNs only), and LUN I/O Disk Detail chart (for LUNs only).

Module: Unisphere Security and Basic Management

31

These video demonstrations provide a brief discussion of configuring Unisphere’s VNX Notifications for Block and Notifications for File. To launch the videos, use the following URLs:

Link to Notifications for Block demo: https://edutube.emc.com/Player.aspx?vno=25uGUJW3sapbkcJ+HWoiQg==&autoplay=true

Link to Notifications for File demo: https://edutube.emc.com/Player.aspx?vno=4NadO6Lvj+IdSHaUXsu12g==&autoplay=true

Module: Unisphere Security and Basic Management

32

This lesson covers VNX management using Unisphere Storage Domains and also examines management using the Unisphere Client and Server software packages.

Module: Unisphere Security and Basic Management

33

Each VNX Unified storage system by default is configured into its own local storage domain. The system’s SPs and its Control Station are members of the domain by default. A VNX system can be managed using a Unisphere session to any member of the storage domain. Management of the domain requires Administrator or Security Administrator role privileges.

Module: Unisphere Security and Basic Management

34

Beyond the default local Unisphere domain, Unisphere lets you create Storage Domains in which multiple VNX systems are members. The storage domain lets you manage and monitor a group of systems using a single sign-on to Unisphere. This capability requires using the Global scope and that the VNX systems are configured with global user accounts. Unisphere lets you create multi-domain environments as well. A multi-domain environment lets you manage and monitor a group of domains (potentially all the systems in the storage enterprise) using the same instance of Unisphere. You can create a multi-domain environment if systems are located remotely and you do not want to include them in the local domain. The multi-domain feature lets you manage and monitor systems in separate domains using one instance of Unisphere. A multi-domain environment consists of one local domain and one or more remote domains. The local domain is the domain you targeted by connecting to a particular system; the domain to which that system belongs is the local domain. A remote domain is a separate domain, with its own master, whose systems can be managed and monitored from the local domain.

The multi-domain feature offers the option of single sign-on which allows you to log in to the entire multi-domain environment by using one user account. In this instance, each domain within the environment must have matching credentials. Alternatively, you can use login on-demand. In a multi-domain environment, you can add or remove systems and manage global users only on a local domain (that is, the domain of the system to which you are pointing Unisphere). To perform these operations on a remote domain, you must open a new instance of Unisphere and type the IP address of a system in that remote domain.

Module: Unisphere Security and Basic Management

35

Another management configuration available for VNX is Unisphere Client and Server. They are separate Unisphere software packages that can be installed on Windows systems and can be used in Storage Domains. Unisphere Client is a complete standalone version of the Unisphere user interface (UI) applet. Unisphere Server is an “off-array” management system running the Unisphere management server. The packages can be installed on different Windows systems, or be installed together on the same Windows system. If only the Unisphere Client is installed on a Windows system, the Unisphere UI is launched locally and pointed to any Unisphere Server system in the environment. You can also optionally install both the Unisphere Client and Server on the same Windows system. The Unisphere Server accepts requests from Unisphere Client and the requests are processed within the Windows system. The Unisphere Server can be configured as a domain member or a domain master for managing multiple VNX systems within the same UI. The Unisphere Client and Server packages provide for faster Unisphere startup times since the Unisphere applet does not have to download from the VNX Control Station or SPs. This can be very advantageous when managing systems in different geographic locations connected via slow WAN links. Another advantage of running Unisphere Server on a Windows system is it lowers management CPU cycles on the VNX SPs for certain management tasks.

Module: Unisphere Security and Basic Management

36

This demo covers the configuration of Unisphere Server as a Domain master in a Storage Domain having multiple VNX systems. It also illustrates using the Unisphere client to run the Unisphere UI.

To launch the video use the following URL: https://edutube.emc.com/Player.aspx?vno=OjxiNeui3SMDH6qB7HrsCA==&autoplay=true

Module: Unisphere Security and Basic Management

37

This module covered the interfaces for managing the VNX and how management is secured. It also detailed system event monitoring and notifications and the use of Storage Domains for managing multiple VNX systems.

Module: Unisphere Security and Basic Management

38

This Lab covers role-based management of the VNX with Unisphere. The VNX Storage Domain will be verified. A Global user and a local group will be created on the VNX. Then a role for the new user will be defined and tested.

Module: Unisphere Security and Basic Management

39

This lab covered role-based VNX management in Unisphere. The VNX Storage Domain was verified and a Global User and Local Group were created on the VNX. A management role was configured for the Global User.

Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant real world use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?

Module: Unisphere Security and Basic Management

40

This module focuses on storage system configuration: management of the SP management ports and verification of the Multicore Cache settings.


Module: Storage System Configuration

1

This lesson covers the configuration of VNX management ports.


Module: Storage System Configuration

2

Unisphere provides graphical depictions of all VNX components as well as other features. Selecting a section button (System in the example) displays a big button page that contains buttons to select subsections. These items are exactly the same as those in the top level of the drop-down menu shown when hovering over the section button. The big button pages also include a task list. This task list is the same as the task list shown on content pages. The type of screen that is displayed depends on whether the system is a Block-only system, a File-only system, or a Unified system. For example, the slide shows a VNX Unified view. Hardware views relate logical objects to physical components. System configuration and state information is tracked in real time, although the LED states are not shown in real time.


Module: Storage System Configuration

3

Selecting the Storage Hardware button brings users to the Hardware page. Components are displayed and can be expanded from the tree structure to show subcomponents. Right-clicking an SP launches the SP Properties window; from this window, select the Network tab. This tab allows users to view and modify the physical network port properties and provides access to the Virtual Port Properties window. Double-click the Virtual Port to configure the IP address for the port.

Network settings can also be configured from the Settings menu by selecting Edit Network Settings – SPx. This launches the same SP Properties window and allows users to access the Virtual Port Properties window.

There are two network troubleshooting commands that can be executed from this window: Ping and Traceroute.

This lesson covers VNX Multicore Cache benefits and settings.

Multicore Cache, also known as SP Cache, is an MCx software component that optimizes host write and read performance by making efficient use of the VNX storage processors’ DRAM. Multicore Cache was designed to scale effectively across multiple CPU cores.

Multicore Cache space is shared for writes and reads. A dirty (used) cache page is copied to disk, and the data remains in memory. As shown in the figure, cache is shared between writes (1) and reads (2), then copied to disk (3). This way the page can be re-hit by the host (4 and 5), improving performance. Data is ultimately expelled from cache (7), freeing the page for other workloads.

Multicore Cache Dynamic Watermarks constantly evaluate the effectiveness of cache for specific workloads. Buffering write I/Os in cache is effective for short bursts, but write buffering is counterproductive for workloads that tend to occupy the entire cache space. The rate of flushing, and the rate of acceptance of host I/O into cache, is managed dynamically to optimize system performance. Multicore Cache Write Throttling dynamically auto-adjusts incoming and outgoing I/O per RAID group. Multicore Cache provides the destination RAID group with the time needed to process an increased flushing load by delaying host I/O acknowledgements, thus slowing the rate of incoming writes. Throttling continues until the rate of incoming data matches the capabilities of the underlying RAID group and the pre-cleaning age value has been adjusted to match the workload. When the RAID group’s rate of page flushing levels with the rate of incoming I/O, and the associated dirty pages stay in cache no longer than the pre-cleaning age value allows, write throttling stops.

Write-through mode simply disables write cache, so that writes pass through cache directly to disk. There is no write acknowledgement to the host until the write has been completed to the spindle.

Since disabling Write Cache causes I/O to be written directly to disk, there will be a significantly negative impact on performance. It is recommended that the default setting of “Enabled” remain in place. If changing this setting is being considered, it is strongly recommended that this default setting only be changed at the direction of EMC Engineering personnel.

This module covered the administration of the VNX SP management ports. VNX Multicore Cache was described, as well as verifying its settings.

This Lab covers the VNX system configuration. Since the lab systems are preconfigured, this lab exercise covers the verification of the system configuration. It verifies the configuration settings for the Storage Processor (SP) cache and the SP networking configuration. The exercise also explores the VNX system hardware and port configuration as well as its iSCSI ports configuration.

This lab covered VNX storage processor configurations – cache settings, networking, and port configuration.

This module focuses on how to integrate hosts with VNX Block storage. We will organize the process into three stages: storage networking, storage provisioning, and readying the storage for the host.

Host access to VNX Block storage requires a host having connectivity to block storage from the VNX system. The graphic illustrates an overview of the configuration operations required to achieve host block storage access to a VNX. These activities span the host, the connectivity, and the VNX.

Although hosts can be directly cabled to the VNX, connectivity is commonly done through storage networking, and is formed from a combination of switches, physical cabling, and logical networking for the specific block protocol. The key benefits of switch-based block storage connectivity are realized in the logical networking. Hosts can share VNX front-end ports; thus the number of connected hosts can be greater than the number of VNX front-end ports. Redundant connectivity can also be created by networking with multiple switches, enhancing storage availability. Block storage logical networking for the Fibre Channel, iSCSI, and FCoE protocols is covered in the Storage Area Networking (SAN) training curriculum available from the EMC training portal and will not be covered in this training.

Storage must be provisioned on the VNX for the host. Provisioning VNX storage consists of grouping physical disk drives into a RAID Group or a Storage Pool. Block storage objects called Logical Unit Numbers (LUNs) are then created from the disk groupings. Connected hosts are registered to the VNX using the Unisphere Host Agent software; host registration can also be done manually without an agent. The VNX LUN masking and mapping feature presents LUNs to the host. The feature uses a logical object called a Storage Group, which is populated with LUNs and the registered host. A Storage Group presents a ‘virtual storage system’ to the host, giving it exclusive access to the LUNs in the Storage Group.

The host must then discover the newly presented block storage within its disk subsystem. Storage discovery and readying for use is done differently per operating system. Generally, discovery is done with a SCSI bus rescan. Readying the storage is done by creating disk partitions and formatting the partitions.

The process of provisioning Block storage to a host can be broken into a few basic sections. For the purposes of this course, we will group the activities into three parts. The first stage of our process focuses on storage networking. The steps in this section set up the host to “see” the storage array via one of the supported storage protocols. The next stage deals with configuring storage that will then be provisioned, or assigned, to the host. When these steps are completed, the storage will be “visible” in the host’s operating system. The final stage uses host-based tools to ready the storage volumes for the operating system. After this stage is completed, users or applications will be able to store data on the storage array from the host, and management options will be available through various utilities, which will be discussed later. Please note that, although all of the steps presented in the module are essential, the actual sequence of steps is very flexible. What is presented in this module is merely one option for the sequence.

The topics covered in this module and the next one are covered in detail within each of the block host connectivity guides shown. General steps and task overviews will be shown on the slides. Please refer to the appropriate block host connectivity guide for details of integrating a specific host via a specific protocol (Fibre Channel, iSCSI, or FCoE) and HBA to the VNX. The guides are available from the EMC online support site via Support Zone login.

This lesson covers the various storage network topologies and the requirements to implement them. We will look at identifying the different network technologies, while taking a closer look at the Fibre Channel and iSCSI implementations. We will delve into the Fibre Channel and iSCSI components and addressing as well as look at the various rules associated with implementing those technologies. Finally we will look at host connectivity requirements for the various storage network topologies.

The first stage of the process is configuring storage networking so that the host and array can “see” each other. The first element here is to choose, or identify, the storage protocol to be used. VNX supports FC, iSCSI, and FCoE. Once the storage protocol is confirmed, the host will need to have an adapter of some kind to communicate via the storage protocol. In Fibre Channel environments, a host will have a Host Bus Adapter, or HBA, installed and configured. For iSCSI, either a standard NIC or a dedicated iSCSI HBA can be used. FCoE uses a Converged Network Adapter (CNA). This course focuses primarily on iSCSI and FC. With a storage networking device ready on the host, connectivity between the host and the array will be required. In FC, this will include setting up zoning on an FC switch. In iSCSI environments, initiator and target relationships will need to be established. After connectivity has been configured, the hosts need to be registered with the VNX. Registration is usually automatic (when a host agent is installed), though in some cases it will be performed manually. In either case, the registrations should be confirmed. Having completed the connectivity between the host and the array, you will then be in a position to configure storage volumes and provision them to the host.

The choice of block storage protocol can depend on various factors such as distance, performance, scalability, and overhead. Each technology has different strengths and weaknesses, which are compared and contrasted here.

iSCSI uses IP networking technology over Ethernet for connecting hosts to storage. These IP networks are very common and in wide use today. They provide unrivaled connectivity to all parts of the world. The technology is relatively inexpensive and the skillset to connect devices is common. IP technologies can connect hosts to storage over larger distances than the channel technology of Fibre Channel. IP can also scale to larger numbers of hosts connecting to the storage. If distance and/or scalability are prime concerns, iSCSI may be the protocol of choice. Selecting iSCSI has the tradeoff of slower performance and more overhead than Fibre Channel.

Fibre Channel uses channel technology for connecting hosts to storage. Channel technologies are specialized and their networks create connectivity between specific devices. The technology is relatively expensive and requires a specialized skillset to connect devices. The channel technology of Fibre Channel performs faster and has lower protocol overhead than iSCSI. If fast performance and/or low overhead are prime concerns, Fibre Channel may be the protocol of choice. Selecting Fibre Channel has the tradeoff of shorter distances and less scalability than iSCSI.

The rules concerning iSCSI and FC host connectivity are shown here.

In order to connect a host to a VNX storage array, you will need to meet the requirements shown here. Keep in mind that most vendors provide management plug-ins in addition to the drivers, which allow users to view and configure HBA/NIC parameters, such as the Emulex OCManager plug-in or the QLogic QConvergeConsole plug-in. The fabric switches will need to be properly configured and zoned, and there will also need to be a properly configured Ethernet network. You can successfully run Unisphere management software from a storage system or a Windows off-array management server. Note: If the HBA/NIC drivers are not installed, consult the current EMC® VNX™ Open Systems Configuration Guide document on support.emc.com for the latest supported configuration guidelines.

This lesson covers storage networking with Fibre Channel, including its characteristics, tasks, and connectivity requirements.

Fibre Channel is a serial data transfer interface that operates over copper wire and/or optical fiber at full-duplex data rates up to 3200 MB/s (16 Gb/s connection). Networking and I/O protocols (such as SCSI commands) are mapped to Fibre Channel constructs, and then encapsulated and transported within Fibre Channel frames. This process allows high-speed transfer of multiple protocols over the same physical interface. Fibre Channel systems are assembled from familiar types of components: adapters, switches and storage devices. Host bus adapters are installed in computers and servers in the same manner as a SCSI Host Bus Adapter or a network interface card (NIC). Fibre Channel switches provide full bandwidth connections for highly scalable systems without a practical limit to the number of connections supported (16 million addresses are possible). Note: The word “fiber” indicates the physical media. The word “fibre” indicates the Fibre Channel protocol and standards.

World Wide Names identify the source and destination ports in the Fibre Channel network. World Wide Node Names (WWNN) identify the host or array, while the World Wide Port Name (WWPN) identifies the actual port. These two 64-bit names are often combined into a 128-bit name.
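
As a practical illustration, on a Linux host the WWNN and WWPN of an installed FC HBA can usually be read from sysfs once the HBA driver is loaded. This is a minimal sketch; it assumes the standard Linux fc_host transport class is present, and the host0 entry is illustrative and will vary by system.

# Read the HBA's node and port names from sysfs (host number varies by system)
cat /sys/class/fc_host/host0/node_name    # prints the 64-bit WWNN as a hex value
cat /sys/class/fc_host/host0/port_name    # prints the 64-bit WWPN as a hex value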

An HBA is an I/O adapter that sits between the host computer's bus and the I/O channel, and manages the transfer of information between host and storage. In order to minimize the impact on host processor performance, the HBA performs many low-level interface functions automatically or with minimal processor involvement. In simple terms, an HBA provides I/O processing and physical connectivity between a server and storage. The storage may be attached using a variety of direct attached or storage networking technologies, including Fibre Channel, iSCSI, or FCoE. HBAs provide critical server CPU off-load, freeing servers to perform application processing. As the only part of a Storage Area Network that resides in a server, HBAs also provide a critical link between the SAN and the operating system and application software. In this role, the HBA enables a range of high-availability and storage management capabilities, including load balancing, fail-over, SAN administration, and storage management.

VNX Fibre Channel ports can be viewed from the Unisphere GUI (as well as by using Navisphere CLI commands) by navigating in Unisphere to the System > Storage Hardware menu. Once there, expand the tree for I/O Modules to view the physical locations and properties of a given port. The example shows SPA expanded to display the I/O modules and ports. To display port properties, highlight the port and select Properties. The WWN can be determined for the port, as well as other parameters such as speed and initiator information. VNX can contain FC, FCoE, and iSCSI ports depending on the I/O modules installed.

A Switched Fabric is one or more Fibre Channel switches connected to multiple devices. The architecture involves a switching device, such as a Fibre Channel switch, interconnecting two or more nodes. Rather than traveling around an entire loop, frames are routed between source and destination by the Fabric.

Under single-initiator zoning, each HBA is configured with its own zone. The members of the zone consist of the HBA and one or more storage ports with the volumes that the HBA will use. In the example, there is an Emulex HBA zoned to two VNX ports.

This zoning practice provides a fast, efficient, and reliable means of controlling the HBA discovery/login process. Without zoning, the HBA will attempt to log in to all ports on the Fabric during discovery and during the HBA’s response to a state change notification. With single-initiator zoning, the time and Fibre Channel bandwidth required to process discovery and the state change notification are minimized. Two very good reasons for single-initiator zoning:

• Reduced reset time for any change made in the state of the Fabric
• Only the nodes within the same zone will be forced to log back into the Fabric after an RSCN (Registered State Change Notification)

When a node’s state has changed in a Fabric (i.e. cable moved to another port), it will have to perform the Fabric Login process again before resuming normal communication with the other nodes with which it is zoned. If there is only one initiator in the zone (HBA), then the amount of disrupted communication is reduced. If you had a zone with two HBAs and one of them had a state change, then BOTH would be forced to log in again, causing disruption to the other HBA that did not have any change in its Fabric state. Performance can be severely impacted by this.
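
As an illustration only, a single-initiator zone like the one described above could be defined along the following lines on a Brocade FOS switch. The alias names, zone name, configuration name, and WWPNs are all hypothetical, and the exact commands differ by switch vendor and firmware version.

# Hypothetical single-initiator zone: one HBA port plus two VNX SP ports
alicreate "ServerA_HBA0", "10:00:00:00:c9:11:22:33"
alicreate "VNX_SPA_0", "50:06:01:60:00:00:00:01"
alicreate "VNX_SPB_0", "50:06:01:68:00:00:00:01"
zonecreate "ServerA_HBA0_VNX", "ServerA_HBA0; VNX_SPA_0; VNX_SPB_0"
cfgcreate "Prod_cfg", "ServerA_HBA0_VNX"    # or cfgadd if the configuration already exists
cfgenable "Prod_cfg"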

This lesson covers storage networking with iSCSI, including its characteristics, tasks, and connectivity requirements.

iSCSI is a native IP-based protocol for establishing and managing connections between IP-based storage devices, hosts, and clients. It provides a means of transporting SCSI packets over TCP/IP. iSCSI works by encapsulating SCSI commands into TCP and transporting them over an IP network. Since iSCSI is IP-based traffic, it can be routed or switched on standard Ethernet equipment. Traditional Ethernet adapters or NICs are designed to transfer file-level data packets among PCs, servers, and storage devices. NICs, however, do not usually transfer block-level data, which has been traditionally handled by a Fibre Channel host bus adapter. Through the use of iSCSI drivers on the host or server, a NIC can transmit packets of block-level data over an IP network. The block-level data is placed into a TCP/IP packet so the NIC can process and send it over the IP network. If required, bridging devices can be used between an IP network and a SAN. Today, there are three block storage over IP approaches: iSCSI, FCIP, and iFCP. There is no Fibre Channel content in iSCSI.

All iSCSI nodes are identified by an iSCSI name. An iSCSI name is neither the IP address nor the DNS name of an IP host. Names enable iSCSI storage resources to be managed regardless of address. An iSCSI node name is also the SCSI device name, which is the principal object used in authentication of targets to initiators and initiators to targets. iSCSI addresses can be one of two types: iSCSI Qualified Name (iQN) or IEEE naming convention, Extended Unique Identifier (EUI). iQN format - iqn.yyyy-mm.com.xyz.aabbccddeeffgghh where:

• iqn - Naming convention identifier
• yyyy-mm - Point in time when the .com domain was registered
• com.xyz - Domain of the node, backwards
• aabbccddeeffgghh - Device identifier (can be a WWN, the system name, or any other vendor-implemented standard)

EUI format - eui.64-bit WWN:

• eui - Naming prefix
• 64-bit WWN - FC WWN of the host

Within iSCSI, a node is defined as a single initiator or target. These definitions map to the traditional SCSI target/initiator model. iSCSI names are assigned to all nodes and are independent of the associated address.
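
As a concrete example of the iQN format, many Linux hosts that use the open-iscsi initiator (one common software initiator, referenced here only as an illustration) store the node name in a small configuration file. The path and the sample value below are typical rather than exact.

cat /etc/iscsi/initiatorname.iscsi
# typically contains a single line of the form:
# InitiatorName=iqn.1994-05.com.redhat:a1b2c3d4e5f6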

Challenge Handshake Authentication Protocol, or CHAP, is an authentication scheme originally used by Point-to-Point Protocol (PPP) servers to validate the identity of remote clients. The connection is based upon the peers sharing a password, or secret. iSCSI capable storage systems support both one-way and mutual CHAP. For one-way CHAP, each target can have its own unique CHAP secret. For mutual CHAP, the initiator itself has a single secret with all targets. CHAP security can be set up either as one-way CHAP or mutual CHAP. You must set up the target (storage array) and the initiator (host) to use the same type of CHAP to establish a successful login. Unisphere is used to configure CHAP on the storage array. To configure CHAP on the host, use the vendor tools for either the iSCSI HBA or NIC installed on each initiator host. For a QLogic iSCSI HBA, use SANsurfer software. For a standard NIC on a Windows host, use Microsoft iSCSI Initiator software. On a Linux host, CHAP is configured by entering the appropriate information in the /etc/iscsi.conf file.

LAN configuration allows Layer 2 (switched) and Layer 3 (routed) networks. Layer 2 networks are recommended over Layer 3 networks. The network should be dedicated solely to the iSCSI configuration. For performance reasons EMC recommends that no traffic apart from iSCSI traffic should be carried over it. If using MDS switches, EMC recommends creating a dedicated VSAN for all iSCSI traffic. CAT5 network cables are supported for distances up to 100 meters. If cabling is to exceed 100 meters, CAT6 network cables would be required. The network must be a well-engineered network with no packet loss or packet duplication. When planning the network, care must be taken in making certain that the utilized throughput will never exceed the available bandwidth. VLAN tagging is also supported. Link Aggregation, also known as NIC teaming, is not supported.

CPU bottlenecks caused by TCP/IP processing have been a driving force in the development of hardware devices specialized to process TCP and iSCSI workloads, offloading these tasks from the host CPU. These iSCSI and/or TCP offload devices are available in 1 Gb/s and 10 Gb/s speeds. As a result, there are multiple choices for the network device in a host. In addition to the traditional NIC, there is the TOE (TCP Offload Engine), which processes TCP tasks, and the iSCSI HBA, which processes both TCP and iSCSI tasks. TOE is sometimes referred to as a Partial Offload, while the iSCSI HBA is sometimes referred to as a Full Offload. While neither offload device is required, these solutions can offer improved application performance when the application performance is CPU bound.

VNX SP Front End connections in an iSCSI environment consist of iSCSI NICs and TOEs. In VNX Unisphere, right-clicking on a selected port displays the Port Properties. This example shows an iSCSI port, Port 0, which represents the physical location of the port in the chassis and matches the label (0 in this example) on the I/O module hardware in the chassis. A-4 in this example means:

• A represents the SP (A or B) on which the port resides
• 4 represents the software-assigned logical ID for this port

The logical ID and the physical location may not always match.

iSCSI basic connectivity verification includes Ping and Trace Route. These are available from the Network Settings menu in Unisphere. Ping provides a basic connectivity check to ensure the host can reach the array and vice versa. This command can be run from the host, Unisphere and the storage system’s SP. Trace Route provides the user with information on how many network hops are required for the packet to reach its final destination. This command can also be run from the host, Unisphere, and the storage system’s SP. The first entry in the Trace Route response should be the gateway defined in the iSCSI port configuration.
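
The same basic checks can be run from the host side with standard operating system tools; a minimal sketch, in which the address is a placeholder for one of the VNX iSCSI port IP addresses:

ping 192.168.1.10          # placeholder target: basic reachability check to the iSCSI port
traceroute 192.168.1.10    # lists the network hops to the iSCSI port (tracert on Windows)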

This lesson covers the activities included in registering hosts with Unisphere.

Registration makes a host known to the storage system and can be performed in a number of ways:

• Automatically, by the Unisphere Agent, when it starts
• Automatically, by the Unisphere Agent, in response to a naviseccli register command
• Manually, through Unisphere
• Manually, through Navisphere CLI

Connectivity to the array depends on the protocol the host is using to connect to the storage system. If the host is fibre attached, fabric logins tell the VNX which ports and HBAs are connected. If the host is iSCSI attached, iSCSI logins tell the VNX which ports and initiators (hardware or software based) are connected.

In Windows, the Unisphere Host Agent (or the manual registration of a host) will inform the storage system of how the host is attaching to the system. It will either inform the array of the hostname and the WWNs if it’s a fibre-attached host, or it will inform the array of the hostname and either the IQN or the EUI if it’s an iSCSI attached host.

Presenting LUNs to a Windows iSCSI host is the same process as a Fibre connected host with the exception of discovering the iSCSI targets and LUNs with the Unisphere Server Utility.

Similar to the Host Agent, the Unisphere Server Utility registers the server’s HBA (host bus adapter) or NICs with the attached VNX. With the server utility you can perform the following functions:

• Register the server with all connected storage systems
• Configure iSCSI connections on this server (Microsoft initiators only)
• Verify Server High Availability

To run the Host Agent, CLI, or server utility, your server must meet the following requirements:

• Run a supported version of the operating system
  Note: If you want to use the CLI on the server to manage storage systems on a remote server, the server must be on a TCP/IP network connected to both the remote server and each SP in the remote server’s storage system. The remote server can be running AIX, HP-UX, Linux, Solaris, or the Windows operating system.
• Have EMC VNX supported HBA hardware and drivers installed
• Be connected to each SP in each storage system either directly or through a switch
• Each SP must have an IP connection
• Have a configured TCP/IP network connection to any remote hosts that you will use to manage the server’s storage systems, including any host whose browser you will use to access Unisphere, any Windows Server host running Storage Management Server software, and any AIX, HP-UX, IRIX, Linux, NetWare, Solaris, or Windows Server host running the CLI

Depending on your application needs, you can install the Host Agent, the server utility, or both on an attached server. If you install both applications, the registration feature of the server utility will be disabled and the Host Agent will be used to register the server’s NICs or HBAs with the storage system. Note: if the server utility is used while the Host Agent is running, a scan of the new devices will fail.

If you have a Microsoft iSCSI initiator, you must install the Microsoft iSCSI Software Initiator because the Unisphere Server Utility uses it to configure iSCSI connections. Note: In FC configurations, do not install the server utility on a VMware Virtual Machine. You can install the utility on a VMware ESX Server.

Do not disable the Registration Service option (it is enabled by default). The Registration Service option automatically registers the server’s NICs or HBAs with the storage system after the installation and updates server information to the storage system whenever the server configuration changes (for example, when you mount new volumes or create new partitions). If you have the Host Agent installed and you are installing the server utility, the server utility’s Registration Service feature will not be installed.

You must reboot the server when the installation dialog prompts you to reboot. If the server is connected to the storage system with NICs and you do not reboot before you run the Microsoft iSCSI Software Initiator or server utility, the NIC initiators will not log in to the storage system.

Also, in Linux, the Unisphere Host Agent (or the manual registration of a host) will inform the storage system of how the host is attaching to the system. It will either inform the array of the hostname and the WWNs if it is a fibre attached host, or it will inform the array of the hostname and either the IQN or the EUI if it is an iSCSI attached host. EMC recommends that you download and install the most recent version of the Unisphere Host Agent software from the applicable support by product page on the EMC Online Support website.

1. On the Linux server, log in to the root account.
2. If your server is behind a firewall, open TCP/IP port 6389. This port is used by the Host Agent. If this port is not opened, the Host Agent will not function properly.
3. Download the software:
   a) From the EMC Online Support website, select the VNX Support by Product page and locate the Software Downloads.
   b) Select the Unisphere Host Agent, and then select the option to save the tar file to your server.
4. Make sure you load the correct version of the package:
   a) 32-bit server – rpm -ivh UnisphereHostAgent-Linux-32-noarch-en_US-version-build.noarch.rpm **
   b) 64-bit server – rpm -ivh UnisphereHostAgent-Linux-64-x86-en_US-version-build.x86_64.rpm **
   ** Where version and build are the version number and build number of the software.
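
After the rpm command completes, a standard package query can confirm that the Host Agent was installed; the package name pattern below is inferred from the filenames above and may differ slightly by release.

rpm -qa | grep -i unispherehostagent    # lists the installed Host Agent package, if present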

In order to run CLI commands on Linux, the Host Agent configuration file must be modified to include entries that define the users (in lower case only) who will issue the CLI commands as privileged users.

1. For a local user, add: user name (for example, user root)
2. For a remote user, add: user name@hostname (for example, user system@IPaddress)
3. Save the file and restart the agent: /etc/init.d/hostagent restart
4. Verify that the Host Agent configuration file includes a privileged user: more /etc/Unisphere/agent.config
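
Putting those steps together, the relevant part of the file might look like the following sketch; the user names and address are illustrative, while the file path, entry syntax, and commands are those listed above.

# Example privileged user entries in /etc/Unisphere/agent.config (values illustrative):
#   user root
#   user system@192.168.1.50
# Apply the change and confirm the entries:
/etc/init.d/hostagent restart
more /etc/Unisphere/agent.config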

In Linux, the hostagent command can be used to start, stop, restart and provide status on the Host Agent. The command executes from the /etc/init.d directory once the Host Agent is installed.

The example displays a sequence to verify the agent status, stop and start the agent, and verify the Host Agent is running by looking at the process status. The commands used are:

• hostagent status
• hostagent stop
• ps -ef | grep hostagent
• hostagent start
• ps -ef | grep hostagent

You can verify that your Host Agent is functioning using Unisphere. Navigate to Hosts > Hosts. Once there, click the host you want to verify and then click Properties. The example on the left displays the Host Properties window when the Update tab has been selected to view the LUN status; however, the Host Agent is either not installed or the agent is not started. On the right is the proper display when a Host Agent is started. The window displays the agent information and the privileged users list from the agent.config file. Selecting Update will succeed.

Naviseccli can also be used on Linux systems to verify that a Host Agent is functioning properly. As shown in the example, the naviseccli port and getagent commands can be used to verify HBA connections to the VNX and retrieve the Host Agent information.
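
A sketch of those checks from the command line follows; the SP address and credentials are placeholders, and the global options shown (-h, -user, -password, -scope) follow standard Navisphere Secure CLI usage (a security file can be used instead of inline credentials).

naviseccli -h 10.127.57.10 -user sysadmin -password <password> -scope 0 port -list   # port and HBA login information
naviseccli -h 10.127.57.10 -user sysadmin -password <password> -scope 0 getagent     # agent information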

The Unisphere Server Utility can be used on a Linux host to discover and view connected storage information. The utility comes as an RPM package and, once installed, can be executed from the /opt/Unisphere directory by issuing a ./serverutilcli command as shown in the example. Selecting “1” from the menu options will perform a scan and report information about connected storage systems. Note that the Host Agent must be stopped for the utility to work.

ESXi hosts register automatically with the VNX even though they do not run a Host Agent. Because there is no Host Agent, the host shows up as ‘Manually Registered’, even though that is not strictly accurate.

If you need to manually register an ESXi host with the VNX, navigate to Hosts > Initiators and click the create button. The Create Initiators Record window allows manual registration of a host which is logged in to the fabric, but does not have a Unisphere Host Agent capable of communicating with the VNX. To add a new host entry, the user must select the New Host radio button, and enter a hostname, IP address, and other information in the New Initiator Information boxes. Once complete, the host is regarded as manually registered; management of host access to LUNs may now take place in the normal manner.

The maximum number of connections between servers and a storage system is limited by the number of initiator records supported per storage-system SP and is model dependent. An initiator is an HBA or CNA port in a server that can access a storage system. Some HBAs or CNAs have multiple ports. Each HBA or CNA port that is zoned to an SP port is one path to that SP and the storage system containing that SP. Each path consumes one initiator record. Depending on the type of storage system and the connections between its SPs and the switches, an HBA or CNA port can be zoned through different switch ports to the same SP port or to different SP ports, resulting in multiple paths between the HBA or CNA port and an SP and/or the storage system. Note that the failover software environment running on the server may limit the number of paths supported from the server to a single storage system SP and from a server to the storage system. Access from a server to an SP in a storage system can be:

• Single path: A single physical path (port/HBA) between the host system and the array

• Multipath: More than one physical path between the host system and the array via multiple HBAs, HBA ports and switches

• Alternate path: Provides an alternate path to the storage array in the event of a primary path failure.

On a properly configured host with the correct drivers loaded, users can verify host to target connectivity by examining the Hosts > Initiators window. ESXi hosts can have either Fibre Channel or iSCSI connectivity. This example shows an ESXi Fibre Channel connection.

The initiator is attached to host esx-57-181 and its IP address is currently 10.127.57.181. Users can validate that the initiator is registered and logged in to the array. The green Status icon indicates the host is currently connected to a Storage Group. The Host Initiators window can display up to 1000 hosts at one time. Click More to display the next 1000 hosts or click Show All to display all the remaining hosts/initiators. Each host initiator record includes the following information:

• Status: Shows whether the host is connected to a Storage Group
• Initiator Name: The iSCSI IQN, or the FC WWN:WWPN
• SP Port: The port to which the initiator is connected on the array
• Host Name: The hostname of the system connecting to the array
• Host IP Address: IP address of the host connecting to the array
• Storage Group: Storage Group with which the host is associated
• Registered: Whether or not the initiator has registered with the array
• Logged In: Shows whether the initiator has logged in
• Failover Mode: Mode of failover to which the port has been set
• Type: Type of initiator
• Protocol: Which protocol the initiator is using
• Attributes: The attributes of the initiator

To view a Fibre Channel initiator record, highlight the entry and select Properties. Initiator values are detailed below:

• Hostname: The name of the host in which the initiator HBA is located
• IP Address: The IP address of the host in which the initiator HBA is located
• Initiator Type: Used for specific host OS configurations (Select CLARiiON/VNX unless instructed otherwise.)
• HBA Type: Host or Array
• Array CommPath: Indicates the status of the communication path for this initiator (Enabled or Disabled)
• Failover Mode: Indicates the failover mode for the selected initiator (0, 1, 2, 3, or 4)
• Unit Serial Number: Reports the serial number on a per LUN, or per storage system, basis to the host operating system
• Storage Group: All Storage Groups to which this initiator has access
• SP-Port IQN/WWN: The iSCSI qualified name or World Wide Name of the selected SP port
• SP Port Physical Location: The location of the SP port within the enclosure
• SP Port: The logical target ID (not physical ID) of the SP port to which the initiator connects

Note: Different host failover software requires different settings for Failover Mode and ArrayCommPath. Be sure to refer to the Knowledgebase Article 31521, available in Knowledgebase on the EMC Online Support website, for correct failover values.

iSCSI initiators are identified in Unisphere by the iqn.yyyy-mm.naming_authority:unique_name naming convention. VNX ports must be configured with an IP address, which is done by opening an iSCSI properties page from the Settings > Network > Settings for Block menu. Similar to Fibre Channel, iSCSI initiators should show a value of Registered and Logged In along with an SP port, hostname and IP address, and Storage Group. The example displays an iSCSI connection on SP port B-6v0 for host esxi05a with an IP address of 192.168.1.205. This initiator is logged in and registered, and is currently not connected to a Storage Group, as indicated by the yellow triangle with the exclamation point.

The initiator information window for iSCSI connections provides the same information as the Fibre Channel window. Here we see the information for host esxi05a. Note that the SP port contains a different identifier, B-6v0. This is because the IP address is assigned to a virtual port on physical port Slot B2 Port 2.

After confirming that host initiators are successfully registered, these hosts are available to be provisioned storage from Unisphere.

This lesson covers the storage architecture and configuration options for Pools, RAID Groups, and LUNs. We will also discuss LUN masking.

The second stage of our process is intended to create storage on the array and provision it so that the host can discover the volume with its operating system or volume management software.

The first element here is to choose the storage architecture to be used. This includes determining which disk, Storage Pool (Pool or RAID Group), and LUN type to configure, as well as choosing between Classic LUNs and storage Pools. Once the storage architecture is determined, Unisphere can be used to create the storage objects that comprise the storage configuration. When this stage is complete, one or more LUNs will be available to provision to hosts. With the LUNs ready, the next step is to provision the storage by putting both the LUN and the host into the same Storage Group. This effectively “connects” the host’s initiator to the LUN, and the host is ready to discover the volume locally. After storage has been provisioned to the host, the host needs to discover the LUN. This can be done with native operating system tools, or with volume management software.

Having completed the provisioning of the storage to the host, you will then be in a position to use host-based utilities to configure and structure the LUN so that it is usable for write and read I/O from the operating system.

Before provisioning storage to hosts, it is important to understand the options in the underlying storage architecture. We will now look at the various choices regarding storage media, Pool and RAID configurations, and LUN types.

Block storage is provisioned from the VNX array as LUNs. Even in VNX File implementations, the storage is allocated to Data Movers as LUNs. So, LUNs are the storage objects seen by the connected hosts. LUNs, in turn, are allocated from either Pools or RAID Groups, and can be Thin provisioned or Thick provisioned. The Pools and RAID Groups are the storage objects that contain the actual storage media, which can be solid state or spinning media.

Some general guidelines for selecting drive types should be followed to optimize and enable good performance on a VNX storage system. The disk technologies supported are Flash Drives, Serial Attached SCSI (SAS) drives, and Near-line Serial Attached SCSI (NL-SAS) drives. Flash Drives are EMC’s implementation of solid state disk (SSD) technology using NAND single layer cell (SLC) memory and dual 4 Gb/s drive interfaces. Flash drives offer increased performance for applications that are limited by disk subsystem latencies. Its technology significantly reduces response times to service a random block because there is no seek time—there is no disk head to move. Serial Attached SCSI drives offer 10,000 or 15,000 RPM operation speeds in two different form factors: 3.5 inch and 2.5 inch. The 2.5-inch drive technology provides significant density and power improvements over the 3.5-inch technology. NL-SAS drives are enterprise SATA drives with a SAS interface. NL-SAS drives offer larger capacities and are offered only in the 7200 RPM speed and in a 3.5-inch form factor.

Matching the drive type to the expected workload is primary in achieving expected results. When creating storage pools, drives in a storage pool are divided into three tiers based on performance characteristics: Extreme Performance Tier (Flash), Performance Tier (SAS) and Capacity Tier (NL-SAS). Basic rules of thumb to determine the required number of drives used to support the workload can be found in the EMC VNX2 Unified Best Practices for Performance guide found on the EMC Online Support website.

Before provisioning LUNs, it must be decided whether the LUNs will be built from a Pool or from a RAID group. The left side of this slide shows an example of a Pool. A Pool can be made up of different tiers of storage. Each tier uses different types of disks and can have a different RAID type. It is strongly recommended that all disks in a tier be in the same configuration (in this example, all disks in the Performance Tier are in a RAID 5 (8+1) configuration). Pools may include Solid State Disk (SSD) drives, also called Flash drives, for the Extreme Performance tier, SAS drives for the Performance tier, and Near-Line SAS drives for the Capacity tier. Pools support both thick and thin LUNs, as well as features such as Fully Automated Storage Tiering (FAST), which enables the hottest data to be stored on the highest performing drives without administrator intervention. Pools give the administrator maximum flexibility and are easiest to manage; therefore, they are recommended. The right side of the slide shows RAID group examples. Each RAID group has an automatically assigned name beginning with RAID Group 0 (the RAID Group ID can be defined by the user, thus affecting the RAID group name; for example, if a RAID group ID of 2 is chosen, then the RAID Group Name would be set to RAID Group 2). A RAID group is limited to a single drive type (Flash, SAS, or NL-SAS) and a single type of RAID configuration. The needs of the host being attached to the storage will largely determine the drive technology and RAID configuration used.

Pools are dedicated for use by pool LUNs (thick or thin), and can contain a few or hundreds of disks. Best practice is to create the pool with the maximum number of drives that can initially be placed in the pool at creation time, based on the model of the array. Because a large number of disks can be configured, workloads running on pool LUNs will be spread across many more resources than RAID groups, requiring less planning and management. Use homogeneous pools for predictable applications with similar and expected performance requirements. Use heterogeneous pools to take advantage of the VNX FAST VP feature, which facilitates the automatic movement of data to the appropriate tier. The RAID configuration for drives within a pool is performed at the tier level. Within each tier, users can select from five recommended RAID configurations using three RAID types that provide an optimal balance of protection, capacity, and performance to the pool. Mixing RAID types within a pool is supported and allows for using best practice RAID types for the tiered drive types in a pool. Keep in mind that once the tier is created, the RAID configuration for that tier in that pool cannot be changed. Multiple pools can be created to accommodate separate workloads based on different I/O profiles, enabling an administrator to dedicate resources to various hosts based on different performance goals.

Some general guidelines should be followed to optimize and enable good performance on a VNX storage system. For best performance from the least number of drives, ensure the correct RAID level is selected to accommodate the expected workload. RAID 1/0 is appropriate for heavy transactional workloads with a high rate of random writes (greater than 25%). RAID 5 is appropriate for medium-high performance, general workloads, and sequential I/O. RAID 6 is appropriate for NL-SAS read-based workloads and archives; it provides the additional RAID protection needed to cover the longer rebuild times of large drives.

RAID groups are limited to a single drive type and a maximum of 16 drives. For parity RAID levels, the higher the drive count, the higher the capacity utilization, but also the higher the risk to availability. With RAID groups, administrators need to create storage carefully since there is a tendency to over provision and underutilize the resources. When creating RAID groups, select drives from the same bus if possible. There is little or no boost in performance when creating RAID groups across DAEs (vertically). There are, of course, exceptions, typically in the case where FAST Cache drives are used (see the EMC VNX2 Unified Best Practices for Performance guide for details). RAID 5 4+1 RAID groups have an advanced setting to select a larger element size of 1024 blocks. This setting is used to take advantage of workloads consisting of predominantly large-block random read I/O profiles, such as data warehousing.

The capacity of a thick LUN, like the capacity of a RAID group LUN, is distributed equally across all the disks in the Pool on which the LUN was created. This behavior is exactly the same as when data is added from hosts and when additional disks are added to the Pool. When this happens, data is distributed equally across all the disks in the Pool. The amount of physical space allocated to a thick LUN is the same as the user capacity that is seen by the server’s operating system and is allocated entirely at the time of creation. A thick LUN uses slightly more capacity than the amount of user data written to it due to the metadata required to reference the data.

The primary difference between a thin LUN and a thick LUN is that thin LUNs present more storage to a host than is physically allocated. Thin LUNs incrementally add to their in-use capacity and compete with other LUNs in the pool for the pool’s available storage. Thin LUNs can run out of disk space if the underlying Pool to which they belong runs out of physical space. As with thick LUNs, thin LUNs use slightly more capacity than the amount of user data due to the metadata required to reference the data.

Classic LUNs are created on RAID groups and, like thick LUNs, the entire space consumed by the LUN is allocated at the time of creation. Unlike Pool LUNs, a Classic LUN’s LBAs (Logical Block Addresses) are physically contiguous. This gives the Classic LUN predictable performance and data layout. Classic LUNs can be seen and accessed through either SP equally. With Classic LUNs, MCx supports Active-Active host access (discussed in another module), providing simultaneous SP access to the LUN. Thus LUN trespass is not required for high availability. If a path or SP should fail, there is no delay in I/O to the LUN. This dual SP access also results in up to a 2X boost in performance.

Access Logix is a factory installed feature on each SP that allows hosts to access data on the array and provides the ability to create storage groups on shared storage systems. A storage group is a collection of one or more LUNs or metaLUNs to which you connect one or more servers. A server can access only the LUNs in the storage group to which it is connected. It cannot access LUNs assigned to other servers. In other words, the server sees the storage group to which it is connected as the entire storage system. Access Logix runs within the VNX Operating Environment for Block. When you power up the storage system, each SP boots and enables the Access Logix capability within the VNX Operating Environment (VNX OE). Access Logix cannot be disabled.

Access Logix allows multiple hosts to connect to the VNX storage system while maintaining exclusive access to storage resources for each connected host. In effect, it presents a ‘virtual storage system’ to each host. The host sees the equivalent of a storage system dedicated to it alone, with only its own LUNs visible to it. It does this by “masking” certain LUNs from hosts that are not authorized to see them, and presents those LUNs only to the servers that are authorized to see them. Another task that Access Logix performs is the mapping of VNX LUNs, often called Array Logical Units (ALUs), to host LUNs or Host Logical Units (HLUs). It determines which physical addresses, in this case the device numbers, each attached host will use for its LUNs. Access to LUNs is controlled by information stored in the Access Logix database, which is resident in a reserved area of VNX disk - the PSM (Persistent Storage Manager) LUN. When host agents in the VNX environment start up, typically shortly after host boot time, they send initiator information to all storage systems they are connected to and this information gets stored in the Access Logix Database which is managed by the Access Logix Software.

This slide shows a conceptual diagram of a storage system attached to two hosts. Each host has a storage group associated with it – storage group A for Server A, and storage group B for Server B. In this example, the LUNs used on the storage system are sequential, from 0 through 7; however, they do not need to be. Each LUN on the storage system (ALU, or Array Logical Unit) has been mapped to a LUN number (sometimes called the LUN alias) as seen by the host (HLU, or Host Logical Unit). It is important to note that each host sees LUN 0, LUN 1, etc., and that there is no conflict due to multiple instances of the LUN number being used. The mappings are stored in an Access Control List, which is part of the Access Logix database. Each server sees the LUNs presented to it as though they are the only LUNs on the ‘virtual storage system’ represented by the storage group.
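
The same masking and mapping can also be driven from the Navisphere Secure CLI with the storagegroup command; the sketch below mirrors the Server A example, with the SP address, group name, host name, and LUN numbers all illustrative.

# Illustrative only: create a storage group, map ALU 4 to HLU 0, and connect the registered host
naviseccli -h 10.127.57.10 storagegroup -create -gname ServerA_SG
naviseccli -h 10.127.57.10 storagegroup -addhlu -gname ServerA_SG -hlu 0 -alu 4
naviseccli -h 10.127.57.10 storagegroup -connecthost -host ServerA -gname ServerA_SG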


There are limits to the number of connections inside any VNX environment. Some of those limits are directly related to Access Logix while others are hardware-related. The hardware limits generally affect Access Logix and are covered here. No distinction is made between software and hardware limits. First, note that any host may be connected to only 1 storage group on any storage system. This does not imply that only 1 host may be connected to a storage group; where clustering is involved, 2 or more hosts may share the same storage group. No host may be connected to more than 4 storage groups. This means that any host may not use LUNs from more than 4 storage systems. There may be more storage systems in the environment, and the host may even be zoned to make them visible at the Fibre Channel level, but connection to storage groups should not be allowed for those storage systems. There are also limits to the number of hosts that may attach to a storage system, and those limits depend on the storage system type. Always consult the latest EMC VNX Open Systems Configuration Guide for the updated limits. Storage groups are resident on a single storage system and may not span storage systems. The number of LUNs contained in a storage group is also dependent on the VNX model.

EMC recommends that any host connected to a VNX storage system have the host agent running. The advantage to the user is that administration is easier – hosts are identified by hostname and IP address rather than by WWN, and the host addressing of the LUN, e.g. c0t1d2, or H:, is available to Unisphere. If all users were allowed to make changes to the Access Logix configuration, security and privacy issues would be a concern. With Unisphere, users must be authenticated and have the correct privileges before any storage system configuration values may be changed. With legacy systems running classic Navisphere CLI, the user must have an entry in the SP privileged user list to be allowed to make configuration changes. This entry specifies both the username and the hostname, which may be used for storage system configuration. If the Navisphere Secure CLI is used, then the user must either have a Security File created, or must specify a username:password:scope combination on the command line.
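For reference, the same authentication can be handled from the Navisphere Secure CLI itself. As a brief sketch (the SP address 10.127.50.10 and the sysadmin credentials below are placeholder values, not taken from this course), a security file can be created once per user so that later commands omit credentials, or the credentials can be passed on each command:

naviseccli -AddUserSecurity -scope 0 -user sysadmin -password sysadmin
naviseccli -h 10.127.50.10 getagent
naviseccli -h 10.127.50.10 -User sysadmin -Password sysadmin -Scope 0 getagent

The first command writes the security file for the logged-in OS user; the second then runs without credentials, while the third shows the username/password/scope form used when no security file exists.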


When using Fibre Channel, access to the LUNs is controlled by an Access Control List (ACL ) which contains the 128-bit Globally Unique ID (UID) of the LUN, and the 128-bit Unique IDs of the HBAs in the host. The HBA UID consists of the 64-bit World-Wide Node Name (WWNN) followed by the 64-bit World-Wide Port Name (WWPN). The LUN UID is assigned to the LUN when it is bound and includes time-related information. If the LUN is unbound and an identical LUN bound again, they will have different UIDs. Each request for LUN access references the ACL in order to determine whether or not a host should be allowed access. If this meant that each request required access to the disk-based Access Logix database, the lookups would slow the storage system significantly; accordingly, the database is cached in SP memory (not in read or write cache) and operations are fast. Because of the disk-based nature of the database, it is persistent and will survive power and SP failures. If an SP fails and is replaced, the new SP assumes the WWPNs of the failed SP and no changes need be made to the database. If a host HBA fails and is replaced, and if the replacement has a different WWN (which will be the case unless it can be changed by means of software), then that host’s entry in the database will be incorrect. The information for the old HBA needs to be removed from the database, and the information for the new HBA needs to be entered. These processes are the de-registration and registration processes respectively.


Regardless of the type of access (FC or iSCSI), the LUN UID is used and has the same characteristics as discussed in the previous slide.


This lesson covers the procedures to create Pools, Pool LUNs, RAID Groups, and RAID Group LUNs.


Once you have determined the underlying architecture of your VNX storage, you can now create LUNs that can be provisioned to hosts.


The slide shows the Storage Configuration page from Unisphere. The page offers several options for configuring the storage on the system. The primary method of configuring storage is to use the Storage Pools option from the main Unisphere screen. Provisioning wizards are also available from the right-side task pane. Whichever method is chosen, disks in the system are grouped together into Pools or RAID Groups, and LUNs are created from their disk space. Storage configured from Pools uses Advanced Data Services such as FAST, Thin and Thick LUNs, and Compression. The default storage configuration option is Pools. LUNs created from Pools or RAID Groups are assigned as block devices to either VNX File or other block hosts.


When creating Pools, each tier provides a RAID configuration drop-down and a number-of-disks drop-down. The example shows a storage array with three available drive types (Extreme Performance, Performance, and Capacity).

VNX systems allow administrators to mix different drive types within the same pool. The RAID protection for each tier is user selectable, with RAID 5 (default), RAID 6, and RAID 1/0 available. The Scheduled Auto-Tiering box is visible only when the FAST enabler is installed and the Pool radio button is selected. Select Scheduled Auto-Tiering to include this storage pool in the auto-tiering schedule. For the level of performance selection, multiple combo boxes will be shown (if there are different disk types available), one for each disk type in the array, such as Flash, SAS, or NL-SAS. If a certain type of disk is not present in the array, the combo box for that type will not be displayed. The Advanced tab can be selected to configure Thresholds and FAST Cache. If the FAST Cache enabler is present when the Pool is created, the Enabled box is selected by default and will apply to all LUNs in the pool. The Snapshots section will be available if the Snapshot enabler is loaded. This option determines whether the system should monitor space used in the pool and automatically delete snapshots if required. If enabled, the specified thresholds indicate when to start deleting snapshots and when to stop, assuming that the lower free space value can be achieved by using automatic deletion. If the lower value cannot be reached, the system pauses the automatic delete operation.
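Pools can also be created from the command line. The following is a hedged sketch only; the SP address, pool name, and disk locations (bus_enclosure_disk) are placeholder assumptions, and the exact options should be verified against the VNX CLI reference for your release:

naviseccli -h 10.127.50.10 storagepool -create -disks 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4 -rtype r_5 -name "Block Pool"
naviseccli -h 10.127.50.10 storagepool -list -name "Block Pool"

The first command builds a RAID 5 pool from five named disks; the second lists the new pool so its state and capacities can be confirmed.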


The Storage Pool Properties window is launched from the Storage > Storage Configuration > Storage Pools page by selecting the pool and clicking the Properties button. The Storage Pool Properties window has four tabs with information about the selected pool: General, Disks, Advanced, and Tiering. The General tab includes information about the name and status of the pool and its physical and virtual capacities. The Disks tab displays the physical disks that make up the storage pool. The pool can also be expanded by using the Expand button on the page. From the Advanced tab it is possible to set the Percent Full Threshold value, enable FAST Cache, and configure the automatic deletion of old snapshots. The Tiering tab displays capacity details for all disk tiers within the pool, reports the status of any data relocation operations, and is visible only for VNX arrays with the FAST enabler installed. Detailed information is available for each tabbed page by accessing its Help button.


Creating Pool LUNs can be done by navigating to Storage > LUNs from the Unisphere navigation bar or, optionally, the LUN can be created by right-clicking on the storage pool in which you want to create the LUN. The General tab allows the user to create either a Pool LUN (default) or a RAID Group LUN using the appropriate Storage Pool Type radio button. Thick and/or Thin LUNs can be created within the same pool; by default the Thin option is checked. The user must uncheck the box to create a Thick (fully provisioned) LUN. By default the LUN ID is used for its name. A Name radio button option is provided for customized LUN naming. Note: a LUN name can be changed after its creation from its Properties window. Illustrated is the creation of a single 1 GB Thin pool LUN created from the Block Pool with a LUN ID of 10 and a Mixed RAID Type. The Advanced tab is shown with the default values for a Thin or Thick pool LUN with the FAST enabler installed. Pool LUNs, by default, are automatically assigned (Auto radio button) to a specific SP unless the Default Owner is changed. FAST settings are shown if the FAST enabler is installed. Tiering Policies can be selected from the drop-down list and are as follows:

• Start High then Auto-Tier (recommended) — First sets the preferred tier for data relocation to the highest performing disk drives with available space, then relocates the LUN’s data based on the LUN’s performance statistics and the auto-tiering algorithm.
• Auto-Tier — Sets the initial data placement to Optimized Pool and then relocates the LUN’s data based on the LUN’s performance statistics and the auto-tiering algorithm.
• Highest Available Tier — Sets the preferred tier for initial data placement and data relocation to the highest performing disk drives with available space.
• Lowest Available Tier — Sets the preferred tier for initial data placement and data relocation to the most cost effective disk drives with available space.
• Initial Data Placement — Displays the tier setting that corresponds to the selection in the Tiering Policy drop-down list box.
• Data Movement — Displays whether or not data is dynamically moved between storage tiers in the pool.
• Snapshot Auto-Delete Policy — Enables or disables automatic deletion of snapshots on this LUN. The default is enabled.
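The equivalent CLI operation is sketched below for reference; the SP address and names are placeholders, and the option spelling should be checked against the current naviseccli documentation:

naviseccli -h 10.127.50.10 lun -create -type Thin -capacity 1 -sq gb -poolName "Block Pool" -l 10 -name "LUN_10"
naviseccli -h 10.127.50.10 lun -list -l 10

The first command creates the 1 GB Thin LUN with ID 10 shown in the example (omitting -type Thin, or specifying nonThin, would create a fully provisioned Thick LUN); the second displays its properties.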


From the Storage > Storage Configuration > Storage Pools page, select the LUN from the Details section and either click the properties button or right-click over the LUN and select properties. Optionally the LUN properties window can be launched by selecting the Storage > LUNs page and hitting the Properties button with the LUN selected. The Pool LUN Properties window is displayed with all the information about the selected LUN. The Pool LUN Properties window is comprised of several tabs: General, Tiering, Statistics, Hosts, Folders, Compression, Snapshots, and Deduplication. The General tab displays the current configuration of the selected LUN, its ID information and current state, the pool name and capacity information of the LUN, and its ownership information. The Tiering tab displays the FAST VP Tiering Policy settings and allows it to be changed for the LUN. The Statistics tab displays the LUN statistics if statistics logging is enabled for the SP that owns the LUN. The Hosts tab displays information about the host or hosts connected to the LUN; the host name, IP address, OS, and the logical and physical device naming of the LUN. For virtualized environments, the virtual machine components that use the LUN are also listed.

The Folders tab displays a list of folders to which the LUN belongs. The Compression tab lets the user turn the feature on or off, pause or resume the compression and change the compression rate. The Snapshots tab displays information for created VNX snapshots and allows the user to create Snapshots and Snapshot Mount Points. The Deduplication tab allows the feature to be turned on or off and displays deduplication details. Detailed information is available for each tabbed page by accessing its Help button.


To configure a RAID Group storage pool, select Storage > Storage Configuration > Storage Pools from the Unisphere navigation bar. By default the Pools tab is highlighted, so users must select the RAID Groups tab and then Create to launch the Create Storage Pool window. Optionally, the user can click the Create button from the Pools tab and select the RAID Group option in the Create Storage Pool window. Users configure the RAID Group parameters for Storage Pool ID, RAID Type, and the Number of Disks using the drop-down menus available from the General tab. By default, Unisphere selects the disks that will be used in the group automatically, as shown by the Automatic radio button. However, users also have the option to select the disks manually if needed. Note the checkbox for Use Power Saving Eligible Disks, which is not present when creating a Pool. The RAID Group Advanced tab contains two configurable parameters. Users can select the Allow Power Savings parameter for the RAID Group if the group contains eligible disks. It is also possible to set the stripe element size of the LUN to either Default (128) or High Bandwidth Reads (1024). Some read-intensive applications may benefit from a larger stripe element size. The high bandwidth stripe element size is available for RAID 5 RAID Groups that contain 5 disks (4+1). The High Bandwidth Reads option is not supported on Flash drives.
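As a rough CLI sketch (the SP address, RAID Group ID, and disk locations are placeholders), a RAID Group can also be created with the classic createrg command:

naviseccli -h 10.127.50.10 createrg 10 1_0_5 1_0_6 1_0_7 1_0_8 1_0_9
naviseccli -h 10.127.50.10 getrg 10

This builds RAID Group 10 from five disks; getrg then reports its state and free capacity. The RAID protection level itself is chosen later, when the first LUN is bound to the group.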


The RAID Group Properties window is launched from the Storage > Storage Configuration > Storage Pools > RAID Groups page by selecting the RAID Group and clicking the Properties button. The window has three tabs with information about the selected RAID Group: General, Disks, and Partitions. The General tab displays the RAID Group information and various capacity information. The Disks tab shows the physical disks which make up the RAID Group, as well as its current state. The Partitions tab displays a graphic representation of bound and unbound contiguous space associated with the RAID Group. Detailed information is available for each tabbed page by accessing its Help button.


Creating RAID Group LUNs can be done by navigating to Storage > LUNs from the Unisphere navigation bar and clicking the Create button. Optionally, the LUN can be created by right-clicking on the storage pool in which you want to create the LUN and selecting Create LUN. The Create LUN window General tab is launched with the RAID Group radio button selected. The window lets users create one or more LUNs of a specified size within a selected RAID Group. The Capacity section displays the available and consumed space of the selected RAID Group for the new LUN. The user can then use this information to define the size and number of LUNs to create. Optionally, the user can define a name (prefix) and ID to associate with the LUN. Some or all LUN properties display N/A if the storage system is unsupported or if the RAID type of the LUN is RAID 1, Disk, or Hot Spare. If the LUNs you are creating reside on a storage system connected to a VMware ESX server, and these LUNs will be used with layered applications such as SnapView, configure the LUNs as raw device mapping volumes set to physical compatibility mode. The Advanced tab options for a RAID Group LUN are also shown here:

• Use SP Write Cache: Enables (default) or disables write caching for this LUN. For write caching to occur, the storage system must have two SPs and each must have adequate memory capacity. Also, write caching must be enabled for the storage system.
• FAST Cache: Enables or disables the FAST Cache for this LUN. Displayed only if the FAST Cache enabler is installed. If you enable the FAST Cache for Flash disk LUNs, the software displays a warning message. You should disable the FAST Cache for write intent log LUNs and Clone Private LUNs; enabling the FAST Cache for these LUNs is a suboptimal use of the FAST Cache and may degrade the cache's performance for other LUNs.
• No Initial Verify: When unchecked, performs a background verification.
• Default Owner: Sets the SP that is the default owner of the LUN (SP A, SP B, or Auto). The default owner has control of the LUN when the storage system is powered up.
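A hedged CLI equivalent for binding a Classic LUN is shown below; the SP address, LUN number, RAID Group ID, and capacity are placeholder values:

naviseccli -h 10.127.50.10 bind r5 20 -rg 10 -cap 50 -sq gb -sp a
naviseccli -h 10.127.50.10 getlun 20

The bind command creates a 50 GB RAID 5 Classic LUN with LUN number 20 in RAID Group 10, defaulting its ownership to SP A; getlun then confirms the new LUN's properties.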


From the Storage > Storage Configuration > Storage Pools page, RAID Groups tab, select the RAID Group and its related LUN from the Details section. Then either click the Properties button or right-click over the LUN and select Properties. Optionally, the LUN Properties window can be launched by selecting the Storage > LUNs page and hitting the Properties button with the LUN selected. The RAID Group LUN Properties window is displayed with all the information about the selected LUN and is comprised of several tabs: General, Cache, Statistics, Hosts, Disks, Folders, and Compression. The General tab displays the current configuration of the selected LUN, its ID information and current state, the RAID Group, element size and capacity information of the LUN, and its ownership information. The Cache tab displays information about the LUN cache settings and allows the user to enable or disable write caching for the selected LUN. The Statistics tab displays the LUN statistics if statistics logging is enabled for the SP that owns the LUN. The Hosts tab displays information about the host or hosts connected to the LUN; the host name, IP address, OS, and the logical and physical device naming of the LUN. For virtualized environments, the virtual machine components that use the LUN are also listed. The Disks tab lets the user view information about the disks on which the LUN is bound. The Folders tab displays a list of folders to which the LUN belongs. The Compression tab lets the user turn the feature on, which will migrate the LUN to a Pool LUN. Detailed information is available for each tabbed page by accessing its Help button.


Having created LUNs from either a Pool or RAID Group, the next step is to provision the storage to a host. This operation is effectively configuring Access Logix on the VNX where a registered host or hosts will be granted access to the created LUN or LUNs.

The provisioning operation starts with the VNX Storage Group and is accessed by navigating to Hosts > Storage Groups from the Unisphere navigation bar. A wizard is available from the right-side task pane to provision storage to block hosts. The ~filestorage Storage Group is always present on the VNX and is the Storage Group defined for VNX File. Additional Storage Groups can be created for the registered block hosts by clicking the Create button.


The configuration of a Storage Group is a two-part task that involves adding created LUNs and registered hosts. Each task is performed from its own tabbed page. Adding LUNs is done from the LUNs tab by selecting LUNs from an Available LUNs tree and clicking the Add button. LUNs can be selected by expanding tree objects to view the LUNs, and it is possible to select multiple LUNs within the tree. The LUNs will be added to the Selected LUNs section of the page, where the Storage Group HLU value can optionally be set. This option is exposed by moving the mouse to the Host LUN ID area for a selected LUN and clicking the right mouse button. Adding registered hosts is done from the Hosts tab by selecting a host or hosts from a list of available hosts and clicking the right arrow button.
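For reference, the same Storage Group configuration can be scripted with naviseccli. This is a sketch with placeholder values for the SP address, group name, host name, and LUN numbers:

naviseccli -h 10.127.50.10 storagegroup -create -gname SG_HostA
naviseccli -h 10.127.50.10 storagegroup -addhlu -gname SG_HostA -hlu 0 -alu 10
naviseccli -h 10.127.50.10 storagegroup -connecthost -host HostA -gname SG_HostA -o
naviseccli -h 10.127.50.10 storagegroup -list -gname SG_HostA

The -addhlu option performs the ALU-to-HLU mapping discussed earlier (array LUN 10 is presented to the host as LUN 0), -connecthost adds the registered host to the group, and -list verifies the result.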


The Properties page of a Storage Group will display its current configuration and is presented by navigating in Unisphere to Hosts > Storage Groups and selecting a specific storage group. The window is comprised of three tabs: General, LUNs and Hosts.

The General tab displays the current configuration of the selected Storage Group, its WWN and name. The LUNs tab will present an Available LUNs tree that can be expanded to select additional LUNs to add to the Storage Group. The Selected LUNs section displays the LUNs currently configured into the Storage Group; LUNs can be selected from this section for removal from the Storage Group. The Hosts tab displays the list of available registered hosts that can be added to the Storage Group, as well as listing the currently configured registered hosts for the Storage Group. The configured hosts can be selected for removal from the Storage Group. Detailed information is available for each tabbed page by accessing its Help button.


When storage has been provisioned to a host, the host needs to discover the new block storage presented to it. This is typically done by a SCSI rescan on the host and is done in various manners for different host OSs. Illustrated here is how VNX File discovers provisioned storage. Block host storage discovery specifics are covered in the appropriate block host connectivity guide. To discover newly provisioned storage to VNX File, in Unisphere select Storage from the navigation bar. On the right-side tasks pane, scroll down to the File Storage section and click the Rescan Storage Systems link. This initiates a SCSI bus rescan for each Data Mover in the VNX system. It is a long-running task, and you can verify its completion by navigating to System > Background Tasks for File. When complete, the newly discovered storage will automatically be configured as disk volumes by the volume manager within VNX File.
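If the Control Station CLI is preferred, the rescan can be triggered with a command along the lines of the following (a sketch; confirm the syntax in the VNX for File man pages before use):

nas_diskmark -mark -all

This scans the back end from each Data Mover and marks the newly presented LUNs so the File-side volume manager can build disk volumes from them.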


With the storage LUNs now provisioned to the hosts within the Storage Group, the next step is to ready the hosts to use the storage for applications and users.


This instructor-performed demonstration covers the provisioning of storage. Disks from the VNX are used to create two Pools: one pool that will be used for creating Thick LUNs and be provisioned for File storage, and another pool for creating Thick and Thin LUNs for other block hosts. Disks from the VNX are also used to create RAID Groups for creating Classic LUNs. The LUNs created will be assigned for use with advanced storage features in lab exercises later in this course. Reference videos of the storage provisioning are also included and can be used for future reference. To launch the videos, use the following URLs:
Creating Pools and Pool LUNs https://edutube.emc.com/Player.aspx?vno=L1l3uClTNZmFAX7HP2O+Qg==&autoplay=true
Creating a RAID Group and Classic LUNs https://edutube.emc.com/Player.aspx?vno=s/dAs/D/VgiDpC03OlBN6Q==&autoplay=true
Provisioning Storage for File https://edutube.emc.com/Player.aspx?vno=sYF/frALloGIYd4ArmccXg==&autoplay=true


This lesson covers the steps taken at a connected host to ready the provisioned storage for use by applications or users. We will discuss these tasks for the Windows, Linux, and ESXi operating systems.


The slide outlines the steps to ready storage at the Windows host. Once LUNs are created, add the LUNs to a Storage Group and connect the Windows host to the Storage Group:

1. Align the data (if necessary)
2. Use Disk Management on the Windows host to discover LUNs
3. Initialize devices
4. Assign drive letters
5. Format drives
6. Write data


Data alignment refers to the alignment of user data on the LUN. Misalignment of data affects both reads and writes. Additional I/O caused by disk crossings slows down the overall system, while multiple accesses to disk make individual I/Os slower. In the case of cached writes, the misalignment causes slower flushing, which leads to reduced performance. Note that it is not Windows itself that causes this issue, but the Intel architecture’s use of a Master Boot Record at the beginning of the disk. Linux systems (as well as VMware, etc.) on the Intel architecture will be similarly affected. Note that Windows 2008 and 2012 automatically align partitions at the 1 MB boundary, so no user intervention is required. For other Windows versions, alignment at the 1 MB boundary is recommended (see Microsoft KB Article 929491). Windows Server 2003, the last Windows version requiring manual data alignment, was scheduled to reach End of Service Life on July 14, 2015.


If it is necessary to align a Windows host, you can do this using the Microsoft utility ‘diskpart’, which is available as part of the Windows OS. Diskpart allows the size of the reserved area to be selected manually. The Microsoft-recommended value of 2,048 blocks (1 MB) ensures alignment of the Windows file system blocks and the VNX 64 KB elements. This offset value causes user data to be written from the beginning of the first element of the fifth stripe. This is acceptable for random and sequential I/O. The assumption made in the diagram is that the NTFS partition has been formatted with 4 KB blocks, though the mechanism is valid for 8 KB or even 64 KB blocks. This block size fits evenly into a VNX 64 KB element. Diskpart should be used on the raw partition before formatting; it will destroy all data on a formatted partition. The use of diskpart allows the start of the raw LUN and the start of the partition to be on 64 KB boundaries. Host accesses as well as VNX Replication Software accesses will be aligned to the physical disk element.

VNX allows the configuration of a LUN Offset value when binding striped LUNs (RAID-0, RAID-3, RAID-1/0, RAID-5, RAID-6). The selection of any value other than 0 causes an extra stripe to be bound on the LUN. The ‘logical start’ of the LUN is marked as being ‘value’ blocks from the end of the ‘-1’ stripe (63 blocks in a W2K/W2K3 environment). The OS writes metadata in that space, and application data will be written at the start of stripe 0, and will therefore be aligned with the LUN elements. Performance will be appreciably better than with misaligned data.


Use these steps to align data with diskpart:

• Use the select disk command to set the focus to the disk that has the specified Microsoft Windows NT disk number. The example shows disk 5 has been selected. If you do not specify a disk number, the command displays the current disk that is in focus.
• The list disk command provides summary information about each disk that is installed on the computer.
• The list volume command displays the volumes on the physical disk (disk 5).
• The list partition command can be used to display the disk partitions; the example shows the disk has been partitioned with an offset of 1024 KB (create partition primary align=1024).

Notes:
• When you type the create partition command, you may receive a message that resembles the following: DiskPart succeeded in creating the specified partition.
• The align=number parameter is typically used together with hardware RAID Logical Unit Numbers (LUNs) to improve performance when the logical units are not cylinder aligned. This parameter aligns a primary partition that is not cylinder aligned at the beginning of a disk and then rounds the offset to the closest alignment boundary; number is the number of kilobytes (KB) from the beginning of the disk to the closest alignment boundary. The command fails if the primary partition is not at the beginning of the disk. If you use the command together with the offset=number option, the offset is within the first usable cylinder on the disk.
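Putting the steps together, a minimal diskpart session for the example disk would look something like the following (the disk number 5 and the 1024 KB offset come from the example; run this only on a raw, unformatted disk):

diskpart
DISKPART> list disk
DISKPART> select disk 5
DISKPART> create partition primary align=1024
DISKPART> list partition
DISKPART> exit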


Once the VNX Storage Group is created with attached LUNs and connected hosts, the Windows Server Manager utility can be used to scan for new disks, then initialize and format the disk for use.

This example shows a Windows 2012 host running the Server Manager utility. The utility is launched from the Server Manager icon on the taskbar. Once in Server Manager, navigate to File and Storage Services > Disks. If your new disks do not appear, click the Tasks pull-down menu and choose Rescan Storage. This will search for new disks and present them in the Disks window. The example shows the LUN (Disk 6) which we added to the Storage Group previously. As you can see, it is showing as offline and unknown, so users will need to initialize the disk, assign a drive letter, and format the disk. Right-click on the desired disk and choose Bring Online, then click Yes in the Bring Disk Online window.


To create a new volume on a disk, right-click on the disk that was just brought online and choose New Volume. This will launch the New Volume wizard. The wizard will walk the user through several steps after selecting Next.

The wizard prompts the user to bring the disk online and initialize it, and asks for the size of the new volume, a drive letter, and the file system type. Once completed, it asks the user to confirm the settings; if they are correct, click Create to create the new volume. As you can see, once the volume is created, Disk 6 is now online and partitioned.


The slide details the steps to ready storage for use on Linux hosts. Once LUNs are created, add the LUNs to a Storage Group and connect the Linux host to the Storage Group. Scan to discover LUNs, align the data (if necessary), partition the volume, and create and mount the file system for use.


After assigning LUNs and Linux hosts to Storage Groups, the hosts with the newly assigned LUNs will need to rescan their SCSI bus to recognize the new devices. Linux provides multiple mechanisms to rescan the SCSI bus and recognize SCSI devices presented to the system. In the 2.4 kernel solutions, these mechanisms were generally disruptive to the I/O since the dynamic LUN scanning mechanisms were not consistent. With the 2.6 kernel, significant improvements have been made and dynamic LUN scanning mechanisms are available. Linux currently lacks a kernel command that allows for a dynamic SCSI channel reconfiguration like drvconfig or ioscan. The mechanisms for reconfiguring devices on a Linux host include:

• System reboot: most reliable way to detect newly added devices
• Unloading and reloading the modular HBA driver: the Host Bus Adapter driver in the 2.6 kernel exports the scan function to the /sys directory
• Echoing the SCSI device list in /proc
• Executing a SCSI scan function through attributes exposed to /sys
• Executing a SCSI scan function through HBA vendor scripts

EMC recommends that all I/O on the SCSI devices should be quiesced prior to attempting to rescan the SCSI bus.
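As an illustration of the /sys-based mechanism (host0 is an assumed adapter instance; repeat for each HBA instance present), a rescan can be issued from a shell as follows:

echo "- - -" > /sys/class/scsi_host/host0/scan    # wildcard rescan of all channels, targets, and LUNs on host0
echo "1" > /sys/class/fc_host/host0/issue_lip     # optional: force the FC HBA to rediscover the fabric

The three dashes are wildcards for channel, target, and LUN, so the first command asks the driver to probe for any newly presented devices on that adapter.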


After a bus rescan or reboot, verify the LUN is available to the OS by opening the scsi file in the /proc/scsi directory. As you can see from the Storage Groups view in Unisphere, the LUN that was assigned to the group has a Host LUN ID of 60, and when looking at the /proc/scsi/scsi output you can see that LUN 60 is now available.
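The check itself is just a read of the file, for example:

cat /proc/scsi/scsi    # lists every SCSI device the kernel currently knows about, including the new LUN 60
lsscsi                 # if the lsscsi package is installed, gives a friendlier one-line-per-device view

The second command is optional and simply presents the same information in a more readable form.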


Here we see an example of how to use fdisk on Linux to align the first (and only) partition of the /dev/emcpowerj1 device. The starting logical block for the partition has been moved from 62 to 128.

After the change is made, the partition table needs to be written (with the final “w” command). At that point, the partition can be put into production, usually via addition to a volume group, or alternatively by building a filesystem directly on it. To create an aligned file system larger than 2 TB, use the GUID Partition Table (GPT). GPT provides a more flexible way of partitioning than the older Master Boot Record (MBR) and does not require any alignment value. A protective MBR entry occupies the first sector on the disk, followed by the Primary GPT Header, the actual beginning of the GPT.
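As a sketch of the GPT approach (the device name /dev/emcpowerj is taken from the example; adjust to the actual pseudo device), an aligned partition can be created non-interactively with parted:

parted -s /dev/emcpowerj mklabel gpt
parted -s /dev/emcpowerj mkpart primary 1MiB 100%
parted /dev/emcpowerj align-check optimal 1

The 1MiB starting offset keeps the partition aligned with the VNX 64 KB elements, and align-check confirms the alignment of partition 1.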


For Linux, the final steps in the process are to create and mount a file system on the Linux host. The example shows the mkfs command making a file system on an emcpower device partition, and then creating a mount point and mounting the file system using the mkdir and mount commands. Once the file system is created and mounted, the touch command can be used to write a data file to the device.
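A representative command sequence is shown below; the ext4 file system type and the /mnt/vnx_lun mount point are assumptions for illustration:

mkfs.ext4 /dev/emcpowerj1        # build a file system on the aligned partition
mkdir -p /mnt/vnx_lun            # create a mount point
mount /dev/emcpowerj1 /mnt/vnx_lun
touch /mnt/vnx_lun/testfile      # write a test file to confirm the mount is usable
df -h /mnt/vnx_lun               # verify the mounted capacity

For a persistent configuration, an entry would also be added to /etc/fstab.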


The phase of readying the host to use storage is somewhat different in virtualized environments. The step to be performed by the hypervisor, in this case vSphere, is to scan for storage so that it can then be assigned to the virtual machine.

Although these initial steps are brief, there is still much that can be done to manage the connected storage. This will be discussed in the following module.


After the LUNs and ESXi hosts are added to the Storage Group, perform a rescan from the vSphere client Storage Adapters > Configuration page to make the LUNs visible to the ESXi host. Once on the Configuration pane, right-click the HBA the LUNs were assigned to and choose Rescan. Once the rescan is completed, the LUNs will appear in the Details pane as seen on the slide. You can also use the Rescan All link at the top right of the page; if this link is selected, it will automatically rescan all adapters that are present.
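The same rescan can be run from the ESXi command line. This is a sketch; vmhba2 is a placeholder adapter name:

esxcli storage core adapter rescan --all      # rescan every storage adapter on the host
esxcfg-rescan vmhba2                          # or rescan a single adapter
esxcli storage core device list | grep -i DGC # list detected devices; VNX/CLARiiON LUNs report vendor DGC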


Here is a list of functions and utilities to use when dealing with ESXi servers. The use of these functions and utilities is optional; they are listed for reference only.


Now that we can integrate hosts with VNX storage at a basic level, we can move on to discuss other aspects of VNX storage. Path management includes planning for path failures, load balancing across data paths, and working with host-based utilities. Advanced features can be employed; these include migrating data, FAST VP, and FAST Cache. VNX provides options for block data replication, VNX Snapshot and SnapView. There are also full-service File capabilities for NAS environments that provide high availability and file-level replication features, such as SnapSure. As we move forward in this course, we will discuss these topics.


This module covered the three major stages of integrating hosts with VNX storage. Storage networking involves options for FC or iSCSI, each with some requirements. FC networking requires the host HBA and VNX Front End ports to be zoned together. Once hosts can access the array, we need to confirm in Unisphere that the hosts are registered. Provisioning storage involves choosing storage media types as well as selecting between Storage Pools or traditional RAID Groups. LUNs configured on each of these are then added to the Storage Group along with the registered host. Once the storage has been provisioned to the host, some additional tasks may be required to ready the storage at the host. These tasks include bringing the volume online and creating a usable file system. Older systems may also require disk alignment.


This lab covers the storage configuration of the VNX. This exercise verifies the preconfigured storage configuration of the lab VNX; the storage that is provisioned for VNX File, the configuration of Pools and RAID Groups. Classic LUNs are created from the preconfigured RAID Groups. Thick and Thin LUNs are created from a pre-configured pool. And finally the system’s hot spare policy is verified.


This lab covered storage configuration of the VNX. It verified the pre-provisioned storage for File and the pre-configured RAID Groups and Pools. The creation of Classic, Thick, and Thin LUNs was performed, and the system’s hot spare policy was verified.

Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some of the concerns relating to the lab subject?



This module focuses on managing the data path from a host to the storage on the array. We will discuss PowerPath functions that provide both high availability and load balancing. The VNX internal options for high availability are discussed as well as host utilities. VNX integration with VMware VAAI and VASA storage APIs are described.


This lesson covers the options for managing the data path for high availability and load balancing from the host to VNX using PowerPath. We will consider path fault tolerance options within VNX.


PowerPath is an EMC-supplied, host-based layered software product that provides path management, load balancing, and high availability on any supported host platform. PowerPath operates with several storage systems, on several operating systems, and supports both Fibre Channel and iSCSI data channels (with Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 non-clustered hosts only, parallel SCSI channels are supported). By design, PowerPath is a configure-it-and-forget-it product for most typical deployments. Any subsequent, manual reconfiguration is required only in highly specific situations. The fundamental strength of PowerPath is the default, out-of-the-box functionality that it provides for automatic path failover, automatic path restore, and load balancing. This greatly simplifies host-side administration for multi-pathed storage. It reduces essential administrative tasks to routine monitoring, manual spot checks on path availability, and examining PowerPath logs when path faults are suspected.


Since PowerPath sits above host-native Logical Volume Managers (LVMs), PowerPath devices can be managed just as any other device. The following host LVMs have been qualified for PowerPath:

• Solstice DiskSuite, Veritas, and VCS on Solaris (with Veritas, multipathed devices should be excluded from DMP control; the recommendation is to use native devices within Veritas, not emcpower pseudo devices)
• Veritas and native LVM on HP-UX
• Veritas and native LVM on AIX (add hdiskpower devices to AIX volume groups; this may be done via smitty)
• Sistina LVM on Linux

The PowerPath Installation and Administration Guide for each supported operating system provides information on integration with specific third-party volume managers.


Data path failures result in the failure of I/O requests from the host to the storage device. PowerPath auto-detects these I/O failures, confirms the path failure via subsequent retries, and then reroutes the I/O request to alternative paths to the same device. The application is completely unaware of the I/O rerouting. Failover is fully transparent to the application. After the failure, PowerPath continues testing the failed path. If the path passes the test, PowerPath resumes using it. Thus, PowerPath provides the facility to continuously monitor the state of all configured paths to a LUN. PowerPath manages the state of each path to each logical device independently. From PowerPath’s perspective, a path is either alive or dead:

• A path is alive if it is usable; PowerPath can direct I/O to this path.
• A path is dead if it is not usable; PowerPath does not direct user I/O to this path. PowerPath marks a path dead when it fails a path test; it marks the path alive again when it passes a path test.

The PowerPath path test is a sequence of I/Os PowerPath issues specifically to ascertain the viability of a path. If a path test fails, PowerPath disables the path and stops sending I/O to it. After a path fails, PowerPath continues testing it periodically, to determine if it is fixed. If the path passes a test, PowerPath restores it to service and resumes sending I/O to it. The storage system, host, and application remain available while the path is restored. The time it takes to do a path test varies. Testing a working path takes milliseconds. Testing a failed path can take several seconds, depending on the type of failure.
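Path state can be checked and restored from the host with the powermt utility. The commands below are a sketch of routine monitoring (device and class qualifiers can be added as needed):

powermt display dev=all    # show every managed LUN with the alive/dead state of each path
powermt check              # prompt to remove paths that are flagged dead
powermt restore            # issue test I/O to failed paths and return working ones to service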


Example causes of I/O failure include HBA/NIC, network, switch, interface, and interface port failures.


PowerPath has built-in algorithms that attempt to balance I/O load over all available, active paths to a LUN. This is done on a host-by-host basis. It maintains statistics on all I/O for all paths. For each I/O request, PowerPath intelligently chooses the least-burdened available path, depending on the load-balancing and failover policy in effect. If an appropriate policy is specified, all paths in a PowerPath system have approximately the same load.


The PowerPath mode setting can be configured to either Active or Standby for each native path to a LUN. The default mode is Active, although this can be changed by the administrator. Since this can be tweaked on a per-LUN basis, it becomes possible to reserve the bandwidth of a specific set of paths to a set of applications on the host. I/O is usually routed to Standby paths, only when all Active paths to the LUN are dead. When multiple Active paths are available, PowerPath attempts to balance load over all available Active paths. Load-balancing behavior is influenced by mode setting:

• PowerPath will route I/O requests only to the Active paths.
• Standby paths will be used only if all Active paths fail.

Mode settings can also be used for dedicating specific paths to specific LUNs (and thus to specific applications).
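As a hedged example of these controls (the HBA number 7 is a placeholder; confirm policy names for your PowerPath release), the load-balancing policy and per-path mode are set with powermt:

powermt set policy=co dev=all            # CLAROpt policy, the optimized policy for VNX/CLARiiON-class arrays
powermt set mode=standby hba=7 dev=all   # reserve all paths through HBA 7 as standby
powermt display dev=all                  # confirm the resulting policy and mode settings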


In this slide we see a depiction of a failover to the Standby mode devices. Notice that all of the Active paths have failed before the Standby went online.


To help support high availability in the context of path failover, VNX has three options which allow hosts to access a LUN via either of the VNX SPs. The three options, listed on this slide, vary regarding how data is accessed after a failover, as well as how data is accessed during normal operations. The following sequence of slides will discuss each of these options.


With Active/Passive arrays such as a VNX, there is a concept of LUN ownership. On a VNX array, every LUN is owned by one of the two Storage Processors. Host paths to the currently active SP are active paths, and can service I/O. Paths to the same LUN via the other SP are passive; PowerPath is aware of them, but does not route I/O requests to them. When LUN ownership changes to the other SP, the active paths for that LUN become passive, and vice versa. A LUN trespass can occur in one of two ways. The trespass can be initiated by the array itself, when it detects total failure of an SP, or when the SP needs to reboot. When this happens, PowerPath becomes aware of the change in LUN ownership, and follows over the LUN to the other SP. This follow-over is reported by PowerPath’s logging mechanism. A LUN trespass can also occur when an I/O fails due to path failure from the HBA to the SP (e.g. cable, switch problems). When this happens, PowerPath initiates the LUN trespass and logs the trespass. When there are multiple available paths to each SP, every path to the currently active SP must fail before PowerPath initiates a trespass. The PowerPath mechanisms described above, follow-over and host-initiated trespass, apply to other supported Active/Passive arrays as well.


ALUA (Asymmetric Logical Unit Access) is a request forwarding implementation. In other words, the LUN is still owned by a single SP; however, if I/O is received by an SP that does not own a LUN, that I/O is redirected to the owning SP. It is redirected using a communication method to pass I/O to the other SP. ALUA terminology:

• The optimized path is a path to the SP that owns the LUN; a non-optimized path is a path to an SP that does not own the LUN.
• This implementation should not be confused with an active-active model, because I/O for a given LUN is not serviced by both SPs. LUN ownership is still in place; I/O is redirected to the SP owning the LUN.
• One port may provide full performance access to a logical unit, while another port, possibly on a different physical controller, provides either lower performance access or supports a subset of the available SCSI functionality. ALUA uses failover mode 4.

In the event of a front-end path failure, the LUNs are not initially trespassed in case the path fault is transient. To prevent unneeded trespasses, the Upper Redirector driver routes the I/O to the SP owning the LUNs through the CMI channel. If the system later detects that the path failure is persistent, it recognizes that the optimal path is through the peer SP, trespasses the LUNs, and sends I/O via that path. A back-end path failure is handled in much the same manner, but by the Lower Redirector. On the initial path failure, the Lower Redirector routes the I/O to the SP owning the LUNs through the CMI channel. LUNs are not trespassed initially in case the path fault is transient; only when the system detects a persistent path failure are LUNs trespassed to the peer SP. An additional benefit of the Lower Redirector is internal, in that the replication software drivers (including metaLUN components) are also unaware of the redirect. When in failover, after the original path failure is resolved, the LUNs will be returned to the original SP owner under any of the following conditions:

• The LUNs are manually trespassed back by the administrator.
• By another automatic failover if the peer path fails.
• By host failover software, such as PowerPath, that supports automatic restore.
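The failover mode is normally set through the Unisphere host connectivity/failover wizard, but as a hedged sketch it can also be applied per host from the CLI (the host name and SP address are placeholders; verify the options against the current CLI reference before use):

naviseccli -h 10.127.50.10 storagegroup -sethost -host HostA -failovermode 4 -arraycommpath 1 -o

Failover mode 4 enables ALUA behavior for that host's initiators; the host typically needs to rediscover its paths (or reboot) for the change to take effect.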


The Symmetric Active/Active feature provides key path management capabilities for the VNX. With it, both SPs serve I/O to a given LUN regardless of LUN SP ownership. The host can send I/O to either SP and either SP can service the I/O to the LUN. If a front-end or a back-end path fails, I/O will continue to the LUN through the surviving path as though there was no failure at all. Ownership of the LUN does not trespass to the peer SP. There is a potential performance gain when using both SP paths for I/O. This would only be realized if the host I/O exceeds the bandwidth limit of a single path. The feature is available for Classic LUNs from RAID Groups and does not support Thick or Thin LUNs from a Pool. It should be noted that for a host to take advantage of using both SPs for I/O, the host Operating System must support Symmetric Active-Active, or be running software such as PowerPath 5.7 which can take advantage of this feature.


For Classic LUNs, VNX’s LUN Parallel Access Locking Service is also available for a host to be able to write I/O through both SPs at the same time. This service allows each SP to reserve Logical Block Addresses (LBAs) on a LUN at which it will write it’s information. In the example above, a host sends information down paths to both SPs. The SPs communicate to each other that they will be writing to a specific LUN and where. The information sent over the CMI is much smaller and happens more quickly than the actual writing to the LUN and so has no impact on the writing process. The SPs then use the locks to write to their section of the LUN in parallel with each other. Using the same process, when a host is going to read from a LUN on both SPs, shared locks are given out for the read I/Os. This way both SPs can access the same area on the LUN (Symmetric Active/Active) for increased performance.


VNX Symmetrical Active-Active access provides added reliability and availability of Classic LUNs, since all paths can be active at the same time. During path or SP failures, or NDUs, there is no need for trespassing LUNs to the alternate SP with the delay involved, improving the reliability of the system. The Active-Active feature is easy to implement. There are no settings to configure on the VNX and any host OS that is capable of using this feature does so automatically. With all paths able to serve I/O to hosts there is up to a 2X performance boost possible.


This lesson covers Windows, Linux and ESXi server connectivity to VNX block storage. It explores guidelines and best practices for host connectivity and describes utilities for managing host to storage connectivity.


Even though Windows allows easy use of host-based RAID, it is strongly recommended that storage system-based RAID be used. It is more efficient and uses no host cycles for tasks such as calculating parity. In addition, other VNX features discussed earlier ensure higher data availability and integrity than allowed by host-based RAID. Basic disks are simple to configure and have proven easier to manage in many environments. The use of dynamic disks offers no advantage to VNX users; basic disks may be mounted on mount points and may be grown, or extended, as can dynamic disks. Note: If replication software is used (MirrorView, SnapView, or SAN Copy) and you are working with a Windows 2003 LUN, then LUN alignment should be performed by means of the Microsoft diskpart/diskpar utility. Set the value to 1024 KB.


For all Linux environments, EMC supports up to 16 Fibre Channel initiator ports on a single host. The host initiators may be single or dual channel HBAs. The number of host initiator ports on a server is also limited by the number of HBA slots available on the server and supported by the server vendor. Note: EMC does not support the mixing of HBAs from different vendors or mixing HBAs with different PCI-interfaces on the same host. It is required that the SAN connections used for multipathing be homogenous. For example, using both a PCI-X and a PCI-Express HBA to connect to the same set of logical units (LUNS) from the same host is not supported. EMC PowerPath stipulates a maximum of 32-paths to a single LUN. The number of logical units seen by a host system is dependent on the SCSI scan algorithm employed by the operating system and the LUN scan limits imposed by the host bus adapter. Linux supported protocols are:

• Fibre Channel (FC)
• Fibre Channel over Ethernet (FCoE)
• iSCSI

Note: Refer to the EMC Host Connectivity Guide for Linux on support.emc.com for details.


Shared storage offers a number of benefits over local storage on ESXi hosts. Shared storage allows vMotion migrations to be performed; allows you to have a fast, central repository for virtual machine templates; allows you to recover virtual machines on another host if you have a host failure; allows clustering of virtual machines across hosts; and allows you to allocate large amounts (terabytes) of storage to your ESXi hosts. Before you implement your vSphere environment, discuss your vSphere storage needs with your storage administration team. Discuss things like LUN sizes, I/O bandwidth required by your applications, disk cache parameters, zoning and masking, identical LUN presentation to each ESXi host, and which multi-pathing setting to use (active-active or active-passive) for your shared storage.


SP Cache is enabled for all LUNs by default, even those that consist solely of Flash drives. FAST Cache is not enabled for Flash-based LUNs, though. As is true of other operating systems, ESXi Server will not typically use much read cache on the VNX. It is also a best practice to dedicate a LUN to a single VMFS volume; this will aid in the backup/restore process, particularly if storage system based solutions are used. RAID type selection should follow the guidelines that would be used for physical hosts running the applications that the VMs will run; a VM behaves almost identically to a physical host running a specified application.


VMware Raw Device Mapping (RDM) is a mechanism to provide direct volume access to a guest OS within a virtual machine. This is very useful when an application requires direct management of its storage volumes, e.g., databases. RDMs in physical mode behave exactly like LUNs connected to a physical host. This means that any SAN or storage system tools may be used to manage them. Examples include the admsnap and admhost utilities. Though the performance of RDMs and virtual disks is comparable in many environments, RDMs are the preferred choice when performance is a high priority. I/Os are processed by the VM with minimal intervention by the ESXi host, and this reduced overhead improves response time. Some clustering configurations, e.g. MSCS in a physical-virtual cluster, require the use of physical mode RDMs.
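A hedged sketch of creating RDM mapping files from the ESXi shell with vmkfstools; the naa identifier, datastore, and file names are placeholders for your environment:

   # Physical (pass-through) compatibility RDM - SCSI commands go straight to the VNX LUN
   vmkfstools -z /vmfs/devices/disks/naa.<device_id> /vmfs/volumes/<datastore>/<vm>/<vm>_rdmp.vmdk
   # Virtual compatibility RDM - allows ESXi snapshots of the mapped LUN
   vmkfstools -r /vmfs/devices/disks/naa.<device_id> /vmfs/volumes/<datastore>/<vm>/<vm>_rdmv.vmdk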


Utilities to help manage connectivity to VNX block storage are available from HBA vendors. The utilities provide a graphical user interface (GUI) to manage the HBA connectivity. Scriptable command line interface (CLI) utilities are also available. Emulex provides the OneCommand Manager GUI utility for Windows and Linux hosts. It also provides a vCenter plug-in for ESXi servers using Emulex HBAs. The Emulex command line utility is hbacmd. QLogic provides the SANsurfer GUI utility for Windows and Linux hosts. The QConvergeConsole is a VMware vCenter plug-in to manage connectivity of ESXi servers using QLogic HBAs. The QLogic command line utility is scli. Brocade has a GUI utility called Host Connectivity Manager for Windows and Linux hosts. For ESXi servers using Brocade HBAs, the Network Advisor vCenter plug-in is available. The Brocade command line utility is bcu. The utilities function similarly, offering a rich set of HBA and SAN connectivity information for the host and target storage. The HBA firmware and software drivers can be managed. They also provide other HBA properties and diagnostics information.


This slide illustrates the Emulex OneCommand Manager GUI for a Windows host. The Linux and ESXi GUIs display similar HBA connectivity information. Hosts that are configured with Emulex HBAs can use the OneCommand Manager utility to determine the types of HBAs that were discovered as well as the VNX arrays to which they are attached. Each HBA can be expanded to show its targets, the WWPNs of the VNX SP ports. For example, LP10000DC - 10:00:00:00:C9:5C:4A:30 shows connections to two ports on one VNX array (50:06:01:62:47:20:32:88 and 50:06:01:6A:47:20:32:88). By highlighting an HBA, the user can view different types of HBA properties from the Port Information tab as shown in the right window. The Port Information tab contains detailed information associated with the selected adapter port. To do this, navigate to Host View or Fabric View > Select an adapter port in the discovery-tree > Select the Port Information tab. Users can see from this screen that the HBA drivers are loaded and that the initiators (HBAs) can see the targets (VNX SP ports). Check the Hosts > Initiators window on the storage system with Unisphere to validate further.


The Target Mapping tab enables you to view current target mappings and set up persistent bindings. Target mappings are displayed by World Wide Port Name (WWPN), World Wide Node Name (WWNN), device ID (D_ID), SCSI ID, and Type. It is a good idea to set up persistent binding on adapters. Global automapping assigns a binding type, target ID, SCSI Bus and SCSI ID to the device. The binding type, SCSI Bus and SCSI ID can change when the system is rebooted. With persistent binding applied to one of these targets, the WWPN, SCSI Bus and SCSI ID remain the same when the system is rebooted. The driver refers to the binding information during system boot. When you create a persistent binding, the OCManager application tries to make that binding dynamic. However, the binding must meet all of the following criteria to be dynamic:



• The SCSI ID (target/bus combination) specified in the binding request must not be mapped to another target. For example, the SCSI ID must not already appear in the 'Current Mappings' table under 'SCSI ID'. If the SCSI ID is already in use, then the binding cannot be made dynamic, and a reboot is required.

• The target (WWPN, WWNN or DID) specified in the binding request must not be mapped to a SCSI ID. If the desired target is already mapped, then a reboot is required.

• The bind type (WWPN, WWNN or DID) specified in the binding request must match the currently active bind type shown in the Current Settings area of the Target Mapping tab. If they do not match, then the binding cannot be made active.

Note: The Emulex driver sets the bindings to Automapping when configured initially. Select the Driver Parameters tab to view the specific settings.


This CLI example illustrates using the Emulex hbacmd command to view HBA information. In this example, the output of the ./hbacmd listhbas and targetmapping commands is shown. The display provides users with a view of the HBA, its WWPN, the host the HBA is on (SAN2), and the HBA model (LP10000). Once the HBA WWPNs have been determined, use the targetmapping command to verify connectivity to the VNX storage system. The targetmapping output provides details on the VNX ports to which the HBA is connected, as well as any LUNs in the Storage Group that the host can see (/dev/sdj).
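A sketch of the two commands as they might be run from the OneCommand Manager CLI directory; the WWPN shown is the example adapter from the earlier slide and would be replaced with one returned by listhbas:

   # Discover the local Emulex adapters and their WWPNs
   ./hbacmd listhbas
   # Show the VNX SP ports and LUNs visible through one adapter port
   ./hbacmd targetmapping 10:00:00:00:c9:5c:4a:30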


This slide illustrates the QLogic SANsurfer GUI for a Windows host. The Linux and ESXi GUIs display similar HBA connectivity information. Hosts that are configured with QLogic HBAs can use the SANsurfer utility to determine the types of HBAs that were discovered as well as the VNX arrays to which they are attached. Each HBA can be expanded to view port information and to show its targets, the WWPNs of the VNX SP ports.


This example shows the CLI commands available when using QLogic HBAs. The window on the left shows the landing page displayed when scli is executed. Selecting 1 from the main menu launches the General Information page, where users can determine the HBA model and WWPNs.


This slide illustrates the Brocade Host Connectivity Manager GUI for a Windows host. The Linux and ESXi GUIs display similar HBA connectivity information.


FCoE converged network adapters (CNAs) are similar to Fibre Channel adapters in that they require drivers in order to function in a host. However, because of the CNA's ability to converge both Fibre Channel and Ethernet traffic over a single physical link, the adapter will appear to the host as two different pieces of hardware in Windows and Linux hosts. An ESXi host will display the CNA as either a traditional NIC or HBA, depending upon which I/O stack is being utilized. The example shown here is from a Windows server. When viewing the Windows Device Manager, you will see both a QLogic Fibre Channel adapter as well as an Intel 10 Gigabit network adapter available in the system. The installation of the QLogic FCoE CNA provides the host with an Intel-based 10 Gb/s Ethernet interface (using the existing in-box drivers), and a QLogic Fibre Channel adapter interface. Upon installation of the proper driver for the FCoE CNA, the Fibre Channel interface will function identically to that of a standard QLogic Fibre Channel HBA. In-depth information about FCoE and its supported features and topologies can be found in the "Fibre Channel over Ethernet (FCoE)" chapter of the EMC Networked Storage Topology Guide, available through E-Lab Interoperability Navigator. For CNA configuration procedures, refer to the product guide EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment.


This lesson covers the VNX integration with the VMware VAAI and VASA storage APIs.


The vStorage API for Array Integration (VAAI) is a VMware-based vendor neutral storage API. It is designed to offload specific storage operations to compliant storage arrays. The array thus provides a hardware acceleration of the ESXi storage operations. Offloading storage operations to the array reduces ESXi CPU, memory and storage fabric bandwidth consumption, thus enabling more of its compute power to be available for running virtual machines. The VNX series is VAAI compliant for the block-based and file-based storage operations of ESXi servers.
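To confirm from the ESXi shell that a VNX device is claimed for hardware acceleration, the following commands (a sketch; the naa identifier is a placeholder) report per-primitive VAAI support:

   # Report VAAI (ATS, Clone, Zero, Delete) support for every device
   esxcli storage core device vaai status get
   # Limit the output to a single VNX LUN
   esxcli storage core device vaai status get -d naa.<device_id>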


The VAAI features or "primitives" for block-based storage operations are listed in the table. Some typical VMware operation scenarios that invoke a VAAI storage operation to the VNX block are shown.

Block Zero: Within a virtual disk file there is both used space (data) and empty space yet to be utilized. When a VM is cloned this "empty space" is also copied, which means that many additional SCSI operations are generated for, essentially, empty space. Block Zeroing allows for a reduced SCSI command operation set to be generated as the storage array is made responsible for zeroing large numbers of blocks without impacting the creating ESXi server. This is achieved by acknowledging the completion of the zero block write before the process has completed and then completing the process in the background.

Full Copy: The creation of a Virtual Machine (VM) from a template is performed within the storage array by cloning the Virtual Machine from one volume to another, utilizing the array functionality and not the ESXi server's resources.

Hardware Locking: VMFS volume locking mechanisms are offloaded to the array and are implemented at the sub-volume level. This allows the more efficient use of shared VMFS volumes by VMware cluster servers by allowing multiple locks on a VMFS volume without locking the entire volume. The ATS (Atomic Test and Set) operation atomically (noninterruptibly) compares an on-disk sector to a given buffer, and, if the two are identical, writes new data into the on-disk sector. The ATS primitive reduces the number of commands required to successfully acquire an on-disk lock.

Thin Provisioning: When a VM is deleted or migrated from a Thin LUN datastore, the space that was consumed is reclaimed to available space in the storage pool.

Stun and Resume: When a thin LUN runs out of space due to VM space consumption, the affected VM is paused to prevent it from being corrupted. The storage administrator is alerted to the condition so more storage can be allocated.
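The ESXi advanced settings behind these primitives can be inspected from the ESXi shell; in this hedged sketch, a value of 1 means the primitive is enabled on the host:

   esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove    # Full Copy (XCOPY)
   esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit    # Block Zero (WRITE SAME)
   esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking     # Hardware Locking (ATS)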


VASA, the vSphere APIs for Storage Awareness, is a VMware-based, vendor-neutral storage API. VASA is a VMware-specific feature and protocol and uses an out-of-band, HTTP-based protocol to communicate with the storage environment. It is designed for the storage array to present its storage capabilities to VMware vCenter. The VNX will provide capabilities for its Storage Processors, LUNs, I/O ports and file systems. The array health status and space capacity alerts are also provided to vCenter. The VNX is VASA compliant for both its block-based and file-based capabilities and status monitoring.


To configure the VNX for VASA, a Unisphere user with the VM Administrator role needs to be configured. This user credential is then used by vCenter when it issues VASA queries to the VNX for its storage information. Next, the VNX needs to be added to vCenter as a Vendor Provider. A single Storage Processor is added for obtaining VNX block-based storage information. The Control Station is added for obtaining VNX file-based storage information.


A key aspect of the API integration is the ability to create and use profiles for system resource configuration. In this case a Storage Profile can be defined for a specific VM need, and then when performing vMotion, cloning, and similar operations, a profile can be substituted for an actual target device. The system will then choose a target with the same profile and will highlight the most suitable target based on the free space available on the target. Storage profiles can also be associated with a datastore cluster when SDRS (Storage Distributed Resource Scheduler) is enabled. When the profile is part of a datastore cluster, SDRS controls datastore placement. Storage capabilities can also be used for other tasks such as new VM creation and VM migration. They provide the ability to match virtual machine disks with the appropriate class of storage to support application I/O needs for VM tasks such as initial placement, migration, and cloning.


This module covered several aspects of managing connectivity from the host. We showed that PowerPath has failover modes to support high availability, as well as native load balancing to promote equalization of data going through each path to the storage. VNX has Active/Passive, Active/Active with ALUA, and Symmetric Active/Active modes to provide internal high availability to the LUNs. We discussed various utilities to help manage Block connections, including many vendor utilities. And the integration of VNX with the VMware storage APIs was described.


This Lab covers Windows, Linux, and ESXi host access to VNX block storage.


This lab covered Windows, Linux, and ESXi host access to block storage. Various host configurations were verified and the VNX Auto-manage host setting was verified. Creation and configuration of a Storage Group was performed and LUN access by the Windows, Linux, and ESXi host was accomplished. Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some of the concerns relating to the lab subject?


This module focuses on expanding Classic and Pool LUNs. We will also perform a LUN migration. FAST VP and FAST Cache will be examined. The storage efficiency features of Block Deduplication and Block Compression are detailed. The Data-At-Rest Encryption feature will also be covered.


This lesson covers the benefits and process of migrating a LUN, the procedures for expanding Pool LUNs, and an overview of Classic LUN expansion. It also shows how to extend the resulting volume on a Windows Server 2012 host.


The LUN Migration feature allows data to be moved from one LUN to another, regardless of RAID type, disk type, LUN type, speed and number of disks in the RAID Group or Pool. LUN Migration moves data from a source LUN to a destination LUN (of the same or larger size) within a single storage system. This migration is accomplished without disruption to applications running on the host though there may be a performance impact during the migration. A LUN Migration can be cancelled by the administrator at any point in the migration process. If cancelled before it completes, the source LUN returns to its original state and the destination LUN is destroyed. Once a migration is complete the destination LUN assumes the identity of the source, taking on its LUN ID, WWN, and its Storage Group membership. The source LUN is destroyed to complete the migration operation.
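A hedged naviseccli sketch of starting and monitoring a migration; the SP address and LUN numbers are placeholders, and the switch names should be confirmed against the CLI reference for your release:

   # Migrate LUN 6 onto destination LUN 24 at the High rate
   naviseccli -h <SP_IP> migrate -start -source 6 -dest 24 -rate high
   # Monitor progress; the destination assumes the source's identity when complete
   naviseccli -h <SP_IP> migrate -list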


A benefit of the LUN Migration feature is its use in storage system tuning. LUN Migration moves data from a source LUN to a destination LUN (of the same or larger size) within a single storage system. This migration is accomplished without disruption to applications running on the host. LUN Migration can enhance performance or increase disk utilization for the changing business needs and applications by allowing the user to change LUN type and characteristics, such as RAID type or size (Destination must be the same size or larger), while production volumes remain online. LUNs can be moved between Pools, between RAID Groups, or between Pools and RAID Groups. When a Thin LUN is migrated to another Thin LUN, only the consumed space is copied. When a Thick LUN or Classic LUN is migrated to a Thin LUN, the space reclamation feature is invoked and only the consumed capacity is copied.


The LUN Migration feature does have some guidelines for use. The LUNs used for migration may not be private LUNs, nor may they be in the process of binding, expanding or migrating. Either LUN, or both LUNs, may be metaLUNs, but neither LUN may be a component LUN of a metaLUN.

The destination LUN may not be part of a SnapView or MirrorView operation. This includes Clone Private LUNs, Write Intent Log LUNs, and Reserved LUN Pool LUNs. Note that the destination LUN is required to be at least as large as the source LUN, but may be larger.


When the FAST Cache feature is being used, ensure FAST Cache is OFF on LUNs being migrated. This prevents the migration's I/O from consuming capacity in the FAST Cache that may otherwise benefit workload I/O. When migrating into or between FAST VP pool-based LUNs, the initial allocation of the LUN and the allocation policy have an important effect on its performance and capacity utilization. The tiering policy setting (Highest, Auto, Lowest) determines which tier within the pool the data of the source LUN will be first allocated to. Be sure to set the correct policy needed to ensure the expected starting performance for all the source LUN's data. As much capacity from the source LUN as possible will be allocated to the appropriate tier. Once the migration is complete, the user can adjust the tiering policy. The migration rate will be lower when the source or destination LUN is a thin LUN. It is difficult to predict the transfer rate when the source LUN is a thin LUN, but it will be lower than for migrations involving thick or classic LUNs. The decrease in the rate depends on how sparsely the thin LUN is populated with user data, and how sequential in nature the stored data is. A densely populated LUN with highly sequential data increases the transfer rate. Random data and sparsely populated LUNs decrease it. ASAP priority LUN migrations with normal cache settings should be used with caution. They may have an adverse effect on system performance. EMC recommends that the user execute migrations at the High priority, unless migration time is critical.


The VNX classic LUN expansion (metaLUN) feature allows a base classic LUN to be expanded to increase LUN capacity. A base LUN is expanded by aggregating it with another classic LUN or LUNs, called component LUNs. When expanded, it forms a metaLUN which preserves the personality of the base LUN. There are two methods of aggregating the base and component LUNs to form the metaLUN: concatenation and striping. With concatenation, the capacity of the component LUN is added to the end of the base LUN and is available immediately. The I/O flow to the metaLUN is through the base LUN until its space is consumed, then the I/O flow extends onto the component LUN. It is recommended (but not required) to use component LUNs of the same size, RAID type, and disks (in both number and type) to maintain the performance profile of the base LUN. If the component LUN differs from the base LUN, the performance of the metaLUN will vary. With striping, the capacity of the component LUN is interlaced with that of the base LUN by a restriping process that forms RAID stripes across the base and component LUNs. Therefore, a component LUN must have the same size and RAID type, and it is recommended (but not required) to use the same number and type of disks. If the base LUN is populated with data, the restriping process will take time to complete and can impact performance. While the existing base LUN data is available, the additional capacity will not be available until the restriping process completes. Once complete, the I/O flow of the metaLUN is interlaced between the base and component LUNs, thus preserving or increasing the performance profile of the base LUN.
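For reference, a CLI sketch of a striped metaLUN expansion; the switches shown (-base, -lus, -type) are written from memory and should be verified against the Navisphere CLI reference, and the LUN numbers are placeholders:

   # Expand base LUN 10 by striping it with component LUNs 11 and 12
   naviseccli -h <SP_IP> metalun -expand -base 10 -lus 11 12 -type S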


A benefit of the VNX metaLUN feature is its ability to increase the capacity of a classic LUN. A RAID Group is limited to 16 disks maximum, thus the size of a classic LUN is limited to the space provided by 16 disks. MetaLUNs are constructed using multiple classic LUNs which can be created from disks in different RAID Groups and thus avoid the 16 disk capacity limit. VNX metaLUNs provide flexibility and scalability to the storage environment. Another metaLUN benefit is the performance effect of additional disks. With more disks available to the metaLUN, bandwidth to the LUN increases and its I/O throughput can be higher, benefiting metaLUN performance. VNX metaLUNs provide performance adaptability to the storage environment. MetaLUNs are functionally similar to volumes created with host volume managers, but with some important distinctions. To create a volume manager stripe, all component LUNs must be made available to the host, and each will have a unique address. Only a single LUN, with a single address, is presented to the host with metaLUNs. If a volume is to be replicated with VNX replication products (SnapView, VNX Snapshot, MirrorView and SAN Copy), a usable image requires consistent handling of fracture and session start operations on all member LUNs at the same time. MetaLUNs simplify replication by presenting a single object to the replication software. This also makes it easier to share the volume across multiple hosts – an action that volume managers will not allow.

The use of a host striped volume manager has the effect of multithreading requests consisting of more than one volume stripe segment which increases concurrency to the storage system. MetaLUNs have no multithreading effect since the multiplexing of the component LUNs are done on the storage system. VNX metaLUNs provide ease of storage usage and management.


The VNX metaLUN feature does have some guidelines for use. A base LUN can be a regular classic LUN or it can be a metaLUN. A metaLUN can span multiple RAID Groups. When creating a concatenated metaLUN, it is recommended that the base LUN and the component LUNs be of the same RAID type.

As a result of the increase in back-end activity associated with restriping, it is recommended to expand only one LUN per RAID Group at the same time. The host workload and the restriping operation share the same system resources. So a heavy restriping workload will have a performance impact on host storage operations. Likewise, a heavy host storage workload will have an impact on the time it takes to expand a striped metaLUN.


In the systems drop-down list on the menu bar, select a storage system. Right-click the base LUN and select Expand. When the “Expand Storage Wizard Dialog” opens, follow the steps. Another option is from the task list, under Wizards. Select RAID Group LUN Expansion Wizard.

Follow the steps in the wizard, and when available, click the Learn more links for additional information.


The Pool LUN expansion feature is available for both Thick and Thin Pool LUNs. The expanded capacity is immediately available for use by the host. The expansion is done in the same manner for either type of LUN but it allocates physical storage differently. When a Thick Pool LUN is expanded, its expanded size must be available from physical disk space in the pool and is allocated to the LUN during the expansion. When a Thin Pool LUN is expanded, physical disk space from the pool does not get allocated as part of the expansion. It is the in-use capacity that drives the allocation of physical storage to the Thin LUN.
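A hedged sketch of expanding a Pool LUN from the CLI; the LUN number and new capacity are placeholders, and the exact switches should be confirmed against the CLI reference:

   # Grow Pool LUN 42 to 200 GB; for a Thick LUN the pool must have that free capacity available
   naviseccli -h <SP_IP> lun -expand -l 42 -capacity 200 -sq gb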


A benefit of the Pool LUN Expansion feature is its fast, easy, on-line expansion of LUN capacity. A few easy clicks in Unisphere is all it takes to increase the LUN capacity. Another key capability is that the LUN performance is not changed by the capacity expansion. Since its performance is based on the physical storage of the pool it is built from, the performance characteristics of the expanded LUN will stay the same as it was prior to the expansion. Also, the expansion process itself has no performance impact on the LUN.


There are a few guidelines for expanding Pool LUN capacities. A capacity expansion cannot be done on a pool LUN if it is part of a data protection or LUN-migration operation. For a thick LUN expansion, the pool must have enough physical storage space available for the expansion to succeed; whereas, for a thin LUN the physical storage space does not need to be available. The host OS must also support the capacity expansion of the LUN.


This Lab covers the VNX advanced storage features of LUN expansion and migration. In the lab exercise pool-based Thick and Thin LUNs expansions are performed along with a Classic LUN expansion. A LUN migration is also completed.


This lab covered the VNX advanced storage features of LUN expansion and migration. In the exercise Thick, Thin and Classic LUNs were expanded and a LUN migration was performed. Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?


This lesson covers the functionality, benefits and configuration of FAST VP.


VNX FAST VP, or Fully Automated Storage Tiering for Virtual Pools, tracks data in a Pool at a granularity of 256 MB – a slice – and ranks slices according to their level of activity and how recently that activity took place. Slices that are heavily and frequently accessed will be moved to the highest tier of storage, typically Flash drives, while the data that is accessed least will be moved to lower performing, but higher capacity storage – typically NL-SAS drives. This sub-LUN granularity makes the process more efficient, and enhances the benefit achieved from the addition of Flash drives. The ranking process is automatic, and requires no user intervention. When FAST VP is implemented, the storage system measures, analyzes, and implements a dynamic storage-tiering policy in a faster and more efficient way than a human analyst. Relocation of slices occurs according to a schedule which is user-configurable, but which defaults to a daily relocation. Users can also start a manual relocation if desired. FAST VP operations depend on tiers of disks – up to three are allowed, and a minimum of two are needed for meaningful FAST VP operation. The tiers relate to the disk type in use. Note that no distinction is made between 10k rpm and 15k rpm SAS disks, and it is therefore recommended that disk speeds not be mixed in a tier.


FAST VP enables the user to create storage pools with heterogeneous device classes and place the data on the class of devices or tier that is most appropriate for the block of data. Pools allocate and store data in 256 MB slices which can be migrated or relocated, allowing FAST VP to reorganize LUNs onto different tiers of the Pool. This relocation is transparent to the hosts accessing the LUNs. For example, when a LUN is first created it may have a very high read/write workload with I/Os queued to it continuously. The user wants that LUN to have the best response time possible in order to maximize productivity of the process that relies on this storage. Over time, that LUN may become less active or stop being used and another LUN may become the focus of the operation. VNX systems configured with EMC’s FAST VP software would automatically relocate inactive slices to a lower storage tier, freeing up the more expensive storage devices for the newly created and more active slices. The administrator can use FAST VP with LUNs regardless of whether those LUNs are also in use by other VNX software features, such as Data Compression, SnapView, MirrorView, RecoverPoint, and so on. The tiers from highest to lowest are Flash, SAS, and NL-SAS, described in FAST VP as Extreme Performance, Performance, and Capacity respectively. FAST VP differentiates each of the tiers by drive type, but it does not take rotational speed into consideration. EMC strongly recommends the same rotational speeds per drive type in a given pool. FAST VP is not supported for RAID groups because all the disks in a RAID group, unlike those in a Pool, must be of the same type (all Flash, all SAS, or all NL-SAS). The lowest performing disks in a RAID group determine a RAID group’s overall performance.


FAST VP uses a number of mechanisms to optimize performance and efficiency. It removes the need for manual, resource-intensive LUN migrations, while still providing the performance levels required by the most active dataset. Another process that can be performed is the rebalance. Upon the expansion of a storage pool, the system recognizes the newly added space and initiates an auto-tiering data relocation operation. It can lower the Total Cost of Ownership (TCO) and increase performance by intelligently managing data placement. Applications that exhibit skew and have workloads that are fairly stable over time will benefit from the addition of FAST VP. The VNX series of storage systems delivers high value by providing a unified approach to auto-tiering for file and block data. Both block and file data can use virtual pools and FAST VP. This provides compelling value for users who want to optimize the use of high-performance drives across their environment.


During storage pool creation, the user can select RAID protection on a per-tier basis. Each tier has a single RAID type, and once the RAID configuration is set for that tier in the pool, it cannot be changed. The table above shows the RAID configurations that are supported for each tier. The drives used in a Pool can be configured in many ways – supported RAID types are RAID 1/0, RAID 5, and RAID 6. For each of those RAID types, there are recommended configurations. These recommended configurations balance performance, protection, and data efficiency. The configurations shown on the slide are those recommended for the supported RAID types. Note that, though each tier may have a different RAID type, any single tier may have only one RAID type associated with it, and that type cannot be changed once configured.


FAST VP policies are available for storage systems with the FAST VP enabler installed. The policies define if and how data is moved between the storage tiers. Use the "Highest Available Tier" policy when quick response times are a priority. A small portion of a large set of data may be responsible for most of the I/O activity in a system. FAST VP allows for moving a small percentage of the "hot" data to higher tiers while maintaining the rest of the data in the lower tiers. The "Auto Tier" policy automatically relocates data to the most appropriate tier based on the activity level of each data slice. The "Start High, then Auto Tier" policy is the recommended policy for each newly created pool, because it takes advantage of the "Highest Available Tier" and "Auto-Tier" policies. Use the "Lowest Available Tier" policy when cost effectiveness is the highest priority. With this policy, data is initially placed on the lowest available tier with capacity. Users can set all LUN-level policies except the "No Data Movement" policy both during and after LUN creation. The "No Data Movement" policy is only available after LUN creation. If a LUN is configured with this policy, no slices provisioned to the LUN are relocated across tiers.


Unisphere or Navisphere Secure CLI lets the user schedule the days of the week, start time, and durations for data relocation for all participating tiered Pools in the storage system. Unisphere or Navisphere Secure CLI also lets the user initiate a manual data relocation at any time. To ensure that up-to-date statistics and settings are accounted for properly prior to a manual relocation, FAST VP analyzes all statistics gathered independently of its regularly scheduled hourly analysis before starting the relocation. FAST VP scheduling involves defining the timetable and duration to initiate Analysis and Relocation tasks for Pools enabled for tiering. Schedules can be configured to run daily, weekly, or as just a single iteration. A default schedule will be configured when the FAST enabler is installed. Relocation tasks are controlled by a single schedule, and affect all Pools configured for tiering.


The first step to configuring FAST VP is to have a tiered Pool. To create a Heterogeneous Pool select the storage system in the systems drop-down list on the menu bar. Select Storage > Storage Configuration > Storage Pools. In Pools, click Create.


The next step is to configure the Pool with tiers. In the General tab, under Storage Pool Parameters, select Pool. The user can create pools that use multiple RAID types, one RAID type per tier, to satisfy multiple tiering requirements within a pool. To do this, the pool must contain multiple disk types. When creating the pool, select the RAID type for each tier.

For the Extreme Performance tier, there are two types of disks that can be used: FAST Cache optimized Flash drives and FAST VP optimized Flash drives. A RAID Group created by FAST VP can use only one type, though both types can appear in the tier. If both types of drive are present, the drive selection dialog shows them separately. When the user expands an existing pool by adding additional drives, the system selects the same RAID type that was used when the user created the pool. When the user expands an existing pool by adding a new disk type tier, the user needs to select the RAID type that is valid for the new disk type. For example, best practices suggest using RAID 6 for NL-SAS drives, and RAID 6, 5, or 1/0 for other drives. The Tiering Policy selection for the Pool is on the Advanced tab. A drop-down list of tiering policies is available for selection.


There is a default Tiering policy that gets put into place when a Pool is created – it is Start High then Auto-Tier (Recommended). This policy is applied to all LUNs that are created from the Pool. The policy can be adjusted on a per-LUN basis by going to the LUN Properties page and accessing the Tiering tab. The various Tiering Policies are available from a drop-down for selection.


Provided the FAST enabler is present, select the Tiering tab from the Storage Pool Properties window to display the status and configuration options. Scheduled means FAST VP relocation is scheduled for the Pool. Data relocation for the pool will be performed based on the FAST schedule in the Manage Auto-Tiering dialog. If a tier fills to 90% capacity, data will be moved to another tier.

The Relocation Schedule button launches the Manage Auto-Tiering dialog when clicked. Data Relocation Status has several states: Ready means no relocations are in progress for this pool, Relocating means relocations are in progress for this pool, and Paused means relocations are paused for this pool. Data to Move Down is the total amount of data (in GB) to move down from one tier to another; Data to Move Up is the total amount of data (in GB) to move up from one tier to another; Data to Move Within is the amount of data (in GB) that will be relocated inside the tier based on I/O access. Estimated time for data relocation is the estimated time (in hours) required to complete data relocation. Note: If the FAST enabler is not installed, certain information will not be displayed. Tier Details shows information for each tier in the Pool. The example Pool has 2 tiers, SAS (Performance) and NL-SAS (Capacity). Tier Name is the name of the tier assigned by the provider or lower-level software.


The Manage Auto-Tiering option available from Unisphere allows users to view and configure various options. The Data Relocation Rate controls how aggressively all scheduled data relocations will be performed on the system when they occur. This applies to scheduled data relocations. The rate settings are high, medium (default), and low. A low setting has little impact on production I/O, but means that the tiering operations will take longer to make a full pass through all the pools with tiering enabled. The high setting has the opposite effect. Though relocation operations will proceed at a much faster pace, FAST VP will not consume so much of the storage system resources that server I/Os time out. Operations are throttled by the storage system. The Data Relocation Schedule, if enabled, controls the system FAST VP schedule. The schedule controls allow configuring the days of the week, the time of day to start data relocation, and the data relocation duration (hours selection 0-23; minutes selection of 0, 15, 30, and 45, although the field remains editable to accommodate minute values set through the CLI). The default schedule is determined by the provider and will be read by Unisphere. Changes that are applied to the schedule are persistent. The scheduled days use the same start time and duration. When the "Enabled" box is clear (not checked), the scheduling controls are grayed out, and no data relocations are started by the scheduler. Even if the system FAST VP scheduler is disabled, data relocations at the pool level may be manually started.


Unisphere or Navisphere Secure CLI lets the user manage data relocation. The user can initiate a manual data relocation at any time. To ensure that up-to-date statistics and settings are accounted for properly prior to a manual relocation, FAST VP analyzes all statistics gathered independently of its regularly scheduled hourly analysis before starting the relocation.

Data relocation can also be managed with an array-wide scheduler. Relocation tasks controlled with the single array-wide schedule affect all Pools configured for tiering. For Pools existing before the installation of FAST VP, Data Relocation is off by default. For Pools created after the installation of FAST VP, Data Relocation is on by default. These default settings can be changed as needed.
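The equivalent CLI checks and manual start are sketched below; the autotiering switches are written from memory and should be verified against the CLI reference for your release, and the pool name is a placeholder:

   # Show the tiering state, relocation rate, and current schedule
   naviseccli -h <SP_IP> autotiering -info -state -rate -schedule
   # Start a manual relocation on one pool
   naviseccli -h <SP_IP> autotiering -relocation -start -pool <pool_name>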


The Start Data Relocation dialog displays all of the pools that were selected and the action that is about to take place. If FAST is Paused, this dialog will contain a message alerting the user that FAST is in a Paused state and that relocations will resume once FAST is resumed (provided that the selected window for the relocations did not expire in the meantime). If one or more Pools are already actively relocating data, it will be noted in the confirmation message. Data Relocation Rates are High, Medium, and Low. The default setting of the Data Relocation Rate is determined by the Data Relocation Rate defined in the Manage FAST dialog. The default Data Relocation Duration is 8 hours. When the “Stop Data Relocation” menu item is selected, a confirmation dialog is displayed noting all of the pools that were selected and the action that is about to take place. If one or more pools are not actively relocating data, it will be noted in the confirmation message.


The Tiering Summary pane can be configured from the Customize menu on the Dashboard. The icon displays information about the status of tiering. This view is available for all arrays regardless of the FAST enabler. When the FAST enabler is not installed, it will display no FAST data and instead will show the user a message alerting them to the fact that this feature is not supported on this system.

• Relocation Status: Indicates the tiering relocation status. Can be Enabled or Paused.

• Pools with data to be moved: the number of Pools that have data queued up to move between tiers. This is a hot link that takes the user to the Pools table under Storage > Storage Configuration > Storage Pools.

• Scheduled Pools: the number of tiered pools associated with the FAST schedule. This is also a hot link that takes the user to Storage > Storage Configuration > Storage Pools.

• Active Pool Relocations: the number of pools with active data relocations running. This is also a hot link that takes the user to Storage > Storage Configuration > Storage Pools.

Additional information includes the quantity of data to be moved up (GB), the quantity of data to be moved down (GB), the estimated time to perform the relocation, the relocation rate, and data to be moved within a tier if the tier has been expanded.


This lesson covers functionality, benefits, and configuration of EMC FAST Cache.


FAST Cache uses Flash drives to enhance read and write performance for frequently accessed data in specified LUNs. FAST Cache consists of a storage pool of Flash disks configured to function as FAST Cache. The FAST Cache is based on the locality of reference of the data set. A data set with high locality of reference (data areas that are frequently accessed) is a good candidate for FAST Cache. By promoting the data set to the FAST Cache, the storage system services any subsequent requests for this data faster from the Flash disks that make up the FAST Cache; thus, reducing the load on the disks in the LUNs that contain the data (the underlying disks). The data is flushed out of cache when it is no longer accessed as frequently as other data, per the Least Recently Used Algorithm. FAST Cache consists of one or more pairs of mirrored disks (RAID 1) and provides both read and write caching. For reads, the FAST Cache driver copies data off the disks being accessed into the FAST Cache. For writes, FAST Cache effectively buffers the data waiting to be written to disk. In both cases, the workload is off-loaded from slow rotating disks to the faster Flash disks in FAST Cache. FAST Cache operations are non-disruptive to applications and users. It uses internal memory resources and does not place any load on host resources. FAST Cache should be disabled for Write Intent Log (WIL) LUNs or Clone Private LUNs (CPLs). Enabling FAST Cache for these LUNs is a misallocation of the FAST Cache and may reduce the effectiveness of FAST Cache for other LUNs. FAST Cache can be enabled on Classic LUNs and Pools once the FAST Cache enabler is installed.


FAST Cache improves application performance, especially for workloads with frequent and unpredictable large increases in I/O activity. FAST Cache provides low latency and high I/O performance without requiring a large number of Flash disks. It is also expandable while I/O to and from the storage system is occurring. Applications such as File and OLTP (online transaction processing) have data sets that can benefit from FAST Cache. The performance boost provided by FAST Cache varies with the workload and the cache size. Another important benefit is improved total cost of ownership (TCO) of the system. FAST Cache copies the hot or active subsets of data to Flash drives in chunks. Because FAST Cache absorbs many, if not most, of the IOPS, the user can fill the remainder of their storage needs with low-cost, high-capacity disk drives. This ratio of a small amount of Flash paired with a lot of disk offers the best performance ($/IOPS) at the lowest cost ($/GB) with optimal power efficiency (IOPS/kWh). Use FAST Cache and FAST VP together to yield high performance and TCO from the storage system. For example, use FAST Cache optimized Flash drives to create FAST Cache, and use FAST VP for pools consisting of SAS and NL-SAS disk drives. From a performance point of view, FAST Cache provides an immediate performance benefit to bursty data, while FAST VP moves more active data to SAS drives and less active data to NL-SAS drives. From a TCO perspective, FAST Cache can service active data with fewer Flash drives, while FAST VP optimizes disk utilization and efficiency with SAS and NL-SAS drives.


To create FAST Cache, the user needs at least 2 FAST Cache optimized drives in the system, which will be configured in RAID 1 mirrored pairs. Once the enabler is installed, the system uses the Policy Engine and Memory Map components to process and execute FAST Cache.

• Policy Engine – Manages the flow of I/O through FAST Cache. When a chunk of data on a LUN is accessed frequently, it is copied temporarily to FAST Cache (FAST Cache optimized drives). The Policy Engine also maintains statistical information about the data access patterns. The policies defined by the Policy Engine are system-defined and cannot be changed by the user.

• Memory Map – Tracks extent usage and ownership in 64 KB chunks of granularity. The Memory Map maintains information on the state of 64 KB chunks of storage and the contents in FAST Cache. A copy of the Memory Map is stored in DRAM memory, so when the FAST Cache enabler is installed, SP memory is dynamically allocated to the FAST Cache Memory Map. The size of the Memory Map increases linearly with the size of FAST Cache being created. A copy of the Memory Map is also mirrored to the Flash disks to maintain data integrity and high availability of data.


During FAST Cache operations, the application gets the acknowledgement for an I/O operation once it has been serviced by the FAST Cache. FAST Cache algorithms are designed such that the workload is spread evenly across all the Flash drives that have been used for creating FAST Cache. During normal operation, a promotion to FAST Cache is initiated after the Policy Engine determines that a 64 KB block of data is being accessed frequently. To be considered, the 64 KB block of data must be accessed by reads and/or writes multiple times within a short period of time. A FAST Cache flush is the process in which a FAST Cache page is copied to the HDDs and the page is freed for use. The least recently used (LRU) algorithm determines which data blocks to flush to make room for the new promotions. FAST Cache contains a cleaning process which proactively copies dirty pages to the underlying physical devices during times of minimal backend activity.


FAST Cache is created and configured on the system from the System Properties FAST Cache tab page. From the page, click the Create button to start the initializing operation. The Flash drives are then configured for FAST Cache. The user has an option for the system to automatically select the Flash drives to be used by FAST Cache, or the user can manually select the drives. When the initializing operation is complete, the cache state is Enabled. The cache stays in the Enabled state until a failure occurs or the user chooses to destroy the cache. To change the size of FAST Cache after it is configured, the user must destroy and recreate FAST Cache. This requires FAST Cache to flush all dirty pages currently contained in FAST Cache. When FAST Cache is created again, it must repopulate its data (warm-up period). If a sufficient number of Flash drives are not available to enable FAST Cache, Unisphere displays an error message, and FAST Cache cannot be created.
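A hedged CLI alternative to the System Properties dialog; the disk IDs (bus_enclosure_slot) are placeholders and the switches should be checked against the CLI reference for your release:

   # Create FAST Cache as a RAID 1 read/write cache from two Flash drives
   naviseccli -h <SP_IP> cache -fast -create -disks 0_0_4 0_0_5 -mode rw -rtype r_1
   # Confirm the FAST Cache state and size
   naviseccli -h <SP_IP> cache -fast -info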


The FAST Cache option will only be available if the FAST Cache enabler is installed on the storage system. When a Classic LUN is created, as shown in the example on the top left, FAST Cache is enabled by default (as is Read and Write Cache). If the Classic LUN has already been created as shown in the example on the bottom left, and FAST Cache has not been enabled for the LUN, the Cache tab in the LUN Properties window can be used to configure FAST Cache. Note that checking the Enable Caching checkbox checks all boxes below it (SP Read Cache, SP Write Cache, FAST Cache). Enabling FAST Cache for Pool LUNs differs from that of Classic LUNs in that FAST Cache is configured at the Pool level only, as shown in the examples on the right. In other words, all LUNs created in the Pool will have FAST Cache enabled or disabled collectively depending on the state of the FAST Cache Enabled box. The FAST Cache Enabled box will be enabled by default if the FAST Cache enabler was installed before the Pool was created. If the Pool was created prior to installing the FAST Cache enabler, FAST Cache is disabled on the Pool by default. To enable FAST Cache on the Pool, launch the Storage Pool Properties window and select the Enabled box under FAST Cache as shown in the example on the bottom right.


The FAST Cache enabler is required to be installed on the VNX for the feature to be available. Once installed, the VNX needs to have FAST Cache optimized Flash drives installed and configured as RAID 1 mirrored pairs. FAST VP drives cannot be used for FAST Cache. FAST Cache is configured on Classic LUNs individually. FAST Cache is enabled by default at the Pool level for Pool LUNs. All LUNs created from the Pool will have FAST Cache enabled on them. If the FAST Cache enabler was installed after the Pool was created FAST Cache is disabled by default. Likewise, if a Classic LUN was created prior to the FAST Cache enabler being installed, the Classic LUN will have FAST Cache disabled by default. FAST Cache should be disabled for Write Intent Log (WIL) LUNs or Clone Private LUNs (CPLs). Enabling FAST Cache for these LUNs is a misallocation of the FAST Cache and may reduce the effectiveness of FAST Cache for other LUNs.


This table shows the FAST Cache maximum configuration options. The maximum FAST Cache size in the last column depends on the drive count and on the Flash disk capacity in the second column. For example, a VNX5400 can have up to 10 drives of 100 GB or up to 10 drives of 200 GB.


This lesson covers the space efficiency features of Block Deduplication and Block Compression. It provides a functional overview and the architecture of each feature as well as the storage environments that are suited for each of them. The enablement and management of the features are detailed and their guidelines and limits are examined.


VNX Block Deduplication and Block Compression are optional storage efficiency software features for VNX Block storage systems. They are available to the array via specific enablers: a Deduplication enabler and a Compression enabler. The features cannot be enabled on the same LUNs, as they are mutually exclusive on a per-LUN basis. If Block Compression is enabled on a LUN it cannot also have Block Deduplication enabled. Conversely, if Block Deduplication is enabled on a LUN it cannot also have Block Compression enabled.

In general, Block Deduplication uses a hash digest process for identifying duplicate data contained within Pool LUNs and consolidating it in such a way that only one actual copy of the data is used by many sources. This feature can result in significant space savings depending on the nature of the data. VNX Block Deduplication utilizes a fixed-block deduplication method with a set size of 8 KB to remove redundant data from a dataset. Block Deduplication is run post-process on the selected dataset. Deduplication is performed within a Storage Pool for either Thick or Thin Pool LUNs, with the resultant deduplicated LUN being a Thin LUN. As duplicate data is identified, if a 256 MB pool slice is freed up, the free space of the slice is returned to the Storage Pool. Block Deduplication cannot be directly enabled on Classic LUNs. A manual migration of the Classic LUN can be performed to a Thin LUN, then Deduplication can be enabled on the Thin LUN. For applications requiring consistent and predictable performance, EMC recommends using Thick LUNs. If Thin LUN performance is not acceptable, then do not use Block Deduplication.

In general, Block Compression uses a compression algorithm that attempts to reduce the total space used by a dataset. VNX Block Compression works in 64 KB chunk increments to reduce the storage footprint of a dataset by at least 8 KB and provide savings to the user. Compression is not done on a chunk if the space savings are less than 8 KB. If a 256 MB pool slice is freed up by compression, the free space of the slice is returned to the Storage Pool. Because accessing compressed data may cause a decompression operation before the I/O is completed, compression is not recommended for active datasets.

The VNX Block Deduplication feature operates at the Storage Pool level. Non-deduplicated and deduplicated LUNs can coexist within the same pool. The deduplication architecture uses a Deduplication Container, which is private space within a Storage Pool. There is only one deduplication container per pool. The container holds all the data for the deduplication-enabled LUNs within the specific pool. The container is created automatically when deduplication is enabled on a pool LUN and, conversely, is destroyed automatically when deduplication is disabled on the last LUN or when that LUN is deleted. Existing LUNs are migrated to the container when deduplication is enabled on them. When a LUN is created with deduplication enabled, the LUN is created directly in the container.

Because deduplication is an SP process that uses hashing to detect duplicate 8 KB blocks on LUNs within the pool, LUN SP ownership is critical to the feature's performance. The container SP Allocation Owner is determined by the SP Default Owner of the first LUN in the container. To avoid deduplication performance issues, it is critical to use a common SP Default Owner for all the LUNs within the pool that are deduplication-enabled. This will result in the container SP Allocation Owner matching the SP Default Owner for the specific pool's deduplicated LUNs. If LUNs from multiple pools are deduplication-enabled, it is recommended to balance the multiple deduplication containers between the SPs.

The deduplication process runs against each deduplication container as a background task 12 hours after its last session completed. Each SP can run three concurrent container sessions; other sessions needing to run are queued. If a session runs for four hours straight, the session is paused and the first queued session starts. The session checks the container for 64 GB of new or updated data; if it exists, the session runs a hash digest on each 8 KB block of the 64 GB of data to identify duplicate block candidates. Candidate blocks are then compared bit by bit to verify the data is exactly the same. The oldest identical block is kept and duplicate blocks are removed (evacuated from the container). The deduplication container uses a Virtual Block Map (VBM) to index the removed duplicate blocks to the single saved instance of the block. Any freed pool slices are returned to the pool. If a session starts and there is less than 64 GB of new or updated data, only the hash digest portion of the process is run to identify duplicate candidates, no duplicate data is removed, and the session-complete timer is reset.
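
To make the identify-and-verify step concrete, here is a minimal Python sketch of one deduplication pass. It is purely conceptual: the digest algorithm (SHA-1 here), the in-memory data structures, and the function name are assumptions for illustration; the real process runs inside the SP and records its results in VBM pointers rather than Python objects.

```python
import hashlib
from collections import defaultdict

BLOCK_SIZE = 8 * 1024  # deduplication granularity is a fixed 8 KB block

def dedupe_pass(blocks):
    """Conceptual sketch of one deduplication pass.

    'blocks' is a list of (timestamp, data) tuples representing the 8 KB
    blocks of new or updated data found in the deduplication container.
    Returns a mapping of duplicate block index -> kept block index.
    """
    candidates = defaultdict(list)
    for idx, (ts, data) in enumerate(blocks):
        digest = hashlib.sha1(data).hexdigest()   # hash digest identifies candidates
        candidates[digest].append(idx)

    remap = {}
    for digest, idxs in candidates.items():
        if len(idxs) < 2:
            continue
        idxs.sort(key=lambda i: blocks[i][0])     # the oldest block is the one kept
        keeper = idxs[0]
        for dup in idxs[1:]:
            # bit-for-bit comparison guards against hash collisions
            if blocks[dup][1] == blocks[keeper][1]:
                remap[dup] = keeper               # duplicate is evacuated, pointer redirected
    return remap
```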

The VNX Block Compression feature can be used on Thick and Thin Pool LUNs and on Classic RAID Group LUNs. Compressed Pool LUNs remain in the same Storage Pool; a Thick LUN is migrated to a Thin LUN by the compression process. When compression is enabled on a Classic LUN, compression migrates the LUN to a Thin LUN, and the operator must select a Storage Pool with enough capacity to receive the migrated LUN. For a Classic LUN the compression is done in-line during the migration to a Thin LUN. The compression process operates on 64 KB data chunks on the LUN. It will only compress a chunk if a space savings of 8 KB or more can be realized; it will not modify any data chunk if less than 8 KB of space savings would result. Compression runs continuously on compression-enabled LUNs, and it can be manually paused by the operator. The rate of compression for a LUN is also selectable between High, Medium, and Low; the default value is Medium. This setting is not a level of compression for the data but rather the rate at which compression runs on the data. A Low rate can be selected when response-time-critical applications are running on the storage system. As data compression frees 256 MB pool slices, that space is returned to the pool for its use.
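
The per-chunk decision can be illustrated with a short sketch. This is conceptual only; zlib stands in for the array's internal compression algorithm, which is not documented here, and the constants simply encode the 64 KB chunk size and 8 KB savings threshold described above.

```python
import zlib

CHUNK_SIZE = 64 * 1024   # compression works on 64 KB chunks
MIN_SAVINGS = 8 * 1024   # a chunk is only stored compressed if it saves at least 8 KB

def compress_chunk(chunk: bytes) -> bytes:
    """Return the compressed form of a 64 KB chunk, or the original chunk
    unchanged if compressing it would save less than 8 KB."""
    compressed = zlib.compress(chunk)
    if len(chunk) - len(compressed) >= MIN_SAVINGS:
        return compressed    # worthwhile: at least 8 KB saved
    return chunk             # not worthwhile: leave the chunk uncompressed
```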

VNX Block space efficiency features are best suited for storage environments that require space efficiency combined with a high degree of availability. Both Deduplication and Compression use Thin LUNs to reclaim saved space to the pool, so their use is only suited for environments where Thin LUN performance is acceptable. The features work best in environments where data is static and can therefore best leverage the features' storage efficiencies. Block Deduplication is well suited for environments that store large amounts of duplicate data and that do not experience more than 30% write I/Os. Avoid environments that have large amounts of unique data, as they will not benefit from the space savings the feature provides. If the environment is more than 30% write active, that will tend to drive a constant cycle of undoing and redoing the deduplication. Also avoid environments where sequential or large-block I/Os are present. Block Compression is very suitable for data archive environments. Avoid compression in time-sensitive application environments, because when compressed data is read, it has to be decompressed inline and that affects the individual I/O thread performance. Also avoid environments where data is active: if compressed data is written to, it first must be decompressed and written back in uncompressed form, thus consuming space in the LUN.

When creating a Pool LUN, the user is given the option of enabling VNX Block Deduplication at the time of creation. Notice that the Thin checkbox is also enabled, since Deduplicated LUNs are Thin LUNs by definition. From the Advanced tab the SP Default Owner of the LUN can be selected. If this is the first LUN from the pool to be deduplication-enabled, the same SP will be the Deduplication Container’s Allocation Owner. A warning message will be displayed for creating a deduplication-enabled LUN that has an SP Default Owner that does not match the pool Deduplication Container Allocation Owner. In the example shown, the Block Pool already contains a deduplication enabled LUN having an SPB Default Owner and the pool’s Deduplication Container Allocation Owner is SPB. The warning message alerts the operator that selecting SPA as a Default Owner of the LUN will cause a performance impact. Therefore the operator should select SPB as the Default Owner for the LUN to match the existing SP Allocation Owner of the container.

Deduplication can be enabled on an existing pool LUN by going to the LUNs page in Unisphere and right-clicking the LUN to select the Deduplication option from the drop-down selection. It can also be enabled from the Deduplication tab of the LUN Properties page. If the existing LUN SP Default Owner does not match the pool Deduplication Container SP Allocation Owner, a warning message is displayed showing the operator the Optimal SP Owner and recommending changing the SP Default Owner. If the LUN uses a feature that is not supported, such as VNX Snapshots, the user receives a message explaining how the system will proceed. Deduplication for the LUN can also be turned off from the LUNs page or the LUN Properties page.

The state of Block Deduplication for a Storage Pool can be viewed on the Deduplication tab of the Storage Pool Properties page. If deduplication is running, a percentage complete and the remaining space are shown. Deduplication on the pool can be Paused or Resumed, the Tiering policy can be set, and the Deduplication Rate can be set to Low, Medium (default), or High. The page also displays the amount of space that is shared between the deduplicated LUNs, including VNX Snapshots, as well as an estimate of the capacity saved for deduplicated LUNs and VNX Snapshots.

To enable Block Compression on a pool LUN, from the Unisphere LUNs page select the LUN and go to its Properties page. On the Compression tab, check the Turn On Compression option. The compression Rate of Low, Medium (default), or High can also be selected. Once Compression is enabled, it can be Paused from the same location. The slide illustrates the Compression tab for a Thin LUN and a Thick LUN; notice the difference in Consumed Capacity. Enabling Compression on the Thick LUN will cause it to be migrated to a Thin LUN, resulting in less Consumed Capacity. Additional space savings will be realized by the compression of data on the LUN as well.

To enable Block Compression on a Classic LUN, access its Properties page from Unisphere, select the Compression tab, and click the Turn On Compression button. The system will migrate the Classic LUN to a pool Thin LUN and displays a window for the user to select an existing pool with sufficient capacity, or allows one to be created.

Block Deduplication has some feature interoperability guidelines. They are listed on the table and are continued on the next slide.

This slide continues the feature interoperability guidelines for Block Deduplication.

Block Compression has some feature interoperability guidelines. They are listed on the table and are continued on the next slide.

This slide continues the feature interoperability guidelines for Block Compression. It also displays a table detailing Compression operation limits by VNX array model.

This lesson covers the Data-At-Rest Encryption (D@RE) advanced storage feature. It describes the feature’s benefits and its guidelines and considerations. It also details activating the feature for use in the VNX.

The Data-At-Rest Encryption (D@RE) feature secures user data on VNX disk drives through strong encryption. If a drive is physically stolen from the VNX system, the user data is unreadable. The data is encrypted and decrypted by embedded encryption hardware in the SAS controller. D@RE issues a unique encryption key for each drive that is configured in a Pool or RAID Group. The encryption happens on the direct I/O path between the SAS controller and the disk and is transparent to all upper-level data operations. The hardware encrypts and decrypts at near line speed with a negligible performance impact. Since the SAS controller hardware performs all the encryption, all VNX disk drive types are supported. VNX D@RE requires no special disk hardware, unlike other data protection solutions which use self-encrypting drives (SEDs). The D@RE feature is provided by the DataAtRestEncryption enabler and is installed on all new VNX systems shipped from manufacturing. The enabler is available as an NDU to upgrade currently deployed VNX systems. A separate activation step is required to start the encryption of user data on the drives. If the VNX already contains unencrypted data, the activation process will encrypt the existing data as well as all new data.

The design objective of D@RE is to secure data stored on the VNX disks in the event of physical theft. Some D@RE benefits are its ability to encrypt all data stored on the VNX. This includes both File and Block data. It will also encrypt any existing data on the VNX and does this with minimal performance impact. Because the encryption is done on the direct I/O path from the SAS controller to the disk drive, all VNX storage features are unaffected and are supported. The feature uses the existing SAS controller hardware of the VNX with MCx systems so there is no special disk drive hardware needed. The feature works with the existing supported VNX disk drives, all types (Flash, SAS and NL-SAS) and all vendors.

The D@RE feature does have some guidelines and considerations. Before activating D@RE, all FAST Cache LUNs need to be destroyed. This is required so that the data held in FAST Cache is written back to disk and can thus be encrypted. Encrypting existing data is time consuming. This is due to a design choice to limit encryption of existing data to 5% of available bandwidth and preserve the rest of the bandwidth for host I/O workloads. For systems containing a large amount of data, the encryption of existing data can take tens of days or more. The D@RE keystore contains all of the keys used to encrypt each drive, and six copies of it are stored on a system private LUN that is protected by a 2 x 3 mirror. Each time a disk is configured into either a Pool or a RAID Group, the system alerts the operator to perform a keystore backup. The backup requires operator intervention and should be stored off the VNX in the unlikely event of keystore loss or corruption. Should a keystore recovery be needed, a support engagement will be required. Similarly, only support can revert the SAS I/O modules to an unencrypted state.
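
A rough back-of-the-envelope estimate shows why encrypting existing data can take tens of days. The 5% figure comes from the text above; the back-end bandwidth value in the example is purely an assumption and must be replaced with a figure appropriate to the actual configuration.

```python
def encryption_days(existing_tb, backend_mb_per_s):
    """Rough estimate of how long background encryption of existing data
    takes, given that D@RE limits itself to about 5% of available
    bandwidth. 'backend_mb_per_s' (aggregate back-end throughput) is an
    assumed input, not a documented VNX figure."""
    effective_mb_per_s = backend_mb_per_s * 0.05
    seconds = (existing_tb * 1024 * 1024) / effective_mb_per_s
    return seconds / 86400

# Example: 100 TB of existing data and an assumed 2,000 MB/s of back-end
# bandwidth work out to roughly 12 days, which is why "tens of days" is a
# realistic expectation for large systems.
print(round(encryption_days(100, 2000), 1))
```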

To activate D@RE, its wizard must be used; it is available for selection from the Wizards task pane. The wizard screens are shown and display a caution message that, once activated, D@RE is irreversible.

From the System Properties page, access the Encryption tab to see the status of D@RE. The feature is active and the encryption of existing data is ongoing; it can take some time to complete. The SPs will need to be rebooted one at a time, in any order, to complete enabling the D@RE feature on the system. Make sure the first SP has rebooted and is fully back online prior to rebooting the second SP. The SPs can be rebooted when the encryption status is "In Progress", "Scrubbing", or "Encrypted".

The keystore backup operation is selectable from the System Management section of the task pane. The keystore backup is a manual operation and should be done upon activating D@RE and each time a drive is added to a Pool or RAID Group, since D@RE will issue an encryption key for the new drive. Back up the keystore to a location off the VNX. This precaution is recommended should the existing keystore be lost or corrupted. Without a keystore, all data on a D@RE-activated VNX becomes unavailable until a keystore recovery operation is completed by EMC support.

This module covered the advanced storage features of LUN Migration, LUN Expansion, FAST VP, FAST Cache, storage efficiencies (Block Deduplication and Block Compression), and D@RE. Their functionality was described, the benefits identified, and guidelines for operation were listed. It also provided the configuration steps for the FAST VP, FAST Cache and D@RE features.

This module focuses on the theory of operation and the management of the VNX Local Replication options for Block: SnapView Snapshots, SnapView Clones, and VNX Snapshots.

This lesson covers the SnapView Snapshots local block replication feature.

With the SnapView feature, a source LUN, a SnapView session, and a reserved LUN work together to create a SnapView snapshot that captures a point-in-time data state of a source LUN. The SnapView snapshot is made visible to a secondary host when it is activated to a SnapView session (given that a SnapView snapshot has been defined and has been added to a storage group connected to the secondary host). VNX SnapView snapshots are a composite view of a LUN which represents a point in time, and not the actual LUN itself. As a result, creating a Snapshot and starting a session is a very quick process, requiring only a few seconds. The view that is then presented to the secondary host is a frozen representation of the source LUN as the primary host saw it at the time the session was started. The SnapView snapshot is writable by the secondary host, but any changes made to it are discarded if the SnapView snapshot is deactivated.

A SnapView snapshot is a composite of the unchanged data chunks on the source LUN and data chunks on a LUN called the Reserved LUN. Before chunks of data are written to the source LUN, they are copied to a reserved area in private space, and a memory map is updated with the new location of these chunks. This process is referred to as Copy on First Write (COFW). The COFW mechanism uses pointers to track whether the data is on the source LUN or in the Reserved LUN Pool. These pointers are kept in SP memory, which is volatile, and could therefore be lost if the SP should fail or if the LUN is trespassed. A SnapView feature designed to prevent this loss of session metadata is session persistence, which stores the pointers on the Reserved LUN(s) for the session. All sessions are automatically persistent and the user cannot turn off persistence.

The SnapView snapshot can be made accessible to a secondary host, but not to the primary host (unless software that allows simultaneous access, like EMC Replication Manager, is used).

The table above lists the hardware and software requirements for SnapView snapshots. A VNX is the required hardware for using SnapView snapshots. If a host is to access the SnapView snapshot, two or more hosts are required: one primary host to access the VNX source LUN and one or more additional secondary hosts to access the SnapView snapshot of the source LUN. The Admsnap program runs on the host system in conjunction with SnapView running on the EMC VNX storage processors (SPs), and allows the user to start, activate, deactivate, and stop SnapView sessions. The Admsnap utility is an executable command-line program that the user can run interactively or from a script. This utility ships with the SnapView enabler.

Having a LUN marked as a source LUN is a necessary part of the SnapView procedure, but it is not all that is required. To start the tracking mechanism and create a virtual copy which has the potential to be seen by a host, the user needs to start a SnapView session. A SnapView session is associated with a SnapView Snapshot, and both are associated with a unique source LUN. SnapView sessions are identified by a session name which identifies the session in a meaningful way; an example might be 'Drive_G_8am'. These names may be up to 64 characters long. Remember that utilities such as admsnap make use of those names, often as part of a host script, and that the host operating system may not allow certain characters to be used; use alphanumerics and underscores for the names.

SnapView supports a consistent session, which is a single session that includes multiple source LUNs. All updates to the source LUNs are delayed until the session has started on all source LUNs. Once the session has started, updates are allowed to continue. The consistent start of a SnapView session allows it to be started at the same point in time on multiple source LUNs. If a consistent start is not used in a multi-source LUN session, it is possible that updates to one or more source LUNs can take place between the time that the session starts on the first source LUN and the time that the session starts on the last source LUN. This causes inconsistency in the data on the set of LUNs. The user can also ensure consistency by quiescing the application, but this is unacceptable in many environments. A consistent session can only be started if it can be started on all source LUNs, and it will fail if a session on any of the source LUNs fails to start. This ensures consistency across the source LUNs. While the session is active (started), no further source LUNs can be added to the session. The Navisphere Secure CLI command allows multiple source LUNs to be listed on the same command line. When the consistent start is initiated, updates to all source LUNs are held until the session has started on all source LUNs. This has the same effect as a quiesce of the I/O to the source LUNs, but is performed on the storage system rather than the host.
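
The all-or-nothing behavior of a consistent start can be sketched as follows. Everything here is illustrative (the class, method names, and error handling are not SnapView APIs); it only models the idea that updates are held, the session must start on every source LUN, and a failure on any LUN aborts the whole start.

```python
class SourceLUN:
    """Minimal stand-in for a SnapView source LUN (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.writes_held = False
        self.sessions = set()

    def hold_writes(self):          self.writes_held = True
    def resume_writes(self):        self.writes_held = False
    def start_session(self, name):  self.sessions.add(name)
    def stop_session(self, name):   self.sessions.discard(name)

def consistent_start(session_name, source_luns):
    """Hold updates on every source LUN, start the session on each one, and
    abort the whole start if any single LUN fails, so the session stays
    write-order consistent across the set."""
    for lun in source_luns:
        lun.hold_writes()                  # array-side quiesce, not a host quiesce
    started = []
    try:
        for lun in source_luns:
            lun.start_session(session_name)
            started.append(lun)
    except Exception:
        for lun in started:                # all-or-nothing: undo any partial starts
            lun.stop_session(session_name)
        raise
    finally:
        for lun in source_luns:
            lun.resume_writes()            # updates continue once the start completes

consistent_start('Drive_G_8am', [SourceLUN('LUN_10'), SourceLUN('LUN_11')])
```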

Sessions may be stopped by the administrator or may be stopped automatically by the system. The latter usually implies that something has gone wrong with the storage system or that the reserved LUN Pool has filled.

Let’s look at the administrative termination first. An administrator may choose to stop the session at any time. Usually the Snapshot is deactivated first, and host buffers flushed, to prevent error messages on the host. Once that is done, the session may be ended. Ending a session removes all metadata in SP memory associated with that session and is an irreversible event. The resources that were used by that session, such as reserved LUN pool space, are all freed up for reuse. SnapView sessions are stopped automatically by the software if the reserved LUN pool fills. The session that caused the LUN pool to fill is terminated. If multiple sessions are running on a single source LUN, all sessions that use the chunk that caused the overflow will be terminated. A SnapView session Rollback operation restores the persistent point-in-time data state of the session to the source LUN(s). The rollback operation is valuable to recover from data changes as a result of operator error, for example if data was deleted by mistake.

Due to the dynamic nature of reserved LUN assignment per source LUN, it may be better to have many smaller LUNs that can be used as a pool of individual resources. The total number of reserved LUNs allowed varies by storage system model.

Each reserved LUN can be a different size, and allocation to source LUNs is based on which is the next available reserved LUN, without regard to size. This means that there is no mechanism to ensure that a specified reserved LUN will be allocated to a specified source LUN. Because of the dynamic nature of the SnapView environment, assignment may be regarded as a random event (though, in fact, there are rules governing the assignment of reserved LUNs). The Reserved LUN Pool can be configured with thick pool LUNs or classic LUNs only. Pool LUNs that are created as Thin LUNs cannot be used in the RLP. For performance reasons it is a best practice recommendation to create write-cache enabled LUNs from SAS drives for the RLP LUNs. The combination of these factors makes the sizing of the reserved LUN pool a non-trivial task – particularly when Incremental SAN Copy and MirrorView/A are used along with Snapshots. It is expected that 10% of the data on the source LUN changes while the session is active. Creating 2 RLs per Source LUN allows for a safety margin - it allows twice the expected size, for a total of 20%. This example shows a total of 160 GB to be snapped, with eight reserved LUNs totaling 32 GB.
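
The sizing rule of thumb used in the example can be expressed directly; the 10% change-rate estimate and the two-reserved-LUNs-per-source safety factor are the assumptions stated above, not fixed requirements.

```python
def reserved_lun_pool_size_gb(source_gb_total, change_rate=0.10, safety_factor=2):
    """Rule-of-thumb Reserved LUN Pool sizing: expected change rate times a
    safety factor (two reserved LUNs per source LUN doubles the 10% estimate
    to 20% of the snapped capacity)."""
    return source_gb_total * change_rate * safety_factor

# 160 GB of source LUNs to be snapped -> 32 GB of reserved LUN capacity,
# for example eight reserved LUNs of 4 GB each.
print(reserved_lun_pool_size_gb(160))   # 32.0
```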

This slide shows how the SnapView may be invoked to process I/O sent to a source LUN. The processing is needed only if the host performs a write to a source LUN with an active session. Then, depending on whether or not the chunk is already in the Reserved LUN Pool, it may need to be copied there before the write proceeds.
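
The decision SnapView makes on each such write can be sketched as follows. The dictionaries stand in for the source LUN, the Reserved LUN Pool, and the SP-memory map; the 64 KB chunk size is the SnapView tracking granularity.

```python
CHUNK_SIZE = 64 * 1024   # SnapView tracks source LUNs in 64 KB chunks

def handle_source_write(chunk_id, new_data, source, reserved_pool, chunk_map):
    """Copy-on-first-write sketch: if the chunk's original data has not yet
    been preserved in the Reserved LUN Pool, copy it there and record its
    location before the host write is allowed to proceed."""
    if chunk_id not in chunk_map:                    # first write to this chunk
        reserved_pool[chunk_id] = source[chunk_id]   # preserve the original 64 KB chunk
        chunk_map[chunk_id] = ('reserved_pool', chunk_id)
    source[chunk_id] = new_data                      # the host write now proceeds

# Minimal usage example with dictionaries standing in for LUN storage.
source, rlp, cmap = {0: b'original'}, {}, {}
handle_source_write(0, b'new data', source, rlp, cmap)
print(rlp[0], source[0])   # b'original' b'new data'
```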

Here we see how several different secondary host I/Os are directed to a Snapshot. Note that if there is no active session on the Snapshot, it appears off-line to the secondary host, and the host operating system raises an error if an attempt is made to access it.

If a session is active, SnapView needs to perform additional processing. Reads may require data to be read from the reserved LUN pool or the source LUN, and the driver needs to consult the memory map to determine where the data chunks are located and retrieve them. Writes to a Snapshot are always directed at the Reserved LUN Pool because the secondary host has no access to the source LUN. SnapView needs to determine whether the data chunk is already in the Reserved LUN Pool or not. If it is, the write proceeds to the data chunk. If it is not, the original data chunk is copied from the source LUN to the Reserved LUN Pool and is kept there so SnapView can preserve the point-in-time data state of the session. The secondary host write is then performed on the data chunk and it is stored as a separate data chunk on the Reserved LUN Pool.

The slide shows a source LUN being used by the Production Server. To use SnapView snapshots, the user must create a Reserved LUN Pool. This LUN pool needs enough available space to hold all the original chunks on the source LUN that are likely to change while the session is active. When the user starts the first session on a source LUN, one reserved LUN is assigned (allocated) to that source LUN. If the reserved LUN becomes full during the time this session is running, the next available reserved LUN will be assigned automatically to the source LUN. When the session is started, the COFW mechanism is enabled and the SnapView starts tracking the source LUN.

The next step is to create the SnapView snapshot. Creating the SnapView snapshot enables the allocation of an offline device (a virtual LUN) to a storage group. As shown on the slide, the user creates the SnapView snapshot, which remains offline until it is activated to a running SnapView session. In the slide above, even though the Snapshot of the Source LUN is added to the Storage Group of Server B, the device is still offline (Not Ready) because the SnapView Snapshot is not activated yet.

Once the SnapView session and the SnapView snapshot are created for a given source LUN, the user can activate the SnapView snapshot. This action essentially associates a SnapView snapshot with the point-in-time view provided by the SnapView session. If the SnapView snapshot is already in a storage group and allocated to a host, then following activation the connected host should be able to see this point-in-time copy of the source LUN data after a bus rescan at the host level. If the secondary host requests a read, SnapView first determines whether the required data is on the source LUN (i.e., has not been modified since the session started) or in the reserved LUN Pool, and fetches it from the relevant location.

This slide demonstrates the COFW process, invoked when a host changes a source LUN chunk for the first time. The original chunk is copied to the reserved LUN Pool.

After the copy of the original chunk to the reserved LUN Pool, pointers are updated to indicate that the chunk is now present in the reserved LUN Pool. The map in SP memory, and the map on disk (remember that all sessions are persistent), is also updated.

Note that once a chunk has been copied to the reserved LUN Pool, further changes made to that chunk on the source LUN (for the specific session) do not initiate any COFW operations for that session.

SnapView Snapshots are writeable by the secondary host. This slide shows the write operation from Server B. The write operation addresses a chunk whose original data is not yet in the reserved LUN Pool. SnapView copies the chunk from the source LUN to the reserved LUN Pool in an operation which may be thought of as a Copy on First Write. The copy of the data visible to Server B (the copy in the RLP) is then modified by the write.

After the modification of the chunk, the map and pointers are updated.

This Lab covers the VNX SnapView local block replication feature. The exercise starts with verifying the preconfigured Reserved LUN Pool needed for SnapView is present. Then a SnapView Snapshot is created and the persistence of a SnapView session is tested. The SnapView Rollback feature is also tested. Finally, a Consistent SnapView session is started and tested.

This lab covered VNX SnapView Snapshot local replication. The Reserved LUN Pool was verified, a SnapView Snapshot was created, and its session persistence was tested. A Rollback operation was performed, and a Consistent Session was started and tested.

Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?

This lesson covers SnapView Clone and its operations.

Unlike SnapView Snapshots, clones are full copies of the source LUN. Since clones allow synchronization in both directions, the clone must be the same size as the Source LUN. Replication software which allows only one-way copies, such as SAN Copy, does not have this restriction. Clones provide users with the ability to create fully populated point-in-time copies of LUNs within a single storage system. Clones are packaged with SnapView and expand SnapView functionality by providing the option to have fully-populated copies (as well as the pointer-based copies of Snapshots). For users familiar with MirrorView, clones can be thought of as mirrors within arrays, as opposed to across arrays. Clones have additional functionality, however, in that they offer the ability to choose which direction the synchronization is to go between source LUN and clone. Clones are also available for read and write access when fractured, unlike secondary mirrors, which have to be promoted, or made accessible via a Snapshot or a clone to allow for data access.

Since clones are fully-populated copies of data, they are highly available and can withstand SP or VNX reboots or failures, as well as path failures (provided PowerPath is installed and properly configured). It should be noted that clones are designed for users who want to be able to periodically fracture the LUN copy and then synchronize or reverse synchronize the copy. Users who simply want a mirrored copy for protection of production data would implement RAID 1 or RAID 1/0 LUNs.

Since clones use MirrorView type technology, the rules for image sizing are the same – source LUNs and their clones must be exactly the same size. This slide shows operations which may be performed on clones.

The first step is the creation of a Clone Group, which consists of a source LUN and 0 to 8 clones. This operation is not allowed if the Clone Private LUNs (CPLs), discussed later, have not been allocated. Once a Clone Group exists, clones may be added to it. Those clones may then be synchronized and reverse synchronized as desired. A clone which is synchronized may be fractured. This stops writes to the source LUN from being copied to the clone, but maintains the relationship between source LUN and clone. A fractured clone may be made available to a secondary host. A set of clones may be fractured at the same time to ensure data consistency. In that case, updates from the source LUNs to the clones are stopped at the same time and the clones are then fractured. Note that there is no concept of a ‘consistency group’. Clones are managed individually after being consistently fractured.

Removal of a clone from a Clone Group turns it back into an ordinary LUN and permanently removes the relationship between the source LUN and that clone. Data on the Clone LUN is not affected; the ability to use the LUN for synchronization or reverse synchronization operations is lost, however. Destroying a Clone Group removes the ability to perform any clone operations on the source LUN.

This slide shows the initial synchronization of the clone. Synchronization is the process of copying data from the source LUN to the clone. Upon creating the association of a clone with a particular source, this translates to a full synchronization: all extents (regions) on the source LUN are copied to the clone to provide a completely redundant replica. Subsequent synchronizations involve copying only the data that has changed on the source since the previous synchronization, overwriting any writes that have occurred directly to the clone from any secondary server that had been accessing it while the clone was fractured. It is essentially an update for the clone. Once synchronized with the incremental updates from the source LUN, the clone is ready to be fractured again to maintain the relevant point-in-time reference. Source LUN access is allowed during synchronization. The clone, however, is inaccessible during synchronization, and attempted host I/Os are rejected.

The Clone Private LUN contains the Fracture Log, which allows for incremental resynchronization of data. This reduces the time taken to resynchronize and allows customers to better utilize clone functionality. The term extent, mentioned in the previous slide and above, is the granularity at which changes are tracked. This granularity depends on the size of the source LUN. The extent size is 1 block for each GB of source LUN size, with a minimum size of 128 KB. This means that up to a source LUN size of 256 GB, the extent size will be 128 KB. A source LUN of 512 GB will therefore have an extent size of 512 blocks = 256 KB. Because the Fracture Log is stored on disk in the Clone Private LUN, it is persistent and can withstand SP reboots or failures and storage system failures or power failures. This allows customers to benefit from the incremental resynchronization feature, even in the case of a complete system failure. A Clone Private LUN is a Classic LUN of at least 1 GB that is allocated to an SP and must be created before any other clone operations can commence. Note that any space above the required minimum is not used by SnapView.
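
Since a block is 512 bytes, the extent-size rule can be computed directly; this is a minimal sketch of the stated rule, with the function name chosen purely for illustration.

```python
BLOCK_BYTES = 512      # one 512-byte block per GB of source LUN capacity
MIN_EXTENT_KB = 128    # the extent size never drops below 128 KB

def fracture_log_extent_kb(source_lun_gb):
    """Extent size used by the fracture log: one block per GB of source LUN
    capacity, with a 128 KB floor."""
    extent_kb = (source_lun_gb * BLOCK_BYTES) / 1024
    return max(extent_kb, MIN_EXTENT_KB)

print(fracture_log_extent_kb(256))   # 128.0 KB (the minimum still applies)
print(fracture_log_extent_kb(512))   # 256.0 KB (512 blocks of 512 bytes)
```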

Reverse synchronization allows clone content to be copied from the clone to the source LUN after the clone has been initially synchronized. SnapView implements Instant Restore, a feature which allows copy-on-demand, or out-of-sequence, copies. This means that as soon as the reverse synchronization begins, the source LUN appears to be identical to the clone. The source LUN must briefly be taken offline before the reverse synchronization starts; this allows the host to see the new data structure. During both synchronization and reverse synchronization, server I/Os (read and write) can continue to the source. The clone, however, is not accessible for secondary server I/Os during either synchronization or reverse synchronization; the user must ensure that all server access to the clone is stopped (this includes ensuring that all cached data on the server is flushed to the clone) prior to initiating a synchronization or a reverse synchronization.

A protection option during a reverse synchronization is to enable the protected restore option for the clone. The protected restore option ensures that when the reverse synchronization begins, the state of the clone is maintained. When protected restore is not explicitly selected for a clone, a normal restore occurs. The goal of a normal restore is to send the contents of the clone to the source LUN, while allowing updates to both, and to bring the clone and the source LUN to an identical data state. To do that, writes coming into the source LUN are mirrored over to the clone that is performing the reverse synchronization. Also, once the reverse synchronization completes, the clone is not fractured from the source LUN. On the other hand, when restoring a source LUN from a golden copy clone, that golden copy needs to remain as-is. This means that the user wants to be sure that source LUN updates do not affect the contents of the clone. So, for a protected restore, the writes coming into the source LUN are NOT mirrored to the protected clone. And, once the reverse synchronization completes, the clone is fractured from the source LUN to prevent updates to the clone.

A clone LUN can be assigned to the alternate SP from the source LUN; however, the clone LUN will be trespassed during the clone synchronization process, and returned to its SP when it is fractured.

Trespassing a clone is only allowed after it is fractured. When it is in a non-fractured relationship, it will be trespassed if its source LUN is trespassed. If the source LUN is trespassed, any clone that is not fractured trespasses along with it. If the clone is fractured, it is treated like a regular LUN, and trespasses as required. If the clone was synchronizing when it was trespassed, the peer SP continues the synchronization. Information about differences in the data state between source LUN and clone is kept in the Clone Private LUN (CPL). The CPLs are always identical and ensure that each SP has the same view of the clones.

A consistent fracture operation enables users to establish a point-in-time set of replicas that maintain write-ordered consistency, which in turn allows users to have restartable point-in-time replicas of multi-LUN datasets. This can be useful in database environments where users have multiple LUNs with related data. A Clone consistent fracture refers to a set of clones belonging to write-order-dependent source LUNs. The associated source LUN for each clone must be unique, meaning users cannot perform a consistent fracture on multiple clones belonging to the same source LUN.

Clone Time of Fracture allows the user to know the date and time when the clone's images were administratively fractured. Clones stamp the time ONLY when the clones were administratively fractured by the user and the images were a point-in-time copy (consistent state). The time is stored persistently inside the clone's private area in the PSM. All clones involved in a Consistent Fracture operation will report the same time of fracture. You can view the time of fracture by issuing the -listclone CLI command and using either the -all or the -timeoffracture option.

The time of fracture will be displayed in the following cases:
• The clone was administratively fractured and its state is consistent.
• The clone is fractured because a reverse synchronization (protected restore enabled) completed.
• The clone was administratively fractured (including media failures) during a reverse synchronization (protected restore enabled).

The time of fracture will not be displayed when the clone is not administratively fractured and/or the time of the fracture is not stored. Specific examples:
• The clone is performing a synchronization or a reverse synchronization.
• The condition of the clone is Normal.
• The clones were fractured because a reverse synchronization has started within the Clone Group.
• The clone's state is out-of-sync or reverse-out-of-sync (protected restore disabled).
• The clones were fractured due to a media failure (except in the protected reverse synchronization case).

This lab covers SnapView Clones local replication. The lab exercise starts by verifying that the preconfigured Clone Private LUNs are in place. A Clone is then created and tested, and a Clone Consistent Fracture operation is performed.

This lab covered SnapView Clone local replication. The required preconfigured Clone Private LUNs were verified. A clone was then created and tested; and finally, a Consistent Clone Fracture operation was performed.

Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some of the concerns relating to the lab subject?

This lesson covers the purpose, requirements, managed objects, and theory of operation of VNX Snapshots.

VNX Snapshots is a storage system-based software application that allows the user to create snapshots of pool-based LUNs; in fact, VNX Snapshots can only be used with pool LUNs. A snapshot is a virtual point-in-time copy of a LUN and takes only seconds to create. VNX Snapshots use a very different internal mechanism from that used by SnapView snapshots, though both are pointer-based. VNX Snapshot data may be in the original Primary LUN space or may have been written to a different location in the Pool. As a result of the redirect-on-write (ROW) technology used, VNX Snapshots use appreciably less additional space than a full copy would use. A VNX Snapshot will use appreciably less space than that occupied by its Primary LUN, and will make more efficient use of space than SnapView Snapshots. An enabler gives the user access to VNX Snapshots, while a separate enabler allows the use of SnapView Snapshot and Clone technology. These two methods of making point-in-time copies are independent, and have limits which are independent of each other. They can coexist on the same storage system, and even on the same Pool LUNs. Note that VNX Snapshots cannot be used on Classic LUNs. Management of VNX Snapshots is performed through Unisphere or Navisphere Secure CLI. A host-based utility, SnapCLI, can perform a subset of the VNX Snapshot management operations, and will be discussed later.

The table above lists the hardware and software requirements for VNX Snapshots. A VNX is the required hardware for using VNX Snapshots. If a host is to access a VNX Snapshot, two or more hosts are required: one primary host to access the VNX source LUN and one or more additional secondary hosts to access the VNX Snapshot of the source LUN. The SnapCLI program runs on the host system in conjunction with VNX Snapshots running on the EMC VNX storage processors (SPs), and allows the user to create and delete snapshots and expose them to host systems. All SnapCLI commands are sent to the storage system through the Fibre Channel or iSCSI connection.

The Primary LUN is the production LUN that is replicated. This is the LUN that is in use by the application (and the production host) and it is not visible to secondary hosts. When a snapshot is attached to a snapshot mount point, it is made available to a secondary host.

A Snapshot is the VNX Snapshot equivalent of the SnapView session. A Snapshot Mount Point is the VNX Snapshots equivalent of the SnapView Snapshot – a virtual LUN that is used to make the replica visible to a secondary host. The SMP is associated with the primary LUN, and can be used for snapshots of that LUN only. Consistency Groups allow primary LUNs or Snapshot Mount Points to be grouped together persistently. Operations can be performed on the group as a single object.

VNX Snapshots address limitations of copy-on-first-write (COFW) SnapView Snapshots. The VNX Snapshot technology is redirect on write (ROW). VNX Snapshots are limited to Pool-based LUNs (i.e., not Classic LUNs). Up to 256 writeable VNX Snapshots can be associated with any Primary LUN, though only 255 are user visible. Because a VNX Snapshot uses pointers rather than a full copy of the LUN, it is space-efficient and can be created almost instantaneously. The ROW mechanism does not use a read from the Primary LUN as part of its operation, and thus eliminates the most costly (in performance terms) part of the process. A Reserved LUN Pool is not required for VNX Snapshots; VNX Snapshots use space from the same Pool as their Primary LUN. Management options allow limits to be placed on the amount of space used for VNX Snapshots in a Pool. VNX Snapshots allow replicas of replicas; this includes Snapshots of VNX Snapshots, Snapshots of attached VNX Snapshot Mount Points, and Snapshots of VNX Snapshot Consistency Groups. VNX Snapshots can coexist with SnapView snapshots and clones, and are supported by RecoverPoint. If all VNX Snapshots are removed from a Thick LUN, the driver will detect this and begin the defragmentation process. This converts Thick LUN slices back to contiguous 256 MB addresses. The process runs in the background and can take a significant amount of time. The user cannot disable this conversion process directly; however, it can be prevented by keeping at least one VNX Snapshot of the Thick LUN. Note: while a delete process is running, the Snapshot name remains in use, so if one needs to create a new Snapshot with the same name, it is advisable to rename the Snapshot prior to deleting it.

A VNX Snapshot Mount Point (SMP) is a container that holds SCSI attributes such as the WWN, name, Storage Group LUN ID, etc. An SMP is similar to a Snapshot LUN in the SnapView Snapshot environment. It is independent of the VNX Snapshot (though it is tied to the Primary LUN), and can therefore exist without a VNX Snapshot attached to it. Because it behaves like a LUN, it can be migrated to another host and retain its WWN. In order for the host to see the point-in-time data, the SMP must have a VNX Snapshot attached to it. Once the Snapshot is attached, the host will see the LUN as online and accessible. If the Snapshot is detached, and then another Snapshot is attached, the host will see the new point-in-time data without the need for a rescan of the bus.

The VNX Snapshot Consistency Group allows Snapshots to be taken at the same point in time on multiple Primary LUNs. If individual Snapshots were made of the Primary LUNs, it is possible that updates to one or more Primary LUNs could take place between the time of the Snapshot on the first Primary LUN and the time of the Snapshot on the last Primary LUN. This causes inconsistency in the Snapshot data for the set of LUNs. The user can ensure consistency by quiescing the application but this is unacceptable in many environments. A Consistency Group can have a Snapshot taken of it, and can have members added or removed. Restore operations can only be performed on Groups that have the same members as the Snapshot. This may require modifying Group membership prior to a restore. When a Snapshot is made of a Group, updates to all members are held until the operation completes. This has the same effect as a quiesce of the I/O to the members, but is performed on the storage system rather than on the host.

VNX Snapshot Set: a group of all Snapshots from all LUNs in a Consistency Group. For simplicity, it is referred to as a CG Snap throughout the material. VNX Snapshot Family: a group of Snaps from the same Primary LUN.

This slide summarizes the differences between SnapView Snapshot and VNX Snapshot terms.

This slide, and the following slide, compares the two VNX snapshot technologies. In this slide, the processes involved in a new host write to the source LUN (primary LUN) are compared. In the familiar SnapView Snapshot environment, the COFW process reads the original 64 KB data chunk from the source LUN, writes that chunk to the Reserved LUN, and updates the pointers in the Reserved LUN map area. Once these steps complete, the host write to the source LUN is allowed to proceed, and the host receives an acknowledgement that the write is complete. If a SnapView Snapshot is deleted, data in the RLP is simply removed, and no processing takes place on the source LUN. In the case of a VNX Snapshot, a new host write is simply written to a new location (redirected) inside the Pool. The original data remains where it is, and is untouched by the ROW process. The granularity of Thin LUNs is 8 KB, and this is the granularity used for VNX Snapshots. New data written to a Thick LUN with a VNX Snapshot is mapped in the same way as the data for a Thin LUN, and it is expected that this will reduce Thick LUN performance. If a VNX Snapshot is removed from a Thin LUN, data will be consolidated into 256 MB slices to minimize wasted space. If the last VNX Snapshot is removed from a Thick LUN, the defragmentation process moves the new data to the original locations on disk, and freed space is returned to the Pool.
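
The difference between the two write paths can be summarized in a short sketch. The data structures are stand-ins (dictionaries and a list) and the function names are illustrative; the point is that COFW must first preserve the original chunk, while ROW simply writes the new block elsewhere and updates a pointer.

```python
def cofw_write(chunk_id, new_data, source, reserved_pool, session_map):
    """SnapView COFW: read the original 64 KB chunk, copy it to the Reserved
    LUN, update the map, and only then let the host write land on the
    source LUN."""
    if chunk_id not in session_map:
        reserved_pool[chunk_id] = source[chunk_id]   # extra read plus write
        session_map[chunk_id] = 'reserved_pool'
    source[chunk_id] = new_data

def row_write(block_id, new_data, pool, lun_map):
    """VNX Snapshot ROW: the new 8 KB block is written to a new location in
    the pool and the LUN's mapping is pointed at it; the original block is
    left untouched for the snapshot to reference."""
    new_location = len(pool)            # stand-in for pool space allocation
    pool.append(new_data)
    lun_map[block_id] = new_location    # no read of the original data is needed
```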

In this slide, the processes involved in a secondary host read of a Snapshot are compared. In the familiar SnapView Snapshot environment, data which has not yet been modified is read from the source LUN, while data that has been modified since the start of the SnapView Session is read from the Reserved LUN. SnapView always needs to perform a lookup to determine whether data is on the Source LUN or Reserved LUN, which causes Snapshot reads to be slower than Source LUN reads. In the case of a VNX Snapshot, the original data remains where it is, and is therefore read from the original location on the Primary LUN. That location will be discovered by a lookup which is no different to that performed on a Thin LUN which does not have a VNX Snapshot, so the performance is largely unchanged.

VNX Snapshots support the following operations:
• Create a Snapshot
• Create a Snapshot Mount Point
• Attach a Snapshot
• Copy a Snapshot
• Snap a Snapshot Mount Point (Cascading Snapshots)
• Detach a Snapshot Mount Point
• Destroy a Snapshot
• Attach a Consistency Snapshot
These operations will be covered on the following slides.

Creating a snapshot does not consume any pool space. The space starts being used when new writes to the primary LUN or to the snapshot itself arrive. Snapshots have a granularity of 8 KB, and their blocks are tracked just like the blocks in thin LUNs. Every snapshot must have a primary LUN, and that property never changes. A primary LUN cannot be deleted while it has snapshots.

Creating a snapshot mount point does not require any space from the pool. Each SMP is dedicated to a specific primary LUN; it is not possible to attach snapshots from two different primary LUNs to a single SMP. Therefore, a backup server that is backing up four different LUNs must have four different SMPs provisioned to back up the snapshots of those LUNs.

Attaching is an asynchronous operation during which the SMP remains available, but the I/O is queued. This means that the host does not have to rescan the SCSI bus to view the snapshot. The rescan is required only to discover the SMP when it is first presented to the host.

A VNX Snapshot can be copied to another snapshot. The resulting snapshot is a copy of the source except for the name. The “allowReadWrite” property is set to “No” on the copy. The snapshot copy retains the source LUN properties and resides within the same pool as the original snapshot. So, copying a snapshot increases the snapshot count for a given production LUN by one.

A snapshot of a SMP still has the same Primary LUN property as the Mount point (and as the attached snapshot). The primary LUN properties of snapshots and the mount points that they are attached to will never be different. It is technically possible to attach a snapshot to a SMP that is not a part of a storage group. Therefore, it is possible to create a snapshot of such a SMP. The resulting snapshot will be slightly different from a regular snapshot copy. The source of this snapshot and the creation times will not be the same as the snapshot attached to the SMP.


Detaching a snapshot from an SMP modifies these properties: Last Modified Date and Last Modified By. Detaching a snapshot from an SMP does not destroy the SMP by default, and the SMP remains provisioned to the host. Detaching is an asynchronous operation during which the SMP remains available, but the host I/O is queued at the array. After the detach operation completes, all queued I/O requests return a fail status to the host.


Destroying (deleting) a snapshot reclaims space for reuse in the storage pool. Reclaim is not instant, and is done by an internal process. This internal process is throttled for better performance of the array and is not sequential, meaning that more than one snapshot can be destroyed at a time. Multiple snapshot destructions start on a first-come-first-served basis. VNX is tuned to destroy up to 16 snapshots simultaneously on each Storage Processor (SP). Additional destruction requests are queued until a destruction thread becomes available.


Mounting an entire Consistency Snapshot requires the same number of SMPs as there are members in the CG.


This lesson covers the management of VNX Snapshots.


Every VNX Snapshot may have an optional expiration date. Expired snapshots are destroyed at regular intervals: the VNX array scans for expired snapshots once an hour. (The Auto-Delete process does not handle destruction of expired snapshots; that destruction is handled by another software layer.) When the expiration time is reached, the snapshot may not be destroyed immediately; it is deleted by the process started at the next running interval.

Setting an expiration date on a snapshot automatically disables Auto-Delete. In Unisphere, the user can set an expiration date only after Auto-Delete is disabled (unchecked). The Advanced section of the Snapshot Properties (General tab) includes allowing read/write mode for the snapshot, enabling automatic deletions based on the configuration of the hosting Pool, and configuring an expiration time. The default expiration time is 7 days; the selection ranges from 1 hour to 10 years.

VNX Snapshots will not be deleted if they are in use (attached, or involved in a restore) when their expiration date is reached; they will be removed when detached, or when the restore completes. The user will be warned before Snapshots are deleted.


A Cascading snapshot is a snapshot of an attached Snapshot Mount Point. The Source LUN property of the Cascading Snapshot has the name of the SMP. It is possible to create multiple snapshots at this level, and individually mount them.


Snapshots can be used to restore a primary LUN or an SMP. In other words, the data in the LUN will be changed to match the data in the snapshot. The classic use case for this operation is recovering from data corruption.

Restoring automatically creates a 'Restore Point snapshot' to recover from unintentional data corruption. While the LUN is being restored, its state is shown as 'Initializing', and is changed back to 'Ready' after the restore is complete. Although restoring is not instant, it is an online operation, in the sense that the earlier point-in-time data is immediately available, even while the restore operation occurs in the background. The user only needs to perform an initial flush of the host buffers before starting the process, and then the rest of the process is completely host transparent.

Restore is also referred to as Protected Restore, because restoring does not change the 'Restore Point' snapshot. All its data is protected so that the user can return to the point-in-time of the source data if needed. When restoring an SMP, the data from the source snapshot (the one being restored) is placed onto the snapshot attached to the Mount Point. Restoring can change the LUN size if the source snapshot was taken before the primary LUN was expanded or shrunk.


Since pool LUNs can have a combination of SnapView snapshots and VNX Snapshots, it is possible to restore from either a SnapView snapshot or a VNX Snapshot; restore is supported in both cases. To restore a primary LUN from a VNX Snapshot, all SnapView sessions must be stopped. If the primary LUN is restored from a SnapView session, VNX Snapshots are unaffected.

Certain prerequisite steps must be performed from the host operating system for a successful LUN restoration. Operating systems often have cached metadata pertaining to the LUN's file system, and restore operations tend to confuse memory maps unless the cache is cleared; this affects most operating systems. Stop application access to the LUN prior to a restore operation. Optionally, the user may need to flush application buffers. EMC recommends using SnapCLI for all operating systems (if an appropriate binary exists); use native operating system methods when a SnapCLI binary is not available.


This module covered the SnapView Snapshots tool and how it addresses several common and crucial requirements; Classic LUNs, point-in-time copies, rate of change, and limited storage availability requirements were examined.

SnapView Clones use synchronization, reverse synchronization, fractures, and consistent fractures to provide full LUN copies for data protection, application testing, and backup and restore operations. VNX Snapshots use redirect-on-write (ROW) to provide a point-in-time virtual copy of pool-based LUNs. The Snapshot Mount Point behaves like a LUN and gives hosts access to the point-in-time virtual LUN.


This Lab covers VNX Snapshots local replication. First the VNX Snapshot software enabler is verified. Then a VNX Snapshot is created. A Restore operation is performed. And finally a VNX Snapshot Consistency Group is created.


This lab covered VNX Snapshots local replication. The lab verified the VNX Snapshot enabler was installed. A VNX Snapshot was created and a Restore operation was performed. And finally a VNX Snapshot Consistency Group was created.

Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some of the concerns relating to the lab subject?


This module focuses on concepts and activities required to configure and implement File storage services with VNX.


This lesson covers the overview of VNX File storage services. We will briefly contrast Block and File solutions, as well as outline the configurations that you will be building in the following lessons.


To better appreciate the tasks required to manage VNX File services, consider the differences between storage services for Block and File. Provisioning Block storage for hosts involves a single host accessing one or more LUNs. The LUNs on the array are seen directly by the host (although there are some exceptions, e.g. MetaLUNs). The host OS handles all management of that storage's use, including creation of one or more file systems. With Block storage, the array is completely unaware of how the LUN is being used; there are no concerns about users or permissions, and there is no need for supporting services (DNS, NTP, etc.).

When storage is provisioned for File services, or NAS (Network Attached Storage), all of the volume management is done on the NAS device. In the illustration shown here, multiple LUNs have been configured, a file system has been created on those LUNs, and a directory has been created and shared on the IP network. Once online, that directory can be accessed over the IP network, simultaneously, by many (possibly thousands of) users, or it can be accessed by applications such as VMware or Xen. Additionally, rather than seeing File storage as a system volume, NAS clients see it as a directory, mountpoint, or mapped drive.

While with Block services the array is concerned with connected hosts, with File the array is concerned with clients, users, applications, etc. In order to operate effectively in this user environment, the array must be concerned with network services such as DNS, LDAP, and NTP.


Before NAS clients can access File storage, a few layers of configuration are required. As seen here, in addition to the LUNs, striped volumes are formed across those LUNs. Next, a file system is constructed, along with the creation of a mountpoint. It will also be necessary to set up an IP interface so that clients can access the storage over the IP network.


When comparing configurations for NFS, in Linux/Unix environments, and CIFS, in Microsoft Windows environments, we see some different requirements. When providing File services for CIFS, there are some additional configurations needed. We see that a “CIFS server” has been created to emulate a Windows file server’s functions, such as permission management. There are also additional configurations required to interact with Active Directory, such as Domain membership, and Kerberos.


Another option, and best practice, for VNX File configurations is to implement Virtual Data Movers. This feature provides a layer of virtualization which will allow the CIFS or NFS configuration to be portable, as well as enable replication in CIFS environments.


When working in dual NFS and CIFS situations, VNX File configuration provides organizations the flexibility to share different directories from the same file system over the two protocols respectively. If preferred, separate file systems can be created and exported for each protocol.


In some situations, certain users may have both Windows and Linux/Unix accounts. If these users need to access the same file from either environment, the VNX can be configured for multiprotocol access. In this configuration, a single mountpoint or directory can be exported for both protocols. This is an advanced configuration and requires advanced user mapping techniques to ensure that user IDs are aligned between the two user environments. Multiprotocol configuration is beyond the scope of this course.


The process of provisioning File storage for access by users and applications can be broken into a few basic sections. For the purposes of this course, we will group the activities into four parts. The first stage of our process focuses on networking for File services. The steps in this section set up IP interfaces as well as essential network services, such as DNS and NTP.

The next stage will deal with configuring Virtual Data Movers. The VDMs will be used to share VNX File storage, as well as provide portability to the File configuration. The third phase of our process will deal with creating file systems. File systems can be made manually, or by using VNX's Automatic Volume Manager (AVM); this course will use AVM to produce our file systems. The final stage makes the storage available to users and applications on the network, either for NFS or CIFS. Please note that, although all of the steps presented in the module are essential, the actual sequence of steps is very flexible. What is presented in this module is merely one option for the sequence.


This lesson covers the configuration of VNX File Ethernet speed and duplex, IP interfaces and routing. It also details integrating VNX File with network services.


The first phase of configuring VNX to provide File services is to set up TCP/IP networking. The configurations included in this phase are: speed and duplex, IP addressing, routing, and network services such as DNS and NTP time services. Not all configurations require routing, DNS, and NTP. However, even in environments that do not require them, these settings are often configured regardless.


To modify and display the hardware network configuration for the VNX, from the Top Navigation Bar click Settings → Network → Settings for File. Here you will find Data Mover interfaces, device information, DNS settings, routing information, statistics, and the ping utility for troubleshooting.


Before you configure hardware parameters, list the network devices (Peripheral Component Interconnect, or PCI, devices) to see what is available. From the Top Navigation Bar, click Settings → Network → Settings for File and select the Devices tab. Listed here are all the devices available to be used when creating interfaces. The CLI command that retrieves the list of devices for each Data Mover is server_sysconfig <mover_name> -pci. Note: The list of devices on any one Data Mover may vary widely. The devices presented here are merely examples of what might be displayed, depending on the network specifics of a given model.


VNX’s default duplex setting is auto, and the default transmission speed is also auto. The entire network should run in the same duplex mode. It is preferred that the Data Mover transmit at a minimum of 100 Mbps, Full Duplex. For GigE connections, the best practice setting should be auto/auto. In order for a network to function well, the same settings should be deployed across the network.

Since Ethernet hubs are not capable of operating in Full Duplex mode and do not provide buffering functions, it is strongly recommended that all Ethernet hubs in VNX environments be removed from the network and replaced with Ethernet switches. If they must be used, the VNX should not be connected to the hub. If the network does not fully support Full Duplex, then implementing Full Duplex on VNX could cause connectivity devices in the network to fill their buffers, which would have a drastic effect on the performance of the network, as well as possible data loss. At a minimum, speed and duplex setting must match on both sides of a link. If a host/client is operating at 10Mbps/Half Duplex, then the switch port to which it is connected should match.


To set the speed and duplex on a Data Mover network device, right-click the device and select Properties. Next, select the desired speed and duplex from the Speed/Duplex dropdown menu. Setting the speed and duplex to match those of the switch port the Data Mover is attached to is the best practice. In the event the switch port is set to auto, then the Data Mover, by default, is already set correctly as well. Click OK or Apply to apply the changes (if any). Device Speed/Duplex settings must match the settings of the switch port.
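
The current device configuration can also be checked from the Control Station CLI, and the same server_sysconfig command accepts an -option argument for setting speed and duplex. The device name below (cge0) is an example only, and the exact option-string format for setting speed and duplex should be confirmed in the server_sysconfig man page for your release:

server_sysconfig server_2 -pci cge0        (displays the configuration of device cge0 on server_2)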


The IP address, subnet mask, and broadcast address are all required when configuring the interface. To configure an IP address under the Interface tab:

• Click Create at the bottom of the window.
• Select the Data Mover and Device Name from the dropdown menus. Enter the IP address of the new interface and the subnet mask; Unisphere will automatically calculate the broadcast address. The interface name is optional. If the MTU size value and the VLAN ID are left empty, VNX automatically enters the default values for you: the default MTU size is 1500 and the default VLAN ID is 1.
• Click OK when finished.

Note: By default, the name of the interface, if left empty, will be the IP address with hyphens. The broadcast address is not configurable.
Note: Static NAT (Network Address Translation) is supported via the Unisphere client on all VNX systems.
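
An interface can also be created from the Control Station CLI with server_ifconfig. This is a sketch only; the device name, interface name, and addresses below are examples and should be replaced with values appropriate to your environment:

server_ifconfig server_2 -create -Device cge0 -name cge0_int1 -protocol IP 10.127.57.230 255.255.255.0 10.127.57.255

The new interface can then be verified with server_ifconfig server_2 -all, as shown on the next slide.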


To display the IP configuration, from the Top Navigation Bar click Settings → Network → Settings for File → Interfaces.

Note: When deleting or modifying an IP configuration for an interface, remember to update the appropriate CIFS servers that may be using that interface and any NFS exports that may depend on the changed interface.

To display the IP configuration using the CLI, you can use the server_ifconfig command. Example:

$ server_ifconfig server_2 -all
server_2 :
loop protocol=IP device=loop
 inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
 UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
vnx2fsn0 protocol=IP device=fsn0
 inet=10.127.57.122 netmask=255.255.255.224 broadcast=10.127.57.127
 UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:26:a4:7e


VNX supports IPv6 addresses for networks that are IPv6 compatible. An IPv6 address is 128 bits long, compared to the 32-bit IPv4 address. The single most important IPv4 problem to address is the address space, and IPv6 provides a larger address space. An IPv6 address is also written in hexadecimal notation instead of IPv4's decimal notation. Another improvement of the new IP protocol is that security is integrated into the protocol, not treated as an add-on; IPsec protects message confidentiality and authenticity. Beyond the added addresses, security, and features of IPv6, another reason why networks are moving to IPv6 is that there are a number of IPv6 initiatives within governments around the world. IPv6 is now widely deployed in Asia due to initiatives in China, Japan, and South Korea.


The VNX Control Station can be configured to use IPv6 during the installation process, or later using either Unisphere or the CLI. With the current version of VNX code, the Control Station supports two types of IP configuration: IPv4 only, and IPv4/IPv6 at the same time. An IPv6-only configuration is not currently supported. Note: VNX Installation Assistant cannot be used for configuring IPv6.


The routing table of a Data Mover is used to direct outgoing network traffic via both external (router) and internal (individual network interfaces such as cge-1-0, cge-1-1, etc.) gateways. For network activity initiated by the Data Mover, the system uses the routing table to get destination and gateway information. Routes to a particular host must be distinguished from those to a network. The optional keywords, net and host, specify the address type and force the destination to be interpreted as a network or a host, respectively. The Data Mover routing table can contain three types of routes:

• Directly connected – The network is directly connected to the Data Mover by an IP interface.
• Static – A route whose information is entered manually into the routing table, and which takes priority over dynamic routing protocols.
• Dynamic – The routing table is managed automatically by the routed daemon on the Data Mover. The routed daemon listens for RIP v1 and v2 messages on the network and changes the routing table based on those messages. The Routing Information Protocol (RIP) is a dynamic routing protocol supported by VNX. RIP determines a network route based on the smallest hop count between the source and the destination. By default, the Data Mover listens for RIP routes on all interfaces.

Because the majority of network traffic on a Data Mover (including all file system I/O) is client-initiated, the Data Mover can use Packet Reflect to reply to client requests. Packet Reflect ensures that outbound packets always exit through the same interfaces that inbound packets entered.


This slide shows how to configure a default gateway route. Different types of subnet masks are supported; however, VNX does not support noncontiguous network masks, that is, masks without a continuous stream of 1 bits. A netmask of 0.0.0.0 or 255.255.255.255 is invalid for net routes. By default, a netmask of 255.255.255.255 is assigned to host routes.
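
The same route can be added from the Control Station CLI with server_route. The gateway address below is an example only:

server_route server_2 -add default 10.127.57.1        (adds a default gateway route on server_2)
server_route server_2 -list        (displays the Data Mover routing table)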


VNX Data Movers support both traditional DNS and Dynamic DNS in a Microsoft Windows network. When configuring the Data Mover for DNS, multiple DNS servers can be included, separated by spaces, in the command statement. Unisphere allows you to configure an unlimited number of DNS domains per Data Mover. Additionally, although the default protocol for DNS is UDP, the TCP protocol can be specified and is recommended.

Note: EMC recommends that two DNS name servers are employed and configured for VNX. This allows for redundancy in case of a DNS server failure.

The CLI command for DNS configuration is server_dns. Example:

$ server_dns server_2
server_2 :
 DNS is running.
 corp.hmarine.com
  proto:tcp server(s):10.127.57.161

To stop, start, and flush the DNS service, use the following commands:
server_dns server_X -o stop
server_dns server_X -o start
server_dns server_X -o flush


This slide shows how to configure DNS using Unisphere. The Transmission Control Protocol (TCP) is connection based. Connection based means that a virtual session is established before information is sent between the source and destination. As data is received, TCP adds a header containing message delivery information. The combination of data and TCP header forms the Transport layer message; the message for TCP is called a "segment." TCP provides a reliable transport mechanism, meaning it acknowledges the delivery of messages. The User Datagram Protocol (UDP) is a relatively simple transport layer protocol. It is not connection based, which means that a virtual session is not created before messages are sent between source and destination. UDP messages are called datagrams. When using UDP, datagrams might arrive out of order or even appear duplicated. UDP lets the application deal with error checking and corrections. UDP provides an unreliable transport mechanism, which means it does not acknowledge the delivery of transport layer messages.


The Data Mover implements an NTP client that can synchronize the system clock with an NTP or SNTP server. NTP is a standard timekeeping protocol used on many platforms, including both Windows and UNIX/Linux environments. The full NTP specification uses sophisticated algorithms for time correction and maintenance to allow time synchronization with an accuracy of about a millisecond. This high level of accuracy is achieved even in large networks with long network delays, or in cases where access to a time server is lost for extended periods of time. SNTP implements a subset of NTP for use in environments with less-stringent synchronization and accuracy requirements. SNTP uses simple algorithms for time correction and maintenance and is capable of accuracy to the level of a fraction of a second. To an NTP or SNTP client, NTP and SNTP servers are indistinguishable. SNTP can be used:

• When the ultimate performance of the full NTP implementation is not needed or justified.
• In environments where accuracy on the order of large fractions of a second is good enough.


To configure a Data Mover to use NTP:

• From the Top Navigation Bar, click System > Hardware > Data Movers.
• Right-click server_2 and click Properties.
• Enter the IP address of the NTP server.
• Click Apply to accept the changes.

Note: To verify NTP status using the CLI, run the server_date command. Example: server_date server_2 timesvc stats ntp
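
The NTP client can also be started from the Control Station CLI with server_date. The NTP server address below is an example only:

server_date server_2 timesvc start ntp 10.127.57.161        (starts the time service on server_2 against the given NTP server)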


This Lab covers Data Mover network configuration. The lab exercise demonstrates configuring the VNX File for network access.


This lab covered basic Data Mover networking. A network interface was configured on a Data Mover. Network settings for routing and DNS were verified. And network access was tested. Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?


This lesson covers the configuration of Virtual Data Movers on a VNX. We will discuss basic theory and requirements of VDMs, as well as learner activities to create them.


The next phase of configuration of VNX File services is to create Virtual Data Movers. Not all environments require VDMs; many NFS environments, for example, do not require them. However, in CIFS environments, even if they are not required for the success of the implementation, it is still best practice to employ VDMs in all CIFS implementations. Following this best practice will position the system for future situations such as load balancing across Data Movers, replicating for DR protection, and migrations as part of a hardware refresh and upgrade.


A Virtual Data Mover (VDM) is a VNX software-based Data Mover that is built within a file system: the VDM root file system. The VDM is mounted to, and runs on, a physical Data Mover. The physical Data Mover provides the VDM with CPU, memory, network, and network service resources. This creates a separation of the VDM from the physical hardware. Like a physical Data Mover, a VDM can support multiple CIFS servers and a single NFS server for a single domain namespace. Data file systems can also be mounted to the VDM for its CIFS server(s) and NFS server, providing clients access to the shared and exported data. The VDM is a "virtual container" that holds configuration data for its CIFS and NFS servers, and their shares and exports. The VDM is portable, and can be moved or replicated autonomously to another physical Data Mover.


The system assigns the root file system a name in the form root_fs_<vdm_name>, where <vdm_name> is the name of the VDM. If a VDM name is not specified during creation, a default name is assigned in the form vdm_<x>, where <x> is a unique integer. The VDM stores the majority of the CIFS server's dynamic data within its root file system. This data includes:

• CIFS server configuration (compnames and interface names)
• Local group database for the servers in the VDM
• Kerberos information for the servers in the VDM
• Share database for the servers in the VDM
• Home directory information for the servers in the VDM
• Auditing and Event Log information
• Secmap for mapping CIFS users

The NFS server specific data includes:

• NFS server endpoint and exported file systems
• Name resolvers and STATD hostname
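
A VDM can be created from the Control Station CLI with nas_server. This is a sketch only; the VDM name and storage pool below are examples, and the pool= argument can be omitted to let the system choose where to place the VDM root file system:

nas_server -name vdm_hr -type vdm -create server_2 -setstate loaded pool=clarsas_archive        (creates VDM vdm_hr on server_2 in the loaded state)
nas_server -info -vdm vdm_hr        (displays the VDM, including its root file system name)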


There are a variety of features and configuration information available to physical Data Movers and Virtual Data Movers. The table lists features and configuration information for physical and Virtual Data Movers.


Non-VDM based CIFS servers are not logically isolated. Although they are very useful in consolidating multiple servers onto one Data Mover, they do not provide isolation between servers as needed in some environments, such as ISPs; all of the configuration and control information is stored in the root file system of the Data Mover. By allowing administrative separation between groups of CIFS servers, VDMs provide the following benefits:

• Each Virtual Data Mover has its own separate set of configuration files, which makes the CIFS servers easier to manage.
• Server isolation and security are also provided as a result of the administrative separation of VDMs.
• CIFS servers within a VDM can be moved from one physical Data Mover to another without changing the share configuration, and the CIFS environment can be replicated from a source to a destination site. This allows configuration and management flexibility for the administrator to accomplish load balancing without interrupting services for other VDMs within the same Data Mover.
• Combining VDMs with Replicator provides an asynchronous disaster recovery (DR) solution for CIFS.

VDMs provide a "multiple domains solution" benefit for NFS servers. Without VDMs, a Data Mover is limited to a single NFS server that can service users from a single domain naming space: either DNS, NIS, or LDAP. Like a physical Data Mover, a VDM can only support a single NFS server from a single naming space. By using multiple VDMs, it is possible to support multiple NFS servers, with each VDM supporting an NFS server from a different domain naming space.


When a VDM is operational, it runs on a physical Data Mover. The physical Data Mover has the VDM root file system mounted to its root file system. The NFS and CIFS servers configured on the VDM are available on the network through the physical Data Mover interfaces and network services. The VDM has the exported/shared user data file system(s) mounted to its root file system. In this operational state, NFS and CIFS clients have access to the data through the VDM.


A VDM in the loaded state is its normal, fully operational, active mode. By default, a VDM is created in the loaded state. A loaded VDM has its data file system(s) mounted read/write, and the NFS and CIFS servers in the VDM are running and serving data. It is not possible to load one VDM into another VDM.

A VDM in the Unloaded state is inactive. The VDM does not have data file systems mounted and its root file system is not mounted on the physical Data Mover. The VDM does not access any resources from the physical Data Mover and its NFS and CIFS servers are stopped. A VDM can be permanently unloaded or temporarily unloaded from the VNX command line interface (CLI). If Unisphere is used to unload a VDM, it is permanently unloaded from the source Data Mover. The VDM is available to be manually reloaded onto a physical Data Mover at any time.

A VDM in the mounted state has its root file system mounted read-only to the physical Data Mover. The VDM is inactive and its NFS and CIFS servers are unavailable to clients. No changes can be made to a mounted VDM. When a VDM is being replicated, the VDM replica is in the mounted state. A VDM can also be set to the mounted state using the CLI.


The table summarizes the VDM states and their characteristic attributes.


A VDM can be moved from its “source” physical Data Mover to another “target” physical Data Mover within a VNX system. The move operation is performed in Unisphere by selecting the Properties dialogue window of the VDM. The VDM being moved unmounts its data file system(s) and unloads from its source Data Mover. The VDM then loads onto the new Data Mover and mounts its data file systems.
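
The equivalent Control Station CLI operation uses nas_server. The VDM name and target Data Mover below are examples only:

nas_server -vdm vdm_hr -move server_3        (unloads vdm_hr from its current Data Mover and loads it on server_3)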


A VDM can be unloaded from its “source” physical Data Mover on the VNX system. An unload operation can be used to stop all user data access from the VDM. Prior to unloading a VDM, its data file system(s) must first be unmounted as a separate operation performed by the administrator. The unload operation is performed in Unisphere by selecting the Properties dialogue window of the VDM. An unloaded VDM stops its access of the physical Data Mover CPU, memory, network interfaces and service resources. The NFS and CIFS servers on the VDM stop and remain inactive. The VDM root file system is also unmounted by the physical Data Mover. The VDM is permanently unloaded from the physical Data Mover. If the physical Data Mover is rebooted, it will not reload the VDM. The root file system of the VDM does not get deleted by the unload operation, it is available for reloading onto a physical Data Mover at a later time.


There are a few important considerations concerning VDM names. Since VDMs are portable and can be moved within a VNX system or replicated to another VNX system, VDM names need to be unique to avoid conflicts. It is also important to know that when a VDM is renamed, the VDM's root file system name will also change to reflect the name of the VDM. So if VDM names need to be changed to avoid naming conflicts, their root file system names will also be changed.


When a VDM is created, it inherits the internationalization mode that is set on the physical Data Mover. If the VDM is moved or replicated to another Data Mover, the modes are compared. If the modes do not match, the system will not allow the VDM to load and run on the target Data Mover. VDMs can be converted from ASCII to Unicode mode, and the conversion is not reversible: Unicode mode cannot be converted back to ASCII mode. For more information, refer to the product document Configuring Virtual Data Movers on VNX, available from the EMC Support page.


When using VDMs with NFS there are some important considerations to know. The NFS server on the VDM supports NFSv3 and NFSv4 over the TCP protocol; the VDM does not support any version of NFS over the UDP protocol. The NFS server supports a single domain naming space per VDM. Supporting NFS environments that have multiple domain naming spaces requires a VDM configured for each naming space. The configuration of NFS on a VDM is done through the CLI only. This course does not cover implementing NFS on VDMs; please refer to the product document Configuring Virtual Data Movers on VNX, available from the documentation section of the EMC Support site.


Portability is a key function of the VDM and a key consideration when implementing them. What VDMs make portable are their NFS and CIFS servers, and those objects depend on networking and network services. If a VDM is moved or replicated to a target physical Data Mover without similar networking and services available, the VDM's NFS and CIFS servers will not operate to serve data to clients. The DNS naming service is critical for CIFS Active Directory environments. The DNS, LDAP, and NIS naming services are also critical to NFS environments. The NTP service is also a key service to maintain consistent time for the servers and clients using NFS and CIFS file storage protocols.

Data Mover network interface names are also critical to the NFS and CIFS servers: the servers are "bound" to the interface name. When a VDM with NFS and CIFS servers is ported, the target Data Mover must have the same interface names available to the VDM's NFS and CIFS servers. Since NFS and CIFS servers export and share their file systems to clients, it is important when moving a VDM that the target Data Mover have access to the same storage and disk volumes that the NFS and CIFS server file systems are built upon. If a VDM is ported using Replicator, it is critical that the data file systems are also replicated with the VDM. There are additional VDM and CIFS considerations that will be covered within the CIFS section later in this course.


This Lab covers the configuration of a Virtual Data Mover. In the lab exercise a Virtual Data Mover (VDM) will be created. Unload and load operations will be performed to move the VDM to another physical Data Mover. A file system will then be mounted to the VDM. Finally, the VDM will be moved back to its original physical Data Mover.


This lab covered Virtual Data Mover configuration. A VDM was created, then unloaded and loaded onto another physical Data Mover. A file system was mounted to the VDM, and then the VDM was moved back to its original physical Data Mover. Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?


This lesson covers the theory and requirements behind a VNX file system. We will also create file systems using VNX’s Automatic Volume Manager.


The next phase of implementing File services on VNX is to create one or more file systems. In this course, we will focus on using VNX’s Automatic Volume Manager to automatically build the volume structure beneath the file system. File system options, such as automatic file system extension, can also be set at this time.


A file system is a method of cataloging and managing the files and directories on a storage system. The default, and most common, VNX file system type is UxFS (Universal Extended File System). Metadata is used to control and maintain the actual file data; UxFS stores the metadata with its associated file data, thus improving locality of reference significantly. VNX file systems can be created either manually or automatically by using Unisphere or the Control Station CLI.


Automatic Volume Management (AVM) is a VNX for File feature that automates volume creation and management. System administrators can create and expand file systems without creating and managing the underlying volumes. The design goals of AVM are to provide an easy to use method of creating and managing file systems, and at the same time maximize capacity and improve performance for the clients. File systems may also be created manually. This means that the system administrator will have to create the entire structure in order to create a file system. This method provides a better option for those who want more control over their storage allocation. Applications that require precise placement of file systems on certain physical disks would benefit from manual file system creation. In this module we will be covering the creation of file systems with AVM.


At the core of an AVM-created file system you will find disk volumes, or dVols. These volumes are the building blocks of the file system. The dVols are then striped together to form a stripe volume. From this stripe volume, a slice volume is taken to create a metavolume. Lastly, a file system is created on top of the metavolume. This entire process is done automatically by AVM without any interaction from the system administrator. We will cover each one of the underlying volumes in more detail in the upcoming slides.


Disk volumes are the underlying storage of all other volume types. A dVol links to a LUN as presented to the VNX for File by the storage system via the File provisioning wizard, or when LUNs are manually added to the ~filestorage storage group. Each LUN is a usable storage-system volume that appears as a dVol to the VNX system. AVM is able to use Traditional LUNs created from RAID Groups, or LUNs from block storage pools.


A stripe volume is a logical arrangement of volumes organized, as equally as possible, into a set of interlaced stripes. AVM uses stripe volumes to achieve greater performance and higher aggregate throughput by spreading the load across multiple dVols, which can be active concurrently. AVM decides on the number of dVols to stripe depending on the type of LUN being used. For dVols that come from RAID Group LUNs, up to 4 dVols are used; for dVols coming from Pool LUNs, up to 5 dVols are striped together.


Slice volumes are cut out of other volume configurations to make smaller volumes that are better suited for a particular purpose or request. Slice volumes are not always necessary, however, if a smaller size volume is needed, it will then be critical to understand slice volumes and be able to implement them. For example, with an 80 GB stripe volume without any slicing, a request for a 20 GB file system would have to be satisfied by using the entire stripe volume, even if only 20 GB is needed. AVM is able to use slicing when creating file systems.


File systems can only be created and stored on metavolumes. A metavolume is an end-to-end concatenation of one or more volumes. A metavolume is required to create a file system because metavolumes provide the expandable storage capacity needed to dynamically expand file systems. A metavolume also provides a way to form a logical volume larger than a single disk.


As LUNs are added to the ~filestorage group, and dVols created, AVM will group dVols together based on the physical disk technology and LUN type. RAID Group LUNs on SAS and NL-SAS drives are given a disk type of CLSAS. The CLEFD disk type is given to RAID Group LUNs from flash drives. Pool LUNs from NL-SAS disks and SAS disks are placed in Capacity and Performance groups respectively. Mixed disk type dVols are Pool LUNs coming from a mix of different disk types. For a complete list of the different disk types, please refer to the Managing Volumes and File Systems with VNX AVM product document.


Once a stripe volume is created, AVM will place the volume in a dedicated file storage pool based on the dVol’s disk type and RAID configuration. There are two types of storage pools for dVols from RAID Group LUNs. System-defined storage pools are pre-configured and designed to optimize performance based on the LUN’s disk and RAID configuration. The AVM storage pool stripe depth determines how much data will be written to a dVol, or LUN, before moving to another dVol in the stripe volume. In system-defined storage pools, the stripe depth is 256 KB. User-defined pools allow for more flexibility in that the administrator chooses what storage should be included in the pool. The administrator must explicitly add and remove volumes from the storage pool and define the attributes for the storage pool. A mapped pool is a storage pool that is dynamically created during the VNX for File storage discovery (diskmark) process and contains dVols from Pool LUNs.


System-defined pools have a predefined set of rules that define how dVols are selected and combined to create the underlying file system structure. This slide shows the RAID configurations supported by each system-defined pool.

• clarsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses Serial Attached SCSI (SAS) disk volumes created from RAID 5 disk groups.
• clarsas_r6: Designed for high availability at medium cost. This storage pool uses SAS or NL-SAS disk volumes created from RAID 6 disk groups.
• clarsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two SAS disk volumes in a RAID 1/0 configuration.
• clarefd_r5: Designed for very high performance and availability at high cost. This storage pool uses EFD (Flash) disk volumes created from 4+1 and 8+1 RAID 5 disk groups.
• clarefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two EFD disk volumes in a RAID 1/0 configuration.


A mapped pool is a one-to-one mapping with a VNX for Block storage pool. In the example on the slide, Block Storage Pool 1 has been created using only SAS drives. LUNs from Pool 1 have been provisioned for File, and a one-to-one mapping of the Block Storage Pool 1 is made to an AVM mapped Storage Pool of Pool 1. If a mapped pool is not in use and no LUNs exist in the ~filestorage group that correspond to the block storage pool, the mapped pool will be deleted automatically during diskmark. A mapped pool can contain a mix of different types of LUNs that use any combination of data services:

• Thin and thick LUNs
• Auto-tiering
• VNX block compression

NOTE: VNX for File has its own thin provisioning and compression/de-duplication capability that operates at the file system level. While it is possible to use those block level features at the VNX for File LUN level, it is recommended to use those features from the file system level if performance is a prime concern. The use of Thin LUNs and block deduplication and compression for file systems is recommended if space efficiency is the prime concern.


The first step that AVM will take with Pool LUNs is to divide the LUNs into thick and thin groups. Once this is done it will try to stripe 5 dVols together, with the same size, same data services and in an SP balanced manner. If 5 dVols cannot be found, AVM will then try 4, then 3 and finally 2 dVols to make the stripe with. All Thick LUNs will be used first. If thick LUNs/dVols cannot be found to satisfy the 2 dVol minimum above, then the same search will be implemented for Thin LUNs, creating a 5 dVol stripe, then 4, then 3 and then finally 2 dVols.


If AVM cannot find enough Thick or Thin LUNs of the same size to stripe, it will then try to concatenate enough Thick LUNs to meet the size requirement. If AVM cannot find enough Thick LUNs to concatenate for the correct size, it will try to find enough Thin LUNs to concatenate to meet the size requirements. Finally, if AVM cannot find enough Thin LUNs to concatenate, it will then try to concatenate Thick and Thin LUNs together to meet the size requirements. If all of these efforts to meet the requirements still fail, the file system creation/extension will also fail.


All parts of a file system must be stored on the same storage system. Spanning more than one storage system increases the chance of data loss or data unavailability, or both. A spanned file system is subject to any performance and feature differences set between storage systems. AVM storage pools must contain only one disk type and disk types cannot be mixed, unless Pool LUNs are being used. When creating Pool LUNs, the Pool LUN count should be divisible by five and balanced between SPs to assist AVM during the LUN selection process. Note: if a feature like block deduplication is being used, then the Pool LUNs are required to have a common “Allocation Owner” rather than having balanced SP ownership. This requirement will be discussed in further detail later in this course. Thick LUNs will perform significantly better than Thin LUNs. When virtual provisioning is required, use “Thin Enabled” file systems created from Thick LUNs from a block Pool or Classic LUNs from RAID Groups if performance is a prime concern. If space efficiency is the prime concern of the file system, create it using Thin LUNs from a block Pool. File systems created from Thin LUNs can utilize block deduplication and compression and reclaim unused space to the block pool as files are deleted from the file system. The storage efficiency of file system built from Thin LUNs is discussed in further detail later in this course.


File systems created with AVM can be enabled with the auto-extension capability. You can enable auto-extension on a new or existing file system. When you enable auto-extension, you can also choose to adjust the high water mark (HWM) value and set a maximum size to which the file system can grow. Auto-extension causes the file system to automatically extend when it reaches the high water mark and permits you to grow the file system gradually on an as-needed basis. The file system usage threshold is an integer in the range of 50-99 percent (the default is 90 percent). With auto-extension enabled and the HWM set at 90 percent, an automatic extension guarantees that the file system usage is at least 3 percent below the HWM.
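
Auto-extension can also be enabled when the file system is created from the Control Station CLI. This is a sketch only; the file system name, sizes, and pool are examples, and the exact nas_fs options should be confirmed against the VNX Command Line Interface Reference for your release:

nas_fs -name fs01 -create size=10G pool=clarsas_archive -auto_extend yes -hwm 90% -max_size 100G        (creates a 10 GB file system that auto-extends at 90% usage, up to 100 GB)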


When a file system is extended, AVM tries to satisfy the new size request from the same storage pool member as the file system. In other words, another slice is taken from the same stripe volume as the file system. Then another metavolume is created from the new slice, and this new metavolume is concatenated with the original metavolume to extend the file system. If there is not enough space in the original stripe volume, a new stripe volume is created from new dVols, and a slice taken to satisfy the file system extension request. Calculating the automatic extension size depends on the extend_size value and the current file system size, the file system I/O rate and polling interval. If the file system usage, after the first extension, is within three percent of the HWM, the Control Station extends the file system by an additional amount, bringing file system usage below three percent of the HWM. If a file system is smaller than the extend_size value, it extends by its size when it reaches the HWM. If a file system is larger than the extend_size value, it extends by 5 percent of its size or the extend_size, whichever is larger, when it reaches the HWM. In the example shown, the file system, composed of meta volume v110, was originally built from a 20 GB slice, s69, of the 80 GB stripe volume, v107. The file system is then extended by taking another 20 GB slice, s71, from the same stripe volume, making a new meta volume, v117, from this slice, and using the new meta volume to extend the file system.
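
The same AVM behavior applies when an extension is requested manually. A hedged CLI sketch (the file system name and size are examples only):

nas_fs -xtend fs01 size=10G        (extends fs01 by 10 GB, drawing space from the same storage pool)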


While an automatic file system extension is running, the Control Station blocks all other commands that apply to the file system. When the extension is complete, the Control Station allows the commands to run. The options associated with automatic extension can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the automatic file system extension, HWM, or maximum size options. Enabling automatic file system extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. When there is not enough storage space available to extend the file system to the requested size, the file system extends to use all the available storage.


The thin enabled option of a file system, which can only be used in conjunction with auto extend, allows you to allocate storage capacity based on your anticipated needs, while you dedicate only the resources you currently need. This allows the file system to slowly grow on demand as the data is written. NFS or CIFS clients and applications see the virtual maximum size of the file system, of which only a portion is physically allocated.


Thin provisioning may be enabled at file system creation time, or at a later time by accessing the file system's properties. Once the Thin Enabled option is selected, the auto-extension option is also selected and grayed out. Here we also need to specify the maximum size to which the file system can grow.
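
From the Control Station CLI, the thin-enabled behavior corresponds to the virtual provisioning option of nas_fs. A hedged sketch (the names and sizes are examples; verify the options in the CLI reference for your release):

nas_fs -name fs02 -create size=10G pool=clarsas_archive -auto_extend yes -vp yes -max_size 200G        (creates a thin-enabled file system that presents 200 GB to clients while initially allocating 10 GB)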

VNX for File supports a maximum file system size of 16 TB and a minimum size of 2 MB. The maximum number of file systems per Data Mover is 2048, while the maximum number of file systems per cabinet is 4096. Always validate these specifications against the latest version of the release notes.

This Lab covers the configuration and management of a file system with AVM. In the lab exercise a file system will be created. Then a file system extension operation will be performed.

This lab covered file system configuration and management with AVM. A file system was created and then a file system extension operation was performed. Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?

This lesson covers the NFS protocol and the process involved in making a file system available to NFS clients and applications over the network. The lesson explains how to export a file system at the root level and at the sub-directory level. It also covers how to mount the exported file system on the NFS client, considering the host/user access level and the security authentication process.

With networking and file systems in place, the next phase is to provide access to the file system for NAS users and applications using the NFS or CIFS protocols. We will first examine the configurations for NFS. The procedure to implement NFS file sharing includes exporting the mounted file system, as well as setting access control and permissions.

NFS stands for Network File System. It is a client/server distributed file service that provides file sharing in network environments. Client computers in the enterprise network are able to access file system data stored on VNX for File or Unified storage configurations. To provide access to data stored on the VNX storage system, a Data Mover is configured as an NFS server. The file systems on the Data Mover must be mounted, and a path to the file systems must be exported. After the export, the file systems are available to be mounted by remote NFS client systems. NFS environments can include UNIX and Linux clients, as well as ESXi hosts and virtual machines running Linux guest operating systems. Microsoft Windows systems configured with third-party applications that provide NFS client services (such as Hummingbird) can also be granted access. VNX supports file system access for clients running versions 2, 3, and 4 of the NFS protocol. NFS version 4 is a further revision of the NFS protocol defined by versions 2 and 3. It retains the essential characteristics of previous versions (a design independent of transport protocols, operating systems, and file systems) and integrates file locking, strong security, operation coalescing, and delegation capabilities to enhance client performance.

Clients authorized to access the NFS exports are specified through their hostname, netgroup, subnet, or IP address. A client system may have read-write access while a particular user might have only read access to a specific file. This slide shows the authentication methods. The VNX NFS service authenticates users through different mechanisms: UNIX security, Secure NFS, and an authentication daemon.
UNIX security: Authentication is performed by the NFS client machine; the UIDs and GIDs are carried by the RPC protocol. By default, NFS exports use UNIX user authentication.
Secure NFS: Provides Kerberos-based user and data authentication, data integrity, and data privacy. Kerberos, a distributed authentication service, is designed to provide strong authentication using secret-key cryptography.
Authentication daemon: For a Windows system (PC client) that uses NFS to access the VNX, an authentication daemon, typically rpc.pcnfsd or pcnfsd, is used to bridge the differences between Windows and UNIX NFS user authentication methods.
All NFS versions support UNIX security and the PC authentication daemon. NFS versions 3 and 4 support secure NFS by using either a UNIX, Linux, or Windows Kerberos KDC.

The Create Mount dialog box allows the selection of the file system and determines where it will be mounted, the path, the state mode, and the access-checking policy.
• File System Name: Select the file system to mount.
• Mount On: Select the Data Mover on which the file system will be mounted.
• Path: Specify the pathname of the new mount. The pathname must begin with a forward slash (/). It is limited to 255 bytes (represented as 255 ASCII characters or a variable number of Unicode multibyte characters), and can include upper and lowercase letters, numbers, forward slashes, hyphens (-), underscores (_), and periods (.).
• Read Only: Select the write mode for the file system: Read/Write or Read Only.
• Access-Checking Policy: Select an access-checking policy for the file system. The default policy is NATIVE. Access-checking policies apply only when the Data Mover’s user authentication is set to the recommended default, NT. This is set by using the -add security option of the server_cifs command. An access-checking policy defines the file and directory permissions that are applied (UNIX mode bits, Windows ACLs, or both) when a user attempts to access a file system in a mixed NFS and CIFS environment.
• Virus Checking Enabled: Clear to disable the Virus Checker protocol for the file system. This option is enabled by default.
• CIFS Oplocks Enabled: Clear to disable opportunistic lock granting for CIFS client access on this file system. Opportunistic locks reduce network traffic by enabling CIFS clients to cache the file and make changes locally.
• Set Advanced Options: Select to display advanced options. If clear, only basic properties appear.
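For reference, the equivalent operation is available from the Control Station CLI via the server_mount command. A minimal sketch, assuming a Data Mover named server_2 and a file system named fs01 (both placeholders):

# Mount fs01 read/write on server_2 at /fs01
server_mount server_2 -option rw fs01 /fs01

# Or mount it read-only
server_mount server_2 -option ro fs01 /fs01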

The Create NFS Export page is displayed to configure the export. Various options can be configured to allow a host IP address, an IP subnet, and/or a netgroup to gain access to file system resources. Every file system exported for NFS must have security options specified in order to grant access to the appropriate users. The Host Access options are:
• Read-only Export: Exports the path for all NFS clients as read-only.
• Read-only Hosts: Exports the path as read-only for the specified NFS clients. A client can be an IP host, subnet, or netgroup.
• Read/Write Hosts: Exports the path as read/write for the specified IP host, subnet, or netgroup.
• Root Hosts: The specified IP host, subnet, or netgroup is given root access to the file system.
• Access Hosts: Provides default access (read and execute) for the specified clients and denies access to NFS clients that are not given explicit access.
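The same host-access options map to the server_export command on the Control Station. A hedged sketch with placeholder addresses; verify the option syntax against the server_export man page:

# Export /fs01 read/write to a subnet, with root access for one admin host
server_export server_2 -Protocol nfs -option rw=192.168.65.0/255.255.255.0,root=192.168.65.50,access=192.168.65.0/255.255.255.0 /fs01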

On UxFS-based file systems that support UNIX and Linux clients, permissions are defined by mode bits, which are divided into four parts. The first part is the first character of the permission string: regular files are represented with a hyphen (-), while directories and links are represented with the letters d (directory) and l (link). The second, third, and fourth parts are represented with three characters each: the second part is the owner permissions, the third part is the group permissions, and the fourth part is for others. Others are all users that are neither the owner nor members of the owner group. In order to allow access to the exported file system, the VNX looks at the owner, group, and others permissions. The minimum permission to access the file system is read-only (4). For instance, to give read, write, and execute to the owner and group, and read and execute to others on the directory eng_data, the chmod command would be as follows: # chmod 775 eng_data Note: The example above is not an EMC recommendation for your environment. It is shown for training purposes only.

When a UxFS file system is created, by default it contains, at its root level, two key directories: .etc and lost+found. These directories are extremely important to the function and integrity of the file system. The .etc directory contains configuration files, and lost+found contains files that are restored after a system crash or when a partition has not been unmounted before a system shutdown. In order to keep these directories safe, export a sub-directory instead of the root of the file system. This keeps the directories from being accessed by the general user community and protects them from accidental removal or damage. It is a best practice to create an export at the top level of the file system and limit its access to the administrator’s client system as root. The administrator can then mount the top-level export, create a sub-directory, and modify the UxFS permissions as needed for the general user community. The path to the sub-directory can then be exported to the user clients as needed. The slide shows an example of this best-practice procedure for a file system fs01 that is mounted on the Data Mover at the /fs01 mount point. The top level of the file system (path /fs01) is exported to the administrator’s client system with root-level access. The administrator accesses that export and creates a sub-directory (data in the example), then configures UxFS permissions on the data subdirectory appropriate for the user access needed. Finally, the /fs01/data path is exported to the user community with the appropriate client export access settings.
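The best-practice sequence described above could look like the following; host names, addresses, and paths are placeholders, and the commands are a sketch to adapt rather than an exact procedure:

# On the Control Station: export the file system root to the admin client only
server_export server_2 -Protocol nfs -option root=admin_host,rw=admin_host,access=admin_host /fs01

# On the admin client (as root): mount the root export and create the user sub-directory
mount dm2_interface:/fs01 /mnt/fs01
mkdir /mnt/fs01/data
chmod 775 /mnt/fs01/data

# Back on the Control Station: export only the sub-directory to the user community
server_export server_2 -Protocol nfs -option rw=192.168.65.0/255.255.255.0,access=192.168.65.0/255.255.255.0 /fs01/data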

The Network Information Service (NIS), as the name implies, is a service for distributing network configuration data, such as host names and users, throughout the network. After configuring the NIS server in your environment, you can use Unisphere to integrate the VNX with NIS. To configure NIS using Unisphere, you must know the Data Mover for which the service is being configured, the NIS domain name, and the IP address of the NIS server.
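From the CLI, the same information is supplied to the server_nis command. This is a sketch only; the domain name and server address are placeholders, and the exact syntax should be checked in the server_nis man page:

# Configure NIS on server_2 for a hypothetical domain and NIS server
server_nis server_2 nisdomain 192.168.65.20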

NFS can be used to establish a client-server relationship between an ESXi host (NFS client) and the VNX (NFS server). NFS allows volumes to be accessed simultaneously by multiple ESX and ESXi hosts running multiple virtual machines. Currently, VMware supports NFS version 3 over TCP/IP. An NFS datastore can be provisioned to an ESXi server from the VMware vSphere client interface by selecting an existing VNX file system NFS exported through Unisphere. A VNX file system and NFS export can be created with the use of the EMC Virtual Storage Integrator (VSI) plug-in for VMware vSphere.
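On the ESXi side, an NFS export can also be mounted as a datastore from the command line. A sketch assuming ESXi 5.x esxcli syntax, with placeholder names:

# Mount the VNX NFS export as a datastore on the ESXi host
esxcli storage nfs add --host=dm2_interface --share=/fs01/data --volume-name=VNX_NFS_DS01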

EMC Virtual Storage Integrator (VSI) Unified Storage Management (USM) Plug-in for VMware vSphere is a VMware vCenter integration feature designed to simplify storage administration of the EMC VNX and VNXe unified storage platforms. The feature has built-in VAAI functionality that enables VMware administrators to provision new NFS and VMFS datastores, and RDM volumes, directly from the vSphere Client. VAAI, or vStorage APIs for Array Integration, consists of a set of APIs that allows vSphere to offload specific host operations to the storage array. These features reduce host workload, lessen network traffic, and improve overall performance. VNX OE for File and Block supports VAAI with the FC, iSCSI, and NFS protocols. The Unified Storage Management feature can be downloaded from EMC Online Support. The application is accessed and run from within the vCenter Server.

Once VSI USM is installed and enabled on the vCenter Server, the VNX storage system can be added to take advantage of the plug-in features.

EMC Unified Storage Management Plug-in for VMware vSphere provides the ability to:
• Provision storage and extend VMFS file systems. For VMFS datastores and RDM volumes on block storage, users can use the feature to provision and mount new storage based on storage pools or RAID groups, and to set a tiering policy for the new storage.
• Compress virtual machines (VMs), reducing the storage space utilized for VMs through VNX File Deduplication and Compression.
• Uncompress VMs on demand.
• Create VM clones: Fast Clones (space-optimized copies) and Full Clones (array-accelerated copies) on NFS datastores.
• Provision cloned VMs to VMware View and refresh desktops in VMware View.

The Provision Storage option launches a wizard that prepares a NAS NFS file system, a block VMFS file system, or a block RDM volume for use by the ESXi server(s). Right-click the object to provision to (the object can be a host, cluster, folder, or data center); if you choose a cluster, folder, or data center, all ESXi hosts within the selected object will mount the newly provisioned NFS datastore, VMFS datastore, or RDM volume. Select EMC > Unified Storage, then select Provision Storage. The Provision Storage wizard will appear.

The Provision Storage wizard presents the user with a series of ordered tasks for creating a new datastore for the ESXi server, starting with the choice of using a Disk/LUN or a Network File System. These wizard screens illustrate creating an NFS datastore. Select Network File System (NFS) and the Provision Storage wizard will prepare a NAS NFS file system for use by the ESXi server.

Choose the VNX storage system where the new file system is to be created or where the NFS export of an existing file system is to be found, or add a new storage system. Type the name for the new NFS datastore. Select the Data Mover where the existing file system is mounted (or where it will be created) and the IP interface that will be used for the NFS connection. The Provision Storage wizard allows the user to select an existing file system that was previously created and NFS exported on the VNX through the Unisphere interface, or to create a new file system on the VNX and NFS export it.

The “New NFS export” screen allows the selection of the storage pool, thin provisioning, and the initial and maximum sizes for the file system. Finally, the Advanced button allows the configuration of any advanced settings on the exported file system, including setting a different mount point (to hide the root of the file system).

After clicking Finish, the Unified Storage Management feature:

• Creates a file system on the selected storage pool.
• Mounts the newly created file system on the selected VNX Data Mover.
• Exports the newly created file system over NFS and provides root and access privileges to the ESXi hosts that will mount the NFS datastore.
• Creates the NFS datastore on the selected ESXi hosts.
• Updates the selected NFS options on the chosen ESXi hosts.

On the VNX Storage Array we see the properties of the newly created file system that has been NFS exported.

This Unisphere feature presents information about all ESXi servers attached to the VNX storage system. The feature simplifies the discovery of the relationship between virtual machines within an attached ESXi server and the storage with which they are associated. From the “Host” tab, select “Virtualization”. Right-click an already registered ESXi server and click Virtual Machines to retrieve information about the vmdk files and related VNX storage, or run the “Hypervisor Information Configuration” wizard to add a new ESXi host. You have the choice of discovering the servers either by going through the Virtual Center, provided there is one, or by connecting directly to the ESXi server in question. After reading the steps that will be performed by the wizard, select the storage system to which the ESXi server is connected and follow through with the rest of the steps.

This Lab covers provisioning NFS storage on the VNX for client access. An NFS export will be created for a VNX file system and client access is demonstrated. Then an export of a subdirectory of the file system is created and accessed. Finally, root permissions are assigned to a NFS export and client access is verified.

This lab covered NFS storage provisioning on a VNX for client access. An NFS export was created and accessed by a client. A NFS export of a VNX file system subdirectory was created and accessed. Finally, root permissions were assigned to a NFS export and client access was tested. Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?

This lesson covers an overview of preparing for, and configuring CIFS on a VNX system.

With networking and file systems in place, the next phase is to provide access to the file system for NAS users and applications using the NFS or CIFS protocols. We will next examine the configurations for CIFS. The procedure to implement CIFS file sharing includes working with the existing network and VDM configurations, starting the CIFS service, creating a CIFS server to emulate a Microsoft Windows file server, joining a Microsoft Windows Active Directory domain, and creating a CIFS share on the VNX.

Configuring CIFS is a multi-step process with an end result of having the VNX system storage available for access via its CIFS server and shares to users on the Microsoft network. There are several configuration tasks done on the VNX to prepare for the CIFS configuration. The first step is to make the VNX available on the network by configuring Data Mover IP networking. This is done by creating an interface and assigning its IP addressing. Network routing is also configured to provide connectivity across networks. The next task is to configure network services on the Data Mover. The DNS service is required for Windows Active Directory, and it is a recommended best practice to use Dynamic DNS in the CIFS environment. The NTP service should also be configured to maintain time synchronization within the Windows network, because Windows Active Directory uses Kerberos authentication, which is time sensitive. The next step is to configure a Virtual Data Mover on the Data Mover. Although it is possible to configure a CIFS server on a physical Data Mover, it is a best practice to configure the CIFS server on a Virtual Data Mover. A final preparation task is to configure a file system for the Virtual Data Mover. The file system will provide the file storage space and be made available to users via a CIFS share.

With the Data Mover networking, network services, Virtual Data Mover, and file system in place on the VNX, the CIFS-specific configurations can now be done. The first CIFS-specific task is to start the CIFS service. The service runs on the physical Data Mover and is not started by default. With the service started on the Data Mover, its configured interfaces will communicate on the Windows network via the CIFS protocol.

The next step is to create the VNX CIFS server. A CIFS server is a “logical” file server that utilizes the CIFS protocol to transfer files to and from CIFS clients. To follow best practices, create the CIFS server on the prepared VDM; this allows the CIFS server to be portable to other physical Data Movers. The CIFS server uses an available interface to communicate on the network. The next step is to join the CIFS server to the Windows domain. By default, the join operation creates an Organizational Unit named EMC Celerra within the Windows Active Directory domain, and the CIFS server is contained within it.
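For reference, these steps can also be performed from the Control Station CLI. The sketch below uses placeholder names (server_2, vdm01, VNX_CIFS01, hmarine.test, cge0_int1), and the server_cifs option syntax should be confirmed in Configuring and Managing CIFS on VNX before use:

# Start the CIFS service on the physical Data Mover
server_setup server_2 -Protocol cifs -option start

# Create a CIFS server on the VDM and join it to the Active Directory domain
server_cifs vdm01 -add compname=VNX_CIFS01,domain=hmarine.test,interface=cge0_int1
server_cifs vdm01 -Join compname=VNX_CIFS01,domain=hmarine.test,admin=administrator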

The final step of configuring CIFS is to make file storage available on the network to Windows clients. A CIFS share is created using the prepared file system. The share is made available to the CIFS server which then makes it available on the network to the CIFS clients. CIFS is now configured on the VNX. The steps taken have made a CIFS server that is available on the network and joined to the Active Directory domain. The VNX file system is providing file storage to CIFS clients though the CIFS share.

The slide illustrates CIFS Shares created from a file system’s data structure and made available through a CIFS server. The top level of the file system along with two lower-level directories have been exported as CIFS shares. The pathname of the file system data structure is specifically exported and is given a share name. A share name ending in $ creates a share that is hidden. It is common practice to hide top-level shares from users and to make lower-level shares available to users. This allows administrators to access the top-level share and create the needed directory structures for creating additional shares for the users and organizations. Additionally, it has the benefit of keeping the file system’s lost+found and .etc directories hidden from users.
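A hedged CLI sketch of this share layout, with placeholder share, server, and path names:

# Hidden top-level share for administrators, plus a user-visible share on a sub-directory
server_export vdm01 -Protocol cifs -name fs01$ -option netbios=VNX_CIFS01 /fs01
server_export vdm01 -Protocol cifs -name data -option netbios=VNX_CIFS01 /fs01/data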

A CIFS Server interface can be changed using the Unisphere GUI. This functionality is provided on the CIFS Server properties page. The considerations in modifying a CIFS Server interface are as follows:

• Interface stealing:
  - Is possible between CIFS servers hosted on the same physical Data Mover
  - Is possible between CIFS servers hosted on the same Virtual Data Mover
  - Is not possible between CIFS servers hosted on different Data Movers (physical or virtual)
• An interface for the Default CIFS Server cannot be changed. The Default CIFS Server automatically uses interfaces that are not currently used by any other CIFS server.
• If the interface of a CIFS server is disabled, the CIFS shares that are connected through this interface will no longer be accessible. The shares need to be reconnected through a new interface.

It is possible to assign an interface to a CIFS server that is already in use by another CIFS server. This is termed as “interface stealing”. In the slide, the CIFS Server VNX_CIFS02 is being configured. The Data Mover has an existing CIFS server VNX_CIFS01 that is using the 192.168.65.8 interface. When that interface is assigned to the new CIFS Server VNX_CIFS02, a warning message appears. The warning displays the message: The interface is already in use by another CIFS server. Click OK to use it for the new server instead (the existing server will no longer be accessible on this interface).

The CIFS service must be stopped and restarted for certain configuration changes to take effect, such as a WINS server configuration. Refer to the product document Configuring and Managing CIFS on VNX for other settings requiring CIFS service restarts. It is very important to know that stopping the CIFS service on a physical Data Mover stops all CIFS servers configured on that physical Data Mover. Any VDMs that are loaded onto the physical Data Mover will also have their CIFS servers stopped. The data served by the CIFS servers will be unavailable to users until the CIFS service is started again.

When moving a VDM containing a CIFS server to another physical Data Mover, the target physical Data Mover must have the same interface naming to support the CIFS server, because the CIFS server binds to the interface name. There are also name resolution issues to consider after the move. If the target interface has different IP addressing, when the VDM loads onto the target, the CIFS server sends an update to DNS for its name and IP address. The CIFS server record within DNS will be updated if dynamic DNS is used. Clients that had a session established to the CIFS server before the move will have to re-establish the session. The client’s DNS cache will maintain the original name and IP address pairing for several minutes, so to re-establish a session to the CIFS server, the user will have to wait until the client DNS cache expires or manually flush the DNS cache. If the target interface is using the same IP address, to avoid having duplicate IP addresses on the network, the inactive Data Mover interface will have to be manually “downed”. This requires manual intervention by the VNX administrator, adding another few steps to the process of a VDM move operation.

There are some restrictions to be aware of when using VDMs containing CIFS servers. A VDM containing a CIFS server cannot be loaded onto a physical Data Mover having a “default” CIFS server. A “default” CIFS server uses all available interfaces on the physical Data Mover, so no interfaces would be available for a CIFS server contained within a VDM. Another VDM CIFS server restriction relates to antivirus functionality. The antivirus solution requires a “global” CIFS server created at the physical Data Mover level; a CIFS server contained within a VDM cannot be the “global” CIFS server that provides the antivirus functionality. There are several other restrictions for VDM CIFS servers that relate to command line interface configuration. Refer to the Configuring Virtual Data Movers on VNX document for a complete list.

User mapping with VNX is needed to uniquely identify users and groups from Windows and UNIX/Linux environments that access the VNX with their respective file access protocols. Windows environments use Security Identifiers (SIDs) to identify users and groups. UNIX and Linux environments use User Identifiers (UIDs) and Group Identifiers (GIDs) to identify users and groups. The VNX file system is UxFS based and uses UNIX/Linux UIDs and GIDs. When Windows users access the VNX, the user and group SIDs need to be mapped to UIDs and GIDs, which are applied to the users’ data on the VNX file system. User mapping provides this correlation of Windows SIDs to UNIX/Linux UIDs and GIDs. User mapping is required in a CIFS-only user environment and in a mixed CIFS and NFS user environment. It is not required for NFS-only user environments; in that case, the VNX uses the UIDs and GIDs provided natively with NFS access.

There are a number of user mapping methods available to the VNX to support different user environments. Some mapping methods are internal to the VNX and some come from external systems within the user environment. The focus of this course is on the Usermapper mapping method; the other mapping methods are listed in the table here to provide an overview of user mapping on the VNX. In general, when a CIFS user with SIDs accesses a VNX CIFS server, the user mapping method provides corresponding UIDs and GIDs for the Windows user and group SIDs. For further details, consult the document Configuring VNX User Mapping from the VNX series documentation. Usermapper: This mapping method is used in a CIFS-only user environment and is provided by default on Data Mover 2 of all VNX systems.

Secure mapping, also known as secmap or secmap cache, augments user mapping on the VNX. Secmap effectively listens to mapping sources and records the mapping information provided. It is important to know that secmap does not generate user mappings – it simply records the mapping information that a mapping source provides. Once a mapping source has provided initial user and group mappings, any subsequent access by the user or group will get its mapping information from secmap. Secmap is designed to improve response time for a subsequent mapping request of a user or group that has already been mapped. Secmap holds the mapping information in a binary database. The mapping information is retained though system reboots and power cycles. Secmap is present on all production Data Movers, both physical Data Movers and virtual Data Movers. The Secmap mapping entries are displayed using the command line interface only. The entries are not displayed in the Unisphere interface.
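The secmap entries can be listed from the Control Station CLI; the command below is an assumption based on the server_cifssupport secmap facility and should be verified against its man page before use:

# List the secmap entries recorded on server_2 (assumed syntax)
server_cifssupport server_2 -secmap -list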

The Data Mover follows a search order for user mapping. Only enabled mapping methods are searched. The default search order is shown and is described below:

• The Data Mover first determines if it has a mapping for the SID in secmap.
• Failing to find the user in secmap, the Data Mover checks its local user and group files.
• If no mapping is found and NIS is configured, the Data Mover queries NIS for a UID or GID.
• If the Data Mover does not receive the mapping from NIS, and LDAP-based directory services are configured, the Data Mover queries LDAP.
• If no mapping is found, it checks Active Directory.
• When Active Directory cannot resolve the ID mapping, the Data Mover queries Usermapper.

The default mapping order is affected if there is an nsswitch.conf file present on the Data Mover. The file can have definitions for the order of search for users (passwd) and groups. Possible entries are files, NIS, and/or LDAP. The mapping search order for files, NIS and LDAP will be the order that is defined in the nsswitch.conf file if it is present.

When ntxmap is enabled, the mapping mechanism first refers to the ntxmap rules before using secmap. The mapping provided by ntxmap replaces any previous secmap cache for a user, which was created by another user mapping method. Any existing entry in secmap for this user either gets updated with the new information, or a new ntxmap mapping is cached. Secmap is queried for ntxmap users only if the ntxmap.conf file is unavailable, empty, or unable to provide a mapping.

The Usermapper service is a mapping method which runs on a Data Mover in the VNX and is used in CIFS-only user environments. Usermapper automatically generates UIDs and GIDs for Windows domain user and group SIDs and maintains the mapping in a database. The generated UID and GID values start at 32768 and increment for each new user and group being mapped. Custom UID and GID ranges can be configured with a usrmap.cfg file. Custom ranges are not recommended. Contact EMC support for use of custom ranges.

There are different Usermapper roles used for single or multiple VNX environments: Primary, Secondary, and client. The Primary and Secondary roles must run on physical Data Movers; physical or virtual Data Movers can have the client role. Standby Data Movers do not have Usermapper roles. The Primary Usermapper generates user mappings and is defined by default to run on Data Mover 2 on every VNX for File. Only one Primary Usermapper is used within a VNX environment that employs Usermapper. All additional VNXs within the environment need to be manually configured with a Data Mover having the Secondary Usermapper role; on those additional VNXs, the default Primary Usermapper role on Data Mover 2 must be changed to the Secondary Usermapper role. A Secondary Usermapper does not generate user mappings but rather queries the Primary Usermapper for the mappings. A Usermapper client is a Data Mover that has neither the Primary nor the Secondary Usermapper role. Usermapper clients query the Primary or Secondary Usermapper within their VNX for user mappings.
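Changing the Usermapper role is a CLI operation on the Control Station. The sketch below assumes the server_usermapper command and a placeholder Primary address; consult Configuring VNX User Mapping for the exact syntax before using it:

# On an additional VNX, point Data Mover 2 at the Primary Usermapper (assumed syntax)
server_usermapper server_2 -enable primary=192.168.64.10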

In this and the next several slides, the mapping operations of Usermapper are described. The operations are illustrated for the Usermapper roles within a multi-VNX environment. The scenario environment includes the operations of a single Primary Usermapper on one VNX and two Secondary Usermappers on two additional VNXs. The operations of a Usermapper client are also described. For simplicity, the slides illustrate only the cumulative mapping of several Windows user SIDs to UIDs. The mapping of group SIDs to GIDs is not shown but is conceptually the same as the user SID to UID mapping. This slide illustrates the mapping operation for Windows User1 on the first access to a VNX Data Mover that has the Primary Usermapper role. The Windows user access includes the user SID. The Data Mover’s Primary Usermapper generates a UID for the user SID and records this SID-to-UID mapping in its Usermapper database. The Data Mover’s secmap also records the mapping in its database, and this recorded mapping will be used to provide the mapping for any subsequent User1 access to the Data Mover.

This slide builds upon the previous slide to show the cumulative mapping contained in the Primary Usermapper database. In this example, Windows User2 accesses a Data Mover on the second VNX that has a Secondary Usermapper role. To provide a UID mapping for the user SID, the Secondary Usermapper sends a mapping query to the Primary Usermapper, which is on Data Mover 2 on VNX1. That Data Mover does not have a mapping entry in its secmap, so the Primary Usermapper must generate a mapping for this new user. A mapping is generated and stored in the Primary Usermapper database for the new user, and its secmap records the mapping in its database. The Primary Usermapper then replies to the Secondary Usermapper query on VNX2. The mapping entry is stored in the Secondary Usermapper database and in the Data Mover’s secmap database.

This slide builds upon the previous slides to show the cumulative mapping contained in the Primary and Secondary Usermapper databases. In this example, Windows User3 accesses Data Mover 2 on VNX3, which has a Secondary Usermapper. The operations are similar to the previous slide: a mapping query is made from the Secondary Usermapper to the Primary Usermapper, and the new user mapping is generated and recorded in the same manner. Notice now that the Primary Usermapper has mapping entries for all the users, while the two Secondary Usermappers each have an entry for a single, different user.

This slide builds upon the previous slides to show the cumulative mapping contained in the Primary and Secondary Usermapper databases. In this example, Windows User4 accesses Data Mover 3, which is a Usermapper client on VNX1. The client sends a mapping broadcast over the VNX internal network to locate a Data Mover running Usermapper, either Primary or Secondary. In this case the VNX has a Data Mover that is running a Primary Usermapper. A mapping is generated and recorded on Data Mover 2, and a mapping reply for the new user is sent to the client, which records the mapping in its secmap. Notice that in this multi-VNX environment with a single Primary Usermapper and two Secondary Usermappers, the entries in each Usermapper database are different. Only the Primary Usermapper database holds mapping entries for all the Windows users; the Secondary Usermapper databases hold entries only for the users that accessed their VNXs.

Now that we can integrate NAS clients with VNX File storage at a basic level, we can move on to discuss other aspects of VNX File services. The components that we have just discussed, such as networking, VDMs, and file systems, can now be monitored and managed. We can also enable several VNX File features to address file content retention and storage space efficiencies.

Advanced networking features can be employed, including LACP and Fail-Safe Networking. We can also configure local replication for VNX File using VNX SnapSure. As we move forward in this course, we will discuss these topics.

This module covered the configuration of VNX File services for both NFS and CIFS clients. Provisioning storage for File access requires additional layers of configuration beyond that of Block provisioning. For example, in File services, the array handles the volume and file system creation and management; it also must be configured to integrate with the network user environments for NFS and/or CIFS.

VDMs provide portability of File configurations, both from Data Mover to Data Mover, and from one VNX to another. VNX’s AVM incorporates known best practices for file system configuration to provide VNX administrators a simple and automated method to build file systems and their underlying volume structure. File systems are made available to NAS users and applications by exporting, or sharing, directories (or mountpoints) within a VNX file system.

This Lab covers provisioning CIFS storage on a VNX VDM for client access. The lab exercise first prepares the VNX for the CIFS environment. A CIFS server is created on a VDM and joined to a Windows Active Directory domain. Then CIFS shares are created and accessed from a Windows client. Finally a VDM containing a CIFS Server will be moved to a different physical Data Mover.

This lab covered CIFS storage provisioning on a VNX VDM for client access. The VNX was prepared for CIFS. A CIFS server was created on a VDM and joined to the Windows Active Directory domain. CIFS shares were created and accessed from a Windows client. The VDM containing the CIFS server was moved to a different physical Data Mover. Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?

This module focuses on the file content and space efficiency features for File. Quotas, File Level Retention, File Deduplication, and Fixed-Block Deduplication feature enablement and management are detailed. The module also explores the configuration and use of the Space Reclaim feature and describes the file system Low Space protection.

This lesson covers the VNX file system Quotas feature. It provides an overview of the feature and describes how to configure and manage default file system quotas, Tree Quotas and explicit User and Group Quotas. Quota policy and default quota planning considerations are also explored.

A file system space-based feature for the VNX is Quotas. File system quotas provide the VNX administrator with a tool to track and/or limit usage of a file system. Quota limits can be designated for users, groups, or a directory tree. These limits can also be combined. For example, a user can have a limit on a directory tree which itself has a limit.

Quota limits are supported for both CIFS and NFS data. The quota limit configuration can be set to limit usage based on storage size in megabytes and/or based on file count (inodes). The defined limit placed on the user, group, or directory tree can be enforced in several ways. If a hard limit is defined, as soon as the limit is reached no further data can be saved and a quota exceeded error message is generated. If a soft limit is defined, a grace period is allowed when the defined quota limit is exceeded. Quotas can also be configured just to track usage without enforcing a limit. The document Using Quotas on VNX covers the VNX Quota feature in detail and is available for download from EMC Online Support.

The Properties page of a specific file system contains a Quota Settings tab for configuring the quota settings of that file system. The Quota Settings page is also available by selecting the Manage Quota Settings link from the File Storage section of the task pane. EMC recommends enabling quotas on a file system before it is populated with data, because enabling quotas on a file system containing data affects system performance (although operations are not disrupted). If enabling quotas on a file system with existing data, it is recommended to do so during non-production hours to avoid production impact. From this page, User and Group quotas can be enabled on the file system, with the option of enforcing hard limits (which prevent writing when the limit is reached). The page also provides settings to define User and Group hard and soft limits based on space consumption in megabytes or on file count. If no limit is specified, the system treats it as unlimited and in effect just tracks the space or file count usage for the Users or Groups. A grace period for hard and soft limits is also definable, and there are options to log messages when limits are exceeded. The quota settings defined here are the default quotas applied to all Users and Groups storing data on the file system.

The example illustrates default User and Group Quotas being enabled on the Quota_fs file system. Since no storage or file count limit is specified, this quota is just for tracking User and Group space and file count usage on the file system.

To configure a Tree Quota navigate to the File Systems page and select the Tree Quotas tab and click the Create button. In the dialogue page select a File System for the Tree Quota. Input a Path to a new directory, not an existing one. Define the Hard and Soft limits desired for either Storage space or File Count.

To configure an explicit User Quota navigate to the File Systems page and select the User Quotas tab and click the Create button. In the dialogue page select a File System for the User Quota. Input a Path to an existing Tree Quota or select None to apply the quota to the entire file system. In the User Names section it is possible to define a User Quota for either a Windows user from a specific Windows Domain or a UNIX user from a UNIX domain. There is an option to Use Default Quota Limits if they are set on the file system. Explicit Hard and Soft limits can be defined for the user as desired for either Storage space or File Count. The example illustrates an explicit User quota for the Windows user secho from the hmarine.test domain having a Hard Limit of 5 GB of storage space.

To configure an explicit Group Quota navigate to the File Systems page and select the Group Quotas tab and click the Create button. In the dialogue page select a File System for the Group Quota. Input a Path to an existing Tree Quota or select None to apply the quota to the entire file system. In the Group Names section it is possible to define a Group Quota for either a Windows group from a specific Windows Domain or a UNIX group from a UNIX domain. There is an option to Use Default Quota Limits if they are set on the file system. Explicit Hard and Soft limits can be defined for the group as desired for either Storage space or File Count. The example illustrates an explicit Group quota for the Windows group Eastcoast Sales from the hmarine.test domain having a Hard Limit of 50 GB of storage space.

When configuring quotas, first decide which quota policy to use. There are two types of quota policies:
• Blocks: This is the default policy. Storage usage is calculated in terms of file system blocks (8 KB) used and includes all files, directories, and symbolic links.
• Filesize: Storage usage is calculated in terms of logical file size; directories and symbolic links are ignored. This policy is recommended for CIFS environments. The example illustrates using the server_param command to change the quota policy on Data Mover 2 to filesize.

Applying explicit quotas lets you customize user or group quotas on a case-by-case basis for each individual user or group creating files or using space in the file system or quota tree. Explicit quotas supersede default quotas. Changes to the default quotas have no effect on users who have already written to the file system; those users inherited the default quotas set when they first wrote to the file system. If no default quotas were set, then no default quotas apply to those users.
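The server_param example referenced above would look similar to the following sketch; confirm the facility and parameter names in the Parameters Guide for VNX for File before applying them:

# Change the quota policy on Data Mover 2 to filesize (assumed facility/parameter names)
server_param server_2 -facility quota -modify policy -value filesize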

Administrators in a Windows environment can manage VNX User Quotas from a Windows client. Windows clients cannot manage VNX Group or Tree Quotas, and the quota limits they manage are based on storage space (the filesize policy), not on file count.

To access the VNX User Quota entries, from a client with a mapped drive to a VNX share, select the mapped drive Properties page and go to the Quota tab. Clicking the Quota Entries button opens the Quota entries page. From this page the administrator can view, modify, and delete existing VNX User Quotas. New User Quota entries can also be added.

In the NFS environment, normal users and root users can view the User and Group Quotas using the quota command. The command reads the quota entries for a quota-configured file system and any of its Tree Quotas. The command only reads the entries; it cannot perform any management of the quotas, so they cannot be modified, deleted, or added. The first example illustrates a normal user running the quota command with the -s option, which presents its output in human-readable format. The fields of the output are also shown in the example. Note that the example quota is not configured with grace periods or file count limits. The second example illustrates a root user viewing the quota entries for a specific UID. Both example outputs show the total blocks used, the soft and hard storage limits, and the associated storage grace period. The output also displays the file count, the soft and hard count limits, and the associated count grace period. The quota command has multiple options; see its man page for detailed information on its usage.
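For example, on a Linux NFS client (the user name is a placeholder):

# As a normal user: show quotas in human-readable form
quota -s

# As root: view the quota entries for a specific user
quota -s -u jsmith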

This lesson covers the File Level Retention feature of a VNX file system. An overview of the feature is described and enabling the feature on a VNX file system is illustrated. The lesson also details managing file retention for NFS and CIFS files.

File-level retention (FLR) is a licensed software feature for protecting files from modification or deletion until a specified retention date. By using file-level retention, you can archive data to FLR storage on standard rewritable magnetic disks through NFS, CIFS, and/or FTP operations. FLR allows you to create a permanent, unalterable set of files and directories and ensure the integrity of data. There are two different types of file-level retention available: enterprise (FLR-E) and compliance (FLR-C). FLR-E protects data content from changes made by users through CIFS, NFS, and FTP. FLR-C protects data content from changes made by users through CIFS, NFS, and FTP, from changes made by administrators, and also meets the requirements of SEC rule 17a-4(f). An FLR-enabled file system:
• Safeguards data by ensuring integrity and accessibility
• Simplifies the task of archiving data for administrators
• Improves storage management flexibility

The FLR feature is configured from the File System creation page. The feature is Off by default. Select the Enterprise or Compliance retention mode to enable the feature. This feature by itself does not lock files stored on the file system; it just defines how long a file could be locked against deletion if that file were selected for retention. Managing files for retention is performed in several different ways and will be discussed later. If either Enterprise or Compliance is selected, the page then displays three different retention time periods that can be configured: Minimum, Default, and Maximum. Select Unlimited if you want the lock retention period to be infinite, or select Limited to specify the retention period. A Minimum Retention Period specifies the shortest retention period for which files on an FLR-enabled file system can be locked and protected from deletion. A Default Retention Period specifies the retention period that is used in an FLR-enabled file system when a file is locked and a retention period is not specified. A Maximum Retention Period specifies the longest retention period for which files on an FLR-enabled file system can be locked and protected from deletion. Use the Compliance option with caution! Once set, to meet SEC compliance, the infinite setting is permanent in FLR-C, which means the user cannot change the retention period from infinite to another date. For this reason, it is important to exercise caution when locking a file in FLR-C.

One method of managing file locking is from the Properties page of an FLR enabled file. From the FLR Settings tab you can specify whether to automatically lock files in an FLRenabled file system, as well as a policy interval for how long to wait after files are modified before the files are automatically locked. When enabled, auto-locked files are set with the default retention period value. The system default for automatic file locking is disabled. The system default for the policy interval is 1 hour. You can specify whether to automatically delete locked files from an FLR-enabled file system once their retention periods have expired. The system default for automatic file deletion is disabled.

Locking NFS files for retention is done via NFS by adjusting the atime (last access time) of files. To view the atime value of files, use one of the following list command options:
ls -l --time-style=full-iso --time=atime
or
ls -lu

To lock a file, set the file atime value to a future date that is within the retention period values set on the file system, using the touch command with the following options for the specified file:
touch -at <time> <filename>
The example illustrates a file ReadMe.pdf having an atime value of October 2nd 15:00:17 being set to a future value of November 2nd 08:00.
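Putting the commands together on an NFS client might look like the following; the year is assumed here because the slide does not state one, and the timestamp uses the standard touch [[CC]YY]MMDDhhmm format:

# View the current atime of the file
ls -lu ReadMe.pdf

# Lock the file by setting its atime to a future date (assumed year 2015)
touch -at 201511020800 ReadMe.pdf

# Confirm the new atime
ls -lu ReadMe.pdf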

The WIN API SetFileTime function is used to manipulate a Windows file atime, but Windows does not have a standard utility to set the value. EMC provides the FLR Toolkit for managing file retention for CIFS data. The FLR Toolkit is a suite of GUI and CLI applications used to manage FLR protection of CIFS data from VNX shares on FLR-enabled file systems. The toolkit is available for download from the EMC Online Support site. The white paper Managing an FLR-Enabled NAS Environment with the EMC File-Level Retention Toolkit is also available and describes its use and operations for managing file-level retention of CIFS data. FLR protection can be applied manually or automatically to files. When installed on a Windows system, the toolkit adds an FLR Attributes tab to the Windows Explorer properties page of files. The example illustrates the retention options available to manually apply to a specific file:
• Absolute Date and Time
• Incremental (with respect to the current date and time, or the current Retention Date if it exists)
• Infinite retention period
Note that the specified retention date must comply with the minimum and maximum retention dates that are optionally configured on the file system.


The diagram illustrates the various states a file can be in within FLR and the states a file can transition into from a given state. A file in a File Level Retention-enabled file system is always in one of four possible states: Not Locked, Locked, Append-only, or Expired.

Not Locked: All files start as Not Locked. A Not Locked file is an unprotected file that is treated as a regular file in a file system. In an FLR file system, the state of an unprotected file can change to Locked or remain Not Locked.

Locked: A user cannot modify, extend, or delete a Locked file. The file remains Locked until its retention period expires. An administrator can perform two actions on a Locked file:

• Increase the file's Retention Date to extend the existing retention period
• If the Locked file is initially empty, move the file to the Append-only state

Append-only: You cannot delete, rename, or modify the data in an Append-only file, but you can add data to it. The file can remain in the Append-only state forever. However, you can transition it back to the Locked state by setting the file status to Read-only with a Retention Date.

Expired: When the retention period ends, the file transitions from the Locked state to the Expired state. You cannot modify or rename a file in the Expired state, but you can delete the file. An Expired file can have its retention period extended so that the file transitions back to the Locked state. An empty expired file can also transition to the Append-only state.


This lesson covers the File Deduplication feature. It provides an overview of the feature and a walkthrough of its operations. It also covers enabling and managing the feature and lists some considerations for its implementation.


File deduplication gives a Data Mover the ability to compress redundant data at the file level and to share a single instance of the data when files are identical. The file system must have at least 1 MB of free space before deduplication can be enabled. This option is disabled by default. When enabled, file storage efficiency is increased by eliminating redundant data from the files stored in the file system. Deduplication operates on whole files and is best suited to files that are static or nearly static.


During the file deduplication process, each deduplication-enabled file system on the Data Mover is scanned for files that match specific rules set by a policy engine. A file may be deduplicated based on the file's last access time, modification time, file size, or file extension. Different instances of a file can have different names, security attributes, and timestamps. None of the metadata is affected by deduplication. By default, the policy engine uses the following parameters to determine whether a file is eligible for deduplication:

• File size is at least 24 KB (minimumSize)
• File size is at most 8 TB (maximumSize)
• File has not been accessed for the past 15 days (accessTime)
• File has not been modified for the past 15 days (modificationTime)
• File extension


The next several slides cover the deduplication process workflow. In this example, the policy engine processes a file according to the defined policy. VNX for File copies the file to a hidden section of the same file system. This hidden section is dynamically allocated to hold deduplicated data when deduplication is enabled on the file system. The copied data is compressed first. Then the file data is hashed, and the hash value is used to single-instance any identical copies of the data already processed. Next, a stub replaces the original data in the user-visible area of the file system. The stub file references the file data stored in the hidden store.

During the deduplication process, the file system must have enough free space available to hold the original file to be deduplicated plus the compressed version of the file to be stored. An additional 1 percent of the file system must be free to create the hidden store for deduplicated files. For example, for a 10 GB file system, the hidden store will be around 100 MB in size.


Not all files are compressible. Some files such as media files, like in this example, may already be compressed. For File Data Deduplication to further compress these files would yield little to no space savings. Because of this, such files are left uncompressed although they are still evaluated for redundancy.


Not every file meets the requirements of the policy engine. The default policy is configured with preset values for file size and file access thresholds. In our example, the size of this PDF file is below the minimum size set by the policy, so it is too small for deduplication. This file will not be compressed or single-instanced.


Next, a PPT file is scanned to determine its eligibility. This file fails on the access time check. The access time check requires that the file has not been touched for a certain number of days. The access time threshold is used to avoid modifying active files (default is at least 15 days).


The policy engine continues on and scans the next file. This file satisfies the policy engine check and then gets copied to the hidden store where the file will be compressed and hashed. To determine if the file is redundant, the file’s unique hash is compared to the hash table containing previously deduplicated files. In this example the hash is matched in the table and is determined to be redundant. The file in the visible area of the file system is then deleted and replaced by a stub file that references the relevant compressed file in the hidden store.


The File Deduplication feature can be enabled either during file system creation time or on an existing file system. The first example illustrates enabling the feature from the file system creation page. The second example illustrates enabling the feature on an existing file system from its Properties page.
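The equivalent operation is also available from the Control Station CLI. A minimal sketch, assuming the fs_dedupe command and an example file system named fs01 (verify the exact options against the man page for your release):

# Enable file deduplication on an existing file system
fs_dedupe -modify fs01 -state on
# Display the deduplication state and space savings for the file system
fs_dedupe -info fs01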


The File Deduplication feature is managed from the File System Properties page. The Deduplication Settings tab provides a variety of setting options for the feature; the example illustrates the page default settings.

The Case Sensitivity option controls whether file names are treated as case sensitive when identifying files; it is needed for NFS environments and not needed for CIFS environments.

The Compression Method has two possible settings: Fast or Deep. The Fast setting is optimized for speed rather than space efficiency. The Deep setting is optimized for space efficiency rather than speed and achieves up to 30% greater space savings than Fast compression. Selecting Deep compression applies only to new files that are subsequently compressed, not to existing compressed files.

The CIFS Compression Enabled option determines when CIFS compression is allowed. CIFS compression only occurs if deduplication is turned on or suspended at the file system level.

The Duplication Detection Method has three settings: sha1, byte, or off. If sha1 is selected, a SHA-1 hash algorithm is used to identify duplicate data. If byte is selected, SHA-1 is used to identify duplicate data candidates and then a byte-by-byte comparison is run on the candidates to identify duplicates; this setting adds considerable overhead, especially for large files. If off is selected, no duplication detection is performed and the only space savings come from compression.

The deduplication policy settings for Access Time, Modification Time, Minimum Size, and Maximum Size can all be modified to customize File Deduplication operations. The File Extensions Excluded field is used to exclude files having specific extensions. The Minimum Scan Interval defines how often a deduplication scan will run.

The SavVol High Water Mark specifies the percentage of the configured SavVol auto-extension threshold that can be used during deduplication operations. SavVol space can be consumed if SnapSure or Replication is enabled on the file systems: as blocks are freed by deduplication and new data is written to the freed blocks, SnapSure preserves the freed block content in the SavVol.

The Backup Data High Water Mark specifies the percentage full that a deduplicated file must be below in order to trigger space-reduced backups for NDMP. The Pathname Exclude setting specifies pathnames to exclude from File Deduplication operations.

File Deduplication can also be suspended from the Properties page File System tab.


VNX File Deduplication thrives in an environment where file system archiving is being implemented. Deduplication increases storage efficiency for primary and secondary (archival) data, depending on the retention period being used. A short retention period on the primary data calls for deduplication on secondary data only. If the primary storage has a long retention period, then both primary and secondary data may be candidates for deduplication.

Avoid enabling deduplication on Data Movers that are already heavily utilized. The File Deduplication process adaptively throttles itself when the Data Mover is very busy, so a Data Mover that maintains a high level of usage cannot scan or deduplicate files as quickly as a less busy system. Accessing deduplicated files also uses more system resources than accessing normal files. Use the File Extension Exclude List to limit scanning and deduplication of non-compressible, non-duplicate files.

File Deduplication-enabled file systems can be backed up by using NDMP NVB (Volume Based Backup) and restored in full by using the full destructive restore method. Because NVB operates at the block level (while preserving the history of which files it backs up), backing up and restoring a deduplicated file system does not cause any data reduplication. The data in the file system is backed up in its compressed form.


This lesson covers the integration of VNX Block Storage Pool Thin LUNs with VNX File storage file systems to achieve several space efficiency features. It provides an overview of Fixed-Block Deduplication for File and how the feature is enabled. The Space Reclaim feature for VNX file systems and SavVols is examined. The lesson also describes out-of-space protection for file systems built from Thin LUNs.


The Thin LUN space efficiencies for File are based on how the Block Storage Pool allocates and releases stripes from the pool to meet growing and diminishing storage needs. When Thin LUNs are created from a Block Storage Pool, 256 MB stripes get created within the Block Storage Pool. Initially a Thin LUN is oversubscribed and only a small number of stripes are created to provide storage. As more storage is needed, additional stripes are allocated from the Block Storage Pool to meet the need. Conversely, as storage needs diminish, the Block Storage Pool "returns" unneeded Thin LUN stripes to the pool for meeting other pool storage needs.

When Pool LUNs are provisioned to File, AVM creates a Storage Pool for File that is mapped to the Block Storage Pool (a Mapped Pool). It contains a one-to-one relationship of dVols to LUNs, with the dVols having the sizes presented by the LUNs. If Thin LUNs are provisioned, the dVols will have the oversubscribed sizes of the Thin LUNs. When AVM creates a file system from the dVols, the file system can be created using more space than has been allocated to the Thin LUNs. As the storage needs of the file system grow, additional stripes are allocated from the Block Storage Pool and are provided to the Thin LUNs for use by the file system. Should the storage needs of the file system diminish, the File space efficiency feature releases the space, and any freed Thin LUN stripe is returned to the pool for meeting other pool storage needs.


The Fixed-Block Deduplication space efficiency feature for File leverages the previously discussed Block Deduplication feature. Fixed-Block Deduplication is enabled from the Properties page of a Mapped Pool in Storage Pools for File; when enabled, it enables Block Deduplication on the Pool LUNs provisioned to File. In general, Fixed-Block Deduplication for File is a storage efficiency feature whose operations are performed by the Block Storage portion of the VNX system and not by the File storage portion of the system. It is available to the VNX via a specific Deduplication enabler for file systems that are built from Pool LUNs, and is selectable from any Mapped Pool in Storage Pools for File.

The Block Storage uses a hash digest process to identify duplicate data contained within the Thick or Thin LUNs and consolidates it in such a way that only one actual copy of the data is used by many sources. This feature can result in significant space savings depending on the nature of the data. The deduplication operation runs post-process on the selected dataset. Deduplication is performed within a Block Storage Pool for either Thick or Thin Pool LUNs, with the resulting deduplicated LUN being a Thin LUN. As duplicate data is identified, if a 256 MB pool slice is freed up, the free space of the slice is returned to the Block Storage Pool.

The Space Reclaim feature is available to any file system or SavVol that is built on Thin LUNs. The feature runs on a schedule or can be initiated manually from the Properties page of a file system. When files and folders on the file system are deleted, that space is available to be reclaimed. The reclaim process runs the SCSI UNMAP command to inform the Block Storage that the associated blocks are free. If the blocks from an entire 256 MB pool stripe are freed, the stripe is released back to the Block Storage Pool. The feature will also reclaim space from the SavVol as checkpoints are deleted.

For environments requiring consistent and predictable performance, EMC recommends using Thick LUNs. If Thin LUN performance is not acceptable, then do not use Fixed-Block Deduplication.


To enable the Fixed-Block Deduplication feature, navigate to Storage Pools for File, select a Mapped Pool, and access its Properties page. Check the Fixed-Block Deduplication Enabled option. This enables Block Deduplication for the Pool LUNs provisioned to File on which the Mapped Pool is built. The Fixed-Block Deduplication feature will be enabled for all file systems created from this Mapped Pool, even for file systems that were created from the pool before the feature was enabled.


The Fixed-Block Deduplication feature for File leverages the Block Deduplication feature of Block. To manage the deduplication, access the Block Storage Pool that the Mapped Pool is built from and from its Properties page select the Deduplication tab. Deduplication can be Paused, the Deduplication rate can be set to High, Medium or Low and a deduplication session can be manually started using the Force Deduplication button.


From the Properties page of a File System, access the Space Reclaim tab to enable the feature. The page displays a value in KB and percentage for Estimated Reclaimable Space. The reclaim process can be run manually by clicking the Start button. The process can also be run on a scheduled basis; Daily, Weekly or Monthly at defined Start and Stop times. The High Water Mark value defines a minimum threshold of space reclamation needed for a scheduled reclaim session to be allowed to run. In the example, the file system has 11% of Estimated Reclaimable Space and the High Water Mark setting is 60%. In this scenario, a scheduled reclaim session would not be run on Sunday because the minimum space savings threshold is not met.


If a Checkpoint exists on a file system, SavVol space can be reclaimed from deleted checkpoints. To enable SavVol space to be reclaimed, in Unisphere navigate to the Properties page of the file system and access the Checkpoint Storage Space Reclaim tab. The page displays a value in KB and percentage for Estimated Reclaimable Space. The reclaim process can be run manually by clicking the Start button. The process can also be run on a scheduled basis; Daily, Weekly or Monthly at defined Start and Stop times. The High Water Mark value defines a minimum threshold of space reclamation needed for a scheduled reclaim session to be allowed to run. In the example, the SavVol has 14% of Estimated Reclaimable Space and the High Water Mark setting is 60%. In this scenario, a scheduled reclaim session would not be run on Sunday because the minimum space savings threshold is not met.


This demo covers the Space Reclaim feature. It illustrates enabling the feature on a file system and performing a reclaim operation. To launch the video use the following URL: https://edutube.emc.com/Player.aspx?vno=huaT4jDzlJpi4b+TtMqbsw==&autoplay=true


The system protects file systems configured from Thin LUNs with an out-of-space protection scheme. It is possible to configure a number of Thin LUNs from a Block Storage Pool that exceed the pool's physical capacity. When the pool's free capacity reaches the Low Space Threshold, the file system(s) created on Thin LUNs from that pool are mounted read-only. The Low Space Threshold is a fixed value based on the system's combined Data Mover memory. The threshold value leaves enough free space in the pool to allow Data Mover memory to be flushed to the pool.


This example illustrates a file system, created on Thin LUNs, that has reached its low space threshold. The file system mount state has been set to Read Only (Low Space). Data is still available for users to read, but no further data can be written to the file system; this protects the Block Storage Pool from running completely out of space. The cause of the low space condition is that the Free Capacity of the Block Storage Pool has reached the low space threshold value.


A Restore operation is needed to mount the file system read-write again, but it is only possible if the Block Storage Pool Free Capacity is above the low space threshold. Pool space will either have to be added or space freed within the pool. In the example, space within the pool has been freed and the Free Capacity is above the low space threshold. The Restore operation can now be performed to return the file system to the read-write state.


This module covered VNX File content and space efficiency features. It covered enabling and management of the Quotas, File Level Retention, File Deduplication and Fixed-Block Deduplication for File features. It also detailed configuring and performing a Space Reclaim of a file system. The low space protection of a file system was also discussed.


This module focuses on networking features supported by the VNX, which offer high availability functionality.


This lesson covers basic networking concepts, terminology and describes VLANs and virtual devices.


A key feature of almost any EMC product is its ability to handle single points of failure, and addressing the Ethernet network is no different. Today's environments are highly complex, and the number of possible configuration combinations can make designing a solution difficult. Understanding how the technology performs helps ensure a proper deployment.


The Ethernet switch operates at the Data Link layer of the network (Layer 2), providing software capable of directing network traffic only to the port(s) specified in the destination of the frame. A switched network offers direct node-to-node communication, thus supporting full duplex communication. Switches provide transmission speeds of 10 Mbps, 100 Mbps, 1 Gbps or higher.

There are standard Ethernet switches that do not offer any management functionality; these are also known as "plug and play" switches. Managed switches are more common today. They provide a variety of software enhancements that allow the network administrator to control and manage network traffic through various methods, such as a serial console, telnet, SSH or SNMP. Some of the features that are of interest for VNX management are Ethernet Channel, LACP and VLANs. Some switches can also operate at the Network layer (Layer 3), which allows routing between VLANs; these switches are called multilayer switches.


VLANs (Virtual Local Area Networks) are a method of grouping switch ports together into a virtual LAN, as the name would indicate. Switch ports, as well as router interfaces, can be assigned to VLANs. VLANs can be employed for seemingly opposite uses.

VLANs can be used to break up a very large LAN into smaller virtual LANs. This may be useful to control network traffic, such as broadcasts (another name often used for a VLAN is a Broadcast Domain). Although VLANs are not a security vehicle unto themselves, they can be used as part of an overall security scheme in the network. VLANs can also be used to combine separate physical LANs into one virtual LAN. For example, the sales staff of Hurricane Marine is physically dispersed along both the east and west coasts of the United States (WAN), yet all of the network clients have similar network needs and should rightly be in the same logical unit. By employing VLANs, all of the Hurricane Marine sales staff can be in the same logical network, the same virtual LAN. Typically, each IP network segment is assigned to a separate VLAN. Additionally, in order to transmit from one VLAN to another, the traffic has to go through a router, or Layer 3 device. Each router interface or sub-interface can be assigned an IP address and subnet within a specific VLAN. That interface can act as the default gateway for hosts on that VLAN and route traffic to hosts on other VLANs to which the router is also connected.


VLANs create separate broadcast domains (Layer 2 network) on one physical device. This is accomplished by breaking a single switch into multiple broadcast domains, or multiple Layer 2 segments, which limit broadcast traffic. The traffic between VLANs then needs to be routed.


VLANs are logical networks that function independently of the physical network configuration. For example, VLANs enable you to put all of a department’s computers on the same logical subnet, which can increase security and reduce network broadcast traffic.

VLAN Tagging can increase network flexibility by allowing a single network interface to be assigned multiple logical interfaces. A different VLAN can be assigned to each interface. A packet is accepted if the destination IP address is the same as the IP address of the interface, and the packet's VLAN tag is the same as the interface's VLAN ID. Zero (0) is the default value of VLAN ID (blank is also accepted and means the same as 0) and means VLAN tagging is not enabled. If a VLAN tag is being used, the transmitting device will add the proper header to the frame. The switch port should be set to trunk or there could be connectivity problems between the switch port and device.


A trunk line carries traffic from multiple VLANs on a single trunk port. It most often goes from switch to switch, but can also go from server to switch. Cisco has a proprietary trunking protocol called ISL, which works in a similar fashion to the 802.1Q protocol.

By default, all VLANs are allowed across trunk links. For security reasons, the administrator should configure trunk ports on the switch to drop traffic from unused VLANs. This can also cut down on the amount of broadcast traffic sent to each switch and port.


A virtual device is not required in order to access resources in a simple environment. However, if customers have a desire to increase fault tolerance, virtual devices are used. Virtual devices are a combination of 2 or more physical devices, of physical device(s) and virtual device(s) or of 2 or more virtual devices.


These are the three types of VNX virtual devices. They are explained in more detail later in this module.


Ethernet Channel combines multiple physical ports (two, four, or eight) into a single logical port for the purpose of providing fault tolerance for Ethernet ports and cabling. Ethernet Channel was not originally designed for load balancing or to increase bandwidth, but has since been updated to include these features. Ethernet Channel does not provide increased bandwidth from the client's (Data Mover) perspective. Because each interface is connected only to a single port, the client does not receive any added performance. Any increased bandwidth on the side of the channeled host (the Data Mover) is incidental. However, this is not an issue because the objective of Ethernet Channel is to provide fault tolerance, not to increase aggregate bandwidth.

Link Aggregation Control Protocol (LACP) is an alternative to Ethernet Channel. The IEEE 802.3ad Link Aggregation Control Protocol also allows multiple Ethernet links to be combined into a single virtual device on the Data Mover. Like Ethernet Channel, combining many links into a single virtual device provides:

• Increased availability (a single link failure does not break the link)
• Port distribution (the link used for communication with another computer is determined from the source and destination MAC addresses)
• Better link control (LACP detects broken links by passing LACPDU, Link Aggregation Control Protocol Data Unit, frames between the Data Mover and the Ethernet switch)


Shown here is a comparison of Ethernet Channel and Link Aggregation. A customer may have reasons to utilize one technology over another.


Once an Ethernet Channel (or LACP aggregation) is configured, the Ethernet switch must determine which physical port to use for a connection. Three statistical load distribution methods are available on the VNX: distribution by MAC address, by IP address, or by a combination of IP address and TCP port.

MAC address: The Ethernet switch hashes enough bits (1 bit for 2 ports, 2 bits for 4 ports, and 3 bits for 8 ports) of the source and/or destination MAC addresses of the incoming packet through an algorithm (the addresses and the algorithm used are specific to the IOS of the Ethernet switch). The result of the hash decides which physical port the connection uses. For example, hashing two bits gives four possible results, each mapped to a switch port: 00 - port 1, 01 - port 2, 10 - port 3, 11 - port 4. Keep in mind that traffic coming from a remote network contains the source MAC address of the router interface nearest the switch. This can skew the algorithm's outcome and mean that all traffic from the remote network is directed through the same interface in the channel.

IP address: The source and destination IP addresses are considered when determining the output port. IP is the default setting.

IP address and TCP port: The source and destination IP addresses and TCP ports are considered when determining the output port.

Statistical load distribution can be configured for the whole system by setting the LoadBalance= parameter in the global or local parameters file, or per trunk by using the server_sysconfig command. Configuring load distribution on a per-trunk basis overrides the entry in the parameters file. It is configured via:

• Parameters file entry: LoadBalance=mac, tcp, or ip
• server_sysconfig (per aggregation); see the example below
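A minimal per-trunk sketch, reusing the server_sysconfig syntax shown later in this module (trk0, cge-1-0, and cge-1-1 are example names):

# Create an Ethernet Channel trunk that distributes connections by IP address and TCP port
server_sysconfig server_2 -virtual -name trk0 -create trk -options "device=cge-1-0,cge-1-1 lb=tcp"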


Fail Safe Network (FSN) is a virtual network interface feature of the VNX. Like Ethernet Channel and Link Aggregation, FSN supplies fault tolerance out beyond the physical box of the Data Mover, providing redundancy for cabling and switch ports.

But, unlike Ethernet Channel and Link Aggregation, FSN can also provide fault tolerance in the case of switch failure. While Ethernet Channel provides redundancy across active ports (all ports in the channel carrying traffic), FSN is comprised of an active and a standby interface. The standby interface does not send or respond to any network traffic. FSN operation is independent of the switch. Ethernet Channel and Link Aggregation both require an Ethernet switch that supports their corresponding protocols. This is not the case with FSN because it is simply a combination of an active and a standby interface, with failover being orchestrated by the Data Mover itself. Additionally, the two members of the FSN device can be connected to separate Ethernet switches.


In addition to FSNs, VNX supports Ethernet Channel or Link Aggregation configurations that span multiple switches in a single stack, provided the switches support the Cross-Stack feature. There is no additional configuration required on the Data Movers, but there will be specific configuration required by the network administrator. Refer to the switch vendor documentation for details on this configuration. Cross-Stack can span an EtherChannel across two switches, with all of the links of the EtherChannel active at any time. So if you configured an EtherChannel using four ports on a Data Mover and connected two ports to each of two switches configured for Cross-Stack, your traffic would be flowing across all four ports. If you made the same configuration with FSN, you would only be able to have traffic flowing across two active ports, while the other two ports would be in a standby state. If the network environment has Cross-Stack capable switches, this configuration provides greater aggregate bandwidth than an FSN implementation, as there are no active/passive links in a Cross-Stack EtherChannel unless more links are configured than the standard supports.


This lesson covers how to create an Ethernet Channel, a LACP device, a FSN device and VLAN IDs.


These terms are commonly confused, which can lead to improper implementation. They are referred to throughout this module.


This is a review of the VNX Data Mover enclosure's physical device components. The I/O modules available for the VNX are shown (left to right):

• 4-port copper Gigabit Ethernet module
• 2-port copper Gigabit Ethernet and 2-port optical Gigabit Ethernet module
• 2-port optical 10 Gigabit Ethernet module


Differentiating between physical and logical can be difficult. This is especially true if the device name and interface name are the same value. Example: It is possible to create an interface called “cge-1-0” on physical device cge-1-0.


As stated in the previous slide, differentiating between physical and logical can be difficult. In order to represent multiple physical devices as one logical device, a virtual device must be created.


This is a slide meant to address the general topic of virtual device creation. The possible selections and options are explored in further detail in later slides.


This slide shows how to configure an Ethernet Channel virtual device using Unisphere. To create an Ethernet Channel virtual device via CLI, use the following command to combine the Data Mover's physical network ports into one logical device.

Command:
server_sysconfig <movername> -virtual -name <name> -create trk -options "device=<device>,<device> [lb=<mac|ip|tcp>]"

Example: Combine ports cge-1-0 and cge-1-1 into an Ethernet Channel virtual device named trk0:
server_sysconfig server_2 -virtual -name trk0 -create trk -options "device=cge-1-0,cge-1-1"


This slide shows how to configure a Link Aggregation virtual device using Unisphere. To create a Link Aggregation virtual device via CLI, use the following command to combine the Data Mover's physical network ports into one logical device.

Command:
server_sysconfig server_x -virtual -name <name> -create trk -options "device=<device>,<device> protocol=lacp"

Example:
server_sysconfig server_2 -virtual -name lacp0 -create trk -options "device=cge-1-2,cge-1-3 protocol=lacp"

Verifying that ports are up and running: one way to verify that all of the ports are up and running is to run show port lacpchannel statistic (on a Cisco Systems switch). Each time the command is run you can see that the LACPDU packet counters have changed for active ports.

Monitoring the number of Ibytes and Obytes: use the server_netstat -i command to monitor the number of Ibytes and Obytes (input and output bytes) for each port.
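From the Data Mover side, a quick check might combine the listing and statistics commands covered in this module (assuming server_2 and the lacp0 device created above):

# List virtual devices and confirm lacp0 reports both member ports
server_sysconfig server_2 -virtual
# Watch Ibytes/Obytes per physical port to confirm traffic is being distributed
server_netstat server_2 -i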


Like Ethernet Channel and Link Aggregation, the FSN virtual device is created using the server_sysconfig command. The FSN device is then used for the -Device parameter in the server_ifconfig IP configuration. The FSN virtual device can be based on any combination of like or dissimilar physical or virtual devices. For example:

• FE with FE
• GbE with GbE
• GbE with FE
• Ethernet Channel with FE
• Ethernet Channel with GbE
• Link Aggregation with FE
• Link Aggregation with GbE
• Ethernet Channel with Link Aggregation

The slide shows an FSN created using an Ethernet Channel and a Link Aggregation virtual device. They are configured as standby/standby, with trk0 coming up as the active side of the FSN.

Command:
server_sysconfig server_x -virtual -name <name> -create fsn -option "device=<device>,<device>"

Example:
server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "device=lacp0,trk0"


This slide shows how to configure an FSN device with a primary device defined using Unisphere. The FSN is configured with lacp0 set as the primary and trk0 as standby. When the primary option is specified, the primary device will always be the active device (except when it is in a failed state). This is generally not recommended because of the adverse effect on network performance when operating in conjunction with a degraded primary device. To create an FSN specifying a primary device (not recommended), use the configuration shown here, or from the CLI do the following:

Command:
server_sysconfig server_x -virtual -name <name> -create fsn -option "primary=<primary_dev> device=<device>,<device>"

Example:
server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "primary=lacp0 device=lacp0,trk0"


This information was discussed in detail in the Basic Network Configuration module. Once the virtual device has been created, use the server_ifconfig command to assign an IP address to the virtual device. Be sure to use the name designated for the -name parameter in the server_sysconfig command as the -Device parameter in the server_ifconfig statement.

Example: In the command below, the -Device fsn0 parameter refers to the virtual device that was created (on the previous page) using the -name fsn0 parameter. The -name parameter used in this command defines the interface name. After the protocol IP statement, provide the IP address, the subnet mask and the broadcast address.

server_ifconfig server_2 -create -Device fsn0 -name fsn0-1 -protocol IP 10.127.57.233 255.255.255.240 10.127.57.239
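To confirm the result, the Data Mover's interface list can be displayed afterwards; a minimal check, assuming server_2 and the fsn0-1 interface created above:

# Display all interfaces configured on the Data Mover and confirm fsn0-1 is present
server_ifconfig server_2 -all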


This slide shows how to list all virtual devices using Unisphere. To display a list of virtual devices via CLI:

Command:
server_sysconfig <movername> -virtual

[nasadmin@cel1cs0 ~]$ server_sysconfig server_2 -virtual
server_2 :
Virtual devices:
 fsn0    active=lacp0 primary=lacp0 standby=trk0
 lacp0   devices=cge-1-2 cge-1-3 :protocol=lacp
 trk0    devices=cge-1-0 cge-1-1

 fsn     failsafe nic devices : fsn0
 trk     trunking devices : lacp0 trk0


This slide shows how to delete virtual devices using Unisphere. To delete a virtual device via CLI:

Command:
server_sysconfig server_x -virtual -delete <device>

Example:
server_sysconfig server_2 -virtual -delete fsn0


This lesson covers requirements and consideration when it comes to implementing networking features.


This slide explains VNX Network configuration for LACP and FSN.


To configure multiple logical interfaces using one physical device and assign a VLAN tag, use the following commands:

Command:
server_ifconfig server_x -create -Device <device name> -name <interface name> -protocol IP <ip address> <subnet mask> <broadcast address> vlan=<vlan id>

Examples:
server_ifconfig server_2 -create -Device fsn0 -name fsn0-1 -protocol IP 10.127.57.233 255.255.255.240 10.127.57.239 vlan=45
server_ifconfig server_2 -create -Device fsn0 -name fsn0-2 -protocol IP 10.127.57.107 255.255.255.224 10.127.57.126 vlan=41

To assign a VLAN tag to an existing interface:

Command:
server_ifconfig server_x <interface name> vlan=<vlan id>

Examples:
server_ifconfig server_2 fsn0-1 vlan=45
server_ifconfig server_2 fsn0-2 vlan=41

To remove the VLAN tag:

Example:
server_ifconfig server_2 fsn0-1 vlan=0


This slide shows an FSN device that consists of a LACP device called lacp0 (comprised of cge-1-0 and cge-1-1) and another LACP device called lacp1 (comprised of cge-1-2 and cge-1-3). Both virtual devices connect to different switches, and the switches need to be configured to support the LACP configuration. The active device, lacp0, is used for all network traffic unless both paths in that virtual device fail, or the switch fails. If that occurs, lacp1 with its associated switch takes over network traffic for the Data Mover.


The five phases illustrated above show how the data path is altered as failures occur. In this example, consider data passing through cge-1-0, and trace the data path through each failure.

Phase 1: This phase shows normal operation.

Phase 2: The network connection between the Ethernet switch and cge-1-0 has failed. Traffic from cge-1-0 is being redirected towards cge-1-1. Note: lacp0 is still the active path.

Phase 3: The network connection between the Ethernet switch and cge-1-1 has failed. This causes lacp0 to be unavailable and the VNX (via FSN) redirects the traffic towards lacp1. The data path is now on cge-1-2 and cge-1-3.

Phase 4: The network connection between the Ethernet switch and cge-1-2 has failed. Traffic from cge-1-2 is being redirected towards cge-1-3.

Phase 5 (no primary): When the links are restored, the Data Mover does not redirect the FSN data path to lacp0. Data may flow back through cge-1-2, but the data path will be via lacp1. Lacp0 is now in standby mode.

Phase 5 (primary = lacp0): When the links are restored, the Data Mover redirects the FSN data path to lacp0. Data will flow back through cge-1-0 and cge-1-1. Lacp1 is now in standby mode.
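To see which member device is currently carrying the FSN traffic during such a test, the virtual device details can be displayed; a minimal sketch, assuming the -info option of server_sysconfig on your release:

# Show the FSN membership and which device is currently active
server_sysconfig server_2 -virtual -info fsn0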


The speed and duplex settings on each interface must match the settings on the Ethernet switch. If the switch is set to auto-negotiate the speed and duplex and the interface is hard-coded for 1000FD, the link will either fail to connect or exhibit performance problems. An FSN device inherits the speed and duplex settings configured on its member devices. For additional information on speed and duplex settings, see the Basic Network Configuration module in this course.
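Speed and duplex on a physical port are set with server_sysconfig; the form below is an assumption based on typical VNX for File usage, so confirm the option string for your release before applying it:

# Set the physical port to auto-negotiate speed and duplex to match the switch
server_sysconfig server_2 -pci cge-1-0 -option "speed=auto,duplex=auto"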


Any failed network connections will not trigger a Data Mover failover. To protect the environment from service interruption or data unavailability, utilize the network features that help protect against port, cable and switch failures.

The standby Data Mover will inherit any configuration from the primary Data Mover that fails over, but the switch ports that are attached to the standby Data Mover must be configured with the same options as the ports attached to the primary Data Mover. If the primary Data Mover has two LACP trunks configured with two ports each, then the ports attached to the standby Data Mover must have two LACP trunks configured with two ports each as well. In addition, if there is manual VLAN pruning in use, the trunks to the standby Data Mover must also be configured to allow the necessary VLANs for the same VLANs that the primary Data Movers’ trunks are configured to allow. To ensure that the configuration is correctly deployed, test failover before utilizing the machine for production. Configure a CIFS share or NFS export on each primary Data Mover and ensure there is connectivity before and after failover. If this is not tested before a real failover situation, it could cause data unavailability or service interruption for clients until the configuration can be corrected or the primary Data Mover is restored.
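Failover itself can be exercised from the Control Station; a minimal sketch, assuming a standby is already configured for server_2 (check the server_standby man page for the exact options on your release):

# Fail server_2 over to its standby Data Mover
server_standby server_2 -activate mover
# After verifying client access to the test CIFS share or NFS export, fail back
server_standby server_2 -restore mover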


To confirm that the configuration has been successfully completed, verify that the trunks have been configured on the switch. Typically, EMC personnel will not have access to the switches in the environment. If this is the case, there is a dependency on the site network administrator to provide the information. For specific switch configuration information, reference “Configuring and Managing Network High Availability on VNX 7.0” available on Powerlink. The commands to create the channels are issued independently to the VNX and to the Ethernet switch. The order doesn’t matter, as the links will not connect until both sides have been correctly configured.
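The exact switch commands depend on the vendor and OS. On a Cisco IOS switch, for example, the trunk state can typically be checked with:

show etherchannel summary
show lacp neighbor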


A LACP link can be created with any number of physical devices, but only full duplex can be utilized over the link. This means that if one side is hard-coded for full duplex and the other side is coded to auto-negotiate, the auto-negotiate side will be unsuccessful in link negotiations and will default to half duplex and the LACP trunk will not form. In addition, if a mixture of port speeds is used to configure the link, the LACP standard specifies that the greatest number of ports with the same speed will be used for the link. In other words, if there are four links running at 100 Mbps and two links running at 1000 Mbps, the link will disable the two links running 1000 Mbps and use the four slower 100 Mbps links instead. However, if there are four links at 100 Mbps and four links at 1000 Mbps, the four links at 1000 Mbps will be used. Although multiple links are joined and aggregate bandwidth is increased over the trunk, no single client will necessarily receive an increase in throughput depending upon the type of load balancing utilized over the trunk. In order to get the best load balancing per host possible, use the load balancing algorithm that includes TCP/UDP port numbers. By default, the source and destination IP addresses are used to determine load balancing.


This demo covers the LACP and FSN networking features of a Data Mover. It demonstrates the configuration and testing of an LACP virtual network device on a Data Mover. Then, an FSN will be configured using the LACP virtual device.

To launch the video use the following URL: Networking Features Demo https://edutube.emc.com/Player.aspx?vno=+t2ve3LqIbbGdq7pRRKjyw==&autoplay=true


This module covered the key points listed.



This module focuses on VNX SnapSure and demonstrates how to configure, manage, and schedule read-only and writeable checkpoints.


This lesson covers the purpose of SnapSure, introduces the key components and explains how SnapSure uses VNX storage.


SnapSure is a VNX for File Local Protection feature that saves disk space and time by creating a point-in-time view of a file system. This logical view is called a checkpoint, also known as a "snapshot", and can be mounted as a read-only or writeable file system. SnapSure is mainly used by low-activity, read-only applications such as backups and file system restores. Its writeable checkpoints can also be used in application testing or decision support scenarios. SnapSure is not a discrete copy product and does not maintain a mirror relationship between source and target volumes. It maintains pointers to track changes to the primary file system and reads data from either the primary file system or from a specified copy area. The copy area is referred to as a SavVol, and is defined as a VNX for File metavolume.


SnapSure checkpoints provide users with multiple point-in-time views of their data. In the illustration above the user’s live, production data is a business proposal Microsoft Word document. If they need to access what that file looked like on previous days, they can easily access read-only versions of that file as viewed from different times. This can be useful for restoring lost files or simply for checking what the data looked like previously. In this example, checkpoints were taken on each day of the week.


PFS: A production file system, or PFS, is any typical VNX file system that is being used by an application or user.

SavVol: Each PFS with a checkpoint has an associated save volume, or SavVol. The first change made to each PFS data block triggers SnapSure to copy that data block to the SavVol.

Bitmap: SnapSure maintains a bitmap of every data block in the PFS that identifies whether the data block has changed since the creation of the checkpoint. Each PFS with a checkpoint has one bitmap, which always refers to the most recent checkpoint.

Blockmap: A blockmap of the SavVol is maintained to record the address in the SavVol of each point-in-time saved PFS data block. Each checkpoint has its own blockmap.

Checkpoint: A point-in-time view of the PFS. SnapSure uses a combination of live PFS data and saved data to display what the file system looked like at a particular point in time. A checkpoint is thus dependent on the PFS and is not a disaster recovery solution. Checkpoints are also known as snapshots.

Displayed on this slide is a PFS with three data blocks of content. When the first file system checkpoint is created, a SavVol is also created. The SavVol is a specially marked metavolume that holds the single bitmap, the particular checkpoint's blockmap (as we will see, each additional checkpoint has its own blockmap), and space to preserve the original data values of PFS blocks that have been modified since the establishment of the checkpoint. The bitmap holds one bit for every block on the PFS.


This series of slides illustrate how SnapSure operates to preserve a point-in-time view of PFS data. This slide and the next show how an initial write to a data block on the PFS is processed by SnapSure. A write to DB2 of the PFS is initiated and SnapSure holds the write request. The bitmap for DB2 is 0 indicating SnapSure needs to perform a copy on first write operation for PFS DB2. SnapSure copies DB2 data to the first address location in the SavVol. Thus the point-in-time view of DB2 data is preserved within the SavVol by SnapSure.


This slide continues with the initial PFS write operation to DB2 of the PFS. With the original DB2 data copied to the SavVol, SnapSure updates the bitmap value for DB2 to 1 indicating that that data is preserved in the SavVol. The blockmap is also updated with the address in the SavVol where DB2 data is stored. SnapSure releases the write hold and the new DB2 data is written to the PFS.


This slide illustrates SnapSure operations with multiple checkpoints of a PFS. Upon creation of a subsequent checkpoint, SnapSure creates a new bitmap and blockmap for the newest checkpoint. The bitmap for any older checkpoint is removed. Only the most recent read-only checkpoint will have a bitmap. A write to the PFS uses a similar technique as seen in the prior two slides. The write to the PFS is held and SnapSure examines the newest checkpoint bitmap to see if the point-intime view of the data needs to be copied to the SavVol. If the bitmap value is 0 the PFS original data is copied to the SavVol, the bitmap and blockmap are updated, and the write of data to the PFS is released. If the bitmap value for the data were 1, this would indicate that the point-in-time view of data had already been preserved and thus SnapSure would simply write the new data to the PFS.


When a read is made from the newest checkpoint, SnapSure examines the checkpoint bitmap. If the value for the data block is 1, this indicates that the original data is in the SavVol and SnapSure then gets the SavVol location for the point-in-time data from the blockmap and retrieves the data from the SavVol location. If the bitmap value for the data block was 0, this indicates that the data on the PFS is unchanged and thus SnapSure retrieves the data directly from the PFS.


When a read is made from an old checkpoint, SnapSure cannot simply read the bitmap. Instead, it will first have to examine the desired checkpoint’s blockmap to check for any data that has been copied to the SavVol. SnapSure will continue to read through subsequently newer blockmaps as it makes its way to the newest checkpoint. The first referenced value is always the one that is used. If no blockmap contains a reference to the data, that indicates the PFS holds the needed data and SnapSure will read the data from the PFS. For this example a read request is made from Checkpoint 1 for DB1. SnapSure examines Blockmap1 for DB1 and, as seen, its blockmap does not have a reference for DB1 so SnapSure progresses to the next newer checkpoint blockmap. In this example Blockmap2 does hold a reference for DB1 therefore SnapSure will go to the SavVol address to retrieve DB1 data for the read request. In this example, should the read request have been for DB3, SnapSure would have gone to the PFS to retrieve the data for the read request.


SnapSure requires a SavVol to hold data. When you create the first checkpoint of a PFS, SnapSure creates and manages the SavVol automatically, using the same storage pool as the PFS. The following criteria are used for automatic SavVol creation (a sizing sketch follows the list):

• If PFS ≥ 20 GB, then SavVol = 20 GB
• If 64 MB < PFS < 20 GB, then SavVol = PFS size
• If PFS ≤ 64 MB, then SavVol = 64 MB
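To make the sizing rule concrete, the following is a minimal sketch (not EMC code) written in shell, with sizes expressed in MB:

savvol_size_mb() {
    # $1 is the PFS size in MB; echoes the SavVol size (in MB) SnapSure would create
    local pfs_mb=$1
    if [ "$pfs_mb" -ge 20480 ]; then
        echo 20480              # PFS >= 20 GB -> 20 GB SavVol
    elif [ "$pfs_mb" -gt 64 ]; then
        echo "$pfs_mb"          # 64 MB < PFS < 20 GB -> SavVol equals the PFS size
    else
        echo 64                 # PFS <= 64 MB -> 64 MB SavVol
    fi
}
savvol_size_mb 10240            # a 10 GB PFS gets a 10 GB SavVol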

If you create another checkpoint, SnapSure uses the same SavVol, but logically separates the point-in-time data using unique checkpoint names. The SavVol can also be created and managed manually. All that is needed is an unused metavolume; the recommended size of the metavolume is 10% of the PFS. Creating a SavVol manually provides more control over the placement of the SavVol.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

11

SnapSure provides a feature to automatically extend the SavVol to prevent inactivation of older checkpoints. By default, the High Water Mark (HWM) is set to 90%, but this value can be lowered if necessary. By default, SnapSure is not able to consume more than 20% of the space available to the VNX; this 20% limit can be changed in the param file /nas/sys/nas_param. If the SavVol was created automatically, it is extended in 20 GB increments until its used capacity is below the HWM once more. If the SavVol was created manually, the automatic extension feature extends the SavVol by 10% of the PFS. In order to extend the SavVol, there must be unused disk space of the same type that the SavVol resides on.

If the HWM is set to 0%, SnapSure does not extend the SavVol when a checkpoint nears full capacity. Instead, SnapSure uses the remaining space, then deletes the data in the oldest checkpoint and recycles that space to keep the most recent checkpoint active. It repeats this behavior each time a checkpoint needs space.

The SnapSure refresh feature conserves SavVol space by recycling used space. Rather than use new SavVol space when creating a new checkpoint of the PFS, use the refresh feature anytime after you create one or more checkpoints. You can refresh any active checkpoint of a PFS, in any order. The refresh operation is irreversible. When you refresh a checkpoint, SnapSure maintains the file system name, ID, and mount state of the checkpoint for the new one. The PFS must remain mounted during a refresh.
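The param file named above can be inspected directly from the Control Station with standard Linux utilities. The grep pattern below is an assumption; the exact parameter name that controls the 20% limit varies by release:

cat /nas/sys/nas_param              # view the full param file
grep -i ckpt /nas/sys/nas_param     # look for SnapSure/checkpoint-related entries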

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

12

With SnapSure, you can automate the creation and refresh of read-only checkpoints. Automated checkpoint refresh can be configured with the CLI nas_ckpt_schedule command, Unisphere, or a Linux cron job script. Checkpoint creation and refresh can be scheduled at multiple hours of a day and on multiple days of a week or month; you can also combine multiple hours of a day with multiple days of a week, and have more than one schedule per PFS. You must have appropriate VNX for File administrative privileges to use the various checkpoint scheduling and management options. Administrative roles that have read-only privileges can only list and view schedules. Roles with modify privileges can list, view, change, pause, and resume schedules. Roles with full-control privileges can create and delete checkpoint schedules in addition to all other options.
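The operations described above map to nas_ckpt_schedule modes on the Control Station. The schedule name below is hypothetical, and the option spellings should be verified against the nas_ckpt_schedule man page for your VNX OE release:

nas_ckpt_schedule -list                  # list all checkpoint schedules
nas_ckpt_schedule -info pfs04_daily      # view one schedule in detail
nas_ckpt_schedule -pause pfs04_daily     # requires modify privileges
nas_ckpt_schedule -resume pfs04_daily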

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

13

This lesson covers how Writeable Checkpoints work as well as some limitations.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

14

Writeable Checkpoints

• Can be mounted and exported as read-write file systems

• Share the same SavVol with read-only checkpoints

• Add write capabilities to both local and remote checkpoints

Writeable checkpoints share the same SavVol with read-only checkpoints. The amount of space used is proportional to the amount of data written to the writeable checkpoint file system. Block overwrites do not consume more space. There is no SavVol shrink. The SavVol grows to accommodate a busy writeable checkpoint file system. The space cannot be returned to the cabinet until all checkpoints of a file system are deleted. A deleted writeable checkpoint returns its space to the SavVol.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

15

You can create, delete, and restore writeable checkpoints. Writeable checkpoints are branched from “baseline” read-only checkpoints. A baseline checkpoint exists for the lifetime of the writeable checkpoint. Any writeable checkpoint must be deleted before the baseline is deleted. Writeable checkpoints and their baselines cannot be refreshed or be part of a checkpoint schedule.

This feature is fully supported in CLI and Unisphere.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

16

The deletion of a writeable checkpoint works just like a read-only checkpoint deletion. The Unisphere GUI allows deletion of both a baseline and its writeable checkpoint in one step; with the CLI, the writeable checkpoint must be deleted first. In the case of a restore from a writeable checkpoint to a PFS, the writeable checkpoint must be remounted as a read-only file system before the restore starts. The GUI does this automatically; the CLI requires the user to remount the writeable checkpoint as read-only first. The restore then proceeds in the background (the same as a read-only restore). The writeable checkpoint cannot be mounted read-write during the background restore, and it remains mounted read-only after the background restore completes.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

17

A writeable checkpoint requires at least one read-only checkpoint for use as a baseline. If no read-only checkpoint exists, the system automatically creates one when a writeable checkpoint is created. Unlike read-only checkpoints, where only the newest checkpoint has a bitmap, each writeable checkpoint has both a bitmap and a blockmap. Data written to the writeable checkpoint is written directly into the SavVol for the PFS. The writeable checkpoint uses the bitmap and blockmap in the same manner as read-only checkpoints: the bitmap identifies whether the checkpoint data resides on the PFS or in the SavVol, and the blockmap identifies the SavVol address for the written data. A writeable checkpoint uses the same PFS SavVol as the read-only checkpoints.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

18

When a write request is made to a writeable checkpoint, the data is written to the SavVol. The writeable checkpoint bitmap for the data is set to 1 and its blockmap contains the SavVol address for the data. This example uses a PFS with an existing read-only checkpoint that is saving point-in-time data to the SavVol. A write request is made to DB3 of the writeable checkpoint. The data is written into the SavVol and the writeable checkpoint bitmap and blockmap are updated: the bitmap for the data block is set to 1 and the blockmap is updated with the SavVol address that holds the data. If a rewrite operation is performed on a writeable checkpoint data block, the data in the SavVol for that block is simply overwritten and no additional SavVol space is consumed. Read operations to a writeable checkpoint use the same methodology as read-only checkpoints.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

19

You can have only one writeable checkpoint per baseline read-only checkpoint, and a maximum of 16 writeable checkpoints per PFS. Writeable checkpoints do not count against the 96 user checkpoint limit, so altogether there could be a total of 112 user checkpoints per PFS. However, any checkpoint created and used by other VNX features, such as VNX Replicator, counts toward the limit. If there are 95 read-only checkpoints on the PFS and the user tries to use VNX Replicator, the replication will fail because the VNX needs to create two checkpoints for that replication session. Writeable checkpoints count toward the maximum number of file systems per cabinet (4096) and the maximum number of mounted file systems per Data Mover (2048). You cannot create a checkpoint from a writeable checkpoint. You can create a writeable checkpoint from a scheduled read-only checkpoint; however, if the writeable checkpoint still exists when the schedule executes a refresh, the refresh will fail. Warnings are displayed in Unisphere when creating a writeable checkpoint on a scheduled checkpoint; no warning is displayed when using the CLI. For additional information on limitations, see Using VNX SnapSure.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

20

This lesson covers creating a checkpoint file system and accessing it using Windows and Linux or UNIX clients. This lesson also covers checkpoint schedule creation.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

21

CVFS (Checkpoint Virtual File System) is a navigation feature that provides NFS and CIFS clients with read-only access to online, mounted checkpoints from within the PFS namespace. This eliminates the need for administrator involvement in recovering point-in-time files. The checkpoints are automatically mounted and readable by end users. Hiding the checkpoint directory from the list of file system contents provides a measure of access control by requiring clients to know the exact directory name to access the checkpoints. The name of the hidden checkpoint directory is .ckpt by default; you can change it to a name of your choosing by using a parameter in the slot_(x)/param file. You can also change the checkpoint name presented to NFS/CIFS clients when they list the .ckpt directory to a custom name, if desired. The default format of the checkpoint name is: yyyy_mm_dd_hh.mm.ss_. You can only change the default checkpoint name when you mount the checkpoint. To change the name of a checkpoint pfs04_ckpt1 of pfs_04 to Monday while mounting the checkpoint on Data Mover 2 on mountpoint /pfs04_ckpt1, use the following CLI command:

server_mount server_2 -o cvfsname=Monday pfs04_ckpt1 /pfs04_ckpt1

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

22

To view checkpoint data using a Linux or Unix machine, first the production file system will need to be mounted on the client machine. If you list the files in the file system, you will not see any .ckpt directory. The .ckpt directory needs to be explicitly specified in the list command path to view its contents. Each checkpoint will appear as a data directory. Only checkpoints that are mounted and read-only will be displayed.
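For example, assuming a PFS named pfs04 exported by a Data Mover interface with IP address 10.127.50.2 (both names are illustrative only), a Linux client would see:

mount -t nfs 10.127.50.2:/pfs04 /mnt/pfs04
ls -a /mnt/pfs04            # .ckpt is virtual and is not shown, even in a full listing
ls /mnt/pfs04/.ckpt         # shows one directory per mounted read-only checkpoint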

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

23

If we change directory into one of the checkpoint directories, we see the contents of the production file system as they were at the time the checkpoint was taken. End users can recover an accidentally deleted file themselves by copying it from the .ckpt directory back into the production file system.
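Continuing the hypothetical example above (the checkpoint directory and file names are illustrative; real directory names follow the default yyyy_mm_dd_hh.mm.ss format described earlier):

cd /mnt/pfs04/.ckpt
ls                                                   # e.g. 2015_11_02_14.30.00_GMT
cp 2015_11_02_14.30.00_GMT/report.doc /mnt/pfs04/    # copy the deleted file back to the PFS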

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

24

Another method of accessing checkpoint data via CIFS is to use the Shadow Copy Client (SCC). The SCC is a Microsoft Windows feature that allows Windows users to access previous versions of a file via the Microsoft Volume Shadow Copy Service. The SCC will need to be downloaded from Microsoft online if using Windows 2000 or XP. SCC is also supported by VNX to enable Windows clients to list, view, copy, and restore from files in checkpoints created with SnapSure. To view the checkpoint data via SCC, the Previous Versions tab of the file system Properties window will need to be accessed.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

25

In Unisphere, you can schedule checkpoint creation and refreshes at multiple hours of a day, days of a week, or days of a month. You can also specify multiple hours of a day on multiple days of a week to further simplify administrative tasks. More than one schedule per PFS is supported. You can also create a schedule for a PFS that already has a checkpoint created on it, and modify existing schedules. Under the Schedules tab you can find a list of schedules and their runtimes. Runtimes are based on the time zone set on the Control Station of the VNX. There are four possible schedule states:

Active: Schedule is past its first execution time and is to run at least once in the future.

Pending: Schedule has not yet run.

Paused: Schedule has been stopped and is not to run until resumed, at which point the state returns to Active.

Complete: Schedule has reached its end time or maximum execution times and is not to run again unless the end time is changed to a future date.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

26

An automated checkpoint refresh solution can be configured using Unisphere or the Control Station CLI command nas_ckpt_schedule. There is an option to enter names, separated by commas, for the checkpoints that are to be created in the schedule. The number of names you type must equal the number of checkpoints specified in the “Number of checkpoints to keep” field. If you do not type any checkpoint names, the system assigns default names in the format ckpt_<schedule_name>_<nnn>. In this automatic naming scheme, schedule_name is the name of the associated checkpoint schedule and nnn is an incremental number, starting at 001. If scripts are going to be used for checkpoints, the Relative Naming feature can make script writing easier by defining a prefix name for the checkpoint. The prefix name is defined when the schedule is created. When the checkpoint is created, the schedule uses the relative prefix, delimiter, and starting index to create a checkpoint file name relative to the order of checkpoints, starting with 000 by default and incrementing with each new checkpoint. This makes the checkpoint names consistent, predictable, and easily scripted. For example, if the prefix were defined as “nightly”, the delimiter set to “.”, and the starting index set to “0”, the first checkpoint created with this schedule would be named nightly.000 and the second would be named nightly.001.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

27

A checkpoint is not intended to be a mirror, disaster recovery, or high-availability tool. It is partially derived from real time PFS data. A checkpoint might become inaccessible or unreadable if the associated PFS is inaccessible. Only a PFS and its checkpoints saved to a tape or an alternate storage location can be used for disaster recovery. SnapSure allows multiple checkpoint schedules to be created for each PFS. However, EMC supports a total of 96 read-only checkpoints and 16 writeable (scheduled or otherwise) per PFS, as system resources permit. This limit includes checkpoints that currently exist, are created in a schedule, or pending in other schedules for the PFS, and internally created checkpoints, such as for backups. Checkpoint creation and refresh failures can occur if the schedule conflicts with other background processes, such as the internal VNX for File database backup process that occurs from 1 to 5 minutes past the hour. If a refresh failure occurs due to a schedule or resource conflict, you can manually refresh the affected checkpoint, or let it automatically refresh in the next schedule cycle. Also, do not schedule checkpoint creation or refreshes within 15 minutes of each other in the same schedule or between schedules running on the same PFS. Refresh-failure events are sent to the /nas/log/sys_log file.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

28

This lesson covers planning for SnapSure, including scheduling concerns and performance considerations.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

29

When planning and configuring checkpoint schedules, there are some very important considerations. If these points are not carefully included in your planning, undesirable results will likely occur, such as checkpoints that are not created and/or updated. Some key points to consider are:

• Do not schedule checkpoint creation/refresh operations to take place at the same time as the VNX database backup. This function begins at one minute past every hour. During the VNX for File database backup, the database is frozen and new configurations (such as a checkpoint configuration) are not possible. In some very large scale implementations, this database backup could take several minutes to complete.

• Do not schedule checkpoints to occur at the same time. This could require careful forethought.

When scheduled tasks are missed because resources are temporarily unavailable, they are automatically retried for a maximum of 15 times, each time sleeping for 15 seconds before retrying. Retries do not occur on such conditions as network outages or insufficient disk space.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

30

Depending on the type of operation, SnapSure can cause a decrease in performance.

Creating a checkpoint requires the PFS to be paused. PFS write activity is suspended, but read activity continues while the system creates the checkpoint. The pause time depends on the amount of data in the cache, but it is typically one second or less. SnapSure also needs time to create the SavVol for the file system if the checkpoint is the first one.

Deleting a checkpoint requires the PFS to be paused. All PFS write activity is suspended momentarily, but read activity continues while the system deletes the checkpoint.

Restoring a PFS from a checkpoint requires the PFS to be frozen. This means that all PFS activities are suspended during the restore initialization process. When read activity is suspended during a freeze, connections to CIFS users are broken; this is not the case when only write activity is suspended.

The PFS sees performance degradation only the first time each block is modified. This is known as Copy on First Write. Once a particular block has been modified, subsequent modifications to that same block do not impact performance.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

31

Refreshing a checkpoint requires it to be frozen. Checkpoint read activity is suspended while the system refreshes the checkpoint. During a refresh, the checkpoint is deleted and another one is created with the same name. Clients attempting to access the checkpoint during a refresh process experience the following:

NFS clients: The system continuously tries to connect indefinitely. When the system thaws, the file system automatically remounts.

CIFS clients: Depending on the application running on Windows, or if the system freezes for more than 45 seconds, the Windows application might drop the link. The share might need to be remounted and remapped.

If a checkpoint becomes inactive for any reason, read/write activity on the PFS continues uninterrupted.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

32

Writes to a single SavVol are purely sequential, and NL-SAS drives have very good sequential I/O performance that is comparable to SAS drives. Reads from a SavVol, on the other hand, are nearly always random, a pattern where SAS drives perform better. Workload analysis is important in determining whether NL-SAS drives are appropriate for SavVols. Many SnapSure checkpoints are never read from at all; or, if they are, the reads are infrequent and not performance-sensitive. In these cases, NL-SAS drives could be used for SavVols. If checkpoints are used for testing, data mining, and data sharing, and experience periods of heavy read access, then SAS drives are a better choice. Be careful when using multiple SavVols on a single set of NL-SAS drives, since the combined I/O at the disk level will appear random, again favoring SAS drives.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

33

This lesson covers the management of checkpoint storage and memory, and how to modify checkpoint schedules.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

34

To configure auto-extension for a SavVol, or to determine how much SavVol storage a particular file system has, access the Properties page of one of the file system's checkpoints. Toward the bottom of the page there is a link for Checkpoint Storage. This link provides information regarding the state of the SavVol, its metavolume name and dVol usage, and its auto-extension settings. There is only one SavVol per file system, no matter how many checkpoints are associated with that file system. You may also manually extend a SavVol from the Checkpoint Storage page.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

35

SavVol storage may also be verified by using the Control Station CLI as shown on this slide. By listing all of the checkpoints of a file system, we can also determine how much space each checkpoint is using. Checkpoints are listed in the order in which they were created. The y in the inuse field shows that the checkpoints are mounted. The value in the fullmark field is the current SavVol HWM. The value in the total_savvol_used field is the cumulative total of the SavVol used by all PFS checkpoints, not each individual checkpoint in the SavVol. The value in the ckpt_usage_on_savvol field is the SavVol space used by a specific checkpoint. The values displayed in the total_savvol_used and ckpt_usage_on_savvol fields are rounded up to the nearest integer; therefore, the displayed sum of all ckpt_usage_on_savvol values might not equal the total_savvol_used value.
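The listing on the slide is typically produced with fs_ckpt run against the PFS. The PFS name below is hypothetical, and the exact command and output columns should be confirmed against the fs_ckpt man page for your release:

fs_ckpt pfs04 -list -all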

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

36

As mentioned in a previous lesson, when a checkpoint is refreshed, SnapSure deletes the checkpoint and creates a new checkpoint, recycling SavVol space while maintaining the old file system name, ID, and mount state. This is one way of creating more SavVol space without actually extending the SavVol. Once a SavVol is extended, even if all the checkpoint data is deleted, that space is not returned to the system unless the SavVol was created from Thin pool LUNs. In other words, a SavVol built on classic or Thick pool LUNs is not decreased in size automatically by the system. When refreshing a checkpoint, SnapSure first unmounts the checkpoint and deletes the old checkpoint data. Then a new checkpoint is created and assigned as the active, or newest, checkpoint. Finally, SnapSure remounts the checkpoint on the Data Mover.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

37

The checkpoint refresh, restore, and delete operations may all be performed from the Checkpoints page. Navigate to Data Protection > Snapshots > File System Checkpoints.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

38

Once a checkpoint schedule is up and running, several settings may be modified without having to create a new schedule. The schedule name, description and times are some of the values that are modifiable. Checkpoint name and schedule recurrence cannot be modified, even if the schedule is paused. In this case, a new schedule will need to be created.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

39

VNX for File allocates up to 1 GB of physical RAM per Data Mover to store the blockmaps for all checkpoints of all file systems on a Data Mover. If a Data Mover has less than 4 GB of RAM, then 512 MB will be allocated. Each time a checkpoint is read, the system queries it to find the location of the required data block. For any checkpoint, blockmap entries that are needed by the system but not resident in main memory are paged in from the SavVol. The entries stay in main memory until system memory consumption requires them to be purged.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

40

The server_sysstat command, when run with the option switch, “-blockmap”, provides the current blockmap memory allocation and the amount of blocks paged to disk while not in use. Each Data Mover has a predefined blockmap memory quota that is dependent on the hardware type and VNX for File code being used. For more information please refer to the VNX Network Server Release Notes.
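For example, to display the blockmap statistics for Data Mover server_2 (output fields vary by VNX for File release):

server_sysstat server_2 -blockmap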

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

41

This module covered the key points listed.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

42

This Lab covers VNX SnapSure local replication. First SnapSure is configured and checkpoints are created. Then checkpoints are used to restore files from NFS and CIFS clients. A Checkpoint Refresh operation is performed; and finally, a file system Restore is performed from a checkpoint.

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

43

This lab covered VNX SnapSure local replication. Checkpoints were created and files were restored from NFS and CIFS clients. A Checkpoint Refresh operation was performed and a file system was restored from a checkpoint. Please discuss as a group your experience with the lab exercise. Were there any issues or problems encountered in doing the lab exercise? Are there relevant use cases that the lab exercise objectives could apply to? What are some concerns relating to the lab subject?

Copyright 2015 EMC Corporation. All rights reserved.


Module: VNX SnapSure

44

This module focuses on performing and testing Data Mover failover and failback.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

1

This lesson covers Data Mover failover, including the failover process, how standby Data Movers work, and planning for Data Mover failover.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

2

A foundation of High Availability for File is Data Mover failover, which protects against a possible failure of a Data Mover. When Data Mover failover is configured, the Control Station monitors all of the Data Movers against a long list of conditions that could indicate a problem with a Primary Data Mover. If any of these conditions is identified, the Control Station transfers functionality from the Primary to the Standby Data Mover without disrupting the availability of the file systems. After failover, the Standby Data Mover assumes the identity and configuration of the faulted Data Mover, including TCP/IP and Ethernet addresses, mounted file systems, exports, and configurations for NFS and/or CIFS. The impact to service depends on the connected client or application's ability to handle the change gracefully. Any FTP, archive, or NDMP sessions that are active when the failure occurs are automatically disconnected and must be manually restarted. The original Primary Data Mover remains in a faulted state until the problem is resolved.

After the triggering condition is remedied, the File services can be manually returned to the original Data Mover by the VNX administrator. If the problem persists, and if CallHome or Email Home is configured, the Control Station calls EMC Customer Service or your service provider with a notification of the event and diagnostic information.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

3

The table shown here lists typical situations which will trigger Data Mover failover, as well as some instances which do not trigger a failover. Note: If the Control Station is not running, Data Mover failover cannot occur. When the Control Station returns to service, it will recognize the Data Mover failure and initiate the appropriate action depending on the automatic, retry, or manual failover policy.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

4

The Standby Data Mover is a hot-spare for the Primary Data Movers, and can act as a spare for up to 7 primaries (depending on model). The recommended ratio is one Standby for every three Data Movers. A two Data Mover VNX is pre-configured with server_3 as a Standby for server_2.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

5

The failover policy determines how a standby Data Mover takes over after a primary Data Mover fails. The Control Station invokes this policy when it detects the failure of the primary Data Mover.

The following are the three failover policies from which you can choose when you configure Data Mover failover:

• Auto: The VNX Control Station immediately activates the standby Data Mover (default policy).

• Retry: The VNX Control Station first tries to recover the primary Data Mover by resetting (rebooting) it. If the recovery fails, the Control Station activates the standby.

• Manual: The Control Station shuts down the primary Data Mover and takes no other action. The standby must be activated manually; it can only be configured using the CLI.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

6

To prepare for Data Mover failover configuration, first determine how many Standby Data Movers are required, and ensure they are configured to support their corresponding Primary Data Mover(s):

• Check that the Primary Data Movers and the corresponding Standbys have the same network hardware components.

• Ensure that the Standby Data Mover is free from all networking and file system configurations (there should be no IP configurations set for the network interfaces).

Prior to configuration, check the Ethernet switch to verify that the switch ports of the Primary Data Movers are assigned to the same VLANs as the Standby Data Mover and set to the same speed/duplex. In addition, verify that any LACP/Link aggregation configuration related to the ports for the Primary Data Movers is identical for the Standby. If a Primary Data Mover is connected to a port configured with Jumbo frames then the Standby Data Mover should be connected to ports with the same configuration. Also, if a Standby Data Mover will be supporting multiple Primary Data Movers, the Standby Data Mover must support one network configuration shared by all Primary Data Movers.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

7

To configure server_3 as the Standby for server_2 using Unisphere, follow these steps:

1. From the Unisphere screen, select System > Hardware > Data Movers.
2. Right-click the highlighted Data Mover and click Properties.
3. From the Role drop-down menu, select primary.
4. From the Standby Movers options list, check the desired Standby from the list.
5. Select the desired policy from the Failover Policy drop-down menu.
6. Click Apply.

Note: server_3 reboots.
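For reference, the same relationship can be created from the Control Station CLI with server_standby. The syntax below is believed correct but should be confirmed against the server_standby man page for your release:

server_standby server_2 -create mover=server_3 -policy auto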

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

8

This lesson covers how to test, restore and delete Data Mover failover configuration.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

9

It is important to test Data Mover failover to ensure that, if needed, a Data Mover fails over properly and clients can still access their file systems by way of the Standby Data Mover, which is then acting as a Primary Data Mover. If any network or operating system environmental changes are made, verify that the Standby Data Mover still provides client access to the file systems.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

10

It is recommended that you periodically test the functionality of the Data Mover failover configuration. Testing failover involves manually forcing a Data Mover to fail over. Run the following CLI command from either Unisphere or an SSH connection to the Control Station:

server_standby <primary_DM> -activate mover

Example: To force server_2 to fail over to its Standby Data Mover, use the following command:

server_standby server_2 -activate mover
server_2: replace in progress ..done
commit in progress (not interruptible)...done
server_2: renamed as server_2.faulted.server_3
server_3: renamed as server_2

Note: The Primary Data Mover is renamed to server_2.faulted.server_3 (OriginalPrimary.faulted.OriginalStandby) and the Standby Data Mover assumes the name of the failed Primary.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

11

After a failover has occurred and any issues that caused the failover have been corrected, or after completing a Data Mover failover test, restore the failed Data Mover back to its Primary status using the restore option. This slide shows how to use Unisphere to restore a Data Mover after a failover.
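From the Control Station CLI, the restore option mentioned above is invoked against the Data Mover's original name; for example:

server_standby server_2 -restore mover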

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

12

You can delete a failover relationship within the Unisphere Properties page of a Data Mover by navigating to System > Hardware > Data Movers and selecting the individual Data Mover. There are two steps to the operation because an action is performed on two different Data Movers. The interface on the left shows the first step, which is to delete an existing Standby relationship from a Primary Data Mover; all you have to do here is uncheck the Standby Data Mover. The interface on the right shows the second step, which is to change the role of a Data Mover from Standby to Primary. Note: If the Data Mover is a Standby for more than one Primary, you must remove the relationship for each Data Mover. The Data Mover that was previously the Standby will reboot.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

13

This demo/lab covers the configuration of Data Mover failover and then testing failover. To launch the video use the following URL: https://edutube.emc.com/Player.aspx?vno=u/nei80YW2SjhLC/erYBhQ==&autoplay=true

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

14

This module covered Data Mover failover, including configuring Data Mover failover, testing failover, and performing failback.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

15

This course covered all the key elements of managing and integrating VNX Block and File infrastructure. That includes integrating VNX Block access for open systems hosts (Linux and Windows) as well as ESXi hosts through FC and iSCSI connectivity, and File access using NFS and CIFS/SMB. This course also covered initial storage system configuration and security using the Unisphere tool, as well as the configuration of local replication solutions for both Block and File environments.

Copyright 2015 EMC Corporation. All rights reserved.


Module: Data Mover Failover

16

Copyright 2015 EMC Corporation. All rights reserved.


Course Summary

17
