Installation and Maintenance of Hitachi NAS Platform TCI2035

Courseware Version 3.0

Notice: This document is for informational purposes only, and does not set forth any warranty, express or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data Systems being in effect, and that may be configuration-dependent, and features that may not be currently available. Contact your local Hitachi Data Systems sales office for information on feature and product availability. Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited warranties. To see a copy of these terms and conditions prior to purchase or license, please call your local sales representative to obtain a printed copy. If you purchase or license the product, you are deemed to have accepted these terms and conditions.

THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL, INCLUDING, WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR LOST DATA, EVEN IF HDS IS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.

Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd. The following terms are trademarks or service marks of Hitachi Data Systems Corporation in the United States and/or other countries:

Hitachi Data Systems registered trademarks: Hi-Track, ShadowImage, TrueCopy, Essential NAS Platform, Universal Storage Platform

Hitachi Data Systems trademarks: HiCard, HiPass, Hi-PER Architecture, HiReturn, Hi-Star, iLAB, NanoCopy, Resource Manager, SplitSecond, TrueNorth, Universal Star Network

All other trademarks, trade names, and service marks used herein are the rightful property of their respective owners.

NOTICE: Notational conventions: 1KB stands for 1,024 bytes, 1MB for 1,024 kilobytes, 1GB for 1,024 megabytes, and 1TB for 1,024 gigabytes, as is consistent with IEC (International Electrotechnical Commission) standards for prefixes for binary and metric multiples.

© Hitachi Data Systems Corporation 2013. All Rights Reserved.

HDS Academy 1073
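All capacity figures in this guide follow the binary convention above. A short Python sketch makes the conversion to raw byte counts explicit (PB is extrapolated from the same 1,024 pattern, since the notice stops at TB):

```python
# The notational convention above: each prefix step is a factor of 1,024.
KB = 1024          # 1KB = 1,024 bytes
MB = 1024 * KB     # 1MB = 1,024 KB
GB = 1024 * MB
TB = 1024 * GB
PB = 1024 * TB     # same pattern, used for capacities quoted later (2PB, 32PB, ...)

print(f"1TB = {TB:,} bytes")        # 1,099,511,627,776 bytes
print(f"32PB = {32 * PB:,} bytes")  # 36,028,797,018,963,968 bytes
```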

Contact Hitachi Data Systems at www.hds.com.

This training course is based on firmware version 11.1.3250.xx, also referred to as Angel-2.

HDS Confidential: For distribution only to authorized parties.

Contents

INTRODUCTION
  Welcome and Introductions
  Course Description
  Required Knowledge and Skills
  Supplemental Courses
  Course Objectives
  Course Topics
  Learning Paths
  Collaborate and Share
  HDS Academy Is on Twitter and LinkedIn

1. PLATFORM OVERVIEW
  Module Objectives
  Hitachi NAS Platform
  Hitachi NAS Portfolio
  Hitachi Unified Storage
  Hitachi Unified Storage Options
  Hitachi Unified Storage (HUS)
  What Is Hitachi NAS Platform or NAS Gateway Technology?
  High-level Implementation
  Platform Performance Specifications
  Differences Between Models 3080 and 3090
  HNAS 3090 Performance Accelerator
  Differences Between Models 4060 and 4080
  What Is What
  High-performance NAS Platform 3200 Rear View
  Hitachi NAS Platform Models 3080 and 3090
  Cable Side HNAS 3080 or 3090 G1 and G2
  Hitachi NAS Platform Models 4xx0
  Summary Hitachi NAS Platform 4100
  Module Summary
  Module Review

2. HARDWARE ARCHITECTURE
  Module Objectives
  Hitachi NAS 3080 and 3090 Simplified Block Diagram
  Hitachi NAS 4060 and 4080 Simplified Block Diagram
  Mercury (Main) FPGA Board (MFB) Model 30x0
  Memory and Cache per Single Node
  Mercury (Main) Motherboard (MMB)
  Mercury (Main) FPGA Board (MFB)
  Hitachi NAS Platform 30x0 Rear Panel
  Hitachi NAS Platform Port Layout 3080/3090
  Hitachi NAS Platform 4xx0 Rear Panel
  Hitachi NAS Platform Port Layout 4060/4080/4100
  Hitachi NAS Platform Models 4xx0 Flavors
  MMB Module Flavors and Port Layout
  NVRAM or Battery Status LED
  Facia and Status LEDs 3100 and 3200
  Facia and Status LEDs 3080 and 3090
  Facia and Status LEDs 4060, 4080, and 4100
  Power/Server Status LED
  NAS Node Status LED (Alert)
  Reset and Power Switch
  Redundant and Hot Swappable Power Supply Unit (PSU)
  SMU200 and SMU300 Replaces SMU100
  SMU400 Early Information
  Module Summary
  Module Review

3. SOFTWARE ARCHITECTURE
  Module Objectives
  Software Components Hitachi NAS Platform Models
  Node Boot Sequence
  BOS and Linux Incorporated (BALI)
  Platform API (PAPI)
  NAS Platform Software Suite
  Hitachi NAS Platform Software Licensing
  Hitachi NAS Software Bundles
  Module Summary
  Module Review

4. INSTALLATION OF HITACHI NAS PLATFORM
  Module Objectives
  Installation Outline of Hitachi NAS Platform
  Rack Mounting
  Login User Accounts Using Embedded SMU
  Login User Accounts Using External SMU
  Null Modem Cable Configuration
  Three Important Success Criteria
  Single HNAS 30x0 or 4xx0 with Embedded SMU
  Initial Setup Single Node Embedded SMU
  Default Interface Settings for 3080 and 3090
  Single Node Initial Setup: Models 3080 and 3090
  Node Initial Setup: Models 4060, 4080, and 4100
  Node Initial Setup Model 4xx0 1 of 3
  Node Initial Setup Model 4xx0 2 of 3
  Node Initial Setup Model 4xx0 3 of 3
  Single Node Initial Setup: License Keys
  Initial Node Setup: Hitachi NAS Platform GUI
  Adding License Key
  Initial Setup: Hitachi NAS Platform Node GUI
  Server Setup Wizard
  Single Node Initial Setup: File Service EVS
  Hitachi NAS Platform Management Console
  Clustering from A to Z
  Initial Setup: First Node in a Cluster
  Cluster Initial Setup: Model 30x0 CLI First Node
  Initial Setup: External SMU
  Initial Setup: External SMU CLI
  Initial Setup: SMU Wizard
  Initial Setup: SMU GUI
  Initial Setup: Managed Servers
  Initial Setup: Hitachi NAS Platform Node GUI
  Cluster Initial Setup: License Keys
  Initial Setup: Hitachi NAS Platform Licenses
  Adding License Key
  Cluster Initial Setup: Enable Clustering
  Initial Setup: Promote Clustering
  Promoted to a Single-Node Cluster
  HNAS Clustered with External SMU
  Cluster Initial Setup: Second Node
  Cluster Initial Setup: Models 30x0 CLI Second Node
  Initial Setup: Flow and IP Addressing
  Initial Setup: Hitachi NAS Platform Node GUI
  Initial Setup: License Key
  Adding License Key
  Initial Setup: Join the Second Node
  Initial Setup: Add Single Node 2 to Clustered Node 1
  Two Node Cluster Configured
  Initial Setup: File Service EVS
  Module Summary
  Module Review

5. ETHERNET AND FIBRE CHANNEL NETWORKS
  Module Objectives
  GbE Cable Distances
  HNAS 30x0 Cluster 10GbE Interface (XFI)
  Finisar Small Form Factor (SFP+)
  HNAS Models 4xx0 Use SFP+
  Cable Distance and Optical Media Type
  HNAS 4xx0 SFP+ Copper TwinAx Cable Assembly
  Cable Distance and Copper Media Type
  NAS Platform Models 3080 and 3090 Networks
  NAS Platform Models 4060, 4080, and 4100 Networks
  Hitachi NAS 30x0 Network and Embedded SMU
  Hitachi NAS 4xx0 Network and External SMU
  Hitachi NAS 4xx0 Network and Clustering
  Private and Public Management Network Embedded SMU 30x0
  Private and Public Management Network External SMU 30x0 Cluster
  Private and Public Management Network with SMU Managed Legacy Storage
  EVS Connectivity in a Cluster
  IP Addressing and EVS
  Aggregation Configuration Screen Models 30x0
  Aggregation Configuration Screen Models 4xx0
  LACP Protocol Usage
  NTP and Management Network
  Fibre Channel Connectivity
  Storage Considerations: Platform Differences
  AMS200, 500, 1000, 2000 and HUS
  Enterprise Including VSP (Not HUS VM)
  Hitachi Unified Storage VM
  Fibre Channel Minimum Configuration for 2-Node 2200 Cluster
  Fibre Channel Configuration for 2-Node 3100 Cluster and Enterprise Storage
  High-performance NAS Platform 3200 Connectivity
  Fibre Channel Switchless Configuration for 2-Node 3100 or 3200 Cluster
  Fibre Channel Switchless Configuration for Single 3100 or 3200 Node
  Fibre Channel Fabric Configuration for 2-Node Cluster HNAS 4xx0
  Fibre Channel Best Practice Configuration for 2-Node Cluster Using Secure Storage Domains
  Fibre Channel Recommended Configuration for 2-Node Cluster Enterprise 4xx0
  Fibre Channel Configuration for 2-Node Cluster Enterprise 4xx0
  Fibre Channel Switch-less Configuration for 2-Node Cluster Modular 30x0
  Fibre Channel Switch-less Configuration for 2-Node Cluster Enterprise 4xx0
  Fibre Channel Switch-less 2-Node Cluster Configuration 30x0 and NetApp 2680
  Fibre Channel Switch-less Configuration for 2-Node Cluster 4xx0 Enterprise
  Most Important SCSI Command Node 1
  Most Important SCSI Command Node 2
  Problem Determination Example 1
  Problem Determination Example 2
  Storage Considerations
  Storage Enhancements for HNAS
  HUS 100 Options and HNAS
  Module Summary
  Module Review

6. FILE SYSTEM AND ACCESS PROTOCOLS
  Module Objectives
  From Disk Drive to HNAS Virtualized Storage
  Hitachi Storage System Integration
  BlueArc RAID Rack Discovery
  Create System Drives
  System Drives – Create SD
  CLI Displaying the System Drives
  From Disk Drive to HNAS Virtualized Storage
  Hitachi Dynamic Provisioning (HDP) and HNAS
  From Physical Disk to Storage Pool
  Expanding a Storage Pool
  File System in a Storage Pool
  File System Using Auto Expansion
  System Drive Groups (SDG)
  Hitachi Dynamic Provisioning (HDP)
  Storage Pool Best Practices
  Storage Pools Specifications
  Creating a Storage Pool
  File System Specifications
  File System Definition
  Tiered File Systems (Tiered Storage Pools)
  Creating a Tiered Storage Pool
  Creating a Tiered Store Pool
  Displaying a Tiered Storage Pool
  From Disk Drive to Drive Letter and UNIX Mount Point
  What Are the Similarities?
  What Is Different?
  UNIX Permissions
  Windows Permissions
  Common Internet File System (CIFS) Authentication/Active Directory Service (ADS)
  ADS and Network Basic Input/Output System (NetBIOS)
  ADS and Domain Name System (DNS)
  ADS Computers
  ADS Computer Properties
  CIFS Shares
  Network File System (NFS) and Exports
  Multi-protocol Access
  Module Summary
  Module Review

7. N-WAY CLUSTERING AND ENTERPRISE VIRTUAL SERVER (EVS)
  Module Objectives
  Enterprise Virtual Servers (EVS) Attributes
  EVS Configuration Summary
  Virtual Server Configuration
  Automatic EVS Migration (Clustering) Network Problem
  Automatic EVS Migration (Clustering) Node HW Problem
  2-node Clustering
  Clustering Basics
  NVRAM Usage in a 2-way Clustered Configuration
  N-way Clustering
  NVRAM Usage in a 4-way Clustered Configuration
  Cluster Configuration
  EVS Failover Functionality and Process Summary
  IP Address before Failover
  On Failing Over
  After Failover
  Cluster Failover Reporting
  Let's Have a Look at a Single Node
  A Cluster Improves Things
  Hitachi Synchronous Disaster Recovery (Sync DR) Cluster Service
  Sync DR Components and Connectivity
  This Is NOT a Sync DR Cluster
  Module Summary
  Module Review

8. MAINTENANCE
  Module Objectives
  Node IP Addresses 1 of 2
  Node IP Addresses 2 of 2
  Management Facilities
  Securing Management Access
  Useful Command Line Utilities
  CLI Commands and Context
  Maintenance Actions
  Software Patching
  Software Version Numbers and Names
  Software Upgrades
  Upgrade Path in Release Notes
  Software Version Example from Daily Summary Email
  Saving External SMU Configuration Before Upgrade
  Saving Embedded SMU and 30x0/4xx0 Server Registry
  External SMU SW Upgrade and Downgrade
  1a. Selecting CentOS Installation Method Second
  1b. Selecting CentOS Installation Method Clean
  2. External SMU Application Upgrade Procedures
  Embedded SMU Upgrade and Downgrade 30x0/4xx0
  Upgrade of Embedded SMU SW from the GUI
  Model 30x0 and 4xx0 Server Upgrade Procedures
  Hitachi Command Suite (HCS) and Device Manager
  Hitachi Command Suite (HCS) 7.3.0
  Hitachi Command Suite (HCS) Version 7.4 and up
  SNMP Manager Connectivity (First SNMP Hi-Track)

9. TROUBLESHOOTING AND REPLACEMENT
  Module Objectives
  Other Hitachi NAS Platform Management Interfaces
  Storage Array Setup
  Alert SMTP Connectivity
  Configuring SMTP Servers
  Configuring SMU Email Alerts Forwarding
  Set up Email Forwarding on the SMU
  Set Up Email Profile
  Daily Health Check Email
  Alerts Summary Email
  Diagnostic Download
  Diagnostic Report: Email for the Nodes
  Diagnostic Report: Email for SMU and More
  Performance Information Report (PIR)
  Performance Graph
  Using the trouble Command
  trouble Reporter Examples
  trouble Performance Reporter Examples
  Server-Based Packet Capturing
  Fascia (Bezel) Removal
  Model 30x0 G1 Fan Replacement Procedure
  Model 30x0 G1 Removing Fan Unit
  Model 30x0 G2/4xx0 Fan Replacement
  Model 30x0/4xx0 Battery Pack
  General Battery Precautions
  Model 30x0 G1 NVRAM Battery Replacement
  Model 30x0 G1 Battery Connector
  Model 30x0 G2/4xx0 Battery Replacement
  Battery Replacement in Caddy
  Model 30x0 G1 Hard Disk Replacement Procedure
  Model 30x0 G1 Hard Disk Cabling and Positioning
  Model 30x0 G2/4xx0 G2 Hard Disk Replacement
  Hardware Field System Testing
  Manufacturing Test and Diagnostic Software (MTDS)
  MTDS Console
  MTDS Test Commands
  Executing: mtds field-test
  Ending: mtds field-test
  Mercury Motherboard Memory Test Memtest86+
  Unrecoverable Configuration or Logical Errors
  Factory Reset to Default Assessment
  Fixing Logical Errors
  Resetting Servers to Factory Defaults
  HNAS Server Node Replacement
  Spare Part List Model 30x0
  Spare Part List SMU, Switches, and Optics
  General Precautions
  Module Summary
  Module Review

NEXT STEPS
GLOSSARY
EVALUATING THIS COURSE

Introduction

Welcome and Introductions

 Student Introductions
• Name
• Position
• Experience
• Your expectations

Course Description

Required Knowledge and Skills

 Successfully completed:
• Hitachi Enterprise Storage Systems Installation, Configuration and Support, or
• Hitachi Modular Storage Systems Installation, Configuration and Support

 For the best results from this training, it is important that you have experience and skills in:
• NAS and SAN concepts
• TCP/IP networking concepts, such as routers and switches
• Network management and maintenance
• UNIX/Linux administration
• Microsoft® Windows® administration

Supplemental Courses

 Supplemental courses include:
• TCI2102 — Administration and Operation of Hitachi NAS Platform

Course Objectives

Course Topics

Modules                                                        Lab Activities
Course Introduction
1. Platform Overview                                           1. Component Identification
2. Hardware Architecture                                       2. Hitachi 30x0 and 4xx0 Initial Setup
3. Software Architecture                                       3. External SMU Initial Setup
4. Installation of Hitachi NAS Platform Models 30x0 and 4xx0   4. Hitachi 30x0 or 4xx0 LUN Discovery
5. Ethernet and Fibre Channel Networks                         5. Networking
6. File System and Access Protocols                            6. File System and Basic CIFS Administration
7. N-way Clustering and Enterprise Virtual Server (EVS)        7. Switch-less Clustering
8. Maintenance                                                 8. Maintenance and Firmware Upgrade
9. Troubleshooting and Replacement                             9. Troubleshooting and Replacement

Learning Paths

 Are a path to professional certification
 Enable career advancement
 Are for customers, partners and employees
• Available on HDS.com, Partner Xchange and HDSnet
 Are available from the instructor
• Details or copies

HDS.com: http://www.hds.com/services/education/
Partner Xchange Portal: https://portal.hds.com/
HDSnet: http://hdsnet.hds.com/hds_academy/

Please contact your local training administrator if you have any questions regarding Learning Paths, or visit the applicable website.

Collaborate and Share

 Learn what's new in the Academy
 Ask the Academy a question
 Discover and share expertise
 Shorten your time to mastery
 Give your feedback
 Participate in forums

Academy in theLoop! (HDS internal only)
theLoop: http://loop.hds.com/community/hds_academy/course_announcements_and_feedback_community

HDS Academy Is on Twitter and LinkedIn

Follow the HDS Academy on Twitter for regular training updates.

LinkedIn is an online community that enables students and instructors to actively participate in online discussions related to Hitachi Data Systems products and training courses.

These are the URLs for Twitter and LinkedIn:
 http://twitter.com/#!/HDSAcademy
 http://www.linkedin.com/groups?gid=3044480&trk=myg_ugrp_ovr

1. Platform Overview

Module Objectives

 Upon completion of this module, you should be able to:
• State the purpose and benefits of using Hitachi NAS Platform
• State the concept of the Hitachi NAS Platform architecture
• Identify the positioning of the Hitachi NAS Platform in the Hitachi Data Systems NAS portfolio

Hitachi NAS Platform

BlueArc Corporation, now a part of Hitachi Data Systems:
 Private company founded in 1998
 Headquartered in San Jose, CA, with an R&D center in the United Kingdom
 Highest performing NAS server in the industry
 File serving
 Email serving
 Second largest high-end NAS company (Gartner)
 Fastest growing NAS company three years in a row (Gartner)
 Many years of sales success
 Global sales, professional services, and support infrastructure
 BlueArc has been part of Hitachi Data Systems since September 2011

Hitachi NAS Portfolio

[Chart: models positioned by price against features/capacity/performance:]

Model      IOPS per Node   Max Capacity
F1140      10K             2PB
3080       41K             2PB
3090       73K             4PB
3090 PA    96K             4PB
4060       70K             8PB
4080       105K            16PB
4100       140K            32PB

Performance numbers are used for comparison purposes only. HNAS 3090 is shown with and without Performance Accelerator; HNAS 3090 PA is with Performance Accelerator installed. For more exact, customer-facing numbers, consult the appropriate and updated performance documents.

F1140 = Hitachi NAS Platform F1140
3080 = Hitachi NAS Platform 3080
3090 = Hitachi NAS Platform 3090
3090 PA = Hitachi NAS Platform 3090 including Performance Accelerator license
4060 = Hitachi NAS Platform 4060
4080 = Hitachi NAS Platform 4080
4100 = Hitachi NAS Platform 4100

Hitachi Unified Storage

[Diagram: Hitachi Unified Storage serves file access over NFS and CIFS and block access over FC and iSCSI, all managed through Hitachi Command Suite (HCS).]

Hitachi Unified Storage Options

[Diagram: Hitachi Unified Storage options combine block modules with the file module (HNAS) to offer FC-SAN (#10: FC), IP-SAN/FCoE (#11: iSCSI, #12: FCoE), and CIFS/NFS access.]

Hitachi Unified Storage (HUS)

[Diagram: the HUS family combines block modules and file modules under Hitachi Command Suite management: Entry HUS 110 (Model XS) with file modules F3080 or F4060; Mid HUS 130 (Model S) and Max HUS 150 (Model MH) with file modules F3080 or F4060, or F30x0 or F4xx0.]

F3080 is File module M1; F3090 is File module M2.

What Is Hitachi NAS Platform or NAS Gateway Technology?

[Diagram: clients on the LAN/WAN reach the NAS gateway, which reaches storage over Fibre Channel.]

The gateway technology works like a converter between LAN/WAN file-level data access and Fibre Channel block-level data access. A NAS gateway is designed primarily for the data store and retrieve tasks, which are only a few of the many tasks a general-purpose file server must handle. Because the server concentrates on storing and retrieving data, it often outperforms file servers designed to span multiple file server functions.

Benefits:
 Feature rich
 Asset protection
 NAS/SAN consolidation for improved Total Cost of Ownership (TCO)
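As a rough illustration of that conversion, the sketch below maps a file-level read (byte offset and length within a file) onto the block-level reads a gateway would issue over Fibre Channel. It is a conceptual Python sketch with invented names and an assumed block size, not HNAS internals:

```python
BLOCK_SIZE = 4096  # assumed bytes per block, for illustration only

def file_read_to_block_reads(extent_start_lba: int, offset: int, length: int):
    """Translate a file-level read (byte offset and length within a file)
    into the block addresses read over Fibre Channel.

    extent_start_lba is where the file's data begins on the LUN; a real
    file system uses extent maps rather than one contiguous range."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    return [extent_start_lba + b for b in range(first, last + 1)]

# A 10,000-byte read starting 6,000 bytes into a file touches file blocks 1-3,
# which here map to LBAs 2049-2051:
print(file_read_to_block_reads(extent_start_lba=2048, offset=6000, length=10000))
```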


High-level Implementation

[Diagram: servers (NFS, CIFS, FTP, and iSCSI) connect over the IP data network to a two-node Hitachi NAS Platform cluster; the cluster attaches through dual Fibre Channel switches/SANs to Hitachi Data Systems enterprise, modular, and unified storage. An SMU and a standby SMU sit on the private management network, with the public management and public SAN/storage management networks reaching the management workstation.]

Consult HiFIRE for interoperability with FC switches and supported firmware levels.

Support for enterprise storage systems:
 Hitachi Unified Storage VM (HUS VM)
 Hitachi Virtual Storage Platform (VSP)
 Hitachi Universal Storage Platform V (USP V)
 Hitachi Universal Storage Platform VM (USP VM)

Support for modular storage systems:
 Hitachi Unified Storage 110 (HUS 110)
 Hitachi Unified Storage 120 (HUS 120)
 Hitachi Unified Storage 130 (HUS 130)
 Hitachi Adaptable Modular Storage 2100 (AMS2100)
 Hitachi Adaptable Modular Storage 2300 (AMS2300)
 Hitachi Adaptable Modular Storage 2500 (AMS2500)
 Hitachi Simple Modular Storage 100 (SMS100)
 Hitachi Workgroup Modular Storage 100 (WMS100)
 Hitachi Adaptable Modular Storage 200 (AMS200)
 Hitachi Adaptable Modular Storage 500 (AMS500)
 Hitachi Adaptable Modular Storage 1000 (AMS1000)


Platform Performance Specifications

| Specification                                     | 3080/M1                  | 3090/M2                  | 4060          | 4080                        | 4100                      |
| Cluster nodes, max per HNAS cluster               | 2                        | 2                        | 2             | 4 initial / 8 later         | 4 initial / 8 later       |
| Usable capacity, max per cluster                  | 2PB                      | 4PB                      | 8PB           | 16PB                        | 16PB initial / 32PB later |
| File system size, max per cluster                 | 64TB                     | 128TB                    | 256TB         | 256TB initial / 512TB later | 256TB initial / 1PB later |
| Max number of file systems per cluster            | 125                      | 125                      | 125           | 125                         | 125                       |
| Max number of system drives per cluster           | 512                      | 512                      | 512           | 512                         | 512                       |
| Max concurrent connections per node               | 30,000                   | 45,000                   | 60,000        | 60,000                      | 60,000                    |
| Max concurrent open files per single node/server  | 22,000                   | 90,000                   | 221,000       | 221,000                     | 474,000                   |
| LAN / file serving                                | 6 x 1Gb (1) + 2 x 10Gb (2) | 6 x 1Gb (1) + 2 x 10Gb (2) | 4 x 10Gb (3) | 4 x 10Gb (3)               | 4 x 10Gb (3)              |
| Fibre Channel / backend storage                   | 4 x 4Gb FC (4)           | 4 x 4Gb FC (4)           | 4 x 8Gb FC (5) | 4 x 8Gb FC (5)             | 4 x 8Gb FC (5)            |
| Cluster interconnect                              | 2 x 10Gb (2)             | 2 x 10Gb (2)             | 2 x 10Gb (3)  | 2 x 10Gb (3)                | 2 x 10Gb (3)              |

(1) 1GbE copper
(2) XFP modules, multi and single mode optical
(3) SFP+ modules, passive copper, multi and single mode optical
(4) SFP modules, multi mode optical
(5) SFP+ modules, multi mode optical

Differences Between Models 3080 and 3090

[Photos: the 3090 and the 3080.]

HNAS 3090 Performance Accelerator

 The Performance Accelerator enables a throughput and I/O performance enhancement within the Mercury server VLSI:
• Throughput component: the connection between the Storage Interface (SI) FPGA and the Tachyon Fibre Channel controller changes from 4 lanes to 8 lanes
• IOPS component: the number of cache controllers within the SI FPGA increases from 1 to 2

 As with all performance changes, the exact results depend on many factors and will differ for each customer's applications.
 If the bottleneck in a system is neither the PCIe connection to Tachyon nor the SI cache controller, then installing Performance Accelerator is unlikely to make any difference.

Licensing
 Performance Accelerator is a licensed feature and is enabled only if the Performance Accelerator license is present.
 Performance Accelerator is supported on the NAS 3090 only.
 Performance Accelerator is installed by:
• Installing a Performance Accelerator license
• Performing a full system reboot; if clustered, reboot one node at a time (see the sketch below)
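A one-node-at-a-time reboot keeps the cluster serving files while each node activates the license. The Python sketch below illustrates only the rolling pattern; node_reboot and node_is_healthy are invented stand-ins for whatever management interface is actually used, not a documented HNAS API.

```python
import time

def node_reboot(node: str) -> None:
    """Invented stand-in: issue a full reboot to one node (in practice,
    through the node's CLI or the management GUI)."""
    print(f"Rebooting {node} ...")

def node_is_healthy(node: str) -> bool:
    """Invented stand-in: report whether the node has rebooted and
    rejoined the cluster."""
    return True  # placeholder so the sketch runs as-is

def rolling_reboot(nodes):
    """Activate a newly installed license with a full reboot, one node
    at a time, so the remaining node(s) keep serving files."""
    for node in nodes:
        node_reboot(node)
        while not node_is_healthy(node):
            time.sleep(30)  # poll until the node is back before moving on

rolling_reboot(["node-1", "node-2"])
```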


Differences Between Models 4060 and 4080

[Photos: the 4060 and the 4080.]

What Is What

| Hitachi Data Systems                        | BlueArc Mercury |
| HNAS 3080 (G1)                              | Mercury 50      |
| HNAS 3080 (G2) and F3080 (File Module M1)   | Mercury 55      |
| HNAS 3090 (G1)                              | Mercury 100     |
| HNAS 3090 (G2) and F3090 (File Module M2)   | Mercury 110     |
| HNAS 4060/4080 and F4060/F4080              | Mercury 220     |
| HNAS 4100 and F4100                         | Mercury 230     |

High-performance NAS Platform 3200 Rear View

[Photo labels: NIM, FSA/FSX, FSB, SIM, Power Supply Units (PSUs), batteries, System Management Unit (SMU).]

High-performance NAS Server
The basic high-performance NAS server consists of a System Management Unit (SMU) connected to a 4U form factor chassis. The chassis includes redundant fans and power supplies, plus four blade modules: Network Interface Modules (NIMs), a File System X module (FSX), a File System B module (FSB), and a Storage Interface Module (SIM).

System Management Unit
The SMU is a dedicated appliance for executing management functions on the high-performance NAS system. The SMU can manage multiple nodes in a clustered environment, as well as several clusters. It is not involved in any client data movement. The SMU consists of an off-the-shelf PC running Linux, with a BlueArc-developed application providing the SMU functionality.

Platform Overview Hitachi NAS Platform Models 3080 and 3090

Hitachi NAS Platform Models 3080 and 3090 Mercury FPGA Board (MFB)

Mercury Motherboard (MMB)

Power Supply Units (PSUs)

System Management Unit (SMU)

Hitachi NAS Platform Models 3080 and 3090 The Hitachi NAS Platform models 3080 and 3090 are able to support the same file services features as the traditional high-performance NAS Platform called the BlueArc Titans. The 3080 and 3090 consist of only one 1U blade built into a 3U cabinet. This blade is not an FRU in Generation 1 (G1), so the cabinet including this blade is one FRU. The only FRUs in the cabinet are the two power supply units (PSUs), the three fan assemblies (FANs), the two HDDs, the 10Gb Small Form Factor Pluggables (XFPs), and Small Form-Factor Pluggables (SFPs). The 3080 and 3090 can use a “built-in” (embedded) SMU in the same processor box controlled by Linux as the OS and an SMU application running on top of Linux. This SMU application can be disabled and an external SMU can be used for management. The external SMU is the same HW and Linux version as the SMU used for the Hitachi High-performance NAS Platform models 3100 and 3200 or BlueArc Titan 3, but the SMU software might need to be upgraded to the same level as the firmware in the 3080 and 3090.


Cable Side HNAS 3080 or 3090 G1 and G2

[Photos: cable side of the G1 and G2 chassis]


Hitachi NAS Platform Models 4xx0

[Photos: the 4060/4080 and 4100 chassis]


Summary Hitachi NAS Platform 4100

HNAS 4100

Performance Targets
• NFS SPECsfs2008: 140K IOPS per node
• Throughput: 2,000MB/s

Scalability Targets
• 125 file systems per cluster
• File system sizes up to 1PB
• Up to 32PB shared storage
• Disk capacity 32PB
• Directory capacity up to 16 million files
• Up to 1,024 snapshots

High Availability
• Hot swappable units
• Clustering up to 8 nodes
• NVRAM mirroring
• Parallel RAID striping
• Active-Active clustering

Features
• Unified NAS and IP SAN
• Hardware accelerated
• Virtual volumes and servers
• Multi-protocol support
• Multi-Tiered Storage (MTS)
• Policy-based management
• Data protection features


Module Summary

 In this module, you have learned to:
• State the purpose and benefits of using Hitachi NAS Platform
• State the concept of the Hitachi NAS Platform architecture
• Identify the positioning of the Hitachi NAS Platform in the Hitachi Data Systems NAS portfolio


Module Review

1. Which Hitachi storage systems does the Hitachi NAS Platform support?
2. Is a 10GbE customer data LAN supported on the Hitachi NAS Platform 3080 model?
3. Does the 4060 model support 1GbE UTP on the customer data LAN?
4. Is an external SMU required?
5. List the external connectivity differences between models 3080 and 3090.
6. How many nodes can be controlled by the embedded SMU?


2. Hardware Architecture

Module Objectives

 Upon completion of this module, you should be able to: • Identify the hardware components of the Hitachi NAS Platform • Interpret important indicators and status • Explain the external connectivity specification


Hitachi NAS 3080 and 3090 Simplified Block Diagram

[Block diagram: the HNAS chassis with the Mercury FPGA Board (MFB1) — Network Interface (NI) with 2 x 10GbE and 6 x GbE Fastpath ports, Data Movement (TFL) with 2GB NVRAM, SiliconFS file system metadata (WFS) with 10GB cache, Disk Interface (DI) with 4GB sector cache, Fibre Channel Interface (FCI) feeding a Tachyon QE4+ with four FC ports, and Motherboard Interface (MBI) — connected over PCIe to the Mercury Motherboard (MMB): Intel Core 2 Duo E8400 3.0GHz, 8GB memory, BALI, SMU, and eth0/eth1 management ports.]

HNAS 3080 and 3090 documentation: Mercury FPGA Board (MFB)
HNAS 4060, 4080, and 4100 documentation: Main FPGA Board (MFB)

 Network Interface (NI)
• Ethernet TX and RX
• Ethernet TCP and framing
• Replaces RX, TX and TCP
 Data movement and NVRAM (TDP + FDP + WLOG = TFL)
• Replaces TDP, FDP and WLOG
 Motherboard Interface (MBI)
• Bridges the PCI bus to the other FPGAs
• Deals with interrupts
• Replaces the PCI block in WLOG
 Wise File System (WFS)
• WFS file system chip
• Deals with all file system functions
• Replaces WFILE, WDIR, OBJ, FSA
 Disk Interface (DI)
• Moves data to and from disk; sector cache
• Equivalent of the Storage Interface Module (SIM)
 Fibre Channel Interface (FCI)
• Interfaces the DI and the Tachyon QE4+
• Replaces the PCI block in the Storage Interface Module (SIM)


Hitachi NAS 4060 and 4080 Simplified Block Diagram

[Block diagram: the HNAS chassis with the Main FPGA Board (MFB2) — Network Interface (NI) with 4 x 10GbE Fastpath ports, Data Movement (TFL) with 4GB NVRAM, SiliconFS file system metadata (WFS) with 10GB cache, Disk Interface (DI) with 4GB sector cache, Fibre Channel Interface (FCI) feeding a Tachyon QE8 with four FC ports, and Motherboard Interface (MBI) — connected over PCIe to the Main Motherboard (MMB): Intel Xeon Quad-Core E3-1225 3.1GHz, 16GB memory, BALI, SMU, and eth0/eth1 management ports.]

HNAS 3080 and 3090 documentation: Mercury FPGA Board (MFB)
HNAS 4060, 4080, and 4100 documentation: Main FPGA Board (MFB)

 Network Interface (NI)
• Ethernet TX and RX
• Ethernet TCP and framing
• Replaces RX, TX and TCP
 Data movement and NVRAM (TDP + FDP + WLOG = TFL)
• Replaces TDP, FDP and WLOG
 Motherboard Interface (MBI)
• Bridges the PCI bus to the other FPGAs
• Deals with interrupts
• Replaces the PCI block in WLOG
 Wise File System (WFS)
• WFS file system chip
• Deals with all file system functions
• Replaces WFILE, WDIR, OBJ, FSA
 Disk Interface (DI)
• Moves data to and from disk; sector cache
• Equivalent of the Storage Interface Module (SIM)
 Fibre Channel Interface (FCI)
• Interfaces the DI and the Tachyon Fibre Channel controller (QE8 on these models)
• Replaces the PCI block in the Storage Interface Module (SIM)


Mercury (Main) FPGA Board (MFB) Model 30x0

 MBI (Arria) — Motherboard Interface
 TFL (Stratix III) — Data movement and NVRAM
 WFS (Stratix III) — Supports all file system functions
 NI (Stratix III) — Network Interface
 DI (Stratix III) — Disk Interface
 FCI (Stratix II) — Fibre Channel Interface
 Product Marketing will often only count the 4 main Stratix III FPGAs in customer-facing brochures and material

HNAS 3080 and 3090 documentation: Mercury FPGA Board (MFB)
HNAS 4060, 4080, and 4100 documentation: Main FPGA Board (MFB)
Model 4xx0 uses a newer FPGA family: Stratix IV.


Memory and Cache per Single Node

                         3080/M1  3090/M2  4060  4080  4100
CPU Memory in GBs           8        8      16    16    32
NVRAM(1) in GBs             2        2       4     4     8
Metadata Cache in GBs      10       10      10    10    36
Sector Cache in GBs         4        4       4     4    16
Other in GBs                8        8      12    12    16
Total in GBs               32       32      46    46   108

(1) The NVRAM data retention period will be 72 hours, and the NVRAM battery needs replacing every 2 years.
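The Total row is simply the sum of the rows above it; a quick self-contained check (a sketch added for this guide, not part of the original material):

# Per-model memory breakdown in GB, restated from the table above:
# [CPU memory, NVRAM, metadata cache, sector cache, other]
breakdown = {
    "3080/M1": [8, 2, 10, 4, 8],
    "3090/M2": [8, 2, 10, 4, 8],
    "4060":    [16, 4, 10, 4, 12],
    "4080":    [16, 4, 10, 4, 12],
    "4100":    [32, 8, 36, 16, 16],
}
totals = {"3080/M1": 32, "3090/M2": 32, "4060": 46, "4080": 46, "4100": 108}

for model, parts in breakdown.items():
    assert sum(parts) == totals[model], model   # every column adds up
    print(model, sum(parts), "GB total")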


Mercury (Main) Motherboard (MMB)

MMB
 Off-the-shelf x86 motherboard
 Single processor
• 30x0 dual core, 4xx0 quad core
 On-board 10/100/1000 Ethernet (3 ports)
 Connected to 2 x 2.5" HDDs (Linux SW RAID-1 configuration)
 Runs Debian Linux 5.0
 Inter-module communications over loopback Linux sockets and shared memory
 64-bit architecture
 Model 30x0: 8GB memory
 Model 40x0: 16GB memory
 Model 4100: 32GB memory

HNAS 3080 and 3090 documentation: Mercury MotherBoard (MMB)
HNAS 4060, 4080, and 4100 documentation: Main MotherBoard (MMB)

The MMB contains a multi-core CPU and 8, 16, or 32GB of system memory. All of the software tasks run on the MMB. All the custom hardware functionality resides on the MFB, which contains all the FPGA functionality found in Hitachi NAS models.


Mercury (Main) FPGA Board (MFB)

MFB
 Single custom PCB, similar in size to a motherboard
 Connects to the MMB using four PCIe lanes
 Six FPGAs (replacing 13 High-performance NAS FPGAs)
 Model 30x0: 24GB memory
 Model 40x0: 50GB memory
 Model 4100: 76GB memory

HNAS 3080 and 3090 documentation: Mercury FPGA Board (MFB)
HNAS 4060, 4080, and 4100 documentation: Main FPGA Board (MFB)


Hitachi NAS Platform 30x0 Rear Panel

[Rear-panel drawing with callouts: 2 x 10G Ethernet cluster ports (XFP); 6 x 1G Ethernet network ports (1000BASE-T copper); 2 x 10G Ethernet network ports (XFP); private 10/100 Ethernet 5-port switch (100BASE-T copper); 4 x 1/2/4G Fibre Channel ports (SFP); NVRAM, power and alert status LEDs; power and reset switches; 2 x redundant hot-swappable PSUs; motherboard mouse and keyboard, 2 x USB, serial port, RJ45 (future use), video, and 2 x 10/100/1000 Ethernet management ports. Motherboard port layout may vary; key motherboard ports are identified by labelling.]

Five sets of Ethernet ports:
 3 x 10/100/1000 motherboard ports (RJ45)
• 2 active management ports, 1 inactive and reserved for future use
 6 x 1G file serving ports (RJ45)
 2 x 10G file serving ports (XFP)
 2 x 10G cluster ports (XFP)
 Five-port unmanaged switch (RJ45, no internal connections)

File serving ports can be aggregated:
 Up to 8 aggregations
 1G and 10G ports cannot be mixed in an aggregation

Also: USB ports, serial port, VGA, keyboard and mouse.


Hitachi NAS Platform Port Layout 3080/3090

[Drawing with callouts: 2 x 10GbE cluster interconnect (optional); 2 x 10GbE file serving (optional); 6 x GbE file serving; 5 x 10/100 switch for the private network; 4 x FC storage ports (2 optional); public and private management ports; KVM and serial port for initial setup; USB ports for keyboard and/or external media]

Hitachi NAS 3080 and 3090 have five sets of Ethernet ports. From left to right: two 10GbE cluster interconnect ports (XFP) for use when clustering 3080 and 3090 systems; then, for client file services, 2 x 10GbE file serving ports (XFP) and 6 x 1GbE file serving ports (RJ45) for connecting to the public network.

File serving ports can be aggregated. All "like" Ethernet ports can be combined into one or more aggregations; the only restriction is that 10GbE ports cannot be combined with 1GbE ports in the same aggregation. Up to 8 aggregations are supported. Direct traffic to specific ports by giving the aggregations the appropriate IP addresses.

Next in line is a five-port unmanaged RJ45 switch (no internal connections). Then there are four 1/2/4Gb/s Fibre Channel storage ports; all four can be used simultaneously and still maintain their maximum speed of 4Gb/s.

On the MMB are the mouse and keyboard PS2 ports and a video port for connection to a KVM switch, two USB ports, and one serial interface, plus two Ethernet ports for management access to the public and private networks. The third Ethernet port above the USB ports is not currently active but might be used in the future.


Hitachi NAS Platform 4xx0 Rear Panel

[Rear-panel drawing with callouts: 2 x 10G Ethernet cluster ports (SFP+); 4 x 10G Ethernet network ports (SFP+); 4 x 2/4/8G Fibre Channel ports (SFP+); NVRAM, power and alert status LEDs; power and reset switches; 2 x redundant hot-swappable PSUs; motherboard mouse and keyboard, 2 x USB, RJ45, serial port, video, and 2 x 10/100/1000 Ethernet management ports. Motherboard port layout may vary; key motherboard ports are identified by labelling.]

Three sets of Ethernet ports:
 3 x 10/100/1000 motherboard ports (RJ45)
• 2 active management ports, 1 inactive and reserved for future use
 4 x 10G file serving ports (SFP+)
 2 x 10G cluster ports (SFP+)

File serving ports can be aggregated:
 Up to 4 aggregations

Also: USB ports, serial port, VGA, keyboard and mouse.


Hitachi NAS Platform Port Layout 4060/4080/4100

[Drawing with callouts: 2 x 10GbE cluster interconnect (optional); 4 x 10GbE file serving; Intelligent Platform Management Interface (IPMI, future use); 4 x FC storage ports (2 optional); public and private management ports; mouse, keyboard and VGA; USB ports for keyboard and/or external media; serial port for initial setup]

Hitachi NAS 4060, 4080, and 4100 have three sets of Ethernet ports. From left to right: two 10GbE cluster interconnect ports (SFP+) for use when clustering 4060 and 4080 systems, then four 10GbE file serving ports (SFP+) for client file services.

File serving ports can be aggregated, from 1 up to 4 aggregations. Direct traffic to specific ports by giving the aggregations the appropriate IP addresses.

Next in line are four 2/4/8Gb/s Fibre Channel storage ports; all four can be used simultaneously and still maintain their maximum speed of 8Gb/s.

On the MMB are the mouse and keyboard PS2 ports and a video port for connection to a KVM switch, two USB ports, and one serial interface, plus two Ethernet ports for management access to the public and private networks. The third Ethernet port above the USB ports is not currently active but might be used in the future for the Intelligent Platform Management Interface (IPMI).


Hitachi NAS Platform Models 4xx0 Flavors

[Photos: the Supermicro and Tyan chassis variants]


MMB Module Flavors and Port Layout

[Photos: MMB port layouts — HNAS 3080/3090 TYAN Toledo; HNAS 4060/4080/4100 Supermicro; HNAS 4060/4080/4100 TYAN]


NVRAM or Battery Status LED

[Photos: NVRAM status LED location on the 3100/3200 FSB module and on the 3080/3090/4060/4080/4100 MFB]

 NVRAM LED
– Off: disabled or battery exhausted
– Green solid: operational
– Green flash: contents protected by battery
– Hold the reset button for five seconds to isolate the battery for shipping
– NVRAM is enabled when the functional software boots


Fascia and Status LEDs 3100 and 3200

[Photo: fascia showing the Status/Power LED and Status LED locations]

Fascia and Status LEDs 3080 and 3090

[Photo: fascia showing the Status/Power LED and Status LED locations]

Fascia and Status LEDs 4060, 4080, and 4100

[Photo: fascia showing the Status/Power LED and Status LED locations]

Power/Server Status LED

[Photos: Power Status LED on the 3100/3200 NIM module and on the 3080/3090/4060/4080/4100 MFB]

 Power/Status LEDs (mirror the fascia LEDs)
– Off: The server is not powered up.
– Flash (5Hz): The server is booting.
– Flash (0.6Hz): The server is available to host file services but is not currently doing so.
– Green: Normal operation with a single server, or an active server in a clustered operation.


NAS Node Status LED (Alert)

[Photos: Status LED on the 3100/3200 NIM module and on the 3080/3090/4060/4080/4100 MFB]

 Status LED (Amber)
– Off: Normal operation.
– Amber: Critical failure; the NAS server is not operational.
– Slow flash (once every three seconds): System shutdown has failed.
– Flash (0.8Hz): The NAS server needs attention; a non-critical failure has been detected — for example, a fan or power supply has failed.


Reset and Power Switch

[Photos: RESET switch on the 3100/3200 NIM module and on the 3080/3090/4060/4080/4100 MFB]

 RESET switch
• With all Hitachi NAS Platforms, pressing the reset button is always preferable to pulling the power cables or using the main switch
• Generates diagnostic dumps
 Power switch (3080/3090/4060/4080/4100)
• Effectively a motherboard power switch
• Should not be required in normal use

Note: The reset and power switches are recessed and require the insertion of a pen or similar object to activate.


Redundant and Hot Swappable Power Supply Unit (PSU)

[Photos: 3080/3090 PSU (450W) with DC GOOD, PSU STATUS and AC GOOD LEDs; 4060/4080/4100 PSU (500W) with cable retention feature; 3100/3200 PSU]

 90-264V, 47-63Hz AC
 The 450W PSU used in the 30x0 is not compatible with the 500W PSU used in the 4xx0
 The system has dual load-sharing PSUs and can function on one unit

HNAS 3100/3200 PSU status:
 Green = main power, DC, internal fans and battery OK
 Amber = PSU fault, including internal fans and battery
 Off = no main power, or switched off

HNAS 30x0/4xx0:
 AC good LED
• On: AC input is powered, operating normally
• Off: Check the AC input feed
 DC good LED
• On: DC output operating normally
• Off: Disconnect power, wait 10 seconds, reconnect; if this does not fix the problem, replace the PSU
 PSU status LED
• Off: OK
• On: Internal fault — for example, the PSU exceeds its acceptable temperature or a fan has failed
• If the operating range has been exceeded, disconnect for 10 minutes and then reconnect; replace the PSU if the LED remains on


SMU200 and SMU300 Replaces SMU100

 The newer SMU200 and SMU300 replace the previous SMU100
 Faster processor, more memory, larger HDD, DVD-ROM drive, and no floppy
 After Tiger-1 v7.0, the SMU100 hardware is end of life (EOL)
 With Angel-1 v11.0, the SMU200 hardware is end of life (EOL)
 SMU400 hardware will replace the SMU200 in CY 2013

SMU100:              Pentium 4 2.8GHz, 1GB RAM, 80GB HDD (SATA)
SMU200:              Pentium Dual-Core 1.8GHz, 1GB RAM, 500GB HDD (SATA)
SMU300 (April 2011): Intel Core 2 Duo E7500 2.93GHz, 4GB RAM, 1TB HDD (SATA)

As of today, Hitachi only sells SMU200s and SMU300s. If an SMU100 is earmarked for replacement due to a defect, only an SMU200 or SMU300 is delivered as a replacement unit. Hitachi does not stock SMU100s for spare parts. The SMU200 will run out of stock during 2011; after that, only SMU300s will be delivered and stocked for spare parts.


SMU400 Early Information

 Front-swappable disk, to facilitate RMAs of only the disk drives (instead of the entire SMU)
 Dual-redundant power supplies
 IPMI, to facilitate remote KVM (no more need for physical access to an SMU); provided as-is, neither supported nor maintained by HNAS engineering or HDS Support

[Photo: SMU400 with private and public ports labeled]

 Intel Xeon E3-1220v2 3.1GHz CPU
 1TB SATA disk, front-swappable; only 1 of 4 slots is used
 8GB RAM

The SMU400 will not be released and GA at the same time as the HNAS 4xx0 and the Angel-2 software release.


Module Summary

 In this module, you have learned to: • Identify the hardware components of the Hitachi NAS Platform • Interpret important indicators and status • Explain the external connectivity specification


Module Review

1. Can the Customer Data LAN media be twisted pair or optical?
2. Which bit-rates are supported on the Cluster Interconnect interface?
3. Which media is used for the Cluster Interconnect interface?
4. How many PCBs are included in HNAS 3080 or 3090?
5. How many PCBs are included in HNAS 4060, 4080, and 4100?
6. Are the PSUs in the 30x0 and 4xx0 interchangeable?


3. Software Architecture

Module Objectives
 Upon completion of this module, you should be able to:
• Identify the BlueArc Operating System (BOS) in the Hitachi NAS Platform nodes
• Identify the software components of the Hitachi NAS Platform nodes
• Follow the individual steps in the boot process
• Explain the structure of licensing for the HNAS system
• List the components in the Hitachi NAS Platform software suite


Software Components Hitachi NAS Platform Models

[Diagram: software components. The MFB (with the MBI) hosts BALI and EVS0, connected over PCIe to the MMB. The MMB runs Linux with the MCP, PAPI (with its PAPI SOAP server), an Atlas SOAP server, and the SMU application with its PAPI and Atlas SOAP clients. Ports 1-4 carry the data network.]

The underlying operating system is Linux. Linux manages the hardware, including the mirrored HDDs and the network protocol stack.

SOAP = Simple Object Access Protocol
MFB = Mercury FPGA Board
MMB = Mercury Motherboard
MCP = Mercury Charge/Power Board
SMU = System Management Unit
PAPI = Platform API
BALI = BOS And Linux Incorporated


Node Boot Sequence

[Diagram: the same software component layout, with the boot steps numbered 1-4.]

 To make the node operational, the first requirement is to boot the motherboard and load the Linux kernel.
 The next step is to bring up the three most important application modules: BALI, PAPI, and the embedded SMU (if the embedded SMU services are not disabled).
 Having BALI active enables the Mercury FPGA Board to reset, load the firmware for file services, and start the Enterprise Virtual Servers (EVSs).
 EVS0 is configured by default and will now be accessible for administrative purposes, such as configuration tasks and monitoring.


BOS and Linux Incorporated (BALI)

 "BOS And Linux Incorporated" (BALI)
 A software platform
 A fundamental Hitachi NAS Platform enabler
 Locked to a single core (core 1)

BALI starts after Linux is running and is the software that controls the NAS node functionality. BALI = BOS and Linux Incorporated.


Platform API (PAPI)

 "Platform API" (PAPI) is a Linux application
 Provides platform independence for managing the Linux configuration
• Network (DNS, NIS, IP), date/time, package management, version and status
 The BALI registry is the "master"
• Changes can therefore be propagated around the cluster, and the registry overwrites the Linux configuration if there is a mismatch
 There is a PAPI client in both BALI and the SMU
• PAPI is never accessed directly
 PAPI has a housekeeper
• It regularly scans for configuration mismatches and fixes them

PAPI communicates the necessary information to the Linux platform for execution, scans periodically for Linux configuration changes, and fixes any discrepancies; if you try to change Linux directly, PAPI will overwrite the changes. The custom FPGA system board is managed through a device driver, as any other device would be. The Linux network stack provides the connectivity used for management.

There is a SOAP client or server for each of the major BlueArc software components. SOAP was first implemented in Stone-1 v6.0, and it enables different firmware versions to communicate. SOAP is an industry standard: a simple XML-based protocol that allows applications to exchange information over HTTP. It makes the individual components fairly independent of each other, which makes development and modification much simpler.

SOAP = Simple Object Access Protocol
PAPI = Platform API
API = Application Programming Interface
XML = Extensible Markup Language

The PAPI services can be restarted on request.
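Since SOAP is just XML posted over HTTP, a generic request can be illustrated with nothing more than the Python standard library. This is a minimal sketch only — the endpoint address, port, action and message body below are hypothetical and do not describe the actual PAPI or Atlas interfaces:

import urllib.request

endpoint = "http://192.0.2.200:8080/papi"   # hypothetical address and port
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetVersion xmlns="urn:example:papi"/>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    endpoint,
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "urn:example:papi#GetVersion",   # made-up action name
    },
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # the reply is another SOAP envelope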


NAS Platform Software Suite

 Virtualization
• Virtual file system and volumes
• Basic and Premium Deduplication
• Enterprise Virtual Servers (EVS)
• Clustered Name Space (CNS)
 Storage Management
• Integrated tiered storage
• Tiered File Systems (TFS)
• Policy-based data migration and replication
 Data Protection
• Snapshots
• Asynchronous replication
• Anti-virus scanning
• Disk-to-disk and disk-to-tape backup
• TrueCopy Remote Replication and ShadowImage Replication
• Synchronous Disaster Recovery

TrueCopy Remote Replication refers to the Hitachi TrueCopy Remote Replication bundle. ShadowImage Replication refers to Hitachi ShadowImage Replication. Synchronous Disaster Recovery refers to Synchronous Disaster Recovery for Hitachi NAS Platform.


Hitachi NAS Platform Software Licensing

[Diagram: license bundle structure — a BASE key plus individual license keys such as EVS, Security Model and iSCSI, building up to the Ultra bundle]

This example is only to explain the concept of license bundles and individual licenses. The bundles might be changed for reasons such as adjusting the solution to the market and competitive solutions.


Hitachi NAS Software Bundles

Entry
 CIFS and NFS
 2x Enterprise Virtual Server
 Storage Pool, FS Audit
 File System Rollback
 Quick Snapshot Restore
 Base Deduplication
 60-day Trial License
 Cluster Name Space
 HA cluster
 iSCSI
 File System Recover from Snapshot
 Data Migrator
 Replication, incl. Object, IDR, IBR, ADC
 XVL (External Cross Volume Links)
 Data Migrator to Cloud(1)
 File Clone
 Read Caching
 Synchronous Image Backup
 Virtual Server Migration
 Virtual Server Security
 Premium Deduplication
 PerfAccelerator(2)
 Terabyte License
 WORM

Value
 4x Enterprise Virtual Server
 Entry bundle, plus:
 iSCSI
 File System Recover from Snapshot
 Data Migrator
 Replication, incl. Object, IDR, IBR, ADC
 XVL (External Cross Volume Links)
 Data Migrator to Cloud(1)
 File Clone
 Enterprise Virtual Server
 Read Caching
 Synchronous Image Backup
 Virtual Server Security
 Virtual Server Migration
 Premium Deduplication
 PerfAccelerator(2)
 Terabyte License
 WORM

Ultra
 64x Virtual Server
 Value bundle, plus:
 Replication, incl. Object, IDR, IBR, ADC
 XVL (External Cross Volume Links)
 Data Migrator to Cloud(1)
 File Clone
 Read Caching
 Synchronous Image Backup
 Virtual Server Security
 Virtual Server Migration
 PerfAccelerator(2)
 Premium Deduplication
 Terabyte License
 WORM

 The same software package runs on HUS and HUS VM
 Licenses are perpetual licenses, per node
 Enterprise Virtual Server license upgrades are available in Insight
 The Enterprise License Agreement provides volume discounts for 10+ nodes; available as term or perpetual licenses
 NAS Virtual Infrastructure Integrator (V2I) is an optional application

(1) Data Migrator to Cloud will be released in April 2013 in HNAS OS v11.1
(2) PerfAccelerator is only for File Module M2 or 3090

Optional items, shown in blue on the original slide, are purchased separately.

Valid and active license key options:


CIFS                  - license for CIFS
NFS                   - license for NFS
ISCSI                 - license for iSCSI
WORM                  - license for WORM file systems
SFM                   - license for EVS migration within a server farm
DM                    - license for Data Migrator
CNS                   - license for CNS
QSR                   - license for snapRestore
FSR                   - license for FS rollback
ReadCache             - license for ReadCache
EvsSecurity           - license for EVS security
MetroCluster          - license for MetroCluster
JetMirror             - license for JetMirror
XVL                   - license for XVL
FSRS                  - license for FS Recover from Snapshot
HDS                   - license for HDS storage
JetClone              - license for JetClone
JetImage              - license for JetImage
JetCenterStandard     - license for JetCenterStandard
JetCenterFoundation   - license for JetCenterFoundation
PerfAccelerator       - license for PerfAccelerator
BaseDeduplication     - license for base Deduplication
PremiumDeduplication  - license for premium Deduplication
DMCloud               - license for Data Migrator Cloud Option
CLUSTER:<n>           - license for a cluster with <n> nodes
EVS:<n>               - license for <n> EVSs in a cluster ('max' for unlimited EVSs)
ModelType:<model>     - license for upgrading the HNAS model (valid value: '4080')
TB:<n>                - license for <n> terabytes
EXP:mm/dd/yyyy        - expiry date for the license

(This list might not be complete and is subject to change in newer firmware versions; please refer to the release notes.)
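The keyed entries at the end of the list follow a simple name:value syntax. A minimal sketch of parsing such option strings — the example strings themselves are hypothetical, chosen only to exercise the formats listed above:

import datetime

def parse_license_option(option):
    """Split a license option such as 'CLUSTER:2' or 'EXP:04/30/2014' into
    a (name, value) pair; plain flags such as 'CIFS' get the value None."""
    name, sep, value = option.partition(":")
    if not sep:
        return name, None                    # simple feature flag, e.g. CIFS
    if name == "EXP":                        # expiry dates use mm/dd/yyyy
        return name, datetime.datetime.strptime(value, "%m/%d/%Y").date()
    if value.isdigit():                      # numeric values, e.g. node or TB counts
        return name, int(value)
    return name, value                       # e.g. EVS:max or ModelType:4080

# Hypothetical example strings, for illustration only:
for option in ["CIFS", "CLUSTER:2", "EVS:max", "TB:100", "EXP:04/30/2014"]:
    print(parse_license_option(option))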


Module Summary

 In this module, you have learned to:
• Identify the BlueArc Operating System (BOS) in the Hitachi NAS Platform nodes
• Identify the software components of the Hitachi NAS Platform nodes
• Follow the individual steps in the boot process
• Explain the structure of licensing for the HNAS system
• List the components in the Hitachi NAS Platform software suite


Module Review

1. From where is the FW loaded and updated in an HNAS 4100 model?
2. From where is the FW loaded and updated in HNAS 3080 or 3090 models?
3. What is a Cluster Name Space?
4. List some of the Data Protection features.


4. Installation of Hitachi NAS Platform

Module Objectives
 Upon completion of this module, you should be able to:
• Identify the physical and environmental specifications
• Understand the login accounts used on the different networks
• Recognize the individual steps in the hardware and software installation flow for the Hitachi NAS Platform as a single node
• List the individual steps in the hardware installation flow for the external SMU installation and setup procedures
• Perform the procedure to join an additional Hitachi NAS Platform node into an N-way cluster


Installation Outline of Hitachi NAS Platform

1. Rack mounting
2. Pre-cabling
   a. To avoid IP address conflicts, do not connect any customer-facing network to the nodes until the initial setup is completed
3. Fibre Channel (FC) switch configuration
4. Storage subsystem configuration
5. Initial setup of the first node
   (If a single node, install the SMU application and the process stops here; otherwise continue.)
6. Initial setup of the SMU
   a. SMU initial configuration (CLI — Command Line Interface)
   b. SMU Wizard (GUI)
7. Initial setup of the second node in the cluster
8. Join the second node to the cluster

The sequence above is a suggestion to get the basic configuration completed. Reference: MK-90BA021-xx Hitachi NAS Platform System Installation Guide MK-92HNAS015-xx Hitachi NAS Platform model 4000 System Installation Guide Release 11.1.3250


Rack Mounting

 Telescopic clip-in rails fit 610mm to 740mm deep racks
 Minimal fasteners and simple installation
 Secure with screws
 Do not use any other rail kit


Login User Accounts Using Embedded SMU

[Diagram: login accounts and access paths on a node with the embedded SMU. supervisor/supervisor reaches the BALI console (EVS0 on the MFB) through the SSC clients on the private and public management networks. manager/nasadmin and root/nasadmin log in to Linux on the MMB through the SSH server or the serial console (root@mercury(bash):~#). admin/nasadmin logs in to the embedded SMU web server. PAPI exposes its SOAP server on Linux; eth0 faces the public management network, eth1 the private management network, and ports 1-4 carry the public data network.]

MFB = Mercury (Main) FPGA Board
MMB = Mercury (Main) MotherBoard
SMU = System Management Unit
BALI = BOS And Linux Incorporated
PAPI = Platform API
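For illustration only — the addresses are the 3080/3090 defaults from the table later in this module, and the exact client and account setup can vary per site — the two CLI entry points can be reached like this:

# BALI (BOS) console on the admin EVS, as user "manager" (default password nasadmin):
ssh manager@192.0.2.2

# Linux console on the MMB, as root — only under instruction from your support team:
ssh root@192.0.2.200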

Login User Accounts Using External SMU

[Diagram: as on the previous page, but the SMU application runs on an external appliance with its own eth0 (public management) and eth1 (private management) interfaces. admin/nasadmin logs in to the external SMU, which reaches the nodes' SOAP servers and SSC over the private management network; the node-side accounts (supervisor, manager, root) are unchanged.]


Null Modem Cable Configuration

 A standard null modem cable from an electronics shop will work
 Linksys, DF700 and similar null modem cables do not work
 Ensure that the cable used is configured as shown in this pin connection chart:

Signal                 Abbreviation  Pin     Pin  Abbreviation  Signal
Data Carrier Detected  DCD           1       1    DCD           Data Carrier Detected
Receive Data           RD            2       2    RD            Receive Data
Transmit Data          TD            3       3    TD            Transmit Data
Data Terminal Ready    DTR           4       4    DTR           Data Terminal Ready
Signal Ground          SG            5       5    SG            Signal Ground
Data Set Ready         DSR           6       6    DSR           Data Set Ready
Request To Send        RTS           7       7    RTS           Request To Send
Clear To Send          CTS           8       8    CTS           Clear To Send
Ring Indicator         RI            9       9    RI            Ring Indicator

Interface configuration:
 115,200 bit/s
 8 data bits
 1 stop bit
 No parity
 No flow control
 VT100 emulation
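As an aside (not from the installation guide): with these settings, a Linux laptop with a USB-serial adapter can open the console using GNU screen. The device name /dev/ttyUSB0 is an assumption that depends on your adapter:

screen /dev/ttyUSB0 115200   # 8 data bits, no parity, 1 stop bit are screen's defaults

On Windows, a terminal emulator such as PuTTY configured with the same settings works equally well.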


Three Important Success Criteria

Private LAN (eth1):

Component               Example Name  Example IP      Your Name  Your IP
Subnet mask                           255.255.255.0
Admin EVS Node 1        Avn1          192.0.2.15
Admin EVS Node 2        Temp1         192.0.2.16
Cluster name            Mycluster
Node 1 (Clustername-1)  Mycluster-1   192.0.2.11
Node 2 (Clustername-2)  Mycluster-2   192.0.2.12
External SMU            Smu1          192.0.2.10

Public LAN SMU (eth0):

Component               Example Name  Example IP      Your Name  Your IP
Subnet mask                           255.255.0.0
External SMU            Smu1          10.123.789.10

1. Planning . . .
2. Planning . . .
3. Follow the PLAN!!

Collect all customer-related information and fill in this form as a minimum. Configure the nodes with this data even before connecting them to the network.
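Before touching the hardware, it is worth verifying that the planned addresses are consistent. A minimal sketch (added here, not from the guide) using Python's standard ipaddress module and the example private-LAN values above:

import ipaddress

# Planned private rack LAN (subnet mask 255.255.255.0) and the example
# addresses from the form above:
rack_lan = ipaddress.ip_network("192.0.2.0/24")
plan = {
    "External SMU":     "192.0.2.10",
    "Node 1":           "192.0.2.11",
    "Node 2":           "192.0.2.12",
    "Admin EVS Node 1": "192.0.2.15",
    "Admin EVS Node 2": "192.0.2.16",
}

addresses = [ipaddress.ip_address(a) for a in plan.values()]
assert len(set(addresses)) == len(addresses), "duplicate IP address in the plan"
for name, address in plan.items():
    assert ipaddress.ip_address(address) in rack_lan, name + " is off the rack LAN"
print("plan is consistent")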


Single HNAS 30x0 or 4xx0 with Embedded SMU

[Diagram: a single node with the embedded SMU. The public management network (eth0) carries SSH/GUI, NTP, SMTP and Hi-Track; the private network (eth1) connects storage, NTP, switches and Hi-Track.]

Two IP addresses are associated with the Admin Virtual Node (AVN):
• eth0 public address, equivalent to the SMU address on HNAS: 12.120.56.111
• eth1 private network address (optional): 192.0.2.15

Addresses are permanently assigned to the node, as there are no clustering considerations to worry about.

The IP address on the red (private) network is optional. If components on the private network need to be managed by the internal SMU, or the customer has services or Hi-Track on that network, an IP address for the Admin Virtual Node is required.


Initial Setup Single Node Embedded SMU

[Diagram: Node 1 with eth0 on the public network (Admin EVS IP 12.120.56.111, gateway) and eth1 on the private network; step 1 is CLI setup over the serial console.]


Default Interface Settings for 3080 and 3090

 Please check; these could change without warning.

Setting                              Default 1        Default 2
Root password                        nasadmin         nasadmin
Manager password                     nasadmin         nasadmin
Admin password                       nasadmin         nasadmin
Admin EVS public IP address (eth0)   192.168.31.xxx   192.168.4.xxx
Subnet mask                          255.255.255.0    255.255.255.0
Admin EVS private IP address (eth1)  192.0.2.2        192.0.2.2
Node private IP address (eth1)       192.0.2.200      192.0.2.200
Subnet mask                          255.255.255.0    255.255.255.0
Gateway                              192.168.31.254   192.168.4.1
Host name                            myhost           testhost
Domain                               mydomain.com     testdomain.com

The 4060, 4080, and 4100 models are not preconfigured with any default configuration; therefore the initial setup process is somewhat different from the 3080 and 3090 models.


Single Node Initial Setup: Models 3080 and 3090

1. Connect a serial null-modem cable
   a. 115,200bps, 8 bits/byte, 1 stop bit, no parity, no flow control
2. Log in as manager, password nasadmin; this brings up the BOS console
3. Use the evsipaddr command to list and update the public IP address
   a. Example list:
      evsipaddr -l
   b. Example update:
      evsipaddr -e 0 -u -i 12.120.56.111 -m 255.255.240.0 -p eth0
      or
      evsipaddr -e 0 -a -i 12.120.56.111 -m 255.255.240.0 -p eth0
      evsipaddr -e 0 -r -i xxx.xxx.xxx.xxx
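Putting these steps together, a typical console session looks like this (commands only, using the example address above; listing before and after the change is a simple way to confirm it took effect):

evsipaddr -l        # list the current addresses first
evsipaddr -e 0 -u -i 12.120.56.111 -m 255.255.240.0 -p eth0
evsipaddr -l        # confirm the change before moving on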


Node Initial Setup: Models 4060, 4080, and 4100

1. The 4xx0 nodes initially load only Linux and cannot continue on to load the BALI console.
2. The power status LED flashes.
3. This state is indicated by the missing /etc/opt/mfb.ini network setup file.
4. The advantage is being able to set up and control network settings using one easy script, with customer-specific settings.
5. IP address conflicts are avoided simply by following the plan.
6. See the next three pages for the initial setup flow.


Node Initial Setup Model 4xx0 1 of 3

mercury login: root
Password: ♦♦♦♦♦♦♦♦ (nasadmin)
Last login: Tue May 14 10:54:23 UTC 2013 on ttyS0
Linux mercury 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64

WARNING: root access should be used only under instruction from your support
team. Modifying system settings or installed packages could adversely affect
the server.

root@mercury(bash):~# nas-preconfig
This script configures the server's basic network settings (when such
settings have not been set before).

Please provide the server's:
- IP address
- netmask
- gateway
- domain name
- host name

After this phase of setup has completed, further configuration may be
carried out via web browser.

Please enter the Admin Service Private (eth1) IP address

Continue on next page > >


Node Initial Setup Model 4xx0 2 of 3

<<< continued
Please enter the Admin Service Private (eth1) IP address 192.0.2.45
Please enter the Admin Service Private (eth1) Netmask 255.255.255.0
Please enter the Optional Admin Service Public (eth0) IP address 12.120.56.111
Please enter the Admin Service Public (eth0) Netmask 255.255.240.0
Please enter the Optional Physical Node (eth1) IP address 192.0.2.41
Please enter the Physical Node (eth1) Netmask 255.255.255.0
Please enter the Gateway 12.120.56.254
Please enter the Domain name (without the host name) hds.com
Please enter the Hostname (without the domain name) Nas1

Continue on next page >>


Node Initial Setup Model 4xx0 3 of 3

<<< continued
Admin Public (eth0):  IP = 12.120.56.111 ; Netmask = 255.255.240.0
Admin Private (eth1): IP = 192.0.2.45 ; Netmask = 255.255.255.0
Physical Node (eth1): IP = 192.0.2.41 ; Netmask = 255.255.255.0
Gateway: 12.120.56.254
Domain: hds.com
Unit Hostname: nas1
Are the above settings correct? [y/n] y
Configuration written to /etc/opt/mfb.ini.
root@mercury(bash):~# reboot

Broadcast message from root@mercury (ttyS0) (Tue May 14 09:10:37 2013):
The system is going down for reboot NOW!
root@mercury(bash):~#


Single Node Initial Setup: License Keys

[Diagram: Node 1 with eth0 on the public network (Admin EVS IP 12.120.56.111, gateway) and eth1 on the private network; step 1 is CLI setup, step 2 adds the license keys (CIFS, NFS, …) through the GUI.]

The license keys for the single node are added.


Initial Node Setup: Hitachi NAS Platform GUI

 Add license keys as needed


Adding License Key


Initial Setup: Hitachi NAS Platform Node GUI  Finish the server setup by clicking Server Setup Wizard  The Server Setup Wizard is a sequence of tasks that can be done individually


Server Setup Wizard

 It is recommended to delete the test Storage Pools, EVSs and file systems after the implementation test


Single Node Initial Setup: File Service EVS

[Diagram: Node 1 with Admin EVS IP 12.120.56.111 on eth0 and a data EVS (EVS1, 213.1.15.22) serving the public data network; step 1 is CLI setup, step 2 adds the license keys (CIFS, NFS, …) through the GUI.]

After the node initialization process, administration can begin. Data EVSs can be created to offer file services to the clients connected via the data network. In the example above, only one EVS (EVS1) is created.


Hitachi NAS Platform Management Console

[Screenshot: the embedded SMU management console]

Pay attention to the lack of a scroll function: the embedded SMU GUI has no scroll function, and only one server can be managed.


Clustering from A to Z

[Diagram: a two-node cluster managed by an external SMU (public 12.120.56.222, private 192.0.2.40). The public management network carries SSH/GUI, NTP, SMTP and Hi-Track; the private network connects storage, NTP, switches and Hi-Track. The AVN has the private address 192.0.2.45 and the optional public address 12.120.56.111.]

One IP address is associated with the AVN:
• eth1 private network address 192.0.2.45

Addresses are permanent, as there are no clustering considerations to worry about. The AVN IP address on eth0 is optional.

The task list from the white paper "Clustering from A – Z":
1. Planning and System Assurance Document (SAD)
2. Initial setup of the external SMU
3. Initial setup of Node 1
   a. Initial setup 4060/4080/4100 Node 1
   b. or initial setup 3080/3090 Node 1
   c. or initial setup 3100/3200 Node 1
4. Initial setup of Node 2
   a. Initial setup 4060/4080/4100 Node 2
   b. or initial setup 3080/3090 Node 2
   c. or initial setup 3100/3200 Node 2
5. Cabling
   a. Cabling the private and public networks
   b. Cabling storage
   c. Cabling the cluster interconnect
   d. Cabling the customer data network


6. Manage Node 1
7. Add license bundles, TB and cluster key to Node 1
8. Promote Node 1 as a single-node cluster
9. Manage Node 2
10. Add the cluster license key to Node 2
11. Add Node 2 into the cluster


Initial Setup: First Node in a Cluster

[Diagram: Node 1 with eth1 Admin EVS IP 192.0.2.45 on the private network and eth0 on the public network (gateway); step 1 is CLI setup.]


Cluster Initial Setup: Model 30x0 CLI First Node

1. Connect a serial null-modem cable
   a. 115,200bps, 8 bits/byte, 1 stop bit, no parity, no flow control
2. Log in as manager, password nasadmin; this brings up the BOS console
3. Use the evsipaddr command to list and update the public IP address
   a. Example list:
      evsipaddr -l
   b. Example update:
      evsipaddr -e 0 -u -i 192.0.2.45 -m 255.255.255.0 -p eth1
      or
      evsipaddr -e 0 -a -i 192.0.2.45 -m 255.255.255.0 -p eth1
      evsipaddr -e 0 -r -i xxx.xxx.xxx.xxx

For HNAS 4060, 4080, and 4100 models, refer to the four pages starting with page 4-13.


Initial Setup: External SMU

[Diagram: the external SMU (public IP 12.120.56.222) and Node 1 (eth1 Admin EVS IP 192.0.2.45) on the private network; step 1 is CLI setup of the node, step 2 is CLI setup of the SMU.]

Among other parameters, this part of the initial setup defines the SMU's public network access. The customer needs to supply the address on the public customer network that is intended to be used as the management interface address.


Initial Setup: External SMU CLI

1. Connect a serial null-modem cable
   • 115,200bps, 8 bits/byte, 1 stop bit, no parity, no flow control
2. Log in as root
   • The default root password is nasadmin
   • Run smu-unconfig to revert to factory defaults
   • The SMU reboots after the smu-unconfig process is complete
3. Run smu-config
   • Log in as root (password nasadmin) and run smu-config
   • Follow the CLI-based setup wizard to supply the SMU network configuration
   • The SMU reboots after the process is complete
4. Next step:
   • Finish the SMU setup using the SMU Wizard GUI

The serial cable is only intended to be used for the initial installation process. It is strongly recommended to remove the serial cable after installation to avoid any performance impact on the management plane.
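Condensed, the console session from the steps above looks like this (commands only):

# over the serial console, logged in as root (default password nasadmin):
smu-unconfig    # revert to factory defaults; the SMU reboots
# log back in as root, then:
smu-config      # CLI wizard prompts for the network settings; the SMU reboots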


Initial Setup: SMU Wizard

[Diagram: the external SMU with private IP 192.0.2.40 and public IP 12.120.56.222, and Node 1 with Admin EVS IP 192.0.2.45; step 1 is CLI setup of the node, step 2 CLI setup of the SMU, step 3 the SMU Wizard GUI.]

During the SMU Wizard process, the private LAN address is assigned. It is recommended to use the default address range on the "rack network": 192.0.2.x. Escalation and dump analysis are easier when the addressing follows the default addresses and ranges for the private network. The passwords, DNS and domain, SMTP host, time zone, and public NTP host access are defined during this process as well.


Initial Setup: SMU GUI

 Point your browser to the public IP of the SMU
 Click SMU Setup Wizard and complete the SMU configuration
• The SMU application will restart upon completion


Initial Setup: Managed Servers

[Diagram: as before, with step 4 added — registering the node as a managed server in the SMU GUI.]

In the Managed Servers GUI, specify the IP address of the Admin EVS and the user ID/password for the node. This allows the SMU to make a connection to the Admin EVS.


Initial Setup: Hitachi NAS Platform Node GUI

 Log in to the SMU and click Managed Servers
• Click Add and follow the prompts to specify a new managed Hitachi NAS server (Admin EVS)
• Once specified, the managed server will appear in the Server Status Console


Cluster Initial Setup: License Keys

[Diagram: as before, with step 5 added — adding the license keys (CIFS, NFS, …, Cluster:1) through the GUI.]

In step 5, the license keys for the primary node are added.


Initial Setup: Hitachi NAS Platform Licenses

 Add license key as needed


Adding License Key


Cluster Initial Setup: Enable Clustering

[Diagram: as before, with step 6 added — the Cluster Wizard GUI; Node 1 now has the private IP address 192.0.2.41 in addition to the Admin EVS IP 192.0.2.45.]

The Cluster Wizard defines the physical IP address of the primary node, which is used for heartbeat and cluster interconnect addressing by the cluster software. It also promotes the node to an Active-Active cluster and assigns a cluster name. This process ends with a restart of the primary node.


Initial Setup: Promote Clustering

1. Go to the SMU
2. Under Server Settings, click Cluster Wizard
3. Enter the cluster name and node IP address
   a. Refer to the Lab Configuration Sheet

The quorum device would normally be the SMU that manages the node, but it can actually be any SMU containing quorums and addressable on the private rack network. As an example, the MetroCluster solution recommends placing the quorum in a different location than the primary SMU or standby SMU. With this flexibility, the quorum can be located in a third location, separate from the primary/secondary sites.


Promoted to a Single-Node Cluster

 Ready to add more nodes to the cluster


HNAS Clustered with External SMU

[Diagram: a two-node cluster with the external SMU (public 12.120.56.222, private 192.0.2.40); each node has its own private node address (192.0.2.41 and 192.0.2.42), and the AVN (private 192.0.2.45, optional public 12.120.56.111) can migrate between the nodes.]

One IP address is associated with the AVN:
• eth1 private network address 192.0.2.45
• transitory, as the AVN can migrate

One IP address per node is associated with the node itself:
• eth1 private network addresses 192.0.2.41 and 192.0.2.42
• permanently configured, so a node can be contacted even if BALI doesn't come up

The AVN IP address on eth0 is optional.

The IP address on the blue (public) network is optional. If the customer has services or Hi-Track on that network, an IP address for the Admin Virtual Node is required on that network as well.


Cluster Initial Setup: Second Node

[Diagram: as before, with step 7 added — CLI setup of Node 2 (eth0 on the public network, eth1 Admin EVS IP 192.0.2.46 on the private network).]


Cluster Initial Setup: Models 30x0 CLI Second Node

1. Connect a serial null-modem cable
   a. 115,200bps, 8 bits/byte, 1 stop bit, no parity, no flow control
2. Log in as manager, password nasadmin; this brings up the BOS console
3. Use the evsipaddr command to list and update the public IP address
   a. Example list:
      evsipaddr -l
   b. Example update:
      evsipaddr -e 0 -u -i 192.0.2.46 -m 255.255.255.0 -p eth1
      or
      evsipaddr -e 0 -a -i 192.0.2.46 -m 255.255.255.0 -p eth1
      evsipaddr -e 0 -r -i xxx.xxx.xxx.xxx

For HNAS 4060, 4080, and 4100 models, refer to the four pages starting with page 4-13.


Initial Setup: Flow and IP Addressing

[Diagram: as before, with step 8 added — registering Node 2 (Admin EVS IP 192.0.2.46) as a managed server in the SMU GUI.]

If a single Node 2 is going to join an existing cluster, that node also needs to be managed by the SMU in order to install the clustering license key.


Initial Setup: Hitachi NAS Platform Node GUI

 Log in to the SMU and click Managed Servers
• Click Add and follow the prompts to specify the new managed NAS server (Admin EVS)
• Once specified, the managed server will appear in the Server Status Console

Select the Managed Server for Node 1, then add Node 2 to the list of managed servers by specifying the IP address of the Admin EVS of Node 2.


Initial Setup: License Key

[Diagram: as before, with step 9 added — adding the cluster license key (Cluster:1) to Node 2 through the GUI.]

Select the Admin EVS of Node 2 and choose Server Settings to add the license key to Node 2.


Adding License Key

Only one license is added to Node 2 (the cluster license: "MAX 1 Nodes"), since the protocols are already licensed on the primary Node 1.


Initial Setup: Join the Second Node

[Diagram: as before, with step 10 added — Cluster Configuration in the GUI. Node 2 (private IP 192.0.2.42, Admin EVS IP 192.0.2.46) joins, and the cluster license on Node 1 becomes Cluster:1+1=2.]

Initial Setup: Add Single Node 2 to Clustered Node 1

1. Go to the SMU.
2. In the Server Status Console window, scroll down and select Node 1 (Admin EVS).
3. Under Server Settings, click Cluster Configuration and select Add Cluster Node.
   a. Enter the IP address for Node 2.

Two Node Cluster Configured

• After the automatic reboot of Node 2, the node will join the cluster defined on Node 1.

Initial Setup: File Service EVS

[Diagram: the completed two-node flow. Both nodes are licensed (Cluster:1+1=2) and Node 2 carries a data EVS (EVS1, 213.1.15.22) on the public network.]

After the two-node cluster has been initialized and established, administration can begin. Data EVSs can be created on either node so that file services can be offered to clients connected via the data network. In the example above, only one EVS (EVS1) is created, on Node 2.

Module Summary

In this module, you have learned to:
• Identify the physical environment and specifications
• Understand the login accounts used on the different networks
• Recognize the individual steps in the hardware and software installation flow for the Hitachi NAS Platform as a single node
• List the individual steps in the hardware installation flow for the external SMU installation and setup procedures
• Perform the procedure to join an additional Hitachi NAS Platform node into the N-way cluster

Module Review

1. Which brand of rail kit is mandatory?
2. License keys are electronically stored in which component?
3. Which components are not fitted into a node you receive from the distribution center?
4. List the 3 criteria for a successful installation.
5. Is the external SMU initial setup done by CLI or GUI?
6. Is the node initial setup done by CLI or GUI?
7. How is initial setup initiated on HNAS 4xx0 models?

5. Ethernet and Fibre Channel Networks

Module Objectives

Upon completion of this module, you should be able to:
• List the Gigabit Ethernet (1GbE and 10GbE) network maximum cable lengths
• Explain the private and public network configuration scenarios for both platforms
• Differentiate between the private Rack LAN and the public User Data LAN
• Examine “the good, the bad and the ugly” back-end SAN configurations

GbE Cable Distances

Protocol      Media                   Wavelength   Core/Cable    Distance
1000Base-SX   Multimode fibre         850nm        62.5 micron   250m
1000Base-SX   Multimode fibre         850nm        50 micron     550m
1000Base-LX   Singlemode fibre        1300nm       9 micron      5km
1000Base-CX   Twinax, 2 pair, DB9     -            -             25m
1000Base-TX   UTP, 4 pair, RJ45       -            CAT5/CAT5E    100m
1000Base-ZX   Singlemode fibre        1550nm       9 micron      70km

1000Base-SX and 1000Base-LX: models 2000, 3100 and 3200 only. 1000Base-ZX (not an IEEE standard): models 2000, 3100, 3200, 3080, and 3090.

HNAS 30x0 Cluster 10GbE Interface (XFI)

10Gb protocol:
XFP   300m   LC multimode OM3
XFP   10km   LC singlemode
XFP   40km   LC singlemode

Check the current specification documents; Cluster Interconnect and Data ports may differ.

“X” = 10. XFI is the standard interface for connecting 10 Gigabit Ethernet MAC devices to an XFP interface. As of mid-2006, most 10GbE products used the XAUI interface, which has four lanes running at 3.125Gbit/s with 8B/10B encoding. XFI provides a single lane running at 10.3125Gbit/s with a 64B/66B encoding scheme. The XFP (10 Gigabit Small Form Factor Pluggable) used in models 3080 and 3090 is a hot-swappable, protocol-independent optical transceiver. It typically operates at 850nm, 1310nm, or 1550nm for 10Gb/s SONET/SDH, Fibre Channel, Gigabit Ethernet and other applications, including DWDM links.

Finisar Small Form Factor (SFP+)

10Gb Ethernet protocol:
SFP+   300m   LC multimode OM3
SFP+   10km   LC singlemode

8Gb Fibre Channel protocol:
SFP+   300m   LC multimode OM3

Check the current specification documents; Cluster Interconnect and Data ports may differ.

HNAS Models 4xx0 Use SFP+

• The 10GbE and 8Gbps SFP+ transceivers are not interchangeable.
• 2 x SFP+ 10GbE cluster ports
• 4 x SFP+ 10GbE network ports
  ▪ FTLX8571D3BCV; X = 10 (10GbE)
• 4 x SFP+ 8Gbps FC storage ports
  ▪ FTLF8528P3BNV; F = FC (Fibre Channel)

Cable Distance and Optical Media Type

Distances per application and fibre type. Multimode cells show 850nm / 1300nm; singlemode cells show 1310nm / 1550nm.

Application               OM1 (MMF) 62.5µ   OM2 (MMF) 50µ   OM3 (MMF) 50µ   OM4 (MMF) 50µ   OS1 (SMF) 9µ
ATM 622 Mbps              300m / 500m       300m / 500m     300m / 500m     300m / 500m     2000m / -
Fibre Channel 1062 Mbps   300m / -          500m / -        500m / -        500m / -        2000m / -
FDDI                      - / 2000m         - / 2000m       - / 2000m       - / 2000m       - / -
100Base-FX Ethernet       - / 2000m         - / 2000m       - / 2000m       - / 2000m       - / -
1000Base-SX Ethernet      275m / -          550m / -        550m / -        550m / -        - / -
1000Base-LX Ethernet      - / 550m          - / >550m       - / >550m       - / >550m       5000m / -
10GBase-LX4 Ethernet      - / 300m          - / 300m        - / 300m        - / -           10km / -
10GBase-SR/SW Ethernet    33m / -           82m / -         300m / -        400m / -        10km / 40km

HNAS 4xx0 SFP+ Copper TwinAx Cable Assembly

• Price band $75 to $100
• Only available for the 4xx0 models

Brand   Length (m)   Part number
Cisco   1            SFP-H10GB-CU1M
Cisco   3            SFP-H10GB-CU3M
Cisco   5            SFP-H10GB-CU5M
Molex   7            747524701

Cable Distance and Copper Media Type

Protocol      Connector (Media)         Cable               Power    Distance
1000Base-TX   RJ45, 4-pair UTP copper   CAT5 or CAT5E       -        100m
10GBase-T     RJ45, 10GBase-T copper    Cat6, Cat6A, Cat7   4-6W     100m
10GBase-CX1   SFP+ CU copper            Twinax              1-1.5W   10m

NAS Platform Models 3080 and 3090 Networks

[Block diagram: the HNAS chassis holds the Mercury FPGA board (MFB) with the SiliconFS file system metadata (WFS), data movement (TFL), network interface (NI) and disk interface (DI) sections, 3GB memory per section pair, 10GB cache, 2GB NVRAM and 4GB sector cache, plus the Mercury motherboard (MMB) with 8GB memory, the BALI board interface (MBI) and the embedded SMU (Intel Core 2 Duo E8400 3.0GHz). External ports: 2 x 10GbE cluster interconnect, 2 x 10GbE and 6 x GbE data ports, FC storage ports, and GbE management ports eth0 and eth1, joined internally by Fastpath links.]

NAS Platform Models 4060, 4080, and 4100 Networks

[Block diagram: the HNAS chassis holds the main FPGA board (MFB2) with WFS, TFL, NI and PDI sections, 4GB memory per section pair, 10GB cache, 4GB NVRAM and 4GB sector cache, plus the main motherboard (MMB) with 16GB memory, an Intel Xeon quad-core CPU, the BALI board interface (MBI) and the SMU. External ports: 2 x 10GbE cluster interconnect, 4 x 10GbE data ports, 4 x 8Gb FC storage ports, and GbE management ports eth0 and eth1, joined internally by Fastpath links.]

Hitachi NAS 30x0 Network and Embedded SMU

[Diagram: the external IP data network connection, the customer-facing external IP management connection, and the private management network on the internal switch.]

Hitachi NAS 4xx0 Network and External SMU

[Diagram: the external IP data network connection, the customer-facing external IP management connection to the external SMU, and the private (internal) management network.]

Hitachi NAS 4xx0 Network and Clustering

[Diagram: as on the previous slide, with cluster interconnection links between the nodes in addition to the data network, the customer-facing management connection and the private (internal) management network.]

Private and Public Management Network Embedded SMU 30x0

[Diagram: the Hitachi NAS Platform attaches to the public data network, the public management network, and the private management network.]

Private and Public Management Network External SMU 30x0 Cluster

[Diagram: a 30x0 cluster whose external SMU sits on both the private management network and the public management network, alongside the public data network.]

Private and Public Management Network with SMU Managed Legacy Storage

[Diagram: the Hitachi NAS Platform and legacy storage on the data network; the storage is reachable over either the private or the public management network.]

Legacy NetApp/LSI storage (also known as BlueArc storage) can be managed by either the embedded or the external SMU. The management interface can be either the private (red) or the customer-facing (blue) management network. In this scenario, using the embedded SMU and a single-node configuration, the 5-port switch on the 30x0 can be used as a private management switch.

EVS Connectivity in a Cluster

[Diagram: a two-node cluster with aggregations ag1 and ag2 on each node. Data EVSs: EVS1-1 192.168.3.81, EVS1-2 10.16.16.42, EVS1-3 10.16.16.34. The admin EVS (EVS0, 192.168.3.39 / 192.0.2.15) sits on the private management network with node addresses 192.0.2.11 and 192.0.2.12; public management access is via 172.145.2.14.]

IP Addressing and EVS

[Diagram: address assignment for the management and public (data) LANs, showing the private management addresses, the admin EVS, and EVSs on Node 1 and Node 2.]

The diagram displays the address assignment for the management and public (data) LANs. Pay attention to the address 192.0.2.25, which is an EVS, but only for administration purposes. That address can reside on either physical Node 1 or Node 2, like the other EVS addresses. The addresses 192.0.2.21 and 192.0.2.22 are tightly coupled to the physical nodes and cannot move. The addresses 10.67.64.15 and 10.67.68.169 are the public addresses of the internal admin EVS.

Aggregation Configuration Screen Models 30x0

• ge1 to ge6 = 1GbE; tg1 to tg2 = 10GbE
• File serving (data network) ports can be aggregated:
  ▪ Up to 8 aggregations
  ▪ 1G and 10G ports cannot be mixed in an aggregate
  ▪ Direct traffic to specific ports by giving aggregations the appropriate IP addresses

An aggregation is constructed by ticking the physical ge or tg interface numbers that should belong to that aggregate. LACP is selected per aggregate; if it is not selected, the aggregate is set to static instead. The Hitachi NAS Platform Round Robin algorithm is not recommended due to the high risk of out-of-order frame delivery.

Aggregation Configuration Screen Models 4xx0

• tg1 to tg4 = 10GbE

LACP Protocol Usage

[Diagram: ag1 = tg2 + tg4 gives switch port redundancy; ag2 = tg1 + tg2 + ge3 + ge4 gives port and switch redundancy.]

LACP is a negotiated protocol that uses “Actor” and “Partner” link entities; the Partner takes cues from the Actor when the Actor decides to bring a link up. In a single-switch configuration, there is no functional difference between LACP and static aggregation; with static aggregation, both parties bring up the previously defined aggregation link members unconditionally. Where LACP can be utilized to its fullest is in a link/switch failover situation. In this scenario, one would create a single aggregation on the HNAS server side and split it between two switches (for example, 4 links to one switch and 2 links to the other). Since the Actor can only bring up a logical link (which can comprise a number of physical links) with one Partner, only one switch will be active at a time. In a 4+2 scenario, the switch with more links will be favored. In a symmetrical split (for example, 3+3), either switch can be chosen as the LACP Partner. A static aggregation link cannot be split between switches.

NTP and Management Network

[Diagram: an NTP server on the public data network provides time to the Hitachi NAS Platform; the SMU synchronizes time with the nodes across the public and private management networks.]

Fibre Channel Connectivity

Storage Considerations: Platform Differences

• For proper configuration of a Hitachi NAS 3100 or 3200 node cluster, the FC host port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same FC port.
• For proper configuration of a Hitachi NAS 30x0 or 4xx0 node cluster, the FC host port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same FC controller/cluster.
• See the notes and following slides for details.

The AMS family and HUS 100 have the concept of controllers: controller 0 and controller 1. The enterprise family up to VSP does not have controllers, so the first digit of the port ID is used as a virtual controller number: channel port 1A belongs to virtual controller 1, 3A to controller 3, and channel port 8C to controller 8. HUS VM does not have controllers either, but the cluster ID is interpreted from the SCSI inquiry command output. Therefore, channel ports 1A and 3A belong to cluster 1, and channel port 8C belongs to cluster 2. The examples on the following pages use the scsi-racks command to create the output.

AMS200, 500, 1000, 2000 and HUS

[Screenshot: scsi-racks example output for modular storage, showing LUN paths per controller 0 and controller 1.]

Enterprise Including VSP (Not HUS VM)

[Screenshot: scsi-racks example output; the callouts mark ports 7A and 8A, i.e. virtual controllers 7 and 8.]

Hitachi Unified Storage VM

[Screenshot: scsi-racks example output; ports resolve to cluster 1 or cluster 2.]

Fibre Channel Minimum Configuration for 2-Node 2200 Cluster

[Diagram: each node's FC ports 1 and 3 run through Fabric 1 (switch ports 1, 2, 3) and Fabric 2 (switch ports 6, 8, 9) to an Adaptable Modular Storage system, Ctl-0 ports 0A/0B and Ctl-1 ports 1A/1B (port WWNs 50060E800043050 through 50060E800043053). Owning controllers: LUN 0 and LUN 2 on Ctl-0, LUN 1 and LUN 3 on Ctl-1. Path priority to LUNs on the AMS is set using odd or even paths.]

This is the minimum configuration with a two-way clustered connection. The numbers in the fabric indicate the port numbers on the FC switch.

Preferred path configuration:
• System Drives 0, 2, 4, ...: Node 1 port 1 to interface 0A (LUNs 0, 2, 4, ...)
• System Drives 1, 3, 5, ...: Node 1 port 3 to interface 1A (LUNs 1, 3, 5, ...)
• System Drives 0, 2, 4, ...: Node 2 port 1 to interface 0A (LUNs 0, 2, 4, ...)
• System Drives 1, 3, 5, ...: Node 2 port 3 to interface 1A (LUNs 1, 3, 5, ...)

Zoning Fabric 1: Zone 1: ports 1 and 2; Zone 2: ports 3 and 2.
Zoning Fabric 2: Zone 1: ports 6 and 9; Zone 2: ports 8 and 9.

Fibre Channel Configuration for 2-Node 3100 Cluster and Enterprise Storage

[Diagram: each node's four FC ports connect through Fabric 1 and Fabric 2 to Virtual Storage Platform ports 3A, 4A, 5B and 6B. Map all LUNs to all ports on the Universal Storage Platform; preferred path can still be used from the NAS node to fine-tune performance.]

A key difference between chip-to-chip failover and node-to-node failover is that during a chip-to-chip failover, the EVS stays up on the node, so the interaction with clients differs as follows:

Chip-to-chip behavior: One major benefit of chip-to-chip failover is that unaffected file systems continue to serve data without interruption. During System Drive failover to the other chip, the EVS continues to interact with connected clients. Clients attempting access to a file system that uses a System Drive moving to the other chip will receive I/O errors. Once the failover is complete, the I/O errors stop and normal service continues.

Node-to-node behavior: For a node-to-node failover, the EVS (and all file systems) completely disappears for some period of time. No responses (I/O errors) are returned. The EVS reappears on the other node and normal service can continue.

Thus, in a chip-to-chip failover the clients maintain connectivity and get I/O errors, while in a node-to-node failover the clients lose connectivity and might not receive errors. Clients should be prepared for both possibilities. The best way to maintain optimum connectivity and availability while minimizing potential system impact is to properly configure the system to avoid chip-to-chip failover unless there is a specific combination of multiple failures (paths for LUNs must have failed to a particular chip but remain available to the other chip).

High-performance NAS Platform 3200 Connectivity

[Diagram: each node's eight FC ports (two ASICs) connect through Fabric 1 and Fabric 2 to Virtual Storage Platform ports 3A, 4A, 5B and 6B. Map all LUNs to all ports on the Universal Storage Platform; preferred path can still be used from the High-performance NAS to fine-tune performance.]

If connectivity to a System Drive is unavailable, the High-performance NAS Platform node re-establishes connectivity in the following order:
1. Move connectivity to that System Drive to another port (any of the other three) on the same Tachyon chip.
2. Move connectivity to that System Drive to the other Tachyon chip (if available).
3. If connectivity is lost for that System Drive via both Tachyon chips on that node, one of the following three things happens:
   a) Maintain EVS connectivity, but fail only the affected file systems. If the EVS contains several file systems, and at least one file system (that is, all System Drives associated with that file system) can still be accessed through the primary High-performance NAS Platform server, the EVS stays on the primary system. Access continues to the good file systems, and only the file systems without connectivity fail. This maintains uninterrupted access to the good file systems.
   b) Fail over the EVS to the alternate node. If all file systems within the EVS fail (lost connectivity), the primary node checks with the other nodes to determine whether any node has connectivity. If another node has connectivity, the EVS fails over to that node.
   c) Fail the EVS. If all file systems within the EVS fail (lost connectivity) and no other node has connectivity, the EVS stays on the original node, but the file systems fail.

Note that for a single-node system, failover to other nodes in b) above is not an option.

Fibre Channel Switchless Configuration for 2-Node 3100 or 3200 Cluster

[Diagram: direct connections from both nodes' FC ports to HUS ports 0E, 0F, 1E and 1F, marked “Not supported”. LUN 0 and LUN 2 are owned by Ctl-0, LUN 1 and LUN 3 by Ctl-1.]

Direct Attached Storage (DAS) is not supported for Hitachi High-performance NAS models 2100, 2200, 3100, and 3200. It is only supported on Hitachi NAS Platform models 30x0 and 4xx0.

Fibre Channel Switchless Configuration for Single 3100 or 3200 Node

[Diagram: a single node's four FC ports connect directly to HUS ports 0A, 0C, 1A and 1C. LUN 0 and LUN 2 are owned by Ctl-0, LUN 1 and LUN 3 by Ctl-1.]

DAS is supported on Hitachi High-performance NAS nodes in a single-node configuration. In a single-node configuration, there is no issue with seeing the same storage image on all nodes.

Fibre Channel Fabric Configuration for 2-Node Cluster HNAS 4xx0

[Diagram: each node's host ports 1 and 3 run through Fabric 1 and Fabric 2 to HUS ports 0E, 0F, 1E and 1F. LUN 0 and LUN 2 are owned by Ctl-0, LUN 1 and LUN 3 by Ctl-1.]

This is the recommended configuration with a two-way clustered connection. The numbers in the fabric indicate the port numbers on the FC switch.

Preferred path configuration:
• System Drives 0, 4, 8, ...: Node 1 port 1 to interface 0E (LUNs 0, 4, 8, ...)
• System Drives 1, 5, 9, ...: Node 1 port 3 to interface 1E (LUNs 1, 5, 9, ...)
• System Drives 2, 6, A, ...: Node 1 port 3 to interface 0F (LUNs 2, 6, A, ...)
• System Drives 3, 7, B, ...: Node 1 port 1 to interface 1F (LUNs 3, 7, B, ...)
• System Drives 0, 4, 8, ...: Node 2 port 1 to interface 0E (LUNs 0, 4, 8, ...)
• System Drives 1, 5, 9, ...: Node 2 port 3 to interface 1E (LUNs 1, 5, 9, ...)
• System Drives 2, 6, A, ...: Node 2 port 3 to interface 0F (LUNs 2, 6, A, ...)
• System Drives 3, 7, B, ...: Node 2 port 1 to interface 1F (LUNs 3, 7, B, ...)

Zoning Fabric 1: Zone 1: ports 1, 2 and 4; Zone 2: ports 3, 2 and 4.
Zoning Fabric 2: Zone 1: ports 6, 7 and 9; Zone 2: ports 8, 7 and 9.


Fibre Channel Best Practice Configuration for 2-Node Cluster Using Secure Storage Domains

[Diagram: each node's host ports 1 and 3 run through Fabric 1 and Fabric 2 to HUS VM ports 3A, 4A, 5B and 6B, with two secure storage domains splitting LUNs 0 to 31 and LUNs 32 to 63.]

Preferred path and two secure storage domains can be used to fine-tune performance. Although more than 2 paths per LUN is supported, engineering recommends only 2 paths per LUN.

The HNAS development team recommends keeping the number of paths as low as possible, which means two paths per LUN. To satisfy this recommendation and keep the default setting, the secure storage domains could be arranged as above.

Fibre Channel Recommended Configuration for 2-Node Cluster Enterprise 4xx0

[Diagram: each node's host ports 1 and 3 run through Fabric 1 and Fabric 2 to HUS VM ports 3A, 4A, 1B and 2C. Map all LUNs to all ports on the Hitachi Unified Storage VM; preferred path can still be used from the NAS node to fine-tune performance.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller port or controller.

Try to follow one Hport at a time on each node:
• Hport 1 on Node 1 sees LUN 0 over 3A and 4A, which are virtual controllers 3 and 4.
• Hport 1 on Node 2 sees LUN 0 over 3A and 4A, which are virtual controllers 3 and 4.
Then try the next Hport on each node:
• Hport 3 on Node 1 sees LUN 0 over 1B and 2C, which are virtual controllers 1 and 2.
• Hport 3 on Node 2 sees LUN 0 over 1B and 2C, which are virtual controllers 1 and 2.

Fibre Channel Configuration for 2-Node Cluster Enterprise 4xx0 (BAD example)

[Diagram: the same topology, but cabled so that each node sees LUN 0 over different HUS VM ports; marked BAD.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller port or controller.

Try to follow one Hport at a time on each node:
• Hport 1 on Node 1 sees LUN 0 over 3A, which is virtual controller 3.
• Hport 1 on Node 2 sees LUN 0 over 4A, which is virtual controller 4.
Then try the next Hport on each node:
• Hport 3 on Node 1 sees LUN 0 over 1B, which is virtual controller 1.
• Hport 3 on Node 2 sees LUN 0 over 2C, which is virtual controller 2.

The requirement of seeing the storage in the same way from both nodes is not fulfilled, and EVS migration will be affected.

Fibre Channel Switch-less Configuration for 2-Node Cluster Modular 30x0

[Diagram: each node's host ports 1 and 3 connect directly to HUS ports 0A, 0C, 1A and 1C. LUN 0 and LUN 2 are owned by Ctl-0, LUN 1 and LUN 3 by Ctl-1.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller.

Try to follow one Hport at a time on each node:
• Hport 1 on Node 1 sees LUN 0 over 0A, which is controller 0.
• Hport 1 on Node 2 sees LUN 0 over 0C, which is controller 0.
Then try the next Hport on each node:
• Hport 3 on Node 1 sees LUN 0 over 1A, which is controller 1.
• Hport 3 on Node 2 sees LUN 0 over 1C, which is controller 1.

• Host ports must be in loop mode: fc-link-type -t nl (see the sketch below)
• Each RAID controller must be connected to both HNAS servers.
• Have at most 2 storage arrays in a switchless cluster.
• Preferred paths (if any) should be set using the host port only.
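A minimal sketch of the loop-mode setting above, assuming it is applied from each node's CLI; verify the exact syntax with man fc-link-type on your firmware release:

    fc-link-type
        # assumption: run with no arguments, shows the current link type
    fc-link-type -t nl
        # set the FC host ports to NL (loop) mode for direct attach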


Fibre Channel Switch-less Configuration for 2-Node Cluster Modular 30x0 (BAD example)

[Diagram: the same direct-attached topology, but cabled so that the nodes see LUN 0 over different controllers.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller.

Try to follow one Hport at a time on each node:
• Hport 1 on Node 1 sees LUN 0 over 1A, which is controller 1.
• Hport 1 on Node 2 sees LUN 0 over 0C, which is controller 0.
Then try the next Hport on each node:
• Hport 3 on Node 1 sees LUN 0 over 0A, which is controller 0.
• Hport 3 on Node 2 sees LUN 0 over 1C, which is controller 1.

The requirement of seeing the storage in the same way from both nodes is not fulfilled, and EVS migration will be affected.

Fibre Channel Switch-less Configuration for 2-Node Cluster Enterprise 4xx0

[Diagram: direct connections from the nodes' host ports to Virtual Storage Platform ports 1A, 1B, 6A and 6B (port numbers are just examples). Map all LUNs to all ports; preferred path can still be used from the NAS node to fine-tune performance.]

• High-performance NAS Platform models 3100 and 3200 do not support Direct Attached Storage (DAS) in a two-node cluster configuration, only as a single node.
• Hitachi NAS Platform models 30x0 and 4xx0 support single-node and two-node clustered configurations using DAS connectivity.
• A maximum of two storage systems can be connected using DAS as back-end connectivity.
• You can connect storage using direct FC connections or an FC switch; however, do not use both connection types in the same system configuration.
• The Hitachi NAS Platform in a switch-less configuration using the Hitachi Enterprise Storage Systems (9900V, USP, and USP-V) introduces cabling restrictions when connected using direct FC connections.
• The Hitachi NAS Platform treats the first character of the port number as a virtual controller, with a limit of two controllers maximum.
• Therefore, connections need to be grouped into only two controller groups, and each controller group must be visible from both nodes and switches. (For a direct-connect example: connect Node 1 to ports 1A and 6A, and Node 2 to ports 1B and 6B.)

Fibre Channel Switch-less 2-Node Cluster Configuration 30x0 and NetApp 2680

[Diagram: each node's host ports 1 and 3 connect directly to RS12C/RS24C controller A and controller B, FC ports 3 and 4. LUNs 0 to 3 are presented from both controllers.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller.

Try to follow one Hport at a time on each node:
• Hport 1 on Node 1 sees LUN 0 over FC port 3, which is controller A.
• Hport 1 on Node 2 sees LUN 0 over FC port 4, which is controller A.
Then try the next Hport on each node:
• Hport 3 on Node 1 sees LUN 0 over FC port 3, which is controller B.
• Hport 3 on Node 2 sees LUN 0 over FC port 4, which is controller B.

Fibre Channel Switch-less Configuration for 2-Node Cluster 4xx0 Enterprise (BAD example)

[Diagram: direct connections to Virtual Storage Platform ports 5D, 7B, 6D and 8B (port numbers are just examples), with all LUNs mapped to all ports; marked BAD.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller.

Try to follow one Hport at a time on each node:
• Hport 1 on Node 1 sees LUN 0 over 5D, which is virtual controller 5.
• Hport 1 on Node 2 sees LUN 0 over 7B, which is virtual controller 7.
Then try the next Hport on each node:
• Hport 3 on Node 1 sees LUN 0 over 6D, which is virtual controller 6.
• Hport 3 on Node 2 sees LUN 0 over 8B, which is virtual controller 8.

The requirement of seeing the storage in the same way from both nodes is not fulfilled, and EVS migration will be affected.

Most Important SCSI Command: Node 1

• To get a good view from the node's point of view, use: scsi-racks
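As a usage sketch (the actual listing is a screenshot in the original, and its layout varies by release), run the command on each node and compare the views:

    scsi-racks
        # run on Node 1, then again on Node 2

Both nodes should report the same LUNs behind the same controllers; any asymmetry points to the cabling or zoning problems illustrated in the earlier BAD examples.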


Most Important SCSI Command: Node 2

• To get a good view from the node's point of view, use: scsi-racks

Problem Determination Example 1

[Screenshot: a scsi-racks problem determination example.]

Problem Determination Example 2

[Screenshot: a second scsi-racks problem determination example.]

Storage Considerations

• The Adaptable Modular Storage system has the concept of “path priority,” while the Hitachi NAS Platform node uses a concept of “preferred path.”
• For the AMS models before the 2000 series, it is critical to align “path priority” and “preferred path.”
• The HUS 100 family is compliant with the SCSI SPC-3 Asymmetric Logical Unit Access (ALUA) controller model.
• With HUS 100, HNAS nodes can use ALUA to balance the preferred access schema across the System Drives (SDs) in use.
• The Hitachi NAS Platform node can support multiple paths to the same LUN but does not perform dynamic load balancing across the paths.

Storage Enhancements for HNAS

• AMS2000 code 08B7/D introduced a new feature that allows the HNAS to identify the preferred path to an SD.
• From that version on, the sdpath command should not be used for manual path balancing.
• HUS100 code 0935/A introduced a new host group option, called “HNAS Option Mode.”
• This option enables HNAS to interrogate more detailed storage SCSI information.
• From HNAS version 11.2.33xx.xx (Angel-3), this information enables automatic expansion of the queue depth, triggered by the “Command Queue Expansion Mode” port option on the HUS100.

HUS 100 Options and HNAS

[Screenshots: the HUS 100 Host Group Options and Port Options screens.]

Module Summary

In this module, you have learned to:
• List the Gigabit Ethernet (1GbE and 10GbE) network maximum cable lengths
• Explain the private and public network configuration scenarios for both platforms
• Differentiate between the private Rack LAN and the public User Data LAN
• Examine “the good, the bad and the ugly” back-end SAN configurations

Module Review

1. What media type is supported, and for which interfaces?
2. Indicate the approximate maximum distance for: 10GbE multimode? ___ 1GbE UTP? ___ 10GbE UTP? ___
3. Can the SMU be both an NTP client and an NTP server?
4. How many initiators can be enabled on a node to get access over the SAN to the storage targets?
5. Can the customer data LAN and the private management LAN physically be the same?
6. Which components can be managed through the private management network? And how is this accomplished?
7. How many paths are needed from the node to storage?
8. Is multipathing, including load balancing, supported?

6. File System and Access Protocols

Module Objectives

Upon completion of this module, you should be able to:
• Explain the storage pools of the Hitachi NAS Platform
• Describe the file system structure
• List the storage pool and file system specifications
• Identify the benefits of Tiered File Systems (TFS)
• Describe the access protocols used by Microsoft® Windows® and UNIX
• Identify the implementation differences for the access protocols

From Disk Drive to HNAS Virtualized Storage

[Diagram: disk drives on the storage system form RAID groups (RG); LDEVs carved from the RAID groups are presented as LUNs over the SAN and appear on the HNAS as System Drives SD 0 to SD 9.]

RG = RAID Group; LDEV = Logical Device; HNAS = Hitachi NAS Platform; SD = System Drive; SP = Storage Pool; FS = File System; EVS = Enterprise Virtual Server; SHR = Share

Hitachi Storage System Integration

• Provision RAID groups and logical units (LDEVs/LUNs) using the storage vendor's management application.
• Connect your storage, the Hitachi NAS Platform, and the Fibre Channel (FC) switches to form the back-end SAN, or use direct FC (DAS).
• Configure FC switch zoning.

In the SMU, go to Storage Management > System Drives and verify that the new System Drives (in other words, the LUNs presented by storage) are visible to the Hitachi NAS Platform node.
1. Verify the storage capacity license limit.
2. Once verified, allow the Hitachi NAS Platform node access to the specified System Drives.
The status refresh can be executed via the CLI using the scsi-refresh command (see the sketch below). DAS stands for Direct Attached Storage.
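A minimal sketch of that CLI step: scsi-refresh is named above, and sd-list appears later in this module; treat the pairing as an assumption:

    scsi-refresh
        # rescan the back end for newly presented LUNs
    sd-list
        # confirm the new system drives are visible to the node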


BlueArc RAID Rack Discovery

• You must discover the storage array before you can manage it.

When you click Discover Racks, the IP addresses of both RAID controllers are discovered, and the array becomes manageable by the BlueArc Systems Server. Once a rack is added, the following events occur:
• The selected RAID racks appear on the RAID Racks list page and on the System Monitor (for the currently selected managed server).
• The SMU begins logging rack events, which can be viewed through the Event Log link on the RAID Rack Details page.
• RAID rack severe events will be forwarded to each managed server that has discovered the rack and included in its event log. This triggers the server's alert mechanism, possibly resulting in alert emails and SNMP traps.
• The RAID rack's time is synchronized daily with SMU time.
• If system drives are present on the RAID rack, the rack “cache block size” will be set to 16KB.

Note that if there is a problem with either array controller, the rack will be discovered, but in a degraded (partially discovered) state with reduced functionality. You must resolve the problem with the array, then remove and rediscover the array.

Create System Drives

• From the System Drives management screen, click Create.
• Select the storage array on which you are going to build your System Drive.

The SMU only has the API scripts to build RAID arrays on BlueArc Storage Arrays (LSI). On other vendors' storage, you will use their native application. Supported RAID types are RAID-1, 5, and 6 on BlueArc RC16 arrays.
1. Navigate to the System Drives page (Home > Storage Management > System Drives).
2. In the System Drives page, click Create.
3. Select a rack. When the Select RAID Rack page is displayed, select a rack, then click Next.
4. Indicate the RAID level.
5. Specify the drive parameters (size, name, stripe size).

System Drives – Create SD

• Select the RAID level for the System Drive.

System Drives – Create SD

• Create an SD in a 7+1 RAID-5 RAID group.

Select the number of drives in your RAID group; this includes parity drives. You are able to create multiple SDs within a single RAID group, but this is not recommended because it will cause the disk heads to seek between two physical areas on disk. Select the stripe depth for your RAID groups, keeping Superflush in mind.

CLI Displaying the System Drives

• From the CLI, more details can be displayed.
• Pay attention to the Mirror column, which indicates the role in a TrueCopy pair.
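A hedged usage sketch (the detailed listing itself is a screenshot in the original, and the column layout varies by firmware):

    sd-list -i
        # detailed per-SD output, including the Mirror column
        # (the -i flag also appears later in this module; check
        #  man sd-list on the node for the exact options in your release)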


From Disk Drive to HNAS Virtualized Storage

[Diagram: building on the previous picture, System Drives SD 0 to SD 9 are grouped into storage pools SP 01 and SP 02; file systems FS01 to FS04 and virtual volume /VV1 live in the pools and are assigned to EVS1 (192.168.3.21), EVS2 (192.168.3.25) and EVS3 (192.168.3.31).]

Hitachi Dynamic Provisioning (HDP) and HNAS

[Diagram: DP-VOLs 11 to 15 carved from an HDP pool are presented to the HNAS as System Drives SD 0 to SD 4 and gathered into storage pool SP 01 with file systems FS01, FS02 and /VV1.]

Some of the current restrictions:
• An HDP pool hosting HNAS System Drives (SDs) should never be overprovisioned.
• HNAS is not aware of HDP thin-provisioned volumes.
• If an HDP pool runs out of disk space, the HNAS System Drive experiences SCSI and I/O errors, and HNAS fails the entire span and unmounts it automatically.
• Always monitor and ensure that the HDP pools for HNAS are never oversubscribed (see the sketch below).
• HNAS does not have the ability to adapt to DP-VOL size changes; the size of the DP-VOLs must never change.
• All the DP-VOLs used in an HNAS storage pool should have the same performance capabilities.
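Pool occupancy itself has to be watched on the array, but a hedged sketch of the node-side checks, using the span-list and sd-list commands that appear elsewhere in this module, might be:

    span-list
        # confirm the span (storage pool) built on the DP-VOLs is healthy
    sd-list -i
        # confirm the SD sizes are unchanged since the span was created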


From Physical Disk to Storage Pool

[Diagram: four data stripes plus a parity stripe of a RAID-5 3+1 group form an SD/LDEV; several SDs form a stripeset, which holds the storage pool's configuration on disk (COD) and file system FS1. Hosts have no direct access to the individual disks.]

The storage pool consists of one or more System Drives (SDs). A single SD has the same capacity (in bytes) as a Logical Unit Number (LUN) presented over the SAN. This diagram illustrates the concept of storage pools and should not be interpreted as a best practice. For best practices, consult the appropriate documentation for the modular or enterprise disk subsystems.
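A minimal pool-creation sketch from the CLI, assuming the span-create command (the GUI storage pool wizard shown later performs the same task) and that it takes a pool name followed by SD IDs; verify the syntax on your release before use:

    span-create pool1 0 1 2 3
        # hypothetical pool 'pool1' built from SDs 0-3
    span-list
        # verify the new storage pool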


Expanding a Storage Pool

[Diagram: the original stripeset with additional SDs appended as a new stripeset; FS1 and the COD remain in the pool.]

A storage pool can be expanded non-disruptively in capacity by adding one or more system drives (LUNs). The number of LUNs added per expansion also defines the size of the new stripeset used by the storage pool. Therefore, be aware that performance may differ in comparison with the stripeset in the storage pool before the expansion. The Dynamic Write Balancing mechanism compensates for this phenomenon and is enabled by default.

File System in a Storage Pool

[Diagram: two file systems, FS1 and FS2, allocated in the same storage pool on one stripeset.]

With the storage pool concept, more than one file system can be allocated in the same storage pool. The storage pool concept requires a license key for storage pools.

File System Using Auto Expansion

[Diagram: FS1, FS2 and FS3 in one storage pool, each pre-allocated a fraction of its maximum size and expanding on demand.]

Auto expansion can be used as a kind of thin provisioning on the file system level. File systems are created with a maximum value, but only a user-defined fraction of the maximum is pre-allocated. In this way, the total capacity of all file systems can be greater than the available space in the storage pool. When more data is added to a file system, the pre-allocated space expands as needed. Of course, the file systems together cannot grow larger than the storage pool capacity allows, so growth of the file systems should be taken into consideration when enlarging the storage pool.

System Drive Groups (SDG)

• To enable advanced file system technologies, it is essential that the NAS nodes understand the relationship between storage RAID groups and HW-LUNs/LDEVs.
• The SDG represents the mapping of SDs/LUNs/LDEVs that share the same storage RAID group.
• Autogroup assignment should only be selected when an LDEV spans a complete RAID group (1 SD in each SDG).

[Diagram: several SDs/LDEVs carved from the same RAID group are collected into one SDG.]

Hitachi Dynamic Provisioning (HDP)

• The HDP feature introduces a virtualization layer that hides the RAID group layout from the HNAS nodes.
• The best practice is to assign each SD/LUN/DP-VOL mapping to its own SDG.

[Diagram: DP-VOLs 1 to 4 each map one-to-one to an SD, and each SD/DP-VOL pair forms its own SDG. DPV = DP-VOL.]

Storage Pool Best Practices

• One LDEV/LUN spans the complete RAID group.
• Minimum of four (4) SDs in a storage pool.
• Even number of SDs in a pool.
• Design with future expansion in mind.
• Queue depth:
  ▪ All file systems in a storage pool should belong to EVSs on one node; in other words, do not share the same storage pool across nodes.
  ▪ SCSI queue depth is cluster-wide.
  ▪ The maximum SCSI queue depth is 500 per modular storage system target port.
  ▪ The NAS node has a fixed SCSI queue depth of 32 per LDEV.

The scsi-queue-limits-show command reports these limits per storage family. In the example output (abridged), the default and current values are identical throughout:

    HITACHI AMS500/DF700M, AMS1000/DF700H, SMS100/SA800, SMS110/SA810,
    AMS2100/DF800S, AMS2300/DF800M, AMS2500/DF800H, HUS110/DF850XS,
    HUS130/DF850S, HUS150/DF850MH:
        per controller: 0, per target port: 500, per system drive: 32
    HITACHI VSP/R700 and HUS-VM/HM700:
        per controller: 0, per target port: 2000, per system drive: 32
    HITACHI Default HDS/UNKNOWN/OTHER:
        per controller: 0, per target port: 256, per system drive: 32

Storage Pools Specifications

Storage pool: an expandable container of file systems.
• Initial creation with up to 32 System Drives
• A storage pool can be expanded 63 times
• Expandable to 256TB (FW version 10.1 and above: 1PB)
• Can contain up to 16,384 chunks

Creating a Storage Pool

When you only allow access to the SDs you need, storage pool assignment becomes much easier. In this scenario, access to 4 SDs is selected on the System Drives screen; then, using the storage pool wizard, all you need to do is check all and continue, without having to consider at this stage which SDs to use.

File System Specifications

• One or more file systems may be created in a storage pool.
• 256TB maximum capacity limit; maximum of 1,023 chunks.
• Up to 128 file systems in a storage pool.
• Maximum of 125 file systems in a cluster and 128 in a single node.

The file system on the Hitachi NAS Platform was originally called the “Silicon File System”. The HNAS file system can be displayed as shown on the screen capture. Newer file system versions are called Wise File System version 1 (WFS1) and Wise File System version 2 (WFS2).

File System Definition

• Set the size limit and enable Auto-Expansion for file systems that will grow as needed.
• Disable Auto-Expansion and specify the initial file system size to create file systems at maximum capacity.
• Block size: 32KB gives the best performance for big files; 4KB gives optimal space utilization.
• Assign the file system to an EVS.
• WORM is supported by Hitachi.
• Format for BlueArc JetMirror target.
• Prepare for deduplication.

Choosing a file system block size is an important decision because it affects performance, storage size, and the efficiency of storage utilization. A file system with a 32KB block size provides higher throughput when transferring large files. However, a file system with a 4KB block size performs better than a file system with a 32KB block size when subjected to a large number of smaller I/O operations. If the file system contains many relatively small files, a 4KB file system block size provides more efficient space utilization. When saving a 42KB file in a file system with a 32KB block size, the 42KB file takes up two 32KB blocks, for a total of 64KB used (2 x 32KB = 64KB). In a file system with a 4KB block size, the 42KB file takes up eleven 4KB blocks, for a total of 44KB used (11 x 4KB = 44KB). In this case, the 32KB block size wastes 22KB of space while the 4KB block size wastes only 2KB of space. One advantage of configuring multiple file systems within the same storage pool is that applications requiring a 4KB block size can share storage with applications that require a 32KB block size.

Page 6-22

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols Tiered File Systems (Tiered Storage Pools)

Tiered File Systems (Tiered Storage Pools)

 Tiered File Systems (TFS) provide cost efficiency with equal performance by using fewer or lower-cost disks
• Can be deployed with or without SSD drives
• New installation with no degradation in I/O performance

[Diagram: a traditional file system keeps metadata and user data together on one tier of high-speed SAS disks. A tiered file system places metadata (small reads and writes) on Tier 0, high-speed SSD or SAS disks, and user data (larger reads and writes) on Tier 1, lower-cost NL-SAS or SATA disks.]

HDS Confidential: For distribution only to authorized parties.

Page 6-23

File System and Access Protocols Creating a Tiered Storage Pool

Creating a Tiered Storage Pool

Page 6-24

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols Creating a Tiered Storage Pool

Creating a Tiered Storage Pool

Step 1

Step 2

HDS Confidential: For distribution only to authorized parties.

Page 6-25

File System and Access Protocols Displaying a Tiered Storage Pool

Displaying a Tiered Storage Pool

sd-list -i --show-tier

span-list
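For illustration, a minimal console sketch using the two commands above (prompt style follows the trouble example later in this guide; the parenthetical annotations are ours, not command output):

lab1-1:$ sd-list -i --show-tier    (lists each system drive together with its tier assignment)
lab1-1:$ span-list                 (lists storage pools; a tiered span reports capacity per tier)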

Page 6-26

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols From Disk Drive to Drive Letter and UNIX Mount Point

From Disk Drive to Drive Letter and UNIX Mount Point

[Diagram: Windows servers Win1 and Win3 map drive letters X: and Y: to SHR1 (on FS01, served by EVS1, 192.168.3.21) and SHR3 (on FS03, served by EVS3, 192.168.3.31). UNIX clients 192.168.3.78 and 192.168.3.41 NFS-mount MNT1 (virtual volume /VV1, via NFSD on EVS2, 192.168.3.25) and MNT2 (FS03 on EVS3). File systems FS01 and FS02 reside in storage pool SP 01, and FS03 and FS04 in SP 02. The pools are built from system drives SD 0 through SD 9, each mapped to an LDEV (0, 11-13, 16, 19-23) carved from RAID groups in the storage array.]

RG = RAID Group; LDEV = Logical Device; HNAS = Hitachi NAS Platform; SD = System Drive; SP = Storage Pool; FS = File System; EVS = Enterprise Virtual Server; SHR = Share

HDS Confidential: For distribution only to authorized parties.

Page 6-27

File System and Access Protocols What Are the Similarities?

What Are the Similarities?

 NFS and CIFS are protocols
 NFS and CIFS enable sharing the same "storage"
 NFS and CIFS access across a network
 NFS and CIFS have built in:
• System login security
• Connection protocol
• File and directory security
• File and directory locking

The way the similarities are implemented makes CIFS and NFS very different!

Page 6-28

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols What Is Different?

What Is Different?

Issue: System Login Security
  NFS v2/v3: Client logs into server; server authenticates; server exports directories
  CIFS:      Client logs into domain; domain authenticates; server shares directories

Issue: Stateful/Stateless connection
  NFS v2/v3: No historical relationship; do not have to re-authenticate
  CIFS:      Client/server share history; have to re-authenticate

Issue: File and Directory Security
  NFS v2/v3: Check U-ID/G-ID at request time; U-ID/G-ID per file/directory
  CIFS:      Use ACL for the share; U-ID (SID) checked against ACL

Issue: File and Directory Locking
  NFS v2/v3: Advisory locks; works for good citizens only
  CIFS:      Mandatory locks; access decides the lock

HDS Confidential: For distribution only to authorized parties.

Page 6-29

File System and Access Protocols UNIX Permissions

UNIX Permissions

Owner – Group – Everyone

Maybe stupid – but "Simple"!

Page 6-30

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols Windows Permissions

Windows Permissions Maybe Advanced — but “Complex”!

HDS Confidential: For distribution only to authorized parties.

Page 6-31

File System and Access Protocols Common Internet File System (CIFS) Authentication/Active Directory Service (ADS)

Common Internet File System (CIFS) Authentication/Active Directory Service (ADS)

 Add a CIFS Name to each EVS that will host Microsoft Windows clients
• ADS CIFS names will be automatically added to the specified Active Directory using Dynamic DNS (DDNS)
• ADS accounts are placed in the "Computers" folder by default

DNS: DNS is used to translate host names into IP addresses. Records must be created manually for every host name and IP address.
Dynamic DNS: On TCP/IP networks, the Domain Name System (DNS) is the most common method to resolve a host name to an IP address, facilitating IP-based communication. Starting with Microsoft Windows 2000, Microsoft enabled support for Dynamic DNS, with a DNS database that allows authenticated hosts to automatically add a record of their host name and IP address, thus eliminating the need for manual creation of records.

Page 6-32

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols ADS and Network Basic Input/Output System (NetBIOS)

ADS and Network Basic Input/Output System (NetBIOS)

 Legacy Microsoft and some non-Microsoft Windows CIFS clients may require NetBIOS to be enabled
 On customer request, DDNS can be disabled

Using NetBIOS: When enabled, NetBIOS allows NetBIOS and WINS on this server. If this server communicates by name with computers that use older Microsoft Windows versions, this setting is required. By default, the server is configured to use NetBIOS. Disabling NetBIOS has some advantages:
 Simplifies the transport of SMB traffic
 Removes WINS and NetBIOS broadcast as a means of name resolution
 Standardizes name resolution on DNS for file and printer sharing

HDS Confidential: For distribution only to authorized parties.

Page 6-33

File System and Access Protocols ADS and Domain Name System (DNS)

ADS and Domain Name System (DNS)  The server registers each CIFS name and IP address with the directory’s Dynamic DNS server (DDNS)

Same EVS represented 3 times

Domain Controller

Page 6-34

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols ADS Computers

ADS Computers

 CIFS names will appear as unique computers in the Active Directory Computers folder

HDS Confidential: For distribution only to authorized parties.

Page 6-35

File System and Access Protocols ADS Computer Properties

ADS Computer Properties  Computer Properties for Hitachi NAS Platform Node EVS

Page 6-36

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols CIFS Shares

CIFS Shares

 Shares can be created through the GUI (shown) or using the Computer Management MMC.
 New shares are created with an "Everyone Full" permission.
 File and directory access applies according to the ACL.
 CIFSv1 and CIFSv2 are supported on all platform families.

Access configuration – optional IP-based restrictions.
Example: 19.168.*.*(rw) 10.1.3.38(noaccess) 10.1.2.0/24(ro)
Ordering is important. Start specific, then make more general.
Notes:
 All clients on network ID 19.168.0.0/255.255.0.0 will have read and write access.
 The client WS with IP address 10.1.3.38 will have no access at all, and all other WS IP addresses in 10.1.2.0/24 will have read-only access.
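As an illustration only, using the share name and EVS address from the drive-letter figure earlier in this module, a Windows client would map such a share with:

C:\> net use X: \\192.168.3.21\SHR1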

HDS Confidential: For distribution only to authorized parties.

Page 6-37

File System and Access Protocols Network File System (NFS) and Exports

Network File System (NFS) and Exports

 NFSv4 support added to NFSv3 and NFSv2 support
• Some performance and security improvements

 By default, exports grant read/write access to all clients and squash root (user and group) to 65534

Access configuration – optional IP-based restrictions.
Example: 19.168.0.1(norootsquash) 19.168.*.*(rw) *(ro)
Ordering is important. Start specific, then make more general.
Notes:
 The root account (UID/GID = 0/0) on the client WS with IP address 19.168.0.1 will not be mapped to "anonymous" (uid/gid 65534).
 All clients on network ID 19.168.0.0/255.255.0.0 will have read and write access.
 All other WS IP addresses will have read-only access.
 root squash: Map requests from uid/gid 0 (root) to the anonymous uid/gid (65534). Note that this does not apply to any other UIDs that might be equally sensitive, such as super users.
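As an illustration only, using the EVS address from the drive-letter figure earlier in this module and assuming FS03 is exported as /FS03, a UNIX client (as root) would mount such an export with:

# mount 192.168.3.31:/FS03 /MNT2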

Page 6-38

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols Multi-protocol Access

Multi-protocol Access

[Diagram: CIFS, NFS, iSCSI, and FTP clients connect through a LAG to an HNAS 3080. Within one expandable file system, CIFS maps to a share, NFS to an export, iSCSI to a target LUN, and FTP to an FTP directory.]

To use Internet Small Computer Systems Interface (iSCSI) storage on the server, one or more iSCSI logical units (LUs) must be defined. iSCSI logical units share blocks of SCSI storage that are accessed through iSCSI targets. iSCSI targets can be found through an iSNS database or through a target portal. Once an iSCSI target has been found, an Initiator running on a Microsoft Windows server can access the logical unit as a "local disk" through its target. Security mechanisms can be used to prevent unauthorized access to iSCSI targets. On the server, an iSCSI logical unit shares regular files residing on a file system. As a result, iSCSI benefits from file system management functions provided by the server, such as NVRAM logging, snapshots, and quotas.

HDS Confidential: For distribution only to authorized parties.

Page 6-39

File System and Access Protocols Module Summary

Module Summary  In this module, you have learned to: • Explain the storage pools of the Hitachi NAS Platform • Describe file system structure • List the storage pool and file system specifications • Identify the benefits of Tiered File Systems (TFS) • Describe the access protocol used by Microsoft® Windows® and UNIX • Identify the implementation differences for the access protocols

Page 6-40

HDS Confidential: For distribution only to authorized parties.

File System and Access Protocols Module Review

Module Review 1. List the acronyms of some popular file systems. 2. List some functions most file systems have in common. 3. Specify the maximum volume size in the Hitachi NAS Platform 3090. 4. Which benefits can the customer achieve with storage pools? 5. Which file access protocol is used on the Microsoft Windows platform? 6. Which file access protocol is used on the UNIX platform?

HDS Confidential: For distribution only to authorized parties.

Page 6-41

File System and Access Protocols Module Review

Page 6-42

HDS Confidential: For distribution only to authorized parties.

7. N-way Clustering and Enterprise Virtual Server (EVS) Module Objectives  Upon completion of this module, you should be able to: • Explain the concept of an Enterprise Virtual Server (EVS) • Explain the purpose of clustering • Define dataflow in NVRAM • Define IP address assignment in case of an error • List the failure detection areas • Recognize the failover and recovery operation • Describe a Synchronous Disaster Recovery Cluster

HDS Confidential: For distribution only to authorized parties.

Page 7-1

N-way Clustering and Enterprise Virtual Server (EVS) Enterprise Virtual Servers (EVS) Attributes

Enterprise Virtual Servers (EVS) Attributes

[Diagram: EVS 1, EVS 2, EVS 3, and so on, each with its own IP address and policy.]

 Each EVS has the following attributes assigned:
• One or more IP addresses
• One or more file systems
  ▪ All EVSs may see the same logical devices (LDEVs) but can only access the area belonging to the file systems assigned to the EVS (host [node]-based masking)
• Port assignment for performance management per EVS
• NFS/CIFS exported resources
• Command line interface (CLI) context

EVS allows administrators to create up to 64 logical servers within a single physical system. Each virtual server can have a separate address and policy.

Page 7-2

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) EVS Configuration Summary

EVS Configuration Summary

 EVS is the virtual file server component of the Hitachi NAS Platform solution
 Maximum of 64 EVSs per server node/cluster nodes

Anatomy of an EVS:
• One or more file serving IP addresses
• Can host one or more file systems
• Is the container for CIFS shares, NFS exports, and more
• Bound to one Link Aggregation Group (LAG)
• In a cluster failover scenario, EVSs migrate from the failed node to an online node

HDS Confidential: For distribution only to authorized parties.

Page 7-3

N-way Clustering and Enterprise Virtual Server (EVS) Virtual Server Configuration

Virtual Server Configuration

These screen shots explain the IP address assignments for EVS, as well as the EVS types.

Page 7-4

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) Automatic EVS Migration (Clustering) Network Problem

Automatic EVS Migration (Clustering) Network Problem

[Diagram: a 2-node cluster, each node with aggregations ag1 and ag2. When Node 1 loses its data network links, EVS1-1 (192.168.3.11), EVS1-2 (10.33.123.12), and EVS1-3 (10.33.123.14) migrate to Node 2. The private management network (192.168.3.19) carries the admin EVS0 (192.0.2.15) and the cluster node addresses 192.0.2.11 and 192.0.2.12; the public management address is 172.145.2.14.]

HDS Confidential: For distribution only to authorized parties.

Page 7-5

N-way Clustering and Enterprise Virtual Server (EVS) Automatic EVS Migration (Clustering) Node HW Problem

Automatic EVS Migration (Clustering) Node HW Problem

[Diagram: the same 2-node cluster. When Node 1 itself fails (hardware problem), all EVSs hosted by Node 1 (EVS1-1 192.168.3.11, EVS1-2 10.33.123.12, EVS1-3 10.33.123.14) migrate to Node 2, together with the admin EVS0 (192.0.2.15). Private management: 192.168.3.19; cluster node addresses: 192.0.2.11 and 192.0.2.12; public management: 172.145.2.14.]

Page 7-6

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) 2-node Clustering

2-node Clustering

[Diagram: a Hitachi NAS Platform 2-node cluster. Both nodes serve clients over the data network and connect to the management network; cluster heartbeats travel over the cluster interconnect, the system configuration is shared, and the SMU hosts the quorum device.]

The Hitachi NAS Platform supports 2-node Active-Active (A-A) clusters. In A-A cluster configurations, each node can host several independent EVSs, which can service network requests simultaneously. A maximum of 64 EVSs per 2-node cluster are supported. Should either of the nodes in the cluster fail, the EVSs from the failed node will automatically migrate to the remaining node. Network clients will not typically be aware of the failure and will not experience any loss of service, although the cluster may operate with reduced performance until the failed node is restored. After the node is restored and is ready for normal operation, the EVS can be migrated manually back to the original node. Note: SMU stands for System Management Unit.

HDS Confidential: For distribution only to authorized parties.

Page 7-7

N-way Clustering and Enterprise Virtual Server (EVS) Clustering Basics

Clustering Basics

 The cluster is Active-Active when one or more EVSs are defined on both nodes
 Clusters of 2 to 8 nodes (3080, 3090, 4060: two nodes; 4080, 4100: four nodes; 4080, 4100 later: eight nodes)
• Clusters greater than 2 nodes require dual 10GbE interconnect switches
 Quorum is maintained by having a majority of votes
• The Quorum Device votes only in even-node clusters
• The Quorum Device resides on a System Management Unit
• Node(s) not part of a quorum do not host services
 Failover means EVS migration
• Occurs when:
  ▪ All GbE ports in an aggregation used by an EVS fail
  ▪ All file systems associated with an EVS are offline
  ▪ Remote node goes offline; in other words, it is no longer sending heartbeats

 A kind of Active-Passive cluster configuration can be achieved in a 2-node cluster by having one node serve all EVSs and no EVSs being serviced by the second node.
 In case of SMU failure, the cluster failover functionality can be affected in a 2- or 4-node cluster.
 The interconnect switches for clusters greater than 2 nodes need to support 10GbE.
 Support for up to 64 EVSs per single node or 2-node to 4-node clusters.

Page 7-8

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) NVRAM Usage in a 2-way Clustered Configuration

NVRAM Usage in a 2-way Clustered Configuration

[Diagram: in a 2-way cluster, (1) a network client writes to a node; (2) the node mirrors the data to the partner node's NVRAM and receives an acknowledgement through the HSI; (3) the client receives "write complete"; (4) the data is written to disk from cache.]

When the Hitachi NAS Platform node is configured as a 2-node cluster, then, in addition to buffering all the file system modifications, each cluster node mirrors the NVRAM contents of the other cluster node. This mirroring of the cluster nodes’ NVRAM content ensures data integrity in the event of a cluster node failure. When a cluster node takes over for the failed node, it uses the contents of the NVRAM mirror to complete all file system modifications that were not yet committed to the storage by the failed server. HSI = High Speed Interconnect/Interface

HDS Confidential: For distribution only to authorized parties.

Page 7-9

N-way Clustering and Enterprise Virtual Server (EVS) N-way Clustering

N-way Clustering

[Diagram: a Hitachi NAS Platform N-way cluster. All nodes serve clients over the data network; cluster heartbeats travel over the cluster interconnect switches, and the SMU on the management network hosts the quorum device and the system configuration.]

N-way clustering allows up to 4 Hitachi NAS Platform 3090 nodes to be configured as a single Hitachi NAS Platform Cluster. When formed into a cluster, the Hitachi NAS Platform nodes are called cluster nodes. In a Hitachi NAS Platform cluster, nodes are not passive. Each node is active and able to host independent EVSs, which can serve network requests simultaneously. A maximum of 64 EVSs per cluster are supported. If a cluster node fails, the EVSs from the failing node automatically migrate to other cluster nodes. The EVSs from the failed node are then hosted by the other nodes in the cluster. Network clients will not typically be aware of the failure and will not experience any loss of service, although the cluster may operate with reduced performance until the failed node is restored. After the failed node is restored and is ready for normal operation, an EVS can be migrated back to the original node manually.

Page 7-10

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) NVRAM Usage in a 4-way Clustered Configuration

NVRAM Usage in a 4-way Clustered Configuration

[Diagram: a Hitachi NAS Platform 4-way cluster in which each node mirrors its NVRAM to the next node in sequence. (1) A network client writes to a node; (2) the node mirrors the data to its partner's NVRAM and receives an acknowledgement through the HSI; (3) the client receives "write complete"; (4) the data is written to disk from cache.]

When the Hitachi NAS Platform node is configured as a cluster, then, in addition to buffering all the file system modifications, each cluster node mirrors the NVRAM contents of the other cluster nodes in sequence. This mirroring of the cluster nodes’ NVRAM content ensures data integrity in the event of any one cluster node failure. When a cluster node takes over for the failed node, it uses the contents of the NVRAM mirror to complete all file system modifications that were not yet committed to storage by the failed server.

HDS Confidential: For distribution only to authorized parties.

Page 7-11

N-way Clustering and Enterprise Virtual Server (EVS) Cluster Configuration

Cluster Configuration

Item          Description
Cluster Name  Name of the cluster.
Status        Overall cluster status (online or offline).
Health        Cluster health: Robust or Degraded.

Quorum Device
Name          Name of the server hosting the QD (in other words, the SMU on which the QD resides).
IP Address    IP address of the server hosting the QD (in other words, the SMU on which the QD resides).
Status        QD status:
              • Configured – attached to the cluster. The QD's vote is not needed when the cluster contains an odd number of operational nodes.
              • Owned – the QD is attached to the cluster and owned by a specific node in the cluster.
              • Not up – the QD cannot be contacted.
              • Seized – the QD has been taken over by another cluster.

Quorum Device services are provided by the SMU. While servers and clusters in a server farm are managed by a single SMU, an SMU can provide quorum services for up to 8 clusters in a server farm. To do so, the SMU hosts a pool of 8 available Quorum Devices (QDs). When a new cluster is formed, a QD must be assigned to the cluster. Once assigned to the cluster, the QD is “owned” by that cluster and is no longer available. Removing a QD from a cluster releases its ownership and returns the QD service to the pool of available QDs. If you need to add or remove the cluster’s QD, click the appropriate button (Add Quorum or Remove Quorum). If the QD is removed from the cluster, the port will be released back to the SMU’s pool of QDs and ports.

Page 7-12

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) EVS Failover Functionality and Process Summary

EVS Failover Functionality and Process Summary

 Out of all configured EVSs, only the EVS affected by a problem will fail over (migrate) to another node
 In case of node hardware or software failure, all EVSs hosted by this node will migrate
 Even an EVS that has migrated to another node due to failure can be migrated to a third node
 Failback is performed manually and is an EVS migration to the preferred node
 An EVS that is not running on the preferred node is indicated with orange in the GUI
 Migrating an EVS enables the IP address(es) associated with the EVS on the other node, together with all services and shares configured on the EVS
 If the admin EVS is running on a failing node, this admin EVS is migrated as well
 Failback is also a manual operation for the admin EVS

HDS Confidential: For distribution only to authorized parties.

Page 7-13

N-way Clustering and Enterprise Virtual Server (EVS) IP Address before Failover

IP Address before Failover

[Diagram: hosts hold an ARP table mapping 192.168.0.3 to MAC xxx01 and 192.168.0.4 to MAC xxx02. NAS node1 (NIC1, MAC xxx01) hosts EVS 1 (IP 192.168.0.3); NAS node2 (NIC2, MAC xxx02) hosts EVS 2 (IP 192.168.0.4); both serve the file services NFS, CIFS, FTP, and so on.]

The way the ARP protocol maps the MAC address and IP address is displayed in the above diagram under normal operation for two different EVSs on two different physical nodes.

Page 7-14

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) On Failing Over

On Failing Over

 An IP alias of EVS 1's IP address is created on node2
 Node2 broadcasts gratuitous ARP packets, which force an update of the ARP table of host clients

[Diagram: node1 fails and EVS 1 (192.168.0.3) fails over to node2 (NIC2, MAC xxx02). Node2 sends gratuitous ARP packets, and the hosts update their ARP table accordingly.]

HDS Confidential: For distribution only to authorized parties.

Page 7-15

N-way Clustering and Enterprise Virtual Server (EVS) After Failover

After Failover

 Client hosts can continue to access EVS 1 of NAS node1 with the same IP address, but through node2

[Diagram: the hosts' new ARP table maps both 192.168.0.3 and 192.168.0.4 to MAC xxx02; node2 now hosts EVS 1 (192.168.0.3) alongside EVS 2 (192.168.0.4).]

After the failover process is completed, the updated ARP table in the clients will associate the IP for EVS 1 with the same MAC address as for EVS 2. This way, the clients using the IP address or the associated name for EVS 1 on node1 will not detect any difference before and after failover. Clients with a historical host relationship (stateful connection) like CIFS will need to re-authenticate before a transfer can be re-established.
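On a Windows client, the effect can be observed with arp -a. A hypothetical before-and-after sketch, with the MAC addresses abbreviated as in the diagrams:

C:\> arp -a                      (before failover)
  192.168.0.3    xxx01
  192.168.0.4    xxx02

C:\> arp -a                      (after failover: both addresses resolve to node2)
  192.168.0.3    xxx02
  192.168.0.4    xxx02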

Page 7-16

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) Cluster Failover Reporting

Cluster Failover Reporting

[Diagram: a node failure is reported from the Hitachi NAS Platform over the private management network to the SMU, which relays alerts to the customer through syslogd, SNMP, SMTP, the event log, and Hi-Track Monitor; clients remain connected over the data network.]

Depending on the customer network and server configuration, several or all error-reporting methods can be used. On the SMU, the error-reporting relay functions must be configured, and the "Network Management" and "Mail Servers" settings adjusted by the customer to reflect the configuration of the SMU. Alternatively, SMTP servers, as an example, can reside on the data network as well. Hi-Track® Monitor uses "SNMP get" commands to do a health check, and an EVS admin IP address on the public LAN can be used as well to interrogate any status changes and issue the alerting process set up by the CE.

HDS Confidential: For distribution only to authorized parties.

Page 7-17

N-way Clustering and Enterprise Virtual Server (EVS) Let’s Have a Look at a Single Node

Let’s Have a Look at a Single Node

 No redundancy except network and FC links
 If the node fails, loses network connectivity, or cannot reach the storage, no file service can be provided

[Diagram: a single node hosting EVSs with file systems on spans built from system drives.]

Page 7-18

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) A Cluster Improves Things

A Cluster Improves Things

[Diagram: two clustered nodes sharing configuration and storage; EVSs, file systems, and spans can be served by either node.]

 In a traditional HNAS cluster, high availability is achieved by adding additional nodes which share configuration and storage
 Automatic EVS migration to other cluster nodes will ensure that file service can be provided, even if one node is down or has connectivity problems
 If the storage fails, all nodes are affected until storage is back online

HDS Confidential: For distribution only to authorized parties.

Page 7-19

N-way Clustering and Enterprise Virtual Server (EVS) Hitachi Synchronous Disaster Recovery (Sync DR) Cluster Service

Hitachi Synchronous Disaster Recovery (Sync DR) Cluster Service

[Diagram: a 2-node cluster stretched between Groningen and Utrecht; the primary (P) system drives at one site are replicated with TrueCopy to secondary (S) system drives at the other site.]

 The idea of a Sync DR Cluster is to add an additional copy of the data using TrueCopy
 Since it does not really make sense to keep 2 copies of data at one location, the cluster is usually stretched over 2 locations
 Problems with one of the 2 nodes will still be handled by the "well known" cluster mechanism, providing high availability

Page 7-20

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) Sync DR Components and Connectivity

Sync DR Components and Connectivity

HDS Confidential: For distribution only to authorized parties.

Page 7-21

N-way Clustering and Enterprise Virtual Server (EVS) This Is NOT a Sync DR Cluster

This Is NOT a Sync DR Cluster

[Diagram: Groningen and Utrecht each run their own independent cluster. Each site's primary (P) system drives are replicated with TrueCopy to secondary (S) system drives at the other site, in both directions.]

There are other approaches to combine high availability (HA) and disaster recovery (DR) with HNAS; however, these approaches are not called HNAS "Sync DR Cluster" or "Metro Cluster" and are usually customer-specific services and configurations delivered by GSS.

Page 7-22

HDS Confidential: For distribution only to authorized parties.

N-way Clustering and Enterprise Virtual Server (EVS) Module Summary

Module Summary  In this module, you have learned to: • Explain the concept of an Enterprise Virtual Server (EVS) • Explain the purpose of clustering • Define dataflow in NVRAM • Define IP address assignment in case of an error • List the failure detection areas • Recognize the failover and recovery operation • Describe a Synchronous Disaster Recovery Cluster

HDS Confidential: For distribution only to authorized parties.

Page 7-23

N-way Clustering and Enterprise Virtual Server (EVS) Module Review

Module Review 1. What are the major benefits of clustering the nodes in the system? 2. Will a running application continue to run during and after the failover? 3. How many nodes can be included in one cluster? 4. Which configuration parameters must be aligned across the nodes in the cluster? 5. Which conditions will result in an automatic failover? 6. How is the event of a failover reported? 7. List the benefits of creating multiple EVSs in a cluster.

Page 7-24

HDS Confidential: For distribution only to authorized parties.

8. Maintenance Module Objectives  Upon completion of this module, you should be able to: • Differentiate the IP addresses used to identify different components and functions in the Hitachi NAS Platform • List the different management facilities • Recognize the naming and versioning convention for the software in System Management Unit (SMU) and node • Follow the upgrade procedures for hardware and software • Install and configure Hi-Track Remote Monitoring system

HDS Confidential: For distribution only to authorized parties.

Page 8-1

Maintenance Node IP Addresses 1 of 2

Node IP Addresses 1 of 2

 Administrative IP addresses
• Assigned to the 10/100/1000 private management port or, if required, to the 1GbE (30x0 only) and 10GbE aggregated ports
  ▪ Accessing the private management network through the external or embedded SMU
  ▪ Creating an admin services IP address on the 1GbE (30x0 only) or 10GbE aggregated ports
• On 30x0 and 4xx0 cluster configurations, the eth0 interface can be used to administer and monitor the nodes as well
• Server administration using SMU, SSC, and SSH
• IP-based access restriction on a per-service basis

Page 8-2

HDS Confidential: For distribution only to authorized parties.

Maintenance Node IP Addresses 2 of 2

Node IP Addresses 2 of 2

 File serving (EVS) IP addresses
• Assigned to 1GbE (30x0 only) and 10GbE aggregated ports only
• Support the file service protocols CIFS, NFS, and FTP, and the block-based protocol iSCSI
• IP-based access restriction on shares and exports
• Version 11.1.xxxx.xx supports Data Migrator to cloud, where private management eth1 or public management eth0 can be used for migration to a cloud

 Cluster node IP addresses
• Assigned to the 10/100/1000 private management network only
• Used for inter-cluster and Quorum Device communications
• Physical, non-migrating IP; stays with the cluster node

HDS Confidential: For distribution only to authorized parties.

Page 8-3

Maintenance Management Facilities

Management Facilities

 Management services:
• HTTPS — GUI, primary management interface
  ▪ https://<SMU_IP>/
• ssh — access the node CLI
  ▪ ssh manager@<SMU_IP>; enter the managed server
  ▪ ssh supervisor@192.0.2.2
• Telnet — access the node CLI
• ssc/pssc — utility for running remote commands
  ▪ ssc -u supervisor -p supervisor 192.0.2.2
• scp — secure copy to/from server flash
  ▪ scp supervisor@192.0.2.2:/
  ▪ scp supervisor@192.0.2.2:/event.log ./event.log

HTTP: HyperText Transfer Protocol
HTTPS: HyperText Transfer Protocol Secure
PSSC: Perl SiliconServer Control
SCP: Secure CoPy
SSC: SiliconServer Control
SSH: Secure SHell

Page 8-4

HDS Confidential: For distribution only to authorized parties.

Maintenance Securing Management Access

Securing Management Access

 GUI Access:
• Home > SMU Administration > Security Options
• Home > Server Settings

 CLI Access:
• mscfg <server> [enable | disable] [restrict on|off] [addhost <address>] [removehost <address>]
  ▪ HTTP   — Atlas server
  ▪ HTTPS  — Atlas server (secure)
  ▪ Telnet — Telnet server
  ▪ ssc    — SSC/PSSC CLI
  ▪ SNMP   — SNMP agent
  ▪ vss    — VSS hardware provider DLL connection

VSS Hardware Provider: Through the integration between the Volume Shadow Copy Service (VSS), hardware or software VSS providers, application-level writers and backup applications, VSS enables integral backups that are point-in-time and application-level consistent without the backup tool having knowledge about the internals of each application.
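A hypothetical hardening session using the mscfg syntax above (the service names follow the list above; the address is the SMU's private network IP used elsewhere in this guide):

mscfg telnet disable          (turn the Telnet server off)
mscfg ssc restrict on         (limit SSC access to registered hosts)
mscfg ssc addhost 192.0.2.60  (register the SMU as an allowed host)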

HDS Confidential: For distribution only to authorized parties.

Page 8-5

Maintenance Useful Command Line Utilities

Useful Command Line Utilities

 "Tab completion"
• As an example: > disk <Tab> completes to > diskusage_applet and other disk* commands
 help
 man
 apropos <what>
• all with: |more |grep
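For example, a short lookup session combining these utilities (the search word is illustrative; span-list is a command used elsewhere in this guide):

lab1-1:$ apropos snapshot | more    (list commands whose descriptions mention snapshots)
lab1-1:$ man span-list              (read the manual page for span-list)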

Page 8-6

HDS Confidential: For distribution only to authorized parties.

Maintenance CLI Commands and Context

CLI Commands and Context

 pn <no>          — physical node
     or
 cn <no>          — cluster node
 vn <no>          — virtual node (EVS)
 evssel <no>      — virtual node (EVS)
 for-each-evs     — all EVSs
 for-each-cnode   — all physical nodes in the cluster

Example: pn all fc-link-status
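A short illustrative session (fc-link-status and cifs-name list are commands used elsewhere in this guide):

lab1-1:$ pn 1 fc-link-status          (run fc-link-status on physical node 1)
lab1-1:$ vn 3 cifs-name list          (run cifs-name list in the context of EVS 3)
lab1-1:$ for-each-evs cifs-name list  (repeat the command for every EVS)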

HDS Confidential: For distribution only to authorized parties.

Page 8-7

Maintenance Maintenance Actions

Maintenance Actions

 The most often requested maintenance action is firmware upgrade
 SMU upgrade and Hitachi NAS Platform Node firmware upgrades are often tightly coupled, and the sequence is important
 No Release Notes (RN) and no upgrade procedure? — no upgrade!
 Pay attention to Product Support Alerts
 Not strictly following the up- and downgrade procedures can result in unrecoverable error situations and customer outage
 Make sure prerequisites are understood and fulfilled

Page 8-8

HDS Confidential: For distribution only to authorized parties.

Maintenance Software Patching

Software Patching

 Procedure is the same as with major software releases
 Upgrade could be only SMU or only Node
 Read the Release Notes published with the software (SW) package version

Version number anatomy: 10 . 2 . 3073 . 05
• 10 – major release
• 2 – maintenance release (new software features and support for new hardware)
• 3073 – build level
• 05 – patches and fixes

We occasionally get questions from customers asking how often we release major (for example, 10.0.x, 11.0.x) and minor (for example, 10.2.x, 11.1.x) HNAS OS code. Moving forward, we are planning:
 1 major release every 15-18 months
 3 minor releases every 12-month period
Please keep in mind these are guidelines and we reserve the right to make adjustments and changes. If you have any additional questions regarding this topic, feel free to reach out to a member of the HNAS PM team.

HDS Confidential: For distribution only to authorized parties.

Page 8-9

Maintenance Software Version Numbers and Names

Software Version Numbers and Names

Version 4.2 – Nov 06:  Node 4.2.???.x "Octopus";  SMU 4.2.???.x "Beech", SMU OS: RH 7.2
Version 4.3 – Feb 07:  Node 4.3.???.x "Razor";  SMU 4.3.???.x "Copper", SMU OS: RH 7.2
Version 5.0 – Nov 07:  Node 5.0.???.x "Parrot";  SMU 5.0.???.x "Davos", SMU OS: CentOS 4.4
Version 6.0 – Oct 08:  Node 6.0.???.x "Stone 1";  SMU 6.0.???.x "Eaglecrest", SMU OS: CentOS 4.4

Node software names = "Fish" names; SMU software names = Ski resorts names

Page 8-10

HDS Confidential: For distribution only to authorized parties.

Maintenance Software Version Numbers and Names

Version 6.1 – Oct 09 (3100/3200 only):  Node 6.1.???.x "Stone 2";  SMU 6.1.???.x "Eaglecrest", SMU OS: CentOS 4.4 (external: CentOS 4.4)
Version 6.5 – Sep 09 (3080/3090 only):  Node 6.5.???.x "Nemo";  SMU 6.5.???.x "Taos", SMU OS: Debian 5.0 (external: CentOS 4.4)
Version 7.0 – Sep 10:  Node 7.0.2048.x "Tiger-1";  SMU 7.0.2048.x "Taos", SMU OS: Debian 5.0 (external: CentOS 4.4)
Version 7.0 – Oct 10:  Node 7.0.2050.x "Tiger-1 (M1)";  SMU 7.0.2050.x "Northstar", SMU OS: Debian 5.0 (external: CentOS 4.4)

Node software names = "Fish" names; SMU software names = Ski resorts names

HDS Confidential: For distribution only to authorized parties.

Page 8-11

Maintenance Software Version Numbers and Names

Version 8.0 – Feb 11 (30x0 only):  Node 8.0.2226.x "Vampire-1";  SMU 8.0.2226.x "Vail-1", SMU OS: Debian 5.0 (external: CentOS 4.8)
Version 8.1 – May 11:  Node 8.1.2312.x "Vampire-2";  SMU 8.1.2312.x "Vail-2", SMU OS: Debian 5.0 (external: CentOS 4.8)
Version 10.0 – Q1 12:  Node 10.0.xxxx.x "Unicorn-1";  SMU 10.0.xxxx.x "Uplands-1", SMU OS: Debian 5.0 (external: CentOS 6.0)
Version 10.1 – Q2 12 (30x0 only):  Node 10.1.xxxx.x "Unicorn-2";  SMU 10.1.xxxx.x "Uplands-2", SMU OS: Debian 5.0 (external: CentOS 6.2)

Node software names = "Fish" names; SMU software names = Ski resorts names

Page 8-12

HDS Confidential: For distribution only to authorized parties.

Maintenance Software Version Numbers and Names

Version 8.2 – 08/11:  Node 8.2.2374.06 "Vampire-3";  SMU 8.2.2374.01 "Vail-3", SMU OS: Debian 5.0 (external: CentOS 4.8)
Version 10.2 – 08/12 (30x0 only):  Node 10.2.3071.03 "Unicorn-3";  SMU 10.2.3071.04 "Uplands-3", SMU OS: Debian 5.0 (external: CentOS 6.2)
Version 11.0 – 12/12 (30x0 only):  Node 11.0.3123.xx "Angel-1";  SMU 11.0.3123.xx "Alpine-1", SMU OS: Debian 5.0 (external: CentOS 6.2)
Version 11.1 – 4/13 (30x0 only):  Node 11.1.3225.xx "Angel-2";  SMU 11.1.3225.xx "Alpine-2", SMU OS: Debian 5.0 (external: CentOS 6.2)

Node software names = "Fish" names; SMU software names = Ski resorts names

HDS Confidential: For distribution only to authorized parties.

Page 8-13

Maintenance Software Version Numbers and Names

Version 11.1 – 7/13 (4xx0 only):  Node 11.1.3250.xx "Angel-2";  SMU 11.1.3225.xx "Alpine-2", SMU OS: Debian 5.0 (external: CentOS 6.2)
Version 11.2 – 08/13 (30x0 and 4xx0):  Node 11.2.33xx.xx "Angel-3";  SMU 11.2.33xx.xx "Alpine-3", SMU OS: Debian 5.0 (external: CentOS 6.2)
Version 12.0 – Q1/14 (30x0 and 4xx0):  Node 12.0.xxxx.xx "Bat-1";  SMU 12.0.xxxx.x "??", SMU OS: Debian 5.0 (external: CentOS 6.?)
Version 13.0 – ?/? (30x0 and 4xx0):  Node 13.0.xxxx.xx "Cornet-1";  SMU 13.x.xxxx.x "??", SMU OS: Debian 5.0 (external: CentOS 6.?)

Node software names = "Fish" names; SMU software names = Ski resorts names

Page 8-14

HDS Confidential: For distribution only to authorized parties.

Maintenance Software Upgrades

Software Upgrades

 Following are the general rules for software upgrades:
• Upgrades from 10.x to 11.x to 12.x are rolling upgrades
• Upgrade from version 5.x to 6.x is not a "rolling" upgrade
• Likewise, versions 6.x to 7.x, 7.x to 8.x, and 8.x to 10.x require total system outage and a maintenance window
• Upgrade from 5.0 to 5.1 is not a "rolling" upgrade; system outage and a maintenance window are required
• From the Stone 1 release (version 6.0), "dot releases" are supported as rolling upgrades, for example from 6.0 to 6.1 or 6.1 to 6.2, but NOT from 6.0 to 6.2
• Going from 5.0.1042.x to 5.0.1289.x (maintenance release) can often be done as a "rolling" upgrade, node by node; read the RN to make sure system outage and a maintenance window are not required
• Patch release upgrades, such as 5.0.1042.05 to 5.0.1042.09, can be done as a "rolling" upgrade, node by node

"Rolling" upgrade means doing the upgrade node by node while the customer still has access to the file systems and shares on the other nodes.

HDS Confidential: For distribution only to authorized parties.

Page 8-15

Maintenance Upgrade Path in Release Notes

Upgrade Path in Release Notes

 Consult the latest Upgrade Path in the Release Notes:
• Read the notes
• Do not compromise

In rolling upgrade (in green), cluster nodes may boot one-at-a-time into the new firmware version. EVS migration works between revisions that support rolling upgrades, and each revision can read NVRAM written by the other revision. Cluster upgrades (in red) require all cluster nodes to shut down and boot into the new firmware version simultaneously. EVS migration between revisions does not work. Often NVRAM from one version is unreadable in the other version, which requires file systems to be fully unmounted in one version before they can be mounted in the other version. [1] Due to defect 58192, upgrades from versions 8.0 through 8.2.2312.08 must go to 8.1.2312.09 or 8.1.2350.22 before going to a higher version. [2] Due to defect 66378, upgrades from 8.1.2350.22 (or earlier 8.X builds) require careful EVS migration between nodes during rolling upgrades to 8.1.2350.22, and again from here to any higher version. See release notes for detailed instructions. [3] Due to defect 66551, file systems will not mount without intervention in the event of a failover from a node running 10.2 to a node running 10.0, so a cluster should not be left with a node on each level any longer than necessary for the upgrade process.

Page 8-16

HDS Confidential: For distribution only to authorized parties.

Maintenance Software Version Example from Daily Summary Email

Software Version Example from Daily Summary Email

HDS Confidential: For distribution only to authorized parties.

Page 8-17

Maintenance Saving External SMU Configuration Before Upgrade

Saving External SMU Configuration Before Upgrade

Saving the SMU Configuration Manually:
1. From the Home page, click SMU Administration; then click SMU Backup.
2. Click backup.
3. Choose a location (on your PC) to store/archive the configuration.
4. Click OK.

A copy of that backup is also kept on the SMU. SMU automatic backup runs daily, and the last 14 backups are saved on the SMU.
Important: An internal SMU's backup can only be restored to an internal SMU, and an external backup only to an external SMU.

Page 8-18

HDS Confidential: For distribution only to authorized parties.

Maintenance Saving Embedded SMU and 30x0/4xx0 Server Registry

Saving Embedded SMU and 30x0/4xx0 Server Registry

HDS Confidential: For distribution only to authorized parties.

Page 8-19

Maintenance External SMU SW Upgrade and Downgrade

External SMU SW Upgrade and Downgrade

 Since the 5.0.xxx.xx release (including CentOS 4.4), the external SMU is made ready to be partitioned for a dual-boot concept
 This concept enables easy fallback to an earlier version in case of a problem
 To ensure fallback to an older version, always use:
   second-kvm    — second SMU OS install using KVM
     or
   second-serial — second SMU OS install using serial console

[Diagram: the external SMU disk holds two bootable partitions (for example, version 7.0 and version 8.0); the GRUB loader, via smu-boot-alt-partition, selects which partition to boot.]

Note: Code upgrades from 8.x to 10.x and 10.0 to 10.2 require a clean-kvm or clean-serial upgrade. Fallback means downgrade to the earlier SMU version, and will again be a "clean" process. Configuration backups are essential for both up- and downgrade.

Page 8-20

HDS Confidential: For distribution only to authorized parties.

Maintenance 1a. Selecting CentOS Installation Method Second

1a. Selecting CentOS Installation Method Second

 Keeps the current version and the configuration
 Easy fallback to the previous version

This process only installs the Linux Operating system and makes the other partition ready to host the SMU Application in the next step.

HDS Confidential: For distribution only to authorized parties.

Page 8-21

Maintenance 1b. Selecting CentOS Installation Method Clean

1b. Selecting CentOS Installation Method Clean

 Formats the complete HDD and deletes the configuration
 No fallback to the previous version

This process formats the complete HDD and installs the Linux operating system and makes one partition ready to host the SMU Application in the next step.

Page 8-22

HDS Confidential: For distribution only to authorized parties.

Maintenance 2. External SMU Application Upgrade Procedures

2. External SMU Application Upgrade Procedures

 Connect a serial null-modem cable
• 115200 baud, 8 data bits, no parity, 1 stop bit, no flow control
 Install the update by running the update script
• Log in as root
• Insert the upgrade CD, close the CD/DVD drive, and wait 5 seconds
• From the prompt, mount the CD/DVD by typing: mount /media/cdrecorder or mount /media/cdrom
• Then type: /media/cdrecorder/autorun or /media/cdrom/autorun to start the upgrade process
• Upon completion, the system will reboot automatically

This process installs the HNAS SMU application, using the CD/DVD player built into the external SMU.
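A minimal sketch of the console session described above (the mount point depends on the drive name, as noted in the steps):

login: root
# mount /media/cdrecorder        (or: mount /media/cdrom)
# /media/cdrecorder/autorun      (starts the upgrade; the SMU reboots on completion)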

HDS Confidential: For distribution only to authorized parties.

Page 8-23

Maintenance Embedded SMU Upgrade and Downgrade 30x0/4xx0

Embedded SMU Upgrade and Downgrade 30x0/4xx0

 Embedded SMU:
• To transfer the updated files, select your preferred method:
  ▪ Connect a DVD/CD player with media to the node
  ▪ SCP the ISO image to the node and mount the ISO file
  ▪ Connect a USB memory stick and mount the ISO file
  ▪ Transfer the packages over HTTP using the GUI (avoid Wi-Fi!)
• Install and upgrade the same as the external SMU
• An uninstall process is available for the embedded SMU only
• Downgrade of the embedded SMU is done by uninstalling, followed by reinstalling
 Linux upgrade/downgrade:
• Will be automated
• Linux patching has until now fixed known issues
 The embedded SMU program will be uninstalled on nodes assembled in 2013 and later

Page 8-24

HDS Confidential: For distribution only to authorized parties.

Maintenance Upgrade of Embedded SMU SW from the GUI

Upgrade of Embedded SMU SW from the GUI

Browse to the ISO image file on your client computer, then start the upgrade process.

This upgrade procedure using HTTP to upload an ISO image should only be used for embedded SMU upgrade. The external SMU should NOT be upgraded using this method.

HDS Confidential: For distribution only to authorized parties.

Page 8-25

Maintenance Model 30x0 and 4xx0 Server Upgrade Procedures

Model 30x0 and 4xx0 Server Upgrade Procedures
1. Under Server Settings, click Upgrade Firmware.
2. Select the managed server.
3. Specify the location of the firmware files and click Apply.

You have an option to pre-stage the firmware without rebooting.

The file format for Hitachi NAS 3080 and 3090 must be in tar format.

Page 8-26

HDS Confidential: For distribution only to authorized parties.

Maintenance Hitachi Command Suite (HCS) and Device Manager

Hitachi Command Suite (HCS) and Device Manager

HDS Confidential: For distribution only to authorized parties.

Page 8-27

Maintenance Hitachi Command Suite (HCS) 7.3.0

Hitachi Command Suite (HCS) 7.3.0

HCS version 7.3.x supports link and launch, calling the appropriate page in the SMU web GUI.

Page 8-28

HDS Confidential: For distribution only to authorized parties.

Maintenance Hitachi Command Suite (HCS) Version 7.4 and up

Hitachi Command Suite (HCS) Version 7.4 and up

Over time, with newer releases, more and more functions will be executed as CLI commands in the background, making it transparent to the user how the task is executed.

HDS Confidential: For distribution only to authorized parties.

Page 8-29

Maintenance SNMP Manager Connectivity (First SNMP Hi-Track)

SNMP Manager Connectivity (First SNMP Hi-Track)

[Diagram: an SNMP manager can reach the SNMP agents on the HNAS 4xx0 admin EVSs over (1) the private management network, (2) the public management network, or (3) the data network.]

The first implementation of Hi-Track used the Hi-Track Monitor as an HDS-programmed SNMP manager server. This method has been superseded by Hi-Track using the SMU CLI, logging into the SMU.

Questions You/Your Customer Need to Answer
Scenario 1, using the private (red eth1) network:
 Will you allow SNMP Manager Monitor on this network?
Scenario 2, using the customer-facing management (blue eth0) network:
 Do you have connectivity for both eth0 connectors?
 Have you configured an AVN IP address on both eth1 and eth0?
Scenario 3, using the customer-facing data (green ag1-8) network:
 Does the customer allow monitoring/SNMP traffic on his file services data network?
 Have you configured an AVN IP address on both eth1 and the AG?

Page 8-30

HDS Confidential: For distribution only to authorized parties.

Maintenance SNMP Agent Configuration on the Hitachi NAS Node

SNMP Agent Configuration on the Hitachi NAS Node

Adding a community called “public” as RO (Read Only) is all that is required to configure the NAS node, so the SNMP Manager can get information from the SNMP agent. Most often customers define the community to be used.
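Once the community is configured, any SNMP manager can query the agent. A hypothetical check with the standard net-snmp tools, where 192.0.2.15 stands in for an admin EVS IP address from the earlier diagrams:

snmpwalk -v 2c -c public 192.0.2.15 system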

HDS Confidential: For distribution only to authorized parties.

Page 8-31

Maintenance Hi-Track Monitor SNMP Configuration

Hi-Track Monitor SNMP Configuration

 Configuring the SNMP Hi-Track Monitor is done in the same way as for FC switches and NetApp NAS Gateways.
 Type in the serial number correctly, since it is not interrogated from the management information base (MIB) file in the SNMP agent.
 The IP address is an administrative EVS IP address addressable through the customer's network and aggregates.
 SNMP Access ID reflects the public RO community defined before in the Hitachi NAS node.
 This method is still supported, but the install base is rapidly migrating to the new SMU CLI method, which offers a lot more detailed information and capabilities.

Page 8-32

HDS Confidential: For distribution only to authorized parties.

Maintenance Monitoring Devices

Monitoring Devices

HDS Confidential: For distribution only to authorized parties.

Page 8-33

Maintenance Hi-Track Monitor Version 5.7 and Up

Hi-Track Monitor Version 5.7 and Up

 From Hi-Track Monitor version 5.7 and up, a new monitoring method has been introduced
 The Hi-Track Monitor logs into the SMU using SSH and the manager account
 Will monitor all entities managed by the SMU
 The remote user account can be customized
 Only the SMU IP address needs to be registered
 Hitachi NAS (HNAS) server accounts will automatically be registered
 Issues commands against the admin EVS, such as: diagshowall and eventlog-show
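In effect, Hi-Track automates what an engineer would do by hand; a hypothetical session using the account and command names listed above:

$ ssh manager@<SMU_IP>
lab1-1:$ eventlog-show
lab1-1:$ diagshowall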

Page 8-34

HDS Confidential: For distribution only to authorized parties.

Maintenance Connectivity of Hi-Track Monitor SSH to SMU

Connectivity of Hi-Track Monitor SSH to SMU

[Diagram: the Hi-Track Monitor connects by SSH to the SMU, which manages the HNAS 4xx0 nodes through their admin EVSs; the monitor can sit on (1) the private management network or (2) the public management network.]

Questions You/Your Customer Need to Answer
Scenario 1, using the private (red eth1) network:
 Will you allow Hi-Track Monitor on this network?
 Does the Hi-Track server have a second NIC card for Hi-Track DB connectivity?
 Where do you monitor, as an example, the Modular Storage product family?
Scenario 2, using the customer-facing management (blue eth0) network:
 Do you have connectivity for both eth0 connectors?
 Have you configured an AVN IP address on both eth1 and eth0?
 Are the DF products monitored in this network?
 Do you need a second NIC for the Hi-Track DB connection?

HDS Confidential: For distribution only to authorized parties.

Page 8-35

Maintenance Hi-Track Monitor using SMU

Hi-Track Monitor using SMU

Page 8-36

HDS Confidential: For distribution only to authorized parties.

Maintenance Monitoring Devices

Monitoring Devices

HDS Confidential: For distribution only to authorized parties.

Page 8-37

Maintenance Detail Status of the SMU

Detail Status of the SMU

Page 8-38

HDS Confidential: For distribution only to authorized parties.

Maintenance Detail Status of the Cluster

Detail Status of the Cluster

HDS Confidential: For distribution only to authorized parties.

Page 8-39

Maintenance Hi-Track Graphical Configuration Output

Hi-Track Graphical Configuration Output

Page 8-40

HDS Confidential: For distribution only to authorized parties.

Maintenance Logical View

Logical View

HDS Confidential: For distribution only to authorized parties.

Page 8-41

Maintenance Best Practices Check

Best Practices Check

Page 8-42

HDS Confidential: For distribution only to authorized parties.

Maintenance Module Summary

Module Summary  In this module, you have learned to: • Differentiate the IP addresses used to identify different components and functions in the Hitachi NAS Platform • List the different management facilities • Recognize the naming and versioning convention for the software in System Management Unit (SMU) and node • Follow the upgrade procedures for hardware and software • Install and configure Hi-Track Remote Monitoring system

HDS Confidential: For distribution only to authorized parties.

Page 8-43

Maintenance Module Review

Module Review 1. List the configuration parameters needed on the nodes to enable HiTrack monitoring. 2. List some help functions discussed in the module. 3. Which software upgrades can be performed as “rolling upgrades”? 4. What is a mandatory requirement before starting the software upgrade procedure?

Page 8-44

HDS Confidential: For distribution only to authorized parties.

9. Troubleshooting and Replacement Module Objectives  Upon completion of this module, you should be able to: • Set up the monitoring and reporting tools • Recognize error messages created by reporting tools • Gather necessary information for escalation • Identify the required standard documentation to implement replacement processes • Recognize the importance of electrostatic discharge (ESD) precautions

HDS Confidential: For distribution only to authorized parties.

Page 9-1

Troubleshooting and Replacement Other Hitachi NAS Platform Management Interfaces

Other Hitachi NAS Platform Management Interfaces

 Call-home mechanism:
• SMTP-based mechanism for alerts and monitoring
• Selective notification profiles
• Daily performance data included
 SNMP v1/v2c
 Syslog
 Telnet/SSH/SSC access to NAS Platform nodes (admin EVS), command line interface (CLI)
 Hi-Track Monitor from version 3.8 and up
 Hitachi Device Manager software
 Hitachi Command Suite (HCS)

Page 9-2

HDS Confidential: For distribution only to authorized parties.

Troubleshooting and Replacement Storage Array Setup

Storage Array Setup

 Storage is managed using Hitachi Data Systems native utilities:
• Hitachi Storage Navigator program
• Service Processor (SVP)
• Maintenance PC
• Management PC
• Hitachi Storage Navigator Modular (HSNM and HSNM2)
• Web browser
• Device Manager software
• And others

HDS Confidential: For distribution only to authorized parties.

Page 9-3

Troubleshooting and Replacement Alert SMTP Connectivity

Alert SMTP Connectivity

[Diagram: the admin EVS can send SMTP alerts (1) through the private management network via the SMU (SMU forwarding), (2) directly through the public management network, or (3) through the data network to the customer's SMTP server.]

Questions You/Your Customer Need to Answer:
Scenario 1, using the private (red eth1) network:
 Have you configured the AVN to alert to the SMU IP?
 As the SMU can only use DNS names, is DNS working?
 Have you configured the SMU to relay the SMTP alerts?
 Do you have connectivity to the customer's SMTP server over the blue network?
Scenario 2, using the customer-facing management (blue eth0) network:
 Do you have connectivity for both eth0 connectors?
 Have you configured an AVN IP on both eth1 and eth0?
 Do you have connectivity to the customer's SMTP server over the blue network?
Scenario 3, using the customer-facing data (green ag1-8) network:
 Does the customer allow monitoring/SMTP traffic on his file services data network?
 Have you configured an AVN IP address on both eth1 and the AG?
 Have you configured the AVN to alert to the customer's SMTP server (name or IP address)?

Page 9-4

HDS Confidential: For distribution only to authorized parties.

Troubleshooting and Replacement Configuring SMTP Servers

Configuring SMTP Servers

 Configure a primary and secondary SMTP server
• Use the SMU's private network IP (like 192.0.2.60) as a mail server

HDS Confidential: For distribution only to authorized parties.

Page 9-5

Troubleshooting and Replacement Configuring SMU Email Alerts Forwarding

Configuring SMU Email Alerts Forwarding

• Select SMU email forwarding


Set up Email Forwarding on the SMU

• Insert the name or IP address of the customer's SMTP server
• DNS functionality is essential for SMU email forwarding using names


Set Up Email Profile

1. From the Home page, click Status & Monitoring.
2. Click Email Alerts Setup.
3. Click Add.
4. Give the profile a name.
5. Modify defaults as desired (the defaults are shown above).
6. Create an email text.
7. Add one or more recipients.
8. Click OK.


Daily Health Check Email

This screen shows an example of the Daily Health Check email.


Alerts Summary Email

This screen shows the alerts summary received through email, as requested.


Diagnostic Download

• Download the complete system diagnostics log through the GUI
• The first diagnostic should be executed before troubleshooting starts
• Diagnostic logs may be emailed

• Server diagnostics can be sent from the server's CLI by issuing the following command:

diagemail <email_address>
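For example, to send the diagnostics to a support engineer (the address is a placeholder):

diagemail first.last@example.com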


Diagnostic Report: Email for the Nodes

The screen above shows the diagnostic report for both nodes in a cluster.


Diagnostic Report: Email for SMU and More

The screen above shows the diagnostic reports for the SMU as well as for the FC switch. The storage diagnostics do not currently cover Hitachi Data Systems storage.


Performance Information Report (PIR)

• The PIR provides explicit and granular details on dozens of performance-relevant server statistics
• The SMU GUI can provide a graphical overview

• Custom PIRs may be sent from the server's CLI:

pir [...] [-r <;-separated recipients>] [-s <subject>] [--volume ...] [--cancel]
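A hypothetical invocation using only the -r and -s options shown above; the recipient addresses and subject are placeholders:

pir -r "gsc@example.com;engineer@example.com" -s "PIR baseline before upgrade"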


Performance Graph


Using the trouble Command

lab1-1:$ trouble
……………..truncated…………..

fs-protocols:cifs (on FSA; base priority 200)
Domain Controller 192.168.1.63 on EVS 3:
  Priority 201: Pnode 1 FSA: Unable to contact Domain Controller.
  Problem with DC on local address: 172.20.20.31
  Problem with DC on local address: 172.30.30.31
  Fix problems with CIFS names first, if necessary.
  Check EVS 3's machine account(s) on the Domain Controller(s).
  A machine account should be configured for each of the EVS's CIFS names.
  Use the 'vn 3 cifs-dc prod' command to initiate a Domain Controller reconnect.
  To see: vn 3 cifs-name list
  To see: vn 3 cifs-dc list -v
  To see: vn 3 cifs-dc-errors
[trouble took 1.30 min.]
lab1-1:$


trouble Reporter Examples


trouble Performance Reporter Examples


Server-Based Packet Capturing

• Built-in network capture utility, accessible through the CLI
• Captures on any interface, including multi-port aggregations!
• Protocol- and host-based filtering
• Captures can be sent from the server by email
• WARNING: Not for use on a NAS node in production!
• Usage example:

packet-capture --start --filter "host 10.2.1.1" ag1
packet-capture --stop ag1
nail -n -a tmp -s "My Capture" <email_address>
ssc ssget tmp ~/capture.cap


Fascia (Bezel) Removal


Model 30x0 G1 Fan Replacement Procedure

Please refer to Hitachi NAS Platform Hardware Reference MK-99BA013-

1. Remove the fascia.
2. Identify the fan to be replaced.
3. Fans are labeled on the chassis, numbered 1 to 3.
4. Disconnect the fan lead from its adjacent connector.
5. Remove the upper fan retention bracket.
6. Remove the lower fan retention bracket of the fan that is being replaced.
7. The fan can now be replaced.
8. The new fan must be fitted in the same way, with the arrow indicating the direction of airflow into the server.
9. Secure the fan by fitting the brackets in the reverse order and reconnecting the fan.


Model 30x0 G1 Removing Fan Unit

[Photo: fan unit with callouts for the upper fan retention bracket, the fan power connector lead, and the lower fan retention bracket]


Model 30x0 G2/4xx0 Fan Replacement

Please refer to Hitachi NAS Platform Hardware Reference MK-90BA030- or MK-92HNAS030-

[Photos: fan replacement on the 30x0 G2 and the 4xx0 chassis]


Model 30x0/4xx0 Battery Pack

• Secured in place when the fascia is fitted
  • NiMH: 72 hours backup of NVRAM
• Conditioning
  • Regular conditioning cycle
  • Can run a full conditioning cycle to properly determine the current capacity
• Replacement
  • By removing the fascia
  • Only replace with a pack bearing the same part number
  • Procedure will be supplied
• Lifetime
  • Minimum two years of life
  • Alert generated when replacement is required
  • Shelf life of spares is six months
  • Store packs between 10°C and 25°C for optimal life


General Battery Precautions

• Batteries left in a system that has been improperly powered down will drain beyond usefulness sometime after 72 hours
• If the battery is left connected in an improperly shut down system, the battery must be recharged within 30 days
• If the system is to be powered down for an extended period, run the following CLI command from the server console (see the sketch below): shutdown --ship
• Wait 10-15 seconds, then check that the NVRAM status LED is off
• When the NVRAM status LED is off, the batteries no longer power the NVRAM, and the nodes are shut down correctly for storage and/or shipment
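Putting the documented shutdown steps together, as run from the server console:

shutdown --ship
(wait 10-15 seconds, then confirm that the NVRAM status LED is off)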


Model 30x0 G1 NVRAM Battery Replacement

Please refer to Hitachi NAS Platform Hardware Reference MK-99BA013-

Battery Replacement Procedure
• Remove the fascia
• Disconnect the battery lead from its adjacent connector
• The battery can now be replaced. The new battery must be fitted in the same orientation, with the lead exiting the back face of the pack at the bottom
• Reconnect the new battery
• Battery replacement should be done as quickly as possible and only when the new pack is at hand. NVRAM is not battery backed while the battery is disconnected (the High-performance NAS Platform has two batteries, one in each PSU)

When the server is powered down following a clean shutdown and NVRAM is not in battery backup mode, the battery will still self-discharge at approximately 1% per day. If the NVRAM is still in battery backup, as indicated by the flashing NVRAM LED, then the battery can be manually isolated using the reset button (see the reset button description). Battery packs have a shelf life of up to 6 months** before conditioning is recommended. Conditioning tests the pack and maintains optimal capacity. When fully charged, the battery can be left fitted in a server in storage for a maximum of 6 months**. Battery conditioning equipment will be made available at key service sites and does not require Hitachi NAS Platform hardware.

** Life testing is ongoing to determine if these limits can be increased.


Model 30x0 G1 Battery Connector

• Remember to disconnect the battery lead, releasing the latch on the left side of the connector


Model 30x0 G2/4xx0 Battery Replacement

Please refer to Hitachi NAS Platform Hardware Reference MK-90BA030- or MK-92HNAS030-

[Photos: battery replacement on the 30x0 G2 and the 4xx0 chassis]


Battery Replacement in Caddy

The spare battery for the G1 version is stocked in the G2 form, in a caddy. This means that for a G1 server, the battery needs to be removed from the caddy before fitting. The battery supplied in the caddy is compatible with both G1 and G2 generations.


Model 30x0 G1 Hard Disk Replacement Procedure

1. Shut down the server and disconnect power to both PSUs.
2. Remove the fascia and one or both fans. Disconnection of drives is easier with both fans 1 and 2 removed.
3. Identify the drive to be replaced. Drives are labeled on the chassis and named A and B as shown. Replace one drive only.
4. Disconnect the power connector and SATA cable from the drive. Do NOT disconnect the SATA cable from the motherboard.
5. Undo the thumbscrew on the drive carrier and slide the carrier out.
6. If the replacement drive is not already fitted to a carrier, then remove the four screws fixing the faulty drive to the carrier and fit the new drive into the carrier in the same orientation. Re-fit the carrier by locating it in the lugs and tighten the thumbscrew.
7. Reconnect the drive power and SATA cable.
8. Replace the fans and fascia.
9. The system will configure the new drive on reboot; however, user interaction is required to run the appropriate script.

The hard drive is mounted in the carrier using four “Torx” fixing screws. Use Torx screwdriver T10.


Model 30x0 G1 Hard Disk Cabling and Positioning

Do not borrow an HDD from another node as a spare part. The HDD must be new and blank, from the spares warehouse. Otherwise the procedures will not work, and there is a severe risk of booting an incorrect image.


Model 30x0 G2/4xx0 Hard Disk Replacement

Please refer to Hitachi NAS Platform Hardware Reference MK-90BA030- or MK-92HNAS030-

[Photos: hard disk replacement on the 30x0 G2 and the 4xx0 chassis]


Hardware Field System Testing

• Manufacturing Test and Diagnostic Software (MTDS)


Manufacturing Test and Diagnostic Software (MTDS)

• MTDS is primarily used for testing many different parts of the Mercury server hardware during production
• The MTDS field test runs a test list designed for testing the hardware in the field and assessing if the hardware is OK
• The MTDS field test runs approximately 100 different hardware tests aimed at testing the Mercury FPGA board, but also performs tests on the HDDs, chassis fans, PSUs, etc.

Note: The MTDS field test does not perform a memory test on the MMB memory. To test the MMB memory in the field you need to run memtest86+


MTDS Console

• Connect a KVM or console RS-232 connection
  • Null modem cable
  • Terminal server connected to console
• The Mercury server must be stopped before running the mtds command


MTDS Test Commands

• Available commands are:

battery-test, bring-up-test, cpu-cmos-test, cpu-dmi-test, cpu-mem-test, cpu-sensors-test, data-sizes, debug-test, dimm-qual-test, dvt, emc-test, ess-test, eth-switch-test, fan-fru-test, fan-test, fc-port-test, field-test, fpga-prog-test, fpga-ram-test, fpga-sdram-conf, ge-port-test, glue-logic-test, hdd-test, i2c-test, inter-fpga-test, led-test, manufacture-test, mbi-flash-test, mcp-test, mfb-test, monitors, nvram-test, pcie-test, post, pre-test, psu-fru-test, psu-test, rom-test, seeprom-test, stress-test, system-config-check, tg-port-test, thermal-test, versions, voltage-test


Executing: mtds field-test

/opt/mercury-mtds/bin/mtds field-test

adm46:/home/manager# /opt/mercury-mtds/bin/mtds field-test
Version : 8.1.2351.09
Directory : /home/builder/vampire/2351.09/main/bin/x86_64_linux-bart_libc2.7_release
Build date : Aug 5 2011, 10:05:52
Log file path: /var/opt/mtds/log/B1038029_log/
chassis-monitor process detected; PID = 3487
Are you sure you want to stop the Mercury server? (y/n) y
Successfully requested mfb.elf reset, waiting for exit. If it takes longer than 300 seconds to exit it will be killed....mfb exited successfully after 7 seconds.
Waiting for stop script to complete
Wait 10000mS......Stopped
Test list settings: Continue-on-fail
Stress phase time: 5 minutes
2012-03-22 17:32:14 Executing list of 108 tests for 1 cycles
(Some additional tests may be run if all tests pass)
+ Test 2002: pcie-test mbi-pci-check ...............................Passed
+ Test 2001: pcie-test mbi-scratch .................................Passed
+ Test 2201: glue-logic-test glue-register-test ....................Passed
+ Test 2210: glue-logic-test check-failover-interface ..............Passed
truncated.....................................


Ending: mtds field-test

……….truncated
Testing complete : 2012-03-22 17:43:49
Overall result : PASSED
Tests run : 114, Passed: 114
Test success rate : 100.00%
----------------------------------------------------------------
Re-starting monitors [====================]
Setting kernel variables (/etc/sysctl.conf)...done.
Setting kernel variables (/etc/sysctl.d/mercury-platform.conf)...done.
Reloading internet superserver configuration: xinetd.
Motherboard is Tyan S5211
Chassis monitor daemon started
Chassis monitor is now running as expected, PID = 11168
Checking the chassis drive configuration is valid and fault tolerant
Starting checks
Checks complete
No changes were made to the configuration.
RAID monitor daemon started: Version: 8.1.2351.09
MMB monitor shared memory initialized. CPU frequency states: 2
MMB monitor daemon initialized. Poll interval=60
Monitors re-started
Elapsed: 11m34.331s
adm46:/home/manager# RAID monitor daemon initialized. Poll interval=60 File=/proc/mdstat
adm46:/home/manager#


Mercury Motherboard Memory Test Memtest86+

• The BlueArc customized version of memtest86+ is available on all HNAS 30x0 installed with SU 7.0 or later from the factory
• Connect a KVM to the server, reboot the server, 'break into' the GRUB menu (by hitting a key during the GRUB loader) and select the "MEMTEST" option
• This version of memtest allows a repeat count to be specified on the kernel line in the GRUB menu (see the example below). If a repeat count hasn't been specified, it defaults to -1
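A sketch of a GRUB kernel line with a repeat count appended; the image path is illustrative and the trailing 3 is the repeat count described above:

kernel /boot/memtest86+.bin 3

If no count is appended, the default of -1 applies.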


Unrecoverable Configuration or Logical Errors

• HNAS configuration
• Linux configuration


Factory Reset to Default Assessment

• Should NOT be the first tool to use when having configuration issues
• Can be used as the final recovery tool for:
  • Uncorrectable configuration issues
  • Linux corruption
  • Node boot problems
  • Bringing nodes back to factory defaults
  • Rebuilding the production partition for reinstallation
• The tool will not fix grub boot loader issues
• The recovery partitions need to be intact
• Should NOT be used when one of the HDDs is in error


Fixing Logical Errors

• Read and understand FE-90BA022-xx “Resetting Servers to Factory Defaults” before execution
• Always open a case with GSC and ask for supervision
• Cases are essential for tracking, statistics, and quality improvement
• Let GSC advise which version is appropriate in your case
• There will only be one image for every major release
• After recovery, a firmware upgrade might be required
• Remember the hwdb parameter (if you do not want the node to get a new MAC ID!)
• Unmount the memory stick before reboot
• Have a good connection with the man with the long beard above the clouds


Resetting Servers to Factory Defaults

• Request USB recovery files for the build version on your system
• The /var, /opt and the / (NOT /root only!) partitions will be overwritten
• Locate a USB memory stick (4GB minimum)
• Create a USB memory stick with the recovery files (careful using WinZip!)
• Boot into the “Mercury Recovery” partition using the “grub” menu
• Check that both /dev/sda and /dev/sdb are accessible
• Mount the USB stick (/dev/sdc1) to /mnt
• Run (see the session sketch below):
  • /mnt/mercury-reinstall-main-partitions --preserve-hwdb
  • reboot
  • nas-preconfig
  • reboot
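Putting the documented steps together, a minimal recovery session sketch (device and script names as listed above; run only under GSC supervision):

mount /dev/sdc1 /mnt
/mnt/mercury-reinstall-main-partitions --preserve-hwdb
umount /mnt
reboot
(after the node comes back up)
nas-preconfig
reboot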


HNAS Server Node Replacement

• Only the 3090, 4100, and 4060 models will be stocked as spare parts
• If a 3080 replacement is required, a USB conversion tool is required
• Conversion tools are tracked and are required to be returned after use!
• Replacing a 4080 with a 4060 in a cluster will turn the 4060 into a 4080
• The G1 model and G2 model have different spare part numbers
• If the node to be replaced is in a single node configuration, a complete set of license keys is required for the new node
• FPGA package replacement in a G2 model will not change the MAC ID
• Replacing a node in a cluster does not require new license keys
• The “Hitachi NAS Platform Server Replacement Procedures” will list all the steps needed for a successful node replacement

Things to consider:
• G1 or G2 model
• The G2 chassis does not include the MFB package
• The MFB package does not fit into the G1 model
• A new MAC ID in a single node configuration means new licenses
• New network MAC ID
• Change ownership of storage pools for a single node
• WWN zoning
• LUN security
• Conversion tool (3080)
• Planning


Spare Part List Model 30x0

• Spare part lists for model 4xx0 will be available after GA
• Link to HNAS logistics: http://logistics.hds.com/Spares/main_BLU.htm

Stay updated with the latest spare part list under Logistics Global: http://logistics.hds.com/Spares/main_BLU.htm


Spare Part List SMU, Switches, and Optics


General Precautions

• Proper ESD precautions should be used any time you work on the node system
• Proper ventilation and cooling of all components relies on the system being “intact”
• Lifting:
  • All of the node components, especially drive enclosures, are extremely heavy and require two people to lift them


Module Summary

In this module, you have learned to:
• Set up the monitoring and reporting tools
• Recognize error messages created by reporting tools
• Gather necessary information for escalation
• Identify the required standard documentation to implement replacement processes
• Recognize the importance of electrostatic discharge (ESD) precautions


Module Review

1. List some error reporting tools supported by the Hitachi NAS Platform.
2. How is Hitachi storage monitored and managed?
3. Which email accounts can receive email notifications?
4. Which requirements do we have to specify for the customer to get email alerting to work?
5. Can network traffic be monitored on a specific aggregate without externally connected network analyzers?


Your Next Steps

Certification: http://www.hds.com/services/education/certification
Learning Center: http://learningcenter.hds.com
White Papers: http://www.hds.com/corporate/resources/


Learning Paths:
APAC: http://www.hds.com/services/education/apac/?_p=v#GlobalTabNavi
Americas: http://www.hds.com/services/education/northamerica/?tab=LocationContent1#GlobalTabNavi
EMEA: http://www.hds.com/services/education/emea/#GlobalTabNavi
HDS Community: http://community.hds.com ― Open to all customers, partners, prospects, and internals
theLoop: http://loop.hds.com/message/18879#18879 ― HDS internal only
LinkedIn: http://www.linkedin.com/groups?home=&gid=3044480&trk=anet_ug_hm&goback=%2Emyg%2Eanb_3044480_*2
Twitter: http://twitter.com/#!/HDSAcademy


Communicating in a Virtual Classroom — Tools and Features

Virtual Classroom Basics

Overview of Communicating in a Virtual Classroom
• Chat
• Q&A
• Feedback Options
  • Raise Hand
  • Yes/No
  • Emoticons
• Markup Tools
  • Drawing Tools
  • Text Tool


Reminders: Intercall Call-Back Teleconferencing


Feedback Features — Try Them

[Screenshot of the feedback controls: Raise Hand, Yes, No, Emoticons]


Markup Tools (Drawing and Text) — Try Them

[Screenshot of the markup toolbar: Pointer, Text Writing Tool, Drawing Tools, Highlighter, Annotation Colors, Eraser]

Transferring Your Audio to Virtual Breakout Rooms

• Automatic
  • With the Intercall / WebEx Teleconference Call-Back Feature
• Otherwise
  • To transfer your audio from the Main Room to a virtual Breakout Room:
    1. Enter *9
    2. You will hear a recording; follow the instructions
    3. Enter your assigned Breakout Room number followed by #
       For example, *9 1# (Breakout Room #1)
  • To return your audio to the Main Room, enter *9


Intercall (WebEx) Technical Support

• 800.374.1852


WebEx Hands-On Labs

WebEx Hands-On Lab Operations
• From the session, the Instructor starts the Hands-On remote lab
• The Instructor assigns lab teams (lab teams are assigned to a computer)
• Learners are prompted to connect to their lab computer
  • Click Yes
• After connecting to the lab computer, learners see a message asking them to disconnect and connect to the new teleconference
  • Click Yes

You do not need to hang up and dial a new number; Intercall auto-connects you to the lab conference.


• The Instructor can join each lab team's conference.
• Members of a lab group can communicate:
  • With each other, using CHAT and telephone (lower right-hand corner of the computer screen)
  • With the Instructor, using the Raise Hand feature
• Only one learner is in control of the lab desktop at any one time.
  • To pass control, select the learner's name and click the Presenter Ball


Training Course Glossary

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A—

AaaS — Archive as a Service. A cloud computing business model.
AAMux — Active-Active Multiplexer.
ACC — Action Code. A SIM (System Information Message).
ACE — Access Control Entry. Stores access rights for a single user or group within the Windows security model.
ACL — Access Control List. Stores a set of ACEs, so that it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP ― Array Control Processor. Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end; it controls data transfer between cache and the hard drives.
ACP Domain ― Also Array Domain. All of the array-groups controlled by the same pair of DKA boards, or the HDDs managed by 1 ACP PAIR (also called BED).
ACP PAIR ― Physical disk access control logic. Each ACP consists of 2 DKA PCBs to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory.
ADC — Accelerated Data Copy.
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
ADP — Adapter.
ADS — Active Directory Service.
AIX — IBM UNIX.
AL — Arbitrated Loop. A network in which nodes contend to send data, and only 1 node at a time is able to send data.
AL-PA — Arbitrated Loop Physical Address.
AMS — Adaptable Modular Storage.
APAR — Authorized Program Analysis Reports.
APF — Authorized Program Facility. In IBM z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
API — Application Programming Interface.
APID — Application Identification. An ID to identify a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or request.
ARM — Automated Restart Manager.
Array Domain — Also ACP Domain. All functions, paths, and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI or LU configurations.
Array Group — Also called a parity group. A group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Array Unit — A group of hard disk drives in 1 RAID structure. Same as parity group.
ASIC — Application specific integrated circuit.
ASSY — Assembly.
Asymmetric virtualization — See Out-of-band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress. Also called Out-of-band virtualization.
ATA — Advanced Technology Attachment. A disk drive implementation that integrates the controller on the disk drive itself. Also known as IDE (Integrated Drive Electronics) Advanced Technology Attachment.
ATR — Autonomic Technology Refresh.
Authentication — The process of identifying an individual, usually based on a username and password.
AUX — Auxiliary Storage Manager.
Availability — Consistent direct access to information over time.
-back to top-

—B—
B4 — A group of 4 HDU boxes that are used to contain 128 HDDs.
BA — Business analyst.
Back end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
Backup image — Data saved during an archive operation. It includes all the associated files, directories, and catalog information of the backup operation.
BADM — Basic Direct Access Method.
BASM — Basic Sequential Access Method.
BATCTR — Battery Control PCB.
BC — (1) Business Class (in contrast with EC, Enterprise Class). (2) Business coordinator.
BCP — Base Control Program.
BCPii — Base Control Program internal interface.
BDW — Block Descriptor Word.
BED — Back end director. Controls the paths to the HDDs.
Big Data — Refers to data that becomes so large in size or quantity that a dataset becomes awkward to work with using traditional database management systems. Big data entails data capacity or measurement that requires terms such as Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) or Yottabyte (YB). Note that variations of this term are subject to proprietary trademark disputes in multiple countries at the present time.
BIOS — Basic Input/Output System. A chip located on all computer motherboards that governs how a system boots and operates.
BLKSIZE — Block size.
BLOB — Binary Large OBject.
BP — Business processing.
BPaaS — Business Process as a Service. A cloud computing business model.
BPAM — Basic Partitioned Access Method.
BPM — Business Process Management.
BPO — Business Process Outsourcing. Dynamic BPO services refer to the management of partly standardized business processes, including human resources delivered in a pay-per-use billing relationship or a self-service consumption model.
BST — Binary Search Tree.
BSTP — Blade Server Test Program.
BTU — British Thermal Unit.
Business Continuity Plan — Describes how an organization will resume partially or completely interrupted critical functions within a predetermined time after a disruption or a disaster. Sometimes also called a Disaster Recovery Plan.
-back to top-

—C—
CA — (1) Continuous Access software (see HORC), (2) Continuous Availability or (3) Computer Associates.
Cache — Cache Memory. Intermediate buffer between the channels and drives. It is generally available and controlled as two areas of cache (cache A and cache B). It may be battery-backed.
Cache hit rate — When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate.
Cache partitioning — Storage management software that allows the virtual partitioning of cache and allocation of it to different applications.
CAD — Computer-Aided Design.


CAGR — Compound Annual Growth Rate.
Capacity — Capacity is the amount of data that a storage system or drive can store after configuration and/or formatting. Most data storage companies, including HDS, calculate capacity based on the premise that 1KB = 1,024 bytes, 1MB = 1,024 kilobytes, 1GB = 1,024 megabytes, and 1TB = 1,024 gigabytes. See also Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) and Yottabyte (YB).
CAPEX — Capital expenditure; the cost of developing or providing non-consumable parts for the product or system. For example, the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. (See OPEX).
CAS — (1) Column Address Strobe. A signal sent to a dynamic random access memory (DRAM) that tells it that an associated address is a column address. CAS-column address strobe sent by the processor to a DRAM circuit to activate a column address. (2) Content-addressable Storage.
CBI — Cloud-based Integration. Provisioning of a standardized middleware platform in the cloud that can be used for various cloud integration scenarios. An example would be the integration of legacy applications into the cloud or integration of different cloud-based applications into one application.
CBU — Capacity Backup.
CBX — Controller chassis (box).
CCHH — Common designation for Cylinder and Head.
CCI — Command Control Interface.
CCIF — Cloud Computing Interoperability Forum. A standards organization active in cloud computing.
CDP — Continuous Data Protection.
CDR — Clinical Data Repository.
CDWP — Cumulative disk write throughput.
CE — Customer Engineer.
CEC — Central Electronics Complex.
CentOS — Community Enterprise Operating System.
Centralized management — Storage data management, capacity management, access security management, and path management functions accomplished by software.
CF — Coupling Facility.
CFCC — Coupling Facility Control Code.
CFW — Cache Fast Write.
CH — Channel.
CH S — Channel SCSI.
CHA — Channel Adapter. Provides the channel interface control functions and internal cache data transfer functions. It is used to convert the data format between CKD and FBA. The CHA contains an internal processor and 128 bytes of edit buffer memory. Replaced by CHB in some cases.
CHA/DKA — Channel Adapter/Disk Adapter.
CHAP — Challenge-Handshake Authentication Protocol.
CHB — Channel Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
Chargeback — A cloud computing term that refers to the ability to report on capacity and utilization by application or dataset, charging business users or departments based on how much they use.
CHF — Channel Fibre.
CHIP — Client-Host Interface Processor. Microprocessors on the CHA boards that process the channel commands from the hosts and manage host access to cache.
CHK — Check.
CHN — Channel adapter NAS.
CHP — Channel Processor or Channel Path.
CHPID — Channel Path Identifier.
CHSN or C-HSN — Cache Memory Hierarchical Star Network.
CHT — Channel tachyon. A Fibre Channel protocol controller.
CICS — Customer Information Control System.
CIFS protocol — Common internet file system is a platform-independent file sharing system. A network file system access protocol primarily used by Windows clients to communicate file access requests to Windows servers.


CIM — Common Information Model.
CIS — Clinical Information System.
CKD ― Count-key Data. A format for encoding data on hard disk drives; typically used in the mainframe environment.

CKPT — Check Point.
CL — See Cluster.
CLI — Command Line Interface.
CLPR — Cache Logical Partition. Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cloud Computing — “Cloud computing refers to applications and services that run on a distributed network using virtualized resources and accessed by common Internet protocols and networking standards. It is distinguished by the notion that resources are virtual and limitless, and that details of the physical systems on which software runs are abstracted from the user.” — Source: Cloud Computing Bible, Barrie Sosinsky (2011). Cloud computing often entails an “as a service” business model that may entail one or more of the following:
• Archive as a Service (AaaS)
• Business Process as a Service (BPaas)
• Failure as a Service (FaaS)
• Infrastructure as a Service (IaaS)
• IT as a Service (ITaaS)
• Platform as a Service (PaaS)
• Private File Tiering as a Service (PFTaas)
• Software as a Service (Saas)
• SharePoint as a Service (SPaas)
• SPI refers to the Software, Platform and Infrastructure as a Service business model.
Cloud network types include the following:
• Community cloud (or community network cloud)
• Hybrid cloud (or hybrid network cloud)
• Private cloud (or private network cloud)
• Public cloud (or public network cloud)
• Virtual private cloud (or virtual private network cloud)
Cloud Enabler — a concept, product or solution that enables the deployment of cloud computing. Key cloud enablers include:
• Data discoverability
• Data mobility
• Data protection
• Dynamic provisioning
• Location independence
• Multitenancy to ensure secure privacy
• Virtualization
Cloud Fundamental — A core requirement to the deployment of cloud computing. Cloud fundamentals include:
• Self service
• Pay per use
• Dynamic scale up and scale down
Cloud Security Alliance — A standards organization active in cloud computing.
Cluster — A collection of computers that are interconnected (typically at high-speeds) for the purpose of improving reliability, availability, serviceability or performance (via load balancing). Often, clustered computers have access to a common pool of storage and run special software to coordinate the component computers' activities.
CM ― Cache Memory, Cache Memory Module. Intermediate buffer between the channels and drives. It has a maximum of 64GB (32GB x 2 areas) of capacity. It is available and controlled as 2 areas of cache (cache A and cache B). It is fully battery-backed (48 hours).
CM DIR — Cache Memory Directory.
CM-HSN — Control Memory Hierarchical Star Network.
CM PATH ― Cache Memory Access Path. Access Path from the processors of CHA, DKA PCB to Cache Memory.
CM PK — Cache Memory Package.
CM/SM — Cache Memory/Shared Memory.
CMA — Cache Memory Adapter.
CMD — Command.
CME — Communications Media and Entertainment.
CMG — Cache Memory Group.
CNAME — Canonical NAME.


CNS — Cluster Name Space or Clustered Name Space.
CNT — Cumulative network throughput.
CoD — Capacity on Demand.
Community Network Cloud — Infrastructure shared between several organizations or groups with common concerns.
Concatenation — A logical joining of 2 series of data, usually represented by the symbol “|”. In data communications, 2 or more data are often concatenated to provide a unique name or reference (e.g., S_ID | X_ID). Volume managers concatenate disk address spaces to present a single larger address space.
Connectivity technology — A program or device's ability to link with other programs and devices. Connectivity technology allows programs on a given computer to run routines or access objects on another remote computer.
Controller — A device that controls the transfer of data from a computer to a peripheral device (including a storage system) and vice versa.
Controller-based virtualization — Driven by the physical controller at the hardware microcode level versus at the application software layer and integrates into the infrastructure to allow virtualization across heterogeneous storage and third party products.
Corporate governance — Organizational compliance with government-mandated regulations.
CP — Central Processor (also called Processing Unit or PU).
CPC — Central Processor Complex.
CPM — Cache Partition Manager. Allows for partitioning of the cache and assigns a partition to a LU; this enables tuning of the system’s performance.
CPOE — Computerized Physician Order Entry (Provider Ordered Entry).
CPS — Cache Port Slave.
CPU — Central Processing Unit.
CRM — Customer Relationship Management.
CSS — Channel Subsystem.
CS&S — Customer Service and Support.
CSTOR — Central Storage or Processor Main Memory.
C-Suite — The C-suite is considered the most important and influential group of individuals at a company. Referred to as “the C-Suite within a Healthcare provider.”
CSV — Comma Separated Value or Cluster Shared Volume.
CSVP — Customer-specific Value Proposition.
CSW ― Cache Switch PCB. The cache switch (CSW) connects the channel adapter or disk adapter to the cache. Each of them is connected to the cache by the Cache Memory Hierarchical Star Net (C-HSN) method. Each cluster is provided with the 2 CSWs, and each CSW can connect 4 caches. The CSW switches any of the cache paths to which the channel adapter or disk adapter is to be connected through arbitration.
CTG — Consistency Group.
CTL — Controller module.
CTN — Coordinated Timing Network.
CU — Control Unit (refers to a storage subsystem. The hexadecimal number to which 256 LDEVs may be assigned).
CUDG — Control Unit Diagnostics. Internal system tests.
CUoD — Capacity Upgrade on Demand.
CV — Custom Volume.
CVS ― Customizable Volume Size. Software used to create custom volume sizes. Marketed under the name Virtual LVI (VLVI) and Virtual LUN (VLUN).
CWDM — Course Wavelength Division Multiplexing.
CXRC — Coupled z/OS Global Mirror.
-back to top-

—D—
DA — Device Adapter.
DACL — Discretionary access control list (ACL). The part of a security descriptor that stores access rights for users and groups.
DAD — Device Address Domain. Indicates a site of the same device number automation support function. If several hosts on the same site have the same device number system, they have the same name.


DAP — Data Access Path. Also known as Zero Copy Failover (ZCF).
DAS — Direct Attached Storage.
DASD — Direct Access Storage Device.
Data block — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Duplication — Software duplicates data, as in remote copy or PiT snapshots. Maintains 2 copies of data.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from 1 storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pipe or Data Stream — The connection set up between the MediaAgent, source or destination server is called a Data Pipe or more commonly a Data Stream.
Data Pool — A volume containing differential data only.
Data Protection Directive — A major compliance and privacy protection initiative within the European Union (EU) that applies to cloud computing. Includes the Safe Harbor Agreement.
Data Stream — CommVault's patented high performance data mover used to move data back and forth between a data source and a MediaAgent or between 2 MediaAgents.
Data Striping — Disk array data mapping technique in which fixed-length sequences of virtual disk data addresses are mapped to sequences of member disk addresses in a regular rotating pattern.
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also, often called data rate.
DBL — Drive box.
DBMS — Data Base Management System.
DBX — Drive box.
DCA ― Data Cache Adapter.
DCTL — Direct coupled transistor logic.
DDL — Database Definition Language.
DDM — Disk Drive Module.
DDNS — Dynamic DNS.
DDR3 — Double data rate 3.
DE — Data Exchange Software.
Device Management — Processes that configure and manage storage systems.
DFS — Microsoft Distributed File System.
DFSMS — Data Facility Storage Management Subsystem.
DFSM SDM — Data Facility Storage Management Subsystem System Data Mover.
DFSMSdfp — Data Facility Storage Management Subsystem Data Facility Product.
DFSMSdss — Data Facility Storage Management Subsystem Data Set Services.
DFSMShsm — Data Facility Storage Management Subsystem Hierarchical Storage Manager.
DFSMSrmm — Data Facility Storage Management Subsystem Removable Media Manager.
DFSMStvs — Data Facility Storage Management Subsystem Transactional VSAM Services.
DFW — DASD Fast Write.
DICOM — Digital Imaging and Communications in Medicine.
DIMM — Dual In-line Memory Module.
Direct Access Storage Device (DASD) — A type of storage device, in which bits of data are stored at precise locations, enabling the computer to retrieve information directly without having to scan a series of records.


Direct Attached Storage (DAS) — Storage that is directly attached to the application or file server. No other device on the network can access the stored data.
Director class switches — Larger switches often used as the core of large switched fabrics.
Disaster Recovery Plan (DRP) — A plan that describes how an organization will deal with potential disasters. It may include the precautions taken to either maintain or quickly resume mission-critical functions. Sometimes also referred to as a Business Continuity Plan.
Disk Administrator — An administrative tool that displays the actual LU storage configuration.
Disk Array — A linked group of 1 or more physical independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology. A disk array may contain several disk drive trays, and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into Logical Units (LUs), which appear as linear block paces to their clients. A small disk array, with a few disks, might support up to 8 LUs; a large one, with hundreds of disk drives, can support thousands.
DKA ― Disk Adapter. Also called an array control processor (ACP). It provides the control functions for data transfer between drives and cache. The DKA contains DRR (Data Recover and Reconstruct), a parity generator circuit. Replaced by DKB in some cases.
DKB — Disk Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
DKC ― Disk Controller Unit. In a multi-frame configuration, the frame that contains the front end (control and memory components).
DKCMN ― Disk Controller Monitor. Monitors temperature and power status throughout the machine.
DKF ― Fibre disk adapter. Another term for a DKA.
DKU — Disk Array Frame or Disk Unit. In a multi-frame configuration, a frame that contains hard disk units (HDUs).
DKUPS — Disk Unit Power Supply.
DKUP — Disk Unit Power Supply.
DLIBs — Distribution Libraries.
DLM — Data Lifecycle Management.
DMA — Direct Memory Access.
DM-LU — Differential Management Logical Unit. DM-LU is used for saving management information of the copy functions in the cache.
DMP — Disk Master Program.
DMT — Dynamic Mapping Table.
DMTF — Distributed Management Task Force. A standards organization active in cloud computing.
DNS — Domain Name System.
DOC — Deal Operations Center.
Domain — A number of related storage array groups.
DOO — Degraded Operations Objective.
DP — Dynamic Provisioning (pool).
DP-VOL — Dynamic Provisioning Virtual Volume.
DPL — (1) (Dynamic) Data Protection Level or (2) Denied Persons List.
DR — Disaster Recovery.
DRAC — Dell Remote Access Controller.
DRAM — Dynamic random access memory.
DRP — Disaster Recovery Plan.
DRR — Data Recover and Reconstruct. Data Parity Generator chip on DKA.
DRV — Dynamic Reallocation Volume.
DSB — Dynamic Super Block.
DSF — Device Support Facility.
DSF INIT — Device Support Facility Initialization (for DASD).
DSP — Disk Slave Program.
DT — Disaster tolerance.
DTA — Data adapter and path to cache-switches.
DTR — Data Transfer Rate.
DVE — Dynamic Volume Expansion.
DW — Duplex Write.


DWDM — Dense Wavelength Division Multiplexing.

ERP — Enterprise Resource Planning.

DWL — Duplex Write Line or Dynamic Workspace Linking.

ESB — Enterprise Service Bus.

ESA — Enterprise Systems Architecture.

-back to top-

—E—

ESC — Error Source Code. ESD — Enterprise Systems Division (of Hitachi) ESCD — ESCON Director.

EAL — Evaluation Assurance Level (EAL1 through EAL7). The EAL of an IT product or system is a numerical security grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999. EAV — Extended Address Volume. EB — Exabyte. EC — Enterprise Class (in contrast with BC, Business Class). ECC — Error Checking and Correction. ECC.DDR SDRAM — Error Correction Code Double Data Rate Synchronous Dynamic RAM Memory. ECM — Extended Control Memory.

ESCON ― Enterprise Systems Connection. An input/output (I/O) interface for mainframe computer connections to storage devices developed by IBM. ESD — Enterprise Systems Division. ESDS — Entry Sequence Data Set. ESS — Enterprise Storage Server. ESW — Express Switch or E Switch. Also referred to as the Grid Switch (GSW). Ethernet — A local area network (LAN) architecture that supports clients and servers and uses twisted pair cables for connectivity. ETR — External Time Reference (device). EVS — Enterprise Virtual Server.

ECN — Engineering Change Notice.

Exabyte (EB) — A measurement of data or data storage. 1EB = 1,024PB.

E-COPY — Serverless or LAN free backup.

EXCP — Execute Channel Program.

EFI — Extensible Firmware Interface. EFI is a specification that defines a software interface between an operating system and platform firmware. EFI runs on top of BIOS when a LPAR is activated.

ExSA — Extended Serial Adapter.

EHR — Electronic Health Record. EIG — Enterprise Information Governance. EMIF — ESCON Multiple Image Facility. EMPI — Electronic Master Patient Identifier. Also known as MPI. Emulation — In the context of Hitachi Data Systems enterprise storage, emulation is the logical partitioning of an Array Group into logical devices. EMR — Electronic Medical Record. ENC — Enclosure or Enclosure Controller. The units that connect the controllers with the Fibre Channel disks. They also allow for online extending a system by adding RKAs. EOF — End of Field. EOL — End of Life. EPO — Emergency Power Off. EREP — Error REPorting and Printing.

-back to top-

—F— FaaS — Failure as a Service. A proposed business model for cloud computing in which largescale, online failure drills are provided as a service in order to test real cloud deployments. Concept developed by the College of Engineering at the University of California, Berkeley in 2011. Fabric — The hardware that connects workstations and servers to storage devices in a SAN is referred to as a "fabric." The SAN fabric enables any-server-to-any-storage device connectivity through the use of Fibre Channel switching technology. Failback — The restoration of a failed system share of a load to a replacement component. For example, when a failed controller in a redundant configuration is replaced, the devices that were originally controlled by the failed controller are usually failed back to the replacement controller to restore the I/O balance, and to restore failure tolerance.

HDS Confidential: For distribution only to authorized parties.

Similarly, when a defective fan or power supply is replaced, its load, previously borne by a redundant component, can be failed back to the replacement part. Failed over — A mode of operation for failuretolerant systems in which a component has failed and its function has been assumed by a redundant component. A system that protects against single failures operating in failed over mode is not failure tolerant, as failure of the redundant component may render the system unable to function. Some systems (e.g., clusters) are able to tolerate more than 1 failure; these remain failure tolerant until no redundant component is available to protect against further failures. Failover — A backup operation that automatically switches to a standby database server or network if the primary system fails, or is temporarily shut down for servicing. Failover is an important fault tolerance function of mission-critical systems that rely on constant accessibility. Also called path failover. Failure tolerance — The ability of a system to continue to perform its function or at a reduced performance level, when 1 or more of its components has failed. Failure tolerance in disk subsystems is often achieved by including redundant instances of components whose failure would make the system inoperable, coupled with facilities that allow the redundant components to assume the function of failed ones. FAIS — Fabric Application Interface Standard. FAL — File Access Library. FAT — File Allocation Table. Fault Tolerant — Describes a computer system or component designed so that, in the event of a component failure, a backup component or procedure can immediately take its place with no loss of service. Fault tolerance can be provided with software, embedded in hardware or provided by hybrid combination. FBA — Fixed-block Architecture. Physical disk sector mapping. FBA/CKD Conversion — The process of converting open-system data in FBA format to mainframe data in CKD format. FBUS — Fast I/O Bus. FC ― Fibre Channel or Field-Change (microcode update) or Fibre Channel. A technology for

transmitting data between computer devices; a set of standards for a serial I/O bus capable of transferring data between 2 ports. FC RKAJ — Fibre Channel Rack Additional. Module system acronym refers to an additional rack unit that houses additional hard drives exceeding the capacity of the core RK unit. FC-0 ― Lowest layer on fibre channel transport. This layer represents the physical media. FC-1 ― This layer contains the 8b/10b encoding scheme. FC-2 ― This layer handles framing and protocol, frame format, sequence/exchange management and ordered set usage. FC-3 ― This layer contains common services used by multiple N_Ports in a node. FC-4 ― This layer handles standards and profiles for mapping upper level protocols like SCSI an IP onto the Fibre Channel Protocol. FCA ― Fibre Adapter. Fibre interface card. Controls transmission of fibre packets. FC-AL — Fibre Channel Arbitrated Loop. A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers, and is now being standardized by ANSI. FC-AL was designed for new mass storage devices and other peripheral devices that require very high bandwidth. Using optical fiber to connect devices, FC-AL supports full-duplex data transfer rates of 100MBps. FC-AL is compatible with SCSI for high-performance storage systems. FCC — Federal Communications Commission. FCIP — Fibre Channel over IP, a network storage technology that combines the features of Fibre Channel and the Internet Protocol (IP) to connect distributed SANs over large distances. FCIP is considered a tunneling protocol, as it makes a transparent point-topoint connection between geographically separated SANs over IP networks. FCIP relies on TCP/IP services to establish connectivity between remote SANs over LANs, MANs, or WANs. An advantage of FCIP is that it can use TCP/IP as the transport while keeping Fibre Channel fabric services intact.


FCoE — Fibre Channel over Ethernet. An encapsulation of Fibre Channel frames over Ethernet networks.
FCP — Fibre Channel Protocol.
FC-P2P — Fibre Channel Point-to-Point.
FCSE — Flashcopy Space Efficiency.
FC-SW — Fibre Channel Switched.
FCU — File Conversion Utility.
FD — Floppy Disk or Floppy Drive.
FDDI — Fiber Distributed Data Interface.
FDR — Fast Dump/Restore.
FE — Field Engineer.
FED — (Channel) Front End Director.
Fibre Channel — A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. The most prominent Fibre Channel standard is Fibre Channel Arbitrated Loop (FC-AL).
FICON — Fiber Connectivity. A high-speed input/output (I/O) interface for mainframe computer connections to storage devices. As part of IBM's S/390 server, FICON channels increase I/O capacity through the combination of a new architecture and faster physical link rates to make them up to 8 times as efficient as ESCON (Enterprise System Connection), IBM's previous fiber optic channel standard.
FIPP — Fair Information Practice Principles. Guidelines for the collection and use of personal information created by the United States Federal Trade Commission (FTC).
FISMA — Federal Information Security Management Act of 2002. A major compliance and privacy protection law that applies to information systems and cloud computing. Enacted in the United States of America in 2002.
FLGFAN — Front Logic Box Fan Assembly.
FLOGIC Box — Front Logic Box.
FM — Flash Memory. Each microprocessor has FM. FM is non-volatile memory that contains microcode.
FOP — Fibre Optic Processor or fibre open.
FPC — Failure Parts Code or Fibre Channel Protocol Chip.
FPGA — Field Programmable Gate Array.
FQDN — Fully Qualified Domain Name.
Frames — An ordered vector of words that is the basic unit of data transmission in a Fibre Channel network.
Front end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
FRU — Field Replaceable Unit.
FS — File System.
FSA — File System Module-A.
FSB — File System Module-B.
FSI — Financial Services Industries.
FSM — File System Module.
FSW — Fibre Channel Interface Switch PCB. A board that provides the physical interface (cable connectors) between the ACP ports and the disks housed in a given disk drive.
FTP — File Transfer Protocol. A client-server protocol that allows a user on 1 computer to transfer files to and from another computer over a TCP/IP network. (A short example follows this section.)
FWD — Fast Write Differential.
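To make the FTP entry above concrete, the following is a minimal sketch of an FTP session using Python's standard ftplib module. The host name, credentials and file name are placeholders, not values from this course.

    # Minimal FTP session (Python standard library only).
    from ftplib import FTP

    ftp = FTP("ftp.example.com")       # connect to the server (placeholder host)
    ftp.login("user", "password")      # authenticate; login() alone is anonymous
    ftp.cwd("/pub")                    # change to a remote directory
    ftp.retrlines("LIST")              # print a directory listing, line by line
    with open("readme.txt", "wb") as f:
        ftp.retrbinary("RETR readme.txt", f.write)   # download in binary mode
    ftp.quit()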

—G—
GA — General availability.
GARD — General Available Restricted Distribution.
Gb — Gigabit.
GB — Gigabyte.
Gb/sec — Gigabit per second.
GB/sec — Gigabyte per second.
GbE — Gigabit Ethernet.
Gbps — Gigabit per second.
GBps — Gigabyte per second.
GBIC — Gigabit Interface Converter.
GCMI — Global Competitive and Marketing Intelligence (Hitachi).
GDG — Generation Data Group.
GDPS — Geographically Dispersed Parallel Sysplex.
GID — Group Identifier within the UNIX security model.
gigE — Gigabit Ethernet.
GLM — Gigabyte Link Module.
Global Cache — Cache memory is used on demand by multiple applications. Use changes dynamically, as required for READ performance between hosts/applications/LUs.
GPFS — General Parallel File System.
GSC — Global Support Center.
GSI — Global Systems Integrator.
GSS — Global Solution Services.
GSSD — Global Solutions Strategy and Development.
GSW — Grid Switch Adapter. Also known as E Switch (Express Switch).
GUI — Graphical User Interface.
GUID — Globally Unique Identifier.

—H—
H1F — Essentially the floor-mounted disk rack (also called desk side) equivalent of the RK. (See also: RK, RKA, and H2F.)
H2F — Essentially the floor-mounted disk rack (also called desk side) add-on equivalent similar to the RKA. There is a limitation of only 1 H2F that can be added to the core RK floor-mounted unit. (See also: RK, RKA, and H1F.)
HA — High Availability.
HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
HBA — Host Bus Adapter. An I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the 2 channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HCA — Host Channel Adapter.
HCD — Hardware Configuration Definition.
HD — Hard Disk.
HDA — Head Disk Assembly.
HDD — Hard Disk Drive. A spindle of hard disk platters that make up a hard drive, which is a unit of physical storage within a subsystem.
HDDPWR — Hard Disk Drive Power.
HDU — Hard Disk Unit. A number of hard drives (HDDs) grouped together within a subsystem.
Head — See Read/Write Head.
Heterogeneous — The characteristic of containing dissimilar elements. A common use of this word in information technology is to describe a product as able to contain or be part of a “heterogeneous network,” consisting of different manufacturers' products that can interoperate. Heterogeneous networks are made possible by standards-conforming hardware and software interfaces used in common by different products, thus allowing them to communicate with each other. The Internet itself is an example of a heterogeneous network.
HiCAM — Hitachi Computer Products America.
HIPAA — Health Insurance Portability and Accountability Act.
HIS — (1) High Speed Interconnect. (2) Hospital Information System (clinical and financial).
HiStar — Multiple point-to-point data paths to cache.
HL7 — Health Level 7.
HLQ — High-level Qualifier.
HLS — Healthcare and Life Sciences.
HLU — Host Logical Unit.
H-LUN — Host Logical Unit Number. See LUN.
HMC — Hardware Management Console.
Homogeneous — Of the same or similar kind.
Host — Also called a server. Basically a central computer that processes end-user applications or requests.
Host LU — Host Logical Unit. See also HLU.
Host Storage Domains — Allows host pooling at the LUN level, and the priority access feature lets the administrator set service levels for applications.
HP — (1) Hewlett-Packard Company or (2) High Performance.
HPC — High Performance Computing.
HSA — Hardware System Area.
HSG — Host Security Group.
HSM — Hierarchical Storage Management (see Data Migrator).
HSN — Hierarchical Star Network.
HSSDC — High Speed Serial Data Connector.
HTTP — Hyper Text Transfer Protocol.
HTTPS — Hyper Text Transfer Protocol Secure.
Hub — A common connection point for devices in a network. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at 1 port, it is copied to the other ports so that all segments of the LAN can see all packets. A switching hub actually reads the destination address of each packet and then forwards the packet to the correct port. Also, the device to which nodes on a multi-point bus or loop are physically connected.
Hybrid Cloud — “Hybrid cloud computing refers to the combination of external public cloud computing services and internal resources (either a private cloud or traditional infrastructure, operations and applications) in a coordinated fashion to assemble a particular solution.” — Source: Gartner Research.
Hybrid Network Cloud — A composition of 2 or more clouds (private, community or public). Each cloud remains a unique entity but they are bound together. A hybrid network cloud includes an interconnection.
Hypervisor — Also called a virtual machine manager, a hypervisor is a hardware virtualization technique that enables multiple operating systems to run concurrently on the same computer. Hypervisors are often installed on server hardware, then run the guest operating systems that act as servers. Hypervisor can also refer to the interface that is provided by Infrastructure as a Service (IaaS) in cloud computing. Leading hypervisors include VMware vSphere Hypervisor™ (ESXi), Microsoft® Hyper-V and the Xen® hypervisor.

—I—
I/F — Interface.
I/O — Input/Output. Term used to describe any program, operation, or device that transfers data to or from a computer and to or from a peripheral device.
IaaS — Infrastructure as a Service. A cloud computing business model — delivering computer infrastructure, typically a platform virtualization environment, as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data center space or network equipment, clients buy those resources as a fully outsourced service. Providers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
IBR — Incremental Block-level Replication or Intelligent Block Replication.
ICB — Integrated Cluster Bus.
ICF — Integrated Coupling Facility.
ID — Identifier.
IDE — Integrated Drive Electronics (also known as ATA, Advanced Technology Attachment). A standard designed to connect hard and removable disk drives.
IDN — Integrated Delivery Network.
IDR — Incremental Data Replication.
iFCP — Internet Fibre Channel Protocol. Allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
IFL — Integrated Facility for LINUX.
IHE — Integrating the Healthcare Enterprise.
IID — Initiator ID.
IIS — Internet Information Server.
ILM — Information Life Cycle Management.
ILO — (Hewlett-Packard) Integrated Lights-Out.
IML — Initial Microprogram Load.
IMS — Information Management System.
In-band virtualization — Refers to the location of the storage network path, between the application host servers and the storage systems. Provides both control and data along the same connection path. Also called symmetric virtualization.
Index Cache — Provides quick access to indexed data on the media during a browse/restore operation.
INI — Initiator.
Interface — The physical and logical arrangement supporting the attachment of any device to a connector or to another device.
Internal bus — Another name for an internal data bus. Also, an expansion bus is often referred to as an internal bus.
Internal data bus — A bus that operates only within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip's design. This bus is typically rather quick and is independent of the rest of the computer's operations.
IOC — I/O controller.
IOCDS — I/O Control Data Set.
IODF — I/O Definition file.
IOPH — I/O per hour.
IOS — I/O Supervisor.
IOSQ — Input/Output Subsystem Queue.
IP — Internet Protocol. The communications protocol that routes traffic across the Internet.
IPv6 — Internet Protocol Version 6. The latest revision of the Internet Protocol (IP).
IPL — Initial Program Load.
IPSEC — IP security.
IRR — Internal Rate of Return.
ISC — Initial shipping condition or Inter-System Communication.
iSCSI — Internet SCSI. Pronounced eye skuzzy. An IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks.
ISE — Integrated Scripting Environment.
iSER — iSCSI Extensions for RDMA.
ISL — Inter-Switch Link.
iSNS — Internet Storage Name Service.
ISOE — iSCSI Offload Engine.
ISP — Internet service provider.
ISPF — Interactive System Productivity Facility.
ISPF/PDF — Interactive System Productivity Facility/Program Development Facility.
ISV — Independent Software Vendor.
ITaaS — IT as a Service. A cloud computing business model. This general model is an umbrella model that entails the SPI business model (SaaS, PaaS and IaaS — Software, Platform and Infrastructure as a Service).
ITSC — Information and Telecommunications Systems Companies.

—J—
Java — A widely accepted, open systems programming language. Hitachi's enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi enterprise software products from any PC or workstation that runs a supported thin-client internet browser application and that has TCP/IP network access to the computer on which the software product runs.
Java VM — Java Virtual Machine.
JBOD — Just a Bunch of Disks.
JCL — Job Control Language.
JMP — Jumper. Option setting method.
JMS — Java Message Service.
JNL — Journal.
JNLG — Journal Group.
JRE — Java Runtime Environment.
JVM — Java Virtual Machine.
J-VOL — Journal Volume.

—K—
KSDS — Key Sequence Data Set.
kVA — Kilovolt Ampere.
KVM — Kernel-based Virtual Machine or Keyboard-Video Display-Mouse.
kW — Kilowatt.

—L—
LACP — Link Aggregation Control Protocol.
LAG — Link Aggregation Groups.
LAN — Local Area Network. A communications network that serves clients within a geographical area, such as a building.
LBA — Logical Block Address. A 28-bit value that maps to a specific cylinder-head-sector address on the disk. (A worked conversion example follows this section.)
LC — Lucent connector. Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics.
LCM — Link Control Module.
LCP — Link Control Processor. Controls the optical links. LCP is located in the LCM.
LCSS — Logical Channel Subsystems.
LCU — Logical Control Unit.
LD — Logical Device.
LDAP — Lightweight Directory Access Protocol.
LDEV — Logical Device or Logical Device (number). A set of physical disk partitions (all or portions of 1 or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage. Also called a volume. An LDEV has a specific and unique address within a subsystem. LDEVs become LUNs to an open-systems host.
LDKC — Logical Disk Controller or Logical Disk Controller Manual.
LDM — Logical Disk Manager.
LDS — Linear Data Set.
LED — Light Emitting Diode.
LFF — Large Form Factor.
LIC — Licensed Internal Code.
LIS — Laboratory Information Systems.
LLQ — Lowest Level Qualifier.
LM — Local Memory.
LMODs — Load Modules.
LNKLST — Link List.
Load balancing — The process of distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If 1 server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — “Locations” section of the Maintenance Manual.
Logical DKC (LDKC) — Logical Disk Controller. An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within 1 Hitachi enterprise storage system.
Longitudinal record — Patient information from birth to death.
LPAR — Logical Partition (mode).
LR — Local Router.
LRECL — Logical Record Length.
LRP — Local Router Processor.
LRU — Least Recently Used.
LSS — Logical Storage Subsystem (equivalent to LCU).
LU — Logical Unit. Mapping number of an LDEV.
LUN — Logical Unit Number. 1 or more LDEVs. Used only for open systems.
LUSE — Logical Unit Size Expansion. Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal.
LVI — Logical Volume Image. Identifies a similar concept (as LUN) in the mainframe environment.
LVM — Logical Volume Manager.
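The LBA entry above can be made concrete with the standard cylinder-head-sector conversion formula. This is an illustrative sketch; the geometry values are common examples, not parameters of any drive discussed in this course.

    # CHS-to-LBA: LBA = (C * heads_per_cyl + H) * sectors_per_track + (S - 1)
    HEADS_PER_CYL = 16        # example geometry only
    SECTORS_PER_TRK = 63      # sector numbering starts at 1

    def chs_to_lba(c, h, s):
        """Map a cylinder/head/sector triple to a logical block address."""
        return (c * HEADS_PER_CYL + h) * SECTORS_PER_TRK + (s - 1)

    def lba_to_chs(lba):
        """Invert the mapping to recover the cylinder/head/sector triple."""
        c, rem = divmod(lba, HEADS_PER_CYL * SECTORS_PER_TRK)
        h, s0 = divmod(rem, SECTORS_PER_TRK)
        return c, h, s0 + 1

    print(chs_to_lba(0, 0, 1))                 # 0: the first addressable block
    print(lba_to_chs(chs_to_lba(5, 3, 20)))    # (5, 3, 20): round trip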

—M—
MAC — Media Access Control. A MAC address is a unique identifier attached to most forms of networking equipment.
MAID — Massive array of disks.
MAN — Metropolitan Area Network. A communications network that generally covers a city or suburb. A MAN is very similar to a LAN except that it spans a geographical region such as a state. Instead of the workstations in a LAN, the workstations in a MAN could depict different cities in a state. For example, the state of Texas could have Dallas, Austin and San Antonio as separate LANs, with all the cities connected together via a switch. This topology would indicate a MAN.
MAPI — Management Application Programming Interface.
Mapping — Conversion between 2 data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
Mb — Megabit.
MB — Megabyte.
MBA — Memory Bus Adaptor.
MBUS — Multi-CPU Bus.
MC — Multi Cabinet.
MCU — Main Control Unit, Master Control Unit, Main Disk Control Unit or Master Disk Control Unit. The local CU of a remote copy pair.
MDPL — Metadata Data Protection Level.
MediaAgent — The workhorse for all data movement. MediaAgent facilitates the transfer of data between the data source, the client computer, and the destination storage media.
Metadata — In database management systems, data files are the files that store the database information; whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code.
MG — (1) Module Group. 2 (DIMM) cache memory modules that work together. (2) Migration Group. A group of volumes to be migrated together.
MGC — (3-Site) Metro/Global Mirror.
MIB — Management Information Base. A database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions. (The original shows a layer diagram: high-level languages such as Fortran, Pascal and C; assembly language; machine language; hardware.)
Microprogram — See Microcode.
MIF — Multiple Image Facility.
Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
M-JNL — Primary journal volumes.
MM — Maintenance Manual.
MMC — Microsoft Management Console.
Mode — The state or setting of a program or device. The term mode implies a choice, which is that you can change the setting and put the system in a different mode.
MP — Microprocessor.
MPA — Microprocessor adapter.
MPB — Microprocessor board.
MPI — (Electronic) Master Patient Identifier. Also known as EMPI.
MPIO — Multipath I/O.
MP PK — MP Package.
MPU — Microprocessor Unit.
MQE — Metadata Query Engine (Hitachi).
MS/SG — Microsoft Service Guard.
MSCS — Microsoft Cluster Server.
MSS — (1) Multiple Subchannel Set. (2) Managed Security Services.
MTBF — Mean Time Between Failure.
MTS — Multitiered Storage.
Multitenancy — In cloud computing, multitenancy is a secure way to partition the infrastructure (application, storage pool and network) so multiple customers share a single resource pool. Multitenancy is one of the key ways cloud can achieve massive economy of scale.
M-VOL — Main Volume.
MVS — Multiple Virtual Storage.

—N—
NAS — Network Attached Storage. A disk array connected to a controller that gives access to a LAN transport. It handles data at the file level.
NAT — Network Address Translation.
NDMP — Network Data Management Protocol. A protocol meant to transport data between NAS devices.
NetBIOS — Network Basic Input/Output System.
Network — A computer system that allows sharing of resources, such as files and peripheral hardware devices.
Network Cloud — A communications network. The word "cloud" by itself may refer to any local area network (LAN) or wide area network (WAN). The terms "computing" and "cloud computing" refer to services offered on the public Internet or to a private network that uses the same protocols as a standard network. See also cloud computing.
NFS protocol — Network File System is a protocol that allows a computer to access files over a network as easily as if they were on its local disks.
NIM — Network Interface Module.
NIS — Network Information Service (originally called the Yellow Pages or YP).
NIST — National Institute of Standards and Technology. A standards organization active in cloud computing.
NLS — Native Language Support.
Node — An addressable entity connected to an I/O bus or network, used primarily to refer to computers, storage devices, and storage subsystems. The component of a node that connects to the bus or network is a port.
Node name — A Name_Identifier associated with a node.
NPV — Net Present Value.
NRO — Network Recovery Objective.
NTP — Network Time Protocol.
NVS — Non Volatile Storage.

—O—
OCC — Open Cloud Consortium. A standards organization active in cloud computing.
OEM — Original Equipment Manufacturer.
OFC — Open Fibre Control.
OGF — Open Grid Forum. A standards organization active in cloud computing.
OID — Object identifier.
OLA — Operating Level Agreements.
OLTP — On-Line Transaction Processing.
OLTT — Open-loop throughput throttling.
OMG — Object Management Group. A standards organization active in cloud computing.
On/Off CoD — On/Off Capacity on Demand.
ONODE — Object node.
OPEX — Operational Expenditure. An operating expense, operating expenditure, operational expense, or operational expenditure is an ongoing cost for running a product, business, or system. Its counterpart is a capital expenditure (CAPEX).
ORM — Online Read Margin.
OS — Operating System.
Out-of-band virtualization — Refers to systems where the controller is located outside of the SAN data path. Separates control and data on different connection paths. Also called asymmetric virtualization.

—P—
P-2-P — Point to Point. Also P-P.
PaaS — Platform as a Service. A cloud computing business model — delivering a computing platform and solution stack as a service. PaaS offerings facilitate deployment of applications without the cost and complexity of buying and managing the underlying hardware, software and provisioning hosting capabilities. PaaS provides all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet.
PACS — Picture Archiving and Communication System.
PAN — Personal Area Network. A communications network that transmits data wirelessly over a short distance. Bluetooth and Wi-Fi Direct are examples of personal area networks.
PAP — Password Authentication Protocol.
Parity — A technique of checking whether data has been lost or written over when it is moved from 1 place in storage to another or when it is transmitted between computers. (A short XOR-parity example follows this section.)
Parity Group — Also called an array group. This is a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separate workloads in a “storage consolidated” system by dividing cache into individually managed multiple partitions. Then customize the partition to match the I/O characteristics of assigned LUs.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a subchannel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PDM — Policy based Data Migration or Primary Data Migrator.
PDS — Partitioned Data Set.
PDSE — Partitioned Data Set Extended.
Performance — Speed of access or the delivery of information.
Petabyte (PB) — A measurement of capacity — the amount of data that a drive or storage system can store after formatting. 1PB = 1,024TB.
PFA — Predictive Failure Analysis.
PFTaaS — Private File Tiering as a Service. A cloud computing business model.
PGP — Pretty Good Privacy (encryption).
PGR — Persistent Group Reserve.
PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
P-P — Point-to-point; also P2P.
PP — Program product.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company's data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.
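The Parity and Parity Group entries describe protecting data with redundant information. The sketch below shows the XOR form of parity used by striped RAID levels such as RAID-5 (defined in the —R— section): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The data values are made up for illustration.

    # XOR parity: parity = block0 ^ block1 ^ block2; any one block is recoverable.
    blocks = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]   # 3 data blocks in a stripe

    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    parity = blocks[0]
    for blk in blocks[1:]:
        parity = xor_blocks(parity, blk)       # accumulate XOR over the stripe

    # Simulate losing blocks[1], then rebuild it from the survivors plus parity.
    rebuilt = xor_blocks(xor_blocks(blocks[0], blocks[2]), parity)
    assert rebuilt == blocks[1]
    print(rebuilt.hex())                       # prints: abcd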

—Q—
QD — Quorum Device.
QDepth — The number of I/O operations that can run in parallel on a SAN device; also WWN QDepth.
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.

—R—
RACF — Resource Access Control Facility.
RAID — Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault tolerance through mirroring or parity checking. RAID is a component of a customer's SLA.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multi-threaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
RAIN — Redundant (or Reliable) Array of Independent Nodes (architecture).
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role Base Access Control.
RC — (1) Reference Code or (2) Remote Control.
RCHA — RAID Channel Adapter.
RCP — Remote Control Processor.
RCU — Remote Control Unit or Remote Disk Control Unit.
RCUT — RCU Target.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Reads and writes data to the platters; typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundancy — Backing up a component to help ensure high availability.
Redundant — Describes computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links, that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware, or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RK — Rack additional.
RKAJAT — Rack Additional SATA disk tray.
RKAK — Expansion unit.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as an RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
RndRD — Random read.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis: one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion. (A minimal example follows this section.)
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
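The round robin technique described above can be sketched in a few lines. The path addresses are placeholders; this simply demonstrates the rotating hand-out order, not any particular product's path selection.

    # Rotate through available paths in a fixed, looping order.
    import itertools

    paths = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]       # placeholder addresses
    next_path = itertools.cycle(paths)                  # endless rotation

    for request in range(6):
        print("request", request, "->", next(next_path))
    # requests 0..5 map to .1, .2, .3, .1, .2, .3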

—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser, via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable segment size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with an SLA. More recently, IT departments in major enterprises have adopted the idea of writing a service level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers. Some metrics that SLAs may specify include:
• The percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include: system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module host connector. A specification for a new generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM — Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like CACHE, shared memory is controlled as 2 areas of memory and fully non-volatile (sustained for approximately 7 days).
SM PATH — Shared Memory Access Path. The access path from the processors of CHA, DKA PCB to Shared Memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SME — Small and Medium Enterprise.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — Symmetric Multiprocessing.
SMP/E — System Modification Program/Extended. An IBM-licensed program used to install software and software changes on z/OS systems.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products, whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol that was designed for management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.
SOAP — Simple Object Access Protocol. A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of an operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component. (A minimal example follows this section.)
SOM — System Option Mode.
SONET — Synchronous Optical Network.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — A span is a section between 2 intermediate supports. See Storage pooling.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File system.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing “as a service” business model.
SRA — Storage Replicator Adapter.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SRM — Site Recovery Manager.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-state Drive or Solid-state Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split SUSpended Error.
SSUS — Split SUSpend.
SSVP — Sub Service Processor; interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner or the root user.
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures where the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor. A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SWPX — Switching power supply.
SXP — SAS Expander.
Symmetric virtualization — See In-band virtualization.
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.
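As a minimal illustration of the Socket entry above, the sketch below opens a TCP socket, writes an HTTP request and reads the reply, using only Python's standard socket module. The host name is a placeholder.

    # Open a socket, write a request, read the response.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n"
                     b"Connection: close\r\n\r\n")
        reply = sock.recv(4096)                # read up to 4KB of the reply

    print(reply.decode("ascii", "replace").splitlines()[0])   # status line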

—T—
Target — The system component that receives a SCSI I/O command; an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCDO — Total Cost of Data Ownership.
TCO — Total Cost of Ownership.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump CONVerter. A software program that is used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Thin provisioning allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kbps, Mbps and Gb/sec.
TID — Target ID.
Tiered storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change.
TLS — Tape Library System or Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the operating system performs some action, and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.

—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol. 1 of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another. (A minimal example follows this section.)
UFA — UNIX File Attributes.
UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply. A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.
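The datagram exchange described in the UDP entry can be shown with a sender and a receiver on the loopback interface. This is an illustrative sketch using only Python's standard socket module.

    # One UDP datagram from a sender socket to a receiver socket.
    import socket

    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))          # port 0 lets the OS pick a free port
    addr = recv.getsockname()

    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.sendto(b"short message", addr)  # no connection setup, just a datagram

    data, peer = recv.recvfrom(1024)
    print(data, "from", peer)            # b'short message' from ('127.0.0.1', ...)
    send.close()
    recv.close()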

—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development; the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume; size chosen by user.
VLVI — Virtual Logic Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTL — Virtual Tape Library.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.

—W—
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object or Working Directory.
WDS — Working Data Set.
WebDAV — Web-based Distributed Authoring and Versioning (HTTP extensions).
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.
WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN — World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN — World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port's WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.

—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting a 10Gb Ethernet MAC device to an XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.

—Y—
YB — Yottabyte.
Yottabyte — A highest-end measurement of data at the present time. 1YB = 1,024ZB, or approximately 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data. (An arithmetic check follows the —Z— section.)

—Z—
z/OS — z Operating System (IBM® S/390® or z/OS® environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
ZCF — Zero Copy Failover. Also known as Data Access Path (DAP).
Zettabyte (ZB) — A high-end measurement of data at the present time. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information Processor (specialty processor for database).
Zone — A collection of Fibre Channel ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
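The capacity entries in this glossary (Terabyte, Petabyte, Zettabyte, Yottabyte) all use binary prefixes, where each step multiplies by 1,024. The arithmetic below verifies the figures quoted in the Yottabyte entry.

    # Each prefix step is a factor of 2**10 = 1,024.
    GB = 2**30
    ZB = 2**70
    YB = 2**80

    print(YB // ZB)    # 1024               -> 1YB = 1,024ZB
    print(YB // GB)    # 1125899906842624   -> roughly 1.1 quadrillion GB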


Evaluating this Course

Please use the online evaluation system to help improve our courses.

Learning Center Sign-in location: https://learningcenter.hds.com/Saba/Web/Main
