Integrated Virtualization Manager on IBM System p5
No dedicated Hardware Management Console required
Powerful integration for entry-level servers
Key administration tasks explained
Guido Somers
ibm.com/redbooks
Redpaper
International Technical Support Organization Integrated Virtualization Manager on IBM System p5 December 2006
Note: Before using this information and the product it supports, read the information in Notices on page v.
Second Edition (December 2006) This edition applies to IBM Virtual I/O Server Version 1.3 that is part of the Advanced POWER Virtualization hardware feature on IBM System p5 and eServer p5 platforms.
Copyright International Business Machines Corporation 2005, 2006. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices
  Trademarks
Preface
  The team that wrote this Redpaper
  Become a published author
  Comments welcome
Chapter 1. Overview
  1.1 Hardware management
    1.1.1 Integrated Virtualization Manager
    1.1.2 Hardware Management Console
    1.1.3 Advanced System Management Interface
  1.2 IVM design
    1.2.1 Architecture
    1.2.2 LPAR configuration
    1.2.3 Considerations for partition setup
Chapter 2. Installation
  2.1 Reset to Manufacturing Default Configuration
  2.2 Microcode update
  2.3 ASMI IP address setup
    2.3.1 Address setting using the ASMI
    2.3.2 Address setting using serial ports
  2.4 Virtualization feature activation
  2.5 VIOS image installation
  2.6 Initial configuration
    2.6.1 Virtualization setup
    2.6.2 Set the date and time
    2.6.3 Initial network setup
    2.6.4 Changing the TCP/IP settings on the Virtual I/O Server
  2.7 VIOS partition configuration
  2.8 Network management
  2.9 Virtual Storage management
  2.10 Installing and managing the Virtual I/O Server on a JS21
    2.10.1 Virtual I/O Server image installation from DVD
    2.10.2 Virtual I/O Server image installation from a NIM server
Chapter 3. Logical partition creation
  3.1 Configure and manage partitions
  3.2 IVM graphical user interface
    3.2.1 Connect to the IVM
    3.2.2 Storage pool disk management
    3.2.3 Create logical partitions
    3.2.4 Create an LPAR based on an existing partition
    3.2.5 Shutting down logical partitions
    3.2.6 Monitoring tasks
    3.2.7 Hyperlinks for object properties
  3.3 IVM command line interface
    3.3.1 Update the logical partition's profile
    3.3.2 Power on a logical partition
    3.3.3 Install an operating system on a logical partition
  3.4 Optical device sharing
  3.5 LPAR configuration changes
    3.5.1 Dynamic LPAR operations on an IVM partition
    3.5.2 LPAR resources management
    3.5.3 Adding a client LPAR to the partition workload group
Chapter 4. Advanced configuration
  4.1 Network management
    4.1.1 Ethernet bridging
    4.1.2 Ethernet link aggregation
  4.2 Storage management
    4.2.1 Virtual storage assignment to a partition
    4.2.2 Virtual disk extension
    4.2.3 IVM system disk mirroring
    4.2.4 AIX 5L mirroring on the managed system LPARs
    4.2.5 SCSI RAID adapter use
  4.3 Securing the Virtual I/O Server
  4.4 Connecting to the Virtual I/O Server using OpenSSH
Chapter 5. Maintenance
  5.1 IVM maintenance
    5.1.1 Backup and restore of the logical partition definitions
    5.1.2 Backup and restore of the IVM operating system
    5.1.3 IVM updates
  5.2 The migration between HMC and IVM
    5.2.1 Recovery after an improper HMC connection
    5.2.2 Migration considerations
    5.2.3 Migration from HMC to an IVM environment
    5.2.4 Migration from an IVM environment to HMC
  5.3 System maintenance
    5.3.1 Microcode update
    5.3.2 Capacity on Demand operations
  5.4 Logical partition maintenance
    5.4.1 Backup of the operating system
    5.4.2 Restore of the operating system
  5.5 Command logs
  5.6 Integration with IBM Director
Appendix A. IVM and HMC feature summary
Appendix B. System requirements
Related publications
  IBM Redbooks
  Other publications
  Online resources
  How to get IBM Redbooks
  Help from IBM
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. 
All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX, AIX 5L, BladeCenter, eServer, HACMP, i5/OS, IBM, Micro-Partitioning, OpenPower, POWER, POWER Hypervisor, POWER5, POWER5+, pSeries, Redbooks, Redbooks (logo), System p, System p5, Virtualization Engine
The following terms are trademarks of other companies: Internet Explorer, Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
The Virtual I/O Server (VIOS) is part of the Advanced POWER Virtualization hardware feature on IBM System p5 and IBM eServer p5 platforms, and part of the POWER Hypervisor and VIOS feature on IBM eServer OpenPower systems. It is also supported on the IBM BladeCenter JS21. It is a single-function appliance that resides in a logical partition (LPAR) of an IBM POWER5 or POWER5+ processor-based system and facilitates the sharing of physical I/O resources between client partitions (IBM AIX 5L or Linux) within the server. The VIOS provides virtual SCSI target and Shared Ethernet Adapter (SEA) virtual I/O functions to client LPARs.

Starting with Version 1.2, the VIOS provides a hardware management function named the Integrated Virtualization Manager (IVM). The latest version of the VIOS, 1.3.0.0, adds a number of new functions, such as support for dynamic logical partitioning of memory and processors in managed systems (dynamic reconfiguration of memory is not supported on the JS21), a task manager monitor for long-running tasks, security additions such as viosecure and the firewall, and other improvements.

Using IVM, companies can more cost-effectively consolidate multiple partitions onto a single server. With its intuitive, browser-based interface, the IVM is easy to use and significantly reduces the time and effort required to manage virtual devices and partitions.

IVM is available on these IBM systems:
IBM System p5 505, 51A, 52A, 55A, and 561
IBM eServer p5 510, 520, and 550
IBM eServer OpenPower 710 and 720
IBM BladeCenter JS21

This IBM Redpaper provides an introduction to IVM by describing its architecture and showing how to install and configure a partitioned server using its capabilities. A complete understanding of partitioning is required prior to reading this document.
The project that produced this paper was managed by:
Scott Vetter, IBM Austin

Thanks to the following people for their contributions to this project:
Amartey S. Pearson, Vani D. Ramagiri, Bob G. Kovacs, Jim Parumi, Jim Partridge, IBM Austin
Dennis Jurgensen, IBM Raleigh
Jaya Srikrishnan, IBM Poughkeepsie
Craig Wilcox, IBM Rochester
Peter Wuestefeld, Volker Haug, IBM Germany
Morten Vagmo, IBM Norway
Dai Williams, Nigel Griffiths, IBM U.K.
Comments welcome
Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this Redpaper or other IBM Redbooks in one of the following ways:
Use the online Contact us review redbook form found at ibm.com/redbooks
Send your comments in an e-mail to [email protected]
Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD, Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Chapter 1. Overview
This chapter describes several available methods for hardware management and virtualization setup on IBM System p5 and eServer p5, OpenPower solutions, and BladeCenter JS21, and introduces the Integrated Virtualization Manager (IVM). The Integrated Virtualization Manager is a component that has been included since the Virtual I/O Server Version 1.2, which is part of the Advanced POWER Virtualization hardware feature. It enables companies to consolidate multiple partitions onto a single server in a cost-effective way. With its intuitive, browser-based interface, the IVM is easy to use and it significantly reduces the time and effort required to manage virtual devices and partitions.
IVM is an enhancement of the Virtual I/O Server (VIOS), the product that enables I/O virtualization in POWER5 and POWER5+ processor-based systems. It enables management of VIOS functions and uses a Web-based graphical interface that lets the administrator manage the server remotely with a browser. The HTTPS protocol and server login with password authentication provide the security required by many enterprises.

Because one of the goals of IVM is simplification of management, some implicit rules apply to configuration and setup:
When a system is designated to be managed by IVM, it must not be partitioned.
The first operating system to be installed must be the VIOS.

The VIOS is automatically configured to own all of the I/O resources, and it can be configured to provide service to other LPARs through its virtualization capabilities. Therefore, all other logical partitions (LPARs) do not own any physical adapters, and they must access disk, network, and optical devices only through the VIOS as virtual devices. With respect to processor and memory resources, the LPARs otherwise operate as they would on any partitioned system.

Figure 1-1 shows a sample configuration using IVM. The VIOS owns all of the physical adapters, and the other two partitions are configured to use only virtual devices. The administrator can use a browser to connect to IVM to set up the system configuration.
Figure 1-1 Integrated Virtualization Manager configuration (figure: the administrator's browser connects over the corporate LAN to the LPAR running the VIOS and IVM, which owns the physical adapters and serves the client LPARs)
The system Hypervisor has been modified to enable the VIOS to manage the partitioned system without an HMC. The software that normally runs on the HMC has been rewritten to fit inside the VIOS and to provide a simpler user interface. Because the IVM runs using system resources, the design has been developed to have a minimal impact on disk, memory, and processor resources. The IVM does not interact with the system's service processor. A specific device named the Virtual Management Channel (VMC) has been developed on the VIOS to enable direct Hypervisor configuration without requiring additional network connections. This device is activated by default when the VIOS is installed as the first partition.
The VMC enables IVM to provide basic logical partitioning functions:
Logical partitioning configuration
Boot, start, and stop actions for individual partitions
Display of partition status
Management of virtual Ethernet
Management of virtual storage
Basic system management

Because IVM executes on an LPAR, it has limited service-based functions, and ASMI must be used for the remaining ones. For example, a server power-on must be performed by physically pushing the server power-on button or by remotely accessing ASMI, because IVM does not execute while the server power is off. ASMI and IVM together provide a simple but effective solution for a single partitioned server.

LPAR management using IVM is through a common Web interface developed for basic administration tasks. Being integrated within the VIOS code, IVM also handles all virtualization tasks that normally require VIOS commands to be run.

Important: The IVM provides a unique setup and interface with respect to the HMC for managing resources and partition configuration. An HMC expert should study the differences before using the IVM.

IVM supports dynamic LPAR operations starting with Version 1.3.0.0. IVM and HMC are two unique management systems: the IVM is designed as an integrated solution that lowers your cost of ownership, and the HMC is designed for flexibility and a comprehensive set of functions. This gives you the freedom to select the ideal solution for your production workload requirements.

Important: The internal design of IVM requires that no HMC be connected to a working IVM system. If a client wants to migrate an environment from IVM to HMC, the configuration setup has to be rebuilt manually. This includes systems that had previous software levels of VIOS running on them, because they would also have been managed by an HMC.
The HMC is a centralized point of hardware control. In a System p5 environment, a single HMC can manage multiple POWER5 processor-based systems, and two HMCs can manage the same set of servers in a dual-active configuration designed for high availability.

Hardware management is performed by an HMC using a standard Ethernet connection to the service processor of each system. Interacting with the service processor, the HMC is capable of modifying the hardware configuration of the managed system, querying for changes, and managing service calls.

A hardware administrator can either log in to the physical HMC and use the native GUI or download a client application from the HMC. This application can be used to manage the HMC from a remote desktop with the same look and feel as the native GUI.

Because it is a stand-alone personal computer, the HMC does not use any managed system resources and can be maintained without affecting system activity. Reboots and software maintenance on the HMC do not have any impact on the managed systems. In the unlikely case that the HMC requires manual intervention, the systems continue to be operational, and a new HMC can be plugged into the network and configured to download the current configuration from the managed systems, thus becoming operationally identical to the replaced HMC.

The major HMC functions include:
Monitoring of system status
Management of IBM Capacity on Demand
Creation of logical partitions with dedicated processors
Management of LPARs, including power on, power off, and console access
Dynamic reconfiguration of partitions
Management of virtual Ethernet among partitions
Clustering
Concurrent firmware updates
Hot add/remove of I/O drawers

POWER5 and POWER5+ processor-based systems are capable of Micro-Partitioning, and the Hypervisor can support multiple LPARs, sharing the processors in the system and enabling I/O sharing. System p servers require the Advanced POWER Virtualization feature, while OpenPower systems require the POWER Hypervisor and Virtual I/O Server feature.
On systems with Micro-Partitioning enabled, the HMC provides additional functions:
Creation of shared processor partitions
Creation of the Virtual I/O Server (VIOS) partition for physical I/O virtualization
Creation of virtual devices for VIOS and client partitions

The HMC interacts with the Hypervisor to create virtual devices among partitions, and the VIOS partitions manage physical device sharing. Network, disk, and optical device access can be shared.

Partition configuration can be changed dynamically by issuing commands on the HMC or using the HMC GUI. The allocation of resources, such as CPU, memory, and I/O, can be modified without making applications aware of the change. In order to enable dynamic reconfiguration, an HMC requires an Ethernet connection with every involved LPAR in addition to the basic connection with the service processor. Using the Resource Monitoring and Control (RMC) protocol, the HMC is capable of securely interacting with the operating system to free and acquire resources and to coordinate these actions with hardware configuration changes.

The HMC also provides tools to ease problem determination and service support, such as the Service Focal Point feature, call-home, and error log notification through a modem or the Internet.
ASMI is the major configuration tool for systems that are not managed by an HMC, and it provides basic hardware setup features. It is extremely useful when the system is a stand-alone system. ASMI can be accessed and used when an HMC is connected to the system, but some of its features are then disabled.

Using ASMI, the administrator can run the following basic operations:
Viewing system information
Controlling system power
Changing the system configuration
Setting performance options
Configuring the service processor's network services
Using On Demand utilities
Using concurrent maintenance utilities
Executing system service aids, such as accessing the service processor's error log

The scope of every action is restricted to the same server. In the case of multiple systems, the administrator must contact each of them independently, in turn. After the initial setup, typical ASMI usage is remote system power on and power off. The other functions are related to system configuration changes, such as virtualization feature activation, and to troubleshooting, such as access to the service processor's logs.

The ASMI does not allow LPARs to be managed. In order to deploy LPARs, a higher level of management is required, going beyond basic hardware configuration setup. This can be done either with an HMC or with the Integrated Virtualization Manager (IVM).
1.2.1 Architecture
The IVM has been developed to provide a simple environment where a single control program has the ownership of the physical hardware and other LPARs use it to access resources. The VIOS has most of the required features because it can provide virtual SCSI and virtual networking capability. Starting with Version 1.2, the VIOS has been enhanced to provide management features using the IVM. The current version of the Virtual I/O Server, 1.3.0.0, comes with several IVM improvements, such as dynamic LPAR-capability of the client LPARs, security improvements (firewall, viosecure), and usability additions (TCP/IP GUI configuration, hyperlinks, simple LPAR creation, task monitor, and so on). In order to set up LPARs, the IVM requires management access to the Hypervisor. It has no service processor connection used by the HMC and it relies on a new virtual I/O device type
called the Virtual Management Channel (VMC). This device is activated only when the VIOS installation detects that the environment has to be managed by IVM. VMC is present on the VIOS only when the following conditions are true:
The virtualization feature has been enabled.
The system has not been managed by an HMC.
The system is in the Manufacturing Default Configuration.

In order to fulfill these requirements, an administrator has to use the ASMI. Using the ASMI, they can enter the virtualization activation code, reset the system to the Manufacturing Default Configuration, and so on. A system reset removes any previous LPAR configuration and any existing HMC connection configuration.

On a VIOS partition with IVM activated, a new ibmvmc0 virtual device is present, and a management Web server is started that listens on HTTP port 80 and HTTPS port 443. The presence of the virtual device can be detected using the lsdev -virtual command, as shown in Example 1-1.
Example 1-1 Virtual Management Channel device
$ lsdev -virtual | grep ibmvmc0
ibmvmc0          Available   Virtual Management Channel

Because IVM relies on VMC to set up logical partitioning, it can manage only the system on which it is installed. For each IVM managed system, the administrator must open an independent Web browser session.

Figure 1-4 on page 10 provides the schema of the IVM architecture. The primary user interface is a Web browser that connects to port 80 of the VIOS. The Web server provides a simple GUI and runs commands using the same command line interface that can be used for logging in to the VIOS. One set of commands provides LPAR management through the VMC, and a second set controls VIOS virtualization capabilities. VIOS 1.3.0.0 also enables secure (encrypted) shell access (SSH). Figure 1-4 on page 10 also shows the integration with IBM Director (Pegasus CIM server).
Figure 1-4 IVM architecture (figure: a Web browser and Telnet/SSH clients connect to the VIOS, which contains the IVM, the Pegasus CIM server, the Web server, and the command shell, alongside the client partitions)
LPARs in an IVM managed system are isolated exactly as before and cannot interact except through the virtual devices. Only the IVM has been enabled to perform limited actions on the other LPARs, such as:
Activate and deactivate
Send a power off (EPOW) signal to the operating system
Create and delete
View and change configuration
A sketch of the corresponding command line operations follows this list.
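The following is a minimal sketch of these actions from the VIOS command line; the partition ID 2 and the field list are examples only, and the option names should be checked against your IVM level:

$ lssyscfg -r lpar -F lpar_id,name,state
$ chsysstate -o on -r lpar --id 2
$ chsysstate -o shutdown -r lpar --id 2

The first command lists the defined partitions with their states, and the other two activate and request a shutdown of the partition with ID 2. The same operations are available in the Web GUI; the command line is mainly useful for scripting.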
This behavior makes management more direct, and it is a change compared to HMC managed systems, where resource overcommitment is allowed. It is important to understand that unused processor resources do become available to other partitions through the shared pool when an LPAR is not using all of its processor entitlement.

System configuration is described in the GUI, as shown in Figure 1-5. In this example, an unbalanced system has been manually prepared as a specific scenario. The system has 4 GB of global memory, 2 processing units, and four LPARs defined. In the Partition Details panel, the allocated resources are shown in terms of memory and processing units. Even though the LPAR2 and LPAR3 partitions have not been activated, their resources have been allocated, and the available system memory and processing units have been updated accordingly. If a new LPAR is created, it cannot use the resources belonging to a powered-off partition, but it can be defined using the available free resources shown in the System Overview panel.

The processing units for the LPAR named LPAR1 (ID 2) have been changed from the default 0.2 created by the wizard to 0.1. LPAR1 can use up to one processor because it has one virtual processor, and it is guaranteed 0.1 processing units.
Memory
Memory is assigned to an LPAR using available memory on the system with an allocation unit size that can vary from system to system depending on its memory configuration. The wizard provides this information, as shown in Figure 1-6.
The minimum allocation size of memory is related to the system's logical memory block (LMB) size. It is defined automatically at system boot depending on the size of physical memory, but it can be changed using ASMI on the Performance Setup menu, as shown in Figure 1-7. The default automatic setting can be changed to the following values: 16 MB, 32 MB, 64 MB, 128 MB, or 256 MB.
In order to change the LMB setting, the entire system has to be shut down. If an existing partition has a memory size that does not fit in the new LMB size, the memory size is
changed to the nearest value that is allowed by the new LMB size, without exceeding the original memory size. A small LMB size provides better granularity in memory assignment to partitions, but it requires longer memory allocation and deallocation times because more operations are needed for the same amount of memory. Larger LMB sizes can slightly increase the firmware reserved memory size. We suggest keeping the default automatic setting.
Processors
An LPAR can be defined either with dedicated or with shared processors. The wizard provides the available resources in both cases and asks which processor resource type to use.

When shared processors are selected for a partition, the wizard asks the administrator only to choose the number of virtual processors to be activated, with a maximum value equal to the number of system processors. For each virtual processor, 0.1 processing units are implicitly assigned, and the LPAR is created in uncapped mode with a weight of 128.

Figure 1-8 shows the wizard panel related to the system configuration described in Figure 1-5 on page 11. Because only 0.7 processing units are available, no dedicated processors can be selected, and a maximum of two virtual processors is allowed. Selecting one virtual processor allocates 0.1 processing units.
The LPAR configuration can be changed after the wizard has finished creating the partition. The available parameters are:
Processing unit value
Virtual processor number
Capped or uncapped property
Uncapped weight

The default LPAR configuration provided by the partition creation wizard is designed to keep the system balanced. Manual changes to the partition configuration should be made only after careful planning of the resource distribution. The configuration described in Figure 1-5 on page 11 shows manually changed processing units, and it is quite unbalanced.
As a general suggestion:
For the LPAR configuration, select an appropriate number of virtual processors and keep the default processing units when possible.
Leave some system processing units unallocated. They are available to all LPARs that require them.
Do not underestimate the processing units assigned to the VIOS. If they are not needed, they remain available in the shared pool, but during peak system utilization periods they can be important for the VIOS to provide service to highly active partitions.
A sketch of the equivalent command line configuration follows this list.
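The same parameters can also be adjusted from the IVM command line. The following is a sketch only: the attribute names assume the HMC-style syntax that the IVM command set generally accepts, and the partition ID and values are illustrative:

$ chsyscfg -r prof -i "lpar_id=2,desired_procs=2,desired_proc_units=0.2,sharing_mode=uncap,uncap_weight=128"

This changes the profile of partition 2 to two virtual processors with 0.2 entitled processing units, uncapped with a weight of 128; the Web GUI reflects the change on the partition properties panel.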
Virtual Ethernet
Every IVM managed system is configured with four predefined virtual Ethernet devices, each with a virtual Ethernet ID ranging from 1 to 4. Every LPAR can have up to two virtual Ethernet adapters that can be connected to any of the four virtual networks in the system.

Each virtual Ethernet can be bridged by the VIOS to a physical network using only one physical adapter. If higher performance or redundancy is required, a physical adapter aggregation can be used for the bridge instead. The same physical adapter or physical adapter aggregation cannot bridge more than one virtual Ethernet. See 4.1, Network management on page 72 for more details.

Figure 1-9 shows a Virtual Ethernet wizard panel. All four virtual networks are listed with the corresponding bridging physical adapter, if one is configured. The administrator can decide how to configure the two available virtual adapters. By default, adapter 1 is assigned to virtual Ethernet 1 and the second virtual adapter is left unassigned.
The virtual Ethernet is a bootable device and can be used to install the LPAR's operating system.
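In IVM, bridging is normally configured from the View/Modify Virtual Ethernet view of the GUI, but the underlying VIOS object is a Shared Ethernet Adapter. As a hedged sketch, a bridge between the physical adapter ent0 and the virtual adapter ent4 on virtual Ethernet 1 could be created from the VIOS command line as follows (the device names are assumptions that depend on the specific system):

$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1

The command creates a new entX device that forwards traffic between virtual Ethernet 1 and the physical network; 4.1.1, Ethernet bridging on page 72 covers this in more detail.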
Virtual storage
Every LPAR can be equipped with one or more virtual disk devices using a single virtual SCSI adapter. A virtual disk device has the following characteristics:
The size is defined by the administrator.
It is treated by the operating system as a normal SCSI disk.
It is bootable.
It is created using the physical storage owned by the VIOS partition, either internal or external to the physical system (for example, on the storage area network).
It can be defined either using an entire physical volume (a SCSI disk or a logical unit number of an external storage server) or a portion of a physical volume.
It can be assigned only to a single partition at a time.

Virtual disk device content is preserved if the device is moved from one LPAR to another or increased in size. Before making changes to the virtual disk device allocation or size, the owning partition should deconfigure the device to prevent data loss.

A virtual disk device that does not require an entire physical volume can be defined using disk space from a storage pool created on the VIOS, which is a set of physical volumes. Virtual disk devices can be created spanning multiple disks in a storage pool, and they can be extended if needed. The IVM can manage multiple storage pools and change their configurations by adding or removing physical disks. In order to simplify management, one pool is defined as the default storage pool, and most virtual storage actions implicitly refer to it. At the time of writing, we recommend limiting a storage pool to disks attached to a single physical SCSI adapter.
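Although the GUI hides the virtual SCSI layer, the same objects can be created from the VIOS command line. The following is a sketch only; the pool name, size, and the vhost0 server adapter are assumptions for illustration, and the exact options can vary by VIOS level:

$ lssp
$ mkbdsp -sp clientpool 10G -bd lpar1_rootvg -vadapter vhost0

The first command lists the defined storage pools; the second creates a 10 GB virtual disk named lpar1_rootvg in the storage pool clientpool and maps it to the virtual SCSI server adapter vhost0, making it visible to the client partition attached to that adapter.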
Virtual TTY
In order to allow LPAR installation and management, the IVM provides a virtual terminal environment for LPAR console handling. When a new LPAR is defined, two matching virtual serial adapters are created for console access, one on the LPAR and one on the IVM. This provides a connection from the IVM to the LPAR through the Hypervisor.

The IVM does not provide a Web-based terminal session to partitions. In order to connect to an LPAR's console, the administrator has to log in to the VIOS and use the command line interface. Only one session for each partition is allowed, because there is only one virtual serial connection. The following commands are provided:
mkvt: Connect to a partition console.
rmvt: Remove an existing console connection.
The virtual terminal is provided for initial installation and setup of the operating system and for maintenance reasons. Normal access to the partition is made through the network using services such as telnet and ssh. Each LPAR can be configured with one or two virtual networks that can be bridged by VIOS into physical networks connected to the system.
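As a brief example of the console commands (the partition ID 2 is an assumption; check the ID with lssyscfg first):

$ mkvt -id 2
$ rmvt -id 2

The first command opens the console of partition 2 in the current terminal session; the second forcibly closes an existing console session for that partition, which is useful when a previous session was left open.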
The VIOS is the only LPAR that is capable of management interaction with the Hypervisor and that is able to react to hardware configuration changes. Its configuration can be changed dynamically while it is running. The other LPARs do not have access to the Hypervisor and have no interaction with IVM to be made aware of possible system changes.

Starting with IVM 1.3.0.0, it is possible to change any resource allocation for the client LPARs through the IVM Web interface. This enables the user to change the processing unit configuration, memory allocation, and virtual adapter setup while the LPAR is activated. This is made possible by the introduction of a new component called the DLPAR Manager (with an RMC daemon).

The IVM command line interface enables an experienced administrator to make modifications to a partition configuration. Changes made using the command line are shown in the Web GUI, and a warning message is displayed to highlight the fact that the resources of an affected LPAR are not yet synchronized. Figure 1-10 shows a case where the memory has been changed manually on the command line. In order to see the actual values, the administrator must select the partition in the GUI and click the Properties link, or just click the hyperlink for more details about the synchronization of the current and pending values.
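As a minimal sketch of such a change (the attribute name and partition ID are assumptions following the HMC-style syntax that the IVM command set generally accepts):

$ chsyscfg -r prof -i "lpar_id=2,desired_mem=1024"

This sets the desired memory of partition 2 to 1024 MB; until the change is synchronized, the GUI shows the partition with differing current and pending values, as described above.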
Figure 1-11 shows a generic LPAR schema from an I/O point of view. Every LPAR is created with one virtual serial and one virtual SCSI connection. There are four predefined virtual networks, and the VIOS is already equipped with one virtual adapter connected to each of them.
Figure 1-11 Generic LPAR virtual I/O configuration (figure: on a POWER5 system, each LPAR has a virtual serial and a virtual SCSI connection; the four virtual networks connect the LPARs to the VIOS, which bridges them to the corporate networks through physical Ethernet adapters)
Because there is only one virtual SCSI adapter for each LPAR, the Web GUI hides its presence and shows virtual disks and optical devices as assigned directly to the partition. When the command line interface is used, the virtual SCSI adapter must be taken into account.

For virtual I/O adapter configuration, the administrator only has to define whether to create one or two virtual Ethernet adapters on each LPAR and the virtual network to which each has to be connected. Only virtual adapter addition and removal and virtual network assignment require the partition to be shut down. All remaining I/O configuration is done dynamically:
An optical device can be assigned to any virtual SCSI channel.
A virtual disk device can be created, deleted, or assigned to any virtual SCSI channel.
Ethernet bridging between a virtual network and a physical adapter can be created, deleted, or changed at any time.
A sketch of how to inspect these mappings from the command line follows this list.
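When working on the command line, the virtual SCSI and virtual Ethernet mappings can be displayed with the lsmap command; this is a read-only check and a convenient way to see which backing devices and bridges exist:

$ lsmap -all
$ lsmap -all -net

The first command lists every virtual SCSI server adapter (vhost device) together with its backing devices; the second lists the Shared Ethernet Adapter mappings between virtual and physical Ethernet adapters.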
Chapter 2. Installation
Starting with Version 1.2, the IVM is shipped with the VIOS media. It is activated during the VIOS installation only if all of the following conditions are true:
The system is in the Manufacturing Default Configuration.
The system has never been managed by an HMC.
The virtualization feature has been enabled.

A new system from manufacturing that has been ordered with the virtualization feature is ready for the IVM. If the system has ever been managed by an HMC, the administrator is required to reset it to the Manufacturing Default Configuration. If virtualization has not been activated, the system cannot manage micropartitions, and an IBM sales representative should be contacted to order the activation code. If a system supports the IVM, it can be ordered with the IVM preinstalled.

The IVM installation requires the following items:
A serial ASCII console and cross-over cable (a physical ASCII terminal or a suitable terminal emulator) connected to one of the two system ports for initial setup
An IP address for the IVM
An optional, but recommended, IP address for the Advanced System Management Interface (ASMI)

This chapter describes how to install the IVM on a supported system. The procedure is valid for any system as long as the IVM requirements are satisfied; however, we start with a complete reset of the server. If the system is in the Manufacturing Default Configuration and the Advanced POWER Virtualization feature is enabled, the IVM can be activated directly. In this case, skip the first steps and start with the IVM media installation in 2.5, VIOS image installation on page 28.
Continuing will result in the loss of all configured system settings (such as the HMC access and ASMI passwords, time of day, network configuration, hardware deconfiguration policies, etc.) that you may have set via user interfaces. Also, you will lose the platform error logs and partition-related information. Additionally, the service processor will be reset. Before continuing with this operation make sure you have manually recorded all settings that need to be preserved. Make sure that the interface HMC1 or HMC2 not being used by ASMI or HMC is disconnected from the network. Follow the instructions in the system service publications to configure the network interfaces after the reset.

Enter 1 to confirm or 2 to cancel: 1

The service processor will reboot in a few seconds.

Note: After a factory configuration reset, the system activates the microcode version present in the permanent firmware image. Check the firmware levels in the permanent and temporary images before resetting the system.
Note: More information about migration between the HMC and IVM can be found in 5.2, The migration between HMC and IVM on page 98.
System name: Server-9111-520-SN10DDEDC
Version: SF235_160
User: admin
Copyright (c) 2002-2005 IBM Corporation. All rights reserved.

 1. Power/Restart Control
 2. System Service Aids
 3. System Information
 4. System Configuration
 5. Network Services
 6. Performance Setup
 7. On Demand Utilities
 8. Concurrent Maintenance
 9. Login Profile
99. Log out

S1>

If the service processor's IP address is known, the same information is provided by the ASMI in the upper panel of the Web interface, as shown in Figure 2-1. For a description of the default IP configuration, see 2.3, ASMI IP address setup on page 23.

Figure 2-1 Current microcode level display using the ASMI
If the system microcode must be updated, the code and installation instructions are available from the following Web site:
http://www14.software.ibm.com/webapp/set2/firmware

Microcode can be installed through one of the following methods:
HMC
Running operating system
Running IVM
Diagnostic CD

The HMC and running operating system methods require the system to be reset to the Manufacturing Default Configuration before installing the IVM. If the system is already running the IVM, refer to 5.3.1, Microcode update on page 110 for instructions. In order to use a diagnostic CD, a serial connection to the system port is required with the setup described in 2.1, Reset to Manufacturing Default Configuration on page 20.

The following steps describe how to update the microcode using a diagnostic CD:
1. Download the microcode as an ISO image and burn it onto a CD-ROM. The latest image is available at:
http://techsupport.services.ibm.com/server/mdownload/p5andi5.iso
2. Insert the diagnostic CD in the system drive and boot the system from it, following steps 1 to 7 described in 2.5, VIOS image installation on page 28.
3. Follow the instructions on the screen until the main menu screen (Example 2-3) opens.
Example 2-3 Main diagnostic CD menu
FUNCTION SELECTION

1  Diagnostic Routines
   This selection will test the machine hardware. Wrap plugs and other advanced
   functions will not be used.
2  Advanced Diagnostics Routines
   This selection will test the machine hardware. Wrap plugs and other advanced
   functions will be used.
3  Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
   This selection will list the tasks supported by these procedures. Once a task
   is selected, a resource menu may be presented showing all resources supported
   by the task.
4  Resource Selection
   This selection will list the resources in the system that are supported by
   these procedures. Once a resource is selected, a task menu will be presented
   showing all tasks that can be run on the resource(s).
99 Exit Diagnostics

NOTE: The terminal is not properly initialized. You will be prompted to
initialize the terminal after selecting one of the above options.

To make a selection, type the number and press Enter. [ ]
4. Remove the diagnostic CD from the drive and insert the microcode CD.
5. Select Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.), then Update and Manage System Flash, then Validate and Update System Firmware.
6. Select the CD drive from the menu.
7. When prompted for the flash update image file, press the F7 key to commit. If the console does not support it, use the Esc-7 sequence.
8. On the final screen, shown in Example 2-4, select YES and wait for the firmware update to be completed and for the subsequent system reboot to be executed.
Example 2-4 Confirmation screen for microcode update
UPDATE AND MANAGE FLASH                                                 802816

The image is valid and would update the temporary image to SF235_137.
The new firmware level for the permanent image would be SF220_051.
The current permanent system firmware image is SF220_051.
The current temporary system firmware image is SF220_051.

***** WARNING: Continuing will reboot the system! *****

Do you wish to continue?
Make selection, use 'Enter' to continue.
  NO
  YES
6. Review your configuration and click Save settings to apply the change.
Network Configuration

 1. Configure interface Eth0
 2. Configure interface Eth1
98. Return to previous menu
99. Log out

S1> 1

Configure interface Eth0
MAC address: 00:02:55:2F:BD:E0
Type of IP address     Currently: Dynamic
 1. Dynamic            Currently: 192.168.2.147
 2. Static
98. Return to previous menu
99. Log out

S1> 2

Configure interface Eth0
MAC address: 00:02:55:2F:BD:E0
Type of IP address: Static
 1. Host name
 2. Domain name
 3. IP address (Currently: 192.168.2.147)
 4. Subnet mask
 5. Default gateway
 6. IP address of first DNS server
 7. IP address of second DNS server
 8. IP address of third DNS server
 9. Save settings and reset the service processor
3. Enter the activation code as soon as the system has finished booting. Expand the On Demand Utilities menu and click CoD Activation. Figure 2-4 shows the corresponding menu. Enter the code provided to activate the feature in the specific system and click Continue. A confirmation message appears.
4. Set the system to running mode and power it off. Again, select the Power On/Off System menu, select Running for the Boot to system server firmware field, and click Save settings and power off, as shown in Figure 2-5.
Figure 2-5 ASMI menu to bring system in running mode and power off
(Console output: during the system boot, the POST indicators memory, keyboard, network, scsi, and speaker are displayed.)
4. When requested, provide the password for the service processor's admin user. The default password is admin.
5. Insert the VIOS installation media in the drive.
6. Use the SMS menus to select the CD or DVD device to boot from. Select Select Boot Options, then Select Install/Boot Device, then CD/DVD, then IDE, and choose the right device from a list similar to the one shown in Example 2-7.
Example 2-7 Choose optical device from which to boot
Version: SF240_261
SMS 1.5 (c) Copyright IBM Corp. 2000,2003 All rights reserved.
-------------------------------------------------------------------------------
Select Device
Device  Current  Device
Number  Position Name
 1.        1     IDE CD-ROM
                 ( loc=U787B.001.DNW108F-P4-D2 )
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen      X = eXit System Management Services
-------------------------------------------------------------------------------
Type the number of the menu item and press Enter or select Navigation Key: 1

7. Select Normal Mode Boot and exit from the SMS menu.
8. Select the console number and press Enter.
9. Select the preferred installation language from the menu.
10. Select the installation preferences. Choose the default settings, as shown in Example 2-8.
Example 2-8 VIOS installation setup
Welcome to Base Operating System
Installation and Maintenance

Type the number of your choice and press Enter.  Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings

    2 Change/Show Installation Settings and Install

    3 Start Maintenance Mode for System Recovery
    88  Help ?
    99  Previous Menu

>>> Choice [1]: 1

11. Wait for the VIOS restore. A progress status is shown, as in Example 2-9. At the end, the VIOS reboots.
Example 2-9 VIOS installation progress status
12. Log in to the VIOS using the user padmin and the default password padmin. When prompted, change the login password to something secure.
13. Accept the VIOS license by issuing the license -accept command.
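At this point, you can optionally verify the installed Virtual I/O Server level from the restricted shell; the output shown is only an assumption that depends on the media used:

$ ioslevel
1.3.0.0

The ioslevel command prints the VIOS software level, which is useful to confirm that the media contained the release you expected before continuing with the configuration.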
The mkgencfg -o init command performs the initial configuration of the IVM partition; among other things, it creates the four virtual Ethernet adapters on the VIOS. The lsdev output before and after initialization shows the difference:

$ lsdev | grep ^ent
ent0             Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1             Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2             Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent3             Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
$ mkgencfg -o init
$ lsdev | grep ^ent
ent0             Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1             Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2             Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent3             Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4             Available   Virtual I/O Ethernet Adapter (l-lan)
ent5             Available   Virtual I/O Ethernet Adapter (l-lan)
ent6             Available   Virtual I/O Ethernet Adapter (l-lan)
ent7             Available   Virtual I/O Ethernet Adapter (l-lan)
The initial network configuration of the Virtual I/O Server can then be done with the mktcpip command, for example:

$ mktcpip -hostname ivm -inetaddr 9.3.5.123 -interface en0 -start -netmask 255.255.255.000 -gateway 9.3.5.41
Important: The IVM, like a Web server, requires a valid name resolution to work correctly. If DNS is involved, check that both the name and IP address resolution of the IVM host name are correct. After the IVM Web server has access to the network, it is possible to use the Web GUI with the HTTP or the HTTPS protocol pointing to the IP address of the IVM server application. Authentication requires the use of the padmin user, unless other users have been created.
Important: Modifying your TCP/IP settings remotely might result in the loss of access to the current session. Ensure that you have physical console access to the Integrated Virtualization Manager partition prior to making changes to the TCP/IP settings. To view or modify the TCP/IP settings, perform the following steps: 1. From the IVM Management menu, click View/Modify TCP/IP Settings. The View/Modify TCP/IP Settings panel opens (Figure 2-6).
2. Depending on which setting you want to view or modify, click one of the following tabs:
General: to view or modify the host name and the partition communication IP address
Network Interfaces: to view or modify the network interface properties, such as the IP address, subnet mask, and the state of the network interface
Name Services: to view or modify the domain name, name server search order, and domain server search order
Routing: to view or modify the default gateway
3. Click Apply to activate the new settings.
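The same information is available from the command line. The following is only a sketch; the flag shown is an assumption and should be verified against your VIOS release:

$ lstcpip
$ lstcpip -routtable

The first command displays the current TCP/IP configuration of the Virtual I/O Server, and the second shows the routing table. For changes, the GUI shown above, the mktcpip command, or (starting with VIOS 1.3) the chtcpip command can be used.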
The first panel that opens after the login process is the partition configuration, as shown in Figure 2-7. After the initial installation of the IVM, there is only the VIOS partition on the system, with the following characteristics:
The ID is 1.
The name is equal to the system's serial number.
The state is Running.
The allocated memory is the maximum value between 512 MB and one-eighth of the installed system memory.
The number of virtual processors is equal to or greater than the number of processing units, and the processing units are equal to at least 0.1 times the total number of virtual processors in the LPAR.
The default configuration of the partition has been designed to be appropriate for most IVM installations. If the administrator wants to change the memory or processing unit allocation of the VIOS partition, a dynamic reconfiguration action can be made either using the Web GUI or the command line, as described in 3.5, LPAR configuration changes on page 57. With VIOS/IVM 1.3.0.0, dynamic reconfiguration of memory and processors (AIX 5L) or of processors only (Linux) is also supported for the client partitions.
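A quick way to check the current allocations from the VIOS command line is sketched below; the field names assume the HMC-style attributes that the IVM command set generally uses:

$ lshwres -r mem --level lpar -F lpar_id,curr_mem
$ lshwres -r proc --level lpar -F lpar_id,curr_proc_units,curr_procs

The first command shows the memory currently allocated to each partition (the VIOS is partition 1), and the second shows the processing units and virtual processors.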
In 4.1, Network management on page 72, we describe the network bridging setup.
Both physical volumes and virtual disks can be assigned to an LPAR to provide disk space. Each of them is presented to the LPAR operating system as a single disk. For example, assigning a 73.4 GB physical disk and a 3 GB virtual disk to an LPAR running AIX 5L makes the operating system create two hdisk devices.
At installation time, there is only one storage pool, named rootvg, normally containing only one physical volume. All remaining physical volumes are available but not assigned to any pool. The rootvg pool is used for IVM management, and we do not recommend using it to provide disk space to LPARs. Because it is the only pool available at installation time, it is also defined as the default pool. Create another pool and set it as the default before creating other partitions.
Important: Create at least one additional storage pool so that the rootvg pool is not the default storage pool.
You can use rootvg as a storage pool on a system equipped with a SCSI RAID adapter when all of the physical disks are configured as a single RAID array. In this case, the administrator must first boot the server using the Standalone Diagnostics CD-ROM provided with the system and create the array. During the VIOS image installation, only one disk will be available, representing the array itself.
From any storage pool, virtual disks can be defined and configured. They can be created in several ways, depending on the IVM menu that is used:
During LPAR creation. The virtual disk is created in the default storage pool and assigned to the partition.
Using the Create Virtual Storage link. The virtual disk is not assigned to any partition and is created in the default storage pool. The storage pool can then be selected or assigned on the Storage Pool tab of the View/Modify Virtual Storage view.
We discuss basic storage management in 3.2.2, Storage pool disk management on page 39 and in 4.2, Storage management on page 76.
# mount -o ro -v cdrfs /dev/cd0 /mnt
# cp /mnt/nimol/ioserver_res/mksysb /export/vios
# cp /mnt/nimol/ioserver_res/bosinst.data /export/vios
For more information, see Chapter 7 of the IBM BladeCenter JS21: The POWER of Blade Innovation, SG24-7273.
Chapter 3.
After the authentication process completes, the default IVM console window opens, as shown in Figure 3-2 on page 39. The IVM graphical user interface is composed of several elements.
The following elements are the most important:
Navigation area
Work area
Task area
The navigation area displays the tasks that you can access in the work area. The work area contains information related to the management tasks that you perform using the IVM and to the objects on which you can perform management tasks. The task area lists the tasks that you can perform for items displayed in the work area. The tasks listed in the task area can change depending on the page that is displayed in the work area, or even depending on the tab that is selected in the work area.
Important: All data of a physical volume is erased when you add this volume to a storage pool.
The following steps describe how to create a storage pool:
1. Under the Virtual Storage Management menu in the navigation area, click the Create Virtual Storage link.
2. Click Create Storage Pool in the work area, as shown in Figure 3-3.
3. Type a name in the Storage pool name field and select the needed disks, as shown in Figure 3-4.
4. Click OK to create the storage pool. A new storage pool called datapoolvg2 with hdisk2 and hdisk3 has been created.
2. Select the storage pool you want as the default, as shown in Figure 3-5.
3. Click Assign as default storage pool in the task area. 4. A summary with the current and the next default storage pool opens, as shown in Figure 3-6. 5. Click OK to validate the change. In this example datapoolvg2 will be the new default storage pool.
3. Enter a name for the virtual disk, select a storage pool name from the drop-down list, and add a size for the virtual disk, as shown in Figure 3-8. 4. Click OK to create the virtual disk.
In order to view your new virtual disk/logical volume and use it, select the View/Modify Virtual Storage link under the Virtual Storage Management menu in the navigation area. The list of available virtual disks is displayed in the work area.
2. Type a name for the new partition, as shown in Figure 3-9. Click Next.
3. Enter the amount of memory needed, as shown in Figure 3-10. Click Next.
4. Select the number of processors needed and choose a processing mode, as shown in Figure 3-11. In shared mode, each virtual processor uses 0.1 processing units. Click Next.
5. Each partition has two virtual Ethernet adapters that can be configured to one of the four available virtual Ethernets. In Figure 3-12, adapter 1 uses virtual Ethernet ID 1. The Virtual Ethernet Bridge Overview section of the panel shows on which physical network interface every virtual network is bridged. In the figure, a virtual Ethernet bridge has been created. This procedure is described in 4.1.1, Ethernet bridging on page 72. The bridge enables the partition to connect to the physical network. Click Next.
6. Select Assign existing virtual disks and physical volumes, as shown in Figure 3-13. You can also let the IVM create a virtual disk for you by selecting Create virtual disk, but be aware that the virtual disk will be created in the default storage pool. To create storage pools and virtual disks or to change the default storage pool, refer to 3.2.2, Storage pool disk management on page 39. Click Next.
7. Select needed virtual disks from the list, as shown in Figure 3-14. Click Next.
9. A summary of the partition to be created appears, as shown in Figure 3-16. Click Finish to create the LPAR.
To view the new LPAR and use it, under the Partition Management menu in the navigation area, click the View/Modify Partitions link. A list opens in the work area.
4. The Create Based On panel opens (Figure 3-18). Enter the name of the new partition, and click OK.
5. The View/Modify Partitions panel opens, showing the new partition (Figure 3-19).
Figure 3-19 Create based on - New logical partition has been created
The virtual disks that are created have the same size and are in the same storage pool as the selected partition. However, the data in these disks is not cloned.
4. Select the shutdown type.
5. Optionally, select Restart after the shutdown completes if you want the LPAR to start immediately after it shuts down.
6. Click OK to shut down the partition. The View/Modify Partitions panel is displayed, and the partition is shut down.
Note: If the LPAR does not have an RMC connection, the Operating System shutdown type is disabled and the Delayed type is the default selection. When the IVM/VIOS logical partition is selected, the only available option is OS shutdown. In addition, a warning is displayed at the top of the panel indicating that shutting down the IVM/VIOS LPAR will affect the other running LPARs.
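Partitions can also be shut down from the VIOS command line with the chsysstate command. The following is only a sketch: the LPAR name is an example, and the exact operation and flag names (-o shutdown, --immed, --restart) should be verified against the chsysstate command description for your VIOS level:

$ chsysstate -r lpar -o shutdown -n LPAR2
$ chsysstate -r lpar -o shutdown -n LPAR2 --immed --restart

The first form requests a normal (delayed) shutdown; the second form is the assumed equivalent of an immediate shutdown followed by a restart.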
4. Click Cancel to close the Task Properties window. The Monitor Tasks panel appears. You can also just click the hyperlink of the task from which you want to view the properties (arrow without number in Figure 3-21). This eliminates steps 2 and 3. See more about hyperlinks in the following section.
$ lssyscfg -r prof --filter "lpar_names=LPAR2" -F lpar_name
LPAR2
$ chsyscfg -r prof -i "lpar_name=LPAR2,new_name=LPAR2_new_name"
$ lssyscfg -r prof --filter "lpar_names=LPAR2_new_name" -F lpar_name
LPAR2_new_name
$ chsysstate -o on -r lpar -n LPAR2
$ lsrefcode -r lpar --filter "lpar_names=LPAR2" -F refcode
CA00E1F1
$ lsrefcode -r lpar --filter "lpar_names=LPAR2" -F refcode
CA00E14D
$ mkvt -id 3
AIX Version 5
(C) Copyrights by IBM and by others 1982, 2005.
Console login:
3. Start the LPAR in SMS mode. You can change the boot mode in the properties of the partition's profile before starting it, or enter 1 on the virtual terminal at the very beginning of the boot process, as shown in Example 3-4.
Example 3-4 Boot display
Memory    Keyboard    Network    SCSI    Speaker
4. Select a boot device, such as virtual optical device, or a network for the Network Installation Management (NIM) installation. 5. Boot the LPAR. 6. Select the system console and the language.
7. Select the disk or disks on which to install the operating system. The installation of the operating system starts. Proceed as directed by your operating system installation instructions.
4. Select the name of the LPAR to which you want to assign the optical device, as shown in Figure 3-24. You can also remove the optical device from the current LPAR by selecting None.
5. Click OK.
6. If you move or remove an optical device from a running LPAR, you are prompted to confirm the forced removal before the optical device is removed. Because the optical device will become unavailable, log in to the LPAR and remove the optical device before going further. On AIX 5L, use the rmdev command. Press the Eject button; if the drawer opens, this is an indication that the device is not mounted.
7. Click OK.
8. The new list of optical devices is displayed with the changes you made.
9. Log in to the related LPAR and use the appropriate command to discover the new optical device. On AIX 5L, use the cfgmgr command.
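As a brief sketch of the client-side commands mentioned in the steps above (the device name cd0 and the rmdev flags are assumptions; use the name reported on your LPAR):

# rmdev -dl cd0
# cfgmgr
# lsdev -Cc cdrom

The rmdev command is run on the LPAR that currently owns the drive before the device is moved; cfgmgr followed by lsdev is run on the LPAR that receives it.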
3. Click Properties in the task area (or use the one-click hyperlink method explained in 3.2.7, Hyperlinks for object properties on page 53).
4. Modify the pending values as needed. In Figure 3-26, the assigned memory is increased by 512 MB. Click OK.
Memory is not added or removed in a single operation, but in 16 MB blocks. You can monitor the status by looking at partition properties.
$ lshwres -r proc --level lpar --filter "lpar_names=VIOS" -F curr_proc_units
0.20
$ chsyscfg -r prof -i lpar_name=VIOS,desired_proc_units=0.3
$ lshwres -r proc --level lpar --filter "lpar_names=VIOS" -F curr_proc_units
0.30
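A similar check-and-change sequence can be used for memory. This is a sketch in which the attribute names (curr_mem, desired_mem) follow the command conventions used above and the values shown are illustrative:

$ lshwres -r mem --level lpar --filter "lpar_names=VIOS" -F curr_mem
512
$ chsyscfg -r prof -i "lpar_name=VIOS,desired_mem+=512"
$ lshwres -r mem --level lpar --filter "lpar_names=VIOS" -F curr_mem
1024

As noted above, memory is moved in 16 MB blocks, so the current value may take some time to reach the new pending value.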
Note: If a change is made to a pending value of an LPAR that is in a workload management group with another LPAR, the workload management software must be aware of this change and dynamically adapt to it; otherwise, manual intervention is required. This only applies to processors and memory.
When the dlparmgr encounters an error, it is written to the dlparmgr status log, which can be read with the lssvcevents -t dlpar command. This log contains the last drmgr command run for each object type, for each LPAR, and includes any responses from the drmgr command. The user is not notified directly of these errors; the indication is that the pending values are out of sync. The GUI enables you to see the state and gives you more information about the result of the operation. (See Figure 3-31 on page 64.)
All chsyscfg command functions continue to work as they do today, even if the partition does not have dynamic LPAR support. The GUI, however, selectively enables or disables functions based on the capabilities. The dynamic LPAR capabilities for each logical partition are returned as an attribute of the lssyscfg -r lpar command, which allows the GUI to selectively enable or disable dynamic LPAR based on the current capabilities of the logical partition.
Note: The following site contains the RMC and RSCT requirements for dynamic LPAR, including the additional filesets that have to be installed on Linux clients: https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
Memory tab
If the LPAR is powered on and memory is dynamic LPAR capable (see the capabilities on the General tab), the Pending assigned value is enabled. (The minimum and maximum values are still disabled.) The user can change this value and click OK. The change takes effect immediately for the pending value. The dlparmgr daemon then works to bring the pending and current (runtime) values into sync. If these values are not in sync, the user sees the Warning icon, as in Figure 3-30 on page 64. Figure 3-28 on page 63 and Figure 3-29 on page 63 show a change.
Figure 3-30 Warning in work area because pending and current values are not in sync
Click the details hyperlink for more information about the resource synchronization, shown in Figure 3-31.
Note: The minimum and maximum memory values are enabled for the VIOS/IVM LPAR at all times. Table 3-2 provides possible memory field values.
Table 3-2 Possible field modifications: memory

Capability setting: Yes
Enabled fields: Assigned Memory
Introduction text: Modify the settings by changing the pending values. Changes will be applied immediately, but synchronizing the current and pending values might take some time.

Capability setting: No
Enabled fields: None
Introduction text: Modify the settings by changing the pending values. This LPAR does not currently support modifying these values while running, so pending values can be edited only when the LPAR is powered off.

Capability setting: Unknown
Enabled fields: Assigned Memory
Introduction text: Modify the settings by changing the pending values. This LPAR does not currently support modifying these values while running, so pending values can be edited only when the LPAR is powered off.
Processing tab
If the LPAR is powered on and processor dynamic LPAR capable (see capabilities on the General tab), then the Pending assigned values will be enabled. The minimum and maximum (processing units as well as virtual processors) values are still disabled. The user may change these values and select OK. The change will take effect immediately for the pending value. The dlparmgr daemon will then work to bring the pending and current (runtime) values into sync. If these values are not in sync, the user will see the Warning icon as in the memory panel. As with the Memory panel, the same rules apply with respect to the enabled fields and introductory text for the various capability options.
2. Click Modify partition assignment in the task area.
3. Select the partition to which you want to assign the virtual disks, as shown in Figure 3-33, and click OK to validate the virtual disk partition assignment.
4. Log in to the related LPAR and discover the new disks. On AIX 5L, use the cfgmgr command. Example 3-6 shows how the partition discovers two new virtual disks on AIX 5L.
Example 3-6 Virtual disk discovery
# lsdev -Ccdisk
hdisk0 Available
# cfgmgr
# lsdev -Ccdisk
hdisk0 Available
hdisk1 Available
You can also assign virtual disks by editing the properties of the LPAR.
$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_proc_units
0.40
$ chsyscfg -r prof -i "lpar_name=LPAR1,desired_proc_units=0.3"
$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_proc_units
0.30
A warning icon with an exclamation point inside it is displayed in the View/Modify Partitions screen if the current and pending values are not synchronized. Example 3-8 shows a memory increase operation.
Example 3-8 Increase the memory of LPAR1 by 256 MB
$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_mem
512
$ chsyscfg -r prof -i "lpar_name=LPAR1,desired_mem+=256"
$ lssyscfg -r prof --filter "lpar_names=LPAR1" -F desired_mem
768
to monitor its workload, manage its resources, or both. Workload management tools use partition workload groups to identify which LPARs they can manage. For example, Enterprise Workload Manager (EWLM) can dynamically and automatically redistribute processing capacity within a partition workload group to satisfy workload performance goals. EWLM adjusts processing capacity based on calculations that compare actual performance of work processed by the partition workload group to the business goals defined for the work. Workload management tools use dynamic LPAR to make resource adjustments based on performance goals. Therefore, each LPAR in the partition workload group must support dynamic LPAR. Verify that the LPAR that you want to add to the partition workload group supports dynamic LPAR for the resource type that your workload management tool adjusts as shown in Table 3-3. Note: Systems managed by the Integrated Virtualization Manager can have only one partition workload group per physical server. It is not required that all LPARs on a system participate in a partition workload group. Workload management tools manage the resources of only those LPARs that are assigned to a partition workload group. Workload management tools can monitor the work of an LPAR that is not assigned to a partition workload group, but they cannot manage the LPARs resources.
Table 3-3 Dynamic LPAR support

Logical partition type: AIX
Supports processor dynamic LPAR: Yes
Supports memory dynamic LPAR: Yes

Logical partition type: Linux
Supports processor dynamic LPAR: Yes
Supports memory dynamic LPAR: Yes/No (SLES 10 and RHEL 5 support memory add, but not memory removal, at this time)
For example, the partition management function of EWLM adjusts processor resources based on workload performance goals. Thus, EWLM can adjust the processing capacity for AIX and Linux LPARs.
The following recommendations apply to workload management:
Do not add the management partition to the partition workload group. To manage LPAR resources, workload management tools often require that you install some type of management or agent software on the LPARs. To avoid creating an unsupported environment, do not install additional software on the management partition.
The dynamic LPAR support listed in the previous table is not the same as the dynamic LPAR capabilities that are shown in the partition properties for an LPAR. The dynamic LPAR support listed in the previous table reflects what each operating system supports in regard to dynamic LPAR functions. The dynamic LPAR capabilities that are shown in the partition properties for an LPAR reflect a combination of:
A Resource Monitoring and Control (RMC) connection between the management partition and the client LPAR
The operating system's support of dynamic LPAR (see Table 3-3)
For example, suppose an AIX client LPAR does not have an RMC connection to the management partition, but AIX supports both processor and memory dynamic LPAR. In this situation, the dynamic LPAR capabilities shown in the partition properties for the AIX LPAR indicate that the AIX LPAR is not capable of processor or memory dynamic LPAR. However, because AIX supports processor and memory dynamic LPAR, a workload management
tool can dynamically manage its processor and memory resources. Workload management tools are not dependent on RMC connections to dynamically manage LPAR resources. If an LPAR is part of the partition workload group, you cannot dynamically manage its resources from the Integrated Virtualization Manager because the workload management tool is in control of dynamic resource management. Not all workload management tools dynamically manage both processor and memory resources. When you implement a workload management tool that manages only one resource type, you limit your ability to dynamically manage the other resource type. For example, EWLM dynamically manages processor resources, but not memory. AIX supports both processor and memory dynamic LPAR. EWLM controls dynamic resource management of both processor resources and memory for the AIX LPAR, but EWLM does not dynamically manage memory. Because EWLM has control of dynamic resource management, you cannot dynamically manage memory for the AIX LPAR from the Integrated Virtualization Manager. To add an LPAR to the partition workload group, complete the following steps: 1. Select the logical partition that you want to include in the partition workload group and click Properties. The Partition Properties window opens (Figure 3-34). 2. In the Settings section, select Partition workload group participant. Click OK.
Chapter 4.
Advanced configuration
Logical partitions require an available connection to the network and storage. The Integrated Virtualization Manager (IVM) provides several solutions using either the Web graphical interface or the command line interface. This chapter describes the following advanced configurations on networking, storage management, and security:
Virtual Ethernet bridging
Ethernet link aggregation
Disk space management
Disk data protection
Virtual I/O Server firewall
SSH support
The Web GUI hides the details of the network configuration. Example 4-1 on page 73 describes the VIOS configuration before the creation of the bridge. For each physical and virtual network adapter, an Ethernet device is configured. The IVM is connected to a physical network and four virtual network adapters are available.
$ lsdev | grep ^en
en0     Available   Standard Ethernet Network Interface
en1     Defined     Standard Ethernet Network Interface
en2     Defined     Standard Ethernet Network Interface
en3     Defined     Standard Ethernet Network Interface
en4     Defined     Standard Ethernet Network Interface
en5     Defined     Standard Ethernet Network Interface
en6     Defined     Standard Ethernet Network Interface
en7     Defined     Standard Ethernet Network Interface
ent0    Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1    Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2    Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3    Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4    Available   Virtual I/O Ethernet Adapter (l-lan)
ent5    Available   Virtual I/O Ethernet Adapter (l-lan)
ent6    Available   Virtual I/O Ethernet Adapter (l-lan)
ent7    Available   Virtual I/O Ethernet Adapter (l-lan)
$ lstcpip
Name   Mtu     Network   ...   Coll
en0    1500    link#2    ...   4
en0    1500    9.3.5     ...   4
lo0    16896   link#1    ...   0
lo0    16896   127       ...   0
lo0    16896   ::1       ...   0
(other lstcpip columns omitted)
When a virtual Ethernet bridge is created, a new shared Ethernet adapter (SEA) is defined, binding the physical device with the virtual device. If a network interface was configured on the physical adapter, the IP address is migrated to the new SEA. Example 4-2 shows the result of bridging virtual network 1 with the physical adapter ent0 when the IVM is using the network interface en0. A new ent8 SEA device is created, and the IP address of the IVM is migrated on the en8 interface. Due to the migration, all active network connections on en0 are reset.
Example 4-2 Shared Ethernet adapter configuration
$ lsdev | grep ^en
en0     Available   Standard Ethernet Network Interface
en1     Defined     Standard Ethernet Network Interface
en2     Defined     Standard Ethernet Network Interface
en3     Defined     Standard Ethernet Network Interface
en4     Defined     Standard Ethernet Network Interface
en5     Defined     Standard Ethernet Network Interface
en6     Defined     Standard Ethernet Network Interface
en7     Defined     Standard Ethernet Network Interface
en8     Available   Standard Ethernet Network Interface
ent0    Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1    Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2    Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3    Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent4    Available   Virtual I/O Ethernet Adapter (l-lan)
ent5    Available   Virtual I/O Ethernet Adapter (l-lan)
ent6    Available   Virtual I/O Ethernet Adapter (l-lan)
ent7    Available   Virtual I/O Ethernet Adapter (l-lan)
ent8    Available   Shared Ethernet Adapter
$ lstcpip
Name   Mtu     ...   Coll
en8    1500    ...   0
en8    1500    ...   0
et8*   1492    ...   0
et8*   1492    ...   0
lo0    16896   ...   0
lo0    16896   ...   0
lo0    16896   ...   0
(other lstcpip columns omitted)
In the environment shown in Example 4-3, it is possible to aggregate the two physical Ethernet adapters ent2 and ent3. A new link aggregation device, ent9, is created, as described in Example 4-3.
Example 4-3 Ethernet aggregation creation
$ mkvdev -lnagg ent2 ent3
ent9 Available
en9
et9
$ lsdev -dev ent9
name   status
ent9   Available
$ lsdev -dev en9
name   status
en9    Defined
Aggregated devices can be used to define an SEA. The SEA must be created using the mkvdev command with the following syntax:
mkvdev -sea TargetDevice -vadapter VirtualEthernetAdapter ...
       -default DefaultVirtualEthernetAdapter
       -defaultid SEADefaultPVID [-attr Attributes=Value ...] [-migrate]
Figure 4-2 shows the bridging of virtual network 4 with SEA ent9. The mkvdev command requires the identification of the virtual Ethernet adapter that is connected to virtual network 4. The lssyscfg command, with the lpar_names parameter set to the VIOS partition's name, provides the list of virtual adapters defined for the VIOS. The adapters are separated by commas, and their parameters are separated by slashes. The third parameter is the network number (4 in the example) and the first is the slot identifier (6 in the example). The lsdev command with the -vpd flag provides the physical location of virtual Ethernet adapters, which contains the letter C followed by the slot number. In the example, ent7 is the virtual Ethernet adapter connected to network 4. The created ent10 adapter is the new SEA.
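Putting the procedure just described together, a command sequence along the following lines can be used. The adapter names (ent7 and ent9), the slot and network numbers, and the resulting SEA name (ent10) are the ones used in this example; the first two commands are shown only to indicate where the slot and network information comes from, and this should be treated as a sketch rather than exact output:

$ lssyscfg -r prof --filter "lpar_names=VIOS" -F virtual_eth_adapters
$ lsdev -dev ent7 -vpd
$ mkvdev -sea ent9 -vadapter ent7 -default ent7 -defaultid 4
ent10 Available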
After the SEA is created using the command line, it is available from the IVM panels. It is displayed as a device with no location codes inside the parentheses because it uses a virtual device.
Figure 4-3 shows how IVM represents an SEA created using an Ethernet link aggregation.
The SEA can be removed using the IVM by selecting None as the physical adapter for the virtual network. When you click Apply, the IVM removes all devices that are related to the SEA, but the link aggregation remains active.
3. Enter the disk space to be added and click OK. If the virtual disk is owned by a running partition, a warning message opens, as shown in Figure 4-5, and you must select a check box to force the expansion. The additional disk space is allocated to the virtual disk, but it is not yet available to the operating system.
4. Under Virtual Storage Management in the IVM navigation area, click View/Modify Virtual Storage. From the work area, select the virtual disk and click Modify partition assignment. Unassign the virtual disk by selecting None in the New partition field. If the disk is owned by a running partition, a warning message opens, as shown in Figure 4-6, and you must select a check box to force the change.
5. Execute the same action as in step 4, but assign the virtual disk back to the partition.
6. On the operating system, follow the appropriate procedure to recognize the new disk size. On AIX 5L, issue the varyonvg command on the volume group to which the disk belongs and, as suggested by a warning message, issue the chvg -g command on the volume group to recompute the volume group size.
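For example, on an AIX 5L client where the expanded virtual disk belongs to a volume group named datavg (a hypothetical name), the sequence looks like this:

# varyonvg datavg
# chvg -g datavg

The chvg -g command examines all disks in the volume group and, if any of them have grown, recomputes the volume group size accordingly.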
Important: Mirrored logical volumes are not supported as virtual disks. This procedure mirrors all logical volumes defined in rootvg and must not be run if rootvg contains virtual disks.
The following steps describe how to provide a mirrored configuration for the rootvg storage pool:
1. Use the IVM to add a second disk of a similar size to rootvg. Under Virtual Storage Management in the navigation area, click View/Modify Virtual Storage, then go to the Physical Volumes tab. Select a disk of a similar size that is not assigned to any storage pool. Click Add to storage pool, as shown in Figure 4-7.
3. The actual mirroring is done using the VIOS command line. Log in as the padmin user ID and issue the mirrorios command, as shown in Example 4-4. The command asks for confirmation and causes a VIOS reboot to activate the configuration after performing data mirroring.
Example 4-4 rootvg mirroring at command line
$ mirrorios
This command causes a reboot. Continue [y|n]?
y
SHUTDOWN PROGRAM
Fri Oct 06 10:20:20 CDT 2006
Wait for 'Rebooting...' before stopping.
On the IVM, virtual disks are created out of storage pools. They are created using the minimum number of physical disks in the pool; if there is not enough space on a single disk, they can span multiple disks. If a virtual disk is expanded, the same allocation algorithm is applied. In order to guarantee mirror copy separation, we recommend that you create two storage pools and create one virtual disk from each of them.
After virtual storage is created and made available as an hdisk to AIX 5L, it becomes important to map it correctly. On the IVM, the command line interface is required: the lsmap command provides all of the mappings between physical and virtual devices. For each partition, there is a separate stanza, as shown in Example 4-5. Each logical or physical volume displayed in the IVM GUI is defined as a backing device, and the command provides the virtual storage's assigned logical unit number (LUN) value.
Example 4-5 IVM command line mapping of virtual storage
$ lsmap -all
...
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost1          U9111.520.10DDEDC-V1-C13                     0x00000003

VTD                   vtscsi1
LUN                   0x8100000000000000
Backing device        aixboot1
Physloc
...
On AIX 5L, the lscfg command can be used to identify the hdisk that uses the same LUN as shown by the IVM. Example 4-6 shows the command output with the 12-digit hexadecimal number representing the virtual disk's LUN.
Example 4-6 Identification of AIX 5L virtual SCSI disks logical unit number
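The captured output for this example is not reproduced here. As an illustrative sketch only, the invocation and the location code format look similar to the following, where the hdisk name and location code are hypothetical and the LUN appears as the 12 digits after the letter L:

# lscfg -vpl hdisk1
  hdisk1   U9111.520.10DDEDC-V3-C2-T1-L810000000000   Virtual SCSI Disk Drive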
PCI-X SCSI Disk Array Manager

Move cursor to desired item and press Enter.

  List PCI-X SCSI Disk Array Configuration
  Create an Array Candidate pdisk and Format to 522 Byte Sectors
  Create a PCI-X SCSI Disk Array
  Delete a PCI-X SCSI Disk Array
  Add Disks to an Existing PCI-X SCSI Disk Array
  Configure a Defined PCI-X SCSI Disk Array
  Change/Show Characteristics of a PCI-X SCSI Disk Array
  Reconstruct a PCI-X SCSI Disk Array
  Change/Show PCI-X SCSI pdisk Status
  Diagnostics and Recovery Options

F1=Help     F2=Refresh     F3=Cancel     F8=Image
F9=Shell    F10=Exit       Enter=Do
Before configuring firewall settings, you must first enable the Virtual I/O Server firewall. The following topic describes this action.
Note: The firewall settings are in the viosecure.ctl file in the /home/ios/security directory. You can use the -force option to enable the standard firewall default ports. For more about the force option, see the viosecure command description and Appendix 3.
You can use the default setting or configure the firewall settings to meet the needs of your environment by specifying which ports or port services to allow. You can also turn off the firewall to deactivate the settings. Use the following tasks at the VIOS command line to configure the VIOS firewall settings:
1. Enable the VIOS firewall by issuing the following command:
viosecure -firewall on
2. Specify the ports to allow or deny by using the following command:
viosecure -firewall allow | deny -port number
3. View the current firewall settings by issuing the following command:
viosecure -firewall view
4. If you want to disable the firewall configuration, issue the following command:
viosecure -firewall off
For more about any viosecure command option, see the viosecure command description.
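For instance, a minimal configuration that keeps the IVM Web GUI reachable over HTTP and HTTPS while blocking Telnet might look like the following; the port numbers are illustrative choices, not recommendations from this paper:

$ viosecure -firewall on
$ viosecure -firewall allow -port 80
$ viosecure -firewall allow -port 443
$ viosecure -firewall deny -port 23
$ viosecure -firewall view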
The low-level security settings are a subset of the medium-level security settings, which are a subset of the high-level security settings. Therefore, the High level is the most restrictive and provides the greatest level of control. You can apply all of the rules for a specified level or select which rules to activate for your environment. By default, no VIOS security levels are set; you must run the viosecure command to enable the settings. Use the following tasks to configure the system security settings:
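Those tasks are also run with the viosecure command. As a sketch (the -level and -apply options are assumptions to verify against the viosecure command description), applying all of the rules of the low level would look like this:

$ viosecure -level low -apply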
nim-ROOT[1156]/root/.ssh ># ssh-keygen -t dsa Generating public/private dsa key pair. Enter file in which to save the key (/root/.ssh/id_dsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_dsa. Your public key has been saved in /root/.ssh/id_dsa.pub. The key fingerprint is: d2:30:06:6b:68:e2:e7:fd:3c:77:b7:f6:14:b1:ce:35 root@nim nim-ROOT[1160]/root/.ssh 2. Verify that the keys are generated on your workstation (Example 4-9).
Example 4-9 Verify successful creation of id_dsa files
nim-ROOT[1161]/root/.ssh ># ls -l
total 16
-rw-------   1 root     system   ...   id_dsa
-rw-r--r--   1 root     system   ...   id_dsa.pub
nim-ROOT[1162]/root/.ssh
3. Now log in to the IVM through SSH. A known_hosts file has not yet been created; it is created during the first SSH login (Example 4-10).
Example 4-10 First SSH login toward IVM - known_hosts file creation
># ssh [email protected] The authenticity of host '9.3.5.123 (9.3.5.123)' can't be established. RSA key fingerprint is 1b:36:9b:93:87:c2:3e:97:48:eb:09:80:e3:b6:ee:2d. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '9.3.5.123' (RSA) to the list of known hosts. [email protected]'s password: Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111 Last login: Fri Oct 13 15:25:21 CDT 2006 on /dev/pts/1 from 9.3.5.111 $ Connection to 9.3.5.123 closed. nim-ROOT[1163]/root/.ssh ># ls -l total 24 -rw------1 root system 668 Oct 13 15:31 id_dsa -rw-r--r-1 root system 598 Oct 13 15:31 id_dsa.pub -rw-r--r-1 root system 391 Oct 13 15:33 known_hosts
The known_hosts file has been created.
4. The next step is to retrieve the authorized_keys2 file with FTP (get) from the IVM (Example 4-11).
Example 4-11 Transfer of authorized_keys2 file
220 IVM FTP server (Version 4.2 Fri Feb 3 22:13:23 CST 2006) ready.
Name (9.3.5.123:root): padmin
331 Password required for padmin.
Password:
230-Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111
230-Last login: Fri Oct 13 15:32:03 CDT 2006 on /dev/pts/1 from 9.3.5.111
230 User padmin logged in.
ftp> cd .ssh
250 CWD command successful.
ftp> ls
200 PORT command successful.
150 Opening data connection for ..
environment
authorized_keys2
226 Transfer complete.
ftp> get authorized_keys2
200 PORT command successful.
150 Opening data connection for authorized_keys2 (598 bytes).
226 Transfer complete.
599 bytes received in 7.5e-05 seconds (7799 Kbytes/s)
local: authorized_keys2 remote: authorized_keys2
ftp> by
221 Goodbye.
5. Add the contents of your local SSH public key (id_dsa.pub) to the authorized_keys2 file (Example 4-12).
Example 4-12 Add contents of local SSH public key to authorized_keys2 file
nim-ROOT[1169]/root/.ssh ># ftp 9.3.5.123
nim-ROOT[1169]/root/.ssh ># cat id_dsa.pub >> auth*
6. Verify the successful addition of the public key by comparing the size of the authorized keys file to the id_dsa.pub file (Example 4-13).
Example 4-13 Compare addition of public key
nim-ROOT[1209]/root/.ssh ># ls -l
total 32
-rw-r--r--   1 root     system    ...  Oct 13 ...  authorized_keys2
-rw-------   1 root     system    ...  Oct 13 ...  id_dsa
-rw-r--r--   1 root     system    ...  Oct 13 ...  id_dsa.pub
-rw-r--r--   1 root     system    ...  Oct 13 ...  known_hosts
7. Transfer the authorized key file back to the IVM into the directory /home/padmin/.ssh (Example 4-14).
Example 4-14 FTP of authorized key back to IVM
nim-ROOT[1171]/root/.ssh ># ftp 9.3.5.123
Connected to 9.3.5.123.
220 IVM FTP server (Version 4.2 Fri Feb 3 22:13:23 CST 2006) ready.
Name (9.3.5.123:root): padmin
331 Password required for padmin.
Password:
230-Last unsuccessful login: Fri Oct 13 15:23:50 CDT 2006 on ftp from ::ffff:9.3.5.111
230-Last login: Fri Oct 13 15:35:44 CDT 2006 on ftp from ::ffff:9.3.5.111
230 User padmin logged in.
ftp> cd .ssh
250 CWD command successful.
ftp> put authorized_keys2
200 PORT command successful.
150 Opening data connection for authorized_keys2.
226 Transfer complete.
599 bytes sent in 0.000624 seconds (937.4 Kbytes/s)
local: authorized_keys2 remote: authorized_keys2
ftp> by
221 Goodbye.
8. Verify that the key can be read by the SSH daemon on the IVM and test the connection by typing the ioslevel command (Example 4-15).
Example 4-15 Test the configuration
nim-ROOT[1173]/root/.ssh ># ssh [email protected]
Last unsuccessful login: Fri Oct 13 15:23:50 2006 on ftp from ::ffff:9.3.5.111
Last login: Fri Oct 13 15:37:33 2006 on ftp from ::ffff:9.3.5.111
$ ioslevel
1.3.0.0
After establishing these secure remote connections, we can execute several commands. For example:
ssh [email protected]
This gives us an interactive login (a host name is also possible).
ssh -t [email protected] ioscli mkvt -id 2
This enables us to get a console directly to the client LPAR with ID 2.
ssh [email protected] lssyscfg -r sys
Example 4-16 shows the output of this command run as padmin.
Example 4-16 Output of the padmin command
nim-ROOT[1217]/root/.ssh ># ssh [email protected] lssyscfg -r sys
name=p520-ITSO,type_model=9111-520,serial_num=10DDEEC,ipaddr=9.3.5.127,state=Operating,
sys_time=10/13/06 17:39:22,power_off_policy=0,cod_mem_capable=0,cod_proc_capable=1,
os400_capable=1,micro_lpar_capable=1,dlpar_mem_capable=1,assign_phys_io_capable=0,
max_lpars=20,max_power_ctrl_lpars=1,service_lpar_id=1,service_lpar_name=VIOS,
mfg_default_config=0,curr_configured_max_lpars=11,pend_configured_max_lpars=11,
config_version=0100010000000000,pend_lpar_config_state=enabled
nim-ROOT[1218]/root/.ssh
Chapter 5.
Maintenance
This chapter provides information about maintenance operations on the Integrated Virtualization Manager (IVM). This chapter discusses the following topics:
IVM backup and restore
Logical partition backup and restore
IVM upgrade
Managed system firmware update
IVM migration
Command logging
Integration with IBM Director
A file named profile.bak is generated and stored in the user's home directory. In the work area, you can select this file name and save it to a disk. There is only one backup file at a time; a new backup file replaces an existing one. The backup file contains the LPAR configuration, such as processors, memory, and network settings. Information about virtual disks is not included in the backup file.
In order to perform a restore operation, the system must not have any LPAR configuration defined. Click Restore Partition Configuration to restore the last backed-up file. If you want to restore a backup file stored on your disk, follow these steps:
1. Click Browse and select the file.
2. Click Upload Backup File. The uploaded file replaces the existing backup file.
3. Click Restore Partition Configuration to restore the uploaded backup file.
You can also back up and restore LPAR configuration information from the CLI. Use the bkprofdata command to back up the configuration information and the rstprofdata command to restore it. See the VIO Server and PLM command descriptions in the Information Center at the following Web page for more information:
http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=/iphb1/iphb1_vios_commandslist.htm
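A sketch of that CLI path follows; the file name matches the profile.bak file mentioned above, and the -l (restore type) value is an assumption to check against the rstprofdata command description:

$ bkprofdata -o backup -f /home/padmin/profile.bak
$ rstprofdata -l 1 -f /home/padmin/profile.bak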
Important: The backup operation does not save the data contained in virtual disks or physical volumes assigned to the LPARs.
The backup can use one of the following media types:
File
Tape
CD-R
DVD-RAM
To restore the management partition, install the operating system using the bootable media created by the backup process.
$ ioslevel
1.2.1.4-FP-7.4
$
In the example, the level of the VIOS software is 1.2.1.4 with Fix Pack 7.4. If we now go back to the mentioned Web site (Figure 5-3), we notice that a newer fix pack is available: FP 8.0.
Fix Pack 8.0 provides a migration path for existing Virtual I/O Server installations. Applying this package will upgrade the VIOS to the latest level, V1.3.0.0. All VIOS fix packs are cumulative and contain all fixes from previous fix packs.
To take full advantage of all of the available functions in the VIOS, it is necessary to be at a system firmware level of SF235 or later. SF230_120 is the minimum level of SF230 firmware supported by the Virtual I/O Server V1.3. If a system firmware update is necessary, it is recommended that the firmware be updated before upgrading the VIOS to Version 1.3.0.0. (See 5.3.1, Microcode update on page 110.) The VIOS Web site has a direct link to the microcode download site: http://www14.software.ibm.com/webapp/set2/firmware/gjsn Important: Be sure to have the right level of firmware before updating the IVM. All interim fixes applied to the VIOS must be manually removed before applying Fix Pack 8.0. VIOS customers who applied interim fixes to the VIOS should use the following procedure to remove them prior to applying Fix Pack 8.0. Example 5-2 shows how to list fixes.
Example 5-2 Listing fixes
$ oem_setup_env     /* from the VIOS command line
$ emgr -P           /* gives a list of the installed efixes (by label)
$ emgr -r -L        /* for each efix listed, run this command to remove it
$ exit
Note: It is recommended that AIX 5L client partitions using VSCSI devices be upgraded to AIX 5L Maintenance Level 5300-03 or later.
For the first download option, which retrieves the latest fix pack using the Download Director, all filesets are downloaded into a user-specified directory. When the download has completed, the updates can be applied from a directory on your local hard disk:
1. Log in to the Virtual I/O Server as the user padmin.
2. Create a directory on the Virtual I/O Server:
$ mkdir directory_name
3. Using the ftp command, transfer the update file (or files) to the directory you created.
4. Apply the update by running the updateios command:
$ updateios -dev directory_name -install -accept
Accept to continue the installation after the preview update is run.
5. Reboot.
Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package.
$ ioslevel
1.3.0.0-FP-8.0
Verify a successful update by checking the results of the updateios command and running the ioslevel command. The result of the ioslevel command should equal the level of the downloaded package:
$ ioslevel
1.3.0.0-FP-8.0
If an HMC was connected to a system using the IVM, the following steps explain how to re-enable the IVM capabilities:
1. Power off the system.
2. Remove the system definition from the HMC.
3. Unplug the HMC network cable from the system if directly connected.
4. Connect a TTY console emulator with a serial cross-over cable to one of the system's serial ports.
5. Press any key on the console to open the service processor prompt.
6. Log in as the user admin and answer the questions about the number of lines and columns.
7. Reset the service processor. Type 2 to select 2. System Service Aids, type 10 to select 10. Reset Service Processor, and then type 1 to confirm your selection. Wait for the system to reboot.
8. Reset it to the factory configuration (Manufacturing Default Configuration). Type 2 to select 2. System Service Aids, type 11 to select 11. Factory Configuration, and then type 1 to confirm. Wait for the system to reboot.
9. Configure the ASMI IP addresses if needed. Type 5 to select 5. Network Services, type 1 to select 1. Network Configuration, and then configure each Ethernet adapter. For more information, refer to 2.3, ASMI IP address setup on page 23.
10. Start the system. Type 1 to select 1. Power/Restart Control, type 1 to select 1. Power On/Off System, type 8 to select 8. Power on, and press Enter to confirm your selection.
11. Go to the SMS menu.
12. Update the boot list. Type 5 to select 5. Select Boot Options, type 2 to select 2. Configure Boot Device Order, and select the IVM boot disk.
13. Boot the system.
14. Wait for the IVM to start.
15. Connect to the IVM with the GUI.
16. Restore the partition configuration using the last backup file. From the Service Management menu in the navigation area, click Backup/Restore, and then click Restore Partition Configuration in the work area. For more information, refer to 5.1.1, Backup and restore of the logical partition definitions on page 92. This operation only updates the IVM partition configuration and does not restore the LPARs hosted by the IVM.
17. Reboot the IVM. (If the changes do not require a reboot, the next restore can be done immediately.)
18. Restore the partition configuration again using the last backup file. This time, each LPAR definition is restored.
19. Reboot the IVM. This reboot is needed to make each virtual device available to the LPARs. (This can also be achieved by issuing the cfgdev command.)
VIOS version
The ioslevel command displays the VIOS version; you will see output similar to this:
$ ioslevel
1.3.0.0
$ lsdev -type adapter
name       status      description
ent0       Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1       Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2       Available   10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent3       Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent4       Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent5       Available   Virtual I/O Ethernet Adapter (l-lan)
ent6       Available   Shared Ethernet Adapter
ide0       Available   ATA/IDE Controller Device
sisscsia0  Available   PCI-X Dual Channel Ultra320 SCSI Adapter
sisscsia1  Available   PCI-X Dual Channel Ultra320 SCSI Adapter
usbhc0     Available   USB Host Controller (33103500)
usbhc1     Available   USB Host Controller (33103500)
vhost0     Available   Virtual SCSI Server Adapter
vhost1     Available   Virtual SCSI Server Adapter
vsa0       Available   LPAR Virtual Serial Adapter
$ lsvg -lv db_sp
db_sp:
LV NAME  TYPE  LPs  PPs  PVs  LV STATE
db_lv    jfs   800  800  1    open/syncd
If you want to display the attributes of a device, use the lsdev -dev DeviceName -attr command. You can use the lsdev -slots command for slot information and the lsdev -dev DeviceName -child command for the child devices associated with a device. Also, you can use the lsvg -lv VolumeGroupName command to discover the system disk configuration and volume group information.
Tip: Note the physical location code of the disk unit that you are using to boot the VIOS. To display it, use the lsdev -dev DeviceName -vpd command.
To migrate from an HMC to an IVM environment, the VIOS must own all of the physical devices. You must check the profile of the VIOS as shown in Figure 5-5.
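As a quick sketch of the documentation commands just listed (the device and volume group names are examples taken from the listings in this section):

$ lsdev -dev sisscsia0 -attr
$ lsdev -dev sisscsia0 -child
$ lsdev -slots
$ lsvg -lv db_sp
$ lsdev -dev hdisk0 -vpd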
Back up VIOS, virtual I/O client profiles, and virtual I/O devices
You should document the information in the virtual I/O clients that have a dependency on the virtual SCSI server and virtual SCSI client adapters, as shown in Figure 5-6 on page 102.
$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U9111.520.10DDEEC-V1-C3                      0x00000002

VTD                   vopt0
LUN                   0x8300000000000000
Backing device        cd0
Physloc               U787A.001.DNZ00XK-P4-D3

VTD                   vscsi0
LUN                   0x8100000000000000
Backing device        dbroot_lv
Physloc
This migration has the following requirements:
VIOS of the HMC-managed environment owns all physical I/O devices
Backup of VIOS and VIOC
VIOS Version 1.2 or above
System firmware level SF230_120 or above
Figure 5-7 shows the general migration procedure from HMC to an IVM environment. There is some dependency on system configuration.
1. Reset to manufacturing default configuration
If you decide to perform this migration, it is necessary to restore the firmware settings, network configuration, and passwords to their factory defaults. When you reset the firmware, it removes all partition configuration and any personalization that has been made to the service processor. A default full system partition is created to handle all hardware resources. Without an HMC, the system console is provided through the internal serial
ports, and connections are made using a serial ASCII console and cross-over cable connected to the serial port. If you perform the firmware reset after detaching the HMC, the HMC will retain information about the server as a managed system. You can remove this using the HMC GUI. When a console session is opened to the reset server, at the first menu, select 1.Power/Restart Control 1.Power On/Off system as shown in Example 5-5.
Example 5-5 Power On/Off System
Power On/Off System

Current system power state: Off
Current firmware boot side: Temporary
Current system server firmware state: Not running

1. System boot speed
   Currently: Fast
2. Firmware boot side for the next boot
   Currently: Temporary
3. System operating mode
   Currently: Normal
4. Boot to system server firmware
   Currently: Standby
5. System power off policy
   Currently: Automatic
6. Power on
98. Return to previous menu
99. Log out

Example 5-5 shows that the Power on menu is 6. This means that the firmware reset has not been performed and the system is still managed by an HMC. If the firmware reset is performed and the system is no longer managed by an HMC, then the Power on menu is 8. You can reset the service processor or put the server back to factory configuration through the System Service Aids menu in ASMI.
2. Change the serial connection for IVM
When you change the management system from HMC to IVM, you can no longer use the default console connection through vty0. You will change the console connection, as shown in Example 5-6. This is effective after the VIOS reboot, and you will change the physical serial connection from SPC1 to SPC2 for using the vty1 console connection.
Example 5-6 Serial connection change for IVM
# lscons
NULL
# lsdev -Cc tty
vty0 Defined    Asynchronous Terminal
vty1 Available  Asynchronous Terminal
vty2 Available  Asynchronous Terminal
# lsdev -Cl vty0 -F parent
vsa0
# lsdev -Cl vty1 -F parent
vsa1
# lsdev -Cl vsa1
vsa1 Available  LPAR Virtual Serial Adapter
# chcons /dev/vty1
chcons: console assigned to: /dev/vty1, effective on next system boot
3. Connect to the IVM Web interface using the VIOS IP address
The first Web interface panel that opens after the login process is View/Modify Partitions, as shown in Figure 5-8. You can see only a VIOS partition. The IVM does not have any information about other virtual I/O clients because the service processor was reset to the manufacturing default configuration.
4. Re-create virtual devices and Ethernet bridging When changed to an IVM environment, the VIOS (now Management Partition) still has virtual device information left over from the HMC environment. There is the virtual SCSI, virtual Ethernet, shared Ethernet, and virtual target device information, but their status is changed to defined after migrating to an IVM environment. Because these virtual devices no longer exist, you should remove them before creating the virtual I/O clients in IVM. You can remove the virtual devices as shown in Example 5-7. If you define virtual disks for clients from the Management Partition, the virtual SCSI server and client devices are created automatically for you.
Example 5-7 Remove the virtual device
vtscsi0 deleted
vhost0 deleted
$ rmdev -dev ent4
ent4 deleted
$ rmdev -dev en4
en4 deleted
$ rmdev -dev et4
et4 deleted
After removing the virtual devices, you can re-create virtual devices using the cfgdev command or through the IVM GUI, and the Virtual Ethernet Bridge for virtual I/O clients in the View/Modify Virtual Ethernet pane, as shown in Figure 5-9.
5. Re-create virtual I/O clients Because the IVM does not have virtual I/O clients information, you will have to re-create virtual I/O clients using the IVM Web interface. For more information about creating LPARs, refer to 3.2, IVM graphical user interface on page 38. When you choose Storage type, select Assign existing virtual disks and physical volumes as shown in Figure 5-10 on page 107. You can also let the IVM create a virtual disk for you by selecting Create virtual disk when needed. Tip: You should export any volume group containing client data using the exportvg command. After migrating, import the volume groups using the importvg command. This is a more efficient method to migrate the client data without loss.
1. Connect System p server to an HMC The server is connected and recognized by the HMC, and the IVM interface will be disabled immediately, effectively making it just a VIOS partition as shown in Figure 5-12.
2. Recover server configuration data to the HMC
Add the managed system to the HMC; the managed system will then go into recovery mode, as shown in Figure 5-13. Right-click the managed system, then select Recover Partition Data, and then Restore profile data from HMC backup data. Make sure that at least one of the LPARs is up and running; otherwise, the HMC might delete all of the LPARs.
3. Re-create the partition profile
After the recovery completes, the HMC displays all partitions in the managed system without a profile. The VIOS should be able to use the virtual Ethernet adapter created in the IVM environment when it is rebooted. The IVM devices will appear in the Defined state, as shown in Example 5-8. More information about creating partitions and profiles can be found on the following Web site:
http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/topic/iphbl/iphblcreatelpar.htm
4. Re-create virtual devices and Ethernet bridging
Because everything is identical from the PHYP side, you normally should not need to re-create virtual devices or bridging. However, if this is not the case, after removing the previous virtual devices, you can create the VIOS profile including the virtual server SCSI adapter and virtual Ethernet adapter. Then re-create the virtual devices to bridge between the VIOS and the virtual I/O clients, as shown in Example 5-8.
Example 5-8 Re-create bridge between VIOS and virtual I/O clients
<< SEA creation >>
$ mkvdev -sea ent0 -vadapter ent5 -default ent5 -defaultid 1
ent6 Available
en6
et6
<< Virtual Disk Mapping >>
$ mkvdev -vdev dbroot_lv -vadapter vhost0 -dev vscsi0
vscsi0 Available
$ mkvdev -vdev cd0 -vadapter vhost0 -dev vtopt0
vtopt0 Available
Also, you will create the virtual I/O client profiles, including the virtual client SCSI adapter and virtual Ethernet adapters. For more information about the creation of virtual devices on the VIOS, refer to the IBM Redbook Advanced POWER Virtualization on IBM System p5, SG24-7940.
Important: Before migrating from an IVM environment to an HMC, it is necessary to back up the VIOS and VIOC. For more information about backup, refer to Section 2.1 in the IBM System p Advanced POWER Virtualization Best Practices, REDP-4194.
2. Click Generate New Survey. This generates a list of devices, as shown in Figure 5-14.
3. From the Microcode Survey Results list, select one or more items to upgrade. Click the Download link in the task area.
4. Information appears about the selected devices such as the available microcode level and the commands you need in order to install the microcode update, as shown in Figure 5-15. Select the Accept license check box in the work area, and click OK to download the selected microcode and store it on the disk.
5. Log in to the IVM using a terminal session. 6. Run the install commands provided by the GUI in step 3 on page 111. If you are not able to connect to the GUI of the IVM and a system firmware update is needed, refer to 2.2, Microcode update on page 21 for the update procedure with a diagnostic CD.
For more information, refer to 2.4, Virtualization feature activation on page 26.
The mkcd command creates a system backup image (mksysb) to CD-Recordable (CD-R) or DVD-RAM media from the system rootvg or from a previously created mksysb image. Multiple volumes are possible for backups over 4 GB. You can create a /mkcd file system that is very large (1.5 GB for CDs or 9 GB for DVDs). The /mkcd file system can then be mounted onto the clients when they want to create a backup CD or DVD for their systems.
Note: When creating very large backups (DVD-sized backups larger than 2 GB) with the mkcd command, the file systems must be large-file enabled, and this requires that the ulimit values are set to unlimited.
Network Installation Management (NIM) creates a system backup image from a logical partition rootvg using the network. For more information, see:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.install/doc/insgdrf/create_sys_backup.htm
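As a sketch of the mkcd path described above, run as root on the logical partition being backed up (the optical device name is an assumption; other mkcd options, described in its documentation, control whether a CD or DVD image is produced):

# mkcd -d /dev/cd1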
Attention: The figure shows an early code level, current at the time of writing, and is subject to change.
The support for IVM directly leverages the support for the Hardware Management Console (HMC) that was available in IBM Director 5.10. IVM contains a running CIMOM that has information about the physical system it is managing and all of the LPARs. The CIMOM also forwards event information to IBM Director (see Figure 1-4 on page 10). Because the IVM provides a Web GUI for creating, deleting, powering on, and powering off LPARs, it also enables the client to manage events that have occurred on the system.
How does it work, and how is it integrated? Before IBM Director can manage an IVM system, the system must be added to IBM Director using one of two methods:
The client can choose to create a new system, in which case the IP address is provided and IBM Director validates it and, if validated, creates a managed object for the IVM. This managed object appears on the IBM Director console with a padlock icon next to it, indicating that the managed object is locked and needs authentication information to unlock it. The user has to Request Access to the managed object, giving it the user ID and password.
The other way is to discover Level 0: Agentless Systems. This causes IBM Director to interrogate systems that are reachable based on Director's Discovery Preferences for Level 0. In this case, zero or more managed objects are created and locked as above; some may be IVM systems, some might not be. This is determined after access has been granted. This time, the user has to Request Access to the managed object so that IBM Director can determine which ones are IVM managed systems.
After a user requests access to a Level 0 managed object and access is granted, an attribute is set to identify it as belonging to an IVM system. When this happens, IBM Director creates a Logical Platform managed object for it and passes it the authentication details. It also indicates that this managed object is a Platform Manager. After this is done, Director connects to the CIMOM on the IVM system and begins discovering the resources that are managed by IVM, such as the physical system and each of its LPARs. Each of these resources also has a managed object representation on the Director console.

All discovery of the resources starts from the IBM_HwCtrlPoint CIM object. From that object, Director uses IBM_TaggedCollection, an association between the hardware control point and the objects that represent the physical system; this yields an instance of the IBMP_CEC_CS class. Before the LPARs are discovered, the power status must be provided; it is obtained from the IBM_AssociatedPowerManagementService association, which returns an object containing the PowerState property that is used to set the Power State attribute on the CEC and, subsequently, on the LPARs. Director then uses the association between the IBMP_CEC_CS object and the IBMP_LPAR_CS objects to get the objects for all LPARs. This gives the whole topology; a command-line sketch of browsing these classes follows this section. Finally, Director subscribes to the CIMOM for event notification.

IBM Director has a presence check facility. It is enabled by default with a default interval of 15 minutes: every 15 minutes (or whatever interval the user chooses), a presence check is attempted on the managed object for the IVM and on all of the managed objects that it is managing. This is done by attempting to connect to the CIMOM on the IVM system.

These presence checks can happen either before or after a request for access has completed successfully. The presence check uses the credentials that the managed object has, so if the presence check runs before access is requested, IBM Director gets either a failure to connect or an invalid-authentication indication. If IBM Director gets a failure to connect, the managed object is marked offline and remains that way until a presence check returns an invalid-authentication indication. While the managed object is in the offline state, the user cannot request access to it. If IBM Director was receiving a failure to connect because of a networking problem or because the hardware was turned off, fixing those problems causes the managed object to go back to online and locked. At this point, the user can request access.

After access is granted, subsequent presence checks use the validated credentials to connect to the CIMOM, so the possible outcomes are failure to connect or a successful connection. If the connection is successful, the presence check does a topology scan and verifies that all resources have managed objects and that all managed objects represent existing resources. If that is not the case, managed objects are created or deleted to make the two lists agree. Normally, events are created when, for example, an LPAR is deleted; when this happens, IBM Director deletes the managed object for that LPAR. Because an LPAR could be deleted while the Director server is down for some reason, the validation done by the presence check keeps things in sync. IBM Director subscribes to events with the CIMOM on the IVM.
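The CIM classes named above can also be browsed with a generic CIM client, which can help when verifying that the IVM CIMOM is reachable. The following is a minimal sketch using the sblim wbemcli tool; the namespace (root/ibmsd), the port, and the credentials are assumptions for illustration and may differ on a real IVM system.

# Assumed values for illustration only
HOST=ivm.example.com
USER=padmin
PASS=secret

# Hardware control point that discovery starts from
wbemcli ein "http://$USER:$PASS@$HOST:5988/root/ibmsd:IBM_HwCtrlPoint"
# The managed system (CEC) and its logical partitions
wbemcli ein "http://$USER:$PASS@$HOST:5988/root/ibmsd:IBMP_CEC_CS"
wbemcli ein "http://$USER:$PASS@$HOST:5988/root/ibmsd:IBMP_LPAR_CS"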
Some events require action from IBM Director, such as power-on or power-off events or the creation or deletion of an LPAR, and some require no action. All events that IBM Director receives are recorded in the Director Event Log, and those that require action are acted on. For example, if an LPAR is deleted, Director's action is to remove the managed object from the console. If an LPAR is powered on, the managed object for the LPAR shows the new power state. IBM Director also provides a means of doing inventory collection. For IVM, it collects physical and virtual information about processors and memory.
Appendix A.
The following comparison summarizes the differences between the Integrated Virtualization Manager (IVM) and the Hardware Management Console (HMC).

Physical footprint
- IVM: Integrated into the server
- HMC: A desktop or rack-mounted appliance

Installation
- IVM: Installed with the VIOS (optical or network). Preinstall option available on some systems.
- HMC: Appliance is preinstalled. Reinstall using optical media or network is supported.

Operating systems supported
- IVM: AIX 5L and Linux
- HMC: AIX 5L, Linux, and i5/OS

Virtual console support
- IVM: AIX 5L and Linux virtual console support
- HMC: AIX 5L, Linux, and i5/OS virtual console support

User security
- IVM: Password authentication with support for either full or read-only authorities
- HMC: Password authentication with granular control of task-based authorities and object-based authorities

Network security
- IVM: Firewall support via command line; Web server SSL support
- HMC: Integrated firewall; SSL support for clients and for communications with managed systems

Servers supported
- IVM: System p5 505 and 505Q Express; System p5 510 and 510Q Express; System p5 520 and 520Q Express; System p5 550 and 550Q Express; System p5 560Q Express; eServer p5 510 and 510 Express; eServer p5 520 and 520 Express; eServer p5 550 and 550 Express; OpenPower 710 and 720; BladeCenter JS21
- HMC: All POWER5 and POWER5+ processor-based servers: System p5 and System p5 Express, eServer p5 and eServer p5 Express, OpenPower, eServer i5

Multiple system support
- IVM: One IVM per server
- HMC: One HMC can manage multiple servers

Redundancy
- IVM: One IVM per server
- HMC: Multiple HMCs can manage the same system for HMC redundancy

Maximum number of partitions supported
- IVM: Firmware maximum
- HMC: Firmware maximum

Uncapped partition support
- IVM: Yes
- HMC: Yes

Dynamic Resource Movement (dynamic LPAR)
- IVM: System p5 support for processing and memory; BladeCenter JS21 support for processing only
- HMC: Yes, full support

I/O support for AIX 5L and Linux
- HMC: Virtual and Direct

I/O support for i5/OS
- HMC: Virtual and Direct

Maximum number of virtual LANs
- HMC: 4096

Fix/update process for Manager
- IVM: VIOS fixes and updates
- HMC: HMC e-fixes and release updates

Adapter microcode updates
- IVM: Inventory scout
- HMC: Inventory scout

Firmware updates
- IVM: VIOS firmware update tools (not concurrent)
- HMC: Service Focal Point with concurrent firmware updates

I/O concurrent maintenance
- IVM: VIOS support for slot and device level concurrent maintenance via the diag hot plug support
- HMC: Guided support in the Repair and Verify function on the HMC

Scripting and automation
- IVM: VIOS command line interface (CLI) and HMC-compatible CLI
- HMC: HMC command line interface

Capacity on Demand
- IVM: No support
- HMC: Full support

User interface
- IVM: Web browser (no local graphical display)
- HMC: WebSM (local or remote)

Workload Management (WLM) groups supported
- IVM: One
- HMC: 254

LPAR configuration data backup and restore
- IVM: Yes
- HMC: Yes

Support for multiple profiles per partition
- IVM: No
- HMC: Yes

Serviceable event management
- IVM: Service Focal Point Light: consolidated management of firmware and management of partition detected errors; dump collection with support to do manual dump downloads; no remote support connectivity
- HMC: Service Focal Point support for consolidated management of operating system and firmware detected errors; dump collection and call home support; full remote support for the HMC and connectivity for firmware remote support
Appendix B.
System requirements
The following are the currently supported systems:
- System p5 505 and 505Q Express
- System p5 510 and 510Q Express
- System p5 520 and 520Q Express
- System p5 550 and 550Q Express
- System p5 560Q Express
- eServer p5 510 and 510 Express
- eServer p5 520 and 520 Express
- eServer p5 550 and 550 Express
- OpenPower 710 and 720
- BladeCenter JS21

The required firmware level is SF235 or later (not applicable to BladeCenter JS21).

The minimum supported software levels are:
- AIX 5L V5.3 or later
- SUSE Linux Enterprise Server 9 for POWER (SLES 9) or later
- Red Hat Enterprise Linux AS 3 for POWER, Update 2 (RHEL AS 3) or later
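A quick way to check whether an existing installation meets these levels is sketched below. It assumes shell access to an AIX 5L partition and to the Virtual I/O Server restricted shell; output formats vary by release.

# On an AIX 5L client partition: display the operating system level
oslevel -r
# Display the platform firmware level
lsmcode -c
# On the Virtual I/O Server (as padmin): display the VIOS level that provides IVM
ioslevel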
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this Redpaper.
IBM Redbooks
For information about ordering these publications, see How to get IBM Redbooks on page 127. Note that some of the documents referenced here may be available in softcopy only.
- Advanced POWER Virtualization on IBM System p5, SG24-7940, draft available, expected publication date December 2005
- IBM System p5 505 and 505Q Technical Overview and Introduction, REDP-4079
- IBM eServer p5 510 Technical Overview and Introduction, REDP-4001
- IBM eServer p5 520 Technical Overview and Introduction, REDP-9111
- IBM eServer p5 550 Technical Overview and Introduction, REDP-9113
- Managing AIX Server Farms, SG24-6606
- Partitioning Implementations for IBM eServer p5 Servers, SG24-7039
- Practical Guide for SAN with pSeries, SG24-6050
- Problem Solving and Troubleshooting in AIX 5L, SG24-5496
- Understanding IBM eServer pSeries Performance and Sizing, SG24-4810
Other publications
These publications are also relevant as further information sources:
- RS/6000 and eServer pSeries Adapters, Devices, and Cable Information for Multiple Bus Systems, SA38-0516, contains information about adapters, devices, and cables for your system.
- RS/6000 and eServer pSeries PCI Adapter Placement Reference for AIX, SA38-0538, contains information regarding slot restrictions for adapters that can be used in this system.
- System Unit Safety Information, SA23-2652, contains translations of safety information used throughout the system documentation.
- IBM eServer Planning, SA38-0508, contains site and planning information, including power and environment specifications.
Online resources
These Web sites and URLs are also relevant as further information sources:
- AIX 5L operating system maintenance packages downloads
  http://www.ibm.com/servers/eserver/support/pseries/aixfixes.html
- IBM eServer p5, pSeries, OpenPower and IBM RS/6000 Performance Report
  http://www.ibm.com/servers/eserver/pseries/hardware/system_perf.html
- IBM TotalStorage Expandable Storage Plus
  http://www.ibm.com/servers/storage/disk/expplus/index.html
- IBM TotalStorage Mid-range Disk Systems
  http://www.ibm.com/servers/storage/disk/ds4000/index.html
- IBM TotalStorage Enterprise disk storage
  http://www.ibm.com/servers/storage/disk/enterprise/ds_family.html
- IBM Virtualization Engine
  http://www.ibm.com/servers/eserver/about/virtualization/
- Advanced POWER Virtualization on IBM eServer p5
  http://www.ibm.com/servers/eserver/pseries/ondemand/ve/resources.html
- Virtual I/O Server supported environments
  http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html
- Hardware Management Console support information
  http://techsupport.services.ibm.com/server/hmc
- IBM LPAR Validation Tool (LVT), a PC-based tool intended to assist you in logical partitioning
  http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.htm
- Customer Specified Placement and LPAR Delivery
  http://www.ibm.com/servers/eserver/power/csp/index.html
- SUMA on AIX 5L
  http://techsupport.services.ibm.com/server/suma/home.html
- Linux on IBM eServer p5 and pSeries
  http://www.ibm.com/servers/eserver/pseries/linux/
- SUSE Linux Enterprise Server 9
  http://www.novell.com/products/linuxenterpriseserver/
- Red Hat Enterprise Linux details
  http://www.redhat.com/software/rhel/details/
- IBM eServer Linux on POWER Overview
  http://www.ibm.com/servers/eserver/linux/power/whitepapers/linux_overview.html
- Autonomic computing on IBM eServer pSeries servers
  http://www.ibm.com/autonomic/index.shtml
- IBM eServer p5 AIX 5L Support for Micro-Partitioning and Simultaneous Multithreading white paper
  http://www.ibm.com/servers/aix/whitepapers/aix_support.pdf
- Hardware documentation
  http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/
- IBM eServer Information Center
  http://publib.boulder.ibm.com/eserver/
- IBM eServer pSeries support
  http://www.ibm.com/servers/eserver/support/pseries/index.html
- IBM eServer support: Tips for AIX 5L administrators
  http://techsupport.services.ibm.com/server/aix.srchBroker
- Linux for IBM eServer pSeries
  http://www.ibm.com/servers/eserver/pseries/linux/
- Microcode Discovery Service
  http://techsupport.services.ibm.com/server/aix.invscoutMDS
- POWER4 system microarchitecture, comprehensively described in the IBM Journal of Research and Development, Vol. 46, No. 1, January 2002
  http://www.research.ibm.com/journal/rd46-1.html
- SCSI T10 Technical Committee
  http://www.t10.org
- Microcode Downloads for IBM eServer i5, OpenPower, p5, pSeries, and RS/6000 systems
  http://techsupport.services.ibm.com/server/mdownload
- VIO Server and PLM command descriptions
  http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.jsp?topic=/iphb1/iphb1_vios_commandslist.htm