The Blog from the DBA Classroom

By: Joel Goodman


Maintaining your Cell’s Image

Posted by Joel Goodman on 14/10/2011


I have been teaching the Oracle Exadata Database Machine Administration course for two years now, and recently the one-day seminar on Database Machine (DBM) monitoring. These two will soon be merged into an upgraded course that covers both topics, along with new material on monitoring InfiniBand switches, Cisco switches, KVM hardware, patching and much more.

In the classroom, Oracle use a virtualised image of an X2-2 quarter rack on each delegate desktop, running under Oracle VM (OVM). But the virtual machines for the Exadata storage servers, also known as cells, differ from real storage servers in several ways:

  • The 12 physical disks and associated LUNs are virtualised as files, rather than being device files as they are in a real cell
  • The 16 flash FDOMs and associated LUNs are virtualised as files, rather than being device files as they are in a real cell
  • The sizes of these 28 files are smaller than those of the devices in a real cell
  • There is no virtualised InfiniBand network. All the virtual machines in the virtual quarter rack use a virtual network under OVM
  • There are no InfiniBand switches, Cisco switches, KVM hardware, Power Distribution Units, or ILOMs to administer or monitor.
  • The root, boot and Oracle base file systems are not mirrored in different partitions on the first two hard disks of the cell. They are contained in directories on the virtual disk used by the virtual machine.

The classroom setup, however, is very good for the purpose of the course:

  • The Cell Software is the same as the software in real cells
  • The virtualised quarter rack permits delegates to learn how to administer all of the following for Exadata:
    • Grid Infrastructure
    • ASM Disk Groups
    • Cell Discovery via the /etc/oracle/cell/network-config/cellip.ora and /etc/oracle/cell/network-config/cellinit.ora files
    • Cell administration using cellcli and dcli
    • Database I/O Resource management
    • Cell I/O Resource Management
    • Cell Security using Realms
    • Flash device configuration for Flash Cache
    • Flash device configuration for flash based disk groups
    • Flash device configuration for Smart Flash Logging
  • Monitoring of the following is also available in the virtualised setup:
    • Cell Smart Scans
    • Use of Hybrid Columnar Compression (HCC)
    • The effect of Storage Indexes on I/O
    • The effect of Flash Cache on I/O
    • Interpreting cell oriented statistics
    • Interpreting execution plans and statement execution for Smart Storage operations
    • Monitoring the effect of I/O resource management
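As an aside, the two cell discovery files named above are simple text files. The sketch below shows hypothetical contents for a database server in a quarter rack; the file paths come from the list above, but the IP addresses are invented for illustration:

```shell
# Hypothetical contents of the two cell discovery files (addresses invented):
cat <<'EOF'
# /etc/oracle/cell/network-config/cellip.ora -- one line per cell
cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"

# /etc/oracle/cell/network-config/cellinit.ora -- this node's own InfiniBand IP
ipaddress1=192.168.10.1/24
EOF
```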

One of the common questions I get in the course concerns the configuration of the software on the cells. Cells contain “images” consisting of the following components:

  • Enterprise Linux OS kernel
  • The Root file system
  • The Oracle Software
  • Device Drivers
  • Firmware Golden Copies

But in the virtualised classroom setup it is not possible to demonstrate this, so I will use a real cell to show how this is configured.

The first two hard disks on each cell contain various partitions that are not present on the other ten hard disks, which have no partition table at all.

Let's start by finding the device files associated with the 12 physical disks on the cell. We can use the cellcli utility for this:

CellCLI> list lun attributes diskType, deviceName, lunSize, isSystemLun where diskType = HardDisk
         HardDisk        /dev/sda        557.861328125G   TRUE
         HardDisk        /dev/sdb        557.861328125G   TRUE
         HardDisk        /dev/sdc        557.861328125G   FALSE
         HardDisk        /dev/sdd        557.861328125G   FALSE
         HardDisk        /dev/sde        557.861328125G   FALSE
         HardDisk        /dev/sdf        557.861328125G   FALSE
         HardDisk        /dev/sdg        557.861328125G   FALSE
         HardDisk        /dev/sdh        557.861328125G   FALSE
         HardDisk        /dev/sdi        557.861328125G   FALSE
         HardDisk        /dev/sdj        557.861328125G   FALSE
         HardDisk        /dev/sdk        557.861328125G   FALSE
         HardDisk        /dev/sdl        557.861328125G   FALSE

We can see the 12 device files. /dev/sda and /dev/sdb correspond to disks 0 and 1, which contain the cell's “image”, as the “isSystemLun” attribute shows. The “lunSize” attribute indicates that this cell has 600G disks.

Here are the device files:

[root@dmorlcel05 ~]# ls -als /dev/sd[a-l]
0 brw-r----- 1 root disk 8,   0 Aug 27 02:22 /dev/sda
0 brw-r----- 1 root disk 8,  16 Aug 27 02:22 /dev/sdb
0 brw-r----- 1 root disk 8,  32 Oct 14 06:07 /dev/sdc
0 brw-r----- 1 root disk 8,  48 Oct 14 06:07 /dev/sdd
0 brw-r----- 1 root disk 8,  64 Oct 14 06:07 /dev/sde
0 brw-r----- 1 root disk 8,  80 Oct 14 06:05 /dev/sdf
0 brw-r----- 1 root disk 8,  96 Oct 14 06:07 /dev/sdg
0 brw-r----- 1 root disk 8, 112 Oct 14 06:07 /dev/sdh
0 brw-r----- 1 root disk 8, 128 Oct 14 06:07 /dev/sdi
0 brw-r----- 1 root disk 8, 144 Oct 14 06:07 /dev/sdj
0 brw-r----- 1 root disk 8, 160 Oct 14 06:07 /dev/sdk
0 brw-r----- 1 root disk 8, 176 Oct 14 06:07 /dev/sdl

Now let's look at some details of these devices. First, let's see the partition tables of the first two devices:

[root@dmorlcel05 grub]# fdisk -l /dev/sd[a-b]

Disk /dev/sda: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          15      120456   fd  Linux raid autodetect
/dev/sda2              16          16        8032+  83  Linux
/dev/sda3              17       69039   554427247+  83  Linux
/dev/sda4           69040       72824    30403012+   f  W95 Ext'd (LBA)
/dev/sda5           69040       70344    10482381   fd  Linux raid autodetect
/dev/sda6           70345       71649    10482381   fd  Linux raid autodetect
/dev/sda7           71650       71910     2096451   fd  Linux raid autodetect
/dev/sda8           71911       72171     2096451   fd  Linux raid autodetect
/dev/sda9           72172       72432     2096451   fd  Linux raid autodetect
/dev/sda10          72433       72521      714861   fd  Linux raid autodetect
/dev/sda11          72522       72824     2433816   fd  Linux raid autodetect

Disk /dev/sdb: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          15      120456   fd  Linux raid autodetect
/dev/sdb2              16          16        8032+  83  Linux
/dev/sdb3              17       69039   554427247+  83  Linux
/dev/sdb4           69040       72824    30403012+   f  W95 Ext'd (LBA)
/dev/sdb5           69040       70344    10482381   fd  Linux raid autodetect
/dev/sdb6           70345       71649    10482381   fd  Linux raid autodetect
/dev/sdb7           71650       71910     2096451   fd  Linux raid autodetect
/dev/sdb8           71911       72171     2096451   fd  Linux raid autodetect
/dev/sdb9           72172       72432     2096451   fd  Linux raid autodetect
/dev/sdb10          72433       72521      714861   fd  Linux raid autodetect
/dev/sdb11          72522       72824     2433816   fd  Linux raid autodetect

/dev/sda and /dev/sdb each contain the same partitions with the same sizes. But let's look at the other ten disks and see what they look like:

[root@dmorlcel05 ~]# fdisk -l /dev/sd[c-l]

Disk /dev/sdc: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/sdi: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdi doesn't contain a valid partition table

Disk /dev/sdj: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdj doesn't contain a valid partition table

Disk /dev/sdk: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdk doesn't contain a valid partition table

Disk /dev/sdl: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdl doesn't contain a valid partition table

We can clearly see that disks 0 and 1 contain 11 partitions but that disks 2 through 11 do not. These partitions are what make the first two disks on each cell “system LUNs”. The total size used for this set of system partitions can be seen by looking at the celldisks:

CellCLI> list celldisk attributes name,deviceName,devicePartition,diskType,size where diskType = HardDisk
         CD_00_dmorlcel05        /dev/sda        /dev/sda3       HardDisk        528.734375G
         CD_01_dmorlcel05        /dev/sdb        /dev/sdb3       HardDisk        528.734375G
         CD_02_dmorlcel05        /dev/sdc        /dev/sdc        HardDisk        557.859375G
         CD_03_dmorlcel05        /dev/sdd        /dev/sdd        HardDisk        557.859375G
         CD_04_dmorlcel05        /dev/sde        /dev/sde        HardDisk        557.859375G
         CD_05_dmorlcel05        /dev/sdf        /dev/sdf        HardDisk        557.859375G
         CD_06_dmorlcel05        /dev/sdg        /dev/sdg        HardDisk        557.859375G
         CD_07_dmorlcel05        /dev/sdh        /dev/sdh        HardDisk        557.859375G
         CD_08_dmorlcel05        /dev/sdi        /dev/sdi        HardDisk        557.859375G
         CD_09_dmorlcel05        /dev/sdj        /dev/sdj        HardDisk        557.859375G
         CD_10_dmorlcel05        /dev/sdk        /dev/sdk        HardDisk        557.859375G
         CD_11_dmorlcel05        /dev/sdl        /dev/sdl        HardDisk        557.859375G

We can see that about 29G is used for the system area on the first two hard disks. We can also see that the “devicePartition” attribute for the first two celldisks is the third partition of the device, whereas for the other ten disks it is the device name itself. Now let's see how these partitions are really used. First, let's see what is mounted:

[root@dmorlcel05 ~]# mount
/dev/md6 on / type ext3 (rw,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md8 on /opt/oracle type ext3 (rw,nodev)
/dev/md4 on /boot type ext3 (rw,nodev)
/dev/md11 on /var/log/oracle type ext3 (rw,nodev)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
//10.141.138.97/monitor on /home/monitor type cifs (rw,mand)
/dev/sdm1 on /bla type ext3 (rw)
/dev/sdm1 on /mnt/usb type ext3 (rw)

The mounts show no evidence of partitions on /dev/sda or /dev/sdb. Instead we see various references to /dev/md metadevices. We can also see this in /etc/fstab:

[root@dmorlcel05 ~]# cat /etc/fstab
/dev/md6           /                       ext3    defaults,usrquota,grpquota        1 1
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/md2              swap                    swap    defaults        0 0
/dev/md8                /opt/oracle             ext3    defaults,nodev  1 1
/dev/md4                /boot                   ext3    defaults,nodev  1 1
/dev/md11               /var/log/oracle         ext3    defaults,nodev  1 1
//10.141.138.97/monitor     /home/monitor           cifs    user=monitor,password=monitor,uid=root,gid=root 0 0

We can see the following:

  • the swap partition is /dev/md2
  • the boot file system is mounted on /dev/md4
  • the root file system on /dev/md6
  • the oracle cell software base is mounted on /dev/md8
  • the oracle cell software log file system is mounted on /dev/md11

We also already know the following:

  • /dev/sda3 and /dev/sdb3 are used for the celldisks on the first two LUNs. We saw this in the celldisk listing above. There is no corresponding metadevice.
  • /dev/sda4 and /dev/sdb4 are just extended partitions.
  • /dev/sda1 and /dev/sdb1 are bootable partitions, as we saw in the fdisk output earlier.

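The sizes above can also be cross-checked with a little arithmetic. This is just a sanity-check sketch; the figures are copied from the cellcli and fdisk listings earlier:

```shell
# 1. System area size = full celldisk minus system-LUN celldisk (from cellcli):
awk 'BEGIN { print 557.859375 - 528.734375 }'
# 29.125

# 2. /dev/sda3 in GiB, from its fdisk "Blocks" column (1 KiB blocks),
#    roughly reproducing the 528.734375G celldisk size:
awk 'BEGIN { printf "%.2f\n", 554427247 / 1024 / 1024 }'
# 528.74
```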
To understand the use of the mounted devices, note that many of the partitions on the first two disks are managed with software RAID mirroring using mdadm. Let's first see all the metadevices:

[root@dmorlcel05 grub]# ls -als /dev/md*
0 brw-r----- 1 root disk 9,  0 Aug 27 02:24 /dev/md0
0 brw-r----- 1 root disk 9,  1 Aug 27 02:22 /dev/md1
0 brw-r----- 1 root disk 9, 11 Aug 27 02:24 /dev/md11
0 brw-r----- 1 root disk 9,  2 Aug 27 02:22 /dev/md2
0 brw-r----- 1 root disk 9,  4 Aug 27 02:24 /dev/md4
0 brw-r----- 1 root disk 9,  5 Aug 27 02:27 /dev/md5
0 brw-r----- 1 root disk 9,  6 Aug 27 02:27 /dev/md6
0 brw-r----- 1 root disk 9,  7 Aug 27 02:27 /dev/md7
0 brw-r----- 1 root disk 9,  8 Aug 27 02:27 /dev/md8

We can see that there are metadevices for most of the partitions but not for:

  • /dev/sda3 which is used for the first celldisk
  • /dev/sdb3 which is used for the second celldisk

Let's see what the software RAID setup looks like using mdadm, starting with /dev/md2, which we saw earlier is used for the swap partition. First, let's confirm this using swapon:

[root@dmorlcel05 grub]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/md2                                partition       2096376 0       -1

Now let's see what /dev/md2 is:

[root@dmorlcel05 grub]# mdadm --misc -D /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Thu Feb  4 06:03:17 2010
     Raid Level : raid1
     Array Size : 2096384 (2047.59 MiB 2146.70 MB)
  Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sun Oct  9 04:22:45 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 3db204f1:d7a6a84b:cd086034:be3fb671
         Events : 0.58

    Number   Major   Minor   RaidDevice State
       0       8        9        0      active sync   /dev/sda9
       1       8       25        1      active sync   /dev/sdb9

We can see that /dev/md2 is mirroring across partitions /dev/sda9 and /dev/sdb9.
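As an aside, the member partitions can be pulled out of mdadm's report mechanically. This sketch inlines two sample lines copied from the listing above, since mdadm itself needs a live array; on a real cell you would pipe the command's output instead:

```shell
# Extract the mirrored member partitions from "mdadm --misc -D" output.
# Sample lines are inlined here; on a cell: mdadm --misc -D /dev/md2 | awk ...
printf '%s\n' \
  '   0       8        9        0      active sync   /dev/sda9' \
  '   1       8       25        1      active sync   /dev/sdb9' |
awk '/active sync/ { print $NF }'
# /dev/sda9
# /dev/sdb9
```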

Let's look at some other partitions, starting with the boot file system on /dev/md4:

[root@dmorlcel05 grub]# mdadm --misc -D /dev/md4
/dev/md4:
        Version : 0.90
  Creation Time : Thu Feb  4 06:03:28 2010
     Raid Level : raid1
     Array Size : 120384 (117.58 MiB 123.27 MB)
  Used Dev Size : 120384 (117.58 MiB 123.27 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 4
    Persistence : Superblock is persistent

    Update Time : Fri Oct 14 06:44:52 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : e325de40:410dd5d7:f477be17:f36a61aa
         Events : 0.64

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

We can see that the boot file system is also mirrored across /dev/sda1 and /dev/sdb1. Next is the root file system:

[root@dmorlcel05 grub]# mdadm --misc -D /dev/md6
/dev/md6:
        Version : 0.90
  Creation Time : Thu Feb  4 06:03:41 2010
     Raid Level : raid1
     Array Size : 10482304 (10.00 GiB 10.73 GB)
  Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 6
    Persistence : Superblock is persistent

    Update Time : Fri Oct 14 07:15:20 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 73867382:a65ee3ec:8bc6831d:a67e037c
         Events : 0.6

    Number   Major   Minor   RaidDevice State
        0       8        6        0      active sync   /dev/sda6
        1       8       22        1      active sync   /dev/sdb6

Next the Oracle base file system on /dev/md8:

[root@dmorlcel05 grub]# mdadm --misc -D /dev/md8
/dev/md8:
        Version : 0.90
  Creation Time : Thu Feb  4 06:04:15 2010
     Raid Level : raid1
     Array Size : 2096384 (2047.59 MiB 2146.70 MB)
  Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 8
    Persistence : Superblock is persistent

    Update Time : Fri Oct 14 07:16:41 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 353f0c30:88a291fa:36b88fb4:d274f195
         Events : 0.60

    Number   Major   Minor   RaidDevice State
       0       8        8        0      active sync   /dev/sda8
       1       8       24        1      active sync   /dev/sdb8

Finally, we have the log file system on /dev/md11:

[root@dmorlcel05 grub]# mdadm --misc -D /dev/md11
/dev/md11:
        Version : 0.90
  Creation Time : Tue Aug 24 15:55:27 2010
     Raid Level : raid1
     Array Size : 2433728 (2.32 GiB 2.49 GB)
  Used Dev Size : 2433728 (2.32 GiB 2.49 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 11
    Persistence : Superblock is persistent

    Update Time : Fri Oct 14 07:19:15 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f04ecbf4:58fdc681:0e1668d1:9049b7b4
         Events : 0.66

    Number   Major   Minor   RaidDevice State
       0       8       11        0      active sync   /dev/sda11
       1       8       27        1      active sync   /dev/sdb11 

We can see that the root, boot, cell software, cell log and swap partitions are mirrored onto partitions of the first two disks using mdadm. But this leaves some partitions on each of the disks unaccounted for. For example, we see no evidence of /dev/md5 or /dev/md7, or of the partitions over which they might be created.
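One quick way to account for every array, mounted or not, is /proc/mdstat. The sample line below is inlined in the shape such a cell might show (an assumption on my part, since the live file needs a software-RAID host); on a real cell you would simply cat /proc/mdstat:

```shell
# Sketch: spot arrays that are assembled but not mounted anywhere.
# Sample /proc/mdstat line inlined; real usage: cat /proc/mdstat
printf '%s\n' 'md5 : active raid1 sdb5[1] sda5[0]' |
awk '{ print $1, "mirrors", $6, "and", $5 }'
# md5 mirrors sda5[0] and sdb5[1]
```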

To understand this we must see how cells get imaged. A cell contains both the current and the previous image of the root and software file systems, as well as different kernels. When a new image is “patched” onto the cell, the “inactive” partitions on /dev/sda and /dev/sdb are updated with the new image, and updates are made to /etc/fstab and to the grub.conf file so that the new image's kernel can boot. When the cell is rebooted, the new image becomes the active image and the previously active image becomes the inactive one. We can explore this using some utilities found on the database machine.

[root@dmorlcel05 grub]# imageinfo -all

Kernel version: 2.6.18-194.3.1.0.4.el5 #1 SMP Sat Feb 19 03:38:37 EST 2011 x86_64
Cell version: OSS_11.2.0.3.0_LINUX.X64_110429.1
Cell rpm version: cell-11.2.2.3.1_LINUX.X64_110429.1-1

Active image version: 11.2.2.3.1.110429.1
Active image created: 2011-04-29 21:20:48 -0700
Active image activated: 2011-05-09 16:26:31 -0400
Active image type: production
Active image status: success
Active internal version:
Active image label: OSS_11.2.0.3.0_LINUX.X64_110429.1
Active node type: STORAGE
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8

In partition rollback: Impossible

Cell boot usb partition: /dev/sdm1
Cell boot usb version: 11.2.2.3.1.110429.1

rmdir: /mnt/dev/md5: Directory not empty
Inactive image version: 11.2.1.2.6
Inactive image created: 2010-05-11 13:37:31 -0700
Inactive image activated: 2010-10-05 02:34:01 -0400
Inactive image type: production
Inactive image status: success
Inactive internal version: 999999
Inactive image label: OSS_11.2.1.2.6_LINUX.X64_100511
Inactive node type: STORAGE
Inactive system partition on device: /dev/md5
Inactive software partition on device: /dev/md7

Boot area has rollback archive for the version: 11.2.1.2.6
Rollback to the inactive partitions: Possible

Here we can see that /dev/md5 and /dev/md7 are “inactive”. Each of these metadevices is mirrored across partitions on /dev/sda and /dev/sdb. Note that there is also a utility to show all the images that have been on the cell:

[root@dmorlcel05 grub]# imagehistory -a
Version                              : 11.2.1.2.1
Image activation date                : 2010-02-04 15:00:53 -0800
Imaging mode                         : fresh
Imaging status                       : success
Imaging type                         : image
Image creation date                  : 2010-02-01 02:52:14 -0800
Image type                           : production
Image node                           : STORAGE
Internal version                     : 100131
Internal label                       : OSS_11.2.1.2.1_LINUX.X64_100131
Internal cell rpm version            : undefined
Internal cell software version       : undefined
Image id                             : 1265021534

Version                              : 11.2.1.2.3
Image activation date                : 2010-04-30 12:12:04 -0400
Imaging mode                         : patch
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2010-03-05 11:35:15 -0800
Image type                           : production
Image node                           : STORAGE
Internal version                     : 999999
Internal label                       : OSS_11.2.1.2.3_LINUX.X64_100305
Internal cell rpm version            : undefined
Internal cell software version       : undefined
Image id                             : 1267817715

Version                              : 11.2.1.2.4
Image activation date                : 2010-04-30 12:21:54 -0400
Imaging mode                         : patch
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2010-04-13 21:53:04 -0700
Image type                           : production
Image node                           : STORAGE
Internal version                     : 999999
Internal label                       : OSS_11.2.1.2.4_LINUX.X64_100413
Internal cell rpm version            : undefined
Internal cell software version       : undefined
Image id                             : 1271220784

Version                              : 11.2.1.3.1
Image activation date                : 2010-08-24 18:13:07 -0400
Imaging mode                         : out of partition upgrade
Upgrade logs                         : /var/log/cellos/patchlogs/1286251665.ex_part_rollback.11.2.1.2.4_11.2.1.3.1/1286251665.ex_part_rollback.11.2.1.2.4_11.2.1.3.1.tar.gz
Imaging status                       : success
Imaging type                         : out of partition
Image creation date                  : 2010-08-18 21:52:15 -0700
Image type                           : production
Image node                           : STORAGE
Internal version                     : 100818.1
Internal label                       : OSS_11.2.1.3.1_LINUX.X64_100818.1
Internal cell rpm version            : undefined
Internal cell software version       : undefined
Image id                             : 1282193535

Version                              : 11.2.1.2.4
Image activation date                : 2010-10-05 00:24:08 -0400
Imaging mode                         : out of partition rollback
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2010-04-13 21:53:04 -0700
Image type                           : production
Image node                           : STORAGE
Internal version                     : 999999
Internal label                       : OSS_11.2.1.2.4_LINUX.X64_100413
Internal cell rpm version            : undefined
Internal cell software version       : undefined
Image id                             : 1271220784

Version                              : 11.2.1.2.6
Image activation date                : 2010-10-05 02:34:01 -0400
Imaging mode                         : patch
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2010-05-11 13:37:31 -0700
Image type                           : production
Image node                           : STORAGE
Internal version                     : 999999
Internal label                       : OSS_11.2.1.2.6_LINUX.X64_100511
Internal cell rpm version            : undefined
Internal cell software version       : undefined
Image id                             : 1273610251

Version                              : 11.2.2.3.1.110429.1
Image activation date                : 2011-05-09 16:26:31 -0400
Imaging mode                         : out of partition upgrade
Imaging status                       : success
Imaging type                         : out of partition
Image creation date                  : 2011-04-29 21:20:48 -0700
Image type                           : production
Image node                           : STORAGE
Internal version                     : undefined
Internal label                       : OSS_11.2.0.3.0_LINUX.X64_110429.1
Internal cell rpm version            : OSS_11.2.2.3.1_LINUX.X64_110429.1
Internal cell software version       : OSS_11.2.0.3.0_LINUX.X64_110429.1
Image id                             : 1304137248
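With this many entries, it can help to condense an imagehistory report into one line per image. This sketch inlines two sample lines copied from the output above; on a real cell you would pipe imagehistory -a into the awk command instead:

```shell
# Condense imagehistory output to "version -> activation date" pairs.
# Sample lines inlined; real usage: imagehistory -a | awk -F' *: ' ...
printf '%s\n' \
  'Version                              : 11.2.1.2.6' \
  'Image activation date                : 2010-10-05 02:34:01 -0400' |
awk -F' *: ' '/^Version/ { v = $2 } /^Image activation date/ { print v " -> " $2 }'
# 11.2.1.2.6 -> 2010-10-05 02:34:01 -0400
```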

Note: imageinfo and imagehistory may also be used on the database nodes in a database machine, but the output of imageinfo is somewhat different there. Here is imageinfo on a database server node in the database machine:

[root@dmorldb03 ~]# imageinfo -all

Kernel version: 2.6.18-194.3.1.0.4.el5 #1 SMP Sat Feb 19 03:38:37 EST 2011 x86_64
Image version: 11.2.2.3.5.110815
Image created: 2011-08-16 13:35:02 -0700
Image activated: 2011-10-08 17:28:56 -0400
Image image type: production
Image status: success
Internal version:
Image label: OSS_11.2.2.3.5_LINUX.X64_110815
Node type: COMPUTE
System partition on device: /dev/sda1

And here is the imagehistory output on a database node:

[root@dmorldb03 ~]# imagehistory -a
Version                              : 11.2.1.2.1
Image activation date                : 2010-02-03 22:59:09 -0800
Imaging mode                         : fresh
Imaging status                       : success
Imaging type                         : image
Image creation date                  : 2010-02-01 02:53:01 -0800
Image type                           : production
Image node                           : COMPUTE
Internal version                     : 100131
Internal label                       : OSS_11.2.1.2.1_LINUX.X64_100131
Image id                             : 1265021581

Version                              : 11.2.1.2.3
Image activation date                : 2010-04-30 14:29:15 -0400
Imaging mode                         : patch
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2010-03-05 11:38:07 -0800
Image type                           : production
Image node                           : COMPUTE
Internal version                     : 999999
Internal label                       : OSS_11.2.1.2.3_LINUX.X64_100305
Image id                             : 1267817887

Version                              : 11.2.1.3.1
Image activation date                : 2010-08-24 18:50:13 -0400
Imaging mode                         : patch
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2010-08-18 21:49:31 -0700
Image type                           : production
Image node                           : COMPUTE
Internal version                     : 100818.1
Internal label                       : OSS_11.2.1.3.1_LINUX.X64_100818.1
Image id                             : 1282193371

Version                              : 11.2.2.3.1.110429.1
Image activation date                : 2011-05-09 17:58:47 -0400
Imaging mode                         : patch
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2011-04-29 21:14:31 -0700
Image type                           : production
Image node                           : COMPUTE
Internal version                     : undefined
Internal label                       : OSS_11.2.0.3.0_LINUX.X64_110429.1
Image id                             : 1304136871

Version                              : 11.2.2.3.5.110815
Image activation date                : 2011-10-08 16:15:41 -0400
Imaging mode                         : patch
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2011-08-16 13:35:02 -0700
Image type                           : production
Image node                           : COMPUTE
Internal version                     : undefined
Internal label                       : OSS_11.2.2.3.5_LINUX.X64_110815
Image id                             : 1313526902

Version                              : 11.2.2.3.1.110429.1
Image activation date                : 2011-10-08 17:01:49 -0400
Imaging mode                         : rollback
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2011-04-29 21:14:31 -0700
Image type                           : production
Image node                           : COMPUTE
Internal version                     : undefined
Internal label                       : OSS_11.2.0.3.0_LINUX.X64_110429.1
Image id                             : 1304136871

Version                              : 11.2.2.3.5.110815
Image activation date                : 2011-10-08 17:28:56 -0400
Imaging mode                         : patch
Imaging status                       : success
Imaging type                         : in partition
Image creation date                  : 2011-08-16 13:35:02 -0700
Image type                           : production
Image node                           : COMPUTE
Internal version                     : undefined
Internal label                       : OSS_11.2.2.3.5_LINUX.X64_110815
Image id                             : 1313526902

I hope this helps in the understanding of how cells maintain their images!

Joel

October 2011



Posted in Oracle | 8 Comments »

Train Your DBAs – From DBATRAIN

Posted by Joel Goodman on 12/10/2011


Last chance to contribute to the footprint of the new Oracle DBA training curriculum and certification track.

The Job Task Analysis project survey closes at the end of the month and will be used to help define the structure and content of the Oracle University curriculum for the next release. This covers the core DBA training curriculum and advanced topics such as Grid Infrastructure, RAC, Data Guard, Partitioning, the Exadata Database Machine, Performance Management and much more.

There will of course be a certification track for OCA and OCP based on the core curriculum and several Oracle Certified Expert (OCE) exams based on advanced topics, and of course the Oracle Certified Master (OCM) exam.

I will be meeting with Harald van Breederode, Uwe Hesse and Branislav Valny from the Oracle University EMEA SME team, and with the DBA curriculum Managers from Oracle University to help create the blueprint for the next generation of training and certification paths.

Share your views by taking this online survey and tell us what tasks are important to you.

Your input will help assure that Oracle University courses and certification tracks reflect the evolving skill set and job requirements of today’s DBA 2.0.

Joel

October 2011

Posted in Oracle | Leave a Comment »