Exadata Database Machine Topics

 


 

What is Exadata?

 

Exadata is an engineered database machine combining hardware and software. It is pre-configured before leaving the factory; once delivered on site, it can be unboxed, powered on, and used.

 

The hardware comes from Sun.

The software comes from Oracle: the Database Server and the Exadata Storage Server software, code-named SAGE.

Exadata stands for massive parallelism, benchmark-setting RDBMS performance, fault tolerance, and scalability.

 

 

 

 

 

 

 

 

 

 

The History of Exadata

 

Version 1

First promoted at Oracle OpenWorld 2008 and developed jointly by Oracle and HP, it was the fastest data-warehouse appliance in the world at the time. With extra optimizations for sequential physical reads, it ran data-warehouse workloads up to 10x faster than Oracle on other hardware platforms.

 

 

 

 

 


 

 

 

Version 2

Released in September 2009 and developed jointly by Oracle and Sun, it was the fastest OLTP appliance in the world at the time, adding extra optimizations for random reads. It ran data-warehouse workloads 5x faster than Version 1 and introduced striking new Exadata Storage Software capabilities.

 

 

 

 

 

 

Exadata's biggest selling point is undoubtedly its Smart Scan processing technology.

 


 

 

The core of Smart Scan processing is offloading: handing part of the scan work over to Exadata's storage nodes.

Suppose a query has to scan a 1 GB table, but only 10 MB of data actually satisfies the predicate. A traditional architecture cannot avoid making the database server scan the full 1 GB itself.

Exadata offloads this work to the Exadata Cell storage nodes: the cells scan the 1 GB and return only the 10 MB to the database server. This division of labor makes sense because the storage nodes are better at physical scans and sit closer to the physical disks.
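The effect of offloading can be sketched with a toy filter. This is a conceptual sketch, not Exadata code; the point is only where the filtering happens: the "cell" applies the predicate locally and ships back just the matching rows, so only a fraction of the table crosses the interconnect.

```python
# Conceptual sketch of Smart Scan offloading (not real Exadata code).
# The "cell" filters rows at the storage tier and returns only matches,
# so the database server receives 10 rows instead of scanning all 1000.

def cell_smart_scan(blocks, predicate):
    """Storage-side scan: filter rows inside the cell, return only matches."""
    return [row for block in blocks for row in block if predicate(row)]

def traditional_scan(blocks):
    """Traditional storage: ship every block to the database server."""
    return [row for block in blocks for row in block]

# Toy table: 10 blocks of 100 rows, of which 1% match the predicate.
table = [[(i, "hot" if i % 100 == 0 else "cold")
          for i in range(b * 100, b * 100 + 100)]
         for b in range(10)]

shipped_traditional = traditional_scan(table)                     # all 1000 rows cross the wire
shipped_smart = cell_smart_scan(table, lambda r: r[1] == "hot")   # only 10 rows do

print(len(shipped_traditional), len(shipped_smart))  # 1000 10
```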

 

Smart Scan is actually implemented by the Oracle Exadata Storage Software, code-named Sage, whose development is estimated to have started in 2006 or even earlier.

 

 

 

 

 

Oracle Exadata Storage Software (SAGE) is the soul of Exadata: intelligent storage software, developed in-house by Oracle, that understands database SQL. Because SAGE is what makes the machine, you cannot clone an Exadata database machine simply by piling up flash cards, InfiniBand switches, and other hardware.

 

Exadata's current users include Starbucks, Facebook, Huawei, China Mobile, Bank of Shanghai, ICBC, Apple, Samsung Electronics, LG, BNP Paribas, Korea Telecom, Asiana Airlines, Commonwealth Bank of Australia, SoftBank Group, Haier, Starwood, Nissan, PayPal, Türk Telekom, the Kanagawa Prefectural Police Headquarters, Sumitomo Mitsui Banking Corporation, Hua Xia Bank, PICC Life Insurance, the human-resources and social-security bureaus of Shenzhen, Qingdao, Urumqi, and Benxi, Xinjiang Telecom, Guangdong Mobile, Liaoning Mobile, Fujian Mobile, Shenhua Group, Dongfeng Motor, CISDI Information Technology (Chongqing), the Shanghai R&D Public Service Platform, COSCO Container Lines, Inner Mongolia Power Grid, Qirong Puhui (Beijing) Technology, Infosys, the Hong Kong Housing Department, and many more.

 

The Exadata user base

 

 

Exadata hardware consists of three main parts:

  • Database Server, sometimes called the Compute Node
  • Storage Server, also called the Cell Node
  • InfiniBand Switch, abbreviated IB SW

As shown below:

 


 

 

 


 

 

 

The Storage Servers in Oracle Exadata X2-2 and X2-8

 


 

 

 

Oracle Exadata V2 Storage Servers


 

 

 

 

Exadata machines on Sun's assembly line

 

 

 

 

 


 

Exadata Storage Server Hardware Configuration

 

 

 

                          V2                     X2-2
CPU                       Nehalem-EP             Westmere-EP
DIMMs (compute node)      8 GB at 1066 MHz       8 GB LV DIMMs at 1333 MHz (post-RR)
HBA                       Niwot with BBU07       Niwot with BBU08
NIC (compute node only)   No 10 GbE ports        Niantic 10 GbE ports
Compute node HDD          146 GB SAS2 drives     300 GB SAS2 drives
Storage node - SAS        600 GB SAS2 15 kRPM    Same as V2
Storage node - SATA       2 TB SATA 7.2 kRPM     2 TB FAT-SAS 7.2 kRPM
Aura (4x)                 V1.0                   V1.0 (V1.1 post-RR)
IB                        CX                     CX2

 

 

 

OS versions on Exadata

The compute nodes (database servers) run either Solaris or Linux.

Oracle Enterprise Linux (OEL) and Solaris 11 (x86) are the two OS choices currently available for Exadata database servers; Solaris is an option only on Exadata V2 and later models.

Users can pick their preferred OS at install time, and the Exadata reimage image ships in both flavors.

The factory-installed image offers a Linux/Solaris dual-boot menu, and users can choose the default boot OS.

 

 

Exadata storage capacity by configuration

 

                                  Full Rack   Half Rack   Quarter Rack   One Cell
Raw Disk Capacity                 432 TB      216 TB      96 TB          24 TB
Raw Flash Capacity                6.75 TB     3.4 TB      1.5 TB         0.375 TB
Usable Mirrored Capacity          194 TB      97 TB       42.5 TB        10.75 TB
Usable Triple Mirrored Capacity   130 TB      65 TB       29 TB          7.25 TB
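The usable rows follow roughly from ASM redundancy: normal redundancy (mirroring) halves the raw capacity and high redundancy (triple mirroring) divides it by three, with some space held back for rebalance headroom and system areas. A back-of-the-envelope check, where the ~10% reserve factor is an assumption for illustration rather than an Oracle formula:

```python
# Rough sanity check of the capacity table: usable ~= raw / mirrors, minus overhead.
# The 10% reserve factor is an illustrative assumption, not an Oracle spec.

def usable_capacity(raw_tb, mirrors, reserve=0.10):
    return raw_tb / mirrors * (1 - reserve)

full_rack_raw = 432  # TB, from the table above
print(round(usable_capacity(full_rack_raw, 2), 1))  # 194.4, vs. 194 TB listed
print(round(usable_capacity(full_rack_raw, 3), 1))  # 129.6, vs. 130 TB listed
```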

Reimaging Exadata by remotely mounting an ISO via ILOM

 


Exadata Cell Monitoring Best Practices

  1. Verify cable connections via the following steps

Visually inspect all cables for proper connectivity, then confirm the link state from the OS:

 

 

 

[root@dm01db01 ~]# cat /sys/class/net/ib0/carrier

1

[root@dm01db01 ~]# cat /sys/class/net/ib1/carrier

1

 

Confirm the output is 1.

 

 

Also check the InfiniBand port error counters:

ls -l /sys/class/infiniband/*/ports/*/*errors*
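Interpreting those counter files amounts to flagging any port whose error counters are nonzero. A minimal sketch of that check; the counter names and values below are illustrative samples, while on a real machine you would read them from the sysfs paths above:

```python
# Flag InfiniBand ports with nonzero error counters.
# Counter values are hard-coded samples for illustration; on a live system
# read them from /sys/class/infiniband/*/ports/*/*errors* instead.

sample_counters = {
    "mlx4_0/ports/1/symbol_error": 0,
    "mlx4_0/ports/1/link_error_recovery": 0,
    "mlx4_0/ports/2/symbol_error": 3,  # a bad cable would show up like this
}

bad_ports = sorted(path for path, value in sample_counters.items() if value > 0)
print(bad_ports)  # ['mlx4_0/ports/2/symbol_error']
```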

 

 

The /opt/oracle.SupportTools/ibdiagtools directory contains the verify-topology and infinicheck tools; run them to validate the network. Here is what these tools look like:

 

[root@dm01db01 ~]# cd /opt/oracle.SupportTools/

[root@dm01db01 oracle.SupportTools]# ls

asrexacheck         defaultOSchoose.pl  firstconf                        make_cellboot_usb  PS4ES            sys_dirs.tar

CheckHWnFWProfile   diagnostics.iso     flush_cache.sh                   MegaSAS.log        reclaimdisks.sh

CheckSWProfile.sh   em                  harden_passwords_reset_root_ssh  ocrvothostd        setup_ssh_eq.sh

dbserver_backup.sh  exachk              ibdiagtools                      onecommand         sundiag.sh

 

 

[root@dm01db01 oracle.SupportTools]# cd ibdiagtools/

[root@dm01db01 ibdiagtools]# ls

cells_conntest.log    dcli                  ibqueryerrors.log  perf_cells.log0  perf_mesh.log1     subnet_cells.log  VERSION_FILE

cells_user_equiv.log  diagnostics.output    infinicheck        perf_cells.log1  perf_mesh.log2     subnet_hosts.log  xmonib.sh

checkbadlinks.pl      hosts_conntest.log    monitord           perf_cells.log2  README             topologies

cleanup_remote.log    hosts_user_equiv.log  netcheck           perf_hosts.log0  SampleOutputs.txt  topology-zfs

clearcounters.log     ibping_test           netcheck_scratch   perf_mesh.log0   setup-ssh          verify-topology

 

 

 

[root@dm01db01 ibdiagtools]# ./verify-topology -h

 

[ DB Machine Infiniband Cabling Topology Verification Tool ]

[Version IBD VER 2.c 11.2.3.1.1  120607]

Usage: ./verify-topology [-v|--verbose] [-r|--reuse (cached maps)]  [-m|--mapfile]

[-ibn|--ibnetdiscover (specify location of ibnetdiscover output)]

[-ibh|--ibhosts (specify location of ibhosts output)]

[-ibs|--ibswitches (specify location of ibswitches output)]

[-t|--topology [torus | quarterrack ] default is fattree]

[-a|--additional [interconnected_quarterrack]

[-factory|--factory non-exadata machines are treated as error]

 

Please note that halfrack is now redundant. Checks for Half Racks

are now done by default.

-t quarterrack

option is needed to be used only if testing on a stand alone quarterrack

-a interconnected_quarterrack

option is to be used only when testing on large multi-rack setups

-t fattree

option is the default option and not required to be specified

 

Example : perl ./verify-topology

Example : ././verify-topology -t quarterrack

Example : ././verify-topology -t torus

Example : ././verify-topology -a interconnected_quarterrack

--------- Some Important properties of the fattree cabling topology ---------

(1) Every internal switch must be connected to every external switch

(2) No 2 external switches must be connected to each other

-----------------------------------------------------------------------------

Please note that switch guid can be determined by logging in to a switch and

trying either of these commands, depending on availability –

>module-firmware show

OR

>opensm

 

 

 

[root@dm01db01 ibdiagtools]# ./verify-topology -t fattree

 

[ DB Machine Infiniband Cabling Topology Verification Tool ]

[Version IBD VER 2.c 11.2.3.1.1  120607]

External non-Exadata-image nodes found: check for ZFS if on T4-4 – else ignore

Leaf switch found: dmibsw03.acs.oracle.com (212846902ba0a0)

Spine switch found: 10.146.24.251 (2128469c74a0a0)

Leaf switch found: dmibsw02.acs.oracle.com (21284692d4a0a0)

Spine switch found: 10.146.24.252 (2128b7f744c0a0)

Spine switch found: dmibsw01.acs.oracle.com (21286cc7e2a0a0)

Spine switch found: 10.146.24.253 (2128b7ac44c0a0)

 

Found 2 leaf, 4 spine, 0 top spine switches

 

Check if all hosts have 2 CAs to different switches……………[SUCCESS]

Leaf switch check: cardinality and even distribution…………..[SUCCESS]

Spine switch check: Are any Exadata nodes connected …………..[SUCCESS]

Spine switch check: Any inter spine switch links………………[ERROR]

Spine switches 10.146.24.251 (2128469c74a0a0) & 10.146.24.252 (2128b7f744c0a0) should not be connected

[ERROR]

Spine switches 10.146.24.251 (2128469c74a0a0) & 10.146.24.253 (2128b7ac44c0a0) should not be connected

[ERROR]

Spine switches 10.146.24.252 (2128b7f744c0a0) & dmibsw01.acs.oracle.com (21286cc7e2a0a0) should not be connected

[ERROR]

Spine switches 10.146.24.252 (2128b7f744c0a0) & 10.146.24.253 (2128b7ac44c0a0) should not be connected

[ERROR]

Spine switches dmibsw01.acs.oracle.com (21286cc7e2a0a0) & 10.146.24.253 (2128b7ac44c0a0) should not be connected

 

Spine switch check: Any inter top-spine switch links…………..[SUCCESS]

Spine switch check: Correct number of spine-leaf links…………[ERROR]

Leaf switch dmibsw03.acs.oracle.com (212846902ba0a0) must be linked

to spine switch 10.146.24.252 (2128b7f744c0a0) with

at least 1 links…0 link(s) found

[ERROR]

Leaf switch dmibsw02.acs.oracle.com (21284692d4a0a0) must be linked

to spine switch 10.146.24.252 (2128b7f744c0a0) with

at least 1 links…0 link(s) found

[ERROR]

Spine switch 10.146.24.252 (2128b7f744c0a0) has fewer than 2 links to leaf switches.

It has 0

[ERROR]

Leaf switch dmibsw03.acs.oracle.com (212846902ba0a0) must be linked

to spine switch 10.146.24.253 (2128b7ac44c0a0) with

at least 1 links…0 link(s) found

[ERROR]

Leaf switch dmibsw02.acs.oracle.com (21284692d4a0a0) must be linked

to spine switch 10.146.24.253 (2128b7ac44c0a0) with

at least 1 links…0 link(s) found

[ERROR]

Spine switch 10.146.24.253 (2128b7ac44c0a0) has fewer than 2 links to leaf switches.

It has 0

 

Leaf switch check: Inter-leaf link check……………………..[ERROR]

Leaf switches dmibsw03.acs.oracle.com (212846902ba0a0) & dmibsw02.acs.oracle.com (21284692d4a0a0) have 0 links between them

They should have 7 links instead.

 

Leaf switch check: Correct number of leaf-spine links………….[SUCCESS]

 

 

 

 

Verify hardware and firmware

 

cd /opt/oracle.cellos/

[root@dm01db01 oracle.cellos]# ./CheckHWnFWProfile

 

[SUCCESS] The hardware and firmware profile matches one of the supported profiles

 

 

Verify platform software

 

 

 

 

 

[root@dm01db01 oracle.cellos]# cd /opt/oracle.SupportTools/

[root@dm01db01 oracle.SupportTools]# ./CheckSWProfile.sh

usage: ./CheckSWProfile.sh options

 

This script returns 0 when the platform and software on the

machine on which it runs matches one of the suppored platform and

software profiles. It will return nonzero value in all other cases.

The check is applicable both to Exadata Cells and Database Nodes

with Oracle Enterprise Linux (OEL) and RedHat Enterprise Linux (RHEL).

 

OPTIONS:

-h    Show this message

-s    Show supported platforms and software profiles for this machine

-c    Check this machine for supported platform and software profiles

-I <No space comma separated list of Infiniband switch names/ip addresses>

To check configuration for SPINE switch prefix the switch host name or

ip address with IS_SPINE.

Example: CheckSWProfile.sh -I IS_SPINEswitch1.company.com,switch2.company.com

Check for the software revision on the managed Infiniband switches

in the Database Machine. You will need to supply the password for

admin user.

-S <No space comma separated list of Infiniband switch names/ip addresses>

Example: CheckSWProfile.sh -S switch1.company.com,switch2.company.com

Prints the Serial number and Hardware version for the switches

in the Database Machine. You will need to supply the password for

admin user for Voltaire switches and root user for Sun switches.

 

 

[root@dm01db01 oracle.SupportTools]# ./CheckSWProfile.sh  -c

[INFO] Software checker check option is only available on Exadata cells.

 

[root@dm01db01 oracle.SupportTools]# ssh dm01cel01-priv

 

[root@dm01cel01 oracle.SupportTools]# ./CheckSWProfile.sh -c

 

[INFO] SUCCESS: Meets requirements of operating platform and InfiniBand software.

[INFO] Check does NOT verify correctness of configuration for installed software.

 

 

[root@dm01cel01 oracle.SupportTools]# cd /opt/oracle.cellos/

[root@dm01cel01 oracle.cellos]# ./CheckHWnFWProfile

[SUCCESS] The hardware and firmware profile matches one of the supported profiles

 

 

 

If hardware is replaced, rerun the /opt/oracle.cellos/CheckHWnFWProfile script.

Exadata Health Check Reports

Exadata combines the best of Oracle's hardware and software technology, and its periodic health checks should not be neglected.

Exadata health checks are based mainly on Exachk, Oracle Support's standardized tool. By way of introduction to Exachk and its best practices:

Run Exachk periodically to collect system information from the Exadata machine; it compares the current configuration against Oracle best practices and recommends target values, so potential problems can be found early, hidden risks eliminated, and stable operation of the Exadata system assured.

Registration: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=592264766&t=a

 

Here we only walk through the concrete steps for running Exachk:

 

$./exachk

CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/11.2.0.3/grid?[y/n][y]y

Checking ssh user equivalency settings on all nodes in cluster

./exachk: line 8674: [: 5120: unary operator expected

Space available on ware at /tmp is KB and required space is 5120 KB

Please make at least 10MB space available at above location and retry to continue.[y/n][y]?

 

 

You need to set RAT_CLUSTERNODES to name the nodes to be checked:

 

 

su - oracle

$export RAT_CLUSTERNODES="dm01db01-priv dm01db02-priv"

export RAT_DBNAMES="orcl,dbm"
$ ./exachk

[oracle@192 tmp]$ ./exachk

CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/11.2.0.3/grid?[y/n][y]y

Checking ssh user equivalency settings on all nodes in cluster

Node dm01db01-priv is configured for ssh user equivalency for oracle user

Node dm01db02-priv is configured for ssh user equivalency for oracle user

Searching out ORACLE_HOME for selected databases.

. . . . 

Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

. . . . . . . . . . . . . . . . . . . /u01/app/11.2.0.3/grid/bin/cemutlo.bin: Failed to initialize communication with CSS daemon, error code 3
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
-------------------------------------------------------------------------------------------------------
                                                 Oracle Stack Status                            
-------------------------------------------------------------------------------------------------------
Host Name  CRS Installed  ASM HOME       RDBMS Installed  CRS UP    ASM UP    RDBMS UP  DB Instance Name
-------------------------------------------------------------------------------------------------------
192         Yes             Yes             Yes             Yes        No       Yes                
dm01db01-priv Yes             Yes             Yes             Yes        No       Yes                
dm01db02-priv Yes             Yes             Yes             Yes        No       Yes                
-------------------------------------------------------------------------------------------------------

root user equivalence is not setup between 192 and STORAGE SERVER dm01cel01.

1. Enter 1 if you will enter root password for each STORAGE SERVER when prompted.

2. Enter 2 to exit and configure root user equivalence manually and re-run exachk.

3. Enter 3 to skip checking best practices on STORAGE SERVER.

Please indicate your selection from one of the above options[1-3][1]:- 1

Is root password same on all STORAGE SERVER?[y/n][y]y

Enter root password for STORAGE SERVER :- 

97 of the included audit checks require root privileged data collection on DATABASE SERVER. If sudo is not configured or the root password is not available, audit checks which  require root privileged data collection can be skipped.

1. Enter 1 if you will enter root password for each on DATABASE SERVER host when prompted

2. Enter 2 if you have sudo configured for oracle user to execute root_exachk.sh script on DATABASE SERVER

3. Enter 3 to skip the root privileged collections on DATABASE SERVER

4. Enter 4 to exit and work with the SA to configure sudo on DATABASE SERVER or to arrange for root access and run the tool later.

Please indicate your selection from one of the above options[1-4][1]:- 1

Is root password same on all compute nodes?[y/n][y]y

Enter root password on DATABASE SERVER:- 

9 of the included audit checks require nm2user privileged data collection on INFINIBAND SWITCH .

1. Enter 1 if you will enter nm2user password for each INFINIBAND SWITCH when prompted

2. Enter 2 to exit and to arrange for nm2user access and run the exachk later.

3. Enter 3 to skip checking best practices on INFINIBAND SWITCH

Please indicate your selection from one of the above options[1-3][1]:- 3

*** Checking Best Practice Recommendations (PASS/WARNING/FAIL) ***

Log file for collections and audit checks are at
/tmp/exachk_040613_105703/exachk.log

Starting to run exachk in background on dm01db01-priv

Starting to run exachk in background on dm01db02-priv

=============================================================
                    Node name - 192                                
=============================================================
Collecting - Compute node PCI bus slot speed for infiniband HCAs
Collecting - Kernel parameters
Collecting - Maximum number of semaphore sets on system
Collecting - Maximum number of semaphores on system
Collecting - Maximum number of semaphores per semaphore set
Collecting - Patches for Grid Infrastructure 
Collecting - Patches for RDBMS Home 
Collecting - RDBMS patch inventory
Collecting - number of semaphore operations per semop system call
Preparing to run root privileged commands on DATABASE SERVER 192.

Starting to run root privileged commands in background on STORAGE SERVER dm01cel01

root@192.168.64.131's password: 

Starting to run root privileged commands in background on STORAGE SERVER dm01cel02

root@192.168.64.132's password: 

Starting to run root privileged commands in background on STORAGE SERVER dm01cel03

root@192.168.64.133's password: 
Collecting - Ambient Temperature on storage server 
Collecting - Exadata Critical Issue EX10 
Collecting - Exadata Critical Issue EX11 
Collecting - Exadata software version on storage server 
Collecting - Exadata software version on storage servers 
Collecting - Exadata storage server system model number  
Collecting - RAID controller version on storage servers 
Collecting - Verify Disk Cache Policy on storage servers 
Collecting - Verify Electronic Storage Module (ESM) Lifetime is within Specification  
Collecting - Verify Hardware and Firmware on Database and Storage Servers (CheckHWnFWProfile) [Storage Server] 
Collecting - Verify Master (Rack) Serial Number is Set [Storage Server] 
Collecting - Verify PCI bridge is configured for generation II on storage servers 
Collecting - Verify RAID Controller Battery Condition [Storage Server] 
Collecting - Verify RAID Controller Battery Temperature [Storage Server] 
Collecting - Verify There Are No Storage Server Memory (ECC) Errors  
Collecting - Verify service exachkcfg autostart status on storage server 
Collecting - Verify storage server disk controllers use writeback cache  
Collecting - verify asr exadata configuration check via ASREXACHECK on storage servers 
Collecting - Configure Storage Server alerts to be sent via email 
Collecting - Exadata Celldisk predictive failures 
Collecting - Exadata Critical Issue EX9 
Collecting - Exadata storage server root filesystem free space 
Collecting - HCA firmware version on storage server 
Collecting - OFED Software version on storage server 
Collecting - OSWatcher status on storage servers 
Collecting - Operating system and Kernel version on storage server 
Collecting - Scan storage server alerthistory for open alerts 
Collecting - Storage server flash cache mode 
Collecting - Verify Data Network is Separate from Management Network on storage server 
Collecting - Verify Ethernet Cable Connection Quality on storage servers 
Collecting - Verify Exadata Smart Flash Cache is created 
Collecting - Verify Exadata Smart Flash Log is Created 
Collecting - Verify InfiniBand Cable Connection Quality on storage servers 
Collecting - Verify Software on Storage Servers (CheckSWProfile.sh)  
Collecting - Verify average ping times to DNS nameserver 
Collecting - Verify celldisk configuration on disk drives 
Collecting - Verify celldisk configuration on flash memory devices 
Collecting - Verify griddisk ASM status 
Collecting - Verify griddisk count matches across all storage servers where a given prefix name exists 
Collecting - Verify storage server metric CD_IO_ST_RQ 
Collecting - Verify there are no griddisks configured on flash memory devices 
Collecting - Verify total number of griddisks with a given prefix name is evenly divisible of celldisks 
Collecting - Verify total size of all griddisks fully utilizes celldisk capacity 
Collecting - mpt_cmd_retry_count from /etc/modprobe.conf on Storage Servers 

Data collections completed. Checking best practices on 192.
--------------------------------------------------------------------------------------

 FAIL =>    CSS misscount should be set to the recommended value of 60
 FAIL =>    Database Server InfiniBand network MTU size is NOT 65520
 WARNING => Database has one or more dictionary managed tablespace
 WARNING => RDBMS Version is NOT 11.2.0.2 as expected
 FAIL =>    Storage Server alerts are not configured to be sent via email
 FAIL =>    Management network is not separate from data network
 WARNING => NIC bonding is NOT configured for public network (VIP)
 WARNING => NIC bonding is  not configured for interconnect
 WARNING => SYS.AUDSES$ sequence cache size < 10,000
 WARNING => GC blocks lost is occurring
 WARNING => Some tablespaces are not using Automatic segment storage management.
 WARNING => SYS.IDGEN1$ sequence cache size < 1,000
 WARNING => Interconnect is configured on routable network addresses
 FAIL =>    Some data or temp files are not autoextensible
 FAIL =>    One or more Ethernet network cables are not connected.
 WARNING => Multiple RDBMS instances discovered, observe database consolidation best practices
 INFO =>    ASM griddisk,diskgroup and Failure group mapping not checked.
 FAIL =>    One or more storage server has stateless alerts with null "examinedby" fields.
 WARNING => Standby is not opened read only with managed recovery in real time apply mode
 FAIL =>    Managed recovery process is not running
 FAIL =>    Flashback on PRIMARY is not configured
 WARNING => Standby is not in READ ONLY WITH APPLY mode
 FAIL =>    Flashback on STANDBY is not configured
 FAIL =>    No one high redundancy diskgroup configured
 INFO =>    Operational Best Practices
 INFO =>    Consolidation Database Practices
 INFO =>    Network failure prevention best practices
 INFO =>    Computer failure prevention best practices
 INFO =>    Data corruption prevention best practices
 INFO =>    Logical corruption prevention best practices
 INFO =>    Storage failures prevention best practices
 INFO =>    Database/Cluster/Site failure prevention best practices
 INFO =>    Client failover operational best practices
 FAIL =>    Some bigfile tablespaces do not have non-default maxbytes values set
 FAIL =>    Standby database is not in sync with primary database
 FAIL =>    Redo transport from primary to standby has more than 5 minutes or more lag
 FAIL =>    Standby database is not in sync with primary database
 FAIL =>    System may be exposed to Exadata Critical Issue DB11 /u01/app/oracle/product/11.2.0.3/dbhome_1
 FAIL =>    System may be exposed to Exadata Critical Issue DB11 /u01/app/oracle/product/11.2.0.3/orcl
 INFO =>    Software maintenance best practices
 FAIL =>    Operating system hugepages count does not satisfy total SGA requirements
 FAIL =>    Table AUD$[FGA_LOG$] should use Automatic Segment Space Management
 INFO =>    Database failure prevention best practices
 WARNING => Database Archivelog Mode should be set to ARCHIVELOG
 WARNING => Some tablespaces are not using Automatic segment storage management.
 WARNING => Database has one or more dictionary managed tablespace
 WARNING => Unsupported data types preventing Data Guard (transient logical standby or logical standby) rolling upgrade
Collecting patch inventory on  CRS HOME /u01/app/11.2.0.3/grid
Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.3/dbhome_1 
Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/11.2.0.3/orcl 

Copying results from dm01db01-priv and generating report. This might take a while. Be patient.

---------------------------------------------------------------------------------
                      CLUSTERWIDE CHECKS
---------------------------------------------------------------------------------
---------------------------------------------------------------------------------

Detailed report (html) - /tmp/exachk_192_dbm_040613_105703/exachk_192_dbm_040613_105703.html

UPLOAD(if required) - /tmp/exachk_192_dbm_040613_105703.zip

 

 

Finally, the report is packaged into a zip file that can be uploaded to Oracle Global Customer Support periodically.

The HTML version of the report looks like this:


 

How to Configure Exadata InfiniBand Alert Emails in Enterprise Manager 12c

EM 12c centralizes a large amount of Exadata management functionality. Here is how to configure Exadata InfiniBand alert emails in EM 12c:

  1. First add the IB network to the EM targets: Exadata machine => IB network => target setup => monitor setup
  2. Then go to Monitoring => Metric and Collection Settings
  3. See also: How To Configure Notification Rules in 12c Enterprise Manager Cloud Control ? [ID 1368036.1]

Testing Exadata I/O with CALIBRATE

The following tests Exadata I/O using both CellCLI's calibrate command and the DBMS_RESOURCE_MANAGER.CALIBRATE_IO package, on an X2-2 quarter rack:

 

CellCLI: Release 11.2.3.1.1 – Production on Mon Dec 03 00:32:27 EST 2012

Copyright (c) 2007, 2011, Oracle. All rights reserved.
Cell Efficiency Ratio: 617

CellCLI> calibrate force;

Calibration will take a few minutes…
Aggregate random read throughput across all hard disk LUNs: 1921 MBPS
Aggregate random read throughput across all flash disk LUNs: 4164.33 MBPS
Aggregate random read IOs per second (IOPS) across all hard disk LUNs: 4971
Aggregate random read IOs per second (IOPS) across all flash disk LUNs: 145195
Controller read throughput: 1919.64 MBPS
Calibrating hard disks (read only) …
LUN 0_0 on drive [28:0 ] random read throughput: 168.12 MBPS, and 430 IOPS
LUN 0_1 on drive [28:1 ] random read throughput: 164.23 MBPS, and 423 IOPS
LUN 0_10 on drive [28:10 ] random read throughput: 170.80 MBPS, and 433 IOPS
LUN 0_11 on drive [28:11 ] random read throughput: 168.32 MBPS, and 421 IOPS
LUN 0_2 on drive [28:2 ] random read throughput: 170.07 MBPS, and 431 IOPS
LUN 0_3 on drive [28:3 ] random read throughput: 169.82 MBPS, and 421 IOPS
LUN 0_4 on drive [28:4 ] random read throughput: 165.17 MBPS, and 417 IOPS
LUN 0_5 on drive [28:5 ] random read throughput: 166.82 MBPS, and 429 IOPS
LUN 0_6 on drive [28:6 ] random read throughput: 170.85 MBPS, and 430 IOPS
LUN 0_7 on drive [28:7 ] random read throughput: 168.42 MBPS, and 429 IOPS
LUN 0_8 on drive [28:8 ] random read throughput: 169.78 MBPS, and 428 IOPS
LUN 0_9 on drive [28:9 ] random read throughput: 168.77 MBPS, and 430 IOPS
Calibrating flash disks (read only, note that writes will be significantly slower) …
LUN 1_0 on drive [FLASH_1_0] random read throughput: 271.01 MBPS, and 19808 IOPS
LUN 1_1 on drive [FLASH_1_1] random read throughput: 270.24 MBPS, and 19821 IOPS
LUN 1_2 on drive [FLASH_1_2] random read throughput: 270.41 MBPS, and 19844 IOPS
LUN 1_3 on drive [FLASH_1_3] random read throughput: 270.37 MBPS, and 19812 IOPS
LUN 2_0 on drive [FLASH_2_0] random read throughput: 272.32 MBPS, and 20634 IOPS
LUN 2_1 on drive [FLASH_2_1] random read throughput: 272.12 MBPS, and 20635 IOPS
LUN 2_2 on drive [FLASH_2_2] random read throughput: 272.28 MBPS, and 20676 IOPS
LUN 2_3 on drive [FLASH_2_3] random read throughput: 272.43 MBPS, and 20669 IOPS
LUN 4_0 on drive [FLASH_4_0] random read throughput: 271.13 MBPS, and 19802 IOPS
LUN 4_1 on drive [FLASH_4_1] random read throughput: 271.90 MBPS, and 19799 IOPS
LUN 4_2 on drive [FLASH_4_2] random read throughput: 271.42 MBPS, and 19798 IOPS
LUN 4_3 on drive [FLASH_4_3] random read throughput: 272.25 MBPS, and 19808 IOPS
LUN 5_0 on drive [FLASH_5_0] random read throughput: 272.22 MBPS, and 19824 IOPS
LUN 5_1 on drive [FLASH_5_1] random read throughput: 272.44 MBPS, and 19823 IOPS
LUN 5_2 on drive [FLASH_5_2] random read throughput: 271.83 MBPS, and 19808 IOPS
LUN 5_3 on drive [FLASH_5_3] random read throughput: 271.73 MBPS, and 19837 IOPS
CALIBRATE results are within an acceptable range.
Calibration has finished.

 
set serveroutput on;

DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
  -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (disk_count, max_latency, iops, mbps, lat);
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO (20, 15, iops, mbps, lat);

  DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
  DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || mbps);
END;
/

max_iops = 27176
latency = 4
max_mbps = 4280

PL/SQL procedure successfully completed.

 

 

SQL> col start_time for a10
SQL> col end_time for a10
SQL> select * from DBA_RSRC_IO_CALIBRATE;

START_TIME END_TIME     MAX_IOPS   MAX_MBPS  MAX_PMBPS    LATENCY NUM_PHYSICAL_DISKS
---------- ---------- ---------- ---------- ---------- ---------- ------------------
00:42      00:58           27176       4280        511          4                 20

 

 

[Repost] Oracle SPARC SuperCluster, the All-Rounder: Simplicity at Its Core

As Oracle's engineered-systems strategy has matured since the Sun acquisition, products such as the Oracle Exadata Database Machine, Oracle Exalogic middleware machine, and Oracle Exalytics business-intelligence machine have become familiar, gradually bringing Oracle's vision of "the world's most complete cloud" into focus. Recently this reporter visited Oracle's solution center in Zhongguancun Software Park for a close look at another of its integrated systems, the SPARC SuperCluster. If Oracle Exadata and Oracle Exalogic are the 100-meter champion and the diving queen, the SPARC SuperCluster is the all-around champion: it combines compute, processing, storage, networking, and virtualization in one box, and its essence remains simplicity.

For enterprise data centers, storage, servers, and network gear that cannot talk to each other are like isolated smokestacks, making performance hard to improve, deployment and management complex, and total cost too high. The SPARC SuperCluster is a general-purpose integrated system containing compute resources, processing power, storage, and networking, with all internal components interconnected; more importantly, it uses purpose-built software to extract the most from the hardware.

Zhang Xuefeng, Director of Sales Consulting in Oracle's Systems business, told this reporter that simplicity has always been the goal of data-center construction and management, and it is also what Oracle's integrated systems pursue. He peeled back the layers to show what lies inside the big SPARC SuperCluster cabinet.


 

Zhang Xuefeng, Director of Sales Consulting, Oracle Systems business

At the heart of the SPARC SuperCluster is the SPARC T4-4 server, a flagship of the SPARC + Solaris line, with four high-speed 3.0 GHz CPUs and twelve performance world records to its name. Zhang singled out TPC-H, a commonly used data-mining benchmark: IBM's highest-end Power 780 in the China market scores 164,747 and HP's latest Itanium server 140,181, while the T4-4 reaches 201,487, making the whole machine 22% faster than IBM's and 44% faster than HP's.

On the storage side, the SPARC SuperCluster's storage is purpose-built to run the database, using the 11g release. Compared with products that merely store data, Zhang remarked that "data without logical and business value attached is really just garbage." Oracle's storage actually has database software installed on the storage servers, so it can participate in computation: a single statement can be sped up dramatically by storage-side processing, hybrid columnar compression improves storage efficiency, and flash accelerates data tiering, none of which an ordinary disk array can do.

After Oracle's engineered systems, competitors' appliances have come to market one after another. Compared with them, Oracle's machines include a great deal of software tailored specifically to the hardware, which is what allows the hardware to be exploited to the fullest. According to Oracle, the SPARC SuperCluster delivers a 10-50x performance improvement, 50% less management work, and 4x faster system deployment.

 

The SPARC SuperCluster product

In addition, the SPARC SuperCluster runs the Solaris 11 operating system, whose powerful virtualization technology is a foundation for future IT consolidation. A single T4-4 server can be split by Oracle software into two separate servers with hardware and software fully isolated, used exactly like two physical machines, each able to run its own operating system. As Zhang illustrated: "A hardware or software failure in one partition does not affect the other, and on top of that you can use software partitioning to create still more partitions for applications."

No discussion of the SPARC SuperCluster's capabilities is complete without Oracle Enterprise Manager. Through Enterprise Manager, data-center staff can manage every resource, from hardware to software, without setting foot in the machine room: one management platform for all the hardware, software, switching, and storage. Zhang demonstrated what the Enterprise Manager console shows after login: the SPARC SuperCluster's hardware configuration and model numbers at a glance, with details for every component a click away. If a component has a problem, the system grades the alert in red, yellow, or blue, with detailed information available for each issue.

The SPARC SuperCluster is particularly suited to consolidating multiple applications in an enterprise data center. Positioned as an enterprise-class high-end UNIX server platform, it is aimed mainly at industries such as finance, telecommunications, and government. Its value includes:

First, faster batch processing and faster self-service application response times. In an hourly transaction-throughput test, the SPARC SuperCluster can process up to 1,000,000 transactions versus 500,000 for the IBM Z10 mainframe; for query and archiving times, Oracle is 11x faster than the IBM mainframe.

Second, faster deployment. Building IT the traditional way is like assembling building blocks bought piecemeal on the open market: storage devices, network devices, and so on, with hundreds of parts to put together before the picture is complete. With the SPARC SuperCluster, a single purchase gives one-stop deployment: instead of the thousand-odd hours it might take to integrate hundreds of components, the system works out of the box and is engineered for 99.999% availability.

In addition, reduced management complexity and lower maintenance costs.

Finally, lower cost, a smaller machine-room footprint, and better energy efficiency: an IBM Z10 server plus storage occupies 4 square meters, while the SPARC SuperCluster needs only 0.7.

Source: http://news.zdnet.com.cn/zdnetnews/2012/0910/2120029.shtml

Oracle OpenWorld 2012 Information Roundup (under construction..)

Oracle OpenWorld 2012 session downloads: Search Content Catalog for Oracle OpenWorld 2012 sessions

Downloads have not yet been opened for all sessions.

 

 

OpenWorld 2012 live coverage:

 

Larry introduces Exadata X3

 

 

 

 

 

 

 

 

OOW 2012 announced Exadata X3, including the X3-2, the Expansion Rack X3-2, and the X3-8.

Exadata X3 overview page: http://www.oracle.com/us/products/database/exadata/overview/index.html

ORACLE EXADATA Database MACHINE X3-8 data sheet
ORACLE EXADATA Database MACHINE X3-2 data sheet

 

Hardware highlights of the Exadata X3-2:

The X3-2 compute (database) nodes are upgraded to the latest 8-core Intel Xeon E5-2690 processors.

  • Compute nodes go from 12 to 16 cores per server, 33% more compute capacity
  • Memory grows from 96 GB to 128 GB, expandable to 256 GB
  • Compute node performance improves by 50%

The X3-2 cell (storage) nodes are upgraded to the latest Intel Xeon CPUs and a new generation of flash cards.

  • Flash card capacity grows 4x, and flash response time improves by 40%. A full-rack X3-2 has 22.4 TB of flash, enough for many databases to cache all of their data in flash for up to a 10x performance gain.
  • The CPUs are still 6-core, but updated to the latest Intel Xeon model
  • Storage node disks are unchanged from the X2-2: 600 GB high-performance or 3 TB high-capacity drives
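The 22.4 TB full-rack flash figure is simple arithmetic over the configuration: 14 storage servers, each with 4 flash cards. The 400 GB per-card size used below is an assumption consistent with that total, not a number stated in this article:

```python
# Where the 22.4 TB full-rack flash figure comes from.
# The 400 GB X3-generation card size is assumed to match the quoted total.
cells_per_full_rack = 14
flash_cards_per_cell = 4
gb_per_card = 400

total_tb = cells_per_full_rack * flash_cards_per_cell * gb_per_card / 1000
print(total_tb)  # 22.4
```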

 

The Exadata X3-2 will also be offered in an eighth-rack configuration, and upgrading from 1/8 to 1/4 rack is straightforward; the 1/8 price point should appeal to many small and mid-size businesses.

 

Hardware highlights of the Exadata X3-8:

The X3-8 is the upgraded version of the X2-8; its storage servers are brought up to the same level as the X3-2's, so the X3-8 likewise has 22.4 TB of flash.

 

Oracle's CEO presents "Engineered to Work Together"

 

 

An introduction to OOW in Chinese

 

Oracle OpenWorld 2012 Information Roundup

OpenWorld 2012 home page: http://www.oracle.com/openworld/index.html
OpenWorld 2012 registration page: http://www.oracle.com/openworld/register/packages/index.html

Dates: Sept. 30 - Oct. 4, 2012

Venue: Moscone Center, San Francisco (747 Howard Street, San Francisco, California 94103)

 

Oracle President Mark Hurd introduces OOW 2012:

 

 

How big is OOW

 

 

OOW 2012 schedule by technology track:

 

 

 

 

Test Fest at Oracle OpenWorld 2012

OOW 2012 Day 1 Roundup: Oracle Details Its Cloud Strategy, Database 12c, and Exadata X3 (from http://www.cbinews.com/software/news/2012-10-01/193360.htm)

 

Oracle OpenWorld 2012 officially opened on September 30 (US time). The most closely watched event of opening day was undoubtedly the keynote by Oracle CEO Larry Ellison, in which he formally announced Oracle's cloud strategy, dubbed "3s-all-in", the Oracle Database 12c, and the official launch of Exadata X3.

"Oracle's cloud will use our OS, our VM, our compute and storage services, running on the fastest and most reliable systems in the world: our engineered systems, all interconnected with InfiniBand, that is, Exadata, Exalogic, and Exalytics," Larry Ellison emphasized.

3s-all-in: merging public and private cloud

Cloud computing was unquestionably the focus of the keynote. Under the "3s-all-in" strategy, Ellison wants Oracle to cover IaaS, PaaS, and SaaS alike, with two corresponding new cloud offerings: Oracle Public Cloud IaaS and Oracle Private Cloud.

He said that while Oracle continues to sell hardware and software in the traditional way, it is adding cloud-delivered hardware (IaaS) and software (SaaS/PaaS). These three tiers of cloud services will be built entirely on industry standards and give customers the option of building private clouds.

Oracle formally announced its entry into SaaS and PaaS in 2011 and, through acquisitions and in-house development, quickly shipped SaaS and PaaS products. But Ellison is clearly not content to stop at SaaS and PaaS: he wants Oracle to be a full-spectrum cloud provider competing at all three layers, IaaS, PaaS, and SaaS. He announced that this year Oracle will enter IaaS, offering virtual machines, storage, and other infrastructure cloud services, much as Amazon does.

Beyond that, Oracle also wants to make its mark in SaaS. The newly launched SaaS service lets customers subscribe as they would to a public cloud service; it will be a "private cloud managed by Oracle, built by Oracle, and owned by Oracle", with enterprises "paying only by the month and by usage."

What outsiders care about most, of course, is whether the Oracle Cloud (public) and the Oracle Private Cloud can interconnect and migrate workloads, a key weight in Oracle's overall cloud strategy. Ellison's answer: the Oracle Private Cloud connects seamlessly to the Oracle Cloud. Customers can use the Oracle Cloud to develop, test, and run applications destined for the Oracle Private Cloud; they can use the Oracle Cloud as the Private Cloud's backup, disaster recovery, and extra capacity; and Oracle will help enterprises migrate applications between the two clouds.

Oracle thus becomes the first cloud vendor in the industry to combine private-cloud rental with public-cloud services, allowing dynamic allocation and use across public and private clouds. What is more, both the public and private clouds its customers use are built entirely on Oracle's own software and hardware, which looks like a complete integrated hardware-plus-software solution.

Oracle 12c: born for the cloud

The other highlight of the day was the announcement of the latest database release, Oracle Database 12c (the "c" signaling that the Oracle database, too, has entered the cloud-computing era). Unlike the previous Oracle 11g, 12c is called a multitenant database: in the future an enterprise may run a single container database in which all the databases backing its various applications run, with all of them sharing memory and storage resources.

The 12c database software will let customers move their computing work from the data center to the Internet. Ellison argued this greatly reduces hardware spend and gives users better software scalability, and that management improvements will make it feel "like managing one server on one machine", all "without touching or modifying the applications." Reportedly, in a 50-database configuration, 12c outperformed a traditional setup while using only 3 GB of memory versus the traditional 20 GB.

Exadata X3: more capacity at the same price

The newly announced Oracle Exadata X3 database machine was another highlight, a product often compared with SAP HANA. Ellison said the X3 is an in-memory computing product with more storage, faster execution, and above all greater throughput, thanks to new technology such as new SSD software that accelerates reads and writes. Despite the large performance gains, Ellison said pricing will remain the same as Exadata X2.

According to the published specifications, the Exadata X3-8 uses Intel Xeon E7-8870 processors and the X3-2 uses Intel Xeon E5-2690 processors, with a disk controller adapter card providing 512 MB of battery-backed write cache. In the X3-2 shown on the exhibit floor, each server has two QDR (40 Gb/s) InfiniBand ports and four 300 GB 10,000 RPM SAS disks. Per the data sheets, both the X3-8 and X3-2 use 14 Exadata storage servers, and a full-rack configuration provides the same 56 flash cards totaling 22.4 TB.

Ellison was as combative as ever in the keynote, bluntly comparing Exadata with the EMC VMAX 40K even though the two serve quite different markets: "Today, one rack of Exadata delivers 100 GB/s of bandwidth, while the latest EMC VMAX 40K high-end storage system delivers only 52 GB/s," he said.

He also compared Exadata with the IBM p780, claiming an 8:1 price-performance advantage for Exadata. With the upcoming eighth-rack version of the X3-2, Ellison said he is confident Oracle can compete even more assertively with IBM in the database-appliance market.

As for SAP HANA, Ellison's verdict: "SAP's HANA is too small. Exadata X3 stores more, runs faster, and has more throughput."

 

 

 

 

Media links:

Oracle's Hurd: $4.5 billion on R&D last year, $6 billion on acquisitions