1. System environment requirements

1.1 System environment preparation before installation

  • Verify that the Solaris server is installed and updated with the latest patch set.
  • The network environment is connected and working properly.
  • The disk array is installed and partitioned according to the Oracle system requirements.

1.2 Hardware requirements

  • Memory: at least 2 GB.
  • Swap space: 2 GB. Usually equal to physical memory; not less than 1 GB.
  • Disk capacity: at least 4 GB for the database software and at least 2 GB for the database.
  • /tmp: at least 500 MB of free space in the temporary directory.

1.3 Software Requirements

  • Operating system and required packages:

Solaris 10 with the latest patch set and the following packages:

SUNWarc  SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWmfrun SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWcsl SUNWxcu4

2. Preparation

2.1 Check the operating system running environment

  • Check whether the required packages are installed (a combined check script follows this list). Command:
pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWmfrun SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs  SUNWxwfnt SUNWcsl SUNWxcu4
  • Check the version of the operating system
# uname -r
  • Check the size of physical memory. Command:
# /usr/sbin/prtconf | grep "Memory size"
  • Check the swap size. Command:
# /usr/sbin/swap -s
  • Check file system free space and the free space of the temporary directory /tmp. Command:
# df -h /tmp

# df -h
  • Check the operating system kernel architecture
# /bin/isainfo -kv
  • Check the network
# ifconfig -a
# ping <peer node>

Contents of the hosts file on the servers (public, private, VIP and SCAN addresses for both nodes):

#public IP
172.16.10.1    bxdb1
               bxdb2
#private IP
               bxdb1-priv
172.16.1.4     bxdb2-priv
#VIP
               BXDB1-VIP
172.16.10.8    BXDB2-VIP
#SCAN
172.16.10.9    BXDB-SCAN
  • Check node times to ensure synchronization
# date
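A minimal sketch that bundles the checks above into one script, run as root on each node; the package list and commands are exactly the ones given in this section, so adjust them if your requirements differ:

#!/bin/sh
# Pre-installation checks for the Oracle RAC environment (Solaris 10).
PKGS="SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWmfrun \
SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWcsl SUNWxcu4"
for p in $PKGS; do
    pkginfo -q $p || echo "MISSING package: $p"   # report only what is absent
done
echo "OS release: `uname -r`"
/bin/isainfo -kv                                  # kernel architecture
/usr/sbin/prtconf | grep "Memory size"            # physical memory
/usr/sbin/swap -s                                 # swap usage
df -h /tmp                                        # free space in /tmp
date                                              # compare across nodes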

2.2 User preparation (identical on BXDB1 and BXDB2)

  • Modify the UDP parameters (a sketch for applying them immediately follows this step)
# vi /etc/rc2.d/S99ndd

and add:

ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
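Scripts under /etc/rc2.d only run at boot and must be executable. A minimal sketch, run as root, to make the new script effective and to apply and verify the values immediately without a reboot:

# chmod +x /etc/rc2.d/S99ndd              # rc scripts must be executable to run at boot
# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536
# ndd -get /dev/udp udp_xmit_hiwat        # verify: should print 65536
# ndd -get /dev/udp udp_recv_hiwat        # verify: should print 65536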
  • Create the required groups
/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1100 asmadmin
/usr/sbin/groupadd -g 1200 dba
/usr/sbin/groupadd -g 1201 oper
/usr/sbin/groupadd -g 1300 asmdba
/usr/sbin/groupadd -g 1301 asmoper
  • Set up the required users
# mkdir -p /export/home/grid
# useradd -u 1100 -g oinstall -G dba,asmadmin,asmdba,asmoper -d /export/home/grid -s /usr/bin/bash grid

# mkdir -p /export/home/oracle
# useradd -u 1101 -g oinstall -G asmdba,dba,oper -d /export/home/oracle -s /usr/bin/bash  oracle

Set the passwords for the new users; in this example the password is 1qaz.Oracle (a quick verification of the accounts follows the commands below).

# passwd grid
# passwd oracle
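A quick check, run as root, that the new accounts and their group memberships match what was created above:

# id -a grid       # expect primary group oinstall and groups dba, asmadmin, asmdba, asmoper
# id -a oracle     # expect primary group oinstall and groups dba, oper, asmdba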
  • Modify environment variables (a verification sketch follows this step)

The grid user:

# su - grid
$ vi .profile

and add:

# ORACLE_SID (on node 2 use ORACLE_SID=+ASM2)
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE
ORACLE_HOME=/oracle/app/11.2.0/grid; export ORACLE_HOME
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH
NLS_LANG=AMERICAN_AMERICA.UTF8; export NLS_LANG
umask 022

The oracle user:

# su - oracle
$ vi .profile

and add:

# ORACLE_SID (on node 2 use ORACLE_SID=boss2)
ORACLE_SID=boss1; export ORACLE_SID
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1; export ORACLE_HOME
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH
NLS_LANG=AMERICAN_AMERICA.UTF8; export NLS_LANG
umask 022
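To confirm the settings for each user, log in again (or source the profile) and inspect the environment; a minimal sketch:

$ . ./.profile
$ env | grep ORA      # should show ORACLE_SID, ORACLE_BASE and ORACLE_HOME
$ echo $PATH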
  • Modify system parameters (root user)
# vi /etc/system

add

set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=107374182400
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10

Then run:

# projmod -sK "project.max-shm-memory=(privileged,100G,deny)" default

Restart the server so that the /etc/system changes take effect (a post-reboot verification sketch follows).
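After the reboot, the kernel parameters and the shared-memory resource control can be checked as root; a minimal sketch (project name default as used in the projmod command above):

# grep "^set " /etc/system                             # the parameters added above
# prctl -n project.max-shm-memory -i project default   # expect privileged 100GB deny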

  • Configure user equivalence (SSH)

The following is performed as the grid user; the oracle user is configured in the same way.

$ chmod 755 /export/home    (the permission must be 755)

◆ Perform the following on both RAC nodes

$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa

When prompted for a passphrase, simply press Enter to leave it empty; an empty passphrase keeps the operation simpler.

◆ The following is only performed on RAC node 1

$ ssh BXDB1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh BXDB2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys BXDB2:.ssh/authorized_keys

◆ Perform the following on both RAC nodes

$ chmod 600 ~/.ssh/authorized_keys

Test the equivalence; the configuration is successful if no password is requested (a prompt to confirm the host key on the first connection is normal).

$ ssh BXDB1 "date; hostname"
$ ssh BXDB2 "date; hostname"
  • Configure raw disks (root user)
# format
AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA03C70E8B4d0 <HITACHI-H106030SDSUN300G-A2B0 cyl 46873 alt 2 hd 20 sec 625> solaris
          scsi_vhci/disk@g5000cca03c70e8b4
       1. c0t5000CCA03C709A38d0 <HITACHI-H106030SDSUN300G-A2B0 cyl 46873 alt 2 hd 20 sec 625> solaris
          scsi_vhci/disk@g5000cca03c709a38
       2. c0t600000E00D11000000111430000D0000d0 <FUJITSU-ETERNUS_DXL-0000 cyl 254 alt 2 hd 64 sec 256>
          scsi_vhci/ssd@g600000e00d11000000111430000d0000
       3. c0t600000E00D1100000011143000040000d0 <FUJITSU-ETERNUS_DXL-0000-409.00GB>
          scsi_vhci/ssd@g600000e00d1100000011143000040000
       4. c0t600000E00D1100000011143000060000d0 <FUJITSU-ETERNUS_DXL-0000-409.00GB>
          scsi_vhci/ssd@g600000e00d1100000011143000060000
       5. c0t600000E00D1100000011143000070000d0 <FUJITSU-ETERNUS_DXL-0000-409.00GB>
          scsi_vhci/ssd@g600000e00d1100000011143000070000
       6. c0t600000E00D1100000011143000050000d0 <FUJITSU-ETERNUS_DXL-0000-409.00GB>
          scsi_vhci/ssd@g600000e00d1100000011143000050000
       7. c0t600000E00D1100000011143000030000d0 <FUJITSU-ETERNUS_DXL-0000 cyl 58878 alt 2 hd 128 sec 256>
          scsi_vhci/ssd@g600000e00d1100000011143000030000
       8. c0t600000E00D1100000011143000020000d0 <FUJITSU-ETERNUS_DXL-0000 cyl 58878 alt 2 hd 128 sec 256>
          scsi_vhci/ssd@g600000e00d1100000011143000020000
       9. c0t600000E00D1100000011143000010000d0 <FUJITSU-ETERNUS_DXL-0000 cyl 58878 alt 2 hd 128 sec 256>
          scsi_vhci/ssd@g600000e00d1100000011143000010000
      10. c0t600000E00D1100000011143000000000d0 <FUJITSU-ETERNUS_DXL-0000 cyl 58878 alt 2 hd 128 sec 256>
          scsi_vhci/ssd@g600000e00d1100000011143000000000
Specify disk (enter its number)[7]: 2
selecting c0t600000E00D11000000111430000D0000d0
[disk formatted]
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0               0         (0/0/0)          0
Enter partition id tag[root]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]:
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]:
partition> p
Current partition table (unnamed):
Total disk cylinders available: 254 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1       swap    wu       0               0         (0/0/0)          0
  2     backup    wu       0 - 253         1.98GB    (254/0/0)  4161536
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6        usr    wm       0 - 253         1.98GB    (254/0/0)  4161536
  7 unassigned    wm       0               0         (0/0/0)          0
partition> 6
Part      Tag    Flag     Cylinders        Size            Blocks
  6        usr    wm       0 - 253         1.98GB    (254/0/0)  4161536
Enter partition id tag[usr]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[4112384b, 251c, 253e, 2008.00mb, 1.96gb]:
partition> label
Ready to label disk, continue? y
partition> p
Current partition table (unnamed):
Total disk cylinders available: 254 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1       swap    wu       0               0         (0/0/0)          0
  2     backup    wu       0 - 253         1.98GB    (254/0/0)  4161536
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6        usr    wm       3 - 253         1.96GB    (251/0/0)  4112384
  7 unassigned    wm       0               0         (0/0/0)          0
partition> quit

Format and partition all of the disks to be used in the same way, one after another.

  • Modify disk permissions

When partitioning a disk, if the space was assigned to slice n, the corresponding device name ends in sn. For example, for disk c0t600000E00D11000000111430000D0000d0 with the space assigned to slice 6, the device file we finally use is c0t600000E00D11000000111430000D0000d0s6. Set the owner and mode of each device as follows (a verification sketch follows the commands):

chown grid:asmadmin /dev/rdsk/c0t600000E00D11000000111430000D0000d0s6
chown grid:asmadmin /dev/rdsk/c0t600000E00D1100000011143000030000d0s6
chown grid:asmadmin /dev/rdsk/c0t600000E00D1100000011143000020000d0s6
chown grid:asmadmin /dev/rdsk/c0t600000E00D1100000011143000010000d0s6
chown grid:asmadmin /dev/rdsk/c0t600000E00D1100000011143000000000d0s6

chmod 660 /dev/rdsk/c0t600000E00D11000000111430000D0000d0s6
chmod 660 /dev/rdsk/c0t600000E00D1100000011143000030000d0s6
chmod 660 /dev/rdsk/c0t600000E00D1100000011143000020000d0s6
chmod 660 /dev/rdsk/c0t600000E00D1100000011143000010000d0s6
chmod 660 /dev/rdsk/c0t600000E00D1100000011143000000000d0s6
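To confirm the result, ls -lL follows the /dev/rdsk symbolic links to the underlying character devices; each slice should now show owner grid, group asmadmin and mode 660:

# ls -lL /dev/rdsk/c0t600000E00D11000000111430000D0000d0s6
# ls -lL /dev/rdsk/c0t600000E00D1100000011143000030000d0s6
# ls -lL /dev/rdsk/c0t600000E00D1100000011143000020000d0s6
# ls -lL /dev/rdsk/c0t600000E00D1100000011143000010000d0s6
# ls -lL /dev/rdsk/c0t600000E00D1100000011143000000000d0s6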

3. Installation

3.1 Install CRS software (on BXDB1)

  • Run the installer as the grid user

# xhost +

Unzip the installation media, then switch to the grid user:

# su - grid
$ export DISPLAY=<client IP>:0.0

$ ./runInstaller

Select the first item, Next

Select the second advanced installation, Next

Add the Chinese language

Fill in the SCAN information: the Cluster Name can be customized, and the SCAN Name must be the one configured in the hosts file

Add a node

Click SSH Connectivity to configure SSH equivalence

Select the network interfaces (public and private) according to the configuration in the hosts file

Select to place the OCR file in ASM

Create the ASM disk group and add the bare disk files that were previously prepared for OCR. Select External for redundancy

For the ASM account passwords, use 1qaz.Oracle; if prompted that the password is not secure enough, click Yes

Select User Group

Select the installation path

Select the Inventory directory. Default is fine

Begin to check if the system environment is satisfied

The check indicates that the following conditions are not met; these two errors can be ignored

Confirm to start installation

After the installation completes, execute the scripts on both nodes as the root user, as prompted (a sketch of the typical script paths follows)
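The exact script paths are shown in the installer prompt. With the Grid home chosen above they are usually similar to the following; the oraInventory path is an assumption based on the default central inventory location:

# /oracle/app/oraInventory/orainstRoot.sh
# /oracle/app/11.2.0/grid/root.sh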

After the installation, the verification of the cluster status reports an error. This is because we did not configure a DNS server to resolve the VIP and SCAN IP addresses but assigned them manually in the hosts file; this error can be ignored

3.2 Install Database software (on BXDB1)

  • Run the installer as the oracle user

# xhost +

Unzip the installation media, then switch to the oracle user:

# su - oracle
$ export DISPLAY=<client IP>:0.0

$ ./runInstaller

Do not select receiving security update notifications; if prompted to confirm, click Yes

Skip software updates

Choose to install only database software

Select cluster mode installation

Click SSH Connectivity to configure user equivalence

Add the Chinese language

Select Install Enterprise Edition

Select the installation directory. Since the parent directory is owned by the grid user, manually create this directory on both nodes and set its owner and group to the oracle user:

mkdir -p /oracle/app/oracle

chown oracle:oinstall /oracle/app/oracle

Select the install software group

Check whether the system environment meets the installation requirements

The following errors can be ignored

Check and start installation

Once installed, follow the prompts to execute the script on both nodes as root

Execute the corresponding script to complete the installation
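With the Oracle home chosen above, the script in question is the root.sh in the new database home (path according to the oracle user's ORACLE_HOME set earlier):

# /oracle/app/oracle/product/11.2.0/dbhome_1/root.sh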

3.3 Creating a database

  • Create ASM disk groups as the grid user

# xhost +
# su - grid
$ export DISPLAY=<client IP>:0.0

$ asmca

Create the disk groups as shown in the figure below; the CRS disk group was already created when the CRS software was installed

  • Create a clustered database as the oracle user

# xhost +
# su - oracle
$ export DISPLAY=<client IP>:0.0

$ dbca

Select the cluster database

Choose to create a database

Select the custom database

Enter the database name according to the actual situation, and check all nodes

Select Configure EM

Set the passwords for the database users. Here all passwords are 1qaz.Oracle; if prompted that the password is not secure enough, click Yes

For data file storage, select ASM and fill in the name of the previously created disk group. The ASMSNMP password will be requested; enter 1qaz.Oracle

Configure the fast recovery area: select the ASM disk group created previously, enter its size (900 GB in this case), and tick the box to enable archiving

Remove unnecessary components

Configure the memory size and check Automatic Memory Management; this value can be adjusted after the database is created, according to the actual situation

Configure the block size and number of connections

To configure the character set, select AL32UTF8

Keep the default dedicated server mode

Configure the database files: assign 4 redo log groups to each node and change the redo log size to 512 MB

Confirm and start creating the database

Start building

The database installation is now complete (a quick status check follows)
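A quick check that the cluster resources and the new database are running; this sketch assumes the database was created with the name boss, matching the ORACLE_SID=boss1/boss2 settings above:

# /oracle/app/11.2.0/grid/bin/crsctl stat res -t    # overview of all cluster resources (as root or grid)
$ srvctl status database -d boss                    # as oracle: both instances should be running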

4. Frequently used maintenance commands

4.1 Starting and stopping the cluster

(1) Start cluster components and cluster database

The cluster starts automatically by default. To start the cluster components manually (as root):

# cd /u01/app/11.2.0/grid/bin
# ./crsctl start cluster

As the grid user you can also execute the following command (not recommended in 11.2):

# su - grid
$ crs_start -all

Start the cluster database:

$ su - oracle
$ srvctl start database -d racdb
$ srvctl start instance -d racdb -n racdb1
$ srvctl start instance -d racdb -n racdb2
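To confirm that both instances came up after starting (database name racdb as in the commands above):

$ srvctl status database -d racdb      # should report instances racdb1 and racdb2 as running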

(2) Stop the RAC cluster database and cluster components

First shut down the cluster database:

$ su - oracle
$ srvctl stop database -d racdb

Then stop the cluster components (root user):

$ su -
# cd /u01/app/11.2.0/grid/bin
# ./crsctl stop cluster
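crsctl acts on the local node by default; in 11.2 it also accepts the -all option to act on every node at once, for example:

# ./crsctl stop cluster -all       # stop the clusterware stack on all nodes
# ./crsctl start cluster -all      # start it again on all nodes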

4.2 RAC common commands

Most cluster maintenance can be done with the srvctl command. As the oracle user, run srvctl -help to view the help information:

# su - oracle
$ srvctl -help

Usage: srvctl <command> <object> [<options>]

commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config

objects: database|instance|service|nodeapps|vip|asm|diskgroup|listener|srvpool|server|scan|scan_listener|oc4j|home|filesystem|gns

For detailed help on each command and object and its options use:

srvctl <command> -h or

srvctl <command> <object> -h

This lists the command format and usage. To see the details of a specific operation, append the -h parameter after the command and object; that gives the full usage without consulting other documentation.

$ srvctl add -h
$ srvctl remove -h
$ srvctl modify -h
$ srvctl config -h
$ srvctl status -h      (view the status of objects in the cluster)
$ srvctl relocate -h
$ srvctl enable -h      (enable an object that already exists in the cluster)
$ srvctl disable -h     (disable an object that exists in the cluster)
$ srvctl start -h       (start an object that already exists in the cluster)
$ srvctl stop -h        (stop an object that exists in the cluster)

Some commonly used examples:

◆ View the names of all installed cluster databases

$ srvctl config database

racdb

◆ View the configuration information of a specified cluster database

$ srvctl config database -d racdb

Database unique name: racdb

Database name: racdb

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +RACDB_DATA/racdb/spfileracdb.ora

Domain: racnode.com

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: racdb

Database instances: racdb1,racdb2

Disk Groups: RACDB_DATA,FRA

Services: MYRAC

Database is administrator managed

◆ Display node application configuration

$ srvctl config nodeapps -a -g -s -e

VIP exists.:racnode1

VIP exists.: /192.168.1.201/192.168.1.201/255.255.255.0/e1000g0

VIP exists.:racnode2

VIP exists.: /192.168.1.205/192.168.1.205/255.255.255.0/e1000g0

GSD exists.

ONS daemon exists. Local port 6100, remote port 6200

eONS daemon exists. Multicast port 16717, multicast IP address 234.92.69.133, listening port 2016

◆ List all running instances in the cluster

$ sqlplus / as sysdba

SQL> col host format a10

SQL> col db_status format a8

SQL> col inst_name format a8

SQL> SELECT inst_id, instance_number inst_no, instance_name inst_name,

2 parallel, status, database_status db_status, active_state state,

3 host_name host

4 FROM gv$instance

5 Order By inst_id;

INST_ID INST_NO INST_NAM PAR STATUS DB_STATU STATE HOST


1 1 racdb1 YES OPEN ACTIVE NORMAL racnode1

2 2 racdb2 YES OPEN ACTIVE NORMAL racnode2

◆ List all data files, temporary data files, log files, control files

SQL> set pagesize 100;      (set the number of rows SQL*Plus displays per page to 100)

SQL> set linesize 100;      (set the display line width; the default is 80, here 100)

SQL> col file_size format a9;

SQL> col file# format 99999;      (format a numeric column)

SQL>

SQL> SELECT 'data_file' as file_type, file#, creation_time, status, name, to_char(bytes/(1024*1024)) || 'M' as file_size FROM v$datafile
  2  union
  3  SELECT 'temp_file' as file_type, file#, creation_time, status, name, to_char(bytes/(1024*1024)) || 'M' as file_size FROM v$tempfile
  4  union
  5  SELECT 'log_file' as file_type, group#, null as creation_time, type, member, null as file_size FROM v$logfile
  6  union
  7  SELECT 'control_file' as file_type, null as file#, null as creation_time, status, name, null as file_size FROM v$controlfile;

FILE_TYPE     FILE# CREATION_ STATUS  NAME                                                 FILE_SIZE

control_file                          +FRA/racdb/controlfile/current.256.711918025
control_file                          +RACDB_DATA/racdb/controlfile/current.260.711918019
data_file         1 20-NOV-09 SYSTEM  +RACDB_DATA/racdb/datafile/system.256.711917577      690M
data_file         2 20-NOV-09 ONLINE  +RACDB_DATA/racdb/datafile/sysaux.257.711917583      610M
data_file         3 20-NOV-09 ONLINE  +RACDB_DATA/racdb/datafile/undotbs1.258.711917585    90M
data_file         4 20-NOV-09 ONLINE  +RACDB_DATA/racdb/datafile/users.259.711917585       5M
data_file         5 25-FEB-10 ONLINE  +RACDB_DATA/racdb/datafile/example.264.711918155     100M
data_file         6 25-FEB-10 ONLINE  +RACDB_DATA/racdb/datafile/undotbs2.265.711919153    50M
data_file         7 04-MAR-10 ONLINE  +RACDB_DATA/racdb/datafile/ts_front.269.712715421    300M
log_file          1           ONLINE  +FRA/racdb/onlinelog/group_1.257.711918047
log_file          1           ONLINE  +RACDB_DATA/racdb/onlinelog/group_1.261.711918033
log_file          2           ONLINE  +FRA/racdb/onlinelog/group_2.258.711918069
log_file          2           ONLINE  +RACDB_DATA/racdb/onlinelog/group_2.262.711918057
log_file          3           ONLINE  +FRA/racdb/onlinelog/group_3.259.711919447
log_file          3           ONLINE  +RACDB_DATA/racdb/onlinelog/group_3.266.711919433
log_file          4           ONLINE  +FRA/racdb/onlinelog/group_4.260.711919483
log_file          4           ONLINE  +RACDB_DATA/racdb/onlinelog/group_4.267.711919461
temp_file         1 25-FEB-10 ONLINE  +RACDB_DATA/racdb/tempfile/temp.263.711918123        35M

18 rows selected.

◆ View ASM disks and disk groups (v$asm_disk and v$asm_diskgroup)

SQL> SELECT group_number group#, disk_number disk#, state, redundancy, name, failgroup, path, failgroup_type FROM v$asm_disk;

GROUP# DISK# STATE REDUNDA NAME FAILGROUP PATH FAILGRO


1 0 NORMAL UNKNOWN CRS_0000 CRS_0000 /ShareDisk/crs1 REGULAR

3 2 NORMAL UNKNOWN RACDB_DATA_0002 FG1 /ShareDisk/asm1 REGULAR

3 3 NORMAL UNKNOWN RACDB_DATA_0003 FG1 /ShareDisk/asm2 REGULAR

3 0 NORMAL UNKNOWN RACDB_DATA_0000 FG2 /ShareDisk/asm3 REGULAR

3 1 NORMAL UNKNOWN RACDB_DATA_0001 FG2 /ShareDisk/asm4 REGULAR

2 0 NORMAL UNKNOWN FRA_0000 FRA_0000 /ShareDisk/fra1 REGULAR

2 1 NORMAL UNKNOWN FRA_0001 FRA_0001 /ShareDisk/fra2 REGULAR

0 8 NORMAL UNKNOWN /ShareDisk/spfile REGULAR

0 1 NORMAL UNKNOWN /ShareDisk/crs2 REGULAR

SQL> SELECT group_number, name, state, type, total_mb, free_mb FROM v$asm_diskgroup;

GROUP_NUMBER NAME STATE TYPE TOTAL_MB FREE_MB


2 FRA CONNECTED NORMAL 184322 182400

1 CRS MOUNTED EXTERN 2048 1652

3 RACDB_DATA CONNECTED NORMAL 778240 773770

To view the structure of these views:

SQL> desc v$asm_disk;

SQL> desc v$asm_diskgroup;

More information can be obtained by joining the two views; as the grid user, the same information is also available from the command line, as sketched below.
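As the grid user (with the Grid environment from the .profile above), the asmcmd utility lists the same disk and disk group information from the shell; a minimal sketch:

$ asmcmd lsdg         # disk groups: state, redundancy type, total and free MB
$ asmcmd lsdsk -k     # disks: size, failgroup, disk name and path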
