This article describes how to configure Oracle Database
Filesystem (DBFS) on Oracle Database Machine.
Most of the steps below are
one-time setup steps, except where noted otherwise. On platforms other than Oracle
Database Machine, additional setup steps may be required to install the
required fuse RPM packages which are installed by default on Oracle Database
Machine database servers.
Those using DBFS should review Note 1150157.1 for recommended patches.
For nodes running Solaris, the following items should be
reviewed before following the steps in this note.
- Solaris 11 SRU 7 (Patch 14050126) or higher is required for Solaris to support DBFS mounting with fuse. Solaris 11 Express hosts must be upgraded to Solaris 11 (see note below).
- Review Note 1021281.1 to learn about Solaris
support repositories which may be needed to apply SRU updates to Solaris
machines as prerequisites to configuring DBFS on Solaris database
servers.
- For
systems running Solaris 11 Express, follow MOS note 1431284.1 to upgrade
to Solaris 11. After upgrading to Solaris 11, apply Solaris 11 SRU 7 or
later.
Note: All references to ORACLE_HOME in this procedure are
to the RDBMS ORACLE_HOME directory (usually
/u01/app/oracle/product/11.2.0/dbhome_1) except where specifically
noted. All references to GI_HOME should be replaced with the ORACLE_HOME
directory for the Grid Infrastructure (GI).
By convention, the dollar sign ($) prompt signifies a command run as the oracle user (or Oracle software owner account) and the hash (#) prompt signifies a command that is run as root. This is further clarified by prefixing the $ or # with (oracle)$ or (root)#.
- If running
Solaris 11, the system must be running Solaris 11 SRU 07 or later.
Additionally, the libfuse package must be installed. Presence of the
libfuse package can be verified with "pkg list libfuse" (should
return one line).
- To verify
the SRU currently on the system, as root run: "pkg info entire |
grep SRU" and you'll see a reference to the SRU in the output. The
delivered SRU version based on the Exadata release may be found in
888828.1. If the system is running SRU 06 or earlier, it will require an
update before installing the libfuse package. If the system is running
SRU 07 or later, skip to the next step to install libfuse.
- After
reviewing note 1021281.1 to configure repository access, run: pkg update
- The system will apply the latest package updates, create a new boot environment, and set it as the default. To confirm, run: beadm list. An "R" is shown next to the boot environment that will be active upon reboot, and an "N" next to the boot environment that is active now. At this stage, these two letters should be on different lines until you reboot the system.
- Reboot
the server to have it boot to the updated SRU environment.
- If running Solaris 11, ensure that the libfuse package is installed by running "pkg info libfuse" at the prompt. If no rows are returned or an error is returned, then follow the steps below to install libfuse.
- After
reviewing note 1021281.1 to configure repository access, run this command
to install libfuse: pkg install libfuse
- Confirm
that it installed by running: pkg verify libfuse
- The pkg
verify command should have no output if successful.
- In the procedures listed in this note, both Solaris and Linux database servers are assumed to have user equivalence configured for the root user and for the DBFS repository database owner (typically "oracle"). Each of those users is assumed to have a dbs_group file in their $HOME directory that contains a list of cluster hostnames. The dcli utility is assumed to be available on both Solaris and Linux database nodes.
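For reference, a minimal check that these assumptions hold (the hostnames dm01db01 and dm01db02 are the example names used elsewhere in this note; the dbs_group file simply lists one database server hostname per line):
(oracle)$ cat ~/dbs_group
dm01db01
dm01db02
(oracle)$ dcli -g ~/dbs_group -l oracle hostname
dm01db01: dm01db01
dm01db02: dm01db02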
- When
non-root commands are shown, it is assumed that proper environment
variables for ORACLE_SID and ORACLE_HOME have been set and that the PATH
has been modified to include $ORACLE_HOME/bin. These things may be done
automatically by the oraenv script on Linux or Solaris systems.
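For example, a minimal sketch of setting the environment manually, using the example RDBMS ORACLE_HOME and instance name from this note (adjust for your system), or run ". oraenv" and supply the instance name when prompted:
(oracle)$ export ORACLE_SID=fsdb1
(oracle)$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
(oracle)$ export PATH=$ORACLE_HOME/bin:$PATH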
- For Linux database servers, there are several steps to perform as root. Solaris database servers do not require these steps and can skip them. First, add the oracle user to the fuse group on Linux. Run this command as the root user.
(root)# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle
Create the /etc/fuse.conf file with the user_allow_other option. Ensure proper privileges are applied to this file.
(root)# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
(root)# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf
- For
Solaris database servers, to enable easier debugging and troubleshooting,
it is suggested to add a line to the /etc/user_attr file to give the
oracle user the ability to mount filesystems directly. As root, run this
on a database server:
(root)# dcli -g ~/dbs_group -l root "echo 'oracle::::type=normal;project=group.oinstall;defaultpriv=basic,priv_sys_mount' >> /etc/user_attr"
After running this, log out of your oracle session and log in again to enable the additional privileges.
- For all
database servers, create an empty directory that will be used as the mount
point for the DBFS filesystem.
(root)# dcli -g ~/dbs_group -l root mkdir /dbfs_direct
Change ownership on the mount point directory so oracle can access it.
(root)# dcli -g ~/dbs_group -l root chown oracle:dba /dbfs_direct
- For
Solaris database hosts, it is required to employ a workaround for bug
12832052. This requires an edit to the <GI_HOME>/bin/ohasd script to
be made on each database server locally. First, make a copy of the
current ohasd script as root:
(root)# dcli -g ~/dbs_group -l root cp -p /u01/app/11.2.0/grid/bin/ohasd /u01/app/11.2.0/grid/bin/ohasd.pre12832052
Then edit the script locally on each node (do not copy the file from one node to another) and change the original lines (at around line 231) from this:
$LOGMSG "exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM \"$@\""
exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM "$@"
Add a line before the existing ones (the line starting with ppriv is the new line) so that the resulting section looks like this:
ppriv -s I+sys_mount $$
$LOGMSG "exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM \"$@\""
exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM "$@"
Note that this workaround will be required after each bundle patch installation on the GI_HOME until bug 12832052 is fixed and included in the bundle patch.
- To pick up
the additional group (fuse) membership for the oracle user on Linux or the
workaround above on Solaris, Clusterware must be restarted. For example,
to restart Clusterware on all nodes at the same time (non-rolling), you
can use the following commands as root:
(root)# dcli -g ~/dbs_group -l root /u01/app/11.2.0/grid/bin/crsctl stop crs
(root)# dcli -g ~/dbs_group -l root /u01/app/11.2.0/grid/bin/crsctl start crs
Note that the "crsctl stop cluster -all" syntax may not be used as it leaves ohasd running and Solaris database hosts require it to be restarted for the workaround to take effect.
- Create a
database to hold the DBFS repository. Follow Note 1191144.1 to create the DBFS
repository database.
- As the
RDBMS software owner, create the DBFS repository inside the repository
database. To create the repository, create a new tablespace to hold the
DBFS objects and a database user that will own the objects.
- Use SQL*Plus to log in to the DBFS repository database as a DBA user (e.g., SYS or SYSTEM).
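For example (a minimal sketch; use whichever DBA account is appropriate at your site):
(oracle)$ sqlplus / as sysdba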
- In the following create tablespace statement, use any disk group (this example shows DBFS_DG, but any diskgroup with sufficient space available can be used) and size it appropriately for the intended initial capacity.
Autoextend can be used, as long as the initial size can accommodate the
repository without requiring autoextension. The following example
statements create a tablespace of 32GB with autoextend on, allocating
additional 8GB to the tablespace as needed. You should size your
tablespace according to your expected DBFS utilization. A bigfile
tablespace is used in this example for convenience, but smallfile
tablespaces may be used as well.
SQL> create bigfile tablespace dbfsts datafile '+DBFS_DG' size 32g autoextend on next 8g maxsize 300g NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO ;
SQL> create user dbfs_user identified by dbfs_passwd default tablespace dbfsts quota unlimited on dbfsts;
SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
- With the
user created and privileges granted, create the database objects that will
hold DBFS.
(oracle)$ cd $ORACLE_HOME/rdbms/admin
(oracle)$ sqlplus dbfs_user/dbfs_passwd
SQL> start dbfs_create_filesystem dbfsts FS1
This script takes two arguments:
- dbfsts: tablespace for the DBFS
database objects
- FS1: filesystem name, this can be
any string and will appear as a directory under the mount point
For more information about these arguments,
see the DBFS documentation
11g: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e18294/adlob_client.htm
12c: https://docs.oracle.com/database/121/ADLOB/adlob_client.htm
Check the output of the dbfs_create_filesystem script for errors.
- For PDB
databases only: PDBs will require a connect entry in tnsnames and a new
service.
In tnsnames.ora, please add a connect entry using one of the methods below.
IPC (recommended for best performance). In this method, make sure the KEY matches the IPC key in listener.ora. By default, the entry exists in the listener file and has the value KEY=LISTENER.
pdb01ipc =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY=LISTENER))
(CONNECT_DATA =
(SERVICE_NAME = pdb01)
)
)
OR
alternative method:
pdb01 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname-scanname)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb01)
)
)
- For PDB
databases only: create a new service on the container database
(CDB). In the example, replace the container name, PDB name, and
service name with your values.
srvctl add service -db cdbm01 -pdb pdb01 -service pdb01svc -preferred "cdbm011,cdbm012"
srvctl start service -d cdbm01 -s pdb01svc
srvctl config service -d cdbm01
- Perform
the one-time setup steps for mounting the filesystem. The mount-dbfs.zip
script attached to this note provides the logic and necessary scripting to
mount DBFS as a cluster resource. The one-time setup steps required for
each of the two mount methods (dbfs_client or mount) are outlined below.
There are two options for mounting the DBFS filesystem and each will
result in the filesystem being available at /dbfs_direct. Choose one of
the two options.
- The first option is to utilize
the dbfs_client command directly, without using an Oracle Wallet. There
are no additional setup steps required to use this option.
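For reference only, a minimal sketch of a manual dbfs_client mount using the example names from this note (dbfs_user, a resolvable TNS alias fsdb for the repository database, and the /dbfs_direct mount point); in normal operation the mount-dbfs.sh clusterware resource configured later in this note performs the mount, so this is shown only to illustrate the method:
(oracle)$ echo dbfs_passwd > /tmp/passwd.txt
(oracle)$ nohup $ORACLE_HOME/bin/dbfs_client dbfs_user@fsdb -o allow_other,direct_io /dbfs_direct < /tmp/passwd.txt &
(oracle)$ rm -f /tmp/passwd.txt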
- The second option is to use the
Oracle Wallet to store the password and make use of the mount command.
The wallet directory ($HOME/dbfs/wallet in the example here) may be any oracle-writable
directory (creating a new, empty directory is recommended). All commands
in this section should be run by the oracle user unless otherwise noted.
Note that the Oracle Wallet is part of the Oracle Advanced Security
Option and is separately licensed.
- On Linux
DB nodes running storage software 12.1.x.x.x or lower, set
the library path on all nodes using the commands that follow (substitute
proper RDBMS ORACLE_HOMEs):
(root)# dcli -g dbs_group -l root mkdir -p /usr/local/lib
(root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
(root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
(root)# dcli -g dbs_group -l root ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
(root)# dcli -g dbs_group -l root 'echo /usr/local/lib >> /etc/ld.so.conf.d/usr_local_lib.conf'
(root)# dcli -g dbs_group -l root ldconfig
- On Linux
DB nodes running storage software 12.2.x.x.x or higher, set
the library path on all nodes using the commands that follow (substitute
proper RDBMS ORACLE_HOMEs):
(root)# dcli -g dbs_group -l root mkdir -p /usr/local/lib
(root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libnnz12.a /usr/local/lib/libnnz12.a
(root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libclntsh.so /usr/local/lib/libclntsh.so
(root)# dcli -g dbs_group -l root ln -s /usr/lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
(root)# dcli -g dbs_group -l root 'echo /usr/local/lib >> /etc/ld.so.conf.d/usr_local_lib.conf'
(root)# dcli -g dbs_group -l root ldconfig
- Create a
new TNS_ADMIN directory ($HOME/dbfs/tnsadmin) for exclusive use by the DBFS mount
script.
(oracle)$ dcli -g dbs_group -l oracle mkdir -p $HOME/dbfs/tnsadmin
- Create
the $HOME/dbfs/tnsadmin/tnsnames.ora file with the following
contents on the first node. This example presumes that the
name of the DBFS repository database is fsdb and the instance on the
first node is named fsdb1. If your RDBMS ORACLE_HOME is not /u01/app/oracle/product/11.2.0/dbhome_1, then change the PROGRAM and ENVS settings accordingly.
fsdb.local =
(DESCRIPTION =
(ADDRESS =
(PROTOCOL=BEQ)
(PROGRAM=/u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle)
(ARGV0=oraclefsdb1)
(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
(ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1,ORACLE_SID=fsdb1')
)
(CONNECT_DATA=(SID=fsdb1))
)
- On other
nodes, create similar entries (all using the name "fsdb.local")
and change all occurrences of fsdb1 to the appropriate
instance name to match the instance name running on the node where that
tnsnames.ora file resides. The tnsnames.ora file on each node
will be slightly different so that each tnsnames.ora file references
the instance running locally on that node.
- On each
node, create the $HOME/dbfs/tnsadmin/sqlnet.ora file with the same contents on each node
after making the proper substitution for <HOMEDIR_PATH_HERE>:
WALLET_LOCATION =
(SOURCE=(METHOD=FILE)
(METHOD_DATA=(DIRECTORY=<HOMEDIR_PATH_HERE>/dbfs/wallet))
)
SQLNET.WALLET_OVERRIDE = TRUE
Ensure you substitute the correct path for the DIRECTORY attribute. You may not use variables in this path - it must be a literal full path.
Copy the file to all nodes using dcli:
(oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/tnsadmin -f $HOME/dbfs/tnsadmin/sqlnet.ora
- Create a
wallet directory on one database server as the oracle user. For
example:
(oracle)$ mkdir -p $HOME/dbfs/wallet
- Create
an empty auto-login wallet:
(oracle)$ mkstore -wrl $HOME/dbfs/wallet -create
- Add the
necessary credentials to the wallet. The credentials can be specific for
the connect string used as shown here:
(oracle)$ mkstore -wrl $HOME/dbfs/wallet -createCredential fsdb.local dbfs_user dbfs_passwd
- Copy the
wallet files to all database nodes.
(oracle)$ dcli -g ~/dbs_group -l oracle mkdir -p $HOME/dbfs/wallet
(oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/ewallet.p12
(oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/cwallet.sso
- Ensure
that the TNS entry specified above (fsdb.local in the example) is
defined and working properly (checking with "TNS_ADMIN=/home/oracle/dbfs/tnsadmin
tnsping fsdb.local" is a good test).
(oracle)$ dcli -g ~/dbs_group -l oracle "export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1; TNS_ADMIN=$HOME/dbfs/tnsadmin /u01/app/oracle/product/11.2.0/dbhome_1/bin/tnsping fsdb.local | grep OK"
dm01db01: OK (20 msec)
dm01db02: OK (30 msec)
- Download
the mount-dbfs.zip script attached to this note, place it on one database
server in a temporary location (like /tmp), and unzip the file. You will have two files: mount-dbfs.sh and
mount-dbfs.conf. To ensure that the file transfer didn't modify the script
contents, run dos2unix against them on the database server:
For Linux, run this:
(root)# dos2unix /tmp/mount-dbfs.sh
(root)# dos2unix /tmp/mount-dbfs.conf
For Solaris: run these:
(root)# dos2unix /tmp/mount-dbfs.sh /tmp/mount-dbfs.sh.new
(root)# mv /tmp/mount-dbfs.sh.new /tmp/mount-dbfs.sh
(root)# dos2unix /tmp/mount-dbfs.conf /tmp/mount-dbfs.conf.new
(root)# mv /tmp/mount-dbfs.conf.new /tmp/mount-dbfs.conf
The following is an option for customers that have removed the dos2unix tool:
1. vi /tmp/mount-dbfs.sh
2. :set ff=unix
3. :wq
4. <repeat for /tmp/mount-dbfs.conf>
- Edit the variable settings in mount-dbfs.conf for your environment. Confirm or adjust the settings for the following variables; the comments in the file will help you choose appropriate values. An illustrative example follows the list.
- DBNAME
- MOUNT_POINT
- DBFS_USER
- ORACLE_HOME
(should be the RDBMS ORACLE_HOME directory)
- LOGGER_FACILITY
(used by syslog to log the messages/output from this script)
- MOUNT_OPTIONS
- DBFS_PASSWD
(used only if WALLET=false)
- DBFS_PWDFILE_BASE
(used only if WALLET=false)
- WALLET
(must be true or false)
- TNS_ADMIN
(used only if WALLET=true)
- DBFS_LOCAL_TNSALIAS
- IS_PDB
(set to true if using PDBs)
- PDB (PDB
name, if applicable)
- PDB_SERVICE
(the service name you created in step 14, if applicable)
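For reference, a minimal illustrative sketch of mount-dbfs.conf using the example values from this note and the wallet-based mount method (the delivered file contains comments and may contain additional variables; treat this only as a guide and keep the syntax of the file you downloaded):
DBNAME=fsdb
MOUNT_POINT=/dbfs_direct
DBFS_USER=dbfs_user
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
LOGGER_FACILITY=user
MOUNT_OPTIONS=allow_other,direct_io
WALLET=true
TNS_ADMIN=/home/oracle/dbfs/tnsadmin
DBFS_LOCAL_TNSALIAS=fsdb.local
IS_PDB=false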
- After
editing, copy mount-dbfs.sh (rename it if desired or needed) to the proper
directory (GI_HOME/crs/script) on database nodes and set proper
permissions on it, as the root user. Repeat the steps for
mount-dbfs.conf and copy the file to /etc/oracle.
(root)# dcli -g ~/dbs_group -l root -d /u01/app/11.2.0/grid/crs/script -f /tmp/mount-dbfs.sh
(root)# dcli -g ~/dbs_group -l root chown oracle:dba /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
(root)# dcli -g ~/dbs_group -l root chmod 750 /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
(root)# dcli -g ~/dbs_group -l root -d /etc/oracle -f /tmp/mount-dbfs.conf
(root)# dcli -g ~/dbs_group -l root chown oracle:dba /etc/oracle/mount-dbfs.conf
(root)# dcli -g ~/dbs_group -l root chmod 640 /etc/oracle/mount-dbfs.conf
- With the appropriate preparation steps for one of the two mount methods complete, the Clusterware resource for DBFS mounting can now be registered. Register the Clusterware resource by executing the following as the RDBMS software owner of the DBFS repository database (typically the oracle user). The
ORACLE_HOME and DBNAME should reference your Grid Infrastructure
ORACLE_HOME directory and your DBFS repository database name,
respectively. If mounting multiple filesystems, you may also need to
modify the ACTION_SCRIPT and RESNAME. For more information, see section
below regarding Creating and Mounting Multiple DBFS
Filesystems. Create this short script and run it as the RDBMS owner
(typically "oracle") on only one database server in your
cluster.
##### start script add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DBNAME=fsdb
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type local_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh
Note: if setting up DBFS with GoldenGate, -type should be set to cluster_resource instead of local_resource, as documented in http://www.oracle.com/technetwork/database/features/availability/maa-goldengate-rac-2007111.pdf
- Then run this as the RDBMS software owner (typically oracle) on one database
server only:
(oracle)$ sh ./add-dbfs-resource.sh
When successful, this command has no output.
It is not necessary to restart the database resource at this point; however, review the note later in this section regarding restarting the database now that the dependencies have been added.
- After the
resource is created, you should be able to see the dbfs_mount resource by
running crsctl stat
res dbfs_mount and
it should show OFFLINE on all nodes. For example:
(oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
OFFLINE OFFLINE dscbac05
OFFLINE OFFLINE dscbac06
- To bring
dbfs_mount online which will mount the filesystem on all nodes, run crsctl start resource dbfs_mount from any cluster node. This
will mount DBFS on all nodes. For example:
(oracle)$ <GI_HOME>/bin/crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'dscbac05'
CRS-2672: Attempting to start 'dbfs_mount' on 'dscbac06'
CRS-2676: Start of 'dbfs_mount' on 'dscbac06' succeeded
CRS-2676: Start of 'dbfs_mount' on 'dscbac05' succeeded
(oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
ONLINE ONLINE dscbac05
ONLINE ONLINE dscbac06
- Once the
dbfs_mount Clusterware resource is online, you should be able to observe
the mount point with df -h on each
node. Also, the default startup for this resource is "restore"
which means that if it is online before Clusterware is stopped, it will
attempt to come online after Clusterware is restarted. For example:
(oracle)$ df -h /dbfs_direct
Filesystem Size Used Avail Use% Mounted on
dbfs 1.5M 40K 1.4M 3% /dbfs_direct
- To unmount
DBFS on all nodes, run this as the oracle user:
(oracle)$ <GI_HOME>/bin/crsctl stop res dbfs_mount
Note the following regarding restarting the
database now that the dependencies have been added between the dbfs_mount
resource and the DBFS repository database resource.
Note: After creating the dbfs_mount resource,
in order to stop the DBFS repository database when the dbfs_mount resource is
ONLINE, you will have to specify the force flag when using srvctl. For example:
"srvctl
stop database -d fsdb -f".
If you do not specify the -f flag, you will receive an error like this:
(oracle)$ srvctl stop database -d fsdb
PRCD-1124 : Failed to stop database fsdb and its services
PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified
Using the -f flag allows a successful
shutdown and results in no output.
Also note that once the dbfs_mount resource
is started and then the database it depends on is shut down as shown above
(with the -f flag), the database will remain down. However, if Clusterware is
then stopped and started, because the dbfs_mount resource still has a target
state of ONLINE, it will cause the database to be started automatically when
normally it would have remained down. To remedy this, ensure that dbfs_mount is
taken offline (crsctl stop resource dbfs_mount) at the same time the DBFS
database is shut down.
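For example, a minimal sketch of stopping both together using the example names in this note (with dbfs_mount already offline, the force flag should not be required):
(oracle)$ <GI_HOME>/bin/crsctl stop resource dbfs_mount
(oracle)$ srvctl stop database -d fsdb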
There are several cases where the ORACLE_HOMEs used in
the management or mounting of DBFS may change. The most common case is when
performing an out-of-place upgrade or doing out-of-place patching by cloning an
ORACLE_HOME. When the Grid Infrastructure ORACLE_HOME or RDBMS ORACLE_HOME
change, a few changes are required. The items that require changing are:
- Modifications
to the mount-dbfs.sh script. This is also a good time to consider updating
to the latest version of the script attached to this note.
- If using
the wallet-based mount on Linux hosts, the shared libraries must be reset.
For example, if the new RDBMS
ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1 *AND* the wallet-based
mounting method using /etc/fstab is chosen, then the following commands will be
required as the root user. If the default method (using dbfs_client directly)
is used, these steps may be skipped.
- (root)# dcli -l root -g ~/dbs_group rm -f /usr/local/lib/libnnz11.so /usr/local/lib/libclntsh.so.11.1
- (root)# dcli -l root -g ~/dbs_group "cd /usr/local/lib; ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libnnz11.so"
- (root)# dcli -l root -g ~/dbs_group "cd /usr/local/lib; ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libclntsh.so.11.1"
- (root)# dcli -l root -g ~/dbs_group ldconfig
- (root)# dcli -l root -g ~/dbs_group rm -f /sbin/mount.dbfs ### remove this, new deployments don't use it any longer
For all deployments, the mount-dbfs.sh script must be
located in the new Grid Infrastructure ORACLE_HOME (<GI_HOME>/crs/script/mount-dbfs.sh). At times when the
ORACLE_HOMEs change, the latest mount-dbfs.sh script should be downloaded from this note's attachments and deployed using steps 14-16 detailed earlier in this note. Since the custom resource is already registered, it does not need to be registered again.
With the new script deployed into the correct location in the new ORACLE_HOME, the next step is to modify the cluster resource to change the location of the mount-dbfs.sh script. Also, if not already configured, take the opportunity to set RESTART_ATTEMPTS=10. Use the following commands, which should be run from any cluster node (replace <NEW_GI_HOME> with the full path as appropriate):
- (oracle)$ crsctl modify resource dbfs_mount -attr "ACTION_SCRIPT=<NEW_GI_HOME>/crs/script/mount-dbfs.sh"
- (oracle)$ crsctl modify resource dbfs_mount -attr "RESTART_ATTEMPTS=10"
After these changes are complete, verify that the status of the resource is still online; a quick check is shown below. This concludes the changes required when the ORACLE_HOMEs change.
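For example (using the resource name registered earlier in this note):
(oracle)$ <NEW_GI_HOME>/bin/crsctl stat res dbfs_mount -t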
The steps in this section deconfigure the components configured by the steps above; they remove only the parts that were configured by this procedure.
- Stop the dbfs_mount
service in clusterware using the oracle account.
(oracle)$ <GI_HOME>/bin/crsctl stop resource dbfs_mount
CRS-2673: Attempting to stop 'dbfs_mount' on 'dadzab06'
CRS-2673: Attempting to stop 'dbfs_mount' on 'dadzab05'
CRS-2677: Stop of 'dbfs_mount' on 'dadzab05' succeeded
CRS-2677: Stop of 'dbfs_mount' on 'dadzab06' succeeded
- Confirm
that the resource is stopped and then remove the clusterware resource for
dbfs_mount as the oracle (or Grid Infrastructure owner) user.
(oracle)$ <GI_HOME>/bin/crsctl stat resource dbfs_mount -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
OFFLINE OFFLINE dadzab05
OFFLINE OFFLINE dadzab06
(oracle)$ <GI_HOME>/bin/crsctl delete resource dbfs_mount
- If a
wallet was used, remove the /home/oracle/dbfs directory and subdirectories
as the oracle user.
(oracle)$ dcli -g dbs_group -l oracle rm -rf $HOME/dbfs
- Remove the
custom action script that supported the resource and the /etc/fuse.conf
file as the root user.
(root)# dcli -g dbs_group -l root rm -f /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh /etc/fuse.conf
- Remove the
mount point directory as the root user.
(root)# dcli -g dbs_group -l root rmdir /dbfs_direct
- On Linux
servers only, modify the group memberships for the oracle user account.
This assumes that you have not added any other group memberships for the
oracle account (other than the defaults configured during deployment). The
following is an example showing that the group memberships that remain do
not include the fuse group. In this case, the oracle user was a member of
oinstall, dba, oper, and asmdba groups in addition to the fuse group. The
group memberships for the oracle user may vary, so some modification to
this example command may be required.
(root)# dcli -g dbs_group -l root usermod -G oinstall,dba,oper,asmdba oracle
- On Linux servers only, if a wallet was used, follow these steps to remove wallet-specific configuration changes:
- dcli -g ~/dbs_group -l root 'sed -i "/^\/sbin\/mount.dbfs/d" /etc/fstab'
- dcli -g ~/dbs_group -l root rm -f /sbin/mount.dbfs
- dcli -g ~/dbs_group -l root 'cd /usr/local/lib; rm -f libclntsh.so.11.1 libfuse.so.2 libnnz11.so'
- dcli -g ~/dbs_group -l root 'sed -i "/^\/usr\/local\/lib$/d" /etc/ld.so.conf.d/usr_local_lib.conf'
- dcli -g ~/dbs_group -l root ldconfig
- On Solaris
servers only, remove the line from /etc/user_attr that was added in this
procedure by executing the following command as root:
(root)# dcli -g ~/dbs_group -l root 'sed "/^oracle::::/d" /etc/user_attr > /tmp/user_attr.new ; cp /tmp/user_attr.new /etc/user_attr ; rm -f /tmp/user_attr.new'
- The DBFS
repository objects remain. You may either:
- Delete
the DBFS repository database using DBCA once the steps above are
completed.
- Remove
the DBFS repository by connecting to the database as the repository owner
using SQL*Plus and running @?/rdbms/admin/dbfs_drop_filesystem
<filesystem-name>. In the example in this note, the filesystem-name
is FS1, so the command would be @?/rdbms/admin/dbfs_drop_filesystem FS1
SQL> connect dbfs_user/dbfs_passwd
SQL> @?/rdbms/admin/dbfs_drop_filesystem FS1
SQL> connect / as sysdba
SQL> drop user dbfs_user cascade;
There are several ways to create additional DBFS
filesystems. Some environments may wish to have more than one DBFS filesystem
to support non-direct_io. DBFS filesystems may always hold shell script files
or binary files, but if mounted with the direct_io option, the files on DBFS
will not be executable. In such cases, a second DBFS filesystem may be used
since it can be mounted without the direct_io option to support executable
files or scripts.
There is nothing "inside" the DBFS filesystem that makes it direct_io or non-direct. Instead, to change from one type of access to the other, the filesystem should be unmounted (using the CRS resource), mount options changed in the mount-dbfs.sh script on all nodes, and then the filesystem mounted again (using the CRS resource).
Let's review some high-level points related to multiple DBFS filesystems.
- Create
additional filesystems under same DBFS repository owner (database user)
- The
additional filesystems will show as sub-directories which are the
filesystem names given during creation of the filesystem (second argument
to the dbfs_create_filesystem_advanced script).
- There is
only one mount point for all filesystems created in this way.
- Only one
mount-dbfs.sh script needs to be configured.
- All
filesystems owned by the same DBFS repository owner will share the same
mount options (i.e. direct_io).
- Create
another DBFS repository owner (database user) and new filesystems under that
owner.
- Can be in
the same database with other DBFS repository owners or in a completely
separate database.
- Completely
separate: can use different tablespaces (which could be in different
diskgroups), separate mount points, possibly different mount options
(direct_io versus non-direct_io).
- One DBFS
filesystem has no impact on others in terms of administration or
dbfs_client start/stop.
- Requires
a new mount point to be created and used.
- Requires
a second mount-dbfs.sh to be created and configured in Clusterware.
- Also
supports having completely separate ORACLE_HOMEs with possibly different
software owner (Linux/Solaris) accounts managing the repository
databases.
To configure option #1 above, follow these steps:
- It is
recommended (but optional) to create a new tablespace for the new DBFS
filesystem you are creating.
- Connect to the DBFS repository database as the current owner (dbfs_user is the example owner used in this note) and then run the dbfs_create_filesystem_advanced script again using a different filesystem name (the second argument).
- The
filesystem will appear as another subdirectory just below the chosen mount
point.
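For illustration only, a sketch using the hypothetical names dbfsts2 and FS2 and the dbfs_create_filesystem script shown in step 5 (size the new tablespace according to the guidance in step 4):
(oracle)$ sqlplus / as sysdba
SQL> create bigfile tablespace dbfsts2 datafile '+DBFS_DG' size 32g autoextend on next 8g maxsize 300g NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
SQL> alter user dbfs_user quota unlimited on dbfsts2;
SQL> connect dbfs_user/dbfs_passwd
SQL> @?/rdbms/admin/dbfs_create_filesystem dbfsts2 FS2
The new filesystem then appears as /dbfs_direct/FS2 alongside FS1.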
To configure option #2 above, follow these steps:
- Optionally
create a second DBFS repository database.
- Create a
new tablespace and a DBFS repository owner account (database user) for the
new DBFS filesystem as shown in step 4 above.
- Create the new filesystem using the procedure shown in step 5 above, substituting the proper values for the tablespace name and desired filesystem name.
- If using a
wallet, you must create a separate TNS_ADMIN directory and a separate
wallet. Be sure to use the proper ORACLE_HOME, ORACLE_SID, username and
password when setting up those components.
- Ensure you
use the latest mount-dbfs.sh script attached to this note. Updates were
made on 7-Oct-2010 to support multiple filesystems. If you are using
previous versions of this script, download the new version and after
applying the necessary configuration modifications in it, replace your
current version.
- To have
Clusterware manage a second filesystem mount, use a second copy of the
mount-dbfs.sh and mount-dbfs.conf scripts. Rename them to a unique file
name like mount-dbfs2.sh/mount-dbfs2.conf and place it in the proper
directory as shown in step 18 above. Once mount-dbfs2.conf has been
modified with proper configuration information, a second Clusterware
resource (with a unique name) should be created. The procedure for this is
outlined in step 19 above. Note: if you are using PDBs, then a new
service and tnsnames entry needs to be created first, as shown in steps 13
and 14.
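For illustration, a sketch of the variables to change in a copy of the add-dbfs-resource.sh script from step 19 in order to register a hypothetical second resource (the names dbfs_mount2, mount-dbfs2.sh, and a second repository database fsdb2 are assumptions; the rest of the script is unchanged):
ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/mount-dbfs2.sh
RESNAME=dbfs_mount2
DBNAME=fsdb2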
The remaining steps in this note (configuration,
starting, stopping) relate to a single DBFS filesystem resource. If you create
additional DBFS filesystem resources, you will need to start each of them
individually at startup time (i.e. when the database is restarted). Starting or
stopping one DBFS resource does not have any effect on other DBFS resources you have configured.
When configuring DBFS, if the clusterware resource(s) do
not mount the DBFS filesystem successfully, it may be useful to run the
mount-dbfs.sh script directly from the command line on one node. You should run
this as the RDBMS owner user, specifying one of the three arguments shown:
<GI_HOME>/crs/script/mount-dbfs.sh [ start | stop | check ]
Your script may have a slightly different name, especially if you are deploying multiple filesystems. Often, running the script in this way will display errors that may otherwise not be reported by clusterware.
Also, starting with the 28-Jan-2011 version, you will find messages related to DBFS in the /var/log/messages (on Linux) or /var/adm/messages (on Solaris) file tagged with the string DBFS_<mountpoint> for easy identification.
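For example, on Linux you might check the most recent DBFS-related messages like this (the exact tag follows the DBFS_<mountpoint> pattern described above, so for the /dbfs_direct mount point used in this note, grep for that tag):
(root)# grep DBFS_ /var/log/messages | tail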
If you encounter issues when mounting the DBFS filesystem, it may be required to umount the filesystem manually. To unmount file system, run "fusermount -u /dbfs_direct" (on Linux) on the node(s) having problems and then make sure that the dbfs_client (or mount.dbfs if using the second mounting option) is not running. When using "fusermount -u /dbfs_direct" to unmount the filesystem, if the client (dbfs_client or mount.dbfs) is still running, that process should be killed. On Solaris, use "umount /dbfs_direct" instead of "fusermount -u /dbfs_direct".
Normally, if no mounts are present, there should be an empty directory at /dbfs_direct (ls -l /dbfs_direct). When running "ls -l /dbfs_direct", if an error message like "Transport Endpoint is not connected" is observed, this may indicate that the DBFS client (dbfs_client) is no longer running, but fuse still has record of the mount. Often, this will be resolved by using "fusermount -u /dbfs_direct" and ensuring that the client process is no longer running before re-attempting the mount operation using one of the methods outlined in this note.
Other items that can lead to "Transport Endpoint is not connected" error:
- the sqlnet.ora is missing
- the dbfs_user's password contains a dollar sign; in that case, the password supplied to the wallet configuration command mkstore must be enclosed in single quotes.
- TNS_ADMIN must be defined prior to running the mkstore command because it needs to connect to the database via the SQL*Net connect string fsdb.local.
If attempting to umount the filesystem results in errors saying "device busy", then you may try using "fusermount -u -z /dbfs_direct" (on Linux) and then identify the dbfs_client and/or mount.dbfs programs that are still running and kill them. Then, you should be able to mount the filesystem again.
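For example, a minimal sketch of this cleanup sequence on Linux (the PID to kill is whatever dbfs_client process the ps output reports):
(root)# fusermount -u -z /dbfs_direct
(root)# ps -ef | grep dbfs_client | grep -v grep
(root)# kill <dbfs_client_PID>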
On Solaris hosts, if you want to inspect the arguments
for dbfs_client to see what options it was invoked with, you will want to
identify the process ID using "ps -ef|grep dbfs_client", but then you'll need to use the
"pargs <PID>" command to see the complete options. The Solaris
output for the ps command truncates the command line at 80 characters which is
typically not enough to display all options.
If you receive an error saying "File system already
present at specified mount point <mountpoint>" then ensure that the
mount point directory is empty. If there are any contents in the mount point
directory, this error will prevent the filesystem mount from succeeding.
Seasoned system administrators will note that this behavior differs from
typical filesystem mounts where mount point directories can have contents and
those contents will be hidden while the mounted filesystem remains mounted. In
other terms, it "overlays" the new mount. With fuse-mounted
filesystems, the mount point directory must be empty prior to mounting the fuse
(in this case, DBFS) filesystem.
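For example, before mounting you can confirm that the mount point is empty (the command should produce no output):
(oracle)$ ls -A /dbfs_direct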
If EXECUTE permission on SYS.DBMS_LOB was revoked due to security hardening, you will need to grant EXECUTE to the DBFS user at a minimum; otherwise you may receive output similar to the following when accessing the DBFS mount point:
# ls -l /dbfs_direct
ls: reading directory /dbfs_direct: Invalid argument
total 0
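For example, a minimal sketch of granting the privilege to the example repository owner used in this note:
SQL> connect / as sysdba
SQL> grant execute on DBMS_LOB to dbfs_user;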