Installing Oracle Database 18c on FlashArray
This article provides instructions on how to install a single-instance Oracle Database 18c on Linux with storage on a Pure FlashArray. The objective is to provide an overview of the installation process, with a focus on the steps that need to be carried out on the FlashArray.
Oracle provides two storage options for the database - File System and Automatic Storage Management (ASM). This document goes through the installation process for each. For detailed configuration options and instructions for installing Oracle Database 18c, please refer to the Oracle Grid Infrastructure Installation Guide and the Oracle Database Installation Guide.
Before you begin, please go through the Oracle Database Installation Checklist and make sure all the prerequisites have been met.
Please review the recommendations provided in Oracle Database Recommended Settings for FlashArray and implement the ones that are applicable to your environment.
I. Prepare database host
1. Install prerequisite packages
1.1 Install sg3_utils. This provides useful SCSI utilities.
# yum install sg3_utils
1.2 Oracle Universal Installer requires an X Window System, so make sure the X libraries are installed. Installing xclock will pull in the required dependencies, and xclock can also be used to test that the X Window System is installed correctly and in a working state.
# yum install xclock
For more details, please refer to Setting up X Window Display section in the Appendix.
1.3 Download and install the pre-install rpm.
# curl -o oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
# yum -y localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
Check the RPM log file to review the changes that the pre-install rpm made. On Oracle Linux 7, it can be found at /var/log/oracle-database-preinstall-18c/backup/<timestamp>/orakernel.log.
The pre-install RPM should install all the prerequisite packages needed for the installation. For reference, a complete list can be found at Operating System Requirements for x86-64 Linux Platforms.
You can check if one or more packages are installed using the following command.
# rpm -q binutils compat-libstdc++ gcc glibc libaio libgcc libstdc++ make sysstat unixodbc
2. Create Users and Groups
2.1 Create operating system users and groups.
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54331 asmadmin   # needed if installing ASM
groupadd -g 54332 asmdba     # needed if installing ASM
groupadd -g 54328 asmoper    # needed if installing ASM
useradd -u 54331 -g oinstall -G dba,asmadmin,asmdba,asmoper grid
useradd -u 54321 -g oinstall -G dba,oper,asmdba oracle
2.2 Add umask to .bash_profile of oracle and grid user
umask 022
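The effect of umask 022 can be verified in a scratch directory: newly created files and directories are writable only by their owner. A quick sketch using standard shell tools:

```shell
# Demonstrate the effect of umask 022 on newly created files and directories
tmpdir="$(mktemp -d)"
cd "$tmpdir"
umask 022
touch demo_file
mkdir demo_dir
stat -c '%a %n' demo_file demo_dir   # expect 644 for the file, 755 for the directory
```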
3. Add Host entries
3.1 Add entries to /etc/hosts
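For example, an entry for the database host used in this article might look like the following (the IP address is hypothetical):

```
10.21.100.51   orademo1.puretec.purestorage.com   orademo1
```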
3.2 Set the hostname (this updates /etc/hostname)
hostnamectl set-hostname orademo1.puretec.purestorage.com --static
3.3 Set ORACLE_HOSTNAME in .bash_profile file.
$ export ORACLE_HOSTNAME=orademo1.puretec.purestorage.com
4. Update UDEV rules
Create a rules file called /etc/udev/rules.d/99-pure-storage.rules with the settings described below so that they are applied automatically after each reboot.
4.1 Change Disk I/O Scheduler
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux comes with multiple disk I/O schedulers, like Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ).
Pure Storage recommends that the noop scheduler be used with FlashArray volumes. Here is an example of how such a rule can be formulated.
# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"
4.2 Collection of entropy for the kernel random number generator
Some I/O events contribute to the entropy pool for /dev/random. Linux uses this pool for cryptographic purposes such as generating SSH keys and SSL certificates. Preventing the I/O device from contributing to the pool will not materially impact the available randomness, so this parameter can be set to 0 to reduce the overhead.
# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"
4.3 Set rq_affinity
By default this value is set to 1, which means that once an I/O request has been completed by the block device, it is sent back to the "group" of CPUs that the request came from. This can sometimes improve performance because the request's data may still be in the CPU cache, requiring fewer cycles.
If this value is set to 2, the block device sends the completed request back to the actual CPU that issued it, not to the general group. On a host with many cores, where you want to utilize all the cores and spread the load around as much as possible, a value of 2 can provide better results.
# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"
4.4 Set HBA timeout
# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray      ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"
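Assembled into a single file, /etc/udev/rules.d/99-pure-storage.rules contains the four rules from the steps above:

```
# Recommended settings for Pure Storage FlashArray

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray      ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"
```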
5. HBA I/O Timeout Settings
Though the Pure Storage FlashArray is designed to service I/O with consistently low latency, there are error conditions that can cause much longer latencies. It is important to ensure that dependent servers and applications are tuned appropriately to ride out these error conditions without issue. By design, in the worst case for a recoverable error condition, the FlashArray will take up to 60 seconds to service an individual I/O, so the same value should be set at the host level. Please note that 0x3c is hexadecimal for 60.
On Linux hosts, the udev rule in step 4.4 above sets this timeout. On Solaris hosts, edit /etc/system and either add or modify (if not present) the sd settings as follows:
set sd:sd_io_time = 0x3c
set ssd:ssd_io_time = 0x3c
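The hexadecimal value can be quickly verified with printf:

```shell
# Confirm that hex 0x3c equals decimal 60
printf '%d\n' 0x3c
```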
II. Prepare storage on the FlashArray
1. Create the volumes
As the FlashArray is built on solid-state technology, design considerations that were very important for spinning-disk storage no longer apply. We do not have to worry about distributing I/O over multiple disks, so there is no need to ensure that tables and their indexes are on different disks. The entire database can be placed on a single volume without any performance implication. One thing to keep in mind when deciding how many volumes to create is that performance and data-reduction statistics are captured and stored at the volume level.
In this installation, we will create a volume for the Oracle Home plus three volumes for the database: one for data and temp files, one for redo log files, and one for the Fast Recovery Area.
1.1 Go to Storage -> Volumes, and click on the plus sign on the Volumes section.
1.2 Create a volume for the Oracle Home
1.3 Create one or more volumes for database Data and Temp files
1.4 Create one volume for the Redo Log files.
1.5 Create one or more volumes for the Fast Recovery Area
2. Create the host
2.1 Go to Storage->Hosts and click on the plus icon in the Hosts section.
2.2 The Create Host dialog will pop up. Enter the name for the host that we are going to add. Note that this can be any name you think would best identify your host in the array; it does not have to be the operating system hostname. Click Create to create a host object.
2.3 The host will now appear in the Hosts section. Click on the name to go to the Host details page.
2.4 Click on the Menu icon at the top right corner of the Host Ports section. A drop-down menu with a list of options will appear. This is where we set up the connectivity between the FlashArray and the host. Choose the configuration option that matches the type of network used to connect.
In this case, we are connecting over a Fibre Channel network, so we select the option to Configure WWNs.
For an iSCSI network, select the option to Configure IQNs.
For an NVMe over Fabrics network, select the option to Configure NQNs.
For more details on how to identify the port names, please refer to Finding Host Port Names section in the Appendix.
2.5 On selecting Configure WWNs... from the menu, a dialog similar to the one below will be displayed with a list of available WWNs in the left pane. None are shown as available on this demo host because the WWNs have already been added to the host object.
2.6 Once the WWNs corresponding to the host are selected, they will appear in the Host Ports panel.
3. Connect the Volumes
3.1 Click on the Menu icon on the Connected Volumes panel. From the drop-down menu, select Connect....
3.2 Select the volumes that we created in Step 1 and click Connect.
3.3 The volumes will now appear under the Connected Volumes panel for the host.
3.4 Clicking on a volume name will show the details page where we can find the volume serial number.
4. Setup volumes on the host
After the volumes are created on the FlashArray and connected to the host, we need to configure them before we can start the Oracle installer.
4.1 For the volumes to show up on the host, we need to rescan the SCSI bus.
# rescan-scsi-bus.sh -a
4.2 Check that the volumes are visible on the host using commands like lsscsi or lsblk.
4.3 Edit multipath.conf file and create aliases for the volumes.
# vi /etc/multipath.conf
The wwid can be obtained by prefixing 3624a9370 to the volume serial number (in lower case) obtained in step 3.4 above.
multipaths {
    multipath {
        wwid  3624a93706c1d7213605f4920000751d0
        alias oracle-rt-ora02prd-home
    }
    multipath {
        wwid  3624a93706c1d7213605f4920000751d1
        alias oracle-rt-ora02prd-data01
    }
    multipath {
        wwid  3624a93706c1d7213605f49200007aab2
        alias oracle-rt-ora02prd-redo
    }
    multipath {
        wwid  3624a93706c1d7213605f4920000751d2
        alias oracle-rt-ora02prd-fra01
    }
}
# service multipathd restart
# multipath -ll
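The wwid construction described above (the prefix 3624a9370 followed by the volume serial number in lower case) can be sketched as follows, using the serial number from the example configuration:

```shell
# Build a multipath wwid from a FlashArray volume serial number:
# "3624a9370" + serial number converted to lower case
serial="6C1D7213605F4920000751D0"
wwid="3624a9370$(printf '%s' "$serial" | tr '[:upper:]' '[:lower:]')"
echo "$wwid"
```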
4.4 Create a filesystem on the volume for Oracle Home
# mkfs.ext4 /dev/mapper/oracle-rt-ora02prd-home
4.5 Create a mount point
# mkdir /u01
4.6 Add an entry to /etc/fstab
/dev/mapper/oracle-rt-ora02prd-home /u01 ext4 defaults 0 0
4.7 Mount the volume
# mount /u01
III a. Install Oracle Database on File System
Configure Database Storage
As the database will be created on File System, we need to create the mount points and file system on the volumes before we can mount them.
1. Create file systems on the database volumes.
[root@orademo1 ~]# mkfs.ext4 /dev/mapper/oracle-rt-ora02prd-data01
[root@orademo1 ~]# mkfs.ext4 /dev/mapper/oracle-rt-ora02prd-fra01
[root@orademo1 ~]# mkfs.ext4 /dev/mapper/oracle-rt-ora02prd-redo
2. Create the mount points
[root@orademo1 ~]# mkdir /ora02mnt
[root@orademo1 ~]# mkdir /ora02mnt/redo
[root@orademo1 ~]# mkdir /ora02mnt/data
[root@orademo1 ~]# mkdir /ora02mnt/fra
3. Add entries to /etc/fstab
/dev/mapper/oracle-rt-ora02prd-data01 /ora02mnt/data ext4 _netdev,discard,noatime 0 0
/dev/mapper/oracle-rt-ora02prd-redo   /ora02mnt/redo ext4 _netdev,discard,noatime 0 0
/dev/mapper/oracle-rt-ora02prd-fra01  /ora02mnt/fra  ext4 _netdev,discard,noatime 0 0
4. Mount the volumes
[root@orademo1 ~]# mount /ora02mnt/redo
[root@orademo1 ~]# mount /ora02mnt/data
[root@orademo1 ~]# mount /ora02mnt/fra
5. Change ownership of the mount points to Oracle owner.
[root@orademo1 ~]# chown oracle:oinstall /ora02mnt/redo
[root@orademo1 ~]# chown oracle:oinstall /ora02mnt/data
[root@orademo1 ~]# chown oracle:oinstall /ora02mnt/fra
[root@orademo1 ~]# df -h
Filesystem                             Size  Used  Avail  Use%  Mounted on
...
/dev/mapper/oracle-rt-ora02prd-redo    197G   61M   187G    1%  /ora02mnt/redo
/dev/mapper/oracle-rt-ora02prd-data01  2.0T   81M   1.9T    1%  /ora02mnt/data
/dev/mapper/oracle-rt-ora02prd-fra01   2.0T   81M   1.9T    1%  /ora02mnt/fra
Create ORACLE HOME and unzip software
1. Create directories and set the permissions
# mkdir -p /u01/app/oracle/product/18.0.0/dbhome_1
# chown -R oracle:oinstall /u01/app
# chmod g+w /u01/app
# chmod g+w /u01/app/oracle/product/18.0.0
2. Unzip software as the oracle user.
[oracle] $ cd /u01/app/oracle/product/18.0.0/dbhome_1
[oracle] $ unzip -q <Zipfile location>/db18c/LINUX.X64_180000_db_home.zip
Start the installer
./runInstaller
Select “Set Up Software Only” as we will use dbca to create the database after the software installation is complete.
Select "Single instance database installation" as we are installing a single-instance database.
Select the database edition you would like to install.
Specify the directory for ORACLE_BASE.
Here you can specify the OS group for each Oracle privilege. Accept the default settings and click the Next button.
Fix any issues identified by the prerequisite checks. You can select Ignore All checkbox to proceed if you understand the implications.
Oracle needs a confirmation that you understand the impact and would like to proceed.
The installer provides a summary of selections made. Click on Install to proceed.
Installation in progress.....
The root.sh script located in ORACLE_HOME needs to be executed as the root user from another window. Once executed, click OK.
Software installation is successful.
Now that the database software is installed, create the database using the Database Configuration Assistant (dbca).
[oracle@orademo1 dbhome_1]$ dbca
Select Create a database
Choose Typical or Advanced. Advanced gives more options to customize so we select that.
Select type of database you want to create.
Provide database identification details
Select database storage option. In this section, we are installing on the file system.
Specify the location of the Fast Recovery Area and enable Archiving.
Create a new Listener. We do not have an existing one, as this is a fresh install.
Select Oracle Database Vault configuration options
Select Instance configuration options
Specify Management options. If you already have EM Cloud Control installed, you can provide the details here to bring this database under management.
Specify database credentials for administrative accounts
Select Database Creation option
Review the summary of selections made; if it looks good, click Finish.
Database creation in progress....
Database creation complete.
III b. Install Oracle Database on Automatic Storage Management (ASM)
Oracle ASM is installed as part of an Oracle Grid Infrastructure installation. As we are going to use Oracle ASM for storage, we first need to install Oracle Grid Infrastructure before we install and create the database.
Before we start the Grid installation, we need to decide how we would like to make the storage available to ASM. There are three options for device persistence:
- UDEV rules
- ASMLIB
- ASM Filter Driver (AFD)
In this example installation, we will be using ASMLIB to configure storage. For more information on setting up storage using UDEV rules or ASM Filter Driver, please refer to the ASM Device Persistence options section in the Appendix.
Configure Database Storage using ASMLIB
1 Download and install ASMLib
# rpm -Uvh http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el7.x86_64.rpm
# yum -y install kmod-oracleasm oracleasm-support
2 Run the oracleasm initialization script with the configure option.
# oracleasm configure -i
The script will prompt for the user and group to own the driver interface; these should be grid and asmadmin respectively. To the questions on whether it should start the ASM library driver on boot, and whether it should scan for ASM disks on boot, answer Yes to both.
3 Enter the following command to load the oracleasm kernel module.
# oracleasm init
To check the status of the oracleasm driver, run the status command.
[root@orademo1 oracle]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
4 Now that the driver is loaded, we can go ahead and create ASM disks
# oracleasm createdisk ASM02PRD_DATA01 /dev/mapper/oracle-rt-ora02prd-data01
# oracleasm createdisk ASM02PRD_FRA01 /dev/mapper/oracle-rt-ora02prd-fra01
# oracleasm createdisk ASM02PRD_REDO /dev/mapper/oracle-rt-ora02prd-redo
To check if the asm disks have been created successfully, run the listdisks command.
# oracleasm listdisks
To check the status of each disk returned by the above command, run the querydisks command.
# oracleasm querydisk ASM02PRD_DATA01
For more information on Oracle ASMLib, please refer to Automatic Storage Management feature of the Oracle Database.
Create GRID HOME and unzip software
1. Create directories and set the permissions
# mkdir -p /u01/app/oracle/product/18.0.0/grid
# chown -R oracle:oinstall /u01/app
# chmod g+w /u01/app
# chmod g+w /u01/app/oracle/product/18.0.0
# chown grid:oinstall /u01/app/oracle/product/18.0.0/grid
2. Unzip software
[grid] $ cd /u01/app/oracle/product/18.0.0/grid
[grid] $ unzip -q <Zipfile location>/LINUX.X64_180000_grid_home.zip
3. Install the cluster verify utility
cd /u01/app/oracle/product/18.0.0/grid/cv/rpm
rpm -i cvuqdisk-1.0.10-1.rpm
Start Grid Infrastructure Installer
[grid] $ ./gridSetup.sh
As we are installing a single instance database, select Configure Oracle Grid Infrastructure for a Standalone Server (Oracle Restart)
Enter the name of the Disk Group for database files.
Next, click on Change Discovery Path button to change the path to where our disks can be located.
As we are using ASMLIB, we set the discovery path to /dev/oracleasm/disks.
The ASMLIB disks should now appear. Select the disks that should be part of the DATA disk group. In this example, we have only one.
Change Redundancy setting to External as the FlashArray comes with always-on RAID-HA out of the box.
Also, as we are using ASMLIB, we will leave Configure ASM Filter Driver unchecked.
On clicking Next, we will get the following popup. Click on Yes and continue.
Enter the desired passwords.
Select operating system groups.
You can run the root.sh script manually when prompted, or select the options like below to have the installer run it automatically.
Review the results of the prerequisite check. If any issues are found, it is recommended to fix them before proceeding.
If you choose to ignore some of the issues found by the checks as we have done above, the installer will show the following warning.
Review the summary of inputs and settings, and click on Install to proceed.
Once the installer completes, the Grid Infrastructure is installed and the ASM instance is up and running. This can be verified by running the asmcmd command.
[grid@orademo1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304   2097152  2091924                0         2091924              0             N  DATA/
Install Oracle Database
1. Create directories and set the permissions
# mkdir -p /u01/app/oracle/product/18.0.0/dbhome_1
# chown -R oracle:oinstall /u01/app
# chmod g+w /u01/app
# chmod g+w /u01/app/oracle/product/18.0.0
2. Unzip software as the oracle user.
[oracle] $ cd /u01/app/oracle/product/18.0.0/dbhome_1
[oracle] $ unzip -q <Zipfile location>/db18c/LINUX.X64_180000_db_home.zip
3. Start the installer
./runInstaller
Select the disk group for the database files. DATA is the disk group we created while installing the Grid Infrastructure.
Enter the desired passwords
Select OS group for each privilege.
Prerequisite checks are performed. Fix issues identified before proceeding. Ignore only if you know what you are doing and understand the implications.
Summary of inputs and options selected
Installation in progress
Installation complete!
The database should be up and running now. If you have not done so already, you may want to update the .bash_profile script for the grid and oracle users to set the required environment variables.
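For example, the oracle user's .bash_profile might include the following (the ORACLE_SID value is hypothetical; adjust the paths and SID to match your installation):

```shell
# Oracle environment variables for the oracle user's .bash_profile
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/18.0.0/dbhome_1
export ORACLE_SID=ora02prd   # hypothetical SID; use your database's SID
export PATH=$ORACLE_HOME/bin:$PATH
```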
Before releasing the database to the users, please make sure that the recommendations provided in Oracle Database Recommended Settings for FlashArray have been applied.
Appendix
1. Finding Host Port Names
Host port names are needed for connecting the host to the storage array. The terminology, as well as the command to find the port name, depends on the type of network.
1.1 Fibre Channel
All FC devices have a unique identifier called the World Wide Name (WWN). Find the WWNs for the host HBA ports.
If systool is installed, run the following command (systool is part of sysfsutils package and can be easily installed using yum).
[grid@orademo1 ~]$ systool -c fc_host -v | grep port_name
port_name = "0x2100000e1e259800"
port_name = "0x2100000e1e259801"
Otherwise, run the following command.
[grid@orademo1 ~]$ more /sys/class/fc_host/host?/port_name
::::::::::::::
/sys/class/fc_host/host0/port_name
::::::::::::::
0x2100000e1e259800
::::::::::::::
/sys/class/fc_host/host7/port_name
::::::::::::::
0x2100000e1e259801
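The values read from sysfs can be reformatted into the colon-separated WWN notation commonly shown in storage GUIs with a short sed one-liner (a convenience sketch):

```shell
# Convert a sysfs port_name such as 0x2100000e1e259800 into
# colon-separated WWN notation (21:00:00:0e:1e:25:98:00)
port_name="0x2100000e1e259800"
wwn="$(printf '%s' "$port_name" | sed -e 's/^0x//' -e 's/../&:/g' -e 's/:$//')"
echo "$wwn"
```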
1.2 iSCSI
In the case of iSCSI, the port name is called the iSCSI Qualified Name (IQN). If iSCSI has been set up on a host, the IQN is stored in /etc/iscsi/initiatorname.iscsi.
[oracle@orademo1 ~]$ cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:c03b8e4947bf
1.3 NVMe over Fabrics
In the case of NVMe-oF, the port name is called the NVMe Qualified Name (NQN). If NVMe-oF has been set up on a host, the NQN is stored in /etc/nvme/hostnqn.
[oracle@sn1-r720-f12-15 ~]$ cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:928b208e-a6d1-4137-9810-ec064bd93854
2. ASM Device Persistence options
2.1 Oracle ASM Filter Driver (ASMFD)
Oracle ASM Filter Driver (ASMFD) rejects write I/O requests that are not issued by Oracle software. This write filter helps prevent users with administrative privileges from inadvertently overwriting Oracle ASM disks, thus preventing corruption of Oracle ASM disks and files within the disk group.
Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
After Grid Software is unzipped and before the Grid Installer is started, disks need to be configured for use with ASMFD.
a. Log in as root and set ORACLE_HOME and ORACLE_BASE
# export ORACLE_HOME=/u01/app/oracle/product/18.0.0/grid
# export ORACLE_BASE=/tmp
We set ORACLE_BASE to a temporary location where temporary and diagnostic files can be created.
b. Use the ASMCMD afd_label command to provision disk devices for use with Oracle ASM Filter Driver.
[root@orademo1 bin]# cd $ORACLE_HOME/bin
[root@orademo1 bin]# ./asmcmd afd_label DATA /dev/mapper/testdb01-data --init
[root@orademo1 bin]# ./asmcmd afd_label REDO /dev/mapper/testdb01-redo --init
[root@orademo1 bin]# ./asmcmd afd_label FRA /dev/mapper/testdb01-fra --init
c. Use the ASMCMD afd_lslbl command to verify the device has been marked for use with Oracle ASMFD. For example:
[root@orademo1 bin]# ./asmcmd afd_lslbl /dev/mapper/dbtest02-data
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA                                 /dev/mapper/dbtest02-data
d. Unset ORACLE_BASE
# unset ORACLE_BASE
e. Change permission on the devices
After the ASMFD disks are labeled, the installation process is the same as the ASMLIB process detailed above, except for two changes in the disk discovery screen.
First, the disk discovery string needs to be changed to AFD:*.
Second, the Configure Oracle ASM Filter Driver checkbox needs to be selected.
On Linux, if you want to use Oracle ASM Filter Driver (Oracle ASMFD) to manage your Oracle ASM disk devices, you must deinstall the Oracle ASM library driver (Oracle ASMLIB) before starting the Oracle Grid Infrastructure installation, as the two are not compatible.
2.2 Oracle ASMLIB
Oracle ASMLIB maintains permissions and disk labels that are persistent on the storage device, so that the label is available even after an operating system upgrade.
The Oracle Automatic Storage Management library driver simplifies the configuration and management of block disk devices by eliminating the need to rebind block disk devices used with Oracle Automatic Storage Management (Oracle ASM) each time the system is restarted.
With Oracle ASMLIB, you define the range of disks you want to have made available as Oracle ASM disks.
The detailed steps for setting up ASMLIB have been provided in the main section above.
2.3 UDEV Rules
Device persistence can be configured manually for Oracle ASM using UDEV rules.
For more details, please refer to Configuring Device Persistence Manually for Oracle ASM.
3. Setting up X Window Display
Install the xclock program to test if the X windowing system is properly installed and working. The yum installer will pull in the required dependencies.
# yum install xclock
After installing xclock, run it on the command line first as root user and then as oracle and grid (if applicable) user.
[root@orademo1 ~]# xclock
If a small window pops up with a clock display, that confirms that the X Window System is working properly and it is OK to run the Oracle installer.
Please note that you will need an X Window System terminal emulator to start an X Window System session.
One common issue on Linux is that xclock works fine when executed as the root user, but errors out with "Error: Can't open display:" when executed as any other user.
To fix this issue, perform the following steps.
1. Login as the root user.
a. Run the xauth list $DISPLAY command and make a note of the output.
[root@orademo1 ~]# xauth list $DISPLAY
orademo1.puretec.purestorage.com/unix:10  MIT-MAGIC-COOKIE-1  299e371ddd4cdd695e9232abbd0d602f
b. Make a note of the DISPLAY setting for the root user
[root@orademo1 ~]# echo $DISPLAY
localhost:10.0
2. Now login as the non-root user for which xclock is not working.
a. Run the xauth add command with the value returned
[oracle@orademo1 ~]$ xauth add orademo1.puretec.purestorage.com/unix:10 MIT-MAGIC-COOKIE-1 299e371ddd4cdd695e9232abbd0d602f
b. Set the DISPLAY variable to the same setting as root.
[oracle@orademo1 ~]$ export DISPLAY=localhost:10.0
xclock should work now from non-root user as well.