Oracle Linux

From KeegansWiki

Initial Setup

For some reason Oracle Linux 5.5 doesn't come with any repos configured, so updating requires setting one up:

  • Download the repo file
# cd /etc/yum.repos.d
# wget
  • Enable [el5_u5_base] and [ol5_u5_base] in this repo file
  • yum install kernel
  • yum install oracle-linux
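The "enable" step above just means flipping enabled=0 to enabled=1 inside those two stanzas. A sketch using a stand-in file (on a real host, edit the file that wget fetched into /etc/yum.repos.d instead):

```shell
# Stand-in repo file for illustration only; the real one comes from the wget above.
cat > /tmp/public-yum-el5.repo <<'EOF'
[el5_u5_base]
name=Enterprise Linux 5.5 base
enabled=0

[ol5_u5_base]
name=Oracle Linux 5.5 base
enabled=0
EOF

# Flip enabled=0 to enabled=1 only inside the two stanzas we want.
sed -i '/^\[el5_u5_base\]/,/^\[/ s/^enabled=0/enabled=1/
        /^\[ol5_u5_base\]/,/^\[/ s/^enabled=0/enabled=1/' /tmp/public-yum-el5.repo
```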

More info [here]


Information on adding multipath LUNs: Linux#Add_NetApp_LUNS_to_RHEL_with_Multipath

Required Software

  • See the errors below, but this is the quick way:
yum downgrade glibc glibc-common libstdc++ cpp
yum install -y gcc gcc-c++ libaio-devel libstdc++-devel sysstat elfutils-libelf-devel glibc-devel
yum install oracle-validated

ULN Config and Registration

Blue screen


  • A few things to do. Make a backup of /etc/sysconfig/rhn/systemid, then remove it
  • Add a new UUID (from /usr/bin/uuidgen) to /etc/sysconfig/rhn/up2date-uuid

Yum Repo

name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)

name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)

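For reference, a sketch of what the full stanzas for those two name= lines might look like. The stanza names and baseurl values assume the old public-yum.oracle.com layout and are not from the original page; verify them against the actual repo file:

```
[ol5_UEK_latest]
name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL5/UEK/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
gpgcheck=1
enabled=1

[ol5_UEK_base]
name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL5/UEK/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
gpgcheck=1
enabled=1
```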

Dependency errors

libstdc++-devel-4.1.2-48.el5.x86_64 from el5_u5_base has depsolving problems
  --> Missing Dependency: libstdc++ = 4.1.2-48.el5 is needed by package libstdc++-devel-4.1.2-48.el5.x86_64   (el5_u5_base)
libstdc++-devel-4.1.2-48.el5.i386 from el5_u5_base has depsolving problems
  --> Missing Dependency: libstdc++ = 4.1.2-48.el5 is needed by package libstdc++-devel-4.1.2-48.el5.i386 (el5_u5_base)
Error: Missing Dependency: libstdc++ = 4.1.2-48.el5 is needed by package libstdc++-devel-4.1.2-48.el5.i386 (el5_u5_base)
Error: Missing Dependency: libstdc++ = 4.1.2-48.el5 is needed by package libstdc++-devel-4.1.2-48.el5.x86_64 (el5_u5_base)
 You could try using --skip-broken to work around the problem
 You could try running: package-cleanup --problems
                        package-cleanup --dupes
                        rpm -Va --nofiles --nodigest

Not sure what caused this, but doing a yum downgrade of each of the dependencies worked.

Kernel Params for Oracle

[root@dsoragrid01 ~]# cat /etc/sysctl.conf 
# Kernel sysctl configuration file for Oracle Enterprise Linux
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the default maximum size of a message queue, in bytes
kernel.msgmnb = 65536

# Controls the maximum size of a single message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum total amount of shared memory, in pages
kernel.shmall = 4294967296

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

fs.file-max = 6815744
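As a sanity check on the values above: kernel.shmall is measured in pages, so it must cover at least kernel.shmmax / page size or the largest allowed segment can't actually be allocated. A quick check using the values from the file:

```shell
# Verify shmall (pages) can hold shmmax (bytes) worth of shared memory.
PAGE_SIZE=$(getconf PAGE_SIZE)   # usually 4096 on x86_64
SHMMAX=68719476736               # bytes, from sysctl.conf above
SHMALL=4294967296                # pages, from sysctl.conf above
NEEDED=$((SHMMAX / PAGE_SIZE))
if [ "$SHMALL" -ge "$NEEDED" ]; then
    echo "shmall ok: $SHMALL pages >= $NEEDED pages"
else
    echo "shmall too small: $SHMALL pages < $NEEDED pages"
fi
```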


Dealing with pdflush on Oracle Linux

It appears that with all the default kernel params, pdflush will not run often enough, so when it does run it hogs CPU and disk I/O. The following are the relevant tunables on RHEL:


Here is a summary of what each of these do:

Exactly what each pdflush thread does is controlled by a series of parameters in /proc/sys/vm:

dirty_writeback_centisecs (default 500): In hundredths of a second, this is how often pdflush wakes up to write data to disk. The default wakes up pdflush every five seconds. It may be beneficial to decrease this parameter on systems with lots of memory and active writing processes. A smaller value makes the pdflush threads more aggressive about cleaning dirty pages. The kernel implements some rules to prevent write congestion: it limits the number of pages that can be flushed at once, and may skip one second between pdflush activations if the threads are too busy. It does not make sense to set this parameter too low (less than 100).

The first thing pdflush works on is writing pages that have been dirty for longer than it deems acceptable. This is controlled by:

dirty_expire_centisecs (default 3000): In hundredths of a second, how long data can sit in the page cache before it is considered expired and must be written at the next opportunity. Note that this default is very long: a full 30 seconds. That means that under normal circumstances, unless you write enough to trigger the other pdflush method, Linux will not actually commit anything you write until 30 seconds later. This may be acceptable for general desktop and computational applications, but for write-heavy workloads the value of this parameter should be lowered, although not to extremely low levels. Because of the way the dirty page writing mechanism works, attempting to lower this value to less than a few seconds is unlikely to work well. Constantly trying to write dirty pages out will just trigger the I/O congestion code more frequently.

The second thing pdflush works on is writing pages when memory is low. This is controlled by:

dirty_background_ratio (default 10): Maximum percentage of active memory that can be filled with dirty pages before pdflush begins to write them. In terms of the meminfo output, the active memory is

MemFree + Cached – Mapped

This is the primary tunable to adjust downward on systems with a large amount of memory and heavy writing applications. The usual issue with these applications on Linux is buffering too much data in an attempt to improve efficiency. This is particularly troublesome for operations that require synchronizing the file system using system calls like fsync. If there is a lot of data in the buffer cache when this call is made, the system can freeze for quite some time to process the sync.

Another common issue is that because so much data must be written and cached before any physical writes start, the I/O appears more in bursts than would seem optimal. Long periods are observed where no physical writes occur as the large page cache is filled, followed by writes at the highest speed the device can achieve once one of the pdflush triggers has been tripped. If the goal is to reduce the amount of data Linux keeps cached in memory so that it writes it more consistently to the disk rather than in a batch, decreasing dirty_background_ratio is most effective.

There is another parameter that affects page cache management:

dirty_ratio (default 40): Maximum percentage of total memory that can be filled with dirty pages before user processes are forced to write dirty buffers themselves during their time slice instead of being allowed to do more writes.

Note that all processes are blocked for writes when this happens, not just the one that filled the write buffers. This can cause what is perceived as unfair behavior, where a single process can "hog" all I/O on the system. Applications that can cope with their writes being blocked altogether might benefit from substantially decreasing this value.
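The four parameters described above live under /proc/sys/vm, so you can inspect the current values directly (lowering them requires root, via sysctl -w or /etc/sysctl.conf):

```shell
# Print the current pdflush-related settings discussed above.
for p in dirty_writeback_centisecs dirty_expire_centisecs \
         dirty_background_ratio dirty_ratio; do
    printf '%s = %s\n' "$p" "$(cat /proc/sys/vm/$p)"
done
```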


Determining HugePages Total

grep Huge /proc/meminfo

  • Note: the script below should be run after Oracle is running at full capacity. Otherwise you'll just get 1 as the answer.

From Oracle

#!/bin/bash
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
# Note: This script does the calculation for all shared memory
# segments available when the script is run, no matter whether
# they are Oracle RDBMS shared memory segments or not.

# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
# Start from 1 page to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"`
do
   MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
   if [ $MIN_PG -gt 0 ]; then
      NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
   fi
done
# Finish with results
case $KERN in
   '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
          echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
   '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# End
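To illustrate the script's arithmetic, assume a 2048 kB HugePage size (the usual x86_64 default) and a single 8 GB SGA segment; the segment size here is an example, not from the page:

```shell
# Same math as the script, for one hypothetical 8 GB shared memory segment.
HPG_SZ=2048                                # kB per HugePage, from /proc/meminfo
SEG_BYTES=8589934592                       # 8 GB segment, as reported by ipcs -m
NUM_PG=1                                   # start at 1 to keep one page free
MIN_PG=$((SEG_BYTES / (HPG_SZ * 1024)))    # pages needed for this segment: 4096
NUM_PG=$((NUM_PG + MIN_PG + 1))            # plus one spare page per segment
echo "vm.nr_hugepages = $NUM_PG"
```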

Tuning Hugepages

These changes make pdflush flush more often. We had been encountering problems with pdflush going nuts every hour or so.



This needs to be set in /etc/security/limits.conf. It should be at least ( HugePages_Total * Hugepagesize ). It should be set automatically when the oracle-validated package is installed.
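A sketch of the limits.conf entries this likely refers to; memlock (in kB) is the limit oracle-validated manages for HugePages, and the user name and value below are only an illustration:

```
# /etc/security/limits.conf
# memlock should be >= HugePages_Total * Hugepagesize, in kB (example: 4098 * 2048)
oracle   soft   memlock   8392704
oracle   hard   memlock   8392704
```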