What are the most common performance problem causes?
From my personal experience (your mileage may vary):
* 50% poor disk layout and management - some disks 90%+ busy while half are not used at all - do you have a clear document that maps the files to the actual disks? (see the sketch after this list)
* 10% poor setup of RDBMS tuning parameters relating to memory use
* 10% single threaded batch applications (and we have been using SMP for 9 years!!)
* 10% poorly written customer extensions to standard applications
* 5% system running with errors in the errpt log file (including CPU failures!!)
* 5% paging on large RAM (>2 GB) systems where vmtune was not used to set minperm/maxperm
* 5% AIX problems already discovered and fixed but AIX was not up to date.
* 4% badly ported applications - not compiled with optimization, or compiled on old AIX versions
* 1% genuine bugs in AIX or commands
and every single one of these was reported as a problem with the hardware!!!
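On the disk layout point, the standard LVM commands will tell you which logical volumes sit on which disks, and iostat will show the busy/idle imbalance. A minimal sketch (the hdisk and LV names are examples):

    lspv                   # list the physical disks
    lspv -l hdisk0         # which logical volumes live on this disk?
    lslv -l datalv01       # which disks does this logical volume span?
    iostat -d 5 3          # per-disk % tm_act - look for 90%+ sitting next to 0%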
Perfpmr - the performance guru's secret weapon
Perfpmr is the official AIX Support performance data gathering tool. If you report a performance problem then, after the usual checks on your software levels, AIX Support will ask you to use this tool to gather loads of information, which you must return as soon as possible. It is actually a set of shell scripts that use standard tools to gather information and write report summaries. This means it is very useful for you too - why reinvent the wheel when the IBM Austin experts are maintaining this excellent tool for you? Make sure you get the latest version for your level of AIX.
It comes with an excellent README file with all the details on running it, but briefly: find at least 100 MB of free disk space, and as the root user untar the file and run the master perfpmr.sh shell script. This needs to run during the problem (of course) and I suggest 5 minutes = 600 seconds, for example: ./perfpmr.sh 600. It actually takes longer than this, as there are several sections, and the final phase gathers lots of configuration details, which takes a long time on a machine with many disks. Once finished, take a look at the summary (.sum) files, as these are human readable. You may be able to sort out the performance problem yourself from this data alone.
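A minimal sketch of a capture session (the directory and tar file names are illustrative - the tar file name varies by AIX level, so follow the README that ships with the tool):

    mkdir /tmp/perfdata && cd /tmp/perfdata   # needs 100+ MB of free space
    # place the perfpmr tar file for your AIX level here, then as root:
    tar -xvf perfpmr.tar
    ./perfpmr.sh 600                          # capture 600 seconds while the problem is happening
    ls *.sum                                  # the human-readable summaries to read first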
But perfpmr is more than this because:
* it precisely documents your system configuration at the hardware and AIX levels - this saves you doing it yourself.
* if you take regular perfpmr data while the system is running happily on a normal workload day, then when there is a problem you or the AIX Support team can look at the differences between the good and bad captures, and this makes diagnosis ten times simpler.
* you should also take perfpmr data of the system running happily before and after any minor or major change - for example software upgrades, AIX upgrades, changes to disk subsystems and adding software. Again, you will be able to determine what changed - for good or bad - ten times faster with before and after data captures. One way to organize this is sketched below.
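A sketch of that habit (the dates and directory layout are my own convention, not part of perfpmr):

    mkdir /perf/baseline.2008-07-01 && cd /perf/baseline.2008-07-01
    /tmp/perfdata/perfpmr.sh 600              # capture on a happy, normal day
    # ... later, when the problem strikes ...
    mkdir /perf/problem.2008-07-24 && cd /perf/problem.2008-07-24
    /tmp/perfdata/perfpmr.sh 600
    diff -r /perf/baseline.2008-07-01 /perf/problem.2008-07-24 | more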
My favorite desktop tools for the job
I have a Windows-based ThinkPad and AIX servers, and this is what I use every day:
1. Virtual Network Computing (VNC) from TightVNC - so I can have X Windows at zero cost, and my session stays running for tomorrow
2. WebSM Remote Client - so I can remotely manage my AIX machines and HMCs. You can download this from your HMC (if you have it set up right and allowed the protocol) from http://
5. Thrashing = rising page outs, CPU wait and run queue.
6. Disk I/O bound when %iowait is greater than 40% (iostat)
* Although in recent years this might not be true as CPUs are much faster and disks only a bit faster.
* This means many workloads are not disk bound and the CPUs deal with the data faster than the disks deliver it.
* So high I/O wait is perfectly normal on many systems.
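To check this, sample iostat for a while (the interval and count here are just examples):

    iostat 5 3       # three 5-second samples; watch the % iowait column
    iostat -d 5 3    # per-disk view; a disk pinned near 100% tm_act is the real clue

High %iowait with no disk near 100% busy is usually nothing to worry about; a saturated disk with idle CPUs is the pattern that matters.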
Setting minfree and maxfree on an AIX 5.3 System
minfree = (maximum of 960 or # of logical CPUs * 120) / # of memory pools
maxfree = minfree + (# of logical CPUs * maximum read ahead) / # of memory pools
Where:
1. # of logical CPUs comes from bindprocessor -q (count the available processors)
2. # of memory pools comes from vmstat -v (note: if the number is 0, use 1 as a default)
3. maximum read ahead is the greater of maxpgahead and j2_maxPageReadAhead from ioo -a
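Here is a ksh sketch of gathering the inputs and doing the arithmetic (the awk field positions assume typical AIX 5.3 command output - check yours first):

    # logical CPUs: count the ids listed after the colon by bindprocessor -q
    lcpu=$(bindprocessor -q | awk -F: '{print split($2, a, " ")}')
    # memory pools from vmstat -v; fall back to 1 if zero or missing
    pools=$(vmstat -v | awk '/memory pools/ {print $1}')
    [ -z "$pools" ] || [ "$pools" -eq 0 ] && pools=1
    # maximum read ahead = the greater of maxpgahead and j2_maxPageReadAhead
    ra=$(ioo -a | awk '/maxpgahead|j2_maxPageReadAhead/ {print $3}' | sort -n | tail -1)
    base=$(( lcpu * 120 ))
    [ $base -lt 960 ] && base=960
    minfree=$(( base / pools ))
    maxfree=$(( minfree + ( lcpu * ra ) / pools ))
    echo "suggested minfree=$minfree maxfree=$maxfree"
    # to apply on AIX 5.3 (vmo -p makes it persistent) - review the numbers first!
    # vmo -p -o minfree=$minfree -o maxfree=$maxfree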
Tuning minfree and maxfree
You need to increase the value of minfree when you see the 'free frame waits' count increasing over time.
Use 'vmstat -s' to display the current value of 'free frame waits'.
Remember to recalculate maxfree as well, since it is derived from minfree.
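For example (the counter value shown is made up):

    vmstat -s | grep 'free frame waits'
    #      123456 free frame waits

Run it twice, some minutes apart, under load: it is the growth between samples that matters, not the absolute number.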
Large Disk Subsystem Setup
Setting up Disks? - Want some advice? Here are my (Nigel Griffiths) top tips:
1. More disks are goodness - increasingly hard to justify with bigger disks but spindles count for performance
2. Use ALL of the disks - ALL of the time. But make the computer do this i.e. not manually move data around.
3. Hose it all about, i.e. spread data over everything (though for systems with fewer than 8 disks, one RAID5 array is best)
4. If using RAID5, 7+1 (seven data disks plus one parity) gives maximum disk use
5. Aim for 16 to 32 LUNs - make them big enough to keep the count down - no one can manage a thousand LUNs
6. Use 4 paths = 64 to 128 vpaths - two paths are also OK. Never use more than 4 paths.
7. All LUNs same size
8. Clear map of the layout - you must know where LUNs are placed and whether they overlap.
9. 4 to 16 filesystems (never just one filesystem) to avoid free space allocation bottlenecks.
10. With AIX 5.2 onwards, the 64 bit Kernel and JFS2 should be thought of as the default.
11. LVM striping at a 64KB or 128KB stripe size (see the sketch after this list)
12. Databases should use an 8KB or, better yet, 16KB minimum block size.
13. Mix data, indexes and logs across all the disks to avoid hot spots (don't dedicate disks to specific data types)
14. Direct attach, if you are the only ESS/FAStT user
15. Sequential I/O - big blocks = maximum throughput; above 64KB blocks, 256KB or 1MB also work well (4KB can kill your throughput)
16. Random I/O - many files, equally spread across filesystems and hence disks
17. Two Fibre Channel adapters per TB
18. Expect 70% adapter bandwidth max (2 Gbit FC = max 200MB/s and 140MB/s real)
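A sketch of tip 11 (the volume group, logical volume and disk names are made up; check the mklv flags on your AIX level):

    # logical volume striped over four disks with a 64KB strip size
    mklv -y datalv01 -t jfs2 -S 64K datavg 256 hdisk4 hdisk5 hdisk6 hdisk7
    crfs -v jfs2 -d datalv01 -m /data01 -A yes
    mount /data01

The 256 logical partitions divide evenly across the four disks, which striping requires.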
For ESS/FAStT/DS in particular (these numbers are 2 years old and probably need updating):
1. 6 to 8 Fibre Channel adapters per ESS
2. Disk size: 36 GB = OK, 72 GB = OK for random I/O, 146 GB = only for archive
3. Use the ESS bid against EMC and HDS
4. Typically 30+ hosts
5. Disk queue depth: ESS F20 = 60, ESS 800 = 90, FAStT900 = 1024 / disks / hosts, FAStT500 and 600 = 212 / hosts / disks
6. Fibre Channel adapter: DMA memory = 1MB, command elements = 248
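If items 5 and 6 map to the usual AIX device attributes (queue_depth on the hdisk, lg_term_dma and num_cmd_elems on the FC adapter - an assumption worth verifying for your adapter), a sketch of applying them would be:

    # example device names; -P defers the change to the next reboot
    chdev -l hdisk10 -a queue_depth=60 -P
    chdev -l fcs0 -a lg_term_dma=0x100000 -a num_cmd_elems=248 -P   # 0x100000 = 1MB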