A primer on Processor Trace timing
Processor Trace can be quite useful for understanding the timing breakdown of programs. This post describes how the timing works and how to get the best timing resolution.
Processor Trace logs are packetized. A trace uses three kinds of timing packets: TSC updates in PSB packets, MTC, and CYC. PSB is the basic synchronization point of the trace and provides a full TSC time stamp. However it is quite coarse grained. The MTC packet provides regular timing updates from the 24/25 MHz ART (Always Running Timer). And then finally there is an optional cycle accurate mode that gives cycle accurate updates relative to the last MTC packet.
Note that Broadwell Processor Trace used a different, much less accurate method that relies only on the PSB timing updates. For accurate timing please use Skylake or later, or Goldmont Atom or later.
By default Linux perf uses an MTC period of 3, which means there is an ART timing update every 2^3 = 8 ART ticks, so roughly every 300ns. That is quite coarse grained, usually not fine grained enough for a program time breakdown. It is possible to increase the timing resolution at the cost of some trace overhead, longer decoding time, and a risk of additional data loss.
We can set the MTC period to 0 to get more frequent MTC updates. In addition, perf script by default only reports microsecond resolution, so we should use the --ns option to force nanosecond timestamp output:
perf record -e intel_pt/mtc_period=0/ -a ...
perf script --ns --insn-trace --xed
Now in addition to that we can enable cycle accurate mode to get better resolution (at the cost of a significantly larger trace):
perf record -e intel_pt/mtc_period=0,cyc/ -a ...
perf script --ns --insn-trace --xed
When the resulting trace is too big it’s also possible to use the cyc_thresh=N option, which configures cycle packets to be emitted only every 2^N cycles. This can be useful if the fully accurate trace causes too much data loss.
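For example, a sketch assuming a threshold of 4 is an acceptable compromise for the workload (cycle packets at most roughly every 2^4 = 16 cycles):
perf record -e intel_pt/mtc_period=0,cyc,cyc_thresh=4/ -a ...
perf script --ns --insn-trace --xed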
When looking at the output we still only get updates roughly every 5 conditional branches or returns. This is related to how PT encodes branches into packets. Cycle updates are only written to the trace when the CPU writes a packet for a branch or another reason. Conditional branches and returns are encoded into TNT packets, and a TNT packet is only written once it has been filled with 5 conditionals or returns.
For returns we can disable ‘return compression’, which guarantees a packet at every return, so at least we get a time stamp at the end of every function. Again this comes at a significant additional trace overhead.
perf record -e intel_pt/noretcomp,cyc,mtc_period=0/ ...
perf script --ns --insn-trace --xed
With this we see timing updates every 5 conditional branches and at every return. But often we want to time complete functions. Unfortunately most function calls are direct calls, which are not logged by PT because the target can be statically determined by the PT decoder. One trick is to call the function through an indirect pointer, which results in a TIP packet, and therefore a timing update at the beginning of the function. However that requires modifying the code.
There is another trick that we can use to time individual functions. PT has address filters to enable/disable the trace. We can enable PT at the beginning of the function and disable it at the end. Disabling/enabling generates accurate time stamps, so that’s good enough to time the function:
perf record -e intel_pt/cyc,mtc_period=0,noretcomp/ --filter 'myfunc @ /path/to/executable' executable
perf script --ns --insn-trace --xed
The CPU supports up to two address filter regions, so this trick works for two functions at a time. We could time more by switching the filter regions over time. It can also be used to time smaller program regions, however they must be entered and exited through a branch.
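For example, a sketch with hypothetical function names, using both filter regions at once (multiple filters are separated by a comma):
perf record -e intel_pt/cyc,mtc_period=0,noretcomp/ --filter 'filter func1 @ /path/to/executable, filter func2 @ /path/to/executable' executable
perf script --ns --insn-trace --xed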
[Based on an earlier writeup from Adrian Hunter]
Cheat sheet for Intel Processor Trace with Linux perf and gdb
What is Processor Trace
Intel Processor Trace (PT) traces program execution (every branch) with low overhead.
This is a cheat sheet of how to use PT with perf for common tasks
It is not a full introduction to PT. Please read Adding PT to Linux perf or the links from the general PT reference page.
PT support in hardware
CPU | Support |
Broadwell (5th generation Core, Xeon v4) | More overhead. No fine grained timing. |
Skylake (6th generation Core, Xeon v5) | Fine grained timing. Address filtering. |
Goldmont (Apollo Lake, Denverton) | Fine grained timing. Address filtering. |
PT support in Linux
PT is supported in Linux perf, which is integrated in the Linux kernel.
It can be used through the “perf” command or through gdb.
There are also other tools that support PT: VTune, simple-pt, gdb, JTAG debuggers.
In general it is best to use the newest kernel and the newest Linux perf tools. If that is not possible older tools and kernels can be used. Newer tools can be used on an older kernel, but may not support all features
Linux version | Support |
Linux 4.1 | Initial PT driver |
Linux 4.2 | Support for Skylake and Goldmont |
Linux 4.3 | Initial user tools support in Linux perf |
Linux 4.5 | Support for JIT decoding using agent |
Linux 4.6 | Bug fixes. Support address filtering. |
Linux 4.8 | Bug fixes. |
Linux 4.10 | Bug fixes. Support for PTWRITE and power tracing |
Many commands require recent perf tools; you may need to update them from a recent kernel tree.
This article covers mainly Linux perf and briefly gdb.
Preparations
Only needed once.
Allow seeing kernel symbols (as root)
echo 'kernel.kptr_restrict=0' >> /etc/sysctl.conf
sysctl -p
Basic perf command lines for recording PT
ls /sys/devices/intel_pt/format
Check whether PT is supported and which capabilities are available.
perf record -e intel_pt// program
Trace program
perf record -e intel_pt// -a sleep 1
Trace whole system for 1 second
perf record -C 0 -e intel_pt// -a sleep 1
Trace CPU 0 for 1 second
perf record --pid $(pidof program) -e intel_pt//
Trace already running program.
perf has to save the data to disk. The CPU can execute branches much faster than the disk can keep up, so there will be some data loss for code that executes
many instructions. perf has no way to slow down the CPU, so when the trace bandwidth exceeds the disk bandwidth there will be gaps in the trace. Because of this it is usually not a good idea
to try to save a long trace; work with shorter traces instead. Long traces also take a lot of time to decode.
When decoding kernel data the decoder usually has to run as root.
An alternative is to use the perf-with-kcore.sh script included with perf
perf script --ns --itrace=cr
Record program execution and display function call graph.
perf script by default “samples” the data (it only dumps a sample every 100us).
This can be configured using the --itrace option (see reference below)
Install xed first.
perf script --itrace=i0ns --ns -F time,pid,comm,sym,symoff,insn,ip | xed -F insn: -S /proc/kallsyms -64
Show every assembly instruction executed with disassembler.
For this it is also useful to get more accurate time stamps (see below)
perf script --itrace=i0ns --ns -F time,sym,srcline,ip
Show source lines executed (requires debug information)
perf script --itrace=s1Mi0ns ....
Often initialization code is not interesting. Skip initial 1M instructions while decoding:
perf script --time 1.000,2.000 ...
Slice the trace into different time regions. Generally the time stamps need to be looked up in the trace first, as they are absolute.
perf report --itrace=g32l64i100us --branch-history
Print hot paths every 100us as call graph histograms
Install Flame graph tools first.
perf script --itrace=i100usg | stackcollapse-perf.pl > workload.folded
flamegraph.pl workload.folded > workload.svg
google-chrome workload.svg
Generate flame graph from execution, sampled every 100us
Other ways to record data
perf record -a -e intel_pt// sleep 1
Capture whole system for 1 second
Use snapshot mode
This collects data, but does not continuously save it all to disk. When an event of interest happens a data dump of the current buffer can be triggered by sending a SIGUSR2 signal to the perf process.
perf record -a -e intel_pt// --snapshot &
PERF_PID=$!
*execute workload*
*event happens*
kill -USR2 $PERF_PID
*end of recording*
kill $PERF_PID
Record kernel only, complete system
perf record -a -e intel_pt//k sleep 1
Record user space only, complete system
perf record -a -e intel_pt//u
Enable fine grained timing (needs Skylake/Goldmont, adds more overhead)
perf record -a -e intel_pt/cyc=1,cyc_thresh=2/ ...
echo $[100*1024*1024] > /proc/sys/kernel/perf_event_mlock_kb
perf record -m 512,100000 -e intel_pt// ...
Increase perf buffer to limit data loss
perf record -e intel_pt// --filter 'filter main @ /path/to/program' ...
Only record main function in program
perf record -e intel_pt// -a --filter 'filter sys_write' program
Filter kernel code (needs 4.11+ kernel)
perf record -e intel_pt// -a --filter 'stop func2 @ program' program
Stop tracing at func2.
perf archive
rsync -r ~/.debug perf.data other-system:
Transfer the data to decode a trace on another system. May also require using perf-with-kcore.sh if decoding kernel code.
Using gdb
Requires a new enough gdb built with libipt. For user space only.
gdb program
start
record btrace pt # cannot be done before start. has to be redone every re-start
cont
record instruction-history /m # show instructions
record function-call-history # show functions executed
reverse-step # step backwards in time
For more information on gdb pt see the gdb documentation
References
Reference for --itrace option (from perf documentation)
i synthesize "instructions" events
b synthesize "branches" events
x synthesize "transactions" events
c synthesize branches events (calls only)
r synthesize branches events (returns only)
e synthesize tracing error events
d create a debug log
g synthesize a call chain (use with i or x)
l synthesize last branch entries (use with i or x)
s skip initial number of events
Reference for --filter option (from perf documentation)
A hardware trace PMU advertises its ability to accept a number of
address filters by specifying a non-zero value in
/sys/bus/event_source/devices/<pmu>/nr_addr_filters.
Address filters have the format:
filter|start|stop|tracestop <start> [/ <size>] [@<file name>]
Where:
- 'filter': defines a region that will be traced.
- 'start': defines an address at which tracing will begin.
- 'stop': defines an address at which tracing will stop.
- 'tracestop': defines a region in which tracing will stop.
<file name> is the name of the object file, <start> is the offset to the
code to trace in that file, and <size> is the size of the region to
trace. 'start' and 'stop' filters need not specify a <size>.
If no object file is specified then the kernel is assumed, in which case
the start address must be a current kernel memory address.
<start> can also be specified by providing the name of a symbol. If the
symbol name is not unique, it can be disambiguated by inserting #n where
'n' selects the n'th symbol in address order. Alternately #0, #g or #G
select only a global symbol. <size> can also be specified by providing
the name of a symbol, in which case the size is calculated to the end
of that symbol. For 'filter' and 'tracestop' filters, if <size> is
omitted and <start> is a symbol, then the size is calculated to the end
of that symbol.
If <size> is omitted and <start> is '*', then the start and size will
be calculated from the first and last symbols, i.e. to trace the whole
file.
If symbol names (or '*') are provided, they must be surrounded by white
space.
The filter passed to the kernel is not necessarily the same as entered.
To see the filter that is passed, use the -v option.
The kernel may not be able to configure a trace region if it is not
within a single mapping. MMAP events (or /proc/<pid>/maps) can be
examined to determine if that is a possibility.
Multiple filters can be separated with space or comma.
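Hypothetical examples of the syntax above (the program path and function names are placeholders):
perf record -e intel_pt// --filter 'filter main @ /path/to/program' ./program
perf record -e intel_pt// --filter 'start funcA @ /path/to/program, stop funcB @ /path/to/program' ./program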
v2: Fix some typos/broken links
v3: Update gdb commands
Intel Processor Trace resources
Intel Processor Trace (PT) can be used on modern Intel CPUs to trace execution. This page contains references for learning about and using Intel PT.
Basic information:
- Intel Software Developer’s Manual Vol 3: low level reference information on the Processor Trace format and registers (Chapter 36)
- Intel Processor Trace on Linux gives an overview of processor trace on Linux
- A tutorial web site for PT that contains many references
- Intel® Developer Forum 2015: Zoom-in on Your Code with Intel® Processor Trace and Supporting Tools (find SPCS012)
- Intel® Developer Forum 2014: Debug and Fine-grain Profiling with Intel® Processor Trace
- Efficient and large scale program flow tracing in Linux
Implementations
- Adding processor trace to Linux describes the Linux perf Processor trace implementation.
- Reference documentation for PT on Linux perf
- simple-pt is an alternative reference PT implementation. It is implemented on Linux, but can be also used as a starting point to implement PT on other OS.
- The GNU debugger gdb supports PT on Linux for backwards debugging
- Intel VTune amplifier supports PT for performance analysis
- A Windows windbg processor trace plugin for debugging on Windows with PT.
- A reference Processor Trace decode library.
- A plugin for the Linux crash utility to dump PT buffers (look for ptdump)
JTAG support
- The Lauterbach JTAG debugger supports PT
- Intel system studio supports JTAG debugging with PT
- The SourcePoint for Intel debugger supports PT
Other presentations
Usages
- The honggfuzz fuzzer supports feedback fuzzing using PT
- Harnessing Intel Processor Trace on Windows for vulnerability discovery
Research papers using PT (subset):
- Failure Sketching: A technique for automated root cause analysis of in-production failures
- Griffin: Guarding control flows using Intel Processor Trace
- Hardware-assisted instruction profiling and latency detection
- Inspector: Data Provenance using Intel Processor Trace
- Transparent and efficient CFI enforcement with Intel Processor Trace
Generating Flame graphs with Processor Trace
How to generate a FlameGraph with Processor Trace. Everybody loves Flame Graphs.
Processor Trace allows building very exact histograms of a program’s run time. Normal sampling has shadow effects, which can hide some details. Processor Trace logs every branch, so it can be much more accurate than normal sampling.
You need an Intel Broadwell or Skylake CPU.
You need a 4.1 or later Linux kernel where perf supports PT.
You can verify the kernel supports pt with
ls /sys/devices/intel_pt
You need perf user tools built from https://github.com/virtuoso/linux-perf
(this should soon be fixed when the user tools code is merged into Linux mainline)
Build perf with PT support
# set up https_proxy as needed
git clone https://github.com/virtuoso/linux-perf
cd linux-perf/tools/perf
make
Copy the resulting perf binary to where you want to run it
Get the flamegraph code
git clone https://github.com/brendangregg/FlameGraph.git
Collect data from the workload. Best to not collect too long traces as they take much longer to process and may need too much disk space.
perf record -e intel_pt// workload (or -a sleep 1 to collect 1s globally)
Decode the data. This may take quite some time
perf script --itrace=i100usg | /path/to/FlameGraph/stackcollapse-perf.pl > workload.folded
The i100us means the trace decoder samples an instruction every 100us. This can be made more accurate (down to 1ns), at the cost of longer decoding time. The ‘g’ tells the decoder to add callgraphs.
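For example, a sketch of a finer-grained run that samples every 10ns (decoding will take considerably longer):
perf script --itrace=i10nsg | /path/to/FlameGraph/stackcollapse-perf.pl > workload.folded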
Then generate the Flamegraph with
/path/to/FlameGraph/flamegraph.pl workload.folded > workload.svg
Then view the resulting SVG in an SVG viewer, such as Google Chrome
google-chrome workload.svg
It is possible to click around.
Here’s a larger svg example from a gcc build (2.5MB). May need chrome or firefox to view.
In principle the trace also has support for more information not in normal sampling, such as determining the exact run time of individual functions from the trace. This is unfortunately not (yet?) supported by the Flame Graph tools.
Speeding up less
Often when doing performance analysis or debugging, it boils down to staring at long text trace files with the less text viewer. Yes, you can do a lot of analysis with custom scripts, but at some point it’s usually necessary to also look at the raw data.
The first annoyance in less when opening a large file is the time it takes to count lines (less counts lines at the beginning to show you the current position as a percentage). The line counting has the easy workaround of hitting Ctrl-C or using less -n to disable the percentage. But it would still be better if that wasn’t needed.
Nicolai Haenle sped up the line counting by about 20x in his less repository.
One thing that always bothered me was that searching in less is so slow. If you’re browsing a file of tens to hundreds of MB, it can easily take minutes to search for a string. When browsing log and trace files, searching over longer distances is often very important.
And there is no good workaround. Running grep on the file is much faster, but you can’t easily transfer the file position from grep to the less session.
Some profiling with perf shows that most of the search time is spent converting each line. Less internally cleans up the line: it converts it to canonical case, removes backspace bolding, and makes some other changes. The conversion loop processes each character in an inefficient way. Most of the time this is not needed, so I replaced it with a quick check whether the line contains any backspaces, using the optimized strchr() from the standard C library. For case conversion the string search functions (either regular expression or fixed string search) can handle case insensitive search directly, so we don’t need an extra conversion step. The default fixed string search (when the search string contains no regular expression meta characters) can also be done using the optimized C library functions.
The resulting less version searches ~85% faster on my benchmarks. I tried to submit the patch to the less maintainer, but it was ignored unfortunately. The less version in the repository also includes Nicolai’s speedup patches for the initial line counting.
One side effect of the patch is that less now defaults to case sensitive searches. The original less had a feature (or bug) to default to case-insensitive even without the -i option. To get case insensitive searches now “less -i” needs to be used.
[Edit: Fix typos]
toplev tutorial and manual
toplev, part of pmu-tools, is a tool to determine the CPU bottleneck of workloads. Now there is finally a tutorial and manual available for toplev.
pmu-tools, part II: toplev
In part 1 I gave an introduction to pmu-tools, and described ocperf, which allows low level access to the Intel defined CPU performance counter events.
toplev introduction
This part describes another component of pmu-tools: toplev. toplev builds on top of ocperf, but works at a much higher level.
perf record defaults to cycle sampling. Cycle sampling can tell you roughly what part of the workload is taking up CPU time. It cannot directly tell why it is slow. If you have psychic super powers you may be able to figure it out from the source code. If not, using other measurements can help narrow down the performance bottleneck.
ocperf has a lot of low level events to sample or count specific conditions, but using them requires some knowledge of the CPU to select the right events.
Another approach is to just count specific events. Many parts of the CPU have “stall cycles” counter support, that is they can count how long they are waiting for something. This can be used to compute “stall ratios” when divided by the total number of cycles.
The standard “perf stat” displays such ratios as “stalled-cycles-frontend” (the part of the CPU decoding instructions) and for the backend (that is the actual execution) as “stalled-cycles-backend”.
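For example, a minimal run showing these ratios (assuming the CPU and perf version expose the stall events; my-program is a placeholder):
perf stat ./my-program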
This assumes a very simplified CPU model. But modern out of order CPUs execute many instructions in parallel and try to execute something else in stall times. The stalls are only a performance problem if they actually bottleneck the execution, that is if there is nothing else to do that could hide the stall.
Also some workloads simply don’t do specific operations much (for example a workload that fits into the L1 cache does not do many memory operations), and evaluating the stall cycles of the memory subsystem may not be very useful, as they only cover very rare events.
So just looking at isolated ratios is not necessarily useful.
To avoid this problem we can compute a larger number of ratios for different units in the CPU, and then define a hierarchy of thresholds between ratios that define whether a specific ratio is meaningful or not.
This is described as the “Top Down” methodology in B.3.2 of the Intel optimization manual. More information on TopDown is in this article or in Ahmad Yasin’s ISCA workshop presentation. I didn’t invent it, I’m just implementing it.
The toplev tool in pmu-tools implements this methodology. It uses counting, not sampling, which means it can only tell you “what”, but not “where exactly in the program”. If interval mode is used (-I1000) it can also give a very rough “when”.
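For example, a sketch of interval mode (assuming pmu-tools is already on the PATH as set up below, and ./workload is a placeholder):
toplev.py -I 1000 ./workload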
how toplev works
toplev automatically runs perf stat with the right counters, computes the thresholds, and only displays meaningful bottlenecks. toplev defaults to a simple 5-event model that already gives some useful information for Intel Core CPUs since Sandy Bridge.
The simple model has the advantage that it fits into the standard 4 performance counters without multiplexing, which makes it more reliable. More on that later.
For specific CPUs there is also a more detailed model available (enable with -d)
The detailed model is a tree of different levels. The first level corresponds to the simplified model. Additional levels (max 4, default 2, using the -l option) can be used to narrow down specific issues more by going down the tree. Each level is only meaningful if the parent crossed its threshold.
The detailed model requires running many more events to compute all the needed ratios. Since the CPU only has 4 (or 8 with HyperThreading off) general performance counters available, perf will need to multiplex (that is regularly re-program) the counters, which adds measurement errors.
In general the lower levels are less reliable than the higher levels and should be taken with a grain of salt. But up to level 2 generally works well.
Examples
First set up pmu-tools if you haven’t yet.
% git clone https://github.com/andikleen/pmu-tools
% cd pmu-tools
% export PATH=$PATH:$(pwd)
Let’s try a memory bound program. The STREAM benchmark is very memory bound. We use the simple (single threaded, not terribly optimized) version from numademo.
% toplev.py numademo 100M stream
...
perf stat --log-fd 4 -x, -e {r100030d,r2c2,r19c,r10e,cycles} numademo 100M stream
...
Backend Bound: 72.33%
This category reflects slots where no uops are being delivered due to a lack
of required resources for accepting more uops in the Backend of the pipeline.
Let’s look a bit closer with a level 2 detailed model
% toplev.py -d -l2 numademo 100M stream
...
perf stat --log-fd 4 -x, -e
{r3079,r19c,r10401c3,r100030d,rc5,r10e,cycles,r400019c,r2c2,instructions}
{r15e,r60006a3,r30001b1,r40004a3,r8a2,r10001b1,cycles}
numademo 100M stream
...
BE Backend Bound: 72.03%
This category reflects slots where no uops are being delivered due to a lack
of required resources for accepting more uops in the Backend of the pipeline.
BE/Mem Memory Bound: 43.18%
This metric represents how much Memory subsystem was a bottleneck.
BE/Core Core Bound: 18.90%
This metric represents how much Core non-memory issues were a bottleneck.
RET BASE: 24.76%
This metric represents slots fraction CPU was retiring uops not originated
from the microcode-sequencer.
So we’re memory bound as expected, but it’s only part of the problem.
With a level 3 measurement we can look even further. As you can see, the underlying perf command line already gets really complicated for this; a tool like toplev is really needed to set it up.
% toplev.py -d -l3 numademo 100M stream
...
perf stat --log-fd 4 -x, -e
{r2ab,r19c,r2c2,r485,r480,r400019c,r187,cycles,r114,instructions},
{r4001879,r1002479,r40001a8,r4002479,r50005a3,r1001879,r10001a8,cycles,r12000ca3},
{r3079,r2c2,r20d1,r100030d,r10e,r50005a3,r4d1,cycles,r19c},
{r60006a3,cycles,r45f,r12000ca3,r8408},{r2c2,r10401c3,r100030d,rc5,r10e,cycles},
{r15e,r10401c3,r1fe6,rc5,r184015e,r480,cycles},
{r15e,r60006a3,r30001b1,r40004a3,r8a2,r10001b1,cycles,r114},
{r211,r8010,r4010,r1010,r1b1,r110,r111,r2010},
r211,r8010,r4010,r3079,r1010,r1b1,r110,r111,r2010 numademo 100M stream
...
BE Backend Bound: 71.58%
This category reflects slots where no uops are being delivered due to a lack
of required resources for accepting more uops in the Backend of the pipeline.
BE/Mem Memory Bound: 43.66%
This metric represents how much Memory subsystem was a bottleneck.
BE/Mem L1 Bound: 33.26%
This metric represents how often CPU was stalled without missing the L1 data
cache.
BE/Core Core Bound: 19.24%
This metric represents how much Core non-memory issues were a bottleneck.
BE/Core Ports Utilization: 19.24%
This metric represents cycles fraction application was stalled due to Core
non-divider-related issues.
RET BASE: 25.08%
This metric represents slots fraction CPU was retiring uops not originated
from the microcode-sequencer.
RET OTHER: 87.89%
This metric represents non-floating-point (FP) uop fraction the CPU has
executed.
This shows that numademo’s STREAM actually consists of more loads/stores than floating point operations. It’s not a really optimized version.
And finally a “real workload”, a kernel build with gcc. gcc has a lot of code, so the CPU’s instruction decoding frontend becomes a bottleneck, partly caused by branch mispredictions (which cause the frontend to do more work). This data is averaged over 4 cores.
FE Frontend Bound: 54.07%
This category reflects slots where the Frontend of the processor undersupplies
its Backend.
FE Frontend Latency: 39.53%
This metric represents slots fraction CPU was stalled due to Frontend latency
issues.
BAD Bad Speculation: 11.75%
This category reflects slots wasted due to incorrect speculations, which
include slots used to allocate uops that do not eventually get retired and
slots for which allocation was blocked due to recovery from earlier incorrect
speculation.
BAD Branch Mispredicts: 11.66%
This metric represents slots fraction CPU was impacted by Branch
Missprediction.
RET BASE: 25.74%
This metric represents slots fraction CPU was retiring uops not originated
from the microcode-sequencer.
Some caveats with TopDown
The topdown approach only works for CPU bound workloads. If the program’s performance is limited by something else (for example waiting for IO or blocking for other reasons) other methods need to be used.
The lower levels of the measurement tree are less reliable than the higher levels. They also rely on counter multiplexing and cannot use groups, which can cause larger measurement errors with non steady state workloads.
(If you don’t understand this terminology: it means the measurements are much less accurate, and it works best with programs that primarily do the same thing over and over.)
It’s recommended to measure the workload only after the startup phase, by using interval mode or attaching later.
Level 1 (running without -d) is generally the most reliable. The lower tree levels have larger measurement errors. Level 2 usually also works well. Levels 3 and 4 can have some mismeasurements.
One of the events (even used by level 1) requires a recent enough kernel that understands its counter constraints. 3.10+ is safe.
Update 2013/07/28: Add links to other reference material on TopDown. Change “much less reliable” to “less reliable”.
pmu-tools part I
Introduction
Modern CPUs are quite complicated, and profiling is often needed to understand their performance. The CPUs have performance monitoring units (PMUs) that allow counting and sampling a wide variety of events. Linux perf provides an interface to the PMU. It has been designed to provide an abstracted view of the PMU events, and provides a limited number of abstracted events for common situations. In addition it has an interface to access all the raw events. pmu-tools is my toolkit to make access to these raw events more user-friendly for Intel CPUs, and to provide some additional functionality. It is not really a replacement for perf, just an addition. If the abstracted perf events work there is no need to use pmu-tools. But there are some situations where additional events are useful. Also it can be useful for experimenting: if a “raw” pmu-tools use case proves useful it may later move into “abstracted” perf.
pmu-tools has a number of components: several wrappers for perf, and some C libraries for programs. I’ll describe these different components in a number of posts.
Getting pmu-tools
git clone git://github.com/andikleen/pmu-tools
cd pmu-tools
pmu-tools currently has no installer. I just run the tools from the source directory.
# export PATH=$PATH:$(pwd)
ocperf
The first (and original) component of pmu-tools is ocperf. ocperf is a wrapper around the perf command line program that translates events from the full Intel event lists to the perf format, and does some additional setup.
The command line is the same as normal perf, just in the ‘-e’ line you can also use Intel events. ocperf list outputs all the additional events.
# perf list | wc -l
544
# ocperf.py list | wc -l
1244
(the actual numbers will vary based on system setup and CPU)
As you can see ocperf adds a large number of additional events. I’m not describing all these events, but the ocperf event list includes a brief description. They can be used to analyze a wide variety of performance conditions
To use them just use a normal perf command line with ocperf
Count global remote node accesses
# ocperf.py stat -e offcore_response.demand_data.remote_dram_1 -a sleep 5
Sample conditional branches
# ocperf.py record -e br_inst_exec.cond my-program
# ocperf.py report --stdio
Translate an event into the raw format to use directly with perf
# ocperf.py --print stat -e DTLB_MISSES.LARGE_WALK_COMPLETED
perf stat -e r8049
The r8049 code can be used directly with perf or other tools that accept raw events.
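For example, a sketch of plugging the raw code directly into perf (sleep 5 is just a placeholder workload):
# perf stat -e r8049 -a sleep 5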
ocperf translates the events and calls perf with the translated events. It also tries to translate them back in the output. This only works for the “--stdio” output. When you are using the interactive browser (or the gtk UI) you will see the raw translated events in the output.
Another ocperf feature is to set the recommended Intel sampling period for an event (with -c default). By default perf uses an adaptive sampling period, which may use a lot of additional CPU time and is less predictable. This is only supported on some CPUs.
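For example, a sketch reusing the conditional branch event from above, sampled with the recommended period:
# ocperf.py record -c default -e br_inst_exec.cond my-program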
To set additional perf flags you can use the usual :XXX syntax
Count all the division operations in the kernel
# ocperf.py stat -e arith.div:k
ocperf currently only supports the old-style perf attribute syntax (with :xxx), not “cpu//”. This may change in future versions
ocperf background
Originally ocperf was just to handle “offcore events” (that is what the oc in the name stands for), but these days it is useful for far more.
First I should mention that oprofile recently added an “operf” tool. ocperf is not related to that tool and the name predates it.
Modern CPU cores are very fast at computation, and often spend large parts of their time waiting for something else (memory, IO, other cores). As you can imagine, profiling for that can be fairly important. Since Nehalem, Intel Core CPUs have special offcore events to distinguish all the different “offcore” cases: L3 hit, memory hit, remote cache hit/miss etc. There are so many cases that the normal unit mask of a PMU event does not have enough bits to describe them, so separate registers are used instead. Originally perf didn’t know how to program these additional registers, so it couldn’t profile offcore events.
ocperf was a workaround to program these registers directly from user space. This is fixed in recent perf versions (using the offcore_rsp attribute) and is not needed anymore. But ocperf is still quite useful as it can directly generate all the needed masks from a predefined table. perf has some builtin offcore events, but the set supplied by ocperf is larger and better documented.
And of course it still supports older kernels too, if you are not running the latest and greatest.
These days, in addition to translating events from the Intel event tables, it also provides some additional workarounds, for example for an offcore problem on the Xeon E5 2600 series.
I will write about more pmu-tools features in future posts.