Debugging Performance Issues¶
Most chapters of this book deal with functional issues – that is, issues related to the functionality (or its absence) of the code in question. However, debugging can also involve nonfunctional issues – performance, usability, reliability, and more. In this chapter, we give a short introduction on how to debug such nonfunctional issues, notably performance issues.
from bookutils import YouTubeVideo
YouTubeVideo("0tMeB9G0uUI")
Prerequisites
- This chapter leverages visualization capabilities from the chapter on statistical debugging.
- We also show how to debug nonfunctional issues using delta debugging.
import bookutils.setup
import StatisticalDebugger
import DeltaDebugger
Synopsis¶
To use the code provided in this chapter, write
>>> from debuggingbook.PerformanceDebugger import <identifier>
and then make use of the following features.
This chapter provides a class PerformanceDebugger that allows measuring and visualizing the time taken per line in a function.
>>> with PerformanceDebugger(TimeCollector) as debugger:
>>>     for i in range(100):
>>>         s = remove_html_markup('<b>foo</b>')
The distribution of executed time within each function can be obtained by printing out the debugger:
>>> print(debugger)
238   2% def remove_html_markup(s):
239   2%     tag = False
240   1%     quote = False
241   1%     out = ""
242   0%
243  16%     for c in s:
244  15%         assert tag or not quote
245   0%
246  15%         if c == '<' and not quote:
247   3%             tag = True
248  12%         elif c == '>' and not quote:
249   3%             tag = False
250   9%         elif (c == '"' or c == "'") and tag:
251   0%             quote = not quote
252   9%         elif not tag:
253   4%             out = out + c
254   0%
255   3%     return out
The sum of all percentages in a function should always be 100%.
These percentages can also be visualized, where darker shades represent higher percentage values:
>>> debugger
238 def remove_html_markup(s):  # type: ignore
239     tag = False
240     quote = False
241     out = ""
242
243     for c in s:
244         assert tag or not quote
245
246         if c == '<' and not quote:
247             tag = True
248         elif c == '>' and not quote:
249             tag = False
250         elif (c == '"' or c == "'") and tag:
251             quote = not quote
252         elif not tag:
253             out = out + c
254
255     return out
The abstract MetricCollector class allows subclassing to build more collectors, such as HitCollector.
Measuring Performance¶
The solution to debugging performance issues comes down to two simple rules:
- Measure performance
- Break down how individual parts of your code contribute to performance.
The first part, actually measuring performance, is key here. Developers often make elaborate guesses about which aspects of their code impact performance, and think about all possible ways to optimize their code – while at the same time making it harder to understand, harder to evolve, and harder to maintain. In most cases, such guesses are wrong. Instead, measure the performance of your program, identify the very few parts that may need to be improved, and then measure the impact of your changes again.
Almost all programming languages offer a way to measure performance and to break it down to individual parts of the code – a technique also known as profiling. Profiling works by measuring the execution time for each function (or even more fine-grained location) in your program. This can be achieved by
- Instrumenting or tracing code such that the current time is recorded at entry and exit of each function (or line), thus determining the time spent. In Python, this is achieved by profilers like profile or cProfile.
- Sampling the current function call stack at regular intervals, and thus assessing which functions are most active (= take the most time) during execution. For Python, the scalene profiler works this way.
Pretty much all programming languages support profiling, either through measuring, sampling, or both. As a rule of thumb, interpreted languages more frequently support measuring (as it is easy to implement in an interpreter), while compiled languages more frequently support sampling (because instrumentation requires recompilation). Python is lucky to support both methods.
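As an illustration, both kinds of profilers can also be invoked from the command line; the script name myscript.py below is just a placeholder, not a file used in this chapter:
$ python -m cProfile -s cumulative myscript.py   # instrumenting profiler from the standard library
$ scalene myscript.py                            # sampling profiler (third-party package)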
Tracing Execution Profiles¶
Let us illustrate profiling in a simple example. The ChangeCounter class (which we will encounter in the chapter on mining version histories) reads in a version history from a git repository. Yet, it takes more than a minute to read in the debugging book change history:
from ChangeCounter import ChangeCounter, debuggingbook_change_counter # minor dependency
import Timer
with Timer.Timer() as t:
    change_counter = debuggingbook_change_counter(ChangeCounter)
t.elapsed_time()
The Python profile and cProfile modules offer a simple way to identify the most time-consuming functions. They are invoked using the run() function, whose argument is the command to be profiled. The output reports, for each function encountered:
- How often it was called (ncalls column)
- How much time was spent in the given function, excluding time spent in calls to sub-functions (tottime column)
- The fraction of tottime/ncalls (first percall column)
- How much time was spent in the given function, including time spent in calls to sub-functions (cumtime column)
- The fraction of cumtime/ncalls (second percall column)
Let us have a look at the profile we obtain:
import cProfile
cProfile.run('debuggingbook_change_counter(ChangeCounter)', sort='cumulative')
Yes, that's an awful lot of functions, but we can quickly narrow things down. The output is sorted by the cumtime column, largest values first. We see that the debuggingbook_change_counter() method at the top takes up all the time – but this is not surprising, since it is the method we called in the first place. This calls a method mine() in the ChangeCounter class, which does all the work.
The next places are more interesting: almost all time is spent in a single method, named modifications(). This method determines the difference between two versions, which is an expensive operation; this is also supported by the observation that half of the time is spent in a diff() method.
This profile thus already gives us a hint on how to improve performance: rather than computing the diff between versions for every version, we could do so on demand (and possibly cache results so we don't have to compute them twice). Alas, this (slow) functionality is part of the underlying PyDriller Python package, so we cannot fix this within the ChangeCounter class. But we could file a bug with the developers, suggesting a patch to improve performance.
Sampling Execution Profiles¶
Instrumenting code is precise, but it is also slow. An alternative way to measure performance is to sample, at regular intervals, which functions are currently active – for instance, by examining the current function call stack. The more frequently a function is sampled as active, the more time is spent in that function.
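To illustrate the idea (this is a toy sketch, not how any real sampling profiler is implemented), we can build a crude sampling profiler in a few lines of Python: a background thread periodically inspects the main thread's call stack and counts which functions are active. The names sample_main_thread() and fib() are made up for this example.
import sys
import threading
import time
from collections import Counter

def sample_main_thread(counts: Counter, interval: float, stop: threading.Event) -> None:
    """Count active functions on the main thread's stack every `interval` seconds."""
    main_id = threading.main_thread().ident
    while not stop.is_set():
        frame = sys._current_frames().get(main_id)
        while frame:  # walk up the call stack
            counts[frame.f_code.co_name] += 1
            frame = frame.f_back
        time.sleep(interval)

def fib(n: int) -> int:  # toy workload to be profiled
    return n if n < 2 else fib(n - 1) + fib(n - 2)

counts: Counter = Counter()
stop = threading.Event()
sampler = threading.Thread(target=sample_main_thread, args=(counts, 0.001, stop))
sampler.start()
fib(27)  # run the code under observation
stop.set()
sampler.join()
print(counts.most_common(3))  # the most frequently sampled (= most active) functions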
One profiler for Python that implements such sampling is Scalene – a high-performance, high-precision CPU, GPU, and memory profiler for Python. We can invoke it on our example as follows:
$ scalene --html test.py > scalene-out.html
where test.py is a script that again invokes debuggingbook_change_counter(ChangeCounter):
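A minimal sketch of what such a test.py could look like (the actual script is not shown in this chapter):
# test.py -- hypothetical driver script to be profiled with scalene
from ChangeCounter import ChangeCounter, debuggingbook_change_counter  # minor dependency

change_counter = debuggingbook_change_counter(ChangeCounter)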
The output of scalene is sent to an HTML file (here, scalene-out.html), which is organized by lines – that is, for each line, we see how much it contributed to overall execution time. Opening the output scalene-out.html in an HTML browser shows these per-line contributions.
As with cProfile, above, we identify the mine() method in the ChangeCounter class as the main performance hog – and in the mine() method, it is the iteration over all modifications that takes all the time. Adding the option --profile-all to scalene would extend the profile to all executed code, including the pydriller third-party library.
Besides relying on sampling rather than tracing (which is more efficient) and breaking down execution time by line, scalene also provides additional information on memory usage and more. If cProfile is not sufficient, then scalene will bring profiling to the next level.
Improving Performance¶
Identifying a culprit is not always that easy. Notably, when the first set of obvious performance hogs is fixed, it becomes more and more difficult to squeeze out additional performance – and, as stated above, such optimization may be in conflict with readability and maintainability of your code. Here are some simple ways to improve performance:
- Efficient algorithms. For many tasks, the simplest algorithm is not always the best performing one. Consider alternatives that may be more efficient, and measure whether they pay off.
- Efficient data types. Remember that certain operations, such as looking up whether an element is contained, may take different amounts of time depending on the data structure. In Python, a query like x in xs takes (mostly) constant time if xs is a set, but linear time if xs is a list; these differences become significant as the size of xs grows. (See the timing sketch after this list.)
- Efficient modules. In Python, most frequently used modules (or at least parts of them) are implemented in C, which is way more efficient than plain Python. Rely on existing modules whenever possible. Or implement your own, after having measured that this may pay off.
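Here is a quick way to observe the x in xs difference mentioned above, using the standard timeit module; the sizes and repetition counts are arbitrary, and the exact numbers depend on your machine.
from timeit import timeit

xs_list = list(range(100_000))
xs_set = set(xs_list)

# Looking up the last element: linear scan in a list, hash lookup in a set
print("list:", timeit("99_999 in xs_list", globals=globals(), number=1_000))
print("set: ", timeit("99_999 in xs_set", globals=globals(), number=1_000))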
These are all things you can already use during programming – and also set up your code such that exchanging, say, one data type for another will still be possible later. This is best achieved by hiding implementation details (such as the used data types) behind an abstract interface used by your clients.
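As a small illustrative sketch (the class below is not part of this book's code), such an interface could look as follows; clients only use add() and the in operator, so the underlying data structure can later be swapped without changing any client code.
class SeenItems:
    """Hide the storage behind a small interface."""

    def __init__(self) -> None:
        self._items: set = set()  # implementation detail; could become a list, dict, trie, ...

    def add(self, item: str) -> None:
        self._items.add(item)

    def __contains__(self, item: str) -> bool:
        return item in self._items

seen = SeenItems()
seen.add("foo")
print("foo" in seen, "bar" in seen)  # True False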
But beyond these points, remember the famous words by Donald E. Knuth: "Premature optimization is the root of all evil." This quote should always remind us that after a good design, you should always first measure and then optimize.
Building a Profiler¶
Having discussed profilers from a user perspective, let us now dive into how they are actually implemented. It turns out we can use most of our existing infrastructure to implement a simple tracing profiler with only a few lines of code.
The program we will apply our profiler on is – surprise! – our ongoing example, remove_html_markup(). Our aim is to understand how much time is spent in each line of the code (such that we have a new feature on top of Python cProfile).
import inspect

from bookutils import print_content  # the book's helper for printing annotated source code
from Intro_Debugging import remove_html_markup

print_content(inspect.getsource(remove_html_markup), '.py',
              start_line_number=238)
We introduce a class PerformanceTracer that tracks, for each line in the code:
- how often it was executed (hits), and
- how much time was spent during its execution (time).
To this end, we make use of our Timer class, which measures time, and the Tracer class from the chapter on tracing, which allows us to track every line of the program as it is being executed.
from Tracer import Tracer
In PerformanceTracer, the attributes hits and time are mappings indexed by unique locations – that is, pairs of function name and line number.
from typing import Any, Dict, List, Optional, Tuple, Type

Location = Tuple[str, int]
class PerformanceTracer(Tracer):
    """Trace time and #hits for individual program lines"""

    def __init__(self) -> None:
        """Constructor."""
        super().__init__()
        self.reset_timer()
        self.hits: Dict[Location, int] = {}
        self.time: Dict[Location, float] = {}

    def reset_timer(self) -> None:
        self.timer = Timer.Timer()
As common in this book, we want to use PerformanceTracer in a with-block around the function call(s) to be tracked:

with PerformanceTracer() as perf_tracer:
    function(...)
When entering the with block (__enter__()), we reset all timers. Also, coming from the __enter__() method of the superclass Tracer, we enable tracing through the traceit() method.
from types import FrameType
class PerformanceTracer(PerformanceTracer):
    def __enter__(self) -> Any:
        """Enter a `with` block."""
        super().__enter__()
        self.reset_timer()
        return self
The traceit() method extracts the current location. It increases the corresponding hits value by 1, and adds the elapsed time to the corresponding time.
class PerformanceTracer(PerformanceTracer):
    def traceit(self, frame: FrameType, event: str, arg: Any) -> None:
        """Tracing function; called for every line."""
        t = self.timer.elapsed_time()
        location = (frame.f_code.co_name, frame.f_lineno)

        self.hits.setdefault(location, 0)
        self.time.setdefault(location, 0.0)
        self.hits[location] += 1
        self.time[location] += t

        self.reset_timer()
This is it already. We can now determine where most time is spent in remove_html_markup(). We invoke it 10,000 times such that we can average over runs:
with PerformanceTracer() as perf_tracer:
    for i in range(10000):
        s = remove_html_markup('<b>foo</b>')
Here are the hits. For every line executed, we see how often it was executed. The most executed line is the for loop with 110,000 hits – once for each of the 10 characters in <b>foo</b>, once for the final check, and all of this 10,000 times.
perf_tracer.hits
The time attribute collects how much time was spent in each line. Within the loop, again, the for statement takes the most time. The other lines show some variability, though.
perf_tracer.time
For a full profiler, these numbers would now be sorted and printed in a table, much like cProfile does. However, we will borrow some material from previous chapters and annotate our code accordingly.
Visualizing Performance Metrics¶
In the chapter on statistical debugging, we have encountered the CoverageCollector class, which collects line and function coverage during execution, using a collect() method that is invoked for every line. We will repurpose this class to collect arbitrary metrics on the lines executed, notably time taken.
Collecting Time Spent¶
from StatisticalDebugger import CoverageCollector, SpectrumDebugger
The MetricCollector class is an abstract superclass that provides an interface to access a particular metric.
class MetricCollector(CoverageCollector):
    """Abstract superclass for collecting line-specific metrics"""

    def metric(self, event: Any) -> Optional[float]:
        """Return a metric for an event, or None."""
        return None

    def all_metrics(self, func: str) -> List[float]:
        """Return all metrics for a function `func`."""
        return []
Given these metrics, we can also compute sums and maxima for a single function.
class MetricCollector(MetricCollector):
    def total(self, func: str) -> float:
        return sum(self.all_metrics(func))

    def maximum(self, func: str) -> float:
        return max(self.all_metrics(func))
Let us instantiate this superclass into TimeCollector – a subclass that measures time. This is modeled after our PerformanceTracer class, above; notably, the time attribute serves the same role.
class TimeCollector(MetricCollector):
    """Collect time executed for each line"""

    def __init__(self) -> None:
        """Constructor"""
        super().__init__()
        self.reset_timer()
        self.time: Dict[Location, float] = {}
        self.add_items_to_ignore([Timer.Timer, Timer.clock])

    def collect(self, frame: FrameType, event: str, arg: Any) -> None:
        """Invoked for every line executed. Accumulate time spent."""
        t = self.timer.elapsed_time()
        super().collect(frame, event, arg)

        location = (frame.f_code.co_name, frame.f_lineno)
        self.time.setdefault(location, 0.0)
        self.time[location] += t

        self.reset_timer()

    def reset_timer(self) -> None:
        self.timer = Timer.Timer()

    def __enter__(self) -> Any:
        super().__enter__()
        self.reset_timer()
        return self
The metric() and all_metrics() methods retrieve the accumulated metric (time taken) for an individual location and for an entire function, respectively:
class TimeCollector(TimeCollector):
    def metric(self, location: Any) -> Optional[float]:
        if location in self.time:
            return self.time[location]
        else:
            return None

    def all_metrics(self, func: str) -> List[float]:
        return [time
                for (func_name, lineno), time in self.time.items()
                if func_name == func]
Here's how to use TimeCollector() – again, in a with block:
with TimeCollector() as collector:
    for i in range(100):
        s = remove_html_markup('<b>foo</b>')
The time attribute holds the time spent in each line:
for location, time_spent in collector.time.items():
    print(location, time_spent)
And we can also create a total for an entire function:
collector.total('remove_html_markup')
Visualizing Time Spent¶
Let us now go and visualize these numbers in a simple form. The idea is to assign each line a color whose saturation indicates the time spent in that line relative to the time spent in the function overall – the higher the fraction, the darker the line. We create a MetricDebugger class built as a specialization of SpectrumDebugger, in which suspiciousness() and color() are repurposed to show these metrics.
class MetricDebugger(SpectrumDebugger):
    """Visualize a metric"""

    def metric(self, location: Location) -> float:
        sum = 0.0
        for outcome in self.collectors:
            for collector in self.collectors[outcome]:
                assert isinstance(collector, MetricCollector)
                m = collector.metric(location)
                if m is not None:
                    sum += m

        return sum

    def total(self, func_name: str) -> float:
        total = 0.0
        for outcome in self.collectors:
            for collector in self.collectors[outcome]:
                assert isinstance(collector, MetricCollector)
                total += sum(collector.all_metrics(func_name))

        return total

    def maximum(self, func_name: str) -> float:
        maximum = 0.0
        for outcome in self.collectors:
            for collector in self.collectors[outcome]:
                assert isinstance(collector, MetricCollector)
                maximum = max(maximum,
                              max(collector.all_metrics(func_name)))

        return maximum

    def suspiciousness(self, location: Location) -> float:
        func_name, _ = location
        return self.metric(location) / self.total(func_name)

    def color(self, location: Location) -> str:
        func_name, _ = location
        hue = 240  # blue
        saturation = 100  # fully saturated
        darkness = self.metric(location) / self.maximum(func_name)
        lightness = 100 - darkness * 25
        return f"hsl({hue}, {saturation}%, {lightness}%)"

    def tooltip(self, location: Location) -> str:
        return f"{super().tooltip(location)} {self.metric(location)}"
We can now introduce PerformanceDebugger as a subclass of MetricDebugger, using an arbitrary MetricCollector (such as TimeCollector) to obtain the metric we want to visualize.
class PerformanceDebugger(MetricDebugger):
    """Collect and visualize a metric"""

    def __init__(self, collector_class: Type, log: bool = False):
        assert issubclass(collector_class, MetricCollector)
        super().__init__(collector_class, log=log)
With PerformanceDebugger, we inherit all the capabilities of SpectrumDebugger, such as showing the (relative) percentage of time spent in a table. We see that the for condition and the following assert take most of the time, followed by the first condition.
with PerformanceDebugger(TimeCollector) as debugger:
    for i in range(100):
        s = remove_html_markup('<b>foo</b>')
print(debugger)
However, we can also visualize these percentages, using shades of blue to indicate the lines in which most time is spent:
debugger
Other Metrics¶
Our framework is flexible enough to collect (and visualize) arbitrary metrics. This HitCollector class, for instance, collects how often a line is being executed.
class HitCollector(MetricCollector):
    """Collect how often a line is executed"""

    def __init__(self) -> None:
        super().__init__()
        self.hits: Dict[Location, int] = {}

    def collect(self, frame: FrameType, event: str, arg: Any) -> None:
        super().collect(frame, event, arg)
        location = (frame.f_code.co_name, frame.f_lineno)

        self.hits.setdefault(location, 0)
        self.hits[location] += 1

    def metric(self, location: Location) -> Optional[int]:
        if location in self.hits:
            return self.hits[location]
        else:
            return None

    def all_metrics(self, func: str) -> List[float]:
        return [hits
                for (func_name, lineno), hits in self.hits.items()
                if func_name == func]
We can plug this class into PerformanceDebugger to obtain a distribution of lines executed:
with PerformanceDebugger(HitCollector) as debugger:
    for i in range(100):
        s = remove_html_markup('<b>foo</b>')
In total, during these calls to remove_html_markup(), there are 6,400 lines executed:
debugger.total('remove_html_markup')
Again, we can visualize the distribution, both as a table and using colors. We can see how the shade gets lighter in the lower part of the loop, as individual conditions have been met.
print(debugger)
debugger
Integrating with Delta Debugging¶
Besides identifying causes for performance issues in the code, one may also search for causes in the input, using Delta Debugging. This can be useful if one does not immediately want to embark on investigating the code, but would rather first determine external influences that are related to performance issues.
Here is a variant of remove_html_markup() that introduces a (rather obvious) performance issue.
import time
def remove_html_markup_ampersand(s: str) -> str:
    tag = False
    quote = False
    out = ""

    for c in s:
        assert tag or not quote

        if c == '&':
            time.sleep(0.1)  # <-- the obvious performance issue

        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif (c == '"' or c == "'") and tag:
            quote = not quote
        elif not tag:
            out = out + c

    return out
We can easily trigger this issue by measuring time taken:
with Timer.Timer() as t:
    remove_html_markup_ampersand('&&&')
t.elapsed_time()
Let us set up a test that checks whether the performance issue is present.
def remove_html_test(s: str) -> None:
    with Timer.Timer() as t:
        remove_html_markup_ampersand(s)
    assert t.elapsed_time() < 0.1
We can now apply delta debugging to determine a minimum input that causes the failure:
s_fail = '<b>foo&</b>'
with DeltaDebugger.DeltaDebugger() as dd:
    remove_html_test(s_fail)
dd.min_args()
For performance issues, however, a minimal input is often not enough to highlight the failure cause. This is because short inputs tend to take less processing time than longer inputs, which increases the risks of a spurious diagnosis. A better alternative is to compute a maximum input where the issue does not occur:
s_pass = dd.max_args()
s_pass
We see that the culprit character (the &) is removed. This tells us the failure-inducing difference – or, more precisely, the cause for the performance issue.
Lessons Learned¶
- To measure performance,
  - instrument the code such that the time taken per function (or line) is collected; or
  - sample the execution such that, at regular intervals, the active call stack is collected.
- To make code performant, focus on efficient algorithms, efficient data types, and sufficient abstraction such that you can replace them with alternatives.
- Beyond efficient algorithms and data types, do not optimize before measuring.
Next Steps¶
This chapter concludes the part on abstracting failures. The next part will focus on
Background¶
Scalene is a high-performance, high-precision CPU, GPU, and memory profiler for Python. In contrast to the standard Python cProfile profiler, it uses sampling instead of instrumentation or relying on Python's tracing facilities; and it also supports line-by-line profiling. Scalene might be the tool of choice if you want to go beyond basic profiling.
The Wikipedia articles on profiling and performance analysis tools provide several additional resources on profiling tools and how to apply them in practice.
Exercises¶
Exercise 1: Profiling Memory Usage¶
The Python tracemalloc module allows tracking memory usage during execution. Between tracemalloc.start() and tracemalloc.stop(), use tracemalloc.get_traced_memory() to obtain how much memory is currently being consumed:
import tracemalloc
tracemalloc.start()
current_size, peak_size = tracemalloc.get_traced_memory()
current_size
tracemalloc.stop()
Create a subclass of MetricCollector named MemoryCollector. Make it measure the memory consumption before and after each line executed (0 if negative), and visualize the impact of individual lines on memory. Create an appropriate test program that (temporarily) consumes larger amounts of memory.
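Here is one possible sketch of such a MemoryCollector, following the structure of TimeCollector above; this is an illustrative attempt, not the book's reference solution, and the attribution of memory growth to individual lines is approximate.
import tracemalloc

class MemoryCollector(MetricCollector):
    """Collect the increase in traced memory for each line executed (sketch)"""

    def __init__(self) -> None:
        super().__init__()
        self.memory: Dict[Location, float] = {}
        self.last_size = 0

    def __enter__(self) -> Any:
        tracemalloc.start()  # tracemalloc.stop() could be issued after profiling
        self.last_size, _ = tracemalloc.get_traced_memory()
        return super().__enter__()

    def collect(self, frame: FrameType, event: str, arg: Any) -> None:
        super().collect(frame, event, arg)
        current_size, _peak = tracemalloc.get_traced_memory()
        increase = max(0, current_size - self.last_size)  # 0 if negative
        self.last_size = current_size

        location = (frame.f_code.co_name, frame.f_lineno)
        self.memory[location] = self.memory.get(location, 0.0) + increase

    def metric(self, location: Location) -> Optional[float]:
        return self.memory.get(location)

    def all_metrics(self, func: str) -> List[float]:
        return [mem for (func_name, lineno), mem in self.memory.items()
                if func_name == func]
Such a collector could then be plugged into PerformanceDebugger just like TimeCollector and HitCollector above.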
Exercise 2: Statistical Performance Debugging¶
In a similar way as we integrated a binary "performance test" with delta debugging, we can also integrate such a test with other techniques. Combining a performance test with Statistical Debugging, for instance, will highlight those lines whose execution correlates with low performance. But then, the performance test need not be binary, as with functional pass/fail tests – you can also weight individual lines by how much they impact performance. Create a variant of StatisticalDebugger that reflects the impact of individual lines on an arbitrary (summarized) performance metric.
The content of this project is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The source code that is part of the content, as well as the source code used to format and display that content, is licensed under the MIT License.