Most chapters of this book deal with functional issues – that is, issues related to the functionality (or its absence) of the code in question. However, debugging can also involve nonfunctional issues – performance, usability, reliability, and more. In this chapter, we give a short introduction on how to debug such nonfunctional issues, notably performance issues.
Prerequisites
The solution to debugging performance issues fits in two simple rules: measure performance first, and optimize only those parts that measurement shows to matter.
The first part, actually measuring performance, is key here. Developers often make elaborate guesses about which aspects of their code impact performance, and think about all possible ways to optimize their code – at the same time making it harder to understand, harder to evolve, and harder to maintain. In most cases, such guesses are wrong. Instead, measure the performance of your program, identify the very few parts that may need to be improved, and then measure the impact of your changes again.
Almost all programming languages offer a way to measure performance and break it down to individual parts of the code – a technique known as profiling. Profiling works by measuring the execution time for each function (or even more fine-grained location) in your program. This can be achieved by
Instrumenting or tracing code such that the current time is recorded at entry and exit of each function (or line), thus determining the time spent. In Python, this is how profilers like profile or cProfile work.
Sampling the current function call stack at regular intervals, and thus assessing which functions are most active (= take the most time) during execution. For Python, the scalene profiler works this way.
Pretty much all programming languages support profiling, either through measuring, sampling, or both. As a rule of thumb, interpreted languages more frequently support measuring (as it is easy to implement in an interpreter), while compiled languages more frequently support sampling (because instrumentation requires recompilation). Python is lucky to support both methods.
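To make the instrumenting approach concrete, here is a minimal sketch – a toy, not any real profiler's implementation – that takes a timestamp at entry and exit of a function and accumulates the time spent in it:

```python
import time
from functools import wraps

def timed(func):
    """Toy instrumenting profiler: accumulate time spent in `func`."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()        # timestamp at entry
        try:
            return func(*args, **kwargs)
        finally:
            # timestamp at exit; difference is the time spent in this call
            wrapper.total_time += time.perf_counter() - start
    wrapper.total_time = 0.0
    return wrapper

@timed
def busy() -> int:
    return sum(range(1_000_000))

busy()
print(f"busy() took {busy.total_time:.4f} seconds")
```

Real instrumenting profilers such as cProfile do this for every function automatically (via interpreter hooks rather than decorators), which is why they can report per-function times without any changes to the code.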
Let us illustrate profiling with a simple example. The ChangeCounter
class (which we will encounter in the chapter on mining version histories) reads in a version history from a git repository. Yet, it takes more than a minute to read in the debugging book's change history:
with Timer.Timer() as t:
    change_counter = debuggingbook_change_counter(ChangeCounter)
t.elapsed_time()

165.73121154200635
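The Timer class used above comes from the book's Timer module; a minimal stand-in (an assumption, not the book's actual implementation) can be built on time.perf_counter():

```python
import time

class Timer:
    """Context manager measuring the wall-clock time of its body."""
    def __enter__(self) -> 'Timer':
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        self.end = time.perf_counter()
        return None  # do not suppress exceptions

    def elapsed_time(self) -> float:
        return self.end - self.start

with Timer() as t:
    sum(range(1_000_000))  # some workload to be timed
print(t.elapsed_time())
```

Such coarse end-to-end timing tells us *that* the code is slow; profiling, as shown next, tells us *where*.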
The Python profile
and cProfile
modules offer a simple way to identify the most time-consuming functions. They are invoked using the run()
function, whose argument is the command to be profiled. The output reports, for each function encountered:

- the number of calls (ncalls column)
- the total time spent in the function itself, excluding sub-functions (tottime column)
- tottime divided by ncalls (first percall column)
- the cumulative time spent in the function and all its sub-functions (cumtime column)
- cumtime divided by the number of primitive calls (second percall column)

Let us have a look at the profile we obtain:
cProfile.run('debuggingbook_change_counter(ChangeCounter)', sort='cumulative')
         21389077 function calls (21232681 primitive calls) in 175.519 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000  175.519  175.519 {built-in method builtins.exec}
        1    0.000    0.000  175.519  175.519 <string>:1(<module>)
        1    0.000    0.000  175.519  175.519 ChangeCounter.ipynb:168(debuggingbook_change_counter)
        1    0.001    0.001  175.519  175.519 ChangeCounter.ipynb:51(__init__)
        1    1.095    1.095  175.519  175.519 ChangeCounter.ipynb:88(mine)
     1235    0.028    0.000  174.163    0.141 ChangeCounter.ipynb:102(mine_commit)
     1235    0.038    0.000  173.966    0.141 commit.py:680(modified_files)
     1235    0.023    0.000  173.927    0.141 commit.py:696(_get_modified_files)
     1185    0.212    0.000  145.691    0.123 diff.py:95(diff)
     1234    0.092    0.000   76.666    0.062 cmd.py:1114(_call_process)
     1234    0.122    0.000   76.558    0.062 cmd.py:726(execute)
     1234    0.172    0.000   76.148    0.062 subprocess.py:753(__init__)
     1234    0.181    0.000   75.938    0.062 subprocess.py:1682(_execute_child)
     1234   73.728    0.060   73.728    0.060 {built-in method _posixsubprocess.fork_exec}
     1185   16.772    0.014   73.108    0.062 diff.py:445(_index_from_patch_format)
     1186    0.059    0.000   72.381    0.061 cmd.py:638(<lambda>)
     1185    0.089    0.000   42.447    0.036 cmd.py:71(handle_process_output)
    11964   41.934    0.004   41.934    0.004 {method 'acquire' of '_thread.lock' objects}
     2372    0.013    0.000   41.830    0.018 threading.py:1057(join)
     2372    0.012    0.000   41.814    0.018 threading.py:1095(_wait_for_tstate_lock)
     1235    0.051    0.000   28.186    0.023 commit.py:730(_parse_diff)
    26030    0.027    0.000   27.148    0.001 commit.py:759(_get_undecoded_content)
    86718    0.092    0.000   21.822    0.000 cmd.py:528(read)
    73280    0.025    0.000   21.758    0.000 base.py:137(read)
   173414   21.724    0.000   21.724    0.000 {method 'read' of '_io.BufferedReader' objects}
      ...      ...      ...      ...      ... (hundreds of further entries with ever smaller cumulative times)
Yes, that's an awful lot of functions, but we can quickly narrow things down. The table is sorted by the cumtime
column, largest values first. We see that the debuggingbook_change_counter()
method at the top takes up all the time – but this is not surprising, since it is the method we called in the first place. This calls a method mine()
in the ChangeCounter
class, which does all the work.
The next places are more interesting: almost all time is spent in a single method, named modifications()
. This method determines the difference between two versions, which is an expensive operation; this is also supported by the observation that half of the time is spent in a diff()
method.
This profile thus already gives us a hint on how to improve performance: rather than computing the diff for every single version up front, we could do so on demand (and possibly cache results so we don't have to compute them twice). Alas, this (slow) functionality is part of the
underlying PyDriller Python package, so we cannot fix this within the ChangeCounter
class. But we could file a bug with the developers, suggesting a patch to improve performance.
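Such on-demand computation with caching could be sketched as follows. Note that this is a hypothetical stand-in, not PyDriller's actual API; cached_diff is an invented placeholder for the expensive diff operation.

```python
from functools import lru_cache

# Hypothetical sketch (not PyDriller's actual API): compute the diff
# between two versions only on demand, caching results so no diff is
# ever computed twice. `cached_diff` is an invented placeholder.
@lru_cache(maxsize=None)
def cached_diff(old: str, new: str) -> tuple:
    old_lines = set(old.splitlines())   # stand-in for an expensive diff
    return tuple(line for line in new.splitlines() if line not in old_lines)

cached_diff("a\nb", "a\nb\nc")  # computed once...
cached_diff("a\nb", "a\nb\nc")  # ...then served from the cache
```

Since lru_cache keys on the argument values, repeated queries for the same pair of versions come back instantly.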
Instrumenting code is precise, but it is also slow. An alternative way to measure performance is to sample at regular intervals which functions are currently active – for instance, by examining the current function call stack. The more frequently a function is sampled as active, the more time is spent in it.
One profiler for Python that implements such sampling is Scalene – a high-performance, high-precision CPU, GPU, and memory profiler for Python. We can invoke it on our example as follows:
$ scalene --html test.py > scalene-out.html
where test.py
is a script that again invokes
debuggingbook_change_counter(ChangeCounter)
The output of scalene
is sent to an HTML file (here, scalene-out.html
) which is organized by lines – that is, for each line, we see how much it contributed to overall execution time. Opening the output scalene-out.html
in a web browser, we see these lines:
As with cProfile
, above, we identify the mine()
method in the ChangeCounter
class as the main performance hog – and in the mine()
method, it is the iteration over all modifications that takes all the time. Adding the option --profile-all
to scalene
would extend the profile to all executed code, including the pydriller
third-party library.
Besides relying on sampling rather than tracing (which is more efficient) and breaking down execution time by line, scalene
also provides additional information on memory usage and more. If cProfile
is not sufficient, then scalene
will bring profiling to the next level.
Identifying a culprit is not always that easy. Notably, when the first set of obvious performance hogs is fixed, it becomes more and more difficult to squeeze out additional performance – and, as stated above, such optimization may be in conflict with readability and maintainability of your code. Here are some simple ways to improve performance:
Efficient algorithms. For many tasks, the simplest algorithm is not always the best performing one. Consider alternatives that may be more efficient, and measure whether they pay off.
Efficient data types. Remember that certain operations, such as looking up whether an element is contained, may take different amounts of time depending on the data structure. In Python, a query like x in xs
takes (mostly) constant time if xs
is a set, but linear time if xs
is a list; these differences become significant as the size of xs
grows.
Efficient modules. In Python, the most frequently used modules (or at least parts of them) are implemented in C, which is way more efficient than plain Python. Rely on existing modules whenever possible. Or implement your own, after having measured that this may pay off.
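The data-type point above is easy to verify empirically. Here is a quick, machine-dependent measurement using Python's timeit module; the sizes and repetition counts are chosen arbitrarily.

```python
import timeit

# Quick, machine-dependent measurement of membership cost in a list
# vs. a set (sizes and repetition counts chosen arbitrarily).
size = 100_000
xs_list = list(range(size))
xs_set = set(xs_list)
needle = size - 1   # worst case for the list: scan all elements

t_list = timeit.timeit(lambda: needle in xs_list, number=100)
t_set = timeit.timeit(lambda: needle in xs_set, number=100)
print(f"list: {t_list:.6f}s  set: {t_set:.6f}s")
```

On typical machines, the set lookup is several orders of magnitude faster – and the gap keeps growing with the size of the container.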
These are all things you can already use during programming – and also set up your code such that exchanging, say, one data type by another will still be possible later. This is best achieved by hiding implementation details (such as the used data types) behind an abstract interface used by your clients.
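As a sketch of this principle, a hypothetical IDSet class could hide the container choice behind a tiny interface, so clients never depend on whether a set or a list is used underneath:

```python
# A sketch of hiding implementation details: clients of this hypothetical
# IDSet only use `in`; the underlying container can later be swapped
# (set, sorted list, trie, ...) without touching any client code.
class IDSet:
    """Membership queries over a collection of IDs."""

    def __init__(self, ids) -> None:
        self._ids = set(ids)   # implementation detail, free to change

    def __contains__(self, id_: int) -> bool:
        return id_ in self._ids

ids = IDSet([1, 2, 3])
print(2 in ids, 4 in ids)  # prints: True False
```

If profiling later shows that a different data structure pays off, only the class body changes.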
But beyond these points, remember the famous words by Donald E. Knuth:
quiz('Donald E. Knuth said: "Premature optimization..."',
[
"... is the root of all evil",
"... requires lots of experience",
"... should be left to assembly programmers",
"... is the reason why TeX is so fast",
], 'len("METAFONT") - len("TeX") - len("CWEB")')
This quote should remind us that after a good design, you should first measure and then optimize.
Having discussed profilers from a user perspective, let us now dive into how they are actually implemented. It turns out we can use most of our existing infrastructure to implement a simple tracing profiler with only a few lines of code.
The program we will apply our profiler on is – surprise! – our ongoing example, remove_html_markup()
. Our aim is to understand how much time is spent in each line of the code (thus adding a line-level view on top of Python's cProfile).
# ignore
from typing import Any, Optional, Type, Dict, Tuple, List
# ignore
from bookutils import print_content
# ignore
import inspect
print_content(inspect.getsource(remove_html_markup), '.py',
start_line_number=238)
238 def remove_html_markup(s):  # type: ignore
239     tag = False
240     quote = False
241     out = ""
242 
243     for c in s:
244         assert tag or not quote
245 
246         if c == '<' and not quote:
247             tag = True
248         elif c == '>' and not quote:
249             tag = False
250         elif (c == '"' or c == "'") and tag:
251             quote = not quote
252         elif not tag:
253             out = out + c
254 
255     return out
We introduce a class PerformanceTracer
that tracks, for each line in the code:
the number of times it was executed (hits), and
the time spent in it (time).
To this end, we make use of our Timer
class, which measures time, and the Tracer
class from the chapter on tracing, which allows us to track every line of the program as it is being executed.
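For reference, a minimal stand-in for such a Timer (a sketch under the assumption that elapsed wall-clock time is all we need; this is not the book's actual Timer module) might look like this:

```python
import time

# A minimal stand-in for the book's Timer class (an assumption, not the
# actual Timer module): measures elapsed wall-clock time, usable both
# directly and as a context manager.
class Timer:
    def __init__(self) -> None:
        self.start = time.perf_counter()

    def __enter__(self) -> 'Timer':
        self.start = time.perf_counter()
        return self

    def __exit__(self, *args: object) -> None:
        pass

    def elapsed_time(self) -> float:
        """Return seconds elapsed since creation (or entering the block)."""
        return time.perf_counter() - self.start

with Timer() as t:
    time.sleep(0.01)
print(t.elapsed_time())  # slightly more than 0.01
```

Note the use of time.perf_counter(), which provides the highest-resolution clock available for measuring short durations.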
In PerformanceTracer
, the attributes hits
and time
are mappings indexed by unique locations – that is, pairs of function name and line number.
Location = Tuple[str, int]
class PerformanceTracer(Tracer):
"""Trace time and #hits for individual program lines"""
def __init__(self) -> None:
"""Constructor."""
super().__init__()
self.reset_timer()
self.hits: Dict[Location, int] = {}
self.time: Dict[Location, float] = {}
def reset_timer(self) -> None:
self.timer = Timer.Timer()
As is common in this book, we want to use PerformanceTracer
in a with
-block around the function call(s) to be tracked:
with PerformanceTracer() as perf_tracer:
function(...)
When entering the with
block (__enter__()
), we reset all timers. Also, coming from the __enter__()
method of the superclass Tracer
, we enable tracing through the traceit()
method.
class PerformanceTracer(PerformanceTracer):
def __enter__(self) -> Any:
"""Enter a `with` block."""
super().__enter__()
self.reset_timer()
return self
The traceit()
method extracts the current location. It increases the corresponding hits
value by 1, and adds the elapsed time to the corresponding time
.
class PerformanceTracer(PerformanceTracer):
def traceit(self, frame: FrameType, event: str, arg: Any) -> None:
"""Tracing function; called for every line."""
t = self.timer.elapsed_time()
location = (frame.f_code.co_name, frame.f_lineno)
self.hits.setdefault(location, 0)
self.time.setdefault(location, 0.0)
self.hits[location] += 1
self.time[location] += t
self.reset_timer()
This is it already. We can now determine where most time is spent in remove_html_markup()
. We invoke it 10,000 times such that we can average over runs:
with PerformanceTracer() as perf_tracer:
for i in range(10000):
s = remove_html_markup('<b>foo</b>')
Here are the hits. For every line executed, we see how often it was executed. The most executed line is the for
loop with 110,000 hits – once for each of the 10 characters in <b>foo</b>
, once for the final check, and all of this 10,000 times.
perf_tracer.hits
{('__init__', 17): 1, ('__init__', 19): 1, ('clock', 8): 1, ('clock', 12): 2, ('__init__', 20): 2, ('remove_html_markup', 238): 10000, ('remove_html_markup', 239): 10000, ('remove_html_markup', 240): 10000, ('remove_html_markup', 241): 10000, ('remove_html_markup', 243): 110000, ('remove_html_markup', 244): 100000, ('remove_html_markup', 246): 100000, ('remove_html_markup', 247): 20000, ('remove_html_markup', 248): 80000, ('remove_html_markup', 250): 60000, ('remove_html_markup', 252): 60000, ('remove_html_markup', 249): 20000, ('remove_html_markup', 253): 30000, ('remove_html_markup', 255): 20000}
The time
attribute collects how much time was spent in each line. Within the loop, again, the for
statement takes the most time. The other lines show some variability, though.
perf_tracer.time
{('__init__', 17): 2.274999860674143e-05, ('__init__', 19): 1.2500095181167126e-06, ('clock', 8): 9.580107871443033e-07, ('clock', 12): 1.624997821636498e-06, ('__init__', 20): 1.791995600797236e-06, ('remove_html_markup', 238): 0.011178294604178518, ('remove_html_markup', 239): 0.010568024183157831, ('remove_html_markup', 240): 0.01028170032077469, ('remove_html_markup', 241): 0.009721325113787316, ('remove_html_markup', 243): 0.09196083749702666, ('remove_html_markup', 244): 0.08108717501454521, ('remove_html_markup', 246): 0.08036003762390465, ('remove_html_markup', 247): 0.01642499772424344, ('remove_html_markup', 248): 0.065049918324803, ('remove_html_markup', 250): 0.048187176915234886, ('remove_html_markup', 252): 0.05017466680146754, ('remove_html_markup', 249): 0.01636469637742266, ('remove_html_markup', 253): 0.02437107532750815, ('remove_html_markup', 255): 0.016080042492831126}
For a full profiler, these numbers would now be sorted and printed in a table, much like cProfile
does. However, we will borrow some material from previous chapters and annotate our code accordingly.
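For illustration, producing such a sorted table from a time mapping keyed by (function, line) pairs, as above, could look like this; profile_rows is an invented helper, not part of the book's infrastructure.

```python
def profile_rows(time_per_line: dict) -> list:
    """Return (function, line, time, percentage) rows, largest time first."""
    total = sum(time_per_line.values())
    return [(func, lineno, t, 100 * t / total)
            for (func, lineno), t in sorted(time_per_line.items(),
                                            key=lambda item: -item[1])]

# Toy data in the same shape as perf_tracer.time
for func, lineno, t, pct in profile_rows({('f', 10): 0.5, ('f', 11): 1.5}):
    print(f"{func:<20}{lineno:>6}{t:>10.6f}{pct:>7.1f}%")
```

Applied to perf_tracer.time, this would print the most expensive lines first, cProfile-style.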
In the chapter on statistical debugging, we have encountered the CoverageCollector
class, which collects line and function coverage during execution, using a collect()
method that is invoked for every line. We will repurpose this class to collect arbitrary metrics on the lines executed, notably time taken.
The MetricCollector
class is an abstract superclass that provides an interface to access a particular metric.
class MetricCollector(CoverageCollector):
"""Abstract superclass for collecting line-specific metrics"""
def metric(self, event: Any) -> Optional[float]:
"""Return a metric for an event, or none."""
return None
def all_metrics(self, func: str) -> List[float]:
"""Return all metric for a function `func`."""
return []
Given these metrics, we can also compute sums and maxima for a single function.
class MetricCollector(MetricCollector):
def total(self, func: str) -> float:
return sum(self.all_metrics(func))
def maximum(self, func: str) -> float:
return max(self.all_metrics(func))
Let us instantiate this superclass into TimeCollector
– a subclass that measures time. This is modeled after our PerformanceTracer
class, above; notably, the time
attribute serves the same role.
class TimeCollector(MetricCollector):
"""Collect time executed for each line"""
def __init__(self) -> None:
"""Constructor"""
super().__init__()
self.reset_timer()
self.time: Dict[Location, float] = {}
self.add_items_to_ignore([Timer.Timer, Timer.clock])
def collect(self, frame: FrameType, event: str, arg: Any) -> None:
"""Invoked for every line executed. Accumulate time spent."""
t = self.timer.elapsed_time()
super().collect(frame, event, arg)
location = (frame.f_code.co_name, frame.f_lineno)
self.time.setdefault(location, 0.0)
self.time[location] += t
self.reset_timer()
def reset_timer(self) -> None:
self.timer = Timer.Timer()
def __enter__(self) -> Any:
super().__enter__()
self.reset_timer()
return self
The metric()
and all_metrics()
methods accumulate the metric (time taken) for an individual function:
class TimeCollector(TimeCollector):
def metric(self, location: Any) -> Optional[float]:
if location in self.time:
return self.time[location]
else:
return None
def all_metrics(self, func: str) -> List[float]:
return [time
for (func_name, lineno), time in self.time.items()
if func_name == func]
Here's how to use TimeCollector()
– again, in a with
block:
with TimeCollector() as collector:
for i in range(100):
s = remove_html_markup('<b>foo</b>')
The time
attribute holds the time spent in each line:
for location, time_spent in collector.time.items():
print(location, time_spent)
('remove_html_markup', 238) 0.00020887405844405293
('remove_html_markup', 239) 0.00018283202371094376
('remove_html_markup', 240) 0.00017558105173520744
('remove_html_markup', 241) 0.0001655041123740375
('remove_html_markup', 243) 0.0015326741122407839
('remove_html_markup', 244) 0.0014188957575242966
('remove_html_markup', 246) 0.0013586477289209142
('remove_html_markup', 247) 0.00036928906047251076
('remove_html_markup', 248) 0.0010913212172454223
('remove_html_markup', 250) 0.0008168340427801013
('remove_html_markup', 252) 0.000838676918647252
('remove_html_markup', 249) 0.00027391510957386345
('remove_html_markup', 253) 0.0004042519722133875
('remove_html_markup', 255) 0.0002715360897127539
And we can also create a total for an entire function:
collector.total('remove_html_markup')
0.009108833255595528
Let us now go and visualize these numbers in a simple form. The idea is to assign each line a color whose saturation indicates the time spent in that line relative to the time spent in the function overall – the higher the fraction, the darker the line. We create a MetricDebugger
class built as a specialization of SpectrumDebugger
, in which suspiciousness()
and color()
are repurposed to show these metrics.
class MetricDebugger(SpectrumDebugger):
"""Visualize a metric"""
def metric(self, location: Location) -> float:
sum = 0.0
for outcome in self.collectors:
for collector in self.collectors[outcome]:
assert isinstance(collector, MetricCollector)
m = collector.metric(location)
if m is not None:
sum += m # type: ignore
return sum
def total(self, func_name: str) -> float:
total = 0.0
for outcome in self.collectors:
for collector in self.collectors[outcome]:
assert isinstance(collector, MetricCollector)
total += sum(collector.all_metrics(func_name))
return total
def maximum(self, func_name: str) -> float:
maximum = 0.0
for outcome in self.collectors:
for collector in self.collectors[outcome]:
assert isinstance(collector, MetricCollector)
maximum = max(maximum,
max(collector.all_metrics(func_name)))
return maximum
def suspiciousness(self, location: Location) -> float:
func_name, _ = location
return self.metric(location) / self.total(func_name)
def color(self, location: Location) -> str:
func_name, _ = location
hue = 240 # blue
saturation = 100 # fully saturated
darkness = self.metric(location) / self.maximum(func_name)
lightness = 100 - darkness * 25
return f"hsl({hue}, {saturation}%, {lightness}%)"
def tooltip(self, location: Location) -> str:
return f"{super().tooltip(location)} {self.metric(location)}"
We can now introduce PerformanceDebugger
as a subclass of MetricDebugger
, using an arbitrary MetricCollector
(such as TimeCollector
) to obtain the metric we want to visualize.
class PerformanceDebugger(MetricDebugger):
"""Collect and visualize a metric"""
def __init__(self, collector_class: Type, log: bool = False):
assert issubclass(collector_class, MetricCollector)
super().__init__(collector_class, log=log)
With PerformanceDebugger
, we inherit all the capabilities of SpectrumDebugger
, such as showing the (relative) percentage of time spent in a table. We see that the for
condition and the following assert
take most of the time, followed by the first condition.
with PerformanceDebugger(TimeCollector) as debugger:
for i in range(100):
s = remove_html_markup('<b>foo</b>')
print(debugger)
238   2% def remove_html_markup(s):  # type: ignore
239   2%     tag = False
240   1%     quote = False
241   1%     out = ""
242   0% 
243  16%     for c in s:
244  14%         assert tag or not quote
245   0% 
246  14%         if c == '<' and not quote:
247   3%             tag = True
248  12%         elif c == '>' and not quote:
249   2%             tag = False
250   9%         elif (c == '"' or c == "'") and tag:
251   0%             quote = not quote
252   9%         elif not tag:
253   4%             out = out + c
254   0% 
255   4%     return out
However, we can also visualize these percentages, using shades of blue to indicate the lines in which most time is spent:
debugger
238 def remove_html_markup(s): # type: ignore
239 tag = False
240 quote = False
241 out = ""
242
243 for c in s:
244 assert tag or not quote
245
246 if c == '<' and not quote:
247 tag = True
248 elif c == '>' and not quote:
249 tag = False
250 elif (c == '"' or c == "'") and tag:
251 quote = not quote
252 elif not tag:
253 out = out + c
254
255 return out
Our framework is flexible enough to collect (and visualize) arbitrary metrics. This HitCollector
class, for instance, collects how often a line is being executed.
class HitCollector(MetricCollector):
"""Collect how often a line is executed"""
def __init__(self) -> None:
super().__init__()
self.hits: Dict[Location, int] = {}
def collect(self, frame: FrameType, event: str, arg: Any) -> None:
super().collect(frame, event, arg)
location = (frame.f_code.co_name, frame.f_lineno)
self.hits.setdefault(location, 0)
self.hits[location] += 1
def metric(self, location: Location) -> Optional[int]:
if location in self.hits:
return self.hits[location]
else:
return None
def all_metrics(self, func: str) -> List[float]:
return [hits
for (func_name, lineno), hits in self.hits.items()
if func_name == func]
We can plug in this class into PerformanceDebugger
to obtain a distribution of lines executed:
with PerformanceDebugger(HitCollector) as debugger:
for i in range(100):
s = remove_html_markup('<b>foo</b>')
In total, during this call to remove_html_markup()
, there are 6,400 lines executed:
debugger.total('remove_html_markup')
6400.0
Again, we can visualize the distribution both as a table and with colors. We can see how the shade gets lighter in the lower part of the loop, as individual conditions have already been met.
print(debugger)
238   1% def remove_html_markup(s):  # type: ignore
239   1%     tag = False
240   1%     quote = False
241   1%     out = ""
242   0% 
243  17%     for c in s:
244  15%         assert tag or not quote
245   0% 
246  15%         if c == '<' and not quote:
247   3%             tag = True
248  12%         elif c == '>' and not quote:
249   3%             tag = False
250   9%         elif (c == '"' or c == "'") and tag:
251   0%             quote = not quote
252   9%         elif not tag:
253   4%             out = out + c
254   0% 
255   3%     return out
debugger
238 def remove_html_markup(s): # type: ignore
239 tag = False
240 quote = False
241 out = ""
242
243 for c in s:
244 assert tag or not quote
245
246 if c == '<' and not quote:
247 tag = True
248 elif c == '>' and not quote:
249 tag = False
250 elif (c == '"' or c == "'") and tag:
251 quote = not quote
252 elif not tag:
253 out = out + c
254
255 return out
Besides identifying causes for performance issues in the code, one may also search for causes in the input, using Delta Debugging. This can be useful if one does not immediately want to embark on investigating the code, but would first like to determine external influences that are related to performance issues.
Here is a variant of remove_html_markup()
that introduces a (rather obvious) performance issue.
import time  # needed for time.sleep() below

def remove_html_markup_ampersand(s: str) -> str:
tag = False
quote = False
out = ""
for c in s:
assert tag or not quote
if c == '&':
time.sleep(0.1) # <-- the obvious performance issue
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif (c == '"' or c == "'") and tag:
quote = not quote
elif not tag:
out = out + c
return out
We can easily trigger this issue by measuring time taken:
with Timer.Timer() as t:
remove_html_markup_ampersand('&&&')
t.elapsed_time()
0.31661604199325666
Let us set up a test that checks whether the performance issue is present.
def remove_html_test(s: str) -> None:
with Timer.Timer() as t:
remove_html_markup_ampersand(s)
assert t.elapsed_time() < 0.1
We can now apply delta debugging to determine a minimum input that causes the failure:
s_fail = '<b>foo&amp;</b>'
with DeltaDebugger.DeltaDebugger() as dd:
remove_html_test(s_fail)
dd.min_args()
{'s': '&'}
For performance issues, however, a minimal input is often not enough to highlight the failure cause. This is because short inputs tend to take less processing time than longer inputs, which increases the risk of a spurious diagnosis. A better alternative is to compute a maximal input on which the issue does not occur:
s_pass = dd.max_args()
s_pass
{'s': '<b>fooamp;</b>'}
We see that the culprit character (the &
) is removed. This tells us the failure-inducing difference – or, more precisely, the cause for the performance issue.
This chapter provides a class PerformanceDebugger
that allows measuring and visualizing the time taken per line in a function.
with PerformanceDebugger(TimeCollector) as debugger:
for i in range(100):
s = remove_html_markup('<b>foo</b>')
The distribution of executed time within each function can be obtained by printing out the debugger:
print(debugger)
238   2% def remove_html_markup(s):  # type: ignore
239   2%     tag = False
240   1%     quote = False
241   1%     out = ""
242   0% 
243  16%     for c in s:
244  15%         assert tag or not quote
245   0% 
246  14%         if c == '<' and not quote:
247   3%             tag = True
248  12%         elif c == '>' and not quote:
249   2%             tag = False
250   8%         elif (c == '"' or c == "'") and tag:
251   0%             quote = not quote
252   9%         elif not tag:
253   5%             out = out + c
254   0% 
255   2%     return out
The percentages within a function should always add up to 100% (modulo rounding).
These percentages can also be visualized, where darker shades represent higher percentage values:
debugger
238 def remove_html_markup(s): # type: ignore
239 tag = False
240 quote = False
241 out = ""
242
243 for c in s:
244 assert tag or not quote
245
246 if c == '<' and not quote:
247 tag = True
248 elif c == '>' and not quote:
249 tag = False
250 elif (c == '"' or c == "'") and tag:
251 quote = not quote
252 elif not tag:
253 out = out + c
254
255 return out
The abstract MetricCollector
class allows subclassing to build more collectors, such as HitCollector
.
# ignore
from ClassDiagram import display_class_hierarchy
# ignore
display_class_hierarchy([PerformanceDebugger, TimeCollector, HitCollector],
public_methods=[
PerformanceDebugger.__init__,
],
project='debuggingbook')
This chapter concludes the part on abstracting failures. The next part will focus on
Scalene is a high-performance, high-precision CPU, GPU, and memory profiler for Python. In contrast to the standard Python cProfile
profiler, it uses sampling instead of instrumentation or Python's tracing facilities, and it also supports line-by-line profiling. Scalene might be the tool of choice if you want to go beyond basic profiling.
The Wikipedia articles on profiling and performance analysis tools provide several additional resources on profiling tools and how to apply them in practice.
The Python tracemalloc
module allows tracking memory usage during execution. Between tracemalloc.start()
and tracemalloc.stop()
, use tracemalloc.get_traced_memory()
to obtain how much memory is currently being consumed:
import tracemalloc

tracemalloc.start()
current_size, peak_size = tracemalloc.get_traced_memory()
current_size
21819
tracemalloc.stop()
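The same before/after pattern yields the memory cost of an individual statement – a minimal sketch:

```python
import tracemalloc

# Sketch: the difference in traced memory before and after a statement
# approximates that statement's memory cost.
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
data = [0] * 1_000_000          # allocate roughly 8 MB for the list object
after, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"allocation cost: {after - before} bytes (peak: {peak} bytes)")
```

This is precisely the pattern a line-by-line memory collector would apply at every line event.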
Create a subclass of MetricCollector
named MemoryCollector
. Make it measure the difference in memory consumption before and after each line executed (clamping negative differences to 0), and visualize the impact of individual lines on memory. Create an appropriate test program that (temporarily) consumes larger amounts of memory.
In a similar way as we integrated a binary "performance test" with delta debugging, we can also integrate such a test with other techniques. Combining a performance test with Statistical Debugging, for instance, will highlight those lines whose execution correlates with low performance. But then, the performance test need not be binary, as with functional pass/fail tests – you can also weight individual lines by how much they impact performance. Create a variant of StatisticalDebugger
that reflects the impact of individual lines on an arbitrary (summarized) performance metric.