Hi,

I was trying to fix bug #109115, which is related to this one, and found out a couple of things:
1. If you need to reproduce this bug (#109114) but are too lazy to wait for GB-sized files to be processed, you can limit the memory available to the bzr process and use MB-sized files instead. See the attached bash script testcase_109114.sh; a rough sketch of the same approach is included at the end of this comment.
2. It is interesting to note that when you add & commit one big file, bzr needs roughly 1x the file size in memory for:
lines = tree.get_file(ie.file_id, path).readlines()
and then ~4x more memory when calling
self._add_text_to_weave(...)
in bzrlib/repository.py, in CommitBuilder.record_entry_contents().
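For reference, the approach from point 1 can be sketched roughly like this (note: this is not the attached testcase_109114.sh; the 100 MB memory cap and the 10 MB file size are illustrative guesses and may need tuning for your machine):

  #!/bin/bash
  # Reproduce bug #109114 with a small file by capping bzr's memory.
  set -e
  mkdir repro_109114
  cd repro_109114
  bzr init .
  dd if=/dev/urandom of=bigfile bs=1M count=10   # create a 10 MB test file
  ulimit -v 102400                               # limit the shell and its children to ~100 MB of virtual memory
  bzr add bigfile
  bzr commit -m "add one big file"               # expected to fail with MemoryError under the limit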