Commit 1029aef7 authored by Sebastien Robin's avatar Sebastien Robin

BigFile.py: use a default chunk size more compatible with the default cache configuration

Before, chunks were 1MB each, so with a default Zope cache size of 5000 objects,
it was possible to consume 5GB of memory even though the BigFile was filled
little by little across many different transactions.
parent 3d4e38c8
@@ -77,7 +77,11 @@ class BigFile(File):
     def _read_data(self, file, data=None):
-        n=1 << 20
+        # We might need to make this value configurable. It is important to
+        # consider the max quantity of objects used in the cache. With a default
+        # cache of 5000 objects, and with n = 64KB, this makes using about 330 MB
+        # of memory.
+        n=1 << 16
         if isinstance(file, str):
             # Big string: cut it into smaller chunks
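The worst-case memory figures in the commit message follow directly from multiplying the cache's object count by the chunk size. A minimal sketch of that arithmetic (the helper name is ours for illustration, not part of the Zope/ERP5 API):

```python
# Worst case: the ZODB object cache can keep up to `cache_size` chunk
# objects alive, each pinning `chunk_size` bytes of data.
def worst_case_cache_bytes(cache_size, chunk_size):
    return cache_size * chunk_size

# Old default: 1MB chunks -> ~5.2 GB with a 5000-object cache.
old = worst_case_cache_bytes(5000, 1 << 20)
# New default: 64KB chunks -> ~330 MB with the same cache.
new = worst_case_cache_bytes(5000, 1 << 16)

print(old, new)
```

This is why shrinking the chunk size, rather than the cache, bounds memory use: the cache limit counts objects, not bytes, so the per-object payload is the only other lever.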