Kirill Smelkov / cpython

Commit 86f96139, authored Aug 06, 2010 by Raymond Hettinger
Improve the whatsnew article on the lru/lfu cache decorators.
Parent: 7e3b948c

Showing 1 changed file with 24 additions and 13 deletions.
Doc/whatsnew/3.2.rst (view file @ 86f96139)
...
@@ -71,8 +71,8 @@ New, Improved, and Deprecated Modules
   save repeated queries to an external resource whenever the results are
   expected to be the same.
-  For example, adding an LFU decorator to a database query function can save
-  database accesses for the most popular searches::
+  For example, adding a caching decorator to a database query function can save
+  database accesses for popular searches::
 
       @functools.lfu_cache(maxsize=50)
       def get_phone_number(name):
...
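The memoization pattern in the hunk above can be tried today. This is a minimal sketch only: the `functools.lfu_cache` decorator proposed in this commit never shipped, so the sketch substitutes `functools.lru_cache`, which did land in Python 3.2. The `PHONELIST` dict and `CALLS` list are hypothetical stand-ins for the database, added so the example is self-contained.

```python
import functools

PHONELIST = {'alice': '555-0100', 'bob': '555-0101'}  # stand-in for the database table
CALLS = []  # records each simulated database access

# lru_cache replaces the commit's proposed lfu_cache, which was never released
@functools.lru_cache(maxsize=50)
def get_phone_number(name):
    CALLS.append(name)          # one append per real (uncached) lookup
    return PHONELIST[name]

get_phone_number('alice')
get_phone_number('alice')       # second call is served from the cache
print(len(CALLS))               # -> 1: only one simulated database access
```

Repeated queries for the same name never reach the "database" again until the entry is evicted, which is exactly the saving the paragraph describes.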
@@ -80,21 +80,32 @@ New, Improved, and Deprecated Modules
       c.execute('SELECT phonenumber FROM phonelist WHERE name=?', (name,))
       return c.fetchone()[0]
 
-  The LFU (least-frequently-used) cache gives best results when the distribution
-  of popular queries tends to remain the same over time. In contrast, the LRU
-  (least-recently-used) cache gives best results when the distribution changes
-  over time (for example, the most popular news articles change each day as
-  newer articles are added).
+  The caches support two strategies for limiting their size to *maxsize*. The
+  LFU (least-frequently-used) cache works best when popular queries remain the
+  same over time. In contrast, the LRU (least-recently-used) cache works best
+  when query popularity changes over time (for example, the most popular news
+  articles change each day as newer articles are added).
 
-  The two caching decorators can be composed (nested) to handle hybrid cases
-  that have both long-term access patterns and some short-term access trends.
+  The two caching decorators can be composed (nested) to handle hybrid cases.
+  For example, music searches can reflect both long-term patterns (popular
+  classics) and short-term trends (new releases)::
 
       @functools.lfu_cache(maxsize=500)
       @functools.lru_cache(maxsize=100)
-      def find_music(song):
-          ...
+      def find_lyrics(song):
+          query = 'http://www.example.com/songlist/%s' % urllib.quote(song)
+          page = urllib.urlopen(query).read()
+          return parse_lyrics(page)
+
+  To help with choosing an effective cache size, the wrapped function is
+  instrumented with two attributes 'hits' and 'misses'::
+
+      >>> for song in user_requests:
+      ...     find_lyrics(song)
+      >>> print find_lyrics.hits
+      4805
+      >>> print find_lyrics.misses
+      980
 
   (Contributed by Raymond Hettinger)
...
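The `hits`/`misses` instrumentation proposed in the diff can be demonstrated with the API that actually shipped: `functools.lru_cache` exposes the same counters through the wrapper's `cache_info()` method rather than as plain attributes. A small sketch, with `find_lyrics` reduced to a hypothetical stub in place of the web fetch so it runs standalone:

```python
import functools

@functools.lru_cache(maxsize=100)
def find_lyrics(song):
    # stub standing in for the urllib fetch shown in the diff
    return 'lyrics for %s' % song

user_requests = ['yesterday', 'yesterday', 'help', 'yesterday']
for song in user_requests:
    find_lyrics(song)

# cache_info() reports hits, misses, maxsize, and current size
info = find_lyrics.cache_info()
print(info.hits, info.misses)   # -> 2 2
```

The four requests produce two misses ('yesterday' and 'help' on first sight) and two hits (the repeated 'yesterday' lookups), which is the kind of ratio the text suggests inspecting when tuning *maxsize*.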