# Mini Moka Cache — Change Log

## Version 0.10.3

### Fixed

- Fixed occasional panic in internal `FrequencySketch` in debug build.
  ([#21][gh-issue-0021])


## Version 0.10.2

### Fixed

- Fixed a memory corruption bug caused by the timing of concurrent `insert`,
  `get` and removal of the same cached entry. ([#15][gh-pull-0015])


## Version 0.10.1

Bumped the minimum supported Rust version (MSRV) to 1.61 (May 19, 2022).
([#5][gh-pull-0005])

### Fixed

- Fixed the caches mutating a deque node through a `NonNull` pointer derived from a
  shared reference. ([#6][gh-pull-0006])


## Version 0.10.0

In this version, we removed some dependencies from Mini Moka to make it more
lightweight.

### Removed

- Removed the background threads from the `sync::Cache` ([#1][gh-pull-0001]):
  - Also removed the following dependencies:
    - `scheduled-thread-pool`
    - `num_cpus`
    - `once_cell` (moved to the dev-dependencies)
- Removed the following dependencies and crate features ([#2][gh-pull-0002]):
  - Removed dependencies:
    - `quanta`
    - `parking_lot`
    - `rustc_version` (from the build-dependencies)
  - Removed crate features:
    - `quanta` (was enabled by default)
    - `atomic64` (was enabled by default)


## Version 0.9.6

### Added

- Moved the relevant source code from the GitHub moka-rs/moka repository (at the
  [v0.9.6][moka-v0.9.6] tag) to this moka-rs/mini-moka repository.
  - Renamed the `moka::dash` module to `mini_moka::sync`.
  - Renamed the `moka::unsync` module to `mini_moka::unsync`.
  - Renamed the crate feature `dash` to `sync` and made it a default feature.

<!-- Links -->
[moka-v0.9.6]: https://github.com/moka-rs/moka/tree/v0.9.6

[gh-issue-0021]: https://github.com/moka-rs/mini-moka/issues/21

[gh-pull-0015]: https://github.com/moka-rs/mini-moka/pull/15
[gh-pull-0006]: https://github.com/moka-rs/mini-moka/pull/6
[gh-pull-0005]: https://github.com/moka-rs/mini-moka/pull/5
[gh-pull-0002]: https://github.com/moka-rs/mini-moka/pull/2
[gh-pull-0001]: https://github.com/moka-rs/mini-moka/pull/1
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2020 - 2024 Tatsuya Kawano

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
crates/mini-moka-vendored/LICENSE-MIT
MIT License

Copyright (c) 2020 - 2024 Tatsuya Kawano

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# Vendored until the upstream PR for wasm compatibility is merged or I reimplement it.

# Mini Moka

[![GitHub Actions][gh-actions-badge]][gh-actions]
[![crates.io release][release-badge]][crate]
[![docs][docs-badge]][docs]
[![dependency status][deps-rs-badge]][deps-rs]
<!-- [![coverage status][coveralls-badge]][coveralls] -->
[![license][license-badge]](#license)
<!-- [![FOSSA Status][fossa-badge]][fossa] -->

Mini Moka is a fast, concurrent cache library for Rust. Mini Moka is a light edition
of [Moka][moka-git].

Mini Moka provides cache implementations on top of hash maps. They support full
concurrency of retrievals and a high expected concurrency for updates. Mini Moka also
provides a non-thread-safe cache implementation for single-thread applications.

All caches perform a best-effort bounding of a hash map using an entry replacement
algorithm to determine which entries to evict when the capacity is exceeded.
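To illustrate what "best-effort bounding" means, here is a minimal, self-contained sketch. The `BoundedCache` type is hypothetical and is not Mini Moka's implementation; it evicts in plain insertion order, whereas Mini Moka uses the TinyLFU-based policy described under Features:

```rust
use std::collections::{HashMap, VecDeque};

/// A toy bounded map: evicts the oldest key once `max_capacity` is exceeded.
/// (Illustration only; Mini Moka uses an LFU/LRU-based policy instead.)
struct BoundedCache {
    max_capacity: usize,
    map: HashMap<u32, String>,
    order: VecDeque<u32>, // insertion order, oldest key at the front
}

impl BoundedCache {
    fn new(max_capacity: usize) -> Self {
        Self {
            max_capacity,
            map: HashMap::new(),
            order: VecDeque::new(),
        }
    }

    fn insert(&mut self, key: u32, value: String) {
        if self.map.insert(key, value).is_none() {
            self.order.push_back(key);
        }
        // Best-effort bounding: drop the oldest entries while over capacity.
        while self.map.len() > self.max_capacity {
            if let Some(old) = self.order.pop_front() {
                self.map.remove(&old);
            }
        }
    }

    fn get(&self, key: &u32) -> Option<&String> {
        self.map.get(key)
    }
}

fn main() {
    let mut cache = BoundedCache::new(2);
    cache.insert(1, "one".to_string());
    cache.insert(2, "two".to_string());
    cache.insert(3, "three".to_string()); // evicts key 1, the oldest entry
    assert!(cache.get(&1).is_none());
    assert!(cache.get(&3).is_some());
}
```

The point of the sketch is only the shape of the contract: inserts always succeed, and the cache itself decides which resident entries to drop to stay near its capacity.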
[gh-actions-badge]: https://github.com/moka-rs/mini-moka/workflows/CI/badge.svg
[release-badge]: https://img.shields.io/crates/v/mini-moka.svg
[docs-badge]: https://docs.rs/mini-moka/badge.svg
[deps-rs-badge]: https://deps.rs/repo/github/moka-rs/mini-moka/status.svg
<!-- [coveralls-badge]: https://coveralls.io/repos/github/mini-moka-rs/moka/badge.svg?branch=main -->
[license-badge]: https://img.shields.io/crates/l/mini-moka.svg
<!-- [fossa-badge]: https://app.fossa.com/api/projects/git%2Bgithub.com%2Fmoka-rs%2Fmini-moka.svg?type=shield -->

[gh-actions]: https://github.com/moka-rs/mini-moka/actions?query=workflow%3ACI
[crate]: https://crates.io/crates/mini-moka
[docs]: https://docs.rs/mini-moka
[deps-rs]: https://deps.rs/repo/github/moka-rs/mini-moka
<!-- [coveralls]: https://coveralls.io/github/moka-rs/mini-moka?branch=main -->
<!-- [fossa]: https://app.fossa.com/projects/git%2Bgithub.com%2Fmoka-rs%2Fmini-moka?ref=badge_shield -->

[moka-git]: https://github.com/moka-rs/moka
[caffeine-git]: https://github.com/ben-manes/caffeine
## Features

- Thread-safe, highly concurrent in-memory cache implementation.
- A cache can be bounded by one of the following:
  - The maximum number of entries.
  - The total weighted size of entries. (Size-aware eviction)
- Maintains a near-optimal hit ratio by using an entry replacement algorithm inspired
  by Caffeine:
  - Admission to the cache is controlled by the Least Frequently Used (LFU) policy.
  - Eviction from the cache is controlled by the Least Recently Used (LRU) policy.
  - [More details and some benchmark results are available here][tiny-lfu].
- Supports expiration policies:
  - Time to live
  - Time to idle
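The LFU admission idea above can be sketched in a few lines. This is an illustration only, using a hypothetical `Popularity` type; Mini Moka's actual `FrequencySketch` uses a compact count-min sketch with 4-bit counters, not a `HashMap`:

```rust
use std::collections::HashMap;

/// Toy popularity tracker (illustration only; Mini Moka uses a compact
/// count-min sketch with 4-bit counters rather than a HashMap).
#[derive(Default)]
struct Popularity {
    counts: HashMap<u64, u8>,
}

impl Popularity {
    /// Record one access, capping each counter at 15 like a 4-bit counter.
    fn record(&mut self, key_hash: u64) {
        let c = self.counts.entry(key_hash).or_insert(0);
        *c = (*c + 1).min(15);
    }

    fn frequency(&self, key_hash: u64) -> u8 {
        self.counts.get(&key_hash).copied().unwrap_or(0)
    }

    /// TinyLFU-style admission: admit the candidate only if it is estimated
    /// to be more popular than the would-be eviction victim.
    fn admit(&self, candidate: u64, victim: u64) -> bool {
        self.frequency(candidate) > self.frequency(victim)
    }
}

fn main() {
    let mut pop = Popularity::default();
    for _ in 0..3 {
        pop.record(42); // the candidate is accessed three times
    }
    pop.record(7); // the victim is accessed once
    assert!(pop.admit(42, 7)); // popular candidate displaces rare victim
    assert!(!pop.admit(7, 42)); // a one-hit wonder cannot displace a popular entry
}
```

This combination is what keeps one-hit-wonder keys from flushing out frequently used entries, which is the main reason TinyLFU-style caches achieve high hit ratios.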
<!--
Mini Moka provides a rich and flexible feature set while maintaining a high hit ratio
and a high level of concurrency for concurrent access. However, it may not be as fast
as other caches, especially those that focus on much smaller feature sets.

If you do not need features like time to live or size-aware eviction, you may want
to take a look at the [Quick Cache][quick-cache] crate.
-->

[tiny-lfu]: https://github.com/moka-rs/moka/wiki#admission-and-eviction-policies
<!-- [quick-cache]: https://crates.io/crates/quick_cache -->
## Change Log

- [CHANGELOG.md](https://github.com/moka-rs/mini-moka/blob/main/CHANGELOG.md)


## Table of Contents

- [Features](#features)
- [Change Log](#change-log)
- [Usage](#usage)
- [Example: Synchronous Cache](#example-synchronous-cache)
- [Avoiding to clone the value at `get`](#avoiding-to-clone-the-value-at-get)
- Examples (Part 2)
  - [Size Aware Eviction](#example-size-aware-eviction)
  - [Expiration Policies](#example-expiration-policies)
- [Minimum Supported Rust Versions](#minimum-supported-rust-versions)
- [Developing Mini Moka](#developing-mini-moka)
- [Credits](#credits)
- [License](#license)
## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
mini-moka = "0.10"
```
## Example: Synchronous Cache

The thread-safe, synchronous caches are defined in the `sync` module.

Cache entries are manually added using the `insert` method, and are stored in the
cache until either evicted or manually invalidated.

Here's an example of reading and updating a cache from multiple threads:

```rust
// Use the synchronous cache.
use mini_moka::sync::Cache;

use std::thread;

fn value(n: usize) -> String {
    format!("value {}", n)
}

fn main() {
    const NUM_THREADS: usize = 16;
    const NUM_KEYS_PER_THREAD: usize = 64;

    // Create a cache that can store up to 10,000 entries.
    let cache = Cache::new(10_000);

    // Spawn threads and read and update the cache simultaneously.
    let threads: Vec<_> = (0..NUM_THREADS)
        .map(|i| {
            // To share the same cache across the threads, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_THREAD;
            let end = (i + 1) * NUM_KEYS_PER_THREAD;

            thread::spawn(move || {
                // Insert 64 entries. (NUM_KEYS_PER_THREAD = 64)
                for key in start..end {
                    my_cache.insert(key, value(key));
                    // get() returns Option<String>, a clone of the stored value.
                    assert_eq!(my_cache.get(&key), Some(value(key)));
                }

                // Invalidate every 4th element of the inserted entries.
                for key in (start..end).step_by(4) {
                    my_cache.invalidate(&key);
                }
            })
        })
        .collect();

    // Wait for all threads to complete.
    threads.into_iter().for_each(|t| t.join().expect("Failed"));

    // Verify the result.
    for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
        if key % 4 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}
```
## Avoiding to clone the value at `get`

For the concurrent cache (`sync` cache), the return type of the `get` method is
`Option<V>` instead of `Option<&V>`, where `V` is the value type. Every time `get` is
called for an existing key, it creates a clone of the stored value `V` and returns
it. This is because the `Cache` allows concurrent updates from threads, so a value
stored in the cache can be dropped or replaced at any time by any other thread. `get`
cannot return a reference `&V`, as it is impossible to guarantee that the value
outlives the reference.

If you want to store values that are expensive to clone, wrap them in
`std::sync::Arc` before storing them in the cache. [`Arc`][rustdoc-std-arc] is a
thread-safe reference-counted pointer, and its `clone()` method is cheap.

[rustdoc-std-arc]: https://doc.rust-lang.org/stable/std/sync/struct.Arc.html

```rust,ignore
use std::sync::Arc;

let key = ...
let large_value = vec![0u8; 2 * 1024 * 1024]; // 2 MiB

// When inserting, wrap the large_value in an Arc.
cache.insert(key.clone(), Arc::new(large_value));

// get() will call Arc::clone() on the stored value, which is cheap.
cache.get(&key);
```
## Example: Size Aware Eviction

If different cache entries have different "weights" (e.g. different memory
footprints), you can specify a `weigher` closure at cache creation time. The closure
should return the weighted size (relative size) of an entry as a `u32`, and the cache
will evict entries when the total weighted size exceeds its `max_capacity`.

```rust
use std::convert::TryInto;
use mini_moka::sync::Cache;

fn main() {
    let cache = Cache::builder()
        // A weigher closure takes &K and &V and returns a u32 representing the
        // relative size of the entry. Here, we use the byte length of the value
        // String as the size.
        .weigher(|_key, value: &String| -> u32 {
            value.len().try_into().unwrap_or(u32::MAX)
        })
        // This cache will hold up to 32 MiB of values.
        .max_capacity(32 * 1024 * 1024)
        .build();
    cache.insert(0, "zero".to_string());
}
```

Note that weighted sizes are not used when making eviction selections.
## Example: Expiration Policies

Mini Moka supports the following expiration policies:

- **Time to live**: A cached entry will expire after the specified duration has
  passed since `insert`.
- **Time to idle**: A cached entry will expire after the specified duration has
  passed since `get` or `insert`.

To set them, use the `CacheBuilder`.

```rust
use mini_moka::sync::Cache;
use std::time::Duration;

fn main() {
    let cache = Cache::builder()
        // Time to live (TTL): 30 minutes
        .time_to_live(Duration::from_secs(30 * 60))
        // Time to idle (TTI): 5 minutes
        .time_to_idle(Duration::from_secs(5 * 60))
        // Create the cache.
        .build();

    // This entry will expire after 5 minutes (TTI) if there is no get().
    cache.insert(0, "zero");

    // This get() will extend the entry's life for another 5 minutes.
    cache.get(&0);

    // Even though we keep calling get(), the entry will expire
    // after 30 minutes (TTL) from the insert().
}
```

### A note on expiration policies

The cache builders will panic if configured with either `time_to_live` or
`time_to_idle` longer than 1000 years. This is done to protect against overflow when
computing key expiration.
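The guard makes sense in terms of nanosecond arithmetic: a `u64` nanosecond counter can represent only about 584 years, so a duration near 1000 years cannot fit. A quick standard-library check (independent of Mini Moka's internals, which may differ):

```rust
use std::time::Duration;

fn main() {
    // ~1000 years, ignoring leap years.
    let thousand_years = Duration::from_secs(1000 * 365 * 24 * 60 * 60);

    // u64::MAX nanoseconds is only about 584 years...
    let u64_max_years = u64::MAX / 1_000_000_000 / (365 * 24 * 60 * 60);
    assert_eq!(u64_max_years, 584);

    // ...so 1000 years of nanoseconds does not fit in a u64.
    assert!(thousand_years.as_nanos() > u64::MAX as u128);
}
```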
## Minimum Supported Rust Versions

Mini Moka's minimum supported Rust versions (MSRV) are the following:

| Feature          | MSRV                       |
|:-----------------|:--------------------------:|
| default features | Rust 1.76.0 (Feb 8, 2024)  |

It will keep a rolling MSRV policy of at least 6 months. If only the default features
are enabled, the MSRV will be updated conservatively. When using other features, the
MSRV might be updated more frequently, up to the latest stable. In both cases,
increasing the MSRV is _not_ considered a semver-breaking change.
## Developing Mini Moka

**Running All Tests**

To run all tests including doc tests on the README, use the following command:

```console
$ RUSTFLAGS='--cfg trybuild' cargo test --all-features
```

**Generating the Doc**

```console
$ cargo +nightly -Z unstable-options --config 'build.rustdocflags="--cfg docsrs"' \
    doc --no-deps
```
## Credits

### Caffeine

Mini Moka's architecture is heavily inspired by the [Caffeine][caffeine-git] library
for Java. Thanks go to Ben Manes and all contributors of Caffeine.


## License

Mini Moka is distributed under either of

- The MIT license
- The Apache License (Version 2.0)

at your option.

See [LICENSE-MIT](LICENSE-MIT) and [LICENSE-APACHE](LICENSE-APACHE) for details.

<!-- [](https://app.fossa.com/projects/git%2Bgithub.com%2Fmoka-rs%2Fmini-moka?ref=badge_large) -->
// License and Copyright Notice:
//
// Some of the code and doc comments in this module were ported or copied from
// a Java class `com.github.benmanes.caffeine.cache.FrequencySketch` of Caffeine.
// https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/cache/FrequencySketch.java
//
// The original code/comments from Caffeine are licensed under the Apache License,
// Version 2.0 <https://github.com/ben-manes/caffeine/blob/master/LICENSE>
//
// Copyrights of the original code/comments are retained by their contributors.
// For full authorship information, see the version control history of
// https://github.com/ben-manes/caffeine/
/// A probabilistic multi-set for estimating the popularity of an element within
/// a time window. The maximum frequency of an element is limited to 15 (4-bits)
/// and an aging process periodically halves the popularity of all elements.
#[derive(Default)]
pub(crate) struct FrequencySketch {
    sample_size: u32,
    table_mask: u32,
    table: Box<[u64]>,
    size: u32,
}

// A mixture of seeds from FNV-1a, CityHash, and Murmur3. (Taken from Caffeine)
static SEED: [u64; 4] = [
    0xc3a5_c85c_97cb_3127,
    0xb492_b66f_be98_f273,
    0x9ae1_6a3b_2f90_404f,
    0xcbf2_9ce4_8422_2325,
];

static RESET_MASK: u64 = 0x7777_7777_7777_7777;

static ONE_MASK: u64 = 0x1111_1111_1111_1111;
// -------------------------------------------------------------------------------
// Some of the code and doc comments in this module were ported or copied from
// a Java class `com.github.benmanes.caffeine.cache.FrequencySketch` of Caffeine.
// https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/cache/FrequencySketch.java
// -------------------------------------------------------------------------------
//
// FrequencySketch maintains a 4-bit CountMinSketch [1] with periodic aging to
// provide the popularity history for the TinyLfu admission policy [2].
// The time and space efficiency of the sketch allows it to cheaply estimate the
// frequency of an entry in a stream of cache access events.
//
// The counter matrix is represented as a single-dimensional array holding 16
// counters per slot. A fixed depth of four balances the accuracy and cost,
// resulting in a width of four times the length of the array. To retain an
// accurate estimation, the array's length equals the maximum number of entries
// in the cache, increased to the closest power-of-two to exploit more efficient
// bit masking. This configuration results in a confidence of 93.75% and an error
// bound of e / width.
//
// The frequency of all entries is aged periodically using a sampling window
// based on the maximum number of entries in the cache. This is referred to as
// the reset operation by TinyLfu and keeps the sketch fresh by dividing all
// counters by two and subtracting based on the number of odd counters
// found. The O(n) cost of aging is amortized, ideal for hardware pre-fetching,
// and uses inexpensive bit manipulations per array location.
//
// [1] An Improved Data Stream Summary: The Count-Min Sketch and its Applications
//     http://dimacs.rutgers.edu/~graham/pubs/papers/cm-full.pdf
// [2] TinyLFU: A Highly Efficient Cache Admission Policy
//     https://dl.acm.org/citation.cfm?id=3149371
//
// -------------------------------------------------------------------------------
impl FrequencySketch {
    /// Initializes and increases the capacity of this `FrequencySketch` instance,
    /// if necessary, to ensure that it can accurately estimate the popularity of
    /// elements given the maximum size of the cache. This operation forgets all
    /// previous counts when resizing.
    pub(crate) fn ensure_capacity(&mut self, cap: u32) {
        // The max byte size of the table, Box<[u64; table_size]>
        //
        // | Pointer width    | Max size |
        // |:-----------------|---------:|
        // | 16 bit           |    8 KiB |
        // | 32 bit           |  128 MiB |
        // | 64 bit or bigger |    8 GiB |

        let maximum = if cfg!(target_pointer_width = "16") {
            cap.min(1024)
        } else if cfg!(target_pointer_width = "32") {
            cap.min(2u32.pow(24)) // about 16 million
        } else {
            // Same as Caffeine's limit:
            // `Integer.MAX_VALUE >>> 1` with `ceilingPowerOfTwo()` applied.
            cap.min(2u32.pow(30)) // about 1 billion
        };
        let table_size = if maximum == 0 {
            1
        } else {
            maximum.next_power_of_two()
        };

        if self.table.len() as u32 >= table_size {
            return;
        }

        self.table = vec![0; table_size as usize].into_boxed_slice();
        self.table_mask = table_size - 1;
        self.sample_size = if cap == 0 {
            10
        } else {
            maximum.saturating_mul(10).min(i32::MAX as u32)
        };
    }
    /// Takes the hash value of an element, and returns the estimated number of
    /// occurrences of the element, up to the maximum (15).
    pub(crate) fn frequency(&self, hash: u64) -> u8 {
        if self.table.is_empty() {
            return 0;
        }

        let start = ((hash & 3) << 2) as u8;
        let mut frequency = u8::MAX;
        for i in 0..4 {
            let index = self.index_of(hash, i);
            let shift = (start + i) << 2;
            let count = ((self.table[index] >> shift) & 0xF) as u8;
            frequency = frequency.min(count);
        }
        frequency
    }
    /// Takes the hash value of an element and increments the popularity of the
    /// element if it does not exceed the maximum (15). The popularity of all
    /// elements will be periodically down-sampled when the observed events
    /// exceed a threshold. This process provides frequency aging to allow
    /// expired long-term entries to fade away.
    pub(crate) fn increment(&mut self, hash: u64) {
        if self.table.is_empty() {
            return;
        }

        let start = ((hash & 3) << 2) as u8;
        let mut added = false;
        for i in 0..4 {
            let index = self.index_of(hash, i);
            added |= self.increment_at(index, start + i);
        }

        if added {
            self.size += 1;
            if self.size >= self.sample_size {
                self.reset();
            }
        }
    }
154+155+ /// Takes a table index (each entry has 16 counters) and counter index, and
156+ /// increments the counter by 1 if it is not already at the maximum value
157+ /// (15). Returns `true` if incremented.
158+ fn increment_at(&mut self, table_index: usize, counter_index: u8) -> bool {
159+ let offset = (counter_index as usize) << 2;
160+ let mask = 0xF_u64 << offset;
161+ if self.table[table_index] & mask != mask {
162+ self.table[table_index] += 1u64 << offset;
163+ true
164+ } else {
165+ false
166+ }
167+ }
168+169+ /// Reduces every counter by half of its original value.
170+ fn reset(&mut self) {
171+ let mut count = 0u32;
172+ for entry in self.table.iter_mut() {
173+ // Count number of odd numbers.
174+ count += (*entry & ONE_MASK).count_ones();
175+ *entry = (*entry >> 1) & RESET_MASK;
176+ }
177+ self.size = (self.size >> 1) - (count >> 2);
178+ }
179+180+ /// Returns the table index for the counter at the specified depth.
181+ fn index_of(&self, hash: u64, depth: u8) -> usize {
182+ let i = depth as usize;
183+ let mut hash = hash.wrapping_add(SEED[i]).wrapping_mul(SEED[i]);
184+ hash = hash.wrapping_add(hash >> 32);
185+ (hash & (self.table_mask as u64)) as usize
186+ }
187+}
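The counter layout used by `increment_at` and `frequency` above can be illustrated in isolation. The following is a hypothetical standalone sketch (not part of the crate's API): each `u64` table entry packs sixteen 4-bit counters, read with a shift-and-mask and incremented with saturation at 15.

```rust
/// Reads the 4-bit counter at `idx` (0..16) from the packed word.
fn read_counter(word: u64, idx: u8) -> u8 {
    ((word >> ((idx as u64) << 2)) & 0xF) as u8
}

/// Increments the 4-bit counter at `idx`, saturating at 15.
/// Returns the new word and whether an increment happened.
fn increment_counter(word: u64, idx: u8) -> (u64, bool) {
    let offset = (idx as u64) << 2;
    let mask = 0xF_u64 << offset;
    if word & mask != mask {
        (word + (1u64 << offset), true)
    } else {
        (word, false)
    }
}

fn main() {
    let mut word = 0u64;
    for _ in 0..20 {
        word = increment_counter(word, 3).0;
    }
    // Counter 3 saturates at 15; its neighbors stay untouched.
    assert_eq!(read_counter(word, 3), 15);
    assert_eq!(read_counter(word, 2), 0);
    assert_eq!(read_counter(word, 4), 0);
}
```

Because the mask comparison checks all four bits of the target counter, the saturated value 15 is never incremented past its nibble into the neighboring counter.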
188+189+// Methods only available for testing.
190+#[cfg(test)]
191+impl FrequencySketch {
192+ pub(crate) fn table_len(&self) -> usize {
193+ self.table.len()
194+ }
195+}
196+197+// Some test cases were ported from Caffeine at:
198+// https://github.com/ben-manes/caffeine/blob/master/caffeine/src/test/java/com/github/benmanes/caffeine/cache/FrequencySketchTest.java
199+//
200+// To see the debug prints, run the tests with `cargo test -- --nocapture`
201+#[cfg(test)]
202+mod tests {
203+ use super::FrequencySketch;
204+ use once_cell::sync::Lazy;
205+ use std::hash::{BuildHasher, Hash};
206+207+ static ITEM: Lazy<u32> = Lazy::new(|| {
208+ let mut buf = [0; 4];
209+ getrandom::getrandom(&mut buf).unwrap();
210+ u32::from_ne_bytes(buf)
211+ });
212+213+ // This test was ported from Caffeine.
214+ #[test]
215+ fn increment_once() {
216+ let mut sketch = FrequencySketch::default();
217+ sketch.ensure_capacity(512);
218+ let hasher = hasher();
219+ let item_hash = hasher(*ITEM);
220+ sketch.increment(item_hash);
221+ assert_eq!(sketch.frequency(item_hash), 1);
222+ }
223+224+ // This test was ported from Caffeine.
225+ #[test]
226+ fn increment_max() {
227+ let mut sketch = FrequencySketch::default();
228+ sketch.ensure_capacity(512);
229+ let hasher = hasher();
230+ let item_hash = hasher(*ITEM);
231+ for _ in 0..20 {
232+ sketch.increment(item_hash);
233+ }
234+ assert_eq!(sketch.frequency(item_hash), 15);
235+ }
236+237+ // This test was ported from Caffeine.
238+ #[test]
239+ fn increment_distinct() {
240+ let mut sketch = FrequencySketch::default();
241+ sketch.ensure_capacity(512);
242+ let hasher = hasher();
243+ sketch.increment(hasher(*ITEM));
244+ sketch.increment(hasher(ITEM.wrapping_add(1)));
245+ assert_eq!(sketch.frequency(hasher(*ITEM)), 1);
246+ assert_eq!(sketch.frequency(hasher(ITEM.wrapping_add(1))), 1);
247+ assert_eq!(sketch.frequency(hasher(ITEM.wrapping_add(2))), 0);
248+ }
249+250+ // This test was ported from Caffeine.
251+ #[test]
252+ fn index_of_around_zero() {
253+ let mut sketch = FrequencySketch::default();
254+ sketch.ensure_capacity(512);
255+ let mut indexes = std::collections::HashSet::new();
256+ let hashes = [u64::MAX, 0, 1];
257+ for hash in hashes.iter() {
258+ for depth in 0..4 {
259+ indexes.insert(sketch.index_of(*hash, depth));
260+ }
261+ }
262+ assert_eq!(indexes.len(), 4 * hashes.len())
263+ }
264+265+ // This test was ported from Caffeine.
266+ #[test]
267+ fn reset() {
268+ let mut reset = false;
269+ let mut sketch = FrequencySketch::default();
270+ sketch.ensure_capacity(64);
271+ let hasher = hasher();
272+273+ for i in 1..(20 * sketch.table.len() as u32) {
274+ sketch.increment(hasher(i));
275+ if sketch.size != i {
276+ reset = true;
277+ break;
278+ }
279+ }
280+281+ assert!(reset);
282+ assert!(sketch.size <= sketch.sample_size / 2);
283+ }
284+285+ // This test was ported from Caffeine.
286+ #[test]
287+ fn heavy_hitters() {
288+ let mut sketch = FrequencySketch::default();
289+ sketch.ensure_capacity(65_536);
290+ let hasher = hasher();
291+292+ for i in 100..100_000 {
293+ sketch.increment(hasher(i));
294+ }
295+296+ for i in (0..10).step_by(2) {
297+ for _ in 0..i {
298+ sketch.increment(hasher(i));
299+ }
300+ }
301+302+ // A perfect popularity count yields an array [0, 0, 2, 0, 4, 0, 6, 0, 8, 0]
303+ let popularity = (0..10)
304+ .map(|i| sketch.frequency(hasher(i)))
305+ .collect::<Vec<_>>();
306+307+ for (i, freq) in popularity.iter().enumerate() {
308+ match i {
309+ 2 => assert!(freq <= &popularity[4]),
310+ 4 => assert!(freq <= &popularity[6]),
311+ 6 => assert!(freq <= &popularity[8]),
312+ 8 => (),
313+ _ => assert!(freq <= &popularity[2]),
314+ }
315+ }
316+ }
317+318+ fn hasher<K: Hash>() -> impl Fn(K) -> u64 {
319+ let build_hasher = std::collections::hash_map::RandomState::default();
320+ move |key| build_hasher.hash_one(&key)
321+ }
322+}
323+324+// Verify that some properties hold, such as that no panic occurs on any possible input.
325+#[cfg(kani)]
326+mod kani {
327+ use super::FrequencySketch;
328+329+ const CAPACITIES: &[u32] = &[
330+ 0,
331+ 1,
332+ 1024,
333+ 1025,
334+ 2u32.pow(24),
335+ 2u32.pow(24) + 1,
336+ 2u32.pow(30),
337+ 2u32.pow(30) + 1,
338+ u32::MAX,
339+ ];
340+341+ #[kani::proof]
342+ fn verify_ensure_capacity() {
343+ // Check for arbitrary capacities.
344+ let capacity = kani::any();
345+ let mut sketch = FrequencySketch::default();
346+ sketch.ensure_capacity(capacity);
347+ }
348+349+ #[kani::proof]
350+ fn verify_frequency() {
351+ // Check for some selected capacities.
352+ for capacity in CAPACITIES {
353+ let mut sketch = FrequencySketch::default();
354+ sketch.ensure_capacity(*capacity);
355+356+ // Check for arbitrary hashes.
357+ let hash = kani::any();
358+ let frequency = sketch.frequency(hash);
359+ assert!(frequency <= 15);
360+ }
361+ }
362+363+ #[kani::proof]
364+ fn verify_increment() {
365+ // Only check small capacities. Because the Kani Rust Verifier is a model
366+ // checking tool, it would take much longer (exponential time) to check
367+ // larger capacities here.
368+ for capacity in &[0, 1, 128] {
369+ let mut sketch = FrequencySketch::default();
370+ sketch.ensure_capacity(*capacity);
371+372+ // Check for arbitrary hashes.
373+ let hash = kani::any();
374+ sketch.increment(hash);
375+ }
376+ }
377+378+ #[kani::proof]
379+ fn verify_index_of() {
380+ // Check for arbitrary capacities.
381+ let capacity = kani::any();
382+ let mut sketch = FrequencySketch::default();
383+ sketch.ensure_capacity(capacity);
384+385+ // Check for arbitrary hashes.
386+ let hash = kani::any();
387+ for i in 0..4 {
388+ let index = sketch.index_of(hash, i);
389+ assert!(index < sketch.table.len());
390+ }
391+ }
392+}
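The halving step in `reset` can also be shown in isolation. The sketch below is a hypothetical standalone example, assuming the crate's `ONE_MASK`/`RESET_MASK` constants hold the usual Caffeine values (`0x1111…`, `0x7777…`): every counter is shifted right by one, and odd counters are tallied so the sketch's `size` estimate can be corrected for the truncated halves.

```rust
// Assumed values for the masks referenced (but not shown) in the file above.
const ONE_MASK: u64 = 0x1111_1111_1111_1111; // low bit of each 4-bit counter
const RESET_MASK: u64 = 0x7777_7777_7777_7777; // clears bits shifted across counters

/// Halves every 4-bit counter and returns the number of odd counters seen.
fn halve(table: &mut [u64]) -> u32 {
    let mut odd = 0u32;
    for entry in table.iter_mut() {
        odd += (*entry & ONE_MASK).count_ones();
        *entry = (*entry >> 1) & RESET_MASK;
    }
    odd
}

fn main() {
    // One word holding counters c1 = 5 and c0 = 6 (indices from the low end).
    let mut table = [0x56u64];
    let odd = halve(&mut table);
    // 5 is odd and 6 is even, so exactly one odd counter was seen,
    // and both counters are halved: 5 -> 2, 6 -> 3.
    assert_eq!(odd, 1);
    assert_eq!(table[0], 0x23);
}
```

The `RESET_MASK` step matters because a plain right shift would leak the low bit of each counter into its lower neighbor's high bit.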
+32
crates/mini-moka-vendored/src/common/time.rs
···00000000000000000000000000000000
···1+use std::time::Duration;
2+3+pub(crate) mod clock;
4+5+pub(crate) use clock::Clock;
6+7+/// A wrapper type over `clock::Instant` to force checked additions and prevent
8+/// unintentional overflow. The type preserves the `Copy` semantics of the wrapped instant.
9+#[derive(PartialEq, PartialOrd, Clone, Copy)]
10+pub(crate) struct Instant(clock::Instant);
11+12+pub(crate) trait CheckedTimeOps {
13+ fn checked_add(&self, duration: Duration) -> Option<Self>
14+ where
15+ Self: Sized;
16+}
17+18+impl Instant {
19+ pub(crate) fn new(instant: clock::Instant) -> Instant {
20+ Instant(instant)
21+ }
22+23+ pub(crate) fn now() -> Instant {
24+ Instant(clock::Instant::now())
25+ }
26+}
27+28+impl CheckedTimeOps for Instant {
29+ fn checked_add(&self, duration: Duration) -> Option<Instant> {
30+ self.0.checked_add(duration).map(Instant)
31+ }
32+}
···1+#![warn(clippy::all)]
2+#![warn(rust_2018_idioms)]
3+#![deny(rustdoc::broken_intra_doc_links)]
4+#![cfg_attr(docsrs, feature(doc_cfg))]
5+6+//! Mini Moka is a fast, concurrent cache library for Rust. Mini Moka is a light
7+//! edition of [Moka][moka-git].
8+//!
9+//! Mini Moka provides an in-memory concurrent cache implementation on top of hash
10+//! map. It supports high expected concurrency of retrievals and updates.
11+//!
12+//! Mini Moka also provides an in-memory, non-thread-safe cache implementation for
13+//! single-threaded applications.
14+//!
15+//! All cache implementations perform a best-effort bounding of the map using an
16+//! entry replacement algorithm to determine which entries to evict when the capacity
17+//! is exceeded.
18+//!
19+//! [moka-git]: https://github.com/moka-rs/moka
20+//! [caffeine-git]: https://github.com/ben-manes/caffeine
21+//!
22+//! # Features
23+//!
24+//! - A thread-safe, highly concurrent in-memory cache implementation.
25+//! - A cache can be bounded by one of the following:
26+//! - The maximum number of entries.
27+//! - The total weighted size of entries. (Size-aware eviction)
28+//! - Maintains good hit rate by using entry replacement algorithms inspired by
29+//! [Caffeine][caffeine-git]:
30+//! - Admission to a cache is controlled by the Least Frequently Used (LFU) policy.
31+//! - Eviction from a cache is controlled by the Least Recently Used (LRU) policy.
32+//! - Supports expiration policies:
33+//! - Time to live
34+//! - Time to idle
35+//!
36+//! # Examples
37+//!
38+//! See the following document:
39+//!
40+//! - A thread-safe, synchronous cache:
41+//! - [`sync::Cache`][sync-cache-struct]
42+//! - A non-thread-safe, blocking cache for single-threaded applications:
43+//! - [`unsync::Cache`][unsync-cache-struct]
44+//!
45+//! [sync-cache-struct]: ./sync/struct.Cache.html
46+//! [unsync-cache-struct]: ./unsync/struct.Cache.html
47+//!
48+//! # Minimum Supported Rust Versions
49+//!
50+//! This crate's minimum supported Rust versions (MSRV) are the following:
51+//!
52+//! | Feature | MSRV |
53+//! |:-----------------|:--------------------------:|
54+//! | default features | Rust 1.76.0 (Feb 8, 2024) |
55+//!
56+//! If only the default features are enabled, MSRV will be updated conservatively.
57+//! When using other features, MSRV might be updated more frequently, up to the
58+//! latest stable. In both cases, increasing MSRV is _not_ considered a
59+//! semver-breaking change.
60+61+pub(crate) mod common;
62+pub(crate) mod policy;
63+pub mod unsync;
64+65+#[cfg(feature = "sync")]
66+#[cfg_attr(docsrs, doc(cfg(feature = "sync")))]
67+pub mod sync;
68+69+pub use policy::Policy;
70+71+#[cfg(test)]
72+mod tests {
73+ #[cfg(all(trybuild, feature = "sync"))]
74+ #[test]
75+ fn trybuild_sync() {
76+ let t = trybuild::TestCases::new();
77+ t.compile_fail("tests/compile_tests/sync/clone/*.rs");
78+ }
79+}
80+81+#[cfg(all(doctest, feature = "sync"))]
82+mod doctests {
83+ // https://doc.rust-lang.org/rustdoc/write-documentation/documentation-tests.html#include-items-only-when-collecting-doctests
84+ #[doc = include_str!("../README.md")]
85+ struct ReadMeDoctests;
86+}
+38
crates/mini-moka-vendored/src/policy.rs
···00000000000000000000000000000000000000
···1+use std::time::Duration;
2+3+#[derive(Clone, Debug)]
4+/// The policy of a cache.
5+pub struct Policy {
6+ max_capacity: Option<u64>,
7+ time_to_live: Option<Duration>,
8+ time_to_idle: Option<Duration>,
9+}
10+11+impl Policy {
12+ pub(crate) fn new(
13+ max_capacity: Option<u64>,
14+ time_to_live: Option<Duration>,
15+ time_to_idle: Option<Duration>,
16+ ) -> Self {
17+ Self {
18+ max_capacity,
19+ time_to_live,
20+ time_to_idle,
21+ }
22+ }
23+24+ /// Returns the `max_capacity` of the cache.
25+ pub fn max_capacity(&self) -> Option<u64> {
26+ self.max_capacity
27+ }
28+29+ /// Returns the `time_to_live` of the cache.
30+ pub fn time_to_live(&self) -> Option<Duration> {
31+ self.time_to_live
32+ }
33+34+ /// Returns the `time_to_idle` of the cache.
35+ pub fn time_to_idle(&self) -> Option<Duration> {
36+ self.time_to_idle
37+ }
38+}
+21
crates/mini-moka-vendored/src/sync.rs
···000000000000000000000
···1+//! Provides a thread-safe, concurrent cache implementation built upon
2+//! [`dashmap::DashMap`][dashmap].
3+//!
4+//! [dashmap]: https://docs.rs/dashmap/*/dashmap/struct.DashMap.html
5+6+mod base_cache;
7+mod builder;
8+mod cache;
9+mod iter;
10+mod mapref;
11+12+pub use builder::CacheBuilder;
13+pub use cache::Cache;
14+pub use iter::Iter;
15+pub use mapref::EntryRef;
16+17+/// Provides extra methods that will be useful for testing.
18+pub trait ConcurrentCacheExt<K, V> {
19+ /// Performs any pending maintenance operations needed by the cache.
20+ fn sync(&self);
21+}
···1+use super::{base_cache::BaseCache, CacheBuilder, ConcurrentCacheExt, EntryRef, Iter};
2+use crate::{
3+ common::{
4+ concurrent::{
5+ constants::{MAX_SYNC_REPEATS, WRITE_RETRY_INTERVAL_MICROS},
6+ housekeeper::{Housekeeper, InnerSync},
7+ Weigher, WriteOp,
8+ },
9+ time::Instant,
10+ },
11+ Policy,
12+};
13+14+use crossbeam_channel::{Sender, TrySendError};
15+use std::{
16+ borrow::Borrow,
17+ collections::hash_map::RandomState,
18+ fmt,
19+ hash::{BuildHasher, Hash},
20+ sync::Arc,
21+ time::Duration,
22+};
23+24+/// A thread-safe concurrent in-memory cache built upon [`dashmap::DashMap`][dashmap].
25+///
26+/// The `Cache` uses `DashMap` as the central key-value storage. It performs a
27+/// best-effort bounding of the map using an entry replacement algorithm to determine
28+/// which entries to evict when the capacity is exceeded.
29+///
30+/// To use this cache, enable the crate feature called "sync" in your Cargo.toml
31+/// (enabled by default). Please note that the API of the `sync` cache may _change
32+/// very often_ in the next few releases, as this is still an experimental component.
33+///
34+/// # Examples
35+///
36+/// Cache entries are manually added using [`insert`](#method.insert) method, and are
37+/// stored in the cache until either evicted or manually invalidated.
38+///
39+/// Here's an example of reading and updating a cache by using multiple threads:
40+///
41+/// ```rust
42+/// use mini_moka::sync::Cache;
43+///
44+/// use std::thread;
45+///
46+/// fn value(n: usize) -> String {
47+/// format!("value {}", n)
48+/// }
49+///
50+/// const NUM_THREADS: usize = 16;
51+/// const NUM_KEYS_PER_THREAD: usize = 64;
52+///
53+/// // Create a cache that can store up to 10,000 entries.
54+/// let cache = Cache::new(10_000);
55+///
56+/// // Spawn threads and read and update the cache simultaneously.
57+/// let threads: Vec<_> = (0..NUM_THREADS)
58+/// .map(|i| {
59+/// // To share the same cache across the threads, clone it.
60+/// // This is a cheap operation.
61+/// let my_cache = cache.clone();
62+/// let start = i * NUM_KEYS_PER_THREAD;
63+/// let end = (i + 1) * NUM_KEYS_PER_THREAD;
64+///
65+/// thread::spawn(move || {
66+/// // Insert 64 entries. (NUM_KEYS_PER_THREAD = 64)
67+/// for key in start..end {
68+/// my_cache.insert(key, value(key));
69+/// // get() returns Option<String>, a clone of the stored value.
70+/// assert_eq!(my_cache.get(&key), Some(value(key)));
71+/// }
72+///
73+/// // Invalidate every 4th element of the inserted entries.
74+/// for key in (start..end).step_by(4) {
75+/// my_cache.invalidate(&key);
76+/// }
77+/// })
78+/// })
79+/// .collect();
80+///
81+/// // Wait for all threads to complete.
82+/// threads.into_iter().for_each(|t| t.join().expect("Failed"));
83+///
84+/// // Verify the result.
85+/// for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
86+/// if key % 4 == 0 {
87+/// assert_eq!(cache.get(&key), None);
88+/// } else {
89+/// assert_eq!(cache.get(&key), Some(value(key)));
90+/// }
91+/// }
92+/// ```
93+///
94+/// # Avoiding cloning the value at `get`
95+///
96+/// The return type of `get` method is `Option<V>` instead of `Option<&V>`. Every
97+/// time `get` is called for an existing key, it creates a clone of the stored value
98+/// `V` and returns it. This is because the `Cache` allows concurrent updates from
99+/// threads so a value stored in the cache can be dropped or replaced at any time by
100+/// any other thread. `get` cannot return a reference `&V` as it is impossible to
101+/// guarantee the value outlives the reference.
102+///
103+/// If you want to store values that will be expensive to clone, wrap them in
104+/// `std::sync::Arc` before storing in a cache. [`Arc`][rustdoc-std-arc] is a
105+/// thread-safe reference-counted pointer and its `clone()` method is cheap.
106+///
107+/// [rustdoc-std-arc]: https://doc.rust-lang.org/stable/std/sync/struct.Arc.html
108+///
109+/// # Size-based Eviction
110+///
111+/// ```rust
112+/// use std::convert::TryInto;
113+/// use mini_moka::sync::Cache;
114+///
115+/// // Evict based on the number of entries in the cache.
116+/// let cache = Cache::builder()
117+/// // Up to 10,000 entries.
118+/// .max_capacity(10_000)
119+/// // Create the cache.
120+/// .build();
121+/// cache.insert(1, "one".to_string());
122+///
123+/// // Evict based on the byte length of strings in the cache.
124+/// let cache = Cache::builder()
125+/// // A weigher closure takes &K and &V and returns a u32
126+/// // representing the relative size of the entry.
127+/// .weigher(|_key, value: &String| -> u32 {
128+/// value.len().try_into().unwrap_or(u32::MAX)
129+/// })
130+/// // This cache will hold up to 32MiB of values.
131+/// .max_capacity(32 * 1024 * 1024)
132+/// .build();
133+/// cache.insert(2, "two".to_string());
134+/// ```
135+///
136+/// If your cache should not grow beyond a certain size, use the `max_capacity`
137+/// method of the [`CacheBuilder`][builder-struct] to set the upper bound. The cache
138+/// will try to evict entries that have not been used recently or very often.
139+///
140+/// At the cache creation time, a weigher closure can be set by the `weigher` method
141+/// of the `CacheBuilder`. A weigher closure takes `&K` and `&V` as the arguments and
142+/// returns a `u32` representing the relative size of the entry:
143+///
144+/// - If the `weigher` is _not_ set, the cache will treat each entry as having
145+/// the same size of `1`. This means the cache will be bounded by the number of entries.
146+/// - If the `weigher` is set, the cache will call the weigher to calculate the
147+/// weighted size (relative size) of an entry. This means the cache will be bounded
148+/// by the total weighted size of entries.
149+///
150+/// Note that weighted sizes are not used when making eviction selections.
151+///
152+/// [builder-struct]: ./struct.CacheBuilder.html
153+///
154+/// # Time-based Expirations
155+///
156+/// `Cache` supports the following expiration policies:
157+///
158+/// - **Time to live**: A cached entry will expire after the specified duration
159+/// has elapsed since `insert`.
160+/// - **Time to idle**: A cached entry will expire after the specified duration
161+/// has elapsed since the last `get` or `insert`.
162+///
163+/// ```rust
164+/// use mini_moka::sync::Cache;
165+/// use std::time::Duration;
166+///
167+/// let cache = Cache::builder()
168+/// // Time to live (TTL): 30 minutes
169+/// .time_to_live(Duration::from_secs(30 * 60))
170+/// // Time to idle (TTI): 5 minutes
171+/// .time_to_idle(Duration::from_secs(5 * 60))
172+/// // Create the cache.
173+/// .build();
174+///
175+/// // This entry will expire after 5 minutes (TTI) if there is no get().
176+/// cache.insert(0, "zero");
177+///
178+/// // This get() will extend the entry life for another 5 minutes.
179+/// cache.get(&0);
180+///
181+/// // Even though we keep calling get(), the entry will expire
182+/// // after 30 minutes (TTL) from the insert().
183+/// ```
184+///
185+/// # Thread Safety
186+///
187+/// All methods provided by the `Cache` are considered thread-safe, and can be safely
188+/// accessed by multiple concurrent threads.
189+///
190+/// - `Cache<K, V, S>` requires trait bounds `Send`, `Sync` and `'static` for `K`
191+/// (key), `V` (value) and `S` (hasher state).
192+/// - `Cache<K, V, S>` will implement `Send` and `Sync`.
193+///
194+/// # Sharing a cache across threads
195+///
196+/// To share a cache across threads, do one of the following:
197+///
198+/// - Create a clone of the cache by calling its `clone` method and pass it to another
199+/// thread.
200+/// - Wrap the cache in a `sync::OnceCell` or `sync::Lazy` from the
201+/// [once_cell][once-cell-crate] crate, and assign it to a `static` variable.
202+///
203+/// Cloning is a cheap operation for `Cache` as it only creates thread-safe
204+/// reference-counted pointers to the internal data structures.
205+///
206+/// [once-cell-crate]: https://crates.io/crates/once_cell
207+///
208+/// # Hashing Algorithm
209+///
210+/// By default, `Cache` uses a hashing algorithm selected to provide resistance
211+/// against HashDoS attacks. It will be the same one used by
212+/// `std::collections::HashMap`, which is currently SipHash 1-3.
213+///
214+/// While SipHash's performance is very competitive for medium sized keys, other
215+/// hashing algorithms will outperform it for small keys such as integers as well as
216+/// large keys such as long strings. However those algorithms will typically not
217+/// protect against attacks such as HashDoS.
218+///
219+/// The hashing algorithm can be replaced on a per-`Cache` basis using the
220+/// [`build_with_hasher`][build-with-hasher-method] method of the
221+/// `CacheBuilder`. Many alternative algorithms are available on crates.io, such
222+/// as the [aHash][ahash-crate] crate.
223+///
224+/// [build-with-hasher-method]: ./struct.CacheBuilder.html#method.build_with_hasher
225+/// [ahash-crate]: https://crates.io/crates/ahash
226+///
227+pub struct Cache<K, V, S = RandomState> {
228+ base: BaseCache<K, V, S>,
229+}
230+231+// TODO: https://github.com/moka-rs/moka/issues/54
232+#[allow(clippy::non_send_fields_in_send_ty)]
233+unsafe impl<K, V, S> Send for Cache<K, V, S>
234+where
235+ K: Send + Sync,
236+ V: Send + Sync,
237+ S: Send,
238+{
239+}
240+241+unsafe impl<K, V, S> Sync for Cache<K, V, S>
242+where
243+ K: Send + Sync,
244+ V: Send + Sync,
245+ S: Sync,
246+{
247+}
248+249+// NOTE: We cannot do `#[derive(Clone)]` because it will add `Clone` bound to `K`.
250+impl<K, V, S> Clone for Cache<K, V, S> {
251+ /// Makes a clone of this shared cache.
252+ ///
253+ /// This operation is cheap as it only creates thread-safe reference counted
254+ /// pointers to the shared internal data structures.
255+ fn clone(&self) -> Self {
256+ Self {
257+ base: self.base.clone(),
258+ }
259+ }
260+}
261+262+impl<K, V, S> fmt::Debug for Cache<K, V, S>
263+where
264+ K: Eq + Hash + fmt::Debug,
265+ V: fmt::Debug,
266+ S: BuildHasher + Clone,
267+{
268+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
269+ let mut d_map = f.debug_map();
270+271+ for r in self.iter() {
272+ let (k, v) = r.pair();
273+ d_map.entry(k, v);
274+ }
275+276+ d_map.finish()
277+ }
278+}
279+280+impl<K, V> Cache<K, V, RandomState>
281+where
282+ K: Hash + Eq + Send + Sync + 'static,
283+ V: Clone + Send + Sync + 'static,
284+{
285+ /// Constructs a new `Cache<K, V>` that will store up to the `max_capacity`.
286+ ///
287+ /// To adjust various configuration knobs such as `initial_capacity` or
288+ /// `time_to_live`, use the [`CacheBuilder`][builder-struct].
289+ ///
290+ /// [builder-struct]: ./struct.CacheBuilder.html
291+ pub fn new(max_capacity: u64) -> Self {
292+ let build_hasher = RandomState::default();
293+ Self::with_everything(Some(max_capacity), None, build_hasher, None, None, None)
294+ }
295+296+ /// Returns a [`CacheBuilder`][builder-struct], which can build a `Cache` with
297+ /// various configuration knobs.
298+ ///
299+ /// [builder-struct]: ./struct.CacheBuilder.html
300+ pub fn builder() -> CacheBuilder<K, V, Cache<K, V, RandomState>> {
301+ CacheBuilder::default()
302+ }
303+}
304+305+impl<K, V, S> Cache<K, V, S> {
306+ /// Returns a read-only cache policy of this cache.
307+ ///
308+ /// At this time, cache policy cannot be modified after cache creation.
309+ /// A future version may support modifying it.
310+ pub fn policy(&self) -> Policy {
311+ self.base.policy()
312+ }
313+314+ /// Returns an approximate number of entries in this cache.
315+ ///
316+ /// The value returned is _an estimate_; the actual count may differ if there are
317+ /// concurrent insertions or removals, or if some entries are pending removal due
318+ /// to expiration. This inaccuracy can be mitigated by performing a `sync()`
319+ /// first.
320+ ///
321+ /// # Example
322+ ///
323+ /// ```rust
324+ /// use mini_moka::sync::Cache;
325+ ///
326+ /// let cache = Cache::new(10);
327+ /// cache.insert('n', "Netherland Dwarf");
328+ /// cache.insert('l', "Lop Eared");
329+ /// cache.insert('d', "Dutch");
330+ ///
331+ /// // Ensure an entry exists.
332+ /// assert!(cache.contains_key(&'n'));
333+ ///
334+ /// // However, the following may print stale zeros instead of threes.
335+ /// println!("{}", cache.entry_count()); // -> 0
336+ /// println!("{}", cache.weighted_size()); // -> 0
337+ ///
338+ /// // To mitigate the inaccuracy, bring `ConcurrentCacheExt` trait to
339+ /// // the scope so we can use `sync` method.
340+ /// use mini_moka::sync::ConcurrentCacheExt;
341+ /// // Call `sync` to run pending internal tasks.
342+ /// cache.sync();
343+ ///
344+ /// // The following will print the actual numbers.
345+ /// println!("{}", cache.entry_count()); // -> 3
346+ /// println!("{}", cache.weighted_size()); // -> 3
347+ /// ```
348+ ///
349+ pub fn entry_count(&self) -> u64 {
350+ self.base.entry_count()
351+ }
352+353+ /// Returns an approximate total weighted size of entries in this cache.
354+ ///
355+ /// The value returned is _an estimate_; the actual size may differ if there are
356+ /// concurrent insertions or removals, or if some entries are pending removal due
357+ /// to expiration. This inaccuracy can be mitigated by performing a `sync()`
358+ /// first. See [`entry_count`](#method.entry_count) for a sample code.
359+ pub fn weighted_size(&self) -> u64 {
360+ self.base.weighted_size()
361+ }
362+}
363+364+impl<K, V, S> Cache<K, V, S>
365+where
366+ K: Hash + Eq + Send + Sync + 'static,
367+ V: Clone + Send + Sync + 'static,
368+ S: BuildHasher + Clone + Send + Sync + 'static,
369+{
370+ pub(crate) fn with_everything(
371+ max_capacity: Option<u64>,
372+ initial_capacity: Option<usize>,
373+ build_hasher: S,
374+ weigher: Option<Weigher<K, V>>,
375+ time_to_live: Option<Duration>,
376+ time_to_idle: Option<Duration>,
377+ ) -> Self {
378+ Self {
379+ base: BaseCache::new(
380+ max_capacity,
381+ initial_capacity,
382+ build_hasher,
383+ weigher,
384+ time_to_live,
385+ time_to_idle,
386+ ),
387+ }
388+ }
389+390+ /// Returns `true` if the cache contains a value for the key.
391+ ///
392+ /// Unlike the `get` method, this method is not considered a cache read operation,
393+ /// so it does not update the historic popularity estimator or reset the idle
394+ /// timer for the key.
395+ ///
396+ /// The key may be any borrowed form of the cache's key type, but `Hash` and `Eq`
397+ /// on the borrowed form _must_ match those for the key type.
398+ pub fn contains_key<Q>(&self, key: &Q) -> bool
399+ where
400+ Arc<K>: Borrow<Q>,
401+ Q: Hash + Eq + ?Sized,
402+ {
403+ self.base.contains_key(key)
404+ }
405+406+ /// Returns a _clone_ of the value corresponding to the key.
407+ ///
408+ /// If you want to store values that will be expensive to clone, wrap them in
409+ /// `std::sync::Arc` before storing in a cache. [`Arc`][rustdoc-std-arc] is a
410+ /// thread-safe reference-counted pointer and its `clone()` method is cheap.
411+ ///
412+ /// The key may be any borrowed form of the cache's key type, but `Hash` and `Eq`
413+ /// on the borrowed form _must_ match those for the key type.
414+ ///
415+ /// [rustdoc-std-arc]: https://doc.rust-lang.org/stable/std/sync/struct.Arc.html
416+ pub fn get<Q>(&self, key: &Q) -> Option<V>
417+ where
418+ Arc<K>: Borrow<Q>,
419+ Q: Hash + Eq + ?Sized,
420+ {
421+ self.base.get_with_hash(key, self.base.hash(key))
422+ }
423+424+ /// Deprecated, replaced with [`get`](#method.get)
425+ #[doc(hidden)]
426+ #[deprecated(since = "0.8.0", note = "Replaced with `get`")]
427+ pub fn get_if_present<Q>(&self, key: &Q) -> Option<V>
428+ where
429+ Arc<K>: Borrow<Q>,
430+ Q: Hash + Eq + ?Sized,
431+ {
432+ self.get(key)
433+ }
434+435+ /// Inserts a key-value pair into the cache.
436+ ///
437+ /// If the cache has this key present, the value is updated.
438+ pub fn insert(&self, key: K, value: V) {
439+ let hash = self.base.hash(&key);
440+ let key = Arc::new(key);
441+ self.insert_with_hash(key, hash, value)
442+ }
443+444+ pub(crate) fn insert_with_hash(&self, key: Arc<K>, hash: u64, value: V) {
445+ let (op, now) = self.base.do_insert_with_hash(key, hash, value);
446+ let hk = self.base.housekeeper.as_ref();
447+ Self::schedule_write_op(
448+ self.base.inner.as_ref(),
449+ &self.base.write_op_ch,
450+ op,
451+ now,
452+ hk,
453+ )
454+ .expect("Failed to insert");
455+ }
456+457+ /// Discards any cached value for the key.
458+ ///
459+ /// The key may be any borrowed form of the cache's key type, but `Hash` and `Eq`
460+ /// on the borrowed form _must_ match those for the key type.
461+ pub fn invalidate<Q>(&self, key: &Q)
462+ where
463+ Arc<K>: Borrow<Q>,
464+ Q: Hash + Eq + ?Sized,
465+ {
466+ if let Some(kv) = self.base.remove_entry(key) {
467+ let op = WriteOp::Remove(kv);
468+ let now = self.base.current_time_from_expiration_clock();
469+ let hk = self.base.housekeeper.as_ref();
470+ Self::schedule_write_op(
471+ self.base.inner.as_ref(),
472+ &self.base.write_op_ch,
473+ op,
474+ now,
475+ hk,
476+ )
477+ .expect("Failed to remove");
478+ }
479+ }
480+481+ /// Discards all cached values.
482+ ///
483+ /// This method returns immediately and a background thread will evict all the
484+ /// cached values inserted before the time when this method was called. It is
485+ /// guaranteed that the `get` method must not return these invalidated values
486+ /// even if they have not been evicted.
487+ ///
488+ /// Like the `invalidate` method, this method does not clear the historic
489+/// popularity estimator of keys, so the history of clients' attempts to
490+/// retrieve an item is retained.
491+ pub fn invalidate_all(&self) {
492+ self.base.invalidate_all();
493+ }
494+}
495+496+// Clippy beta 0.1.83 (f41c7ed9889 2024-10-31) warns about unused lifetimes on 'a.
497+// This seems to be a false positive. The lifetimes are used in the trait bounds.
498+// https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_lifetimes
499+#[allow(clippy::extra_unused_lifetimes)]
500+impl<'a, K, V, S> Cache<K, V, S>
501+where
502+ K: 'a + Eq + Hash,
503+ V: 'a,
504+ S: BuildHasher + Clone,
505+{
506+ /// Creates an iterator visiting all key-value pairs in arbitrary order. The
507+ /// iterator element type is [`EntryRef<'a, K, V, S>`][moka-entry-ref].
508+ ///
509+ /// Unlike the `get` method, visiting entries via an iterator does not update the
510+ /// historic popularity estimator or reset idle timers for keys.
511+ ///
512+ /// # Locking behavior
513+ ///
514+ /// This iterator relies on the iterator of [`dashmap::DashMap`][dashmap-iter],
515+ /// which employs read-write locks. It may deadlock if the thread holding an
516+ /// iterator attempts to update the cache.
517+ ///
518+ /// [moka-entry-ref]: ./struct.EntryRef.html
519+ /// [dashmap-iter]: <https://docs.rs/dashmap/*/dashmap/struct.DashMap.html#method.iter>
520+ ///
521+ /// # Examples
522+ ///
523+ /// ```rust
524+ /// use mini_moka::sync::Cache;
525+ ///
526+ /// let cache = Cache::new(100);
527+ /// cache.insert("Julia", 14);
528+ ///
529+ /// let mut iter = cache.iter();
530+ /// let entry_ref = iter.next().unwrap();
531+ /// assert_eq!(entry_ref.pair(), (&"Julia", &14));
532+ /// assert_eq!(entry_ref.key(), &"Julia");
533+ /// assert_eq!(entry_ref.value(), &14);
534+ /// assert_eq!(*entry_ref, 14);
535+ ///
536+ /// assert!(iter.next().is_none());
537+ /// ```
538+ ///
539+ pub fn iter(&self) -> Iter<'_, K, V, S> {
540+ self.base.iter()
541+ }
542+}
543+544+impl<K, V, S> ConcurrentCacheExt<K, V> for Cache<K, V, S>
545+where
546+ K: Hash + Eq + Send + Sync + 'static,
547+ V: Send + Sync + 'static,
548+ S: BuildHasher + Clone + Send + Sync + 'static,
549+{
550+ fn sync(&self) {
551+ self.base.inner.sync(MAX_SYNC_REPEATS);
552+ }
553+}
554+555+impl<'a, K, V, S> IntoIterator for &'a Cache<K, V, S>
556+where
557+ K: 'a + Eq + Hash,
558+ V: 'a,
559+ S: BuildHasher + Clone,
560+{
561+ type Item = EntryRef<'a, K, V>;
562+563+ type IntoIter = Iter<'a, K, V, S>;
564+565+ fn into_iter(self) -> Self::IntoIter {
566+ self.iter()
567+ }
568+}
569+570+// private methods
571+impl<K, V, S> Cache<K, V, S>
572+where
573+ K: Hash + Eq + Send + Sync + 'static,
574+ V: Clone + Send + Sync + 'static,
575+ S: BuildHasher + Clone + Send + Sync + 'static,
576+{
577+ #[inline]
578+ fn schedule_write_op(
579+ inner: &impl InnerSync,
580+ ch: &Sender<WriteOp<K, V>>,
581+ op: WriteOp<K, V>,
582+ now: Instant,
583+ housekeeper: Option<&Arc<Housekeeper>>,
584+ ) -> Result<(), TrySendError<WriteOp<K, V>>> {
585+ let mut op = op;
586+587+ // NOTES:
588+ // - This will block when the channel is full.
589+ // - We are doing a busy-loop here. We originally called `ch.send(op)?`, but
590+ // it caused a notable performance degradation.
591+ loop {
592+ BaseCache::<K, V, S>::apply_reads_writes_if_needed(inner, ch, now, housekeeper);
593+ match ch.try_send(op) {
594+ Ok(()) => break,
595+ Err(TrySendError::Full(op1)) => {
596+ op = op1;
597+ std::thread::sleep(Duration::from_micros(WRITE_RETRY_INTERVAL_MICROS));
598+ }
599+ Err(e @ TrySendError::Disconnected(_)) => return Err(e),
600+ }
601+ }
602+ Ok(())
603+ }
604+}
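
// The busy-loop above is a general pattern for bounded channels: take the
// rejected item back from `TrySendError::Full` and retry after a short pause.
// A stand-alone sketch with `std::sync::mpsc` (the `send_with_retry` helper
// and the 50 us pause are illustrative, not part of this crate):
//
//     fn send_with_retry<T>(tx: &std::sync::mpsc::SyncSender<T>, mut item: T) -> bool {
//         loop {
//             match tx.try_send(item) {
//                 Ok(()) => return true,
//                 Err(std::sync::mpsc::TrySendError::Full(rejected)) => {
//                     // Take the item back and retry after a short pause.
//                     item = rejected;
//                     std::thread::sleep(std::time::Duration::from_micros(50));
//                 }
//                 // The receiver is gone; give up.
//                 Err(std::sync::mpsc::TrySendError::Disconnected(_)) => return false,
//             }
//         }
//     }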
605+606+// For unit tests.
607+#[cfg(test)]
608+impl<K, V, S> Cache<K, V, S>
609+where
610+ K: Hash + Eq + Send + Sync + 'static,
611+ V: Clone + Send + Sync + 'static,
612+ S: BuildHasher + Clone + Send + Sync + 'static,
613+{
614+ pub(crate) fn is_table_empty(&self) -> bool {
615+ self.entry_count() == 0
616+ }
617+618+ pub(crate) fn reconfigure_for_testing(&mut self) {
619+ self.base.reconfigure_for_testing();
620+ }
621+622+ pub(crate) fn set_expiration_clock(&self, clock: Option<crate::common::time::Clock>) {
623+ self.base.set_expiration_clock(clock);
624+ }
625+}
626+627+// To see the debug prints, run test as `cargo test -- --nocapture`
628+#[cfg(test)]
629+mod tests {
630+ use super::{Cache, ConcurrentCacheExt};
631+ use crate::common::time::Clock;
632+633+ use std::{sync::Arc, time::Duration};
634+635+ #[test]
636+ fn basic_single_thread() {
637+ let mut cache = Cache::new(3);
638+ cache.reconfigure_for_testing();
639+640+ // Make the cache exterior immutable.
641+ let cache = cache;
642+643+ cache.insert("a", "alice");
644+ cache.insert("b", "bob");
645+ assert_eq!(cache.get(&"a"), Some("alice"));
646+ assert!(cache.contains_key(&"a"));
647+ assert!(cache.contains_key(&"b"));
648+ assert_eq!(cache.get(&"b"), Some("bob"));
649+ cache.sync();
650+ // counts: a -> 1, b -> 1
651+652+ cache.insert("c", "cindy");
653+ assert_eq!(cache.get(&"c"), Some("cindy"));
654+ assert!(cache.contains_key(&"c"));
655+ // counts: a -> 1, b -> 1, c -> 1
656+ cache.sync();
657+658+ assert!(cache.contains_key(&"a"));
659+ assert_eq!(cache.get(&"a"), Some("alice"));
660+ assert_eq!(cache.get(&"b"), Some("bob"));
661+ assert!(cache.contains_key(&"b"));
662+ cache.sync();
663+ // counts: a -> 2, b -> 2, c -> 1
664+665+ // "d" should not be admitted because its frequency is too low.
666+ cache.insert("d", "david"); // count: d -> 0
667+ cache.sync();
668+ assert_eq!(cache.get(&"d"), None); // d -> 1
669+ assert!(!cache.contains_key(&"d"));
670+671+ cache.insert("d", "david");
672+ cache.sync();
673+ assert!(!cache.contains_key(&"d"));
674+ assert_eq!(cache.get(&"d"), None); // d -> 2
675+676+ // "d" should be admitted and "c" should be evicted
677+ // because d's frequency is higher than c's.
678+ cache.insert("d", "dennis");
679+ cache.sync();
680+ assert_eq!(cache.get(&"a"), Some("alice"));
681+ assert_eq!(cache.get(&"b"), Some("bob"));
682+ assert_eq!(cache.get(&"c"), None);
683+ assert_eq!(cache.get(&"d"), Some("dennis"));
684+ assert!(cache.contains_key(&"a"));
685+ assert!(cache.contains_key(&"b"));
686+ assert!(!cache.contains_key(&"c"));
687+ assert!(cache.contains_key(&"d"));
688+689+ cache.invalidate(&"b");
690+ assert_eq!(cache.get(&"b"), None);
691+ assert!(!cache.contains_key(&"b"));
692+ }
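
    // The admission decisions above follow one rule: a new key is admitted
    // only when its (approximate) frequency exceeds the eviction victim's.
    // A stand-alone sketch using an exact `HashMap` counter in place of the
    // crate's internal `FrequencySketch`:
    #[test]
    fn admission_rule_sketch() {
        use std::collections::HashMap;

        let mut freq: HashMap<&str, u32> = HashMap::new();
        for key in ["a", "a", "b", "b", "c", "d", "d"] {
            *freq.entry(key).or_insert(0) += 1;
        }

        // "d" (count 2) may evict "c" (count 1), but not "a" or "b" (count 2).
        assert!(freq["d"] > freq["c"]);
        assert!(!(freq["d"] > freq["a"]));
    }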
693+694+ #[test]
695+ fn size_aware_eviction() {
696+ let weigher = |_k: &&str, v: &(&str, u32)| v.1;
697+698+ let alice = ("alice", 10);
699+ let bob = ("bob", 15);
700+ let bill = ("bill", 20);
701+ let cindy = ("cindy", 5);
702+ let david = ("david", 15);
703+ let dennis = ("dennis", 15);
704+705+ let mut cache = Cache::builder().max_capacity(31).weigher(weigher).build();
706+ cache.reconfigure_for_testing();
707+708+ // Make the cache exterior immutable.
709+ let cache = cache;
710+711+ cache.insert("a", alice);
712+ cache.insert("b", bob);
713+ assert_eq!(cache.get(&"a"), Some(alice));
714+ assert!(cache.contains_key(&"a"));
715+ assert!(cache.contains_key(&"b"));
716+ assert_eq!(cache.get(&"b"), Some(bob));
717+ cache.sync();
718+ // order (LRU -> MRU) and counts: a -> 1, b -> 1
719+720+ cache.insert("c", cindy);
721+ assert_eq!(cache.get(&"c"), Some(cindy));
722+ assert!(cache.contains_key(&"c"));
723+ // order and counts: a -> 1, b -> 1, c -> 1
724+ cache.sync();
725+726+ assert!(cache.contains_key(&"a"));
727+ assert_eq!(cache.get(&"a"), Some(alice));
728+ assert_eq!(cache.get(&"b"), Some(bob));
729+ assert!(cache.contains_key(&"b"));
730+ cache.sync();
731+ // order and counts: c -> 1, a -> 2, b -> 2
732+733+ // To enter "d" (weight: 15), it needs to evict "c" (w: 5) and "a" (w: 10).
734+ // "d" must have a higher frequency count than 3, which is the aggregated
735+ // count of "a" and "c".
736+ cache.insert("d", david); // count: d -> 0
737+ cache.sync();
738+ assert_eq!(cache.get(&"d"), None); // d -> 1
739+ assert!(!cache.contains_key(&"d"));
740+741+ cache.insert("d", david);
742+ cache.sync();
743+ assert!(!cache.contains_key(&"d"));
744+ assert_eq!(cache.get(&"d"), None); // d -> 2
745+746+ cache.insert("d", david);
747+ cache.sync();
748+ assert_eq!(cache.get(&"d"), None); // d -> 3
749+ assert!(!cache.contains_key(&"d"));
750+751+ cache.insert("d", david);
752+ cache.sync();
753+ assert!(!cache.contains_key(&"d"));
754+ assert_eq!(cache.get(&"d"), None); // d -> 4
755+756+ // Finally "d" should be admitted by evicting "c" and "a".
757+ cache.insert("d", dennis);
758+ cache.sync();
759+ assert_eq!(cache.get(&"a"), None);
760+ assert_eq!(cache.get(&"b"), Some(bob));
761+ assert_eq!(cache.get(&"c"), None);
762+ assert_eq!(cache.get(&"d"), Some(dennis));
763+ assert!(!cache.contains_key(&"a"));
764+ assert!(cache.contains_key(&"b"));
765+ assert!(!cache.contains_key(&"c"));
766+ assert!(cache.contains_key(&"d"));
767+768+ // Update "b" with "bill" (w: 15 -> 20). This should evict "d" (w: 15).
769+ cache.insert("b", bill);
770+ cache.sync();
771+ assert_eq!(cache.get(&"b"), Some(bill));
772+ assert_eq!(cache.get(&"d"), None);
773+ assert!(cache.contains_key(&"b"));
774+ assert!(!cache.contains_key(&"d"));
775+776+ // Re-add "a" (w: 10) and update "b" with "bob" (w: 20 -> 15).
777+ cache.insert("a", alice);
778+ cache.insert("b", bob);
779+ cache.sync();
780+ assert_eq!(cache.get(&"a"), Some(alice));
781+ assert_eq!(cache.get(&"b"), Some(bob));
782+ assert_eq!(cache.get(&"d"), None);
783+ assert!(cache.contains_key(&"a"));
784+ assert!(cache.contains_key(&"b"));
785+ assert!(!cache.contains_key(&"d"));
786+787+ // Verify the sizes.
788+ assert_eq!(cache.entry_count(), 2);
789+ assert_eq!(cache.weighted_size(), 25);
790+ }
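
    // The weigher maps an entry to its cost, and `weighted_size` is the sum
    // of the costs of the live entries. A stand-alone sketch of that
    // bookkeeping, matching the final state verified above ("a" and "b"):
    #[test]
    fn weigher_bookkeeping_sketch() {
        let weigher = |_k: &&str, v: &(&str, u32)| v.1;

        let entries = [("a", ("alice", 10u32)), ("b", ("bob", 15))];
        let weighted_size: u32 = entries.iter().map(|(k, v)| weigher(k, v)).sum();

        assert_eq!(weighted_size, 25);
    }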
791+792+ #[test]
793+ fn basic_multi_threads() {
794+ let num_threads = 4;
795+ let cache = Cache::new(100);
796+797+ // https://rust-lang.github.io/rust-clippy/master/index.html#needless_collect
798+ #[allow(clippy::needless_collect)]
799+ let handles = (0..num_threads)
800+ .map(|id| {
801+ let cache = cache.clone();
802+ std::thread::spawn(move || {
803+ cache.insert(10, format!("{}-100", id));
804+ cache.get(&10);
805+ cache.insert(20, format!("{}-200", id));
806+ cache.invalidate(&10);
807+ })
808+ })
809+ .collect::<Vec<_>>();
810+811+ handles.into_iter().for_each(|h| h.join().expect("Failed"));
812+813+ assert!(cache.get(&10).is_none());
814+ assert!(cache.get(&20).is_some());
815+ assert!(!cache.contains_key(&10));
816+ assert!(cache.contains_key(&20));
817+ }
818+819+ #[test]
820+ fn invalidate_all() {
821+ let mut cache = Cache::new(100);
822+ cache.reconfigure_for_testing();
823+824+ // Make the cache exterior immutable.
825+ let cache = cache;
826+827+ cache.insert("a", "alice");
828+ cache.insert("b", "bob");
829+ cache.insert("c", "cindy");
830+ assert_eq!(cache.get(&"a"), Some("alice"));
831+ assert_eq!(cache.get(&"b"), Some("bob"));
832+ assert_eq!(cache.get(&"c"), Some("cindy"));
833+ assert!(cache.contains_key(&"a"));
834+ assert!(cache.contains_key(&"b"));
835+ assert!(cache.contains_key(&"c"));
836+837+ // `cache.sync()` is no longer needed here before invalidating. The last
838+ // modified timestamps of the entries were updated when they were inserted.
839+ // https://github.com/moka-rs/moka/issues/155
840+841+ cache.invalidate_all();
842+ cache.sync();
843+844+ cache.insert("d", "david");
845+ cache.sync();
846+847+ assert!(cache.get(&"a").is_none());
848+ assert!(cache.get(&"b").is_none());
849+ assert!(cache.get(&"c").is_none());
850+ assert_eq!(cache.get(&"d"), Some("david"));
851+ assert!(!cache.contains_key(&"a"));
852+ assert!(!cache.contains_key(&"b"));
853+ assert!(!cache.contains_key(&"c"));
854+ assert!(cache.contains_key(&"d"));
855+ }
856+857+ #[test]
858+ fn time_to_live() {
859+ let mut cache = Cache::builder()
860+ .max_capacity(100)
861+ .time_to_live(Duration::from_secs(10))
862+ .build();
863+864+ cache.reconfigure_for_testing();
865+866+ let (clock, mock) = Clock::mock();
867+ cache.set_expiration_clock(Some(clock));
868+869+ // Make the cache exterior immutable.
870+ let cache = cache;
871+872+ cache.insert("a", "alice");
873+ cache.sync();
874+875+ mock.increment(Duration::from_secs(5)); // 5 secs from the start.
876+ cache.sync();
877+878+ assert_eq!(cache.get(&"a"), Some("alice"));
879+ assert!(cache.contains_key(&"a"));
880+881+ mock.increment(Duration::from_secs(5)); // 10 secs.
882+ assert_eq!(cache.get(&"a"), None);
883+ assert!(!cache.contains_key(&"a"));
884+885+ assert_eq!(cache.iter().count(), 0);
886+887+ cache.sync();
888+ assert!(cache.is_table_empty());
889+890+ cache.insert("b", "bob");
891+ cache.sync();
892+893+ assert_eq!(cache.entry_count(), 1);
894+895+ mock.increment(Duration::from_secs(5)); // 15 secs.
896+ cache.sync();
897+898+ assert_eq!(cache.get(&"b"), Some("bob"));
899+ assert!(cache.contains_key(&"b"));
900+ assert_eq!(cache.entry_count(), 1);
901+902+ cache.insert("b", "bill");
903+ cache.sync();
904+905+ mock.increment(Duration::from_secs(5)); // 20 secs
906+ cache.sync();
907+908+ assert_eq!(cache.get(&"b"), Some("bill"));
909+ assert!(cache.contains_key(&"b"));
910+ assert_eq!(cache.entry_count(), 1);
911+912+ mock.increment(Duration::from_secs(5)); // 25 secs
913+ assert_eq!(cache.get(&"a"), None);
914+ assert_eq!(cache.get(&"b"), None);
915+ assert!(!cache.contains_key(&"a"));
916+ assert!(!cache.contains_key(&"b"));
917+918+ assert_eq!(cache.iter().count(), 0);
919+920+ cache.sync();
921+ assert!(cache.is_table_empty());
922+ }
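
    // The expiration rule driven by the mock clock above can be stated on its
    // own: an entry is live while `now - inserted_at < ttl`. A stand-alone
    // sketch with `std::time` (the `is_live` helper is hypothetical, not this
    // crate's API):
    #[test]
    fn ttl_rule_sketch() {
        use std::time::{Duration, Instant};

        fn is_live(inserted_at: Instant, now: Instant, ttl: Duration) -> bool {
            now.duration_since(inserted_at) < ttl
        }

        let t0 = Instant::now();
        let ttl = Duration::from_secs(10);
        // Live at 5 secs, expired at exactly 10 secs, as in the test above.
        assert!(is_live(t0, t0 + Duration::from_secs(5), ttl));
        assert!(!is_live(t0, t0 + Duration::from_secs(10), ttl));
    }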
923+924+ #[test]
925+ fn time_to_idle() {
926+ let mut cache = Cache::builder()
927+ .max_capacity(100)
928+ .time_to_idle(Duration::from_secs(10))
929+ .build();
930+931+ cache.reconfigure_for_testing();
932+933+ let (clock, mock) = Clock::mock();
934+ cache.set_expiration_clock(Some(clock));
935+936+ // Make the cache exterior immutable.
937+ let cache = cache;
938+939+ cache.insert("a", "alice");
940+ cache.sync();
941+942+ mock.increment(Duration::from_secs(5)); // 5 secs from the start.
943+ cache.sync();
944+945+ assert_eq!(cache.get(&"a"), Some("alice"));
946+947+ mock.increment(Duration::from_secs(5)); // 10 secs.
948+ cache.sync();
949+950+ cache.insert("b", "bob");
951+ cache.sync();
952+953+ assert_eq!(cache.entry_count(), 2);
954+955+ mock.increment(Duration::from_secs(2)); // 12 secs.
956+ cache.sync();
957+958+ // contains_key does not reset the idle timer for the key.
959+ assert!(cache.contains_key(&"a"));
960+ assert!(cache.contains_key(&"b"));
961+ cache.sync();
962+963+ assert_eq!(cache.entry_count(), 2);
964+965+ mock.increment(Duration::from_secs(3)); // 15 secs.
966+ assert_eq!(cache.get(&"a"), None);
967+ assert_eq!(cache.get(&"b"), Some("bob"));
968+ assert!(!cache.contains_key(&"a"));
969+ assert!(cache.contains_key(&"b"));
970+971+ assert_eq!(cache.iter().count(), 1);
972+973+ cache.sync();
974+ assert_eq!(cache.entry_count(), 1);
975+976+ mock.increment(Duration::from_secs(10)); // 25 secs
977+ assert_eq!(cache.get(&"a"), None);
978+ assert_eq!(cache.get(&"b"), None);
979+ assert!(!cache.contains_key(&"a"));
980+ assert!(!cache.contains_key(&"b"));
981+982+ assert_eq!(cache.iter().count(), 0);
983+984+ cache.sync();
985+ assert!(cache.is_table_empty());
986+ }
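
    // Time-to-idle differs from time-to-live in its reference point: the last
    // access, which `get` refreshes and `contains_key` does not. A stand-alone
    // sketch (the `is_live` helper is hypothetical, not this crate's API):
    #[test]
    fn tti_rule_sketch() {
        use std::time::{Duration, Instant};

        fn is_live(last_access: Instant, now: Instant, tti: Duration) -> bool {
            now.duration_since(last_access) < tti
        }

        let t0 = Instant::now();
        let tti = Duration::from_secs(10);
        let last_access = t0 + Duration::from_secs(10); // a `get` at 10 secs
        // Still live at 15 secs (5 secs idle), expired by 25 secs.
        assert!(is_live(last_access, t0 + Duration::from_secs(15), tti));
        assert!(!is_live(last_access, t0 + Duration::from_secs(25), tti));
    }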
987+988+ #[test]
989+ fn test_iter() {
990+ const NUM_KEYS: usize = 50;
991+992+ fn make_value(key: usize) -> String {
993+ format!("val: {}", key)
994+ }
995+996+ let cache = Cache::builder()
997+ .max_capacity(100)
998+ .time_to_idle(Duration::from_secs(10))
999+ .build();
1000+1001+ for key in 0..NUM_KEYS {
1002+ cache.insert(key, make_value(key));
1003+ }
1004+1005+ let mut key_set = std::collections::HashSet::new();
1006+1007+ for entry in &cache {
1008+ let (key, value) = entry.pair();
1009+ assert_eq!(value, &make_value(*key));
1010+1011+ key_set.insert(*key);
1012+ }
1013+1014+ // Ensure there are no missing or duplicate keys in the iteration.
1015+ assert_eq!(key_set.len(), NUM_KEYS);
1016+1017+ // DO NOT REMOVE THE COMMENT FROM THIS BLOCK.
1018+ // This block demonstrates how you can write code that deadlocks.
1019+ // {
1020+ // let mut iter = cache.iter();
1021+ // let _ = iter.next();
1022+1023+ // for key in 0..NUM_KEYS {
1024+ // cache.insert(key, make_value(key));
1025+ // println!("{}", key);
1026+ // }
1027+1028+ // let _ = iter.next();
1029+ // }
1030+ }
1031+1032+ /// Runs 16 threads at the same time and ensures no deadlock occurs.
1033+ ///
1034+ /// - Eight of the threads will update key-values in the cache.
1035+ /// - Eight others will iterate the cache.
1036+ ///
1037+ #[test]
1038+ fn test_iter_multi_threads() {
1039+ use std::collections::HashSet;
1040+1041+ const NUM_KEYS: usize = 1024;
1042+ const NUM_THREADS: usize = 16;
1043+1044+ fn make_value(key: usize) -> String {
1045+ format!("val: {}", key)
1046+ }
1047+1048+ let cache = Cache::builder()
1049+ .max_capacity(2048)
1050+ .time_to_idle(Duration::from_secs(10))
1051+ .build();
1052+1053+ // Initialize the cache.
1054+ for key in 0..NUM_KEYS {
1055+ cache.insert(key, make_value(key));
1056+ }
1057+1058+ let rw_lock = Arc::new(std::sync::RwLock::<()>::default());
1059+ let write_lock = rw_lock.write().unwrap();
1060+1061+ // https://rust-lang.github.io/rust-clippy/master/index.html#needless_collect
1062+ #[allow(clippy::needless_collect)]
1063+ let handles = (0..NUM_THREADS)
1064+ .map(|n| {
1065+ let cache = cache.clone();
1066+ let rw_lock = Arc::clone(&rw_lock);
1067+1068+ if n % 2 == 0 {
1069+ // This thread will update the cache.
1070+ std::thread::spawn(move || {
1071+ let read_lock = rw_lock.read().unwrap();
1072+ for key in 0..NUM_KEYS {
1073+ // TODO: Update keys in a random order?
1074+ cache.insert(key, make_value(key));
1075+ }
1076+ std::mem::drop(read_lock);
1077+ })
1078+ } else {
1079+ // This thread will iterate the cache.
1080+ std::thread::spawn(move || {
1081+ let read_lock = rw_lock.read().unwrap();
1082+ let mut key_set = HashSet::new();
1083+ for entry in &cache {
1084+ let (key, value) = entry.pair();
1085+ assert_eq!(value, &make_value(*key));
1086+ key_set.insert(*key);
1087+ }
1088+ // Ensure there are no missing or duplicate keys in the iteration.
1089+ assert_eq!(key_set.len(), NUM_KEYS);
1090+ std::mem::drop(read_lock);
1091+ })
1092+ }
1093+ })
1094+ .collect::<Vec<_>>();
1095+1096+ // Let these threads run by releasing the write lock.
1097+ std::mem::drop(write_lock);
1098+1099+ handles.into_iter().for_each(|h| h.join().expect("Failed"));
1100+1101+ // Ensure there are no missing or duplicate keys in the iteration.
1102+ let key_set = cache.iter().map(|ent| *ent.key()).collect::<HashSet<_>>();
1103+ assert_eq!(key_set.len(), NUM_KEYS);
1104+ }
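
    // The `RwLock` "starting gate" used above, in isolation: worker threads
    // block on `read()` until the main thread drops its write guard, so they
    // all start at roughly the same time. (The names here are illustrative.)
    #[test]
    fn thread_gate_sketch() {
        use std::sync::{Arc, RwLock};

        let gate = Arc::new(RwLock::new(()));
        let held = gate.write().unwrap();

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let gate = Arc::clone(&gate);
                std::thread::spawn(move || {
                    let _guard = gate.read().unwrap(); // waits for the gate
                })
            })
            .collect();

        // Open the gate; all workers proceed and finish.
        drop(held);
        handles.into_iter().for_each(|h| h.join().expect("Failed"));
    }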
1105+1106+ #[test]
1107+ fn test_debug_format() {
1108+ let cache = Cache::new(10);
1109+ cache.insert('a', "alice");
1110+ cache.insert('b', "bob");
1111+ cache.insert('c', "cindy");
1112+1113+ let debug_str = format!("{:?}", cache);
1114+ assert!(debug_str.starts_with('{'));
1115+ assert!(debug_str.contains(r#"'a': "alice""#));
1116+ assert!(debug_str.contains(r#"'b': "bob""#));
1117+ assert!(debug_str.contains(r#"'c': "cindy""#));
1118+ assert!(debug_str.ends_with('}'));
1119+ }
1120+}