# Mini Moka Cache &mdash; Change Log

## Version 0.10.3

### Fixed

- Fixed an occasional panic in the internal `FrequencySketch` in debug builds.
  ([#21][gh-issue-0021])

## Version 0.10.2

### Fixed

- Fixed a memory corruption bug caused by the timing of concurrent `insert`,
  `get` and removal of the same cached entry. ([#15][gh-pull-0015])

## Version 0.10.1

Bumped the minimum supported Rust version (MSRV) to 1.61 (May 19, 2022).
([#5][gh-pull-0005])

### Fixed

- Fixed the caches mutating a deque node through a `NonNull` pointer derived
  from a shared reference. ([#6][gh-pull-0006])

## Version 0.10.0

In this version, we removed some dependencies from Mini Moka to make it more
lightweight.

### Removed

- Removed the background threads from the `sync::Cache` ([#1][gh-pull-0001]):
  - Also removed the following dependencies:
    - `scheduled-thread-pool`
    - `num_cpus`
    - `once_cell` (moved to the dev-dependencies)
- Removed the following dependencies and crate features ([#2][gh-pull-0002]):
  - Removed dependencies:
    - `quanta`
    - `parking_lot`
    - `rustc_version` (from the build-dependencies)
  - Removed crate features:
    - `quanta` (was enabled by default)
    - `atomic64` (was enabled by default)

## Version 0.9.6

### Added

- Moved the relevant source code from the GitHub moka-rs/moka repository (at the
  [v0.9.6][moka-v0.9.6] tag) to this moka-rs/mini-moka repository.
  - Renamed the `moka::dash` module to `mini_moka::sync`.
  - Renamed the `moka::unsync` module to `mini_moka::unsync`.
  - Renamed the crate feature `dash` to `sync` and made it a default feature.

<!-- Links -->

[moka-v0.9.6]: https://github.com/moka-rs/moka/tree/v0.9.6

[gh-issue-0021]: https://github.com/moka-rs/mini-moka/issues/21
[gh-pull-0015]: https://github.com/moka-rs/mini-moka/pull/15
[gh-pull-0006]: https://github.com/moka-rs/mini-moka/pull/6
[gh-pull-0005]: https://github.com/moka-rs/mini-moka/pull/5
[gh-pull-0002]: https://github.com/moka-rs/mini-moka/pull/2
[gh-pull-0001]: https://github.com/moka-rs/mini-moka/pull/1
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2020 - 2024 Tatsuya Kawano

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
crates/mini-moka-vendored/LICENSE-MIT
MIT License

Copyright (c) 2020 - 2024 Tatsuya Kawano

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
crates/mini-moka-vendored/README.md
# Vendored until the upstream PR for wasm compat is merged or I reimplement it.

# Mini Moka

[![GitHub Actions][gh-actions-badge]][gh-actions]
[![crates.io release][release-badge]][crate]
[![docs][docs-badge]][docs]
[![dependency status][deps-rs-badge]][deps-rs]
<!-- [![coverage status][coveralls-badge]][coveralls] -->
[![license][license-badge]](#license)
<!-- [](https://app.fossa.com/projects/git%2Bgithub.com%2Fmoka-rs%2Fmini-moka?ref=badge_shield) -->

Mini Moka is a fast, concurrent cache library for Rust. Mini Moka is a light
edition of [Moka][moka-git].

Mini Moka provides cache implementations on top of hash maps. They support full
concurrency of retrievals and a high expected concurrency for updates. Mini Moka
also provides a non-thread-safe cache implementation for single-threaded
applications.

All caches perform best-effort bounding of a hash map, using an entry replacement
algorithm to determine which entries to evict when the capacity is exceeded.

[gh-actions-badge]: https://github.com/moka-rs/mini-moka/workflows/CI/badge.svg
[release-badge]: https://img.shields.io/crates/v/mini-moka.svg
[docs-badge]: https://docs.rs/mini-moka/badge.svg
[deps-rs-badge]: https://deps.rs/repo/github/moka-rs/mini-moka/status.svg
<!-- [coveralls-badge]: https://coveralls.io/repos/github/mini-moka-rs/moka/badge.svg?branch=main -->
[license-badge]: https://img.shields.io/crates/l/mini-moka.svg
<!-- [fossa-badge]: https://app.fossa.com/api/projects/git%2Bgithub.com%2Fmoka-rs%2Fmini-moka.svg?type=shield -->

[gh-actions]: https://github.com/moka-rs/mini-moka/actions?query=workflow%3ACI
[crate]: https://crates.io/crates/mini-moka
[docs]: https://docs.rs/mini-moka
[deps-rs]: https://deps.rs/repo/github/moka-rs/mini-moka
<!-- [coveralls]: https://coveralls.io/github/moka-rs/mini-moka?branch=main -->
<!-- [fossa]: https://app.fossa.com/projects/git%2Bgithub.com%2Fmoka-rs%2Fmini-moka?ref=badge_shield -->

[moka-git]: https://github.com/moka-rs/moka
[caffeine-git]: https://github.com/ben-manes/caffeine
## Features

- Thread-safe, highly concurrent in-memory cache implementation.
- A cache can be bounded by one of the following:
  - The maximum number of entries.
  - The total weighted size of entries. (Size-aware eviction)
- Maintains a near-optimal hit ratio by using entry replacement algorithms
  inspired by Caffeine:
  - Admission to the cache is controlled by the Least Frequently Used (LFU) policy.
  - Eviction from the cache is controlled by the Least Recently Used (LRU) policy.
  - [More details and some benchmark results are available here][tiny-lfu].
- Supports expiration policies:
  - Time to live
  - Time to idle

<!--
Mini Moka provides a rich and flexible feature set while maintaining a high hit
ratio and a high level of concurrency for concurrent access. However, it may not
be as fast as other caches, especially those that focus on much smaller feature
sets.

If you do not need features like time to live or size-aware eviction, you may want
to take a look at the [Quick Cache][quick-cache] crate.
-->

[tiny-lfu]: https://github.com/moka-rs/moka/wiki#admission-and-eviction-policies
<!-- [quick-cache]: https://crates.io/crates/quick_cache -->


## Change Log

- [CHANGELOG.md](https://github.com/moka-rs/mini-moka/blob/main/CHANGELOG.md)


## Table of Contents

- [Features](#features)
- [Change Log](#change-log)
- [Usage](#usage)
- [Example: Synchronous Cache](#example-synchronous-cache)
- [Avoiding to clone the value at `get`](#avoiding-to-clone-the-value-at-get)
- Examples (Part 2)
  - [Size Aware Eviction](#example-size-aware-eviction)
  - [Expiration Policies](#example-expiration-policies)
- [Minimum Supported Rust Versions](#minimum-supported-rust-versions)
- [Developing Mini Moka](#developing-mini-moka)
- [Credits](#credits)
- [License](#license)
## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
mini-moka = "0.10"
```


## Example: Synchronous Cache

The thread-safe, synchronous caches are defined in the `sync` module.

Cache entries are manually added using the `insert` method, and are stored in the
cache until either evicted or manually invalidated.

Here's an example of reading and updating a cache from multiple threads:

```rust
// Use the synchronous cache.
use mini_moka::sync::Cache;

use std::thread;

fn value(n: usize) -> String {
    format!("value {}", n)
}

fn main() {
    const NUM_THREADS: usize = 16;
    const NUM_KEYS_PER_THREAD: usize = 64;

    // Create a cache that can store up to 10,000 entries.
    let cache = Cache::new(10_000);

    // Spawn threads and read and update the cache simultaneously.
    let threads: Vec<_> = (0..NUM_THREADS)
        .map(|i| {
            // To share the same cache across the threads, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_THREAD;
            let end = (i + 1) * NUM_KEYS_PER_THREAD;

            thread::spawn(move || {
                // Insert 64 entries. (NUM_KEYS_PER_THREAD = 64)
                for key in start..end {
                    my_cache.insert(key, value(key));
                    // get() returns Option<String>, a clone of the stored value.
                    assert_eq!(my_cache.get(&key), Some(value(key)));
                }

                // Invalidate every fourth entry of the inserted entries.
                for key in (start..end).step_by(4) {
                    my_cache.invalidate(&key);
                }
            })
        })
        .collect();

    // Wait for all threads to complete.
    threads.into_iter().for_each(|t| t.join().expect("Failed"));

    // Verify the result.
    for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
        if key % 4 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}
```
## Avoiding to clone the value at `get`

For the concurrent cache (`sync` cache), the return type of the `get` method is
`Option<V>` instead of `Option<&V>`, where `V` is the value type. Every time `get`
is called for an existing key, it creates a clone of the stored value `V` and
returns it. This is because the `Cache` allows concurrent updates from other
threads, so a value stored in the cache can be dropped or replaced at any time by
any other thread. `get` cannot return a reference `&V`, as it is impossible to
guarantee that the value outlives the reference.

If you want to store values that are expensive to clone, wrap them in
`std::sync::Arc` before storing them in the cache. [`Arc`][rustdoc-std-arc] is a
thread-safe reference-counted pointer, and its `clone()` method is cheap.

[rustdoc-std-arc]: https://doc.rust-lang.org/stable/std/sync/struct.Arc.html

```rust,ignore
use std::sync::Arc;

let key = ...
let large_value = vec![0u8; 2 * 1024 * 1024]; // 2 MiB

// When inserting, wrap the large_value in an Arc.
cache.insert(key.clone(), Arc::new(large_value));

// get() will call Arc::clone() on the stored value, which is cheap.
cache.get(&key);
```
## Example: Size Aware Eviction

If different cache entries have different "weights" (e.g. different memory
footprints), you can specify a `weigher` closure at cache creation time. The
closure should return the weighted size (relative size) of an entry as a `u32`,
and the cache will evict entries when the total weighted size exceeds its
`max_capacity`.

```rust
use std::convert::TryInto;
use mini_moka::sync::Cache;

fn main() {
    let cache = Cache::builder()
        // A weigher closure takes &K and &V and returns a u32 representing the
        // relative size of the entry. Here, we use the byte length of the value
        // String as the size.
        .weigher(|_key, value: &String| -> u32 {
            value.len().try_into().unwrap_or(u32::MAX)
        })
        // This cache will hold up to 32 MiB of values.
        .max_capacity(32 * 1024 * 1024)
        .build();
    cache.insert(0, "zero".to_string());
}
```

Note that weighted sizes are not used when making eviction selections.
## Example: Expiration Policies

Mini Moka supports the following expiration policies:

- **Time to live**: A cached entry will expire after the specified duration has
  elapsed since `insert`.
- **Time to idle**: A cached entry will expire after the specified duration has
  elapsed since the last `get` or `insert`.

To set them, use the `CacheBuilder`.

```rust
use mini_moka::sync::Cache;
use std::time::Duration;

fn main() {
    let cache = Cache::builder()
        // Time to live (TTL): 30 minutes
        .time_to_live(Duration::from_secs(30 * 60))
        // Time to idle (TTI): 5 minutes
        .time_to_idle(Duration::from_secs(5 * 60))
        // Create the cache.
        .build();

    // This entry will expire after 5 minutes (TTI) if there is no get().
    cache.insert(0, "zero");

    // This get() will extend the entry's life for another 5 minutes.
    cache.get(&0);

    // Even though we keep calling get(), the entry will expire
    // after 30 minutes (TTL) from the insert().
}
```

### A note on expiration policies

The cache builders will panic if configured with either `time_to_live` or
`time_to_idle` longer than 1000 years. This is done to protect against overflow
when computing key expiration.
## Minimum Supported Rust Versions

Mini Moka's minimum supported Rust versions (MSRV) are the following:

| Feature          | MSRV                      |
|:-----------------|:-------------------------:|
| default features | Rust 1.76.0 (Feb 8, 2024) |

Mini Moka keeps a rolling MSRV policy of at least 6 months. If only the default
features are enabled, the MSRV will be updated conservatively. When using other
features, the MSRV might be updated more frequently, up to the latest stable. In
both cases, increasing the MSRV is _not_ considered a semver-breaking change.
## Developing Mini Moka

**Running All Tests**

To run all tests including doc tests on the README, use the following command:

```console
$ RUSTFLAGS='--cfg trybuild' cargo test --all-features
```

**Generating the Doc**

```console
$ cargo +nightly -Z unstable-options --config 'build.rustdocflags="--cfg docsrs"' \
    doc --no-deps
```


## Credits

### Caffeine

Mini Moka's architecture is heavily inspired by the [Caffeine][caffeine-git]
library for Java. Thanks go to Ben Manes and all contributors of Caffeine.


## License

Mini Moka is distributed under either of

- The MIT license
- The Apache License (Version 2.0)

at your option.

See [LICENSE-MIT](LICENSE-MIT) and [LICENSE-APACHE](LICENSE-APACHE) for details.

<!-- [](https://app.fossa.com/projects/git%2Bgithub.com%2Fmoka-rs%2Fmini-moka?ref=badge_large) -->
crates/mini-moka-vendored/src/common.rs
use std::convert::TryInto;

#[cfg(feature = "sync")]
pub(crate) mod concurrent;

pub(crate) mod builder_utils;
pub(crate) mod deque;
pub(crate) mod frequency_sketch;
pub(crate) mod time;

// Note: `CacheRegion` cannot have more than four enum variants. This is because
// `crate::{sync,unsync}::DeqNodes` uses a `tagptr::TagNonNull<DeqNode<T>, 2>`
// pointer, where the 2-bit tag is `CacheRegion`.
#[derive(Clone, Copy, Debug, Eq)]
pub(crate) enum CacheRegion {
    Window = 0,
    MainProbation = 1,
    MainProtected = 2,
    Other = 3,
}

impl From<usize> for CacheRegion {
    fn from(n: usize) -> Self {
        match n {
            0 => Self::Window,
            1 => Self::MainProbation,
            2 => Self::MainProtected,
            3 => Self::Other,
            _ => panic!("No such CacheRegion variant for {}", n),
        }
    }
}

impl PartialEq<Self> for CacheRegion {
    fn eq(&self, other: &Self) -> bool {
        core::mem::discriminant(self) == core::mem::discriminant(other)
    }
}

impl PartialEq<usize> for CacheRegion {
    fn eq(&self, other: &usize) -> bool {
        *self as usize == *other
    }
}

// Ensures the value fits in a range of `128u32..=u32::MAX`.
pub(crate) fn sketch_capacity(max_capacity: u64) -> u32 {
    max_capacity.try_into().unwrap_or(u32::MAX).max(128)
}
···11+// License and Copyright Notice:
22+//
33+// Some of the code and doc comments in this module were ported or copied from
44+// a Java class `com.github.benmanes.caffeine.cache.FrequencySketch` of Caffeine.
55+// https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/cache/FrequencySketch.java
66+//
77+// The original code/comments from Caffeine are licensed under the Apache License,
88+// Version 2.0 <https://github.com/ben-manes/caffeine/blob/master/LICENSE>
99+//
1010+// Copyrights of the original code/comments are retained by their contributors.
1111+// For full authorship information, see the version control history of
1212+// https://github.com/ben-manes/caffeine/
1313+1414+/// A probabilistic multi-set for estimating the popularity of an element within
1515+/// a time window. The maximum frequency of an element is limited to 15 (4-bits)
1616+/// and an aging process periodically halves the popularity of all elements.
1717+#[derive(Default)]
1818+pub(crate) struct FrequencySketch {
1919+ sample_size: u32,
2020+ table_mask: u32,
2121+ table: Box<[u64]>,
2222+ size: u32,
2323+}
2424+2525+// A mixture of seeds from FNV-1a, CityHash, and Murmur3. (Taken from Caffeine)
2626+static SEED: [u64; 4] = [
2727+ 0xc3a5_c85c_97cb_3127,
2828+ 0xb492_b66f_be98_f273,
2929+ 0x9ae1_6a3b_2f90_404f,
3030+ 0xcbf2_9ce4_8422_2325,
3131+];
3232+3333+static RESET_MASK: u64 = 0x7777_7777_7777_7777;
3434+3535+static ONE_MASK: u64 = 0x1111_1111_1111_1111;
3636+3737+// -------------------------------------------------------------------------------
3838+// Some of the code and doc comments in this module were ported or copied from
3939+// a Java class `com.github.benmanes.caffeine.cache.FrequencySketch` of Caffeine.
4040+// https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/cache/FrequencySketch.java
4141+// -------------------------------------------------------------------------------
4242+//
4343+// FrequencySketch maintains a 4-bit CountMinSketch [1] with periodic aging to
4444+// provide the popularity history for the TinyLfu admission policy [2].
4545+// The time and space efficiency of the sketch allows it to cheaply estimate the
4646+// frequency of an entry in a stream of cache access events.
4747+//
4848+// The counter matrix is represented as a single dimensional array holding 16
4949+// counters per slot. A fixed depth of four balances the accuracy and cost,
5050+// resulting in a width of four times the length of the array. To retain an
5151+// accurate estimation the array's length equals the maximum number of entries
5252+// in the cache, increased to the closest power-of-two to exploit more efficient
5353+// bit masking. This configuration results in a confidence of 93.75% and error
5454+// bound of e / width.
5555+//
5656+// The frequency of all entries is aged periodically using a sampling window
5757+// based on the maximum number of entries in the cache. This is referred to as
5858+// the reset operation by TinyLfu and keeps the sketch fresh by dividing all
5959+// counters by two and subtracting based on the number of odd counters
6060+// found. The O(n) cost of aging is amortized, ideal for hardware pre-fetching,
6161+// and uses inexpensive bit manipulations per array location.
6262+//
6363+// [1] An Improved Data Stream Summary: The Count-Min Sketch and its Applications
6464+// http://dimacs.rutgers.edu/~graham/pubs/papers/cm-full.pdf
6565+// [2] TinyLFU: A Highly Efficient Cache Admission Policy
6666+// https://dl.acm.org/citation.cfm?id=3149371
6767+//
6868+// -------------------------------------------------------------------------------
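The packed-counter layout described above can be sketched in isolation. The following is a standalone illustration, not the crate's API; `read_counter` and `increment_counter` are hypothetical helper names showing how sixteen 4-bit counters share one `u64` slot:

```rust
// Read the 4-bit counter at `counter_index` (0..16) out of a u64 slot.
fn read_counter(slot: u64, counter_index: u8) -> u8 {
    let offset = (counter_index as u64) << 2; // 4 bits per counter
    ((slot >> offset) & 0xF) as u8
}

// Increment one 4-bit lane, saturating at the maximum value 15.
fn increment_counter(slot: u64, counter_index: u8) -> u64 {
    let offset = (counter_index as u64) << 2;
    let mask = 0xF_u64 << offset;
    if slot & mask != mask {
        slot + (1u64 << offset) // add 1 only in that lane
    } else {
        slot // already saturated at 15
    }
}

fn main() {
    let mut slot = 0u64;
    for _ in 0..20 {
        slot = increment_counter(slot, 3);
    }
    assert_eq!(read_counter(slot, 3), 15); // saturates at the 4-bit maximum
    assert_eq!(read_counter(slot, 2), 0);  // neighboring lanes untouched
    assert_eq!(read_counter(slot, 4), 0);
}
```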
6969+7070+impl FrequencySketch {
7171+ /// Initializes and increases the capacity of this `FrequencySketch` instance,
7272+ /// if necessary, to ensure that it can accurately estimate the popularity of
7373+ /// elements given the maximum size of the cache. This operation forgets all
7474+ /// previous counts when resizing.
7575+ pub(crate) fn ensure_capacity(&mut self, cap: u32) {
7676+ // The max byte size of the table, Box<[u64; table_size]>
7777+ //
7878+ // | Pointer width | Max size |
7979+ // |:-----------------|---------:|
8080+ // | 16 bit | 8 KiB |
8181+ // | 32 bit | 128 MiB |
8282+ // | 64 bit or bigger | 8 GiB |
8383+8484+ let maximum = if cfg!(target_pointer_width = "16") {
8585+ cap.min(1024)
8686+ } else if cfg!(target_pointer_width = "32") {
8787+ cap.min(2u32.pow(24)) // about 16 million
8888+ } else {
8989+ // Same as Caffeine's limit:
9090+ // `Integer.MAX_VALUE >>> 1` with `ceilingPowerOfTwo()` applied.
9191+ cap.min(2u32.pow(30)) // about 1 billion
9292+ };
9393+ let table_size = if maximum == 0 {
9494+ 1
9595+ } else {
9696+ maximum.next_power_of_two()
9797+ };
9898+9999+ if self.table.len() as u32 >= table_size {
100100+ return;
101101+ }
102102+103103+ self.table = vec![0; table_size as usize].into_boxed_slice();
104104+ self.table_mask = table_size - 1;
105105+ self.sample_size = if cap == 0 {
106106+ 10
107107+ } else {
108108+ maximum.saturating_mul(10).min(i32::MAX as u32)
109109+ };
110110+ }
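On 64-bit targets the sizing rule above reduces to: clamp the capacity to 2^30, then round up to the next power of two, with zero mapped to one. A hypothetical standalone helper (`table_size_for` is an illustrative name, not part of the crate):

```rust
// Table sizing for 64-bit targets: clamp, then round up to a power of two
// so the index computation can use a cheap bit mask.
fn table_size_for(cap: u32) -> u32 {
    let maximum = cap.min(2u32.pow(30)); // Caffeine-compatible upper bound
    if maximum == 0 {
        1
    } else {
        maximum.next_power_of_two()
    }
}

fn main() {
    assert_eq!(table_size_for(0), 1);
    assert_eq!(table_size_for(500), 512);
    assert_eq!(table_size_for(512), 512);
    assert_eq!(table_size_for(u32::MAX), 2u32.pow(30));
}
```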
111111+112112+ /// Takes the hash value of an element, and returns the estimated number of
113113+ /// occurrences of the element, up to the maximum (15).
114114+ pub(crate) fn frequency(&self, hash: u64) -> u8 {
115115+ if self.table.is_empty() {
116116+ return 0;
117117+ }
118118+119119+ let start = ((hash & 3) << 2) as u8;
120120+ let mut frequency = u8::MAX;
121121+ for i in 0..4 {
122122+ let index = self.index_of(hash, i);
123123+ let shift = (start + i) << 2;
124124+ let count = ((self.table[index] >> shift) & 0xF) as u8;
125125+ frequency = frequency.min(count);
126126+ }
127127+ frequency
128128+ }
129129+130130+ /// Takes the hash value of an element and increments its popularity if it
131131+ /// is not already at the maximum (15). The popularity of all elements is
132132+ /// periodically down-sampled when the number of observed events exceeds a
133133+ /// threshold. This aging process lets the frequency of stale, long-term
134134+ /// entries fade away.
135135+ pub(crate) fn increment(&mut self, hash: u64) {
136136+ if self.table.is_empty() {
137137+ return;
138138+ }
139139+140140+ let start = ((hash & 3) << 2) as u8;
141141+ let mut added = false;
142142+ for i in 0..4 {
143143+ let index = self.index_of(hash, i);
144144+ added |= self.increment_at(index, start + i);
145145+ }
146146+147147+ if added {
148148+ self.size += 1;
149149+ if self.size >= self.sample_size {
150150+ self.reset();
151151+ }
152152+ }
153153+ }
154154+155155+ /// Takes a table index (each entry has 16 counters) and counter index, and
156156+ /// increments the counter by 1 if it is not already at the maximum value
157157+ /// (15). Returns `true` if incremented.
158158+ fn increment_at(&mut self, table_index: usize, counter_index: u8) -> bool {
159159+ let offset = (counter_index as usize) << 2;
160160+ let mask = 0xF_u64 << offset;
161161+ if self.table[table_index] & mask != mask {
162162+ self.table[table_index] += 1u64 << offset;
163163+ true
164164+ } else {
165165+ false
166166+ }
167167+ }
168168+169169+ /// Reduces every counter by half of its original value.
170170+ fn reset(&mut self) {
171171+ let mut count = 0u32;
172172+ for entry in self.table.iter_mut() {
173173+ // Count number of odd numbers.
174174+ count += (*entry & ONE_MASK).count_ones();
175175+ *entry = (*entry >> 1) & RESET_MASK;
176176+ }
177177+ self.size = (self.size >> 1) - (count >> 2);
178178+ }
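The reset step above can be demonstrated on a single slot, independent of the struct: shifting the whole word right by one and masking each 4-bit lane with `0x7` halves all sixteen counters at once, while `ONE_MASK` counts how many counters were odd (and thus lost 0.5 in the division):

```rust
const RESET_MASK: u64 = 0x7777_7777_7777_7777;
const ONE_MASK: u64 = 0x1111_1111_1111_1111;

// Halve every 4-bit counter in one u64 operation; also report how many
// counters were odd before halving.
fn halve(slot: u64) -> (u64, u32) {
    let odd = (slot & ONE_MASK).count_ones();
    ((slot >> 1) & RESET_MASK, odd)
}

fn main() {
    // Lane 0 holds 15, lane 1 holds 4.
    let slot = 0x4F_u64;
    let (halved, odd) = halve(slot);
    assert_eq!(halved & 0xF, 7);        // 15 / 2 = 7
    assert_eq!((halved >> 4) & 0xF, 2); // 4 / 2 = 2
    assert_eq!(odd, 1);                 // only 15 was odd
}
```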
179179+180180+ /// Returns the table index for the counter at the specified depth.
181181+ fn index_of(&self, hash: u64, depth: u8) -> usize {
182182+ let i = depth as usize;
183183+ let mut hash = hash.wrapping_add(SEED[i]).wrapping_mul(SEED[i]);
184184+ hash = hash.wrapping_add(hash >> 32);
185185+ (hash & (self.table_mask as u64)) as usize
186186+ }
187187+}
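The spread step in `index_of` can be sketched on its own: mix the hash with a per-depth seed, fold the upper 32 bits down, then mask to the table size (assumed here to be a power of two). `spread` is an illustrative name; the seed is one of the entries in `SEED` above:

```rust
// Seeded multiply-and-fold spread, masked to a power-of-two table.
fn spread(hash: u64, seed: u64, table_mask: u64) -> usize {
    let mut h = hash.wrapping_add(seed).wrapping_mul(seed);
    h = h.wrapping_add(h >> 32);
    (h & table_mask) as usize
}

fn main() {
    const SEED: u64 = 0x9ae1_6a3b_2f90_404f;
    let mask = 511; // table of 512 slots
    for hash in [0u64, 1, u64::MAX] {
        // Always lands within the table, even for extreme hashes.
        assert!(spread(hash, SEED, mask) < 512);
    }
    // Deterministic: the same hash maps to the same slot.
    assert_eq!(spread(42, SEED, mask), spread(42, SEED, mask));
}
```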
188188+189189+// Methods only available for testing.
190190+#[cfg(test)]
191191+impl FrequencySketch {
192192+ pub(crate) fn table_len(&self) -> usize {
193193+ self.table.len()
194194+ }
195195+}
196196+197197+// Some test cases were ported from Caffeine at:
198198+// https://github.com/ben-manes/caffeine/blob/master/caffeine/src/test/java/com/github/benmanes/caffeine/cache/FrequencySketchTest.java
199199+//
200200+// To see the debug prints, run test as `cargo test -- --nocapture`
201201+#[cfg(test)]
202202+mod tests {
203203+ use super::FrequencySketch;
204204+ use once_cell::sync::Lazy;
205205+ use std::hash::{BuildHasher, Hash};
206206+207207+ static ITEM: Lazy<u32> = Lazy::new(|| {
208208+ let mut buf = [0; 4];
209209+ getrandom::getrandom(&mut buf).unwrap();
210210+ u32::from_ne_bytes(buf)
211211+ });
212212+213213+ // This test was ported from Caffeine.
214214+ #[test]
215215+ fn increment_once() {
216216+ let mut sketch = FrequencySketch::default();
217217+ sketch.ensure_capacity(512);
218218+ let hasher = hasher();
219219+ let item_hash = hasher(*ITEM);
220220+ sketch.increment(item_hash);
221221+ assert_eq!(sketch.frequency(item_hash), 1);
222222+ }
223223+224224+ // This test was ported from Caffeine.
225225+ #[test]
226226+ fn increment_max() {
227227+ let mut sketch = FrequencySketch::default();
228228+ sketch.ensure_capacity(512);
229229+ let hasher = hasher();
230230+ let item_hash = hasher(*ITEM);
231231+ for _ in 0..20 {
232232+ sketch.increment(item_hash);
233233+ }
234234+ assert_eq!(sketch.frequency(item_hash), 15);
235235+ }
236236+237237+ // This test was ported from Caffeine.
238238+ #[test]
239239+ fn increment_distinct() {
240240+ let mut sketch = FrequencySketch::default();
241241+ sketch.ensure_capacity(512);
242242+ let hasher = hasher();
243243+ sketch.increment(hasher(*ITEM));
244244+ sketch.increment(hasher(ITEM.wrapping_add(1)));
245245+ assert_eq!(sketch.frequency(hasher(*ITEM)), 1);
246246+ assert_eq!(sketch.frequency(hasher(ITEM.wrapping_add(1))), 1);
247247+ assert_eq!(sketch.frequency(hasher(ITEM.wrapping_add(2))), 0);
248248+ }
249249+250250+ // This test was ported from Caffeine.
251251+ #[test]
252252+ fn index_of_around_zero() {
253253+ let mut sketch = FrequencySketch::default();
254254+ sketch.ensure_capacity(512);
255255+ let mut indexes = std::collections::HashSet::new();
256256+ let hashes = [u64::MAX, 0, 1];
257257+ for hash in hashes.iter() {
258258+ for depth in 0..4 {
259259+ indexes.insert(sketch.index_of(*hash, depth));
260260+ }
261261+ }
262262+ assert_eq!(indexes.len(), 4 * hashes.len())
263263+ }
264264+265265+ // This test was ported from Caffeine.
266266+ #[test]
267267+ fn reset() {
268268+ let mut reset = false;
269269+ let mut sketch = FrequencySketch::default();
270270+ sketch.ensure_capacity(64);
271271+ let hasher = hasher();
272272+273273+ for i in 1..(20 * sketch.table.len() as u32) {
274274+ sketch.increment(hasher(i));
275275+ if sketch.size != i {
276276+ reset = true;
277277+ break;
278278+ }
279279+ }
280280+281281+ assert!(reset);
282282+ assert!(sketch.size <= sketch.sample_size / 2);
283283+ }
284284+285285+ // This test was ported from Caffeine.
286286+ #[test]
287287+ fn heavy_hitters() {
288288+ let mut sketch = FrequencySketch::default();
289289+ sketch.ensure_capacity(65_536);
290290+ let hasher = hasher();
291291+292292+ for i in 100..100_000 {
293293+ sketch.increment(hasher(i));
294294+ }
295295+296296+ for i in (0..10).step_by(2) {
297297+ for _ in 0..i {
298298+ sketch.increment(hasher(i));
299299+ }
300300+ }
301301+302302+ // A perfect popularity count yields an array [0, 0, 2, 0, 4, 0, 6, 0, 8, 0]
303303+ let popularity = (0..10)
304304+ .map(|i| sketch.frequency(hasher(i)))
305305+ .collect::<Vec<_>>();
306306+307307+ for (i, freq) in popularity.iter().enumerate() {
308308+ match i {
309309+ 2 => assert!(freq <= &popularity[4]),
310310+ 4 => assert!(freq <= &popularity[6]),
311311+ 6 => assert!(freq <= &popularity[8]),
312312+ 8 => (),
313313+ _ => assert!(freq <= &popularity[2]),
314314+ }
315315+ }
316316+ }
317317+318318+ fn hasher<K: Hash>() -> impl Fn(K) -> u64 {
319319+ let build_hasher = std::collections::hash_map::RandomState::default();
320320+ move |key| build_hasher.hash_one(&key)
321321+ }
322322+}
323323+324324+// Verify that certain properties hold, such as no panic occurring for any possible input.
325325+#[cfg(kani)]
326326+mod kani {
327327+ use super::FrequencySketch;
328328+329329+ const CAPACITIES: &[u32] = &[
330330+ 0,
331331+ 1,
332332+ 1024,
333333+ 1025,
334334+ 2u32.pow(24),
335335+ 2u32.pow(24) + 1,
336336+ 2u32.pow(30),
337337+ 2u32.pow(30) + 1,
338338+ u32::MAX,
339339+ ];
340340+341341+ #[kani::proof]
342342+ fn verify_ensure_capacity() {
343343+ // Check for arbitrary capacities.
344344+ let capacity = kani::any();
345345+ let mut sketch = FrequencySketch::default();
346346+ sketch.ensure_capacity(capacity);
347347+ }
348348+349349+ #[kani::proof]
350350+ fn verify_frequency() {
351351+ // Check for some selected capacities.
352352+ for capacity in CAPACITIES {
353353+ let mut sketch = FrequencySketch::default();
354354+ sketch.ensure_capacity(*capacity);
355355+356356+ // Check for arbitrary hashes.
357357+ let hash = kani::any();
358358+ let frequency = sketch.frequency(hash);
359359+ assert!(frequency <= 15);
360360+ }
361361+ }
362362+363363+ #[kani::proof]
364364+ fn verify_increment() {
365365+ // Only check small capacities. Because the Kani Rust Verifier is a model
366366+ // checking tool, checking larger capacities here would take exponentially
367367+ // longer.
368368+ for capacity in &[0, 1, 128] {
369369+ let mut sketch = FrequencySketch::default();
370370+ sketch.ensure_capacity(*capacity);
371371+372372+ // Check for arbitrary hashes.
373373+ let hash = kani::any();
374374+ sketch.increment(hash);
375375+ }
376376+ }
377377+378378+ #[kani::proof]
379379+ fn verify_index_of() {
380380+ // Check for arbitrary capacities.
381381+ let capacity = kani::any();
382382+ let mut sketch = FrequencySketch::default();
383383+ sketch.ensure_capacity(capacity);
384384+385385+ // Check for arbitrary hashes.
386386+ let hash = kani::any();
387387+ for i in 0..4 {
388388+ let index = sketch.index_of(hash, i);
389389+ assert!(index < sketch.table.len());
390390+ }
391391+ }
392392+}
+32
crates/mini-moka-vendored/src/common/time.rs
···11+use std::time::Duration;
22+33+pub(crate) mod clock;
44+55+pub(crate) use clock::Clock;
66+77+/// A wrapper type over `clock::Instant` to force checked additions and prevent
88+/// unintentional overflow. It preserves the `Copy` semantics of the wrapped instant.
99+#[derive(PartialEq, PartialOrd, Clone, Copy)]
1010+pub(crate) struct Instant(clock::Instant);
1111+1212+pub(crate) trait CheckedTimeOps {
1313+ fn checked_add(&self, duration: Duration) -> Option<Self>
1414+ where
1515+ Self: Sized;
1616+}
1717+1818+impl Instant {
1919+ pub(crate) fn new(instant: clock::Instant) -> Instant {
2020+ Instant(instant)
2121+ }
2222+2323+ pub(crate) fn now() -> Instant {
2424+ Instant(clock::Instant::now())
2525+ }
2626+}
2727+2828+impl CheckedTimeOps for Instant {
2929+ fn checked_add(&self, duration: Duration) -> Option<Instant> {
3030+ self.0.checked_add(duration).map(Instant)
3131+ }
3232+}
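The same pattern can be shown with `std::time::Instant` directly: `checked_add` returns `None` instead of panicking or wrapping, so expiration arithmetic stays total even for extreme TTL values. A minimal sketch (`expiration_of` is an illustrative name):

```rust
use std::time::{Duration, Instant};

// Compute an entry's expiration time without risking overflow.
fn expiration_of(now: Instant, ttl: Duration) -> Option<Instant> {
    now.checked_add(ttl)
}

fn main() {
    let now = Instant::now();
    assert!(expiration_of(now, Duration::from_secs(30)).is_some());
    // An absurdly large TTL overflows the platform Instant range on
    // common platforms, yielding None rather than a panic.
    assert!(expiration_of(now, Duration::from_secs(u64::MAX)).is_none());
}
```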
···11+#![warn(clippy::all)]
22+#![warn(rust_2018_idioms)]
33+#![deny(rustdoc::broken_intra_doc_links)]
44+#![cfg_attr(docsrs, feature(doc_cfg))]
55+66+//! Mini Moka is a fast, concurrent cache library for Rust. Mini Moka is a light
77+//! edition of [Moka][moka-git].
88+//!
99+//! Mini Moka provides an in-memory concurrent cache implementation on top of a
1010+//! hash map. It supports high expected concurrency of retrievals and updates.
1111+//!
1212+//! Mini Moka also provides an in-memory, non-thread-safe cache implementation for
1313+//! single-threaded applications.
1414+//!
1515+//! All cache implementations perform a best-effort bounding of the map using an
1616+//! entry replacement algorithm to determine which entries to evict when the capacity
1717+//! is exceeded.
1818+//!
1919+//! [moka-git]: https://github.com/moka-rs/moka
2020+//! [caffeine-git]: https://github.com/ben-manes/caffeine
2121+//!
2222+//! # Features
2323+//!
2424+//! - A thread-safe, highly concurrent in-memory cache implementation.
2525+//! - A cache can be bounded by one of the following:
2626+//! - The maximum number of entries.
2727+//! - The total weighted size of entries. (Size aware eviction)
2828+//! - Maintains good hit rate by using entry replacement algorithms inspired by
2929+//! [Caffeine][caffeine-git]:
3030+//! - Admission to a cache is controlled by the Least Frequently Used (LFU) policy.
3131+//! - Eviction from a cache is controlled by the Least Recently Used (LRU) policy.
3232+//! - Supports expiration policies:
3333+//! - Time to live
3434+//! - Time to idle
3535+//!
3636+//! # Examples
3737+//!
3838+//! See the following document:
3939+//!
4040+//! - A thread-safe, synchronous cache:
4141+//! - [`sync::Cache`][sync-cache-struct]
4242+//! - A non-thread-safe, blocking cache for single-threaded applications:
4343+//! - [`unsync::Cache`][unsync-cache-struct]
4444+//!
4545+//! [sync-cache-struct]: ./sync/struct.Cache.html
4646+//! [unsync-cache-struct]: ./unsync/struct.Cache.html
4747+//!
4848+//! # Minimum Supported Rust Versions
4949+//!
5050+//! This crate's minimum supported Rust versions (MSRV) are the following:
5151+//!
5252+//! | Feature | MSRV |
5353+//! |:-----------------|:--------------------------:|
5454+//! | default features | Rust 1.76.0 (Feb 8, 2024) |
5555+//!
5656+//! If only the default features are enabled, MSRV will be updated conservatively.
5757+//! When using other features, MSRV might be updated more frequently, up to the
5858+//! latest stable. In both cases, increasing MSRV is _not_ considered a
5959+//! semver-breaking change.
6060+6161+pub(crate) mod common;
6262+pub(crate) mod policy;
6363+pub mod unsync;
6464+6565+#[cfg(feature = "sync")]
6666+#[cfg_attr(docsrs, doc(cfg(feature = "sync")))]
6767+pub mod sync;
6868+6969+pub use policy::Policy;
7070+7171+#[cfg(test)]
7272+mod tests {
7373+ #[cfg(all(trybuild, feature = "sync"))]
7474+ #[test]
7575+ fn trybuild_sync() {
7676+ let t = trybuild::TestCases::new();
7777+ t.compile_fail("tests/compile_tests/sync/clone/*.rs");
7878+ }
7979+}
8080+8181+#[cfg(all(doctest, feature = "sync"))]
8282+mod doctests {
8383+ // https://doc.rust-lang.org/rustdoc/write-documentation/documentation-tests.html#include-items-only-when-collecting-doctests
8484+ #[doc = include_str!("../README.md")]
8585+ struct ReadMeDoctests;
8686+}
+38
crates/mini-moka-vendored/src/policy.rs
···11+use std::time::Duration;
22+33+#[derive(Clone, Debug)]
44+/// The policy of a cache.
55+pub struct Policy {
66+ max_capacity: Option<u64>,
77+ time_to_live: Option<Duration>,
88+ time_to_idle: Option<Duration>,
99+}
1010+1111+impl Policy {
1212+ pub(crate) fn new(
1313+ max_capacity: Option<u64>,
1414+ time_to_live: Option<Duration>,
1515+ time_to_idle: Option<Duration>,
1616+ ) -> Self {
1717+ Self {
1818+ max_capacity,
1919+ time_to_live,
2020+ time_to_idle,
2121+ }
2222+ }
2323+2424+ /// Returns the `max_capacity` of the cache.
2525+ pub fn max_capacity(&self) -> Option<u64> {
2626+ self.max_capacity
2727+ }
2828+2929+ /// Returns the `time_to_live` of the cache.
3030+ pub fn time_to_live(&self) -> Option<Duration> {
3131+ self.time_to_live
3232+ }
3333+3434+ /// Returns the `time_to_idle` of the cache.
3535+ pub fn time_to_idle(&self) -> Option<Duration> {
3636+ self.time_to_idle
3737+ }
3838+}
+21
crates/mini-moka-vendored/src/sync.rs
···11+//! Provides a thread-safe, concurrent cache implementation built upon
22+//! [`dashmap::DashMap`][dashmap].
33+//!
44+//! [dashmap]: https://docs.rs/dashmap/*/dashmap/struct.DashMap.html
55+66+mod base_cache;
77+mod builder;
88+mod cache;
99+mod iter;
1010+mod mapref;
1111+1212+pub use builder::CacheBuilder;
1313+pub use cache::Cache;
1414+pub use iter::Iter;
1515+pub use mapref::EntryRef;
1616+1717+/// Provides extra methods that will be useful for testing.
1818+pub trait ConcurrentCacheExt<K, V> {
1919+ /// Performs any pending maintenance operations needed by the cache.
2020+ fn sync(&self);
2121+}
+1380
crates/mini-moka-vendored/src/sync/base_cache.rs
···11+use super::{iter::DashMapIter, Iter};
22+use crate::{
33+ common::{
44+ self,
55+ concurrent::{
66+ atomic_time::AtomicInstant,
77+ constants::{
88+ READ_LOG_FLUSH_POINT, READ_LOG_SIZE, WRITE_LOG_FLUSH_POINT, WRITE_LOG_SIZE,
99+ },
1010+ deques::Deques,
1111+ entry_info::EntryInfo,
1212+ housekeeper::{Housekeeper, InnerSync},
1313+ AccessTime, KeyDate, KeyHash, KeyHashDate, KvEntry, ReadOp, ValueEntry, Weigher,
1414+ WriteOp,
1515+ },
1616+ deque::{DeqNode, Deque},
1717+ frequency_sketch::FrequencySketch,
1818+ time::{CheckedTimeOps, Clock, Instant},
1919+ CacheRegion,
2020+ },
2121+ Policy,
2222+};
2323+2424+use crossbeam_channel::{Receiver, Sender, TrySendError};
2525+use crossbeam_utils::atomic::AtomicCell;
2626+use dashmap::mapref::one::Ref as DashMapRef;
2727+use smallvec::SmallVec;
2828+use std::{
2929+ borrow::Borrow,
3030+ collections::hash_map::RandomState,
3131+ hash::{BuildHasher, Hash},
3232+ ptr::NonNull,
3333+ sync::{
3434+ atomic::{AtomicBool, Ordering},
3535+ Arc, Mutex, RwLock,
3636+ },
3737+ time::Duration,
3838+};
3939+use triomphe::Arc as TrioArc;
4040+4141+pub(crate) struct BaseCache<K, V, S = RandomState> {
4242+ pub(crate) inner: Arc<Inner<K, V, S>>,
4343+ read_op_ch: Sender<ReadOp<K, V>>,
4444+ pub(crate) write_op_ch: Sender<WriteOp<K, V>>,
4545+ pub(crate) housekeeper: Option<Arc<Housekeeper>>,
4646+}
4747+4848+impl<K, V, S> Clone for BaseCache<K, V, S> {
4949+ /// Makes a clone of this shared cache.
5050+ ///
5151+ /// This operation is cheap as it only creates thread-safe reference counted
5252+ /// pointers to the shared internal data structures.
5353+ fn clone(&self) -> Self {
5454+ Self {
5555+ inner: Arc::clone(&self.inner),
5656+ read_op_ch: self.read_op_ch.clone(),
5757+ write_op_ch: self.write_op_ch.clone(),
5858+ housekeeper: self.housekeeper.clone(),
5959+ }
6060+ }
6161+}
6262+6363+impl<K, V, S> Drop for BaseCache<K, V, S> {
6464+ fn drop(&mut self) {
6565+ // The housekeeper needs to be dropped before the inner is dropped.
6666+ std::mem::drop(self.housekeeper.take());
6767+ }
6868+}
6969+7070+impl<K, V, S> BaseCache<K, V, S> {
7171+ pub(crate) fn policy(&self) -> Policy {
7272+ self.inner.policy()
7373+ }
7474+7575+ pub(crate) fn entry_count(&self) -> u64 {
7676+ self.inner.entry_count()
7777+ }
7878+7979+ pub(crate) fn weighted_size(&self) -> u64 {
8080+ self.inner.weighted_size()
8181+ }
8282+}
8383+8484+impl<K, V, S> BaseCache<K, V, S>
8585+where
8686+ K: Hash + Eq + Send + Sync + 'static,
8787+ V: Clone + Send + Sync + 'static,
8888+ S: BuildHasher + Clone + Send + Sync + 'static,
8989+{
9090+ pub(crate) fn new(
9191+ max_capacity: Option<u64>,
9292+ initial_capacity: Option<usize>,
9393+ build_hasher: S,
9494+ weigher: Option<Weigher<K, V>>,
9595+ time_to_live: Option<Duration>,
9696+ time_to_idle: Option<Duration>,
9797+ ) -> Self {
9898+ let (r_snd, r_rcv) = crossbeam_channel::bounded(READ_LOG_SIZE);
9999+ let (w_snd, w_rcv) = crossbeam_channel::bounded(WRITE_LOG_SIZE);
100100+101101+ let inner = Inner::new(
102102+ max_capacity,
103103+ initial_capacity,
104104+ build_hasher,
105105+ weigher,
106106+ r_rcv,
107107+ w_rcv,
108108+ time_to_live,
109109+ time_to_idle,
110110+ );
111111+ Self {
112112+ #[cfg_attr(beta_clippy, allow(clippy::arc_with_non_send_sync))]
113113+ inner: Arc::new(inner),
114114+ read_op_ch: r_snd,
115115+ write_op_ch: w_snd,
116116+ housekeeper: Some(Arc::new(Housekeeper::default())),
117117+ }
118118+ }
119119+120120+ #[inline]
121121+ pub(crate) fn hash<Q>(&self, key: &Q) -> u64
122122+ where
123123+ Arc<K>: Borrow<Q>,
124124+ Q: Hash + Eq + ?Sized,
125125+ {
126126+ self.inner.hash(key)
127127+ }
128128+129129+ pub(crate) fn contains_key<Q>(&self, key: &Q) -> bool
130130+ where
131131+ Arc<K>: Borrow<Q>,
132132+ Q: Hash + Eq + ?Sized,
133133+ {
134134+ match self.inner.get(key) {
135135+ None => false,
136136+ Some(entry) => {
137137+ let i = &self.inner;
138138+ let (ttl, tti, va) = (&i.time_to_live(), &i.time_to_idle(), &i.valid_after());
139139+ let now = i.current_time_from_expiration_clock();
140140+ let entry = &*entry;
141141+142142+ !is_expired_entry_wo(ttl, va, entry, now)
143143+ && !is_expired_entry_ao(tti, va, entry, now)
144144+ }
145145+ }
146146+ }
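The two expiry tests combined above (`is_expired_entry_wo` for write order, `is_expired_entry_ao` for access order) follow a simple shape that can be sketched standalone with `std::time`: an entry is dead if it outlived its time-to-live since the last write, or its time-to-idle since the last access. Names below are illustrative, not the crate's:

```rust
use std::time::{Duration, Instant};

// Combined TTL/TTI expiry predicate; a None policy never expires anything.
fn is_expired(
    last_modified: Instant,
    last_accessed: Instant,
    ttl: Option<Duration>,
    tti: Option<Duration>,
    now: Instant,
) -> bool {
    let ttl_dead = ttl.map_or(false, |d| now.duration_since(last_modified) >= d);
    let tti_dead = tti.map_or(false, |d| now.duration_since(last_accessed) >= d);
    ttl_dead || tti_dead
}

fn main() {
    let t0 = Instant::now();
    let now = t0 + Duration::from_secs(10);
    // TTL of 5s has elapsed since the write; TTI of 30s has not.
    let ttl = Some(Duration::from_secs(5));
    let tti = Some(Duration::from_secs(30));
    assert!(is_expired(t0, t0, ttl, tti, now));
    // With no policies configured, nothing ever expires.
    assert!(!is_expired(t0, t0, None, None, now));
}
```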
147147+148148+ pub(crate) fn get_with_hash<Q>(&self, key: &Q, hash: u64) -> Option<V>
149149+ where
150150+ Arc<K>: Borrow<Q>,
151151+ Q: Hash + Eq + ?Sized,
152152+ {
153153+ let record = |op, now| {
154154+ self.record_read_op(op, now)
155155+ .expect("Failed to record a get op");
156156+ };
157157+ let now = self.inner.current_time_from_expiration_clock();
158158+159159+ match self.inner.get(key) {
160160+ None => {
161161+ record(ReadOp::Miss(hash), now);
162162+ None
163163+ }
164164+ Some(entry) => {
165165+ let i = &self.inner;
166166+ let (ttl, tti, va) = (&i.time_to_live(), &i.time_to_idle(), &i.valid_after());
167167+ let arc_entry = &*entry;
168168+169169+ if is_expired_entry_wo(ttl, va, arc_entry, now)
170170+ || is_expired_entry_ao(tti, va, arc_entry, now)
171171+ {
172172+ // Drop the entry to avoid deadlocking with record_read_op.
173173+ std::mem::drop(entry);
174174+ // Expired or invalidated entry. Record this access as a cache miss
175175+ // rather than a hit.
176176+ record(ReadOp::Miss(hash), now);
177177+ None
178178+ } else {
179179+ // Valid entry.
180180+ let v = arc_entry.value.clone();
181181+ let e = TrioArc::clone(arc_entry);
182182+ // Drop the entry to avoid deadlocking with record_read_op.
183183+ std::mem::drop(entry);
184184+ record(ReadOp::Hit(hash, e, now), now);
185185+ Some(v)
186186+ }
187187+ }
188188+ }
189189+ }
190190+191191+ #[inline]
192192+ pub(crate) fn remove_entry<Q>(&self, key: &Q) -> Option<KvEntry<K, V>>
193193+ where
194194+ Arc<K>: Borrow<Q>,
195195+ Q: Hash + Eq + ?Sized,
196196+ {
197197+ self.inner.remove_entry(key)
198198+ }
199199+200200+ #[inline]
201201+ pub(crate) fn apply_reads_writes_if_needed(
202202+ inner: &impl InnerSync,
203203+ ch: &Sender<WriteOp<K, V>>,
204204+ now: Instant,
205205+ housekeeper: Option<&Arc<Housekeeper>>,
206206+ ) {
207207+ let w_len = ch.len();
208208+209209+ if let Some(hk) = housekeeper {
210210+ if hk.should_apply_writes(w_len, now) {
211211+ hk.try_sync(inner);
212212+ }
213213+ }
214214+ }
215215+216216+ pub(crate) fn invalidate_all(&self) {
217217+ let now = self.inner.current_time_from_expiration_clock();
218218+ self.inner.set_valid_after(now);
219219+ }
220220+}
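Note that `invalidate_all` does no O(n) sweep: it only records "valid after = now", and entries written at or before that instant are lazily treated as gone on later reads and during housekeeping. A minimal standalone model of that idea (all names here are illustrative):

```rust
use std::time::Instant;

// Lazy whole-cache invalidation via a cut-off timestamp.
struct InvalidationState {
    valid_after: Option<Instant>,
}

impl InvalidationState {
    fn invalidate_all(&mut self) {
        self.valid_after = Some(Instant::now());
    }

    // An entry is valid only if it was written strictly after the cut-off.
    fn is_valid(&self, written_at: Instant) -> bool {
        self.valid_after.map_or(true, |va| written_at > va)
    }
}

fn main() {
    let mut state = InvalidationState { valid_after: None };
    let written_at = Instant::now();
    assert!(state.is_valid(written_at)); // nothing invalidated yet
    state.invalidate_all();
    assert!(!state.is_valid(written_at)); // written before the cut-off
}
```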
221221+222222+// Clippy beta 0.1.83 (f41c7ed9889 2024-10-31) warns about unused lifetimes on 'a.
223223+// This seems to be a false positive. The lifetimes are used in the trait bounds.
224224+// https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_lifetimes
225225+#[allow(clippy::extra_unused_lifetimes)]
226226+impl<'a, K, V, S> BaseCache<K, V, S>
227227+where
228228+ K: 'a + Eq + Hash,
229229+ V: 'a,
230230+ S: BuildHasher + Clone,
231231+{
232232+ pub(crate) fn iter(&self) -> Iter<'_, K, V, S> {
233233+ Iter::new(self, self.inner.iter())
234234+ }
235235+}
236236+237237+impl<K, V, S> BaseCache<K, V, S> {
238238+ pub(crate) fn is_expired_entry(&self, entry: &TrioArc<ValueEntry<K, V>>) -> bool {
239239+ let i = &self.inner;
240240+ let (ttl, tti, va) = (&i.time_to_live(), &i.time_to_idle(), &i.valid_after());
241241+ let now = i.current_time_from_expiration_clock();
242242+243243+ is_expired_entry_wo(ttl, va, entry, now) || is_expired_entry_ao(tti, va, entry, now)
244244+ }
245245+}
246246+247247+//
248248+// private methods
249249+//
250250+impl<K, V, S> BaseCache<K, V, S>
251251+where
252252+ K: Hash + Eq + Send + Sync + 'static,
253253+ V: Clone + Send + Sync + 'static,
254254+ S: BuildHasher + Clone + Send + Sync + 'static,
255255+{
256256+ #[inline]
257257+ fn record_read_op(
258258+ &self,
259259+ op: ReadOp<K, V>,
260260+ now: Instant,
261261+ ) -> Result<(), TrySendError<ReadOp<K, V>>> {
262262+ self.apply_reads_if_needed(self.inner.as_ref(), now);
263263+ let ch = &self.read_op_ch;
264264+ match ch.try_send(op) {
265265+ // Discard the ReadOp when the channel is full.
266266+ Ok(()) | Err(TrySendError::Full(_)) => Ok(()),
267267+ Err(e @ TrySendError::Disconnected(_)) => Err(e),
268268+ }
269269+ }
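The read log is best-effort: when the bounded channel is full the op is silently discarded, and only a disconnected receiver surfaces as an error. The same "drop on full, fail on disconnect" pattern, sketched with std's bounded channel instead of crossbeam's (`record` is an illustrative name):

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

// Non-blocking send that treats a full channel as success.
fn record(ch: &SyncSender<u64>, op: u64) -> Result<(), &'static str> {
    match ch.try_send(op) {
        // Discard the op when the channel is full.
        Ok(()) | Err(TrySendError::Full(_)) => Ok(()),
        Err(TrySendError::Disconnected(_)) => Err("receiver disconnected"),
    }
}

fn main() {
    let (tx, rx) = sync_channel(1);
    assert!(record(&tx, 1).is_ok()); // accepted
    assert!(record(&tx, 2).is_ok()); // full: discarded, still Ok
    assert_eq!(rx.recv().unwrap(), 1);
    drop(rx);
    assert!(record(&tx, 3).is_err()); // disconnected: surfaced
}
```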
270270+271271+ #[inline]
272272+ pub(crate) fn do_insert_with_hash(
273273+ &self,
274274+ key: Arc<K>,
275275+ hash: u64,
276276+ value: V,
277277+ ) -> (WriteOp<K, V>, Instant) {
278278+ let ts = self.inner.current_time_from_expiration_clock();
279279+ let weight = self.inner.weigh(&key, &value);
280280+ let mut insert_op = None;
281281+ let mut update_op = None;
282282+283283+ self.inner
284284+ .cache
285285+ .entry(Arc::clone(&key))
286286+ // Update
287287+ .and_modify(|entry| {
288288+ // NOTES on `new_value_entry_from` method:
289289+ // 1. The internal EntryInfo will be shared between the old and new
290290+ // ValueEntries.
291291+ // 2. This method will set the dirty flag to prevent this new
292292+ // ValueEntry from being evicted by an expiration policy.
293293+ // 3. This method will update the policy_weight with the new weight.
294294+ let old_weight = entry.policy_weight();
295295+ *entry = self.new_value_entry_from(value.clone(), ts, weight, entry);
296296+ update_op = Some(WriteOp::Upsert {
297297+ key_hash: KeyHash::new(Arc::clone(&key), hash),
298298+ value_entry: TrioArc::clone(entry),
299299+ old_weight,
300300+ new_weight: weight,
301301+ });
302302+ })
303303+ // Insert
304304+ .or_insert_with(|| {
305305+ let entry = self.new_value_entry(value.clone(), ts, weight);
306306+ insert_op = Some(WriteOp::Upsert {
307307+ key_hash: KeyHash::new(Arc::clone(&key), hash),
308308+ value_entry: TrioArc::clone(&entry),
309309+ old_weight: 0,
310310+ new_weight: weight,
311311+ });
312312+ entry
313313+ });
314314+315315+ match (insert_op, update_op) {
316316+ (Some(ins_op), None) => (ins_op, ts),
317317+ (None, Some(upd_op)) => (upd_op, ts),
318318+ _ => unreachable!(),
319319+ }
320320+ }
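The upsert above relies on DashMap's entry API running exactly one of the two closures, which is why exactly one of `insert_op` / `update_op` ends up populated and the final `match` cannot fail. The same shape with std's `HashMap` entry API (`upsert` and its return values are illustrative):

```rust
use std::collections::HashMap;

// Insert-or-update through the entry API; exactly one branch runs.
fn upsert(map: &mut HashMap<&'static str, u32>, key: &'static str, value: u32) -> &'static str {
    let mut kind = "insert";
    map.entry(key)
        .and_modify(|v| {
            *v = value; // existing key: update in place
            kind = "update";
        })
        .or_insert(value); // missing key: insert
    kind
}

fn main() {
    let mut map = HashMap::new();
    assert_eq!(upsert(&mut map, "a", 1), "insert");
    assert_eq!(upsert(&mut map, "a", 2), "update");
    assert_eq!(map["a"], 2);
}
```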
321321+322322+ #[inline]
323323+ fn new_value_entry(
324324+ &self,
325325+ value: V,
326326+ timestamp: Instant,
327327+ policy_weight: u32,
328328+ ) -> TrioArc<ValueEntry<K, V>> {
329329+ let info = TrioArc::new(EntryInfo::new(timestamp, policy_weight));
330330+ TrioArc::new(ValueEntry::new(value, info))
331331+ }
332332+333333+ #[inline]
334334+ fn new_value_entry_from(
335335+ &self,
336336+ value: V,
337337+ timestamp: Instant,
338338+ policy_weight: u32,
339339+ other: &ValueEntry<K, V>,
340340+ ) -> TrioArc<ValueEntry<K, V>> {
341341+ let info = TrioArc::clone(other.entry_info());
342342+ // To prevent this updated ValueEntry from being evicted by an expiration policy,
343343+ // set the dirty flag to true. It will be reset to false when the write is applied.
344344+ info.set_dirty(true);
345345+ info.set_last_accessed(timestamp);
346346+ info.set_last_modified(timestamp);
347347+ info.set_policy_weight(policy_weight);
348348+ TrioArc::new(ValueEntry::new(value, info))
349349+ }
350350+351351+ #[inline]
352352+ fn apply_reads_if_needed(&self, inner: &impl InnerSync, now: Instant) {
353353+ let len = self.read_op_ch.len();
354354+355355+ if let Some(hk) = &self.housekeeper {
356356+ if hk.should_apply_reads(len, now) {
357357+ hk.try_sync(inner);
358358+ }
361361+ }
362362+ }
363363+364364+ #[inline]
365365+ pub(crate) fn current_time_from_expiration_clock(&self) -> Instant {
366366+ self.inner.current_time_from_expiration_clock()
367367+ }
368368+}
369369+370370+//
371371+// for testing
372372+//
373373+#[cfg(test)]
374374+impl<K, V, S> BaseCache<K, V, S>
375375+where
376376+ K: Hash + Eq + Send + Sync + 'static,
377377+ V: Clone + Send + Sync + 'static,
378378+ S: BuildHasher + Clone + Send + Sync + 'static,
379379+{
380380+ pub(crate) fn reconfigure_for_testing(&mut self) {
381381+ // Enable the frequency sketch.
382382+ self.inner.enable_frequency_sketch_for_testing();
383383+ }
384384+385385+ pub(crate) fn set_expiration_clock(&self, clock: Option<Clock>) {
386386+ self.inner.set_expiration_clock(clock);
387387+ }
388388+}
389389+390390+struct EvictionCounters {
391391+ entry_count: u64,
392392+ weighted_size: u64,
393393+}
394394+395395+impl EvictionCounters {
396396+ #[inline]
397397+ fn new(entry_count: u64, weighted_size: u64) -> Self {
398398+ Self {
399399+ entry_count,
400400+ weighted_size,
401401+ }
402402+ }
403403+404404+ #[inline]
405405+ fn saturating_add(&mut self, entry_count: u64, weight: u32) {
406406+ self.entry_count += entry_count;
407407+ let total = &mut self.weighted_size;
408408+ *total = total.saturating_add(weight as u64);
409409+ }
410410+411411+ #[inline]
412412+ fn saturating_sub(&mut self, entry_count: u64, weight: u32) {
413413+ self.entry_count -= entry_count;
414414+ let total = &mut self.weighted_size;
415415+ *total = total.saturating_sub(weight as u64);
416416+ }
417417+}
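The counters use saturating arithmetic because policy weights come from a user-supplied weigher: the running weighted total must clamp at the `u64` bounds rather than wrap on extreme inputs. A tiny demonstration of the clamping behavior:

```rust
fn main() {
    // saturating_add clamps at u64::MAX instead of wrapping on overflow.
    assert_eq!((u64::MAX - 5).saturating_add(100), u64::MAX);
    // saturating_sub clamps at zero instead of wrapping on underflow.
    assert_eq!(5u64.saturating_sub(100), 0);
}
```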
418418+419419+#[derive(Default)]
420420+struct EntrySizeAndFrequency {
421421+ policy_weight: u64,
422422+ freq: u32,
423423+}
424424+425425+impl EntrySizeAndFrequency {
426426+ fn new(policy_weight: u32) -> Self {
427427+ Self {
428428+ policy_weight: policy_weight as u64,
429429+ ..Default::default()
430430+ }
431431+ }
432432+433433+ fn add_policy_weight(&mut self, weight: u32) {
434434+ self.policy_weight += weight as u64;
435435+ }
436436+437437+ fn add_frequency(&mut self, freq: &FrequencySketch, hash: u64) {
438438+ self.freq += freq.frequency(hash) as u32;
439439+ }
440440+}
441441+442442+// Access-Order Queue Node
443443+type AoqNode<K> = NonNull<DeqNode<KeyHashDate<K>>>;
444444+445445+enum AdmissionResult<K> {
446446+ Admitted {
447447+ victim_nodes: SmallVec<[AoqNode<K>; 8]>,
448448+ skipped_nodes: SmallVec<[AoqNode<K>; 4]>,
449449+ },
450450+ Rejected {
451451+ skipped_nodes: SmallVec<[AoqNode<K>; 4]>,
452452+ },
453453+}
454454+455455+type CacheStore<K, V, S> = dashmap::DashMap<Arc<K>, TrioArc<ValueEntry<K, V>>, S>;
456456+457457+type CacheEntryRef<'a, K, V> = DashMapRef<'a, Arc<K>, TrioArc<ValueEntry<K, V>>>;
458458+459459+pub(crate) struct Inner<K, V, S> {
460460+ max_capacity: Option<u64>,
461461+ entry_count: AtomicCell<u64>,
462462+ weighted_size: AtomicCell<u64>,
463463+ cache: CacheStore<K, V, S>,
464464+ build_hasher: S,
465465+ deques: Mutex<Deques<K>>,
466466+ frequency_sketch: RwLock<FrequencySketch>,
467467+ frequency_sketch_enabled: AtomicBool,
468468+ read_op_ch: Receiver<ReadOp<K, V>>,
469469+ write_op_ch: Receiver<WriteOp<K, V>>,
470470+ time_to_live: Option<Duration>,
471471+ time_to_idle: Option<Duration>,
472472+ valid_after: AtomicInstant,
473473+ weigher: Option<Weigher<K, V>>,
474474+ has_expiration_clock: AtomicBool,
475475+ expiration_clock: RwLock<Option<Clock>>,
476476+}

// functions/methods used by BaseCache
impl<K, V, S> Inner<K, V, S>
where
    K: Hash + Eq + Send + Sync + 'static,
    V: Send + Sync + 'static,
    S: BuildHasher + Clone,
{
    // Disable a Clippy warning for having more than seven arguments.
    // https://rust-lang.github.io/rust-clippy/master/index.html#too_many_arguments
    #[allow(clippy::too_many_arguments)]
    fn new(
        max_capacity: Option<u64>,
        initial_capacity: Option<usize>,
        build_hasher: S,
        weigher: Option<Weigher<K, V>>,
        read_op_ch: Receiver<ReadOp<K, V>>,
        write_op_ch: Receiver<WriteOp<K, V>>,
        time_to_live: Option<Duration>,
        time_to_idle: Option<Duration>,
    ) -> Self {
        let initial_capacity = initial_capacity
            .map(|cap| cap + WRITE_LOG_SIZE)
            .unwrap_or_default();
        let cache =
            dashmap::DashMap::with_capacity_and_hasher(initial_capacity, build_hasher.clone());

        Self {
            max_capacity,
            entry_count: Default::default(),
            weighted_size: Default::default(),
            cache,
            build_hasher,
            deques: Mutex::new(Default::default()),
            frequency_sketch: RwLock::new(Default::default()),
            frequency_sketch_enabled: Default::default(),
            read_op_ch,
            write_op_ch,
            time_to_live,
            time_to_idle,
            valid_after: Default::default(),
            weigher,
            has_expiration_clock: AtomicBool::new(false),
            expiration_clock: RwLock::new(None),
        }
    }

    #[inline]
    fn hash<Q>(&self, key: &Q) -> u64
    where
        Arc<K>: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.build_hasher.hash_one(key)
    }

    #[inline]
    fn get<Q>(&self, key: &Q) -> Option<CacheEntryRef<'_, K, V>>
    where
        Arc<K>: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.cache.get(key)
    }

    #[inline]
    fn remove_entry<Q>(&self, key: &Q) -> Option<KvEntry<K, V>>
    where
        Arc<K>: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.cache
            .remove(key)
            .map(|(key, entry)| KvEntry::new(key, entry))
    }
}

// functions/methods used by BaseCache
impl<K, V, S> Inner<K, V, S> {
    fn policy(&self) -> Policy {
        Policy::new(self.max_capacity, self.time_to_live, self.time_to_idle)
    }

    #[inline]
    fn time_to_live(&self) -> Option<Duration> {
        self.time_to_live
    }

    #[inline]
    fn time_to_idle(&self) -> Option<Duration> {
        self.time_to_idle
    }

    #[inline]
    fn entry_count(&self) -> u64 {
        self.entry_count.load()
    }

    #[inline]
    pub(crate) fn weighted_size(&self) -> u64 {
        self.weighted_size.load()
    }

    #[inline]
    fn has_expiry(&self) -> bool {
        self.time_to_live.is_some() || self.time_to_idle.is_some()
    }

    #[inline]
    fn is_write_order_queue_enabled(&self) -> bool {
        self.time_to_live.is_some()
    }

    #[inline]
    fn valid_after(&self) -> Option<Instant> {
        self.valid_after.instant()
    }

    #[inline]
    fn set_valid_after(&self, timestamp: Instant) {
        self.valid_after.set_instant(timestamp);
    }

    #[inline]
    fn has_valid_after(&self) -> bool {
        self.valid_after.is_set()
    }

    #[inline]
    fn weigh(&self, key: &K, value: &V) -> u32 {
        self.weigher.as_ref().map(|w| w(key, value)).unwrap_or(1)
    }

    #[inline]
    fn current_time_from_expiration_clock(&self) -> Instant {
        if self.has_expiration_clock.load(Ordering::Relaxed) {
            Instant::new(
                self.expiration_clock
                    .read()
                    .expect("lock poisoned")
                    .as_ref()
                    .expect("Cannot get the expiration clock")
                    .now(),
            )
        } else {
            Instant::now()
        }
    }
}

// Clippy beta 0.1.83 (f41c7ed9889 2024-10-31) warns about unused lifetimes on 'a.
// This seems to be a false positive. The lifetimes are used in the trait bounds.
// https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_lifetimes
#[allow(clippy::extra_unused_lifetimes)]
impl<'a, K, V, S> Inner<K, V, S>
where
    K: 'a + Eq + Hash,
    V: 'a,
    S: BuildHasher + Clone,
{
    fn iter(&self) -> DashMapIter<'_, K, V, S> {
        self.cache.iter()
    }
}

mod batch_size {
    pub(crate) const EVICTION_BATCH_SIZE: usize = 500;
}

// TODO: Divide this method into smaller methods so that unit tests can do more
// precise testing.
// - sync_reads
// - sync_writes
// - evict
// - invalidate_entries
impl<K, V, S> InnerSync for Inner<K, V, S>
where
    K: Hash + Eq + Send + Sync + 'static,
    V: Send + Sync + 'static,
    S: BuildHasher + Clone + Send + Sync + 'static,
{
    fn sync(&self, max_repeats: usize) {
        let mut deqs = self.deques.lock().expect("lock poisoned");
        let mut calls = 0;
        let mut should_sync = true;

        let current_ec = self.entry_count.load();
        let current_ws = self.weighted_size.load();
        let mut counters = EvictionCounters::new(current_ec, current_ws);

        while should_sync && calls <= max_repeats {
            let r_len = self.read_op_ch.len();
            if r_len > 0 {
                self.apply_reads(&mut deqs, r_len);
            }

            let w_len = self.write_op_ch.len();
            if w_len > 0 {
                self.apply_writes(&mut deqs, w_len, &mut counters);
            }

            if self.should_enable_frequency_sketch(&counters) {
                self.enable_frequency_sketch(&counters);
            }

            calls += 1;
            should_sync = self.read_op_ch.len() >= READ_LOG_FLUSH_POINT
                || self.write_op_ch.len() >= WRITE_LOG_FLUSH_POINT;
        }

        if self.has_expiry() || self.has_valid_after() {
            self.evict_expired(&mut deqs, batch_size::EVICTION_BATCH_SIZE, &mut counters);
        }

        // Evict if this cache has more entries than its capacity.
        let weights_to_evict = self.weights_to_evict(&counters);
        if weights_to_evict > 0 {
            self.evict_lru_entries(
                &mut deqs,
                batch_size::EVICTION_BATCH_SIZE,
                weights_to_evict,
                &mut counters,
            );
        }

        debug_assert_eq!(self.entry_count.load(), current_ec);
        debug_assert_eq!(self.weighted_size.load(), current_ws);
        self.entry_count.store(counters.entry_count);
        self.weighted_size.store(counters.weighted_size);
    }

    fn now(&self) -> Instant {
        self.current_time_from_expiration_clock()
    }
}
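
The `sync` method above drains the buffered read and write operations in batches, looping while either channel stays at or above its flush point, but capping the number of passes at `max_repeats` so a busy writer cannot pin the syncing thread indefinitely. A minimal standalone sketch of that bounded loop, with a single abstract pending-op counter standing in for the two channels (`drain`, `batch`, and `flush_point` are hypothetical names, not this crate's API):

```rust
// Sketch of the bounded drain loop: apply one batch of pending operations per
// pass, keep going while the backlog is still at/above the flush point, and
// stop after at most `max_repeats + 1` passes (the loop condition uses `<=`,
// mirroring `calls <= max_repeats` in `sync`).
fn drain(mut pending: usize, batch: usize, flush_point: usize, max_repeats: usize) -> (usize, usize) {
    let mut calls = 0;
    let mut should_sync = true;
    while should_sync && calls <= max_repeats {
        pending = pending.saturating_sub(batch); // stand-in for apply_reads/apply_writes
        calls += 1;
        should_sync = pending >= flush_point;
    }
    (pending, calls)
}

fn main() {
    // A backlog of 1000 ops, 300 applied per pass, flush point 64:
    // passes leave 700, 400, 100 (still >= 64), then 0, so 4 passes total.
    assert_eq!(drain(1000, 300, 64, 10), (0, 4));
    // With a tiny batch, the cap stops the loop even though the backlog remains.
    let (left, calls) = drain(1000, 10, 64, 3);
    assert!(left > 0 && calls == 4);
}
```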

//
// private methods
//
impl<K, V, S> Inner<K, V, S>
where
    K: Hash + Eq + Send + Sync + 'static,
    V: Send + Sync + 'static,
    S: BuildHasher + Clone + Send + Sync + 'static,
{
    fn has_enough_capacity(&self, candidate_weight: u32, counters: &EvictionCounters) -> bool {
        self.max_capacity
            .map(|limit| counters.weighted_size + candidate_weight as u64 <= limit)
            .unwrap_or(true)
    }

    fn weights_to_evict(&self, counters: &EvictionCounters) -> u64 {
        self.max_capacity
            .map(|limit| counters.weighted_size.saturating_sub(limit))
            .unwrap_or_default()
    }

    #[inline]
    fn should_enable_frequency_sketch(&self, counters: &EvictionCounters) -> bool {
        if self.frequency_sketch_enabled.load(Ordering::Acquire) {
            false
        } else if let Some(max_cap) = self.max_capacity {
            counters.weighted_size >= max_cap / 2
        } else {
            false
        }
    }

    #[inline]
    fn enable_frequency_sketch(&self, counters: &EvictionCounters) {
        if let Some(max_cap) = self.max_capacity {
            let c = counters;
            let cap = if self.weigher.is_none() {
                max_cap
            } else {
                (c.entry_count as f64 * (c.weighted_size as f64 / max_cap as f64)) as u64
            };
            self.do_enable_frequency_sketch(cap);
        }
    }

    #[cfg(test)]
    fn enable_frequency_sketch_for_testing(&self) {
        if let Some(max_cap) = self.max_capacity {
            self.do_enable_frequency_sketch(max_cap);
        }
    }

    #[inline]
    fn do_enable_frequency_sketch(&self, cache_capacity: u64) {
        let skt_capacity = common::sketch_capacity(cache_capacity);
        self.frequency_sketch
            .write()
            .expect("lock poisoned")
            .ensure_capacity(skt_capacity);
        self.frequency_sketch_enabled.store(true, Ordering::Release);
    }

    fn apply_reads(&self, deqs: &mut Deques<K>, count: usize) {
        use ReadOp::*;
        let mut freq = self.frequency_sketch.write().expect("lock poisoned");
        let ch = &self.read_op_ch;
        for _ in 0..count {
            match ch.try_recv() {
                Ok(Hit(hash, entry, timestamp)) => {
                    freq.increment(hash);
                    entry.set_last_accessed(timestamp);
                    if entry.is_admitted() {
                        deqs.move_to_back_ao(&entry);
                    }
                }
                Ok(Miss(hash)) => freq.increment(hash),
                Err(_) => break,
            }
        }
    }

    fn apply_writes(&self, deqs: &mut Deques<K>, count: usize, counters: &mut EvictionCounters) {
        use WriteOp::*;
        let freq = self.frequency_sketch.read().expect("lock poisoned");
        let ch = &self.write_op_ch;

        for _ in 0..count {
            match ch.try_recv() {
                Ok(Upsert {
                    key_hash: kh,
                    value_entry: entry,
                    old_weight,
                    new_weight,
                }) => self.handle_upsert(kh, entry, old_weight, new_weight, deqs, &freq, counters),
                Ok(Remove(KvEntry { key: _key, entry })) => {
                    Self::handle_remove(deqs, entry, counters)
                }
                Err(_) => break,
            };
        }
    }

    #[allow(clippy::too_many_arguments)]
    fn handle_upsert(
        &self,
        kh: KeyHash<K>,
        entry: TrioArc<ValueEntry<K, V>>,
        old_weight: u32,
        new_weight: u32,
        deqs: &mut Deques<K>,
        freq: &FrequencySketch,
        counters: &mut EvictionCounters,
    ) {
        entry.set_dirty(false);

        if entry.is_admitted() {
            // The entry has already been admitted, so treat this as an update.
            counters.saturating_sub(0, old_weight);
            counters.saturating_add(0, new_weight);
            deqs.move_to_back_ao(&entry);
            deqs.move_to_back_wo(&entry);
            return;
        }

        if self.has_enough_capacity(new_weight, counters) {
            // There is enough room in the cache (or the cache is unbounded).
            // Add the candidate to the deques.
            self.handle_admit(kh, &entry, new_weight, deqs, counters);
            return;
        }

        if let Some(max) = self.max_capacity {
            if new_weight as u64 > max {
                // The candidate is too big to fit in the cache. Reject it.
                self.cache.remove(&Arc::clone(&kh.key));
                return;
            }
        }

        let skipped_nodes;
        let mut candidate = EntrySizeAndFrequency::new(new_weight);
        candidate.add_frequency(freq, kh.hash);

        // Try to admit the candidate.
        match Self::admit(&candidate, &self.cache, deqs, freq) {
            AdmissionResult::Admitted {
                victim_nodes,
                skipped_nodes: mut skipped,
            } => {
                // Try to remove the victims from the cache (hash map).
                for victim in victim_nodes {
                    if let Some((_vic_key, vic_entry)) =
                        self.cache.remove(unsafe { victim.as_ref().element.key() })
                    {
                        // And then remove the victim from the deques.
                        Self::handle_remove(deqs, vic_entry, counters);
                    } else {
                        // Could not remove the victim from the cache. Skip this
                        // victim node as its ValueEntry might have been
                        // invalidated. Add it to the skipped nodes.
                        skipped.push(victim);
                    }
                }
                skipped_nodes = skipped;

                // Add the candidate to the deques.
                self.handle_admit(kh, &entry, new_weight, deqs, counters);
            }
            AdmissionResult::Rejected { skipped_nodes: s } => {
                skipped_nodes = s;
                // Remove the candidate from the cache (hash map).
                self.cache.remove(&Arc::clone(&kh.key));
            }
        };

        // Move the skipped nodes to the back of the deque. We do not unlink
        // (drop) them because the ValueEntries in the write op queue should
        // still be pointing to them.
        for node in skipped_nodes {
            unsafe { deqs.probation.move_to_back(node) };
        }
    }

    /// Performs the size-aware admission explained in the paper:
    /// [Lightweight Robust Size Aware Cache Management][size-aware-cache-paper]
    /// by Gil Einziger, Ohad Eytan, Roy Friedman, Ben Manes.
    ///
    /// [size-aware-cache-paper]: https://arxiv.org/abs/2105.08770
    ///
    /// There are some modifications in this implementation:
    /// - To be admitted to the main space, the candidate's frequency must be
    ///   higher than the aggregated frequencies of the potential victims. (In the
    ///   paper, the `>=` operator is used rather than `>`.) The `>` operator does
    ///   a better job of preventing the main space from being polluted.
    /// - When a candidate is rejected, the potential victims stay at the LRU
    ///   position of the probation access-order queue. (In the paper, they are
    ///   promoted, presumably to the MRU position, to force the eviction policy
    ///   to select a different set of victims for the next candidate.) We may
    ///   implement the paper's behavior later.
    #[inline]
    fn admit(
        candidate: &EntrySizeAndFrequency,
        cache: &CacheStore<K, V, S>,
        deqs: &Deques<K>,
        freq: &FrequencySketch,
    ) -> AdmissionResult<K> {
        const MAX_CONSECUTIVE_RETRIES: usize = 5;
        let mut retries = 0;

        let mut victims = EntrySizeAndFrequency::default();
        let mut victim_nodes = SmallVec::default();
        let mut skipped_nodes = SmallVec::default();

        // Get the first potential victim at the LRU position.
        let mut next_victim = deqs.probation.peek_front_ptr();

        // Aggregate potential victims.
        while victims.policy_weight < candidate.policy_weight {
            if candidate.freq < victims.freq {
                break;
            }
            if let Some(victim) = next_victim.take() {
                next_victim = DeqNode::next_node_ptr(victim);
                let vic_elem = &unsafe { victim.as_ref() }.element;

                if let Some(vic_entry) = cache.get(vic_elem.key()) {
                    victims.add_policy_weight(vic_entry.policy_weight());
                    victims.add_frequency(freq, vic_elem.hash());
                    victim_nodes.push(victim);
                    retries = 0;
                } else {
                    // Could not get the victim from the cache (hash map). Skip
                    // this node as its ValueEntry might have been invalidated.
                    skipped_nodes.push(victim);

                    retries += 1;
                    if retries > MAX_CONSECUTIVE_RETRIES {
                        break;
                    }
                }
            } else {
                // No more potential victims.
                break;
            }
        }

        // Admit or reject the candidate.

        // TODO: Implement some randomness to mitigate hash DoS attacks.
        // See Caffeine's implementation.

        if victims.policy_weight >= candidate.policy_weight && candidate.freq > victims.freq {
            AdmissionResult::Admitted {
                victim_nodes,
                skipped_nodes,
            }
        } else {
            AdmissionResult::Rejected { skipped_nodes }
        }
    }

    fn handle_admit(
        &self,
        kh: KeyHash<K>,
        entry: &TrioArc<ValueEntry<K, V>>,
        policy_weight: u32,
        deqs: &mut Deques<K>,
        counters: &mut EvictionCounters,
    ) {
        let key = Arc::clone(&kh.key);
        counters.saturating_add(1, policy_weight);
        deqs.push_back_ao(
            CacheRegion::MainProbation,
            KeyHashDate::new(kh, entry.entry_info()),
            entry,
        );
        if self.is_write_order_queue_enabled() {
            deqs.push_back_wo(KeyDate::new(key, entry.entry_info()), entry);
        }
        entry.set_admitted(true);
    }

    fn handle_remove(
        deqs: &mut Deques<K>,
        entry: TrioArc<ValueEntry<K, V>>,
        counters: &mut EvictionCounters,
    ) {
        if entry.is_admitted() {
            entry.set_admitted(false);
            counters.saturating_sub(1, entry.policy_weight());
            // The following two unlink_* functions will unset the deq nodes.
            deqs.unlink_ao(&entry);
            Deques::unlink_wo(&mut deqs.write_order, &entry);
        } else {
            entry.unset_q_nodes();
        }
    }

    fn handle_remove_with_deques(
        ao_deq_name: &str,
        ao_deq: &mut Deque<KeyHashDate<K>>,
        wo_deq: &mut Deque<KeyDate<K>>,
        entry: TrioArc<ValueEntry<K, V>>,
        counters: &mut EvictionCounters,
    ) {
        if entry.is_admitted() {
            entry.set_admitted(false);
            counters.saturating_sub(1, entry.policy_weight());
            // The following two unlink_* functions will unset the deq nodes.
            Deques::unlink_ao_from_deque(ao_deq_name, ao_deq, &entry);
            Deques::unlink_wo(wo_deq, &entry);
        } else {
            entry.unset_q_nodes();
        }
    }

    fn evict_expired(
        &self,
        deqs: &mut Deques<K>,
        batch_size: usize,
        counters: &mut EvictionCounters,
    ) {
        let now = self.current_time_from_expiration_clock();

        if self.is_write_order_queue_enabled() {
            self.remove_expired_wo(deqs, batch_size, now, counters);
        }

        if self.time_to_idle.is_some() || self.has_valid_after() {
            let (window, probation, protected, wo) = (
                &mut deqs.window,
                &mut deqs.probation,
                &mut deqs.protected,
                &mut deqs.write_order,
            );

            let mut rm_expired_ao =
                |name, deq| self.remove_expired_ao(name, deq, wo, batch_size, now, counters);

            rm_expired_ao("window", window);
            rm_expired_ao("probation", probation);
            rm_expired_ao("protected", protected);
        }
    }

    #[inline]
    fn remove_expired_ao(
        &self,
        deq_name: &str,
        deq: &mut Deque<KeyHashDate<K>>,
        write_order_deq: &mut Deque<KeyDate<K>>,
        batch_size: usize,
        now: Instant,
        counters: &mut EvictionCounters,
    ) {
        let tti = &self.time_to_idle;
        let va = &self.valid_after();
        for _ in 0..batch_size {
            // Peek the front node of the deque and check if it is expired.
            let key = deq.peek_front().and_then(|node| {
                // TODO: Skip the entry if it is dirty. See the
                // `evict_lru_entries` method as an example.
                if is_expired_entry_ao(tti, va, node, now) {
                    Some(Arc::clone(node.element.key()))
                } else {
                    None
                }
            });

            if key.is_none() {
                break;
            }

            let key = key.as_ref().unwrap();

            // Remove the key from the map only when the entry is really expired.
            // This check is needed because it is possible that the entry in the
            // map has been updated or deleted, but the deque node we checked
            // above has not been updated yet.
            let maybe_entry = self
                .cache
                .remove_if(key, |_, v| is_expired_entry_ao(tti, va, v, now));

            if let Some((_k, entry)) = maybe_entry {
                Self::handle_remove_with_deques(deq_name, deq, write_order_deq, entry, counters);
            } else if !self.try_skip_updated_entry(key, deq_name, deq, write_order_deq) {
                break;
            }
        }
    }

    #[inline]
    fn try_skip_updated_entry(
        &self,
        key: &K,
        deq_name: &str,
        deq: &mut Deque<KeyHashDate<K>>,
        write_order_deq: &mut Deque<KeyDate<K>>,
    ) -> bool {
        if let Some(entry) = self.cache.get(key) {
            if entry.is_dirty() {
                // The key exists and the entry has been updated.
                Deques::move_to_back_ao_in_deque(deq_name, deq, &entry);
                Deques::move_to_back_wo_in_deque(write_order_deq, &entry);
                true
            } else {
                // The key exists, but the entry is in an unexpected state.
                false
            }
        } else {
            // Skip this entry as the key might have been invalidated. Since the
            // invalidated ValueEntry (which should still be in the write op
            // queue) has a pointer to this node, move the node to the back of
            // the deque instead of popping (dropping) it.
            deq.move_front_to_back();
            true
        }
    }

    #[inline]
    fn remove_expired_wo(
        &self,
        deqs: &mut Deques<K>,
        batch_size: usize,
        now: Instant,
        counters: &mut EvictionCounters,
    ) {
        let ttl = &self.time_to_live;
        let va = &self.valid_after();
        for _ in 0..batch_size {
            let key = deqs.write_order.peek_front().and_then(|node| {
                // TODO: Skip the entry if it is dirty. See the
                // `evict_lru_entries` method as an example.
                if is_expired_entry_wo(ttl, va, node, now) {
                    Some(Arc::clone(node.element.key()))
                } else {
                    None
                }
            });

            if key.is_none() {
                break;
            }

            let key = key.as_ref().unwrap();

            let maybe_entry = self
                .cache
                .remove_if(key, |_, v| is_expired_entry_wo(ttl, va, v, now));

            if let Some((_k, entry)) = maybe_entry {
                Self::handle_remove(deqs, entry, counters);
            } else if let Some(entry) = self.cache.get(key) {
                if entry.is_dirty() {
                    deqs.move_to_back_ao(&entry);
                    deqs.move_to_back_wo(&entry);
                } else {
                    // The key exists, but the entry is in an unexpected state. Break.
                    break;
                }
            } else {
                // Skip this entry as the key might have been invalidated. Since the
                // invalidated ValueEntry (which should still be in the write op
                // queue) has a pointer to this node, move the node to the back of
                // the deque instead of popping (dropping) it.
                deqs.write_order.move_front_to_back();
            }
        }
    }
11801180+11811181+ fn evict_lru_entries(
11821182+ &self,
11831183+ deqs: &mut Deques<K>,
11841184+ batch_size: usize,
11851185+ weights_to_evict: u64,
11861186+ counters: &mut EvictionCounters,
11871187+ ) {
11881188+ const DEQ_NAME: &str = "probation";
11891189+ let mut evicted = 0u64;
11901190+ let (deq, write_order_deq) = (&mut deqs.probation, &mut deqs.write_order);
11911191+11921192+ for _ in 0..batch_size {
11931193+ if evicted >= weights_to_evict {
11941194+ break;
11951195+ }
11961196+11971197+ let maybe_key_and_ts = deq.peek_front().map(|node| {
11981198+ let entry_info = node.element.entry_info();
11991199+ (
12001200+ Arc::clone(node.element.key()),
12011201+ entry_info.is_dirty(),
12021202+ entry_info.last_modified(),
12031203+ )
12041204+ });
12051205+12061206+ let (key, ts) = match maybe_key_and_ts {
12071207+ Some((key, false, Some(ts))) => (key, ts),
12081208+ // TODO: Remove the second pattern `Some((_key, false, None))` once we change
12091209+ // `last_modified` and `last_accessed` in `EntryInfo` from `Option<Instant>` to
12101210+ // `Instant`.
12111211+ Some((key, true, _)) | Some((key, false, None)) => {
12121212+ if self.try_skip_updated_entry(&key, DEQ_NAME, deq, write_order_deq) {
12131213+ continue;
12141214+ } else {
12151215+ break;
12161216+ }
12171217+ }
12181218+ None => break,
12191219+ };
12201220+12211221+ let maybe_entry = self.cache.remove_if(&key, |_, v| {
12221222+ if let Some(lm) = v.last_modified() {
12231223+ lm == ts
12241224+ } else {
12251225+ false
12261226+ }
12271227+ });
12281228+12291229+ if let Some((_k, entry)) = maybe_entry {
12301230+ let weight = entry.policy_weight();
12311231+ Self::handle_remove_with_deques(DEQ_NAME, deq, write_order_deq, entry, counters);
12321232+ evicted = evicted.saturating_add(weight as u64);
12331233+ } else if !self.try_skip_updated_entry(&key, DEQ_NAME, deq, write_order_deq) {
12341234+ break;
12351235+ }
12361236+ }
12371237+ }
}

//
// for testing
//
#[cfg(test)]
impl<K, V, S> Inner<K, V, S>
where
    K: Hash + Eq,
    S: BuildHasher + Clone,
{
    fn set_expiration_clock(&self, clock: Option<Clock>) {
        let mut exp_clock = self.expiration_clock.write().expect("lock poisoned");
        if let Some(clock) = clock {
            *exp_clock = Some(clock);
            self.has_expiration_clock.store(true, Ordering::SeqCst);
        } else {
            self.has_expiration_clock.store(false, Ordering::SeqCst);
            *exp_clock = None;
        }
    }
}
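
The `admit` method above makes its decision in two steps: it aggregates potential victims from the LRU end of the probation queue until their combined policy weight covers the candidate (or the victims become at least as popular as the candidate), then admits only when the candidate's estimated frequency is strictly higher than the victims' aggregated frequency. A self-contained sketch of just that decision rule, with a hypothetical `Victim` type standing in for the frequency sketch and deque nodes:

```rust
// Sketch of the size-aware admission rule: not part of this crate.
#[derive(Default)]
struct Victim {
    weight: u64, // policy weight of the victim entry
    freq: u32,   // estimated access frequency of the victim entry
}

fn admit_sketch(candidate_weight: u64, candidate_freq: u32, lru_victims: &[Victim]) -> bool {
    let (mut vic_weight, mut vic_freq) = (0u64, 0u32);
    for v in lru_victims {
        // Stop once the victims already outweigh the candidate, or once they
        // are more popular than it (evicting them cannot be justified then).
        if vic_weight >= candidate_weight || candidate_freq < vic_freq {
            break;
        }
        vic_weight += v.weight;
        vic_freq += v.freq;
    }
    // Note the strict `>`: ties are rejected to keep the main space clean.
    vic_weight >= candidate_weight && candidate_freq > vic_freq
}

fn main() {
    let victims = [Victim { weight: 1, freq: 1 }, Victim { weight: 1, freq: 2 }];
    // A popular candidate of weight 2 displaces two unpopular victims.
    assert!(admit_sketch(2, 10, &victims));
    // An unpopular candidate is rejected, leaving the victims in place.
    assert!(!admit_sketch(2, 2, &victims));
}
```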

//
// private free-standing functions
//
#[inline]
fn is_expired_entry_ao(
    time_to_idle: &Option<Duration>,
    valid_after: &Option<Instant>,
    entry: &impl AccessTime,
    now: Instant,
) -> bool {
    if let Some(ts) = entry.last_accessed() {
        if let Some(va) = valid_after {
            if ts < *va {
                return true;
            }
        }
        if let Some(tti) = time_to_idle {
            return ts.checked_add(*tti).expect("tti overflow") <= now;
        }
    }
    false
}

#[inline]
fn is_expired_entry_wo(
    time_to_live: &Option<Duration>,
    valid_after: &Option<Instant>,
    entry: &impl AccessTime,
    now: Instant,
) -> bool {
    if let Some(ts) = entry.last_modified() {
        if let Some(va) = valid_after {
            if ts < *va {
                return true;
            }
        }
        if let Some(ttl) = time_to_live {
            return ts.checked_add(*ttl).expect("ttl overflow") <= now;
        }
    }
    false
}
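
The two predicates above share one decision order: an entry is expired if its timestamp predates the `valid_after` invalidation watermark, and otherwise if timestamp plus TTI/TTL has been reached. A standalone sketch using `std::time` types (this crate uses its own `Instant`; the `is_expired` helper below is hypothetical):

```rust
use std::time::{Duration, Instant};

// Sketch of the expiry decision: watermark first, then TTL arithmetic.
fn is_expired(
    ts: Instant,                  // last accessed (TTI) or last modified (TTL)
    valid_after: Option<Instant>, // watermark set by set_valid_after
    ttl: Option<Duration>,
    now: Instant,
) -> bool {
    if let Some(va) = valid_after {
        if ts < va {
            return true; // written before the watermark: invalidated
        }
    }
    if let Some(ttl) = ttl {
        // checked_add guards against overflow, mirroring the expect above.
        return ts.checked_add(ttl).expect("ttl overflow") <= now;
    }
    false // no TTL configured and not behind the watermark
}

fn main() {
    let t0 = Instant::now();
    let ttl = Duration::from_secs(30);
    // Within TTL and no watermark: not expired.
    assert!(!is_expired(t0, None, Some(ttl), t0 + Duration::from_secs(10)));
    // Past TTL: expired.
    assert!(is_expired(t0, None, Some(ttl), t0 + Duration::from_secs(31)));
    // Behind the valid_after watermark: expired regardless of TTL.
    assert!(is_expired(t0, Some(t0 + Duration::from_secs(1)), Some(ttl), t0));
}
```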

#[cfg(test)]
mod tests {
    use super::BaseCache;

    #[cfg_attr(target_pointer_width = "16", ignore)]
    #[test]
    fn test_skt_capacity_will_not_overflow() {
        use std::collections::hash_map::RandomState;

        // power of two
        let pot = |exp| 2u64.pow(exp);

        let ensure_sketch_len = |max_capacity, len, name| {
            let cache = BaseCache::<u8, u8>::new(
                Some(max_capacity),
                None,
                RandomState::default(),
                None,
                None,
                None,
            );
            cache.inner.enable_frequency_sketch_for_testing();
            assert_eq!(
                cache
                    .inner
                    .frequency_sketch
                    .read()
                    .expect("lock poisoned")
                    .table_len(),
                len as usize,
                "{}",
                name
            );
        };

        if cfg!(target_pointer_width = "32") {
            let pot24 = pot(24);
            let pot16 = pot(16);
            ensure_sketch_len(0, 128, "0");
            ensure_sketch_len(128, 128, "128");
            ensure_sketch_len(pot16, pot16, "pot16");
            // due to ceiling to next_power_of_two
            ensure_sketch_len(pot16 + 1, pot(17), "pot16 + 1");
            // due to ceiling to next_power_of_two
            ensure_sketch_len(pot24 - 1, pot24, "pot24 - 1");
            ensure_sketch_len(pot24, pot24, "pot24");
            ensure_sketch_len(pot(27), pot24, "pot(27)");
            ensure_sketch_len(u32::MAX as u64, pot24, "u32::MAX");
        } else {
            // target_pointer_width: 64 or larger.
            let pot30 = pot(30);
            let pot16 = pot(16);
            ensure_sketch_len(0, 128, "0");
            ensure_sketch_len(128, 128, "128");
            ensure_sketch_len(pot16, pot16, "pot16");
            // due to ceiling to next_power_of_two
            ensure_sketch_len(pot16 + 1, pot(17), "pot16 + 1");

            // The following tests will allocate large memory (~8GiB).
            // Skip when running on Circle CI.
            if !cfg!(circleci) {
                // due to ceiling to next_power_of_two
                ensure_sketch_len(pot30 - 1, pot30, "pot30 - 1");
                ensure_sketch_len(pot30, pot30, "pot30");
                ensure_sketch_len(u64::MAX, pot30, "u64::MAX");
            }
        };
    }
}
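
The expectations in the test above imply a simple sizing rule for the sketch table, though the actual `common::sketch_capacity` implementation lives elsewhere in this crate: the cache capacity is clamped between a floor of 128 and a pointer-width-dependent ceiling (2^24 on 32-bit, 2^30 on 64-bit and larger), then rounded up to the next power of two. A hedged sketch of that rule (`sketch_table_len` and `pot_cap` are hypothetical names, inferred only from the test):

```rust
// Sketch of the sketch-table sizing rule inferred from the test expectations.
// `pot_cap` is the power-of-two exponent of the ceiling: 24 on 32-bit
// targets, 30 on 64-bit targets.
fn sketch_table_len(max_capacity: u64, pot_cap: u32) -> u64 {
    let clamped = max_capacity.max(128).min(2u64.pow(pot_cap));
    clamped.next_power_of_two()
}

fn main() {
    // Matches the 64-bit expectations in the test above.
    assert_eq!(sketch_table_len(0, 30), 128);
    assert_eq!(sketch_table_len(65_536, 30), 65_536);
    assert_eq!(sketch_table_len(65_537, 30), 1 << 17); // ceiling to next power of two
    assert_eq!(sketch_table_len(u64::MAX, 30), 1 << 30); // capped at 2^30
}
```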
crates/mini-moka-vendored/src/sync/builder.rs
use super::Cache;
use crate::{common::builder_utils, common::concurrent::Weigher};

use std::{
    collections::hash_map::RandomState,
    hash::{BuildHasher, Hash},
    marker::PhantomData,
    sync::Arc,
    time::Duration,
};

/// Builds a [`Cache`][cache-struct] with various configuration knobs.
///
/// [cache-struct]: ./struct.Cache.html
///
/// # Examples
///
/// ```rust
/// use mini_moka::sync::Cache;
/// use std::time::Duration;
///
/// let cache = Cache::builder()
///     // Max 10,000 entries
///     .max_capacity(10_000)
///     // Time to live (TTL): 30 minutes
///     .time_to_live(Duration::from_secs(30 * 60))
///     // Time to idle (TTI): 5 minutes
///     .time_to_idle(Duration::from_secs(5 * 60))
///     // Create the cache.
///     .build();
///
/// // This entry will expire after 5 minutes (TTI) if there is no get().
/// cache.insert(0, "zero");
///
/// // This get() will extend the entry life for another 5 minutes.
/// cache.get(&0);
///
/// // Even though we keep calling get(), the entry will expire
/// // after 30 minutes (TTL) from the insert().
/// ```
///
#[must_use]
pub struct CacheBuilder<K, V, C> {
    max_capacity: Option<u64>,
    initial_capacity: Option<usize>,
    weigher: Option<Weigher<K, V>>,
    time_to_live: Option<Duration>,
    time_to_idle: Option<Duration>,
    cache_type: PhantomData<C>,
}

impl<K, V> Default for CacheBuilder<K, V, Cache<K, V, RandomState>>
where
    K: Eq + Hash + Send + Sync + 'static,
    V: Clone + Send + Sync + 'static,
{
    fn default() -> Self {
        Self {
            max_capacity: None,
            initial_capacity: None,
            weigher: None,
            time_to_live: None,
            time_to_idle: None,
            cache_type: Default::default(),
        }
    }
}

impl<K, V> CacheBuilder<K, V, Cache<K, V, RandomState>>
where
    K: Eq + Hash + Send + Sync + 'static,
    V: Clone + Send + Sync + 'static,
{
    /// Constructs a new `CacheBuilder` that will be used to build a `Cache`
    /// holding up to `max_capacity` entries.
    pub fn new(max_capacity: u64) -> Self {
        Self {
            max_capacity: Some(max_capacity),
            ..Default::default()
        }
    }

    /// Builds a `Cache<K, V>`.
    ///
    /// # Panics
    ///
    /// Panics if configured with either `time_to_live` or `time_to_idle` longer
    /// than 1000 years. This is done to protect against overflow when computing
    /// key expiration.
    pub fn build(self) -> Cache<K, V, RandomState> {
        let build_hasher = RandomState::default();
        builder_utils::ensure_expirations_or_panic(self.time_to_live, self.time_to_idle);
        Cache::with_everything(
            self.max_capacity,
            self.initial_capacity,
            build_hasher,
            self.weigher,
            self.time_to_live,
            self.time_to_idle,
        )
    }

    /// Builds a `Cache<K, V, S>` with the given `hasher`.
    ///
    /// # Panics
    ///
    /// Panics if configured with either `time_to_live` or `time_to_idle` longer
    /// than 1000 years. This is done to protect against overflow when computing
    /// key expiration.
    pub fn build_with_hasher<S>(self, hasher: S) -> Cache<K, V, S>
    where
        S: BuildHasher + Clone + Send + Sync + 'static,
    {
        builder_utils::ensure_expirations_or_panic(self.time_to_live, self.time_to_idle);
        Cache::with_everything(
            self.max_capacity,
            self.initial_capacity,
            hasher,
            self.weigher,
            self.time_to_live,
            self.time_to_idle,
        )
    }
}

impl<K, V, C> CacheBuilder<K, V, C> {
    /// Sets the max capacity of the cache.
    pub fn max_capacity(self, max_capacity: u64) -> Self {
        Self {
            max_capacity: Some(max_capacity),
            ..self
        }
    }

    /// Sets the initial capacity (number of entries) of the cache.
    pub fn initial_capacity(self, number_of_entries: usize) -> Self {
        Self {
            initial_capacity: Some(number_of_entries),
            ..self
        }
    }

    /// Sets the weigher closure of the cache.
    ///
    /// The closure should take `&K` and `&V` as the arguments and return a `u32`
    /// representing the relative size of the entry.
    pub fn weigher(self, weigher: impl Fn(&K, &V) -> u32 + Send + Sync + 'static) -> Self {
        Self {
            weigher: Some(Arc::new(weigher)),
            ..self
        }
    }

    /// Sets the time to live of the cache.
    ///
    /// A cached entry will expire after the specified duration has elapsed
    /// since `insert`.
    ///
    /// # Panics
    ///
    /// `CacheBuilder::build*` methods will panic if the given `duration` is
    /// longer than 1000 years. This is done to protect against overflow when
    /// computing key expiration.
    pub fn time_to_live(self, duration: Duration) -> Self {
        Self {
            time_to_live: Some(duration),
            ..self
        }
    }

    /// Sets the time to idle of the cache.
    ///
    /// A cached entry will expire after the specified duration has elapsed
    /// since `get` or `insert`.
    ///
    /// # Panics
    ///
    /// `CacheBuilder::build*` methods will panic if the given `duration` is
    /// longer than 1000 years. This is done to protect against overflow when
    /// computing key expiration.
    pub fn time_to_idle(self, duration: Duration) -> Self {
        Self {
            time_to_idle: Some(duration),
            ..self
        }
    }
}

#[cfg(test)]
mod tests {
    use super::CacheBuilder;

    use std::time::Duration;

    #[test]
    fn build_cache() {
        // Cache<char, &str>
        let cache = CacheBuilder::new(100).build();
        let policy = cache.policy();

        assert_eq!(policy.max_capacity(), Some(100));
        assert_eq!(policy.time_to_live(), None);
        assert_eq!(policy.time_to_idle(), None);

        cache.insert('a', "Alice");
        assert_eq!(cache.get(&'a'), Some("Alice"));

        let cache = CacheBuilder::new(100)
            .time_to_live(Duration::from_secs(45 * 60))
            .time_to_idle(Duration::from_secs(15 * 60))
            .build();
        let policy = cache.policy();

        assert_eq!(policy.max_capacity(), Some(100));
        assert_eq!(policy.time_to_live(), Some(Duration::from_secs(45 * 60)));
        assert_eq!(policy.time_to_idle(), Some(Duration::from_secs(15 * 60)));

        cache.insert('a', "Alice");
        assert_eq!(cache.get(&'a'), Some("Alice"));
    }

    #[test]
    #[should_panic(expected = "time_to_live is longer than 1000 years")]
    fn build_cache_too_long_ttl() {
        let thousand_years_secs: u64 = 1000 * 365 * 24 * 3600;
        let builder: CacheBuilder<char, String, _> = CacheBuilder::new(100);
        let duration = Duration::from_secs(thousand_years_secs);
        builder
            .time_to_live(duration + Duration::from_secs(1))
            .build();
    }

    #[test]
    #[should_panic(expected = "time_to_idle is longer than 1000 years")]
    fn build_cache_too_long_tti() {
        let thousand_years_secs: u64 = 1000 * 365 * 24 * 3600;
        let builder: CacheBuilder<char, String, _> = CacheBuilder::new(100);
        let duration = Duration::from_secs(thousand_years_secs);
        builder
            .time_to_idle(duration + Duration::from_secs(1))
            .build();
    }
}
crates/mini-moka-vendored/src/sync/cache.rs
use super::{base_cache::BaseCache, CacheBuilder, ConcurrentCacheExt, EntryRef, Iter};
use crate::{
    common::{
        concurrent::{
            constants::{MAX_SYNC_REPEATS, WRITE_RETRY_INTERVAL_MICROS},
            housekeeper::{Housekeeper, InnerSync},
            Weigher, WriteOp,
        },
        time::Instant,
    },
    Policy,
};

use crossbeam_channel::{Sender, TrySendError};
use std::{
    borrow::Borrow,
    collections::hash_map::RandomState,
    fmt,
    hash::{BuildHasher, Hash},
    sync::Arc,
    time::Duration,
};

/// A thread-safe concurrent in-memory cache built upon [`dashmap::DashMap`][dashmap].
///
/// The `Cache` uses `DashMap` as the central key-value storage. It performs a
/// best-effort bounding of the map using an entry replacement algorithm to
/// determine which entries to evict when the capacity is exceeded.
///
/// This cache is provided by the crate feature `sync`, which is enabled by
/// default.
///
/// # Examples
///
/// Cache entries are manually added using the [`insert`](#method.insert)
/// method, and are stored in the cache until either evicted or manually
/// invalidated.
///
/// Here's an example of reading and updating a cache by using multiple threads:
///
/// ```rust
/// use mini_moka::sync::Cache;
///
/// use std::thread;
///
/// fn value(n: usize) -> String {
///     format!("value {}", n)
/// }
///
/// const NUM_THREADS: usize = 16;
/// const NUM_KEYS_PER_THREAD: usize = 64;
///
/// // Create a cache that can store up to 10,000 entries.
/// let cache = Cache::new(10_000);
///
/// // Spawn threads and read and update the cache simultaneously.
/// let threads: Vec<_> = (0..NUM_THREADS)
///     .map(|i| {
///         // To share the same cache across the threads, clone it.
///         // This is a cheap operation.
///         let my_cache = cache.clone();
///         let start = i * NUM_KEYS_PER_THREAD;
///         let end = (i + 1) * NUM_KEYS_PER_THREAD;
///
///         thread::spawn(move || {
///             // Insert 64 entries. (NUM_KEYS_PER_THREAD = 64)
///             for key in start..end {
///                 my_cache.insert(key, value(key));
///                 // get() returns Option<String>, a clone of the stored value.
///                 assert_eq!(my_cache.get(&key), Some(value(key)));
///             }
///
///             // Invalidate every 4th element of the inserted entries.
///             for key in (start..end).step_by(4) {
///                 my_cache.invalidate(&key);
///             }
///         })
///     })
///     .collect();
///
/// // Wait for all threads to complete.
/// threads.into_iter().for_each(|t| t.join().expect("Failed"));
///
/// // Verify the result.
/// for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
///     if key % 4 == 0 {
///         assert_eq!(cache.get(&key), None);
///     } else {
///         assert_eq!(cache.get(&key), Some(value(key)));
///     }
/// }
/// ```
///
/// # Avoiding cloning the value at `get`
///
/// The return type of the `get` method is `Option<V>` instead of `Option<&V>`.
/// Every time `get` is called for an existing key, it creates a clone of the
/// stored value `V` and returns it. This is because the `Cache` allows
/// concurrent updates from threads, so a value stored in the cache can be
/// dropped or replaced at any time by any other thread. `get` cannot return a
/// reference `&V` as it is impossible to guarantee the value outlives the
/// reference.
///
/// If you want to store values that will be expensive to clone, wrap them in
/// `std::sync::Arc` before storing them in a cache. [`Arc`][rustdoc-std-arc] is
/// a thread-safe reference-counted pointer and its `clone()` method is cheap.
///
/// [rustdoc-std-arc]: https://doc.rust-lang.org/stable/std/sync/struct.Arc.html
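///
/// For example, wrapping a value in an `Arc` makes every clone copy just a
/// pointer and bump a reference count instead of duplicating the data (a
/// std-only sketch; the `config` payload is hypothetical):
///
/// ```rust
/// use std::sync::Arc;
///
/// // A hypothetical, expensive-to-clone value.
/// let config = Arc::new(vec![0u8; 1_000_000]);
///
/// // Cloning the Arc is cheap: it copies a pointer and increments a counter.
/// let handle = Arc::clone(&config);
/// assert_eq!(Arc::strong_count(&config), 2);
/// assert_eq!(handle.len(), 1_000_000);
/// ```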
///
/// # Size-based Eviction
///
/// ```rust
/// use std::convert::TryInto;
/// use mini_moka::sync::Cache;
///
/// // Evict based on the number of entries in the cache.
/// let cache = Cache::builder()
///     // Up to 10,000 entries.
///     .max_capacity(10_000)
///     // Create the cache.
///     .build();
/// cache.insert(1, "one".to_string());
///
/// // Evict based on the byte length of strings in the cache.
/// let cache = Cache::builder()
///     // A weigher closure takes &K and &V and returns a u32
///     // representing the relative size of the entry.
///     .weigher(|_key, value: &String| -> u32 {
///         value.len().try_into().unwrap_or(u32::MAX)
///     })
///     // This cache will hold up to 32MiB of values.
///     .max_capacity(32 * 1024 * 1024)
///     .build();
/// cache.insert(2, "two".to_string());
/// ```
///
/// If your cache should not grow beyond a certain size, use the `max_capacity`
/// method of the [`CacheBuilder`][builder-struct] to set the upper bound. The
/// cache will try to evict entries that have not been used recently or very
/// often.
///
/// At cache creation time, a weigher closure can be set by the `weigher` method
/// of the `CacheBuilder`. A weigher closure takes `&K` and `&V` as the
/// arguments and returns a `u32` representing the relative size of the entry:
///
/// - If the `weigher` is _not_ set, the cache will treat each entry as having
///   the same size of `1`. This means the cache will be bounded by the number
///   of entries.
/// - If the `weigher` is set, the cache will call the weigher to calculate the
///   weighted size (relative size) of an entry. This means the cache will be
///   bounded by the total weighted size of entries.
///
/// Note that weighted sizes are not used when making eviction selections.
///
/// [builder-struct]: ./struct.CacheBuilder.html
///
/// # Time-based Expirations
///
/// `Cache` supports the following expiration policies:
///
/// - **Time to live**: A cached entry will expire after the specified duration
///   has elapsed since `insert`.
/// - **Time to idle**: A cached entry will expire after the specified duration
///   has elapsed since `get` or `insert`.
///
/// ```rust
/// use mini_moka::sync::Cache;
/// use std::time::Duration;
///
/// let cache = Cache::builder()
///     // Time to live (TTL): 30 minutes
///     .time_to_live(Duration::from_secs(30 * 60))
///     // Time to idle (TTI): 5 minutes
///     .time_to_idle(Duration::from_secs(5 * 60))
///     // Create the cache.
///     .build();
///
/// // This entry will expire after 5 minutes (TTI) if there is no get().
/// cache.insert(0, "zero");
///
/// // This get() will extend the entry life for another 5 minutes.
/// cache.get(&0);
///
/// // Even though we keep calling get(), the entry will expire
/// // after 30 minutes (TTL) from the insert().
/// ```
///
/// # Thread Safety
///
/// All methods provided by the `Cache` are considered thread-safe, and can be
/// safely accessed by multiple concurrent threads.
///
/// - `Cache<K, V, S>` requires trait bounds `Send`, `Sync` and `'static` for
///   `K` (key), `V` (value) and `S` (hasher state).
/// - `Cache<K, V, S>` will implement `Send` and `Sync`.
///
/// # Sharing a cache across threads
///
/// To share a cache across threads, do one of the following:
///
/// - Create a clone of the cache by calling its `clone` method and pass it to
///   the other thread.
/// - Wrap the cache in a `sync::OnceCell` or `sync::Lazy` from the
///   [once_cell][once-cell-crate] crate, and set it to a `static` variable.
///
/// Cloning is a cheap operation for `Cache` as it only creates thread-safe
/// reference-counted pointers to the internal data structures.
///
/// [once-cell-crate]: https://crates.io/crates/once_cell
///
/// # Hashing Algorithm
///
/// By default, `Cache` uses a hashing algorithm selected to provide resistance
/// against HashDoS attacks. It will be the same one used by
/// `std::collections::HashMap`, which is currently SipHash 1-3.
///
/// While SipHash's performance is very competitive for medium sized keys, other
/// hashing algorithms will outperform it for small keys such as integers, as
/// well as for large keys such as long strings. However, those algorithms will
/// typically not protect against attacks such as HashDoS.
///
/// The hashing algorithm can be replaced on a per-`Cache` basis using the
/// [`build_with_hasher`][build-with-hasher-method] method of the
/// `CacheBuilder`. Many alternative algorithms are available on crates.io, such
/// as the [aHash][ahash-crate] crate.
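///
/// For example, `build_with_hasher` accepts any hasher state implementing
/// `BuildHasher` (a sketch using the std `RandomState` purely for
/// illustration; a third-party hasher would be passed the same way):
///
/// ```rust
/// use mini_moka::sync::Cache;
/// use std::collections::hash_map::RandomState;
///
/// let cache: Cache<i32, String, RandomState> =
///     Cache::builder().build_with_hasher(RandomState::default());
/// cache.insert(1, "one".to_string());
/// assert_eq!(cache.get(&1), Some("one".to_string()));
/// ```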
///
/// [build-with-hasher-method]: ./struct.CacheBuilder.html#method.build_with_hasher
/// [ahash-crate]: https://crates.io/crates/ahash
pub struct Cache<K, V, S = RandomState> {
    base: BaseCache<K, V, S>,
}

// TODO: https://github.com/moka-rs/moka/issues/54
#[allow(clippy::non_send_fields_in_send_ty)]
unsafe impl<K, V, S> Send for Cache<K, V, S>
where
    K: Send + Sync,
    V: Send + Sync,
    S: Send,
{
}

unsafe impl<K, V, S> Sync for Cache<K, V, S>
where
    K: Send + Sync,
    V: Send + Sync,
    S: Sync,
{
}

// NOTE: We cannot do `#[derive(Clone)]` because it would add a `Clone` bound to `K`.
impl<K, V, S> Clone for Cache<K, V, S> {
    /// Makes a clone of this shared cache.
    ///
    /// This operation is cheap as it only creates thread-safe reference-counted
    /// pointers to the shared internal data structures.
    fn clone(&self) -> Self {
        Self {
            base: self.base.clone(),
        }
    }
}

impl<K, V, S> fmt::Debug for Cache<K, V, S>
where
    K: Eq + Hash + fmt::Debug,
    V: fmt::Debug,
    S: BuildHasher + Clone,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let mut d_map = f.debug_map();

        for r in self.iter() {
            let (k, v) = r.pair();
            d_map.entry(k, v);
        }

        d_map.finish()
    }
}

impl<K, V> Cache<K, V, RandomState>
where
    K: Hash + Eq + Send + Sync + 'static,
    V: Clone + Send + Sync + 'static,
{
    /// Constructs a new `Cache<K, V>` that will store up to `max_capacity`
    /// entries.
    ///
    /// To adjust various configuration knobs such as `initial_capacity` or
    /// `time_to_live`, use the [`CacheBuilder`][builder-struct].
    ///
    /// [builder-struct]: ./struct.CacheBuilder.html
    pub fn new(max_capacity: u64) -> Self {
        let build_hasher = RandomState::default();
        Self::with_everything(Some(max_capacity), None, build_hasher, None, None, None)
    }

    /// Returns a [`CacheBuilder`][builder-struct], which can build a `Cache`
    /// with various configuration knobs.
    ///
    /// [builder-struct]: ./struct.CacheBuilder.html
    pub fn builder() -> CacheBuilder<K, V, Cache<K, V, RandomState>> {
        CacheBuilder::default()
    }
}

impl<K, V, S> Cache<K, V, S> {
    /// Returns a read-only cache policy of this cache.
    ///
    /// At this time, the cache policy cannot be modified after cache creation.
    /// A future version may support modifying it.
    pub fn policy(&self) -> Policy {
        self.base.policy()
    }

    /// Returns an approximate number of entries in this cache.
    ///
    /// The value returned is _an estimate_; the actual count may differ if
    /// there are concurrent insertions or removals, or if some entries are
    /// pending removal due to expiration. This inaccuracy can be mitigated by
    /// performing a `sync()` first.
    ///
    /// # Example
    ///
    /// ```rust
    /// use mini_moka::sync::Cache;
    ///
    /// let cache = Cache::new(10);
    /// cache.insert('n', "Netherland Dwarf");
    /// cache.insert('l', "Lop Eared");
    /// cache.insert('d', "Dutch");
    ///
    /// // Ensure an entry exists.
    /// assert!(cache.contains_key(&'n'));
    ///
    /// // However, the following may print stale zeros instead of threes.
    /// println!("{}", cache.entry_count());   // -> 0
    /// println!("{}", cache.weighted_size()); // -> 0
    ///
    /// // To mitigate the inaccuracy, bring the `ConcurrentCacheExt` trait into
    /// // scope so we can use the `sync` method.
    /// use mini_moka::sync::ConcurrentCacheExt;
    /// // Call `sync` to run pending internal tasks.
    /// cache.sync();
    ///
    /// // The following will print the actual numbers.
    /// println!("{}", cache.entry_count());   // -> 3
    /// println!("{}", cache.weighted_size()); // -> 3
    /// ```
    ///
    pub fn entry_count(&self) -> u64 {
        self.base.entry_count()
    }

    /// Returns an approximate total weighted size of entries in this cache.
    ///
    /// The value returned is _an estimate_; the actual size may differ if there
    /// are concurrent insertions or removals, or if some entries are pending
    /// removal due to expiration. This inaccuracy can be mitigated by
    /// performing a `sync()` first. See [`entry_count`](#method.entry_count)
    /// for sample code.
    pub fn weighted_size(&self) -> u64 {
        self.base.weighted_size()
    }
}

impl<K, V, S> Cache<K, V, S>
where
    K: Hash + Eq + Send + Sync + 'static,
    V: Clone + Send + Sync + 'static,
    S: BuildHasher + Clone + Send + Sync + 'static,
{
    pub(crate) fn with_everything(
        max_capacity: Option<u64>,
        initial_capacity: Option<usize>,
        build_hasher: S,
        weigher: Option<Weigher<K, V>>,
        time_to_live: Option<Duration>,
        time_to_idle: Option<Duration>,
    ) -> Self {
        Self {
            base: BaseCache::new(
                max_capacity,
                initial_capacity,
                build_hasher,
                weigher,
                time_to_live,
                time_to_idle,
            ),
        }
    }

    /// Returns `true` if the cache contains a value for the key.
    ///
    /// Unlike the `get` method, this method is not considered a cache read
    /// operation, so it does not update the historic popularity estimator or
    /// reset the idle timer for the key.
    ///
    /// The key may be any borrowed form of the cache's key type, but `Hash` and
    /// `Eq` on the borrowed form _must_ match those for the key type.
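    /// For example (a sketch; unlike `get`, checking a key this way does not
    /// count as a read):
    ///
    /// ```rust
    /// use mini_moka::sync::Cache;
    ///
    /// let cache = Cache::new(10);
    /// cache.insert(1, "one");
    ///
    /// // contains_key reports presence without resetting the idle timer.
    /// assert!(cache.contains_key(&1));
    /// assert!(!cache.contains_key(&2));
    /// ```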
    pub fn contains_key<Q>(&self, key: &Q) -> bool
    where
        Arc<K>: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.base.contains_key(key)
    }

    /// Returns a _clone_ of the value corresponding to the key.
    ///
    /// If you want to store values that will be expensive to clone, wrap them
    /// in `std::sync::Arc` before storing them in a cache.
    /// [`Arc`][rustdoc-std-arc] is a thread-safe reference-counted pointer and
    /// its `clone()` method is cheap.
    ///
    /// The key may be any borrowed form of the cache's key type, but `Hash` and
    /// `Eq` on the borrowed form _must_ match those for the key type.
    ///
    /// [rustdoc-std-arc]: https://doc.rust-lang.org/stable/std/sync/struct.Arc.html
    pub fn get<Q>(&self, key: &Q) -> Option<V>
    where
        Arc<K>: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.base.get_with_hash(key, self.base.hash(key))
    }

    /// Deprecated, replaced with [`get`](#method.get)
    #[doc(hidden)]
    #[deprecated(since = "0.8.0", note = "Replaced with `get`")]
    pub fn get_if_present<Q>(&self, key: &Q) -> Option<V>
    where
        Arc<K>: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.get(key)
    }

    /// Inserts a key-value pair into the cache.
    ///
    /// If the cache has this key present, the value is updated.
    pub fn insert(&self, key: K, value: V) {
        let hash = self.base.hash(&key);
        let key = Arc::new(key);
        self.insert_with_hash(key, hash, value)
    }

    pub(crate) fn insert_with_hash(&self, key: Arc<K>, hash: u64, value: V) {
        let (op, now) = self.base.do_insert_with_hash(key, hash, value);
        let hk = self.base.housekeeper.as_ref();
        Self::schedule_write_op(
            self.base.inner.as_ref(),
            &self.base.write_op_ch,
            op,
            now,
            hk,
        )
        .expect("Failed to insert");
    }

    /// Discards any cached value for the key.
    ///
    /// The key may be any borrowed form of the cache's key type, but `Hash` and
    /// `Eq` on the borrowed form _must_ match those for the key type.
    pub fn invalidate<Q>(&self, key: &Q)
    where
        Arc<K>: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        if let Some(kv) = self.base.remove_entry(key) {
            let op = WriteOp::Remove(kv);
            let now = self.base.current_time_from_expiration_clock();
            let hk = self.base.housekeeper.as_ref();
            Self::schedule_write_op(
                self.base.inner.as_ref(),
                &self.base.write_op_ch,
                op,
                now,
                hk,
            )
            .expect("Failed to remove");
        }
    }

    /// Discards all cached values.
    ///
    /// This method returns immediately; the cached values inserted before the
    /// time when this method was called will be evicted by subsequent
    /// maintenance tasks. It is guaranteed that the `get` method will not
    /// return these invalidated values even if they have not yet been evicted.
    ///
    /// Like the `invalidate` method, this method does not clear the historic
    /// popularity estimator of keys, so it retains the client activities of
    /// trying to retrieve an item.
    pub fn invalidate_all(&self) {
        self.base.invalidate_all();
    }
}

// Clippy beta 0.1.83 (f41c7ed9889 2024-10-31) warns about unused lifetimes on 'a.
// This seems to be a false positive. The lifetimes are used in the trait bounds.
// https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_lifetimes
#[allow(clippy::extra_unused_lifetimes)]
impl<'a, K, V, S> Cache<K, V, S>
where
    K: 'a + Eq + Hash,
    V: 'a,
    S: BuildHasher + Clone,
{
    /// Creates an iterator visiting all key-value pairs in arbitrary order. The
    /// iterator element type is [`EntryRef<'a, K, V, S>`][moka-entry-ref].
    ///
    /// Unlike the `get` method, visiting entries via an iterator does not
    /// update the historic popularity estimator or reset idle timers for keys.
    ///
    /// # Locking behavior
    ///
    /// This iterator relies on the iterator of
    /// [`dashmap::DashMap`][dashmap-iter], which employs read-write locks. It
    /// may deadlock if the thread holding an iterator attempts to update the
    /// cache.
    ///
    /// [moka-entry-ref]: ./struct.EntryRef.html
    /// [dashmap-iter]: <https://docs.rs/dashmap/*/dashmap/struct.DashMap.html#method.iter>
    ///
    /// # Examples
    ///
    /// ```rust
    /// use mini_moka::sync::Cache;
    ///
    /// let cache = Cache::new(100);
    /// cache.insert("Julia", 14);
    ///
    /// let mut iter = cache.iter();
    /// let entry_ref = iter.next().unwrap();
    /// assert_eq!(entry_ref.pair(), (&"Julia", &14));
    /// assert_eq!(entry_ref.key(), &"Julia");
    /// assert_eq!(entry_ref.value(), &14);
    /// assert_eq!(*entry_ref, 14);
    ///
    /// assert!(iter.next().is_none());
    /// ```
    ///
    pub fn iter(&self) -> Iter<'_, K, V, S> {
        self.base.iter()
    }
}

impl<K, V, S> ConcurrentCacheExt<K, V> for Cache<K, V, S>
where
    K: Hash + Eq + Send + Sync + 'static,
    V: Send + Sync + 'static,
    S: BuildHasher + Clone + Send + Sync + 'static,
{
    fn sync(&self) {
        self.base.inner.sync(MAX_SYNC_REPEATS);
    }
}

impl<'a, K, V, S> IntoIterator for &'a Cache<K, V, S>
where
    K: 'a + Eq + Hash,
    V: 'a,
    S: BuildHasher + Clone,
{
    type Item = EntryRef<'a, K, V>;

    type IntoIter = Iter<'a, K, V, S>;

    fn into_iter(self) -> Self::IntoIter {
        self.iter()
    }
}

// private methods
impl<K, V, S> Cache<K, V, S>
where
    K: Hash + Eq + Send + Sync + 'static,
    V: Clone + Send + Sync + 'static,
    S: BuildHasher + Clone + Send + Sync + 'static,
{
    #[inline]
    fn schedule_write_op(
        inner: &impl InnerSync,
        ch: &Sender<WriteOp<K, V>>,
        op: WriteOp<K, V>,
        now: Instant,
        housekeeper: Option<&Arc<Housekeeper>>,
    ) -> Result<(), TrySendError<WriteOp<K, V>>> {
        let mut op = op;

        // NOTES:
        // - This will block when the channel is full.
        // - We are doing a busy-loop here. We were originally calling `ch.send(op)?`,
        //   but we got a notable performance degradation.
        loop {
            BaseCache::<K, V, S>::apply_reads_writes_if_needed(inner, ch, now, housekeeper);
            match ch.try_send(op) {
                Ok(()) => break,
                Err(TrySendError::Full(op1)) => {
                    op = op1;
                    std::thread::sleep(Duration::from_micros(WRITE_RETRY_INTERVAL_MICROS));
                }
                Err(e @ TrySendError::Disconnected(_)) => return Err(e),
            }
        }
        Ok(())
    }
}

// For unit tests.
#[cfg(test)]
impl<K, V, S> Cache<K, V, S>
where
    K: Hash + Eq + Send + Sync + 'static,
    V: Clone + Send + Sync + 'static,
    S: BuildHasher + Clone + Send + Sync + 'static,
{
    pub(crate) fn is_table_empty(&self) -> bool {
        self.entry_count() == 0
    }

    pub(crate) fn reconfigure_for_testing(&mut self) {
        self.base.reconfigure_for_testing();
    }

    pub(crate) fn set_expiration_clock(&self, clock: Option<crate::common::time::Clock>) {
        self.base.set_expiration_clock(clock);
    }
}

// To see the debug prints, run the tests as `cargo test -- --nocapture`.
#[cfg(test)]
mod tests {
    use super::{Cache, ConcurrentCacheExt};
    use crate::common::time::Clock;

    use std::{sync::Arc, time::Duration};

    #[test]
    fn basic_single_thread() {
        let mut cache = Cache::new(3);
        cache.reconfigure_for_testing();

        // Make the cache exterior immutable.
        let cache = cache;

        cache.insert("a", "alice");
        cache.insert("b", "bob");
        assert_eq!(cache.get(&"a"), Some("alice"));
        assert!(cache.contains_key(&"a"));
        assert!(cache.contains_key(&"b"));
        assert_eq!(cache.get(&"b"), Some("bob"));
        cache.sync();
        // counts: a -> 1, b -> 1

        cache.insert("c", "cindy");
        assert_eq!(cache.get(&"c"), Some("cindy"));
        assert!(cache.contains_key(&"c"));
        // counts: a -> 1, b -> 1, c -> 1
        cache.sync();

        assert!(cache.contains_key(&"a"));
        assert_eq!(cache.get(&"a"), Some("alice"));
        assert_eq!(cache.get(&"b"), Some("bob"));
        assert!(cache.contains_key(&"b"));
        cache.sync();
        // counts: a -> 2, b -> 2, c -> 1

        // "d" should not be admitted because its frequency is too low.
        cache.insert("d", "david"); // count: d -> 0
        cache.sync();
        assert_eq!(cache.get(&"d"), None); // d -> 1
        assert!(!cache.contains_key(&"d"));

        cache.insert("d", "david");
        cache.sync();
        assert!(!cache.contains_key(&"d"));
        assert_eq!(cache.get(&"d"), None); // d -> 2

        // "d" should be admitted and "c" should be evicted
        // because d's frequency is higher than c's.
        cache.insert("d", "dennis");
        cache.sync();
        assert_eq!(cache.get(&"a"), Some("alice"));
        assert_eq!(cache.get(&"b"), Some("bob"));
        assert_eq!(cache.get(&"c"), None);
        assert_eq!(cache.get(&"d"), Some("dennis"));
        assert!(cache.contains_key(&"a"));
        assert!(cache.contains_key(&"b"));
        assert!(!cache.contains_key(&"c"));
        assert!(cache.contains_key(&"d"));

        cache.invalidate(&"b");
        assert_eq!(cache.get(&"b"), None);
        assert!(!cache.contains_key(&"b"));
    }

    #[test]
    fn size_aware_eviction() {
        let weigher = |_k: &&str, v: &(&str, u32)| v.1;

        let alice = ("alice", 10);
        let bob = ("bob", 15);
        let bill = ("bill", 20);
        let cindy = ("cindy", 5);
        let david = ("david", 15);
        let dennis = ("dennis", 15);

        let mut cache = Cache::builder().max_capacity(31).weigher(weigher).build();
        cache.reconfigure_for_testing();

        // Make the cache exterior immutable.
        let cache = cache;

        cache.insert("a", alice);
        cache.insert("b", bob);
        assert_eq!(cache.get(&"a"), Some(alice));
        assert!(cache.contains_key(&"a"));
        assert!(cache.contains_key(&"b"));
        assert_eq!(cache.get(&"b"), Some(bob));
        cache.sync();
        // order (LRU -> MRU) and counts: a -> 1, b -> 1

        cache.insert("c", cindy);
        assert_eq!(cache.get(&"c"), Some(cindy));
        assert!(cache.contains_key(&"c"));
        // order and counts: a -> 1, b -> 1, c -> 1
        cache.sync();

        assert!(cache.contains_key(&"a"));
        assert_eq!(cache.get(&"a"), Some(alice));
        assert_eq!(cache.get(&"b"), Some(bob));
        assert!(cache.contains_key(&"b"));
        cache.sync();
        // order and counts: c -> 1, a -> 2, b -> 2

        // To enter "d" (weight: 15), it needs to evict "c" (w: 5) and "a" (w: 10).
        // "d" must have a higher count than 3, which is the aggregated count
        // of "a" and "c".
        cache.insert("d", david); // count: d -> 0
        cache.sync();
        assert_eq!(cache.get(&"d"), None); // d -> 1
        assert!(!cache.contains_key(&"d"));

        cache.insert("d", david);
        cache.sync();
        assert!(!cache.contains_key(&"d"));
        assert_eq!(cache.get(&"d"), None); // d -> 2

        cache.insert("d", david);
        cache.sync();
        assert_eq!(cache.get(&"d"), None); // d -> 3
        assert!(!cache.contains_key(&"d"));

        cache.insert("d", david);
        cache.sync();
        assert!(!cache.contains_key(&"d"));
        assert_eq!(cache.get(&"d"), None); // d -> 4

        // Finally, "d" should be admitted by evicting "c" and "a".
        cache.insert("d", dennis);
        cache.sync();
        assert_eq!(cache.get(&"a"), None);
        assert_eq!(cache.get(&"b"), Some(bob));
        assert_eq!(cache.get(&"c"), None);
        assert_eq!(cache.get(&"d"), Some(dennis));
        assert!(!cache.contains_key(&"a"));
        assert!(cache.contains_key(&"b"));
        assert!(!cache.contains_key(&"c"));
        assert!(cache.contains_key(&"d"));

        // Update "b" with "bill" (w: 15 -> 20). This should evict "d" (w: 15).
        cache.insert("b", bill);
        cache.sync();
        assert_eq!(cache.get(&"b"), Some(bill));
        assert_eq!(cache.get(&"d"), None);
        assert!(cache.contains_key(&"b"));
        assert!(!cache.contains_key(&"d"));

        // Re-add "a" (w: 10) and update "b" with "bob" (w: 20 -> 15).
        cache.insert("a", alice);
        cache.insert("b", bob);
        cache.sync();
        assert_eq!(cache.get(&"a"), Some(alice));
        assert_eq!(cache.get(&"b"), Some(bob));
        assert_eq!(cache.get(&"d"), None);
        assert!(cache.contains_key(&"a"));
        assert!(cache.contains_key(&"b"));
        assert!(!cache.contains_key(&"d"));

        // Verify the sizes.
        assert_eq!(cache.entry_count(), 2);
        assert_eq!(cache.weighted_size(), 25);
    }
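The admission arithmetic the test above exercises ("d" needs a higher frequency than the aggregated frequency of the victims "a" and "c") is the TinyLFU policy in miniature. A simplified, self-contained sketch of that decision — using a plain `HashMap` of counts where the real cache uses a `FrequencySketch`, and with the `admit` function name being illustrative only:

```rust
use std::collections::HashMap;

/// TinyLFU-style admission sketch (simplified): admit a candidate only if its
/// estimated access frequency exceeds the aggregated frequency of the entries
/// it would have to evict.
fn admit(freqs: &HashMap<&str, u32>, candidate: &str, victims: &[&str]) -> bool {
    let candidate_freq = freqs.get(candidate).copied().unwrap_or(0);
    let victims_freq: u32 = victims
        .iter()
        .map(|v| freqs.get(v).copied().unwrap_or(0))
        .sum();
    candidate_freq > victims_freq
}

fn main() {
    let mut freqs = HashMap::new();
    freqs.insert("a", 2);
    freqs.insert("c", 1);

    // "d" with frequency 3 does not beat a + c (2 + 1 = 3) ...
    freqs.insert("d", 3);
    assert!(!admit(&freqs, "d", &["a", "c"]));

    // ... but with frequency 4 it does, so "d" would be admitted.
    freqs.insert("d", 4);
    assert!(admit(&freqs, "d", &["a", "c"]));
}
```

This is why the test has to touch "d" repeatedly before the final insert succeeds: a strict `>` comparison means ties go to the incumbents.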

    #[test]
    fn basic_multi_threads() {
        let num_threads = 4;
        let cache = Cache::new(100);

        // https://rust-lang.github.io/rust-clippy/master/index.html#needless_collect
        #[allow(clippy::needless_collect)]
        let handles = (0..num_threads)
            .map(|id| {
                let cache = cache.clone();
                std::thread::spawn(move || {
                    cache.insert(10, format!("{}-100", id));
                    cache.get(&10);
                    cache.insert(20, format!("{}-200", id));
                    cache.invalidate(&10);
                })
            })
            .collect::<Vec<_>>();

        handles.into_iter().for_each(|h| h.join().expect("Failed"));

        assert!(cache.get(&10).is_none());
        assert!(cache.get(&20).is_some());
        assert!(!cache.contains_key(&10));
        assert!(cache.contains_key(&20));
    }

    #[test]
    fn invalidate_all() {
        let mut cache = Cache::new(100);
        cache.reconfigure_for_testing();

        // Make the cache exterior immutable.
        let cache = cache;

        cache.insert("a", "alice");
        cache.insert("b", "bob");
        cache.insert("c", "cindy");
        assert_eq!(cache.get(&"a"), Some("alice"));
        assert_eq!(cache.get(&"b"), Some("bob"));
        assert_eq!(cache.get(&"c"), Some("cindy"));
        assert!(cache.contains_key(&"a"));
        assert!(cache.contains_key(&"b"));
        assert!(cache.contains_key(&"c"));

        // `cache.sync()` is no longer needed here before invalidating. The last
        // modified timestamps of the entries were updated when they were inserted.
        // https://github.com/moka-rs/moka/issues/155

        cache.invalidate_all();
        cache.sync();

        cache.insert("d", "david");
        cache.sync();

        assert!(cache.get(&"a").is_none());
        assert!(cache.get(&"b").is_none());
        assert!(cache.get(&"c").is_none());
        assert_eq!(cache.get(&"d"), Some("david"));
        assert!(!cache.contains_key(&"a"));
        assert!(!cache.contains_key(&"b"));
        assert!(!cache.contains_key(&"c"));
        assert!(cache.contains_key(&"d"));
    }

    #[test]
    fn time_to_live() {
        let mut cache = Cache::builder()
            .max_capacity(100)
            .time_to_live(Duration::from_secs(10))
            .build();

        cache.reconfigure_for_testing();

        let (clock, mock) = Clock::mock();
        cache.set_expiration_clock(Some(clock));

        // Make the cache exterior immutable.
        let cache = cache;

        cache.insert("a", "alice");
        cache.sync();

        mock.increment(Duration::from_secs(5)); // 5 secs from the start.
        cache.sync();

        assert_eq!(cache.get(&"a"), Some("alice"));
        assert!(cache.contains_key(&"a"));

        mock.increment(Duration::from_secs(5)); // 10 secs.
        assert_eq!(cache.get(&"a"), None);
        assert!(!cache.contains_key(&"a"));

        assert_eq!(cache.iter().count(), 0);

        cache.sync();
        assert!(cache.is_table_empty());

        cache.insert("b", "bob");
        cache.sync();

        assert_eq!(cache.entry_count(), 1);

        mock.increment(Duration::from_secs(5)); // 15 secs.
        cache.sync();

        assert_eq!(cache.get(&"b"), Some("bob"));
        assert!(cache.contains_key(&"b"));
        assert_eq!(cache.entry_count(), 1);

        cache.insert("b", "bill");
        cache.sync();

        mock.increment(Duration::from_secs(5)); // 20 secs.
        cache.sync();

        assert_eq!(cache.get(&"b"), Some("bill"));
        assert!(cache.contains_key(&"b"));
        assert_eq!(cache.entry_count(), 1);

        mock.increment(Duration::from_secs(5)); // 25 secs.
        assert_eq!(cache.get(&"a"), None);
        assert_eq!(cache.get(&"b"), None);
        assert!(!cache.contains_key(&"a"));
        assert!(!cache.contains_key(&"b"));

        assert_eq!(cache.iter().count(), 0);

        cache.sync();
        assert!(cache.is_table_empty());
    }
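Note how the mock clock lets the test assert that "a" is gone at exactly the 10-second mark. The underlying check is simple duration arithmetic; a minimal sketch (the `is_expired` helper is hypothetical, not this crate's API — it only mirrors the inclusive `age >= ttl` boundary the assertions above rely on):

```rust
use std::time::Duration;

/// TTL check sketch: an entry is expired once its age reaches the TTL.
/// `checked_sub` guards against a clock reading earlier than the insert time.
fn is_expired(inserted_at: Duration, now: Duration, ttl: Duration) -> bool {
    now.checked_sub(inserted_at)
        .map_or(false, |age| age >= ttl)
}

fn main() {
    let ttl = Duration::from_secs(10);
    // Alive at 5 secs, expired at exactly 10 secs — matching the test above.
    assert!(!is_expired(Duration::ZERO, Duration::from_secs(5), ttl));
    assert!(is_expired(Duration::ZERO, Duration::from_secs(10), ttl));
}
```

The inclusive boundary (`>=`, not `>`) is what makes `get(&"a")` return `None` immediately after the second 5-second increment, without waiting for a sync.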

    #[test]
    fn time_to_idle() {
        let mut cache = Cache::builder()
            .max_capacity(100)
            .time_to_idle(Duration::from_secs(10))
            .build();

        cache.reconfigure_for_testing();

        let (clock, mock) = Clock::mock();
        cache.set_expiration_clock(Some(clock));

        // Make the cache exterior immutable.
        let cache = cache;

        cache.insert("a", "alice");
        cache.sync();

        mock.increment(Duration::from_secs(5)); // 5 secs from the start.
        cache.sync();

        assert_eq!(cache.get(&"a"), Some("alice"));

        mock.increment(Duration::from_secs(5)); // 10 secs.
        cache.sync();

        cache.insert("b", "bob");
        cache.sync();

        assert_eq!(cache.entry_count(), 2);

        mock.increment(Duration::from_secs(2)); // 12 secs.
        cache.sync();

        // `contains_key` does not reset the idle timer for the key.
        assert!(cache.contains_key(&"a"));
        assert!(cache.contains_key(&"b"));
        cache.sync();

        assert_eq!(cache.entry_count(), 2);

        mock.increment(Duration::from_secs(3)); // 15 secs.
        assert_eq!(cache.get(&"a"), None);
        assert_eq!(cache.get(&"b"), Some("bob"));
        assert!(!cache.contains_key(&"a"));
        assert!(cache.contains_key(&"b"));

        assert_eq!(cache.iter().count(), 1);

        cache.sync();
        assert_eq!(cache.entry_count(), 1);

        mock.increment(Duration::from_secs(10)); // 25 secs.
        assert_eq!(cache.get(&"a"), None);
        assert_eq!(cache.get(&"b"), None);
        assert!(!cache.contains_key(&"a"));
        assert!(!cache.contains_key(&"b"));

        assert_eq!(cache.iter().count(), 0);

        cache.sync();
        assert!(cache.is_table_empty());
    }

    #[test]
    fn test_iter() {
        const NUM_KEYS: usize = 50;

        fn make_value(key: usize) -> String {
            format!("val: {}", key)
        }

        let cache = Cache::builder()
            .max_capacity(100)
            .time_to_idle(Duration::from_secs(10))
            .build();

        for key in 0..NUM_KEYS {
            cache.insert(key, make_value(key));
        }

        let mut key_set = std::collections::HashSet::new();

        for entry in &cache {
            let (key, value) = entry.pair();
            assert_eq!(value, &make_value(*key));

            key_set.insert(*key);
        }

        // Ensure there are no missing or duplicate keys in the iteration.
        assert_eq!(key_set.len(), NUM_KEYS);

        // DO NOT REMOVE THE COMMENT FROM THIS BLOCK.
        // This block demonstrates how you can write code that deadlocks.
        // {
        //     let mut iter = cache.iter();
        //     let _ = iter.next();

        //     for key in 0..NUM_KEYS {
        //         cache.insert(key, make_value(key));
        //         println!("{}", key);
        //     }

        //     let _ = iter.next();
        // }
    }

    /// Runs 16 threads at the same time and ensures no deadlock occurs.
    ///
    /// - Eight of the threads will update key-values in the cache.
    /// - Eight others will iterate the cache.
    ///
    #[test]
    fn test_iter_multi_threads() {
        use std::collections::HashSet;

        const NUM_KEYS: usize = 1024;
        const NUM_THREADS: usize = 16;

        fn make_value(key: usize) -> String {
            format!("val: {}", key)
        }

        let cache = Cache::builder()
            .max_capacity(2048)
            .time_to_idle(Duration::from_secs(10))
            .build();

        // Initialize the cache.
        for key in 0..NUM_KEYS {
            cache.insert(key, make_value(key));
        }

        let rw_lock = Arc::new(std::sync::RwLock::<()>::default());
        let write_lock = rw_lock.write().unwrap();

        // https://rust-lang.github.io/rust-clippy/master/index.html#needless_collect
        #[allow(clippy::needless_collect)]
        let handles = (0..NUM_THREADS)
            .map(|n| {
                let cache = cache.clone();
                let rw_lock = Arc::clone(&rw_lock);

                if n % 2 == 0 {
                    // This thread will update the cache.
                    std::thread::spawn(move || {
                        let read_lock = rw_lock.read().unwrap();
                        for key in 0..NUM_KEYS {
                            // TODO: Update keys in a random order?
                            cache.insert(key, make_value(key));
                        }
                        std::mem::drop(read_lock);
                    })
                } else {
                    // This thread will iterate the cache.
                    std::thread::spawn(move || {
                        let read_lock = rw_lock.read().unwrap();
                        let mut key_set = HashSet::new();
                        for entry in &cache {
                            let (key, value) = entry.pair();
                            assert_eq!(value, &make_value(*key));
                            key_set.insert(*key);
                        }
                        // Ensure there are no missing or duplicate keys in the iteration.
                        assert_eq!(key_set.len(), NUM_KEYS);
                        std::mem::drop(read_lock);
                    })
                }
            })
            .collect::<Vec<_>>();

        // Let these threads run by releasing the write lock.
        std::mem::drop(write_lock);

        handles.into_iter().for_each(|h| h.join().expect("Failed"));

        // Ensure there are no missing or duplicate keys in the iteration.
        let key_set = cache.iter().map(|ent| *ent.key()).collect::<HashSet<_>>();
        assert_eq!(key_set.len(), NUM_KEYS);
    }
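The `RwLock` "gate" in the test above (the main thread holds the write lock while spawning, then drops it so every reader-blocked worker starts at once) is a general start-gate pattern. A self-contained sketch using only the standard library, with the names `run_gated` and `gate` being illustrative, not part of Mini Moka:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

/// Spawns `num_workers` threads that all park on a shared `RwLock` until the
/// caller releases its write lock, then returns each worker's result in
/// spawn order.
fn run_gated(num_workers: usize) -> Vec<usize> {
    let gate = Arc::new(RwLock::new(()));
    // While the write guard is held, every worker blocks inside `read()`.
    let write_guard = gate.write().unwrap();

    let handles: Vec<_> = (0..num_workers)
        .map(|n| {
            let gate = Arc::clone(&gate);
            thread::spawn(move || {
                // Blocks until the main thread drops `write_guard`.
                let _read_guard = gate.read().unwrap();
                n * 10
            })
        })
        .collect();

    // Open the gate: all workers begin their work at (roughly) the same time.
    drop(write_guard);

    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    assert_eq!(run_gated(4), vec![0, 10, 20, 30]);
}
```

Maximizing the overlap between the updating and iterating threads is what makes the test an effective deadlock probe; without the gate, early threads could finish before later ones even start.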

    #[test]
    fn test_debug_format() {
        let cache = Cache::new(10);
        cache.insert('a', "alice");
        cache.insert('b', "bob");
        cache.insert('c', "cindy");

        let debug_str = format!("{:?}", cache);
        assert!(debug_str.starts_with('{'));
        assert!(debug_str.contains(r#"'a': "alice""#));
        assert!(debug_str.contains(r#"'b': "bob""#));
        assert!(debug_str.contains(r#"'c': "cindy""#));
        assert!(debug_str.ends_with('}'));
    }
}
crates/mini-moka-vendored/src/sync/iter.rs
use super::{base_cache::BaseCache, mapref::EntryRef};
use crate::common::concurrent::ValueEntry;

use std::{
    hash::{BuildHasher, Hash},
    sync::Arc,
};
use triomphe::Arc as TrioArc;

pub(crate) type DashMapIter<'a, K, V, S> =
    dashmap::iter::Iter<'a, Arc<K>, TrioArc<ValueEntry<K, V>>, S>;

pub struct Iter<'a, K, V, S> {
    cache: &'a BaseCache<K, V, S>,
    map_iter: DashMapIter<'a, K, V, S>,
}

impl<'a, K, V, S> Iter<'a, K, V, S> {
    pub(crate) fn new(cache: &'a BaseCache<K, V, S>, map_iter: DashMapIter<'a, K, V, S>) -> Self {
        Self { cache, map_iter }
    }
}

impl<'a, K, V, S> Iterator for Iter<'a, K, V, S>
where
    K: Eq + Hash,
    S: BuildHasher + Clone,
{
    type Item = EntryRef<'a, K, V>;

    fn next(&mut self) -> Option<Self::Item> {
        // Advance the underlying DashMap iterator, skipping entries that have
        // already expired, so the caller never observes an expired entry.
        for map_ref in &mut self.map_iter {
            if !self.cache.is_expired_entry(map_ref.value()) {
                return Some(EntryRef::new(map_ref));
            }
        }

        None
    }
}

// Clippy beta 0.1.83 (f41c7ed9889 2024-10-31) warns about unused lifetimes on `'a`.
// This seems to be a false positive. The lifetimes are used in the trait bounds.
// https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_lifetimes
#[allow(clippy::extra_unused_lifetimes)]
unsafe impl<'a, K, V, S> Send for Iter<'_, K, V, S>
where
    K: 'a + Eq + Hash + Send,
    V: 'a + Send,
    S: 'a + BuildHasher + Clone,
{
}

// Clippy beta 0.1.83 (f41c7ed9889 2024-10-31) warns about unused lifetimes on `'a`.
// This seems to be a false positive. The lifetimes are used in the trait bounds.
// https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_lifetimes
#[allow(clippy::extra_unused_lifetimes)]
unsafe impl<'a, K, V, S> Sync for Iter<'_, K, V, S>
where
    K: 'a + Eq + Hash + Sync,
    V: 'a + Sync,
    S: 'a + BuildHasher + Clone,
{
}
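The skip-expired loop in `Iter::next` above is an instance of a general filtering-iterator shape: advance an inner iterator until an item passes a liveness predicate, yielding `None` only when the inner iterator is exhausted. A self-contained sketch of the same shape (the names `LiveIter` and `is_live` are illustrative, not part of this crate):

```rust
/// Wraps another iterator and yields only items that pass a liveness
/// predicate — the same loop shape as `Iter::next` above.
struct LiveIter<I, F> {
    inner: I,
    is_live: F,
}

impl<I, F> Iterator for LiveIter<I, F>
where
    I: Iterator,
    F: FnMut(&I::Item) -> bool,
{
    type Item = I::Item;

    fn next(&mut self) -> Option<Self::Item> {
        // Keep advancing the inner iterator until a live item is found
        // (or the inner iterator runs out).
        for item in self.inner.by_ref() {
            if (self.is_live)(&item) {
                return Some(item);
            }
        }
        None
    }
}

fn main() {
    let values = vec![1, 2, 3, 4, 5, 6];
    let evens: Vec<i32> = LiveIter {
        inner: values.into_iter(),
        is_live: |v: &i32| v % 2 == 0,
    }
    .collect();
    assert_eq!(evens, vec![2, 4, 6]);
}
```

Checking expiry lazily at iteration time, rather than eagerly purging, is what lets the cache defer housekeeping to `sync()` while still never yielding stale entries to iterator consumers.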