cachefs

A caching filesystem for the AbsFS ecosystem, providing write-through, write-back, and write-around caching with configurable eviction policies (LRU, LFU, TTL, and hybrid).

Overview

cachefs extends beyond the basic caching capabilities of corfs (copy-on-read filesystem) by implementing a full-featured cache layer with multiple write modes, configurable eviction policies, and intelligent cache invalidation strategies. It's designed for scenarios requiring high performance, memory efficiency, and fine-grained control over caching behavior.

Key Differentiators from corfs
  • Multiple Write Modes: Write-through, write-back, and write-around policies
  • Advanced Eviction: LRU (Least Recently Used), LFU (Least Frequently Used), and TTL-based eviction
  • Memory Management: Configurable cache size limits with automatic eviction
  • Cache Invalidation: Time-based, event-based, and manual invalidation strategies
  • Metadata Caching: Separate control over data and metadata caching
  • Performance Monitoring: Built-in metrics and statistics tracking
  • Async Write-Back: Background write-back with configurable flush intervals

Features

Cache Policies
Eviction Policies
  • LRU (Least Recently Used): Evicts least recently accessed items
  • LFU (Least Frequently Used): Evicts least frequently accessed items
  • TTL (Time-To-Live): Automatic expiration based on age
  • Hybrid: Combination of LRU/LFU with TTL constraints (see the sketch after this list)
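The Hybrid policy is easiest to see in miniature. The sketch below is not the package's internal implementation; it only illustrates the decision order a hybrid LRU+TTL policy implies: entries past their TTL are evicted first, and recency breaks ties among the rest.
// Conceptual sketch only (not cachefs internals): hybrid LRU+TTL victim selection.
type entry struct {
    path     string
    expires  time.Time
    lastUsed time.Time
}

// pickVictim assumes at least one entry and returns the path to evict:
// any expired entry is chosen immediately, otherwise the least recently used one.
func pickVictim(entries []entry, now time.Time) string {
    victim := entries[0]
    for _, e := range entries {
        if now.After(e.expires) {
            return e.path // expired: the TTL constraint trumps recency
        }
        if e.lastUsed.Before(victim.lastUsed) {
            victim = e
        }
    }
    return victim.path
}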
Write Modes
  • Write-Through: Synchronous writes to both cache and backing store
  • Write-Back: Asynchronous writes with configurable flush intervals
  • Write-Around: Bypass cache on writes, only cache on reads
Memory Management
  • Configurable maximum cache size (bytes or entry count)
  • Automatic eviction when limits are reached
  • Memory pressure detection and adaptive behavior
  • Separate limits for data and metadata caches
Filesystem Interfaces
  • CacheFS: Implements absfs.FileSystem (no symlink methods)
  • SymlinkCacheFS: Implements absfs.SymlinkFileSystem with full symlink support
  • Use NewSymlinkFS() when your backing filesystem supports symlinks
Cache Invalidation
  • Time-Based: TTL expiration with configurable durations
  • Event-Based: Invalidation on file system events
  • Manual: Explicit cache clear operations
  • Pattern-Based: Invalidate by path patterns or prefixes

Implementation Phases

Phase 1: Core Infrastructure
  • Basic cache entry structure
  • LRU eviction implementation
  • Write-through mode
  • Simple memory limit enforcement
  • Basic statistics tracking
Phase 2: Advanced Eviction
  • LFU eviction policy
  • TTL-based expiration
  • Hybrid eviction strategies
  • Metadata caching separate from data
Phase 3: Write-Back Support
  • Async write-back implementation
  • Background flush scheduler
  • Write-back queue management
  • Dirty entry tracking and persistence
Phase 4: Performance Optimization
  • Lock-free data structures where possible
  • Sharded cache to reduce contention
  • Batch operations support
  • Memory pooling for cache entries
Phase 5: Monitoring and Observability
  • Detailed metrics (hit rate, eviction rate, etc.)
  • Cache warming strategies
  • Performance benchmarking suite
  • Comparison benchmarks vs corfs

API Design

Basic Usage
package main

import (
    "fmt"
    "os"

    "github.com/absfs/cachefs"
    "github.com/absfs/memfs"
)

func main() {
    // Create backing filesystem
    backing, _ := memfs.NewFS()

    // Create cache with default settings (write-through, LRU, 100MB limit)
    cache := cachefs.New(backing)

    // Use as standard absfs.FileSystem
    file, err := cache.OpenFile("/data/file.txt", os.O_RDWR|os.O_CREATE, 0644)
    if err != nil {
        panic(err)
    }
    defer file.Close()

    // Reads and writes are automatically cached
    data := make([]byte, 1024)
    n, _ := file.Read(data)
    fmt.Printf("read %d bytes\n", n)
}
Symlink Support
package main

import (
    "fmt"

    "github.com/absfs/cachefs"
    "github.com/absfs/memfs"
)

func main() {
    // Create backing filesystem with symlink support
    backing, _ := memfs.NewFS()

    // Create SymlinkCacheFS for full symlink support
    cache := cachefs.NewSymlinkFS(backing)

    // All symlink operations are passed through to the backing filesystem
    cache.Symlink("/target/file.txt", "/link.txt")
    target, _ := cache.Readlink("/link.txt")
    info, _ := cache.Lstat("/link.txt")
    fmt.Println(target, info)
}
Advanced Configuration
// Create cache with custom configuration
cache := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteBack),
    cachefs.WithEvictionPolicy(cachefs.EvictionLRU),
    cachefs.WithMaxBytes(500 * 1024 * 1024), // 500 MB
    cachefs.WithTTL(5 * time.Minute),
    cachefs.WithFlushInterval(30 * time.Second),
    cachefs.WithMetadataCache(true),
)

// Access cache statistics
stats := cache.Stats()
fmt.Printf("Hit Rate: %.2f%%\n", stats.HitRate()*100)
fmt.Printf("Evictions: %d\n", stats.Evictions())
fmt.Printf("Memory Used: %d bytes\n", stats.BytesUsed())
Write Mode Configuration
// Write-through: synchronous, always consistent
wt := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteThrough),
)

// Write-back: async writes, better performance, risk of data loss
wb := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteBack),
    cachefs.WithFlushInterval(10 * time.Second),
    cachefs.WithFlushOnClose(true),
)

// Write-around: bypass cache on writes, cache only reads
wa := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteAround),
)
Eviction Policy Configuration
// LRU: good for temporal locality
lru := cachefs.New(backing,
    cachefs.WithEvictionPolicy(cachefs.EvictionLRU),
    cachefs.WithMaxEntries(10000),
)

// LFU: good for frequency-based access patterns
lfu := cachefs.New(backing,
    cachefs.WithEvictionPolicy(cachefs.EvictionLFU),
    cachefs.WithMaxBytes(1024 * 1024 * 1024), // 1 GB
)

// TTL: automatic expiration
ttl := cachefs.New(backing,
    cachefs.WithEvictionPolicy(cachefs.EvictionTTL),
    cachefs.WithTTL(10 * time.Minute),
)

// Hybrid: combine LRU with TTL
hybrid := cachefs.New(backing,
    cachefs.WithEvictionPolicy(cachefs.EvictionHybrid),
    cachefs.WithTTL(15 * time.Minute),
    cachefs.WithMaxBytes(512 * 1024 * 1024),
)
Cache Invalidation
// Invalidate specific path
cache.Invalidate("/data/stale.txt")

// Invalidate by pattern
cache.InvalidatePattern("/data/*.tmp")

// Invalidate by prefix
cache.InvalidatePrefix("/cache/")

// Clear entire cache
cache.Clear()

// Flush dirty entries (write-back mode)
cache.Flush()
Statistics and Monitoring
// Get current statistics
stats := cache.Stats()
fmt.Printf("Hits: %d, Misses: %d, Hit Rate: %.2f%%\n",
    stats.Hits(), stats.Misses(), stats.HitRate()*100)

// Reset statistics
cache.ResetStats()

// Export metrics for external monitoring
jsonStats, _ := cache.ExportJSON()   // JSON-encoded DetailedStats
promText := cache.ExportPrometheus() // Prometheus text exposition format
// jsonStats / promText can be exposed via an HTTP endpoint, statsd bridge, etc. (see the sketch below)
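The ExportPrometheus and ExportJSON methods documented in the reference below can back a scrape endpoint directly. A minimal sketch, assuming a /metrics path and port 9090 (both arbitrary choices):
package main

import (
    "net/http"

    "github.com/absfs/cachefs"
    "github.com/absfs/memfs"
)

func main() {
    backing, _ := memfs.NewFS()
    cache := cachefs.New(backing)

    // Serve cache statistics in Prometheus text exposition format.
    http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte(cache.ExportPrometheus()))
    })
    http.ListenAndServe(":9090", nil)
}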

Memory Management

Size Limits
// Limit by total bytes
cache := cachefs.New(backing,
    cachefs.WithMaxBytes(1024 * 1024 * 1024), // 1 GB
)

// Limit by entry count
cache := cachefs.New(backing,
    cachefs.WithMaxEntries(50000),
)

// Limit both
cache := cachefs.New(backing,
    cachefs.WithMaxBytes(500 * 1024 * 1024),
    cachefs.WithMaxEntries(100000),
)
Separate Data and Metadata Caches
// Configure separate limits for metadata
cache := cachefs.New(backing,
    cachefs.WithMaxBytes(1024 * 1024 * 1024), // 1 GB for data
    cachefs.WithMetadataCache(true),
    cachefs.WithMetadataMaxEntries(100000), // Cache lots of metadata
)
Memory Pressure Handling
// Adaptive behavior under memory pressure
cache := cachefs.New(backing,
    cachefs.WithMemoryPressureHandler(func(used, limit uint64) {
        // Custom logic when approaching memory limits
        // e.g., reduce TTL, increase eviction rate
    }),
)

Cache Invalidation Strategies

Time-Based Invalidation
// Global TTL for all entries
cache := cachefs.New(backing,
    cachefs.WithTTL(5 * time.Minute),
)

// Per-path TTL configuration
cache.SetPathTTL("/dynamic/*", 30 * time.Second)
cache.SetPathTTL("/static/*", 1 * time.Hour)
Event-Based Invalidation
// Invalidate on file system events
cache := cachefs.New(backing,
    cachefs.WithInvalidateOnEvent(true),
)

// Custom event handlers
cache.OnFileModified(func(path string) {
    cache.Invalidate(path)
})
Write-Based Invalidation
// Write-through: no invalidation needed (always consistent)
wt := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteThrough),
)

// Write-around: invalidate on write (cache only reads)
wa := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteAround),
)

// Write-back: invalidate on flush
wb := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteBack),
    cachefs.WithInvalidateOnFlush(true),
)

Performance Benchmarks vs corfs

Expected performance characteristics:

Read Performance
  • Cache Hit: 10-100x faster than backing store (memory access)
  • Cache Miss: Slightly slower than direct access (overhead of cache check)
  • vs corfs: Similar cache hit performance, better eviction strategies
Write Performance
  • Write-Through: Similar to direct writes (synchronous)
  • Write-Back: 10-50x faster than direct writes (async)
  • Write-Around: Same as direct writes (bypass cache)
  • vs corfs: Write-back mode significantly faster, write-through comparable
Memory Efficiency
  • LRU/LFU: Better memory utilization than naive caching
  • TTL: Automatic cleanup of stale entries
  • vs corfs: More sophisticated memory management, configurable limits
Benchmark Suite
// Example benchmark results (target metrics)
BenchmarkCacheFS_ReadHit-8         10000000    120 ns/op
BenchmarkCacheFS_ReadMiss-8         1000000   1500 ns/op
BenchmarkCacheFS_WriteThrough-8     1000000   1800 ns/op
BenchmarkCacheFS_WriteBack-8       10000000    200 ns/op
BenchmarkCacheFS_LRU_Eviction-8     5000000    350 ns/op
BenchmarkCacheFS_LFU_Eviction-8     5000000    380 ns/op

// Comparison with corfs
BenchmarkCorFS_ReadHit-8           10000000    130 ns/op
BenchmarkCorFS_ReadMiss-8           1000000   1450 ns/op
BenchmarkCorFS_Write-8              1000000   1750 ns/op

Use Cases

High-Read Workloads
// Web server serving static assets
cache := cachefs.New(osfs.New(),
    cachefs.WithEvictionPolicy(cachefs.EvictionLRU),
    cachefs.WithMaxBytes(2 * 1024 * 1024 * 1024), // 2 GB
    cachefs.WithTTL(1 * time.Hour),
)
Write-Heavy with Batch Processing
// Log aggregation with periodic flush
cache := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteBack),
    cachefs.WithFlushInterval(5 * time.Minute),
    cachefs.WithMaxBytes(500 * 1024 * 1024),
)
Mixed Read/Write with Strong Consistency
// Database-like workload
cache := cachefs.New(backing,
    cachefs.WithWriteMode(cachefs.WriteModeWriteThrough),
    cachefs.WithEvictionPolicy(cachefs.EvictionHybrid),
    cachefs.WithMaxEntries(100000),
    cachefs.WithTTL(10 * time.Minute),
)

Testing

# Run all tests
go test ./...

# Run with coverage
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out

# Run benchmarks
go test -bench=. -benchmem

# Compare with corfs
go test -bench=. -benchmem > cachefs.bench
cd ../corfs
go test -bench=. -benchmem > corfs.bench
benchcmp corfs.bench ../cachefs/cachefs.bench

Contributing

Contributions are welcome! Please ensure:

  • All tests pass
  • Code coverage remains above 80%
  • Benchmarks show no performance regressions
  • Documentation is updated

License

MIT License - See LICENSE file for details

Related Projects

  • absfs - Core filesystem abstraction
  • corfs - Copy-on-read filesystem
  • memfs - In-memory filesystem
  • osfs - Operating system filesystem wrapper

Documentation

Overview

Example (Advanced)

Example_advanced demonstrates advanced configuration

package main

import (
	"fmt"
	"io"
	"io/fs"
	"os"
	"time"

	"github.com/absfs/absfs"
	"github.com/absfs/cachefs"
)

// Simple in-memory filesystem for examples
type simpleFS struct {
	files map[string][]byte
}

func newSimpleFS() *simpleFS {
	return &simpleFS{files: make(map[string][]byte)}
}

func (s *simpleFS) Chdir(dir string) error               { return nil }
func (s *simpleFS) Getwd() (string, error)               { return "/", nil }
func (s *simpleFS) TempDir() string                      { return "/tmp" }
func (s *simpleFS) Open(name string) (absfs.File, error) { return s.OpenFile(name, os.O_RDONLY, 0) }
func (s *simpleFS) Create(name string) (absfs.File, error) {
	return s.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666)
}
func (s *simpleFS) Mkdir(name string, perm os.FileMode) error    { return nil }
func (s *simpleFS) MkdirAll(name string, perm os.FileMode) error { return nil }
func (s *simpleFS) Remove(name string) error                     { delete(s.files, name); return nil }
func (s *simpleFS) RemoveAll(path string) error                  { delete(s.files, path); return nil }
func (s *simpleFS) Stat(name string) (os.FileInfo, error)        { return nil, os.ErrNotExist }
func (s *simpleFS) Rename(oldname, newname string) error {
	s.files[newname] = s.files[oldname]
	delete(s.files, oldname)
	return nil
}
func (s *simpleFS) Chmod(name string, mode os.FileMode) error                   { return nil }
func (s *simpleFS) Chtimes(name string, atime time.Time, mtime time.Time) error { return nil }
func (s *simpleFS) Chown(name string, uid, gid int) error                       { return nil }
func (s *simpleFS) Truncate(name string, size int64) error                      { return nil }
func (s *simpleFS) OpenFile(name string, flag int, perm os.FileMode) (absfs.File, error) {
	return &simpleFile{fs: s, name: name}, nil
}
func (s *simpleFS) ReadDir(name string) ([]fs.DirEntry, error) {
	return nil, nil
}
func (s *simpleFS) ReadFile(name string) ([]byte, error) {
	data, ok := s.files[name]
	if !ok {
		return nil, os.ErrNotExist
	}
	return data, nil
}
func (s *simpleFS) Sub(dir string) (fs.FS, error) {
	return absfs.FilerToFS(s, dir)
}

type simpleFile struct {
	fs   *simpleFS
	name string
	pos  int64
}

func (f *simpleFile) Name() string { return f.name }
func (f *simpleFile) Read(p []byte) (n int, err error) {
	data := f.fs.files[f.name]
	if f.pos >= int64(len(data)) {
		return 0, io.EOF
	}
	n = copy(p, data[f.pos:])
	f.pos += int64(n)
	if f.pos >= int64(len(data)) && n < len(p) {
		err = io.EOF
	}
	return
}
func (f *simpleFile) Write(p []byte) (n int, err error) {
	f.fs.files[f.name] = append(f.fs.files[f.name], p...)
	return len(p), nil
}
func (f *simpleFile) Close() error                                 { return nil }
func (f *simpleFile) Sync() error                                  { return nil }
func (f *simpleFile) Stat() (os.FileInfo, error)                   { return nil, nil }
func (f *simpleFile) Readdir(int) ([]os.FileInfo, error)           { return nil, nil }
func (f *simpleFile) Readdirnames(int) ([]string, error)           { return nil, nil }
func (f *simpleFile) ReadDir(int) ([]fs.DirEntry, error)           { return nil, nil }
func (f *simpleFile) Seek(offset int64, whence int) (int64, error) { f.pos = offset; return f.pos, nil }
func (f *simpleFile) ReadAt(b []byte, off int64) (n int, err error) {
	data := f.fs.files[f.name]
	return copy(b, data[off:]), nil
}
func (f *simpleFile) WriteAt(b []byte, off int64) (n int, err error) { return len(b), nil }
func (f *simpleFile) Truncate(size int64) error                      { return nil }
func (f *simpleFile) WriteString(s string) (n int, err error)        { return f.Write([]byte(s)) }

func main() {
	backing := newSimpleFS()

	// Create cache with custom configuration
	cache := cachefs.New(backing,
		cachefs.WithWriteMode(cachefs.WriteModeWriteThrough),
		cachefs.WithEvictionPolicy(cachefs.EvictionLRU),
		cachefs.WithMaxBytes(1024*1024), // 1 MB
		cachefs.WithTTL(5*time.Minute),
		cachefs.WithMaxEntries(1000),
	)

	// Use the cache
	backing.files["/config.txt"] = []byte("configuration data")
	file, _ := cache.OpenFile("/config.txt", os.O_RDONLY, 0644)
	data := make([]byte, 18)
	file.Read(data)
	file.Close()

	fmt.Printf("Data: %s\n", string(data))
	fmt.Printf("Cache hit rate: %.0f%%\n", cache.Stats().HitRate()*100)

}
Output:

Data: configuration data
Cache hit rate: 0%
Example (Basic)

Example_basic demonstrates basic usage of cachefs

package main

import (
	"fmt"
	"io"
	"io/fs"
	"os"
	"time"

	"github.com/absfs/absfs"
	"github.com/absfs/cachefs"
)

// Simple in-memory filesystem for examples
type simpleFS struct {
	files map[string][]byte
}

func newSimpleFS() *simpleFS {
	return &simpleFS{files: make(map[string][]byte)}
}

func (s *simpleFS) Chdir(dir string) error               { return nil }
func (s *simpleFS) Getwd() (string, error)               { return "/", nil }
func (s *simpleFS) TempDir() string                      { return "/tmp" }
func (s *simpleFS) Open(name string) (absfs.File, error) { return s.OpenFile(name, os.O_RDONLY, 0) }
func (s *simpleFS) Create(name string) (absfs.File, error) {
	return s.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666)
}
func (s *simpleFS) Mkdir(name string, perm os.FileMode) error    { return nil }
func (s *simpleFS) MkdirAll(name string, perm os.FileMode) error { return nil }
func (s *simpleFS) Remove(name string) error                     { delete(s.files, name); return nil }
func (s *simpleFS) RemoveAll(path string) error                  { delete(s.files, path); return nil }
func (s *simpleFS) Stat(name string) (os.FileInfo, error)        { return nil, os.ErrNotExist }
func (s *simpleFS) Rename(oldname, newname string) error {
	s.files[newname] = s.files[oldname]
	delete(s.files, oldname)
	return nil
}
func (s *simpleFS) Chmod(name string, mode os.FileMode) error                   { return nil }
func (s *simpleFS) Chtimes(name string, atime time.Time, mtime time.Time) error { return nil }
func (s *simpleFS) Chown(name string, uid, gid int) error                       { return nil }
func (s *simpleFS) Truncate(name string, size int64) error                      { return nil }
func (s *simpleFS) OpenFile(name string, flag int, perm os.FileMode) (absfs.File, error) {
	return &simpleFile{fs: s, name: name}, nil
}
func (s *simpleFS) ReadDir(name string) ([]fs.DirEntry, error) {
	return nil, nil
}
func (s *simpleFS) ReadFile(name string) ([]byte, error) {
	data, ok := s.files[name]
	if !ok {
		return nil, os.ErrNotExist
	}
	return data, nil
}
func (s *simpleFS) Sub(dir string) (fs.FS, error) {
	return absfs.FilerToFS(s, dir)
}

type simpleFile struct {
	fs   *simpleFS
	name string
	pos  int64
}

func (f *simpleFile) Name() string { return f.name }
func (f *simpleFile) Read(p []byte) (n int, err error) {
	data := f.fs.files[f.name]
	if f.pos >= int64(len(data)) {
		return 0, io.EOF
	}
	n = copy(p, data[f.pos:])
	f.pos += int64(n)
	if f.pos >= int64(len(data)) && n < len(p) {
		err = io.EOF
	}
	return
}
func (f *simpleFile) Write(p []byte) (n int, err error) {
	f.fs.files[f.name] = append(f.fs.files[f.name], p...)
	return len(p), nil
}
func (f *simpleFile) Close() error                                 { return nil }
func (f *simpleFile) Sync() error                                  { return nil }
func (f *simpleFile) Stat() (os.FileInfo, error)                   { return nil, nil }
func (f *simpleFile) Readdir(int) ([]os.FileInfo, error)           { return nil, nil }
func (f *simpleFile) Readdirnames(int) ([]string, error)           { return nil, nil }
func (f *simpleFile) ReadDir(int) ([]fs.DirEntry, error)           { return nil, nil }
func (f *simpleFile) Seek(offset int64, whence int) (int64, error) { f.pos = offset; return f.pos, nil }
func (f *simpleFile) ReadAt(b []byte, off int64) (n int, err error) {
	data := f.fs.files[f.name]
	return copy(b, data[off:]), nil
}
func (f *simpleFile) WriteAt(b []byte, off int64) (n int, err error) { return len(b), nil }
func (f *simpleFile) Truncate(size int64) error                      { return nil }
func (f *simpleFile) WriteString(s string) (n int, err error)        { return f.Write([]byte(s)) }

func main() {
	// Create backing filesystem
	backing := newSimpleFS()

	// Create cache with default settings (write-through, LRU, 100MB limit)
	cache := cachefs.New(backing)

	// Write a file
	backing.files["/data/file.txt"] = []byte("Hello, CacheFS!")

	// Read the file (will be cached)
	file, _ := cache.OpenFile("/data/file.txt", os.O_RDONLY, 0644)
	data := make([]byte, 15)
	n, _ := file.Read(data)
	file.Close()

	fmt.Printf("Read: %s\n", string(data[:n]))

	// Second read will be a cache hit
	file2, _ := cache.OpenFile("/data/file.txt", os.O_RDONLY, 0644)
	data2 := make([]byte, 15)
	file2.Read(data2)
	file2.Close()

	// Check statistics
	stats := cache.Stats()
	fmt.Printf("Hits: %d, Misses: %d\n", stats.Hits(), stats.Misses())

}
Output:

Read: Hello, CacheFS!
Hits: 1, Misses: 1
Example (Eviction)

Example_eviction demonstrates cache eviction

package main

import (
	"fmt"
	"io"
	"io/fs"
	"os"
	"time"

	"github.com/absfs/absfs"
	"github.com/absfs/cachefs"
)

// Simple in-memory filesystem for examples
type simpleFS struct {
	files map[string][]byte
}

func newSimpleFS() *simpleFS {
	return &simpleFS{files: make(map[string][]byte)}
}

func (s *simpleFS) Chdir(dir string) error               { return nil }
func (s *simpleFS) Getwd() (string, error)               { return "/", nil }
func (s *simpleFS) TempDir() string                      { return "/tmp" }
func (s *simpleFS) Open(name string) (absfs.File, error) { return s.OpenFile(name, os.O_RDONLY, 0) }
func (s *simpleFS) Create(name string) (absfs.File, error) {
	return s.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666)
}
func (s *simpleFS) Mkdir(name string, perm os.FileMode) error    { return nil }
func (s *simpleFS) MkdirAll(name string, perm os.FileMode) error { return nil }
func (s *simpleFS) Remove(name string) error                     { delete(s.files, name); return nil }
func (s *simpleFS) RemoveAll(path string) error                  { delete(s.files, path); return nil }
func (s *simpleFS) Stat(name string) (os.FileInfo, error)        { return nil, os.ErrNotExist }
func (s *simpleFS) Rename(oldname, newname string) error {
	s.files[newname] = s.files[oldname]
	delete(s.files, oldname)
	return nil
}
func (s *simpleFS) Chmod(name string, mode os.FileMode) error                   { return nil }
func (s *simpleFS) Chtimes(name string, atime time.Time, mtime time.Time) error { return nil }
func (s *simpleFS) Chown(name string, uid, gid int) error                       { return nil }
func (s *simpleFS) Truncate(name string, size int64) error                      { return nil }
func (s *simpleFS) OpenFile(name string, flag int, perm os.FileMode) (absfs.File, error) {
	return &simpleFile{fs: s, name: name}, nil
}
func (s *simpleFS) ReadDir(name string) ([]fs.DirEntry, error) {
	return nil, nil
}
func (s *simpleFS) ReadFile(name string) ([]byte, error) {
	data, ok := s.files[name]
	if !ok {
		return nil, os.ErrNotExist
	}
	return data, nil
}
func (s *simpleFS) Sub(dir string) (fs.FS, error) {
	return absfs.FilerToFS(s, dir)
}

type simpleFile struct {
	fs   *simpleFS
	name string
	pos  int64
}

func (f *simpleFile) Name() string { return f.name }
func (f *simpleFile) Read(p []byte) (n int, err error) {
	data := f.fs.files[f.name]
	if f.pos >= int64(len(data)) {
		return 0, io.EOF
	}
	n = copy(p, data[f.pos:])
	f.pos += int64(n)
	if f.pos >= int64(len(data)) && n < len(p) {
		err = io.EOF
	}
	return
}
func (f *simpleFile) Write(p []byte) (n int, err error) {
	f.fs.files[f.name] = append(f.fs.files[f.name], p...)
	return len(p), nil
}
func (f *simpleFile) Close() error                                 { return nil }
func (f *simpleFile) Sync() error                                  { return nil }
func (f *simpleFile) Stat() (os.FileInfo, error)                   { return nil, nil }
func (f *simpleFile) Readdir(int) ([]os.FileInfo, error)           { return nil, nil }
func (f *simpleFile) Readdirnames(int) ([]string, error)           { return nil, nil }
func (f *simpleFile) ReadDir(int) ([]fs.DirEntry, error)           { return nil, nil }
func (f *simpleFile) Seek(offset int64, whence int) (int64, error) { f.pos = offset; return f.pos, nil }
func (f *simpleFile) ReadAt(b []byte, off int64) (n int, err error) {
	data := f.fs.files[f.name]
	return copy(b, data[off:]), nil
}
func (f *simpleFile) WriteAt(b []byte, off int64) (n int, err error) { return len(b), nil }
func (f *simpleFile) Truncate(size int64) error                      { return nil }
func (f *simpleFile) WriteString(s string) (n int, err error)        { return f.Write([]byte(s)) }

func main() {
	backing := newSimpleFS()

	// Small cache that will trigger eviction
	cache := cachefs.New(backing,
		cachefs.WithMaxBytes(100),
	)

	// Add files that exceed cache size
	backing.files["/file1.txt"] = make([]byte, 60)
	backing.files["/file2.txt"] = make([]byte, 60)

	// Read first file
	file1, _ := cache.OpenFile("/file1.txt", os.O_RDONLY, 0644)
	buf1 := make([]byte, 60)
	file1.Read(buf1)
	file1.Close()

	fmt.Printf("Entries after file1: %d\n", cache.Stats().Entries())

	// Read second file - should trigger eviction of file1
	file2, _ := cache.OpenFile("/file2.txt", os.O_RDONLY, 0644)
	buf2 := make([]byte, 60)
	file2.Read(buf2)
	file2.Close()

	fmt.Printf("Entries after file2: %d\n", cache.Stats().Entries())
	fmt.Printf("Total evictions: %d\n", cache.Stats().Evictions())

}
Output:

Entries after file1: 1
Entries after file2: 1
Total evictions: 1

Constants

This section is empty.

Variables

This section is empty.

Functions

func CompareSnapshots

func CompareSnapshots(before, after *Snapshot) map[string]int64

CompareSnapshots returns the delta between two snapshots
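
A sketch of pairing TakeSnapshot with CompareSnapshots to measure what a particular workload does to the cache. The workload function is hypothetical, and the printed field names are whatever deltas DetailedStats exposes at runtime:

before := cache.TakeSnapshot()
runWorkload(cache) // hypothetical: the reads/writes you want to measure
after := cache.TakeSnapshot()

// Print how each tracked statistic changed during the workload.
for field, delta := range cachefs.CompareSnapshots(before, after) {
	fmt.Printf("%s: %+d\n", field, delta)
}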

Types

type CacheFS

type CacheFS struct {
	// contains filtered or unexported fields
}

CacheFS is a caching filesystem that wraps another filesystem

func New

func New(backing absfs.FileSystem, opts ...Option) *CacheFS

New creates a new CacheFS with the given backing filesystem and options

func (*CacheFS) BatchFlush

func (c *CacheFS) BatchFlush(paths []string) error

BatchFlush flushes specific dirty entries to the backing store. Returns the first error encountered, but attempts to flush all entries.

func (*CacheFS) BatchInvalidate

func (c *CacheFS) BatchInvalidate(paths []string) int

BatchInvalidate invalidates multiple paths in a single lock acquisition. Returns the number of entries actually invalidated.

func (*CacheFS) BatchStats

func (c *CacheFS) BatchStats(paths []string) map[string]EntryStats

BatchStats returns statistics for multiple entries in a single lock acquisition. Entries not in cache will have Cached=false.
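
A sketch combining the three batch helpers; the paths are placeholders:

paths := []string{"/data/a.txt", "/data/b.txt", "/data/c.txt"}

// Inspect several entries under a single lock acquisition.
for path, st := range cache.BatchStats(paths) {
	fmt.Printf("%s cached=%v dirty=%v size=%d\n", path, st.Cached, st.Dirty, st.Size)
}

// Flush any dirty entries among them, then drop them from the cache.
if err := cache.BatchFlush(paths); err != nil {
	log.Printf("batch flush: %v", err)
}
fmt.Printf("invalidated %d entries\n", cache.BatchInvalidate(paths))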

func (*CacheFS) Chdir

func (c *CacheFS) Chdir(dir string) error

Chdir changes the current working directory

func (*CacheFS) Chmod

func (c *CacheFS) Chmod(name string, mode fs.FileMode) error

Chmod changes file permissions

func (*CacheFS) Chown

func (c *CacheFS) Chown(name string, uid, gid int) error

Chown changes file owner and group

func (*CacheFS) Chtimes

func (c *CacheFS) Chtimes(name string, atime time.Time, mtime time.Time) error

Chtimes changes file access and modification times

func (*CacheFS) CleanExpired

func (c *CacheFS) CleanExpired()

CleanExpired removes expired entries (can be called manually or periodically)

func (*CacheFS) Clear

func (c *CacheFS) Clear()

Clear removes all entries from the cache

func (*CacheFS) Close

func (c *CacheFS) Close() error

Close stops the background flush goroutine and flushes all dirty entries
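
In write-back mode, dirty entries live only in memory until they are flushed, so lifecycle handling matters. A sketch of the usual pattern, deferring Close and flushing at explicit safe points (backing and the interval are placeholders):

cache := cachefs.New(backing,
	cachefs.WithWriteMode(cachefs.WriteModeWriteBack),
	cachefs.WithFlushInterval(30*time.Second),
)
// Close stops the background flusher and writes out any remaining dirty entries.
defer cache.Close()

// ... reads and writes ...

// Force a flush at a known-good point, e.g. after a batch of writes completes.
if err := cache.Flush(); err != nil {
	log.Printf("flush: %v", err)
}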

func (*CacheFS) Create

func (c *CacheFS) Create(name string) (absfs.File, error)

Create creates or truncates a file

func (*CacheFS) DetailedStats

func (c *CacheFS) DetailedStats() *DetailedStats

DetailedStats returns comprehensive statistics about the cache

func (*CacheFS) ExportJSON

func (c *CacheFS) ExportJSON() ([]byte, error)

ExportJSON exports cache statistics as JSON

func (*CacheFS) ExportPrometheus

func (c *CacheFS) ExportPrometheus() string

ExportPrometheus exports cache statistics in Prometheus format

func (*CacheFS) Flush

func (c *CacheFS) Flush() error

Flush flushes all dirty entries to the backing store (write-back mode)

func (*CacheFS) GetWarmingProgress

func (c *CacheFS) GetWarmingProgress() WarmProgress

GetWarmingProgress returns the current warming progress. This is useful when using the Async warming methods.

func (*CacheFS) Getwd

func (c *CacheFS) Getwd() (string, error)

Getwd returns the current working directory

func (*CacheFS) Invalidate

func (c *CacheFS) Invalidate(path string)

Invalidate removes a specific path from the cache

func (*CacheFS) InvalidatePattern

func (c *CacheFS) InvalidatePattern(pattern string) error

InvalidatePattern invalidates cache entries matching a glob pattern

func (*CacheFS) InvalidatePrefix

func (c *CacheFS) InvalidatePrefix(prefix string)

InvalidatePrefix invalidates all cache entries with the given path prefix

func (*CacheFS) Mkdir

func (c *CacheFS) Mkdir(name string, perm fs.FileMode) error

Mkdir creates a directory

func (*CacheFS) MkdirAll

func (c *CacheFS) MkdirAll(name string, perm fs.FileMode) error

MkdirAll creates a directory and all parent directories

func (*CacheFS) Open

func (c *CacheFS) Open(name string) (absfs.File, error)

Open opens a file for reading

func (*CacheFS) OpenFile

func (c *CacheFS) OpenFile(name string, flag int, perm fs.FileMode) (absfs.File, error)

OpenFile opens a file with the specified flags and permissions

func (*CacheFS) Prefetch

func (c *CacheFS) Prefetch(paths []string, opts *PrefetchOptions) error

Prefetch pre-loads multiple files into the cache in parallel. Useful for cache warming scenarios.
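
A sketch of warming a known working set with Prefetch; the worker count and paths are placeholders:

opts := &cachefs.PrefetchOptions{
	Workers:    8,    // 0 would default to the number of CPUs
	SkipErrors: true, // keep going if individual files fail to load
}
if err := cache.Prefetch([]string{"/assets/app.css", "/assets/app.js"}, opts); err != nil {
	log.Printf("prefetch: %v", err)
}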

func (*CacheFS) ReadDir

func (c *CacheFS) ReadDir(name string) ([]fs.DirEntry, error)

ReadDir reads the named directory and returns a list of directory entries

func (*CacheFS) ReadFile

func (c *CacheFS) ReadFile(name string) ([]byte, error)

ReadFile reads the named file and returns its contents

func (*CacheFS) Remove

func (c *CacheFS) Remove(name string) error

Remove removes a file or empty directory

func (*CacheFS) RemoveAll

func (c *CacheFS) RemoveAll(path string) error

RemoveAll removes a path and all children

func (*CacheFS) Rename

func (c *CacheFS) Rename(oldname, newname string) error

Rename renames a file or directory

func (*CacheFS) ResetStats

func (c *CacheFS) ResetStats()

ResetStats resets all cache statistics

func (*CacheFS) Stat

func (c *CacheFS) Stat(name string) (fs.FileInfo, error)

Stat returns file information

func (*CacheFS) Stats

func (c *CacheFS) Stats() *Stats

Stats returns the current cache statistics

func (*CacheFS) Sub

func (c *CacheFS) Sub(dir string) (fs.FS, error)

Sub returns an fs.FS rooted at the given directory

func (*CacheFS) TakeSnapshot

func (c *CacheFS) TakeSnapshot() *Snapshot

TakeSnapshot captures current cache statistics with a timestamp

func (*CacheFS) TempDir

func (c *CacheFS) TempDir() string

TempDir returns the temporary directory

func (*CacheFS) Truncate

func (c *CacheFS) Truncate(name string, size int64) error

Truncate changes the size of a file

func (*CacheFS) WarmCache

func (c *CacheFS) WarmCache(paths []string, opts ...WarmOption) error

WarmCache pre-loads a list of files into the cache

func (*CacheFS) WarmCacheAsync

func (c *CacheFS) WarmCacheAsync(paths []string, done chan<- error, opts ...WarmOption)

WarmCacheAsync pre-loads files asynchronously in the background
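
A sketch of asynchronous warming with progress polling, using WarmCacheAsync, WithWarmWorkers, WithWarmSkipErrors, and GetWarmingProgress; the path list and polling interval are placeholders:

done := make(chan error, 1)
cache.WarmCacheAsync([]string{"/data/a.bin", "/data/b.bin"}, done,
	cachefs.WithWarmWorkers(4),
	cachefs.WithWarmSkipErrors(true),
)

// Poll progress while warming runs in the background.
warming := true
for warming {
	select {
	case err := <-done:
		if err != nil {
			log.Printf("warming finished with error: %v", err)
		}
		warming = false
	case <-time.After(time.Second):
		p := cache.GetWarmingProgress()
		fmt.Printf("warmed %d/%d (%d errors, %d bytes)\n", p.Completed, p.Total, p.Errors, p.BytesRead)
	}
}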

func (*CacheFS) WarmCacheFromDir

func (c *CacheFS) WarmCacheFromDir(dir string, opts ...WarmOption) error

WarmCacheFromDir recursively loads all files in a directory into cache

func (*CacheFS) WarmCacheFromFile

func (c *CacheFS) WarmCacheFromFile(listPath string, opts ...WarmOption) error

WarmCacheFromFile loads files listed in a text file (one path per line)

func (*CacheFS) WarmCacheFromPattern

func (c *CacheFS) WarmCacheFromPattern(pattern string, opts ...WarmOption) error

WarmCacheFromPattern loads all files matching a glob pattern into the cache. Note: This requires the backing filesystem to implement a Glob method. If not available, use WarmCache with pre-computed paths instead.

func (*CacheFS) WarmCacheFromPatternAsync

func (c *CacheFS) WarmCacheFromPatternAsync(pattern string, done chan<- error, opts ...WarmOption)

WarmCacheFromPatternAsync loads files matching pattern asynchronously

func (*CacheFS) WarmCacheSmart

func (c *CacheFS) WarmCacheSmart(opts ...WarmOption) error

WarmCacheSmart loads files based on access patterns and frequency. It analyzes recent access patterns and pre-loads likely-needed files.

type DetailedStats

type DetailedStats struct {
	// Basic counters
	Hits      uint64 `json:"hits"`
	Misses    uint64 `json:"misses"`
	Evictions uint64 `json:"evictions"`
	BytesUsed uint64 `json:"bytes_used"`
	Entries   uint64 `json:"entries"`

	// Hit rate
	HitRate float64 `json:"hit_rate"`

	// Write-back specific
	FlushCount   uint64 `json:"flush_count,omitempty"`
	FlushErrors  uint64 `json:"flush_errors,omitempty"`
	DirtyEntries uint64 `json:"dirty_entries,omitempty"`

	// Per-operation counters
	ReadOps       uint64 `json:"read_ops,omitempty"`
	WriteOps      uint64 `json:"write_ops,omitempty"`
	InvalidateOps uint64 `json:"invalidate_ops,omitempty"`

	// Configuration
	MaxBytes       uint64 `json:"max_bytes"`
	MaxEntries     uint64 `json:"max_entries"`
	TTL            string `json:"ttl,omitempty"`
	WriteMode      string `json:"write_mode"`
	EvictionPolicy string `json:"eviction_policy"`
}

DetailedStats contains comprehensive cache statistics and metrics

type EntryStats

type EntryStats struct {
	Path        string
	Size        int64
	Dirty       bool
	AccessCount uint64
	Cached      bool
}

EntryStats contains statistics for a single cache entry

type EvictionPolicy

type EvictionPolicy int

EvictionPolicy defines how entries are evicted when cache is full

const (
	// EvictionLRU evicts least recently used entries
	EvictionLRU EvictionPolicy = iota
	// EvictionLFU evicts least frequently used entries
	EvictionLFU
	// EvictionTTL evicts entries based on time-to-live
	EvictionTTL
	// EvictionHybrid combines LRU/LFU with TTL
	EvictionHybrid
)

type Option

type Option func(*CacheFS)

Option is a function that configures a CacheFS

func WithEvictionPolicy

func WithEvictionPolicy(policy EvictionPolicy) Option

WithEvictionPolicy sets the eviction policy for the cache

func WithFlushInterval

func WithFlushInterval(interval time.Duration) Option

WithFlushInterval sets the interval for flushing dirty entries in write-back mode

func WithFlushOnClose

func WithFlushOnClose(enable bool) Option

WithFlushOnClose enables flushing dirty entries when files are closed

func WithMaxBytes

func WithMaxBytes(bytes uint64) Option

WithMaxBytes sets the maximum cache size in bytes

func WithMaxEntries

func WithMaxEntries(entries uint64) Option

WithMaxEntries sets the maximum number of cache entries

func WithMetadataCache

func WithMetadataCache(enable bool) Option

WithMetadataCache enables separate metadata caching

func WithMetadataMaxEntries

func WithMetadataMaxEntries(entries uint64) Option

WithMetadataMaxEntries sets the maximum number of metadata cache entries

func WithTTL

func WithTTL(ttl time.Duration) Option

WithTTL sets the time-to-live for cache entries

func WithWriteMode

func WithWriteMode(mode WriteMode) Option

WithWriteMode sets the write mode for the cache

type PrefetchOptions

type PrefetchOptions struct {
	// Workers is the number of concurrent workers to use for prefetching.
	// If 0, defaults to number of CPUs.
	Workers int

	// SkipErrors continues prefetching even if some files fail to load.
	// Errors are still returned but don't stop the operation.
	SkipErrors bool
}

PrefetchOptions configures the Prefetch operation

type Snapshot

type Snapshot struct {
	Timestamp time.Time
	Stats     *DetailedStats
}

Snapshot captures a point-in-time view of cache statistics

type Stats

type Stats struct {
	// contains filtered or unexported fields
}

Stats represents cache statistics using lock-free atomic operations

func (*Stats) BytesUsed

func (s *Stats) BytesUsed() uint64

BytesUsed returns the total bytes used by cache

func (*Stats) Entries

func (s *Stats) Entries() uint64

Entries returns the number of cached entries

func (*Stats) Evictions

func (s *Stats) Evictions() uint64

Evictions returns the number of evicted entries

func (*Stats) HitRate

func (s *Stats) HitRate() float64

HitRate returns the cache hit rate as a value between 0 and 1

func (*Stats) Hits

func (s *Stats) Hits() uint64

Hits returns the number of cache hits

func (*Stats) Misses

func (s *Stats) Misses() uint64

Misses returns the number of cache misses

type SymlinkCacheFS

type SymlinkCacheFS struct {
	*CacheFS
	// contains filtered or unexported fields
}

SymlinkCacheFS wraps a SymlinkFileSystem with caching

func NewSymlinkFS

func NewSymlinkFS(backing absfs.SymlinkFileSystem, opts ...Option) *SymlinkCacheFS

NewSymlinkFS creates a new SymlinkCacheFS with the given backing SymlinkFileSystem and options

func (*SymlinkCacheFS) Lchown

func (c *SymlinkCacheFS) Lchown(name string, uid, gid int) error

Lchown changes the owner and group of a symlink

func (*SymlinkCacheFS) Lstat

func (c *SymlinkCacheFS) Lstat(name string) (fs.FileInfo, error)

Lstat returns file info without following symlinks

func (*SymlinkCacheFS) Readlink

func (c *SymlinkCacheFS) Readlink(name string) (string, error)

Readlink returns the destination of a symlink

func (*SymlinkCacheFS) Symlink

func (c *SymlinkCacheFS) Symlink(oldname, newname string) error

Symlink creates a symbolic link

type WarmOption

type WarmOption func(*warmConfig)

WarmOption configures cache warming behavior

func WithWarmPriority

func WithWarmPriority(p WarmPriority) WarmOption

WithWarmPriority sets the priority level for warming

func WithWarmProgress

func WithWarmProgress(fn func(WarmProgress)) WarmOption

WithWarmProgress sets a progress callback for warming

func WithWarmSkipErrors

func WithWarmSkipErrors(skip bool) WarmOption

WithWarmSkipErrors configures whether to skip errors during warming

func WithWarmWorkers

func WithWarmWorkers(n int) WarmOption

WithWarmWorkers sets the number of parallel workers for warming

type WarmPriority

type WarmPriority int

WarmPriority defines the priority level for cache warming

const (
	// WarmPriorityHigh loads files first and prevents eviction
	WarmPriorityHigh WarmPriority = iota
	// WarmPriorityNormal uses regular cache behavior
	WarmPriorityNormal
	// WarmPriorityLow loads only if space available
	WarmPriorityLow
)

type WarmProgress

type WarmProgress struct {
	Total     int
	Completed int
	Errors    int
	BytesRead uint64
}

WarmProgress tracks cache warming progress

type WriteMode

type WriteMode int

WriteMode defines the cache write behavior

const (
	// WriteModeWriteThrough writes to both cache and backing store synchronously
	WriteModeWriteThrough WriteMode = iota
	// WriteModeWriteBack writes to cache immediately and backing store asynchronously
	WriteModeWriteBack
	// WriteModeWriteAround bypasses cache on writes, only caches reads
	WriteModeWriteAround
)
