Python's decorator syntax is a surface-level entry point into one of the language's richest design spaces. Beyond the basic @property and @staticmethod that appear in tutorials, the standard library ships a collection of specialized utility decorators built for performance optimization, type dispatch, comparison generation, resource management, and production-grade reliability patterns. This article covers those decorators in depth — how they work internally, when to reach for each one, and how to build custom variants that solve real problems.
A decorator in Python is any callable that accepts a function (or class) as its argument and returns a replacement callable. That simplicity is intentional — it means decorators compose cleanly, can carry state, and can be parameterized using factory functions that return the actual decorator. The utility decorators covered here take that composability seriously. Several of them come from functools, Python's module for higher-order function support. Others come from contextlib. A final group is custom-built, but follows patterns so common in production codebases that they belong in any working Python developer's toolkit.
Standard Library Utility Decorators
The functools module is the primary home of Python's specialized utility decorators. Each one solves a distinct class of problem, and understanding them individually before combining them is the right approach.
functools.lru_cache — Memoization with Bounded Memory
Think of lru_cache as a sticky notepad stapled to the function. The first time you ask a question, the answer gets written down. Every time after that, the notepad is checked first. If the notepad is full, the oldest note gets thrown away to make room.
@functools.lru_cache (added in Python 3.2) wraps a function with a memoizing callable that saves up to maxsize recent results. The underlying storage is a dictionary keyed by the argument tuple, which means all positional and keyword arguments must be hashable. The "LRU" in the name stands for Least Recently Used — when the cache reaches capacity, the result that was accessed least recently is evicted first. The Python Software Foundation's documentation notes that the LRU feature performs best when maxsize is a power of two — a meaningful tuning detail for high-throughput services.
from functools import lru_cache
@lru_cache(maxsize=128)
def fibonacci(n: int) -> int:
if n < 2:
return n
return fibonacci(n - 1) + fibonacci(n - 2)
# First call computes recursively; subsequent calls hit cache
print(fibonacci(40)) # 102334155
# Inspect cache behavior
info = fibonacci.cache_info()
print(f"Hits: {info.hits}, Misses: {info.misses}, Size: {info.currsize}")
# Hits: 38, Misses: 41, Size: 41
# Clear cache when needed
fibonacci.cache_clear()
Setting maxsize=None removes the size limit entirely, turning the decorator into an unbounded memoization cache. Python 3.9 introduced @functools.cache as a shorthand for exactly this configuration — it is faster than lru_cache(maxsize=None) because it skips the LRU tracking logic entirely. Python 3.9 also added cache_parameters() to lru_cache-wrapped functions, which returns a dictionary of the active maxsize and typed values — useful for introspection in test suites and runtime configuration validation. Use @cache when you know the argument space is finite and manageable; use @lru_cache(maxsize=N) when you need memory-bounded caching on a long-running process.
The typed parameter (available since Python 3.3) controls whether arguments of different types are treated as distinct cache entries. With typed=True, f(3) and f(3.0) are cached separately even though they compare as equal. This matters when your function's return value differs based on the argument type — for example, a serializer that produces different output for int versus float. One additional precision point: the official Python documentation notes that f(a=1, b=2) and f(b=2, a=1) are considered distinct cache entries because they differ in keyword argument order, not just value. Design cached functions to use positional arguments wherever possible to avoid unintended cache misses from argument reordering.
Per the Python Software Foundation's functools documentation, all arguments to a cached function must be hashable because a dictionary backs the results store. Calls that differ only in keyword argument ordering are treated as separate cache entries even when the argument values are identical.
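Both behaviors are easy to observe with cache_info(). The sketch below (function names are illustrative) shows typed=True splitting equal-but-differently-typed arguments into separate entries, and keyword-argument order producing distinct cache keys:

```python
from functools import lru_cache

@lru_cache(maxsize=None, typed=True)
def tag(x):
    return f"{type(x).__name__}:{x}"

tag(3)
tag(3.0)  # equal value, different type: a second cache entry
assert tag.cache_info().misses == 2

@lru_cache(maxsize=None)
def area(w=1, h=1):
    return w * h

area(w=2, h=3)
area(h=3, w=2)  # same values, different keyword order: another miss
assert area.cache_info().misses == 2
```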
from functools import cache
@cache
def count_paths(rows: int, cols: int) -> int:
"""Count unique paths in a grid from top-left to bottom-right."""
if rows == 1 or cols == 1:
return 1
return count_paths(rows - 1, cols) + count_paths(rows, cols - 1)
print(count_paths(10, 10)) # 48620 — computed once, reused for all sub-problems
print(count_paths(20, 20)) # 35345263800
The cache is threadsafe — the underlying dictionary will remain coherent during concurrent updates. However, a wrapped function may be called more than once for the same arguments if a second thread makes an additional call before the first completes and caches its result. Design your cached functions to be idempotent.
Recursive algorithms that recalculate overlapping subproblems — Fibonacci, shortest paths, combinatorial counters — go from exponential to linear time with a single decorator. But in long-running services, an unbounded @cache on a function that takes user-supplied inputs can become a memory leak. The maxsize parameter on lru_cache is your pressure-release valve. Set it to the largest number of concurrent unique inputs you expect, not to None.
functools.cached_property — Instance-Level Lazy Computation
cached_property is lazy initialization as a first-class citizen. The value does not exist until someone asks for it. The moment someone does, it springs into existence and parks itself directly on the object — not on the class, not in a side-cache. The class no longer has any involvement in subsequent reads.
@functools.cached_property combines property-style access with one-time computation. On first access, it calls the method and writes the result to the instance's __dict__ under the same attribute name. Every subsequent access reads directly from the instance dictionary, bypassing the descriptor entirely. This makes it significantly cheaper than @property for expensive computations that do not change after initialization.
from functools import cached_property
import statistics
class SalesReport:
def __init__(self, transactions: list[float]):
self.transactions = transactions
@cached_property
def mean(self) -> float:
print("Computing mean...")
return statistics.mean(self.transactions)
@cached_property
def stdev(self) -> float:
print("Computing standard deviation...")
return statistics.stdev(self.transactions)
@cached_property
def summary(self) -> dict:
# Both mean and stdev are already cached when this runs
return {"mean": self.mean, "stdev": self.stdev, "n": len(self.transactions)}
data = [12.5, 14.3, 11.8, 15.0, 13.6, 12.9, 14.7]
report = SalesReport(data)
print(report.mean) # Computing mean... -> 13.542...
print(report.mean) # No recomputation — reads from instance dict
print(report.summary) # No recomputation for mean or stdev
To invalidate the cache for a specific instance, delete the attribute: del report.mean. The next access will recompute and re-cache. This is more targeted than class-level cache invalidation and suits scenarios where individual records need to be refreshed without affecting others.
cached_property requires that the instance's __dict__ attribute exists and is a mutable mapping. It will not work with classes that define __slots__ without explicitly including '__dict__' as one of the slots. Additionally, as of Python 3.12 cached_property is not thread-safe: the Python documentation notes that if multiple threads access an uncomputed property simultaneously, the method may be called more than once before the result is written to the instance dictionary. (Python 3.8 through 3.11 guarded first access with a class-wide lock, which serialized first accesses across all instances and was removed for performance.) For thread-safe lazy initialization in concurrent code, protect the first access with a threading.Lock or use a different caching strategy.
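One way to get thread-safe lazy initialization is a plain @property with double-checked locking — a sketch, not a drop-in cached_property replacement (class and attribute names are illustrative):

```python
import threading

class Catalog:
    def __init__(self, items: list[str]):
        self.items = items
        self._lock = threading.Lock()
        self._index = None  # sentinel: not yet computed

    @property
    def index(self) -> dict[str, int]:
        if self._index is None:            # fast path: no lock once computed
            with self._lock:
                if self._index is None:    # re-check under the lock
                    self._index = {name: i for i, name in enumerate(self.items)}
        return self._index

catalog = Catalog(["alpha", "beta", "gamma"])
print(catalog.index["beta"])  # 1 — built exactly once, even under contention
```

The first check avoids lock overhead on every read after initialization; the second check, inside the lock, ensures only one thread ever runs the computation.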
functools.singledispatch — Type-Based Function Dispatch
@functools.singledispatch (added in Python 3.4 via PEP 443) transforms a function into a generic function that dispatches to different implementations based on the type of the first argument. The base function serves as the fallback for any type without a registered implementation. Additional implementations are registered using the .register() decorator on the generic function object. Starting in Python 3.7, the .register() attribute supports using type annotations directly rather than passing the type as an argument. Python 3.11 further extended this to accept typing.Union as a type annotation, enabling registration across multiple types in a single declaration.
from functools import singledispatch
from datetime import date, datetime
from decimal import Decimal
@singledispatch
def serialize(value) -> str:
"""Fallback: convert to string representation."""
return str(value)
@serialize.register
def _(value: int) -> str:
return f"INT:{value}"
@serialize.register
def _(value: float) -> str:
return f"FLOAT:{value:.6f}"
@serialize.register
def _(value: Decimal) -> str:
return f"DECIMAL:{value:.10f}"
# Registration order does not matter here: dispatch walks the
# argument's MRO and picks the most specific registered type, so
# datetime values use the datetime implementation even though
# datetime subclasses date.
@serialize.register
def _(value: date) -> str:
return value.strftime("%Y-%m-%d")
@serialize.register  # datetime subclasses date; the exact-type match still wins
def _(value: datetime) -> str:
return value.isoformat()
@serialize.register(list)
def _(value) -> str:
return "[" + ", ".join(serialize(item) for item in value) + "]"
print(serialize(42)) # INT:42
print(serialize(3.14159)) # FLOAT:3.141590
print(serialize(Decimal("1.0000000001"))) # DECIMAL:1.0000000001
print(serialize(date(2026, 3, 29))) # 2026-03-29
print(serialize(datetime(2026, 3, 29, 9, 0))) # 2026-03-29T09:00:00
print(serialize([1, 2.5, date(2026, 1, 1)])) # [INT:1, FLOAT:2.500000, 2026-01-01]
Python resolves dispatch by walking the MRO (Method Resolution Order) of the argument's type. If no exact type match is found, it looks for a match on the nearest ancestor class. This means registering an implementation for numbers.Number will handle any numeric subtype that does not have its own registration. Introspecting registered implementations is straightforward via serialize.registry.
PEP 443 defines a generic function as one composed of multiple implementations for the same operation across different types. When implementation selection depends solely on the type of the first argument, that is single dispatch — and the dispatch algorithm selects the correct implementation at call time.
For dispatch on method arguments inside a class, use functools.singledispatchmethod (added in Python 3.8). It handles the implicit self argument correctly and integrates with @classmethod when needed.
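A minimal sketch of singledispatchmethod inside a class (names are illustrative). Dispatch keys off the first argument after self:

```python
from functools import singledispatchmethod

class Formatter:
    @singledispatchmethod
    def format(self, value) -> str:
        """Fallback for unregistered types."""
        return str(value)

    @format.register
    def _(self, value: int) -> str:
        return f"{value:,}"          # thousands separators for ints

    @format.register
    def _(self, value: float) -> str:
        return f"{value:.2f}"        # two decimal places for floats

fmt = Formatter()
print(fmt.format(1234567))  # 1,234,567
print(fmt.format(2.71828))  # 2.72
print(fmt.format("plain"))  # plain
```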
The open/closed principle says code should be open for extension but closed for modification. An isinstance chain violates this — adding a new type requires editing the original function. A singledispatch generic function is open for extension: any module can register a new implementation without touching the base. This is how serializers, formatters, and converters stay maintainable as a codebase grows.
functools.total_ordering — Generating Rich Comparison Methods
Implementing all six comparison operators (__eq__, __lt__, __le__, __gt__, __ge__, __ne__) for a custom class is repetitive. @functools.total_ordering fills in the missing methods from a minimum set. You provide __eq__ and one of __lt__, __le__, __gt__, or __ge__; the decorator derives the rest.
from functools import total_ordering
@total_ordering
class SemanticVersion:
def __init__(self, major: int, minor: int, patch: int):
self.major = major
self.minor = minor
self.patch = patch
def _key(self) -> tuple[int, int, int]:
return (self.major, self.minor, self.patch)
def __eq__(self, other) -> bool:
if not isinstance(other, SemanticVersion):
return NotImplemented
return self._key() == other._key()
def __lt__(self, other) -> bool:
if not isinstance(other, SemanticVersion):
return NotImplemented
return self._key() < other._key()
def __hash__(self) -> int:
# Defining __eq__ sets __hash__ to None; restore it explicitly
# so instances remain usable in sets and as dict keys.
return hash(self._key())
def __repr__(self) -> str:
return f"v{self.major}.{self.minor}.{self.patch}"
v1 = SemanticVersion(1, 9, 3)
v2 = SemanticVersion(2, 0, 0)
v3 = SemanticVersion(1, 9, 3)
print(v1 < v2) # True
print(v2 > v1) # True — derived by total_ordering
print(v1 <= v3) # True — derived by total_ordering
print(v1 >= v2) # False — derived by total_ordering
print(sorted([v2, v1, SemanticVersion(1, 10, 0)]))
# [v1.9.3, v1.10.0, v2.0.0]
The documentation notes that total_ordering does not override methods already declared in the class or its superclasses. Return NotImplemented (not False) when the other operand is not a recognized type — this allows Python to try the reflected operation on the right-hand operand before raising TypeError.
The Python documentation explicitly states that total_ordering comes at the cost of slower execution and more complex stack traces for the derived comparison methods. The derived methods call the ones you defined internally, which adds call overhead on every comparison. If performance benchmarking identifies comparison as a bottleneck, implementing all six rich comparison methods directly is the correct fix. The decorator is a convenience tool, not a zero-cost abstraction.
The functools documentation notes that while total_ordering makes it easy to create well-behaved ordered types, it does come at the cost of slower execution and more complex stack traces for the derived comparison methods. If performance benchmarking indicates this is a bottleneck, implementing all six rich comparison methods instead is likely to provide an easy speed boost.
functools.wraps — Preserving Function Identity
Every decorator that wraps a function should apply @functools.wraps(func) to the inner wrapper. Without it, the wrapper replaces the original's metadata (__name__, __doc__, and so on) with its own values. To be precise: functools.wraps is a convenience wrapper around functools.update_wrapper, which assigns the attributes listed in WRAPPER_ASSIGNMENTS directly and merges (updates) the wrapper's __dict__ with entries from the original's — it does not replace the wrapper's dictionary wholesale. The default WRAPPER_ASSIGNMENTS tuple covers __module__, __name__, __qualname__, __annotations__, and __doc__. Python 3.12 added __type_params__ to this tuple as part of the PEP 695 type parameter syntax implementation — code that introspects generic functions on 3.12+ will see this attribute propagated correctly only when @functools.wraps is present. Omitting @functools.wraps breaks introspection tools, documentation generators, debuggers, and type checkers.
from functools import wraps
def without_wraps(func):
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
def with_wraps(func):
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
@without_wraps
def greet_bad(name: str) -> str:
"""Return a greeting string."""
return f"Hello, {name}"
@with_wraps
def greet_good(name: str) -> str:
"""Return a greeting string."""
return f"Hello, {name}"
print(greet_bad.__name__) # wrapper
print(greet_bad.__doc__) # None
print(greet_good.__name__) # greet_good
print(greet_good.__doc__) # Return a greeting string.
print(greet_good.__wrapped__) # <function greet_good at 0x...>
@functools.wraps also sets __wrapped__ on the wrapper, giving direct access to the original unwrapped function. This is how tools like inspect.unwrap() traverse decorator chains, and how test frameworks can patch the underlying callable without removing the decorator.
According to the functools documentation, update_wrapper automatically sets a __wrapped__ attribute on the wrapper pointing to the original function. This enables introspection tools, and lets cached decorators like lru_cache be bypassed when direct access to the underlying callable is needed. As of Python 3.12, WRAPPER_ASSIGNMENTS also includes __type_params__, ensuring that generic functions using PEP 695 type parameter syntax propagate that attribute correctly through decorator chains.
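A quick sketch of cache bypass via __wrapped__ (the function name is illustrative):

```python
import inspect
from functools import lru_cache

@lru_cache(maxsize=32)
def square(n: int) -> int:
    return n * n

original = square.__wrapped__           # the undecorated function
assert inspect.unwrap(square) is original

original(7)                             # computed directly; cache untouched
assert square.cache_info().currsize == 0

square(7)                               # goes through the cache
assert square.cache_info().currsize == 1
```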
Quick Checks
Q: You decorate a recursive Fibonacci function with @lru_cache(maxsize=128) and call fib(40) twice. On the second call, what happens?
A: The lru_cache wrapper stores results in an internal dictionary keyed by the argument tuple. On the second call, the key (40,) is already present so the cached value is returned immediately — the recursive function body is never touched. Verify this by checking fib.cache_info().hits. Note that lru_cache has no effect on Python's recursion depth limit; caching actually makes deep recursion safer by eliminating redundant recursive calls — once a depth is computed, it is never visited again for the same input.
Q: A class defines __eq__ and __lt__ and is decorated with @total_ordering. Which methods does the decorator generate?
A: The decorator derives __le__, __gt__, and __ge__. total_ordering only fills in missing ordering comparisons: it does not generate __ne__ (Python 3 derives that automatically from __eq__), and it has no knowledge of __hash__ or __repr__ — those are entirely outside its scope. It also never overrides methods the class has already defined — if you had written __le__ yourself, the decorator would leave it alone; those you provided stay exactly as you wrote them.
Q: You omit @functools.wraps(func) from a custom decorator. What is the most immediate concrete consequence?
A: The inner wrapper function becomes the visible identity of the decorated function. Any code that inspects __name__ — logging formatters, stack traces, pytest output, Sphinx autodoc — will see 'wrapper' instead of the original name, and the docstring is gone entirely. Caching is unaffected: lru_cache keys its cache on the argument tuple, not on function identity metadata. The real impact is on introspection: __name__, __doc__, __module__, __qualname__, and __annotations__ all report the wrapper's values instead of the original's.
Context Management and Resource Control
The contextlib module provides decorators that bridge the gap between context managers and function decorators, enabling clean resource management patterns without writing full __enter__/__exit__ classes.
contextlib.contextmanager — Generator-Based Context Managers
@contextlib.contextmanager converts a generator function into a context manager. Everything before the single yield statement executes as __enter__; everything after it executes as __exit__ — put the teardown in a try/finally around the yield so it runs even when the with body raises. The yielded value becomes the target of the as clause in the with statement.
from contextlib import contextmanager
import os
import shutil
import tempfile
@contextmanager
def temp_directory():
"""Create a temporary directory and clean it up on exit."""
tmpdir = tempfile.mkdtemp()
try:
yield tmpdir
finally:
shutil.rmtree(tmpdir, ignore_errors=True)
@contextmanager
def patched_env(key: str, value: str):
"""Temporarily set an environment variable."""
original = os.environ.get(key)
os.environ[key] = value
try:
yield
finally:
if original is None:
os.environ.pop(key, None)
else:
os.environ[key] = original
# Usage as a context manager
with temp_directory() as tmpdir:
path = os.path.join(tmpdir, "output.txt")
with open(path, "w") as f:
f.write("temporary data")
print(os.path.exists(tmpdir)) # True
print(os.path.exists(tmpdir)) # False — cleaned up
# Usage as a function decorator — every call gets its own context
with patched_env("APP_ENV", "testing"):
print(os.environ["APP_ENV"]) # testing
print(os.environ.get("APP_ENV")) # None (or original value)
Because contextmanager builds on ContextDecorator, the resulting context manager can also be used directly as a function decorator using the @ctx_manager() syntax. A new generator instance is created on each function call, so the context manager remains reusable across multiple invocations.
from contextlib import contextmanager
import time
@contextmanager
def timed_block(label: str):
start = time.perf_counter()
try:
yield
finally:
elapsed = time.perf_counter() - start
print(f"[{label}] completed in {elapsed:.4f}s")
# As a context manager
with timed_block("matrix multiply"):
result = sum(i * j for i in range(500) for j in range(500))
# As a decorator — note the call syntax ()
@timed_block("sort benchmark")
def run_sort():
data = list(range(10_000, 0, -1))
data.sort()
run_sort() # [sort benchmark] completed in 0.0008s
contextlib.asynccontextmanager — Async Resource Management
For async code, @contextlib.asynccontextmanager does the same job with an asynchronous generator. It was added in Python 3.7. Support for using the resulting context manager as a function decorator (via the @ctx_manager() syntax) was added in Python 3.10. The decorated function must be an async def generator containing exactly one yield.
import asyncio
from contextlib import asynccontextmanager
@asynccontextmanager
async def managed_connection(host: str, port: int):
"""Simulate acquiring and releasing an async database connection."""
print(f"Connecting to {host}:{port}...")
conn = {"host": host, "port": port, "active": True}
try:
yield conn
finally:
conn["active"] = False
print(f"Connection to {host}:{port} closed.")
async def fetch_users():
async with managed_connection("db.internal", 5432) as conn:
print(f"Running query on {conn['host']}")
await asyncio.sleep(0.01) # Simulate I/O
return ["alice", "bob", "charlie"]
asyncio.run(fetch_users())
# Connecting to db.internal:5432...
# Running query on db.internal
# Connection to db.internal:5432 closed.
Comparison: Decorator Categories and Use Cases
lru_cache: Bounded memoization; evicts the least recently used entry when maxsize is reached. Set maxsize to a power of two for best performance.
cache: Unbounded memoization; faster than lru_cache(maxsize=None) because it skips LRU tracking overhead. Added in Python 3.9.
cached_property: One-time per-instance computation; writes the result to the instance __dict__ on first access; subsequent reads bypass the descriptor entirely. Requires __dict__ — incompatible with bare __slots__.
singledispatch: Type-based dispatch that replaces isinstance chains. Register implementations per type; dispatch resolves via MRO. Use @singledispatchmethod inside classes.
total_ordering: Derives __le__, __gt__, and __ge__ from a class that defines __eq__ plus one other comparison method. Carries a performance cost — derived methods add call overhead.
wraps: Preserves function metadata (__name__, __doc__, __annotations__) in custom decorators. Sets __wrapped__ so inspect.unwrap() can traverse the decorator chain.
contextmanager: Generator-based context managers; code before the yield is setup, code in the finally after it is teardown. Can also be used as a function decorator.
asynccontextmanager: Async resource management for async with statements. Added in Python 3.7; decorator support added in Python 3.10.
Custom Utility Decorator Patterns
Standard library decorators cover general-purpose needs, but production codebases frequently require custom decorators for cross-cutting concerns such as retry logic, rate limiting, input validation, timing instrumentation, and thread-safe execution guards. The patterns below are written with @functools.wraps throughout and designed to be composable. Each one also identifies the production-grade extensions that are commonly skipped in simplified treatments — circuit breakers, distributed rate limiting, generic-aware type enforcement, idempotency guards, and deadline propagation — because understanding why those extensions exist matters as much as the pattern itself.
Parameterized Retry with Exponential Backoff
Network calls, file I/O operations, and external API requests fail intermittently. A retry decorator with configurable attempts, delay, and backoff factor handles transient failures without cluttering business logic.
import json
import logging
import time
import urllib.request
from functools import wraps, lru_cache
logger = logging.getLogger(__name__)
def retry(
max_attempts: int = 3,
delay: float = 1.0,
backoff: float = 2.0,
exceptions: tuple[type[BaseException], ...] = (Exception,)
):
"""Retry a function on specified exceptions with exponential backoff."""
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
current_delay = delay
for attempt in range(1, max_attempts + 1):
try:
return func(*args, **kwargs)
except exceptions as exc:
if attempt == max_attempts:
logger.error(
"%s failed after %d attempts: %s",
func.__name__, max_attempts, exc
)
raise
logger.warning(
"%s attempt %d/%d failed: %s. Retrying in %.1fs...",
func.__name__, attempt, max_attempts, exc, current_delay
)
time.sleep(current_delay)
current_delay *= backoff
return wrapper
return decorator
# Usage — only retries on connection-related errors
@retry(max_attempts=4, delay=0.5, backoff=2.0, exceptions=(ConnectionError, TimeoutError))
def fetch_config(endpoint: str) -> dict:
with urllib.request.urlopen(endpoint, timeout=5) as resp:
return json.loads(resp.read())
# Stacking with other decorators
@retry(max_attempts=3, delay=1.0)
@lru_cache(maxsize=64)
def get_user_profile(user_id: int) -> dict:
# Cached results never trigger retry; only uncached calls can fail
return {"id": user_id, "name": "example"}
The retry decorator above addresses transient failures — a dependency that occasionally hiccups but recovers. It does not address a dependency that has entered a sustained failure state. In that case, retrying aggressively amplifies load on an already-stressed service. The production extension is a circuit breaker: a stateful decorator that tracks consecutive failure counts and, above a threshold, enters an open state where calls are rejected immediately without attempting the underlying function. After a configurable cool-off period, it enters a half-open state, allowing a probe call. If that succeeds, the circuit closes. Implementing this correctly requires a shared state object (across calls), monotonic time tracking, and thread-safe state transitions. Libraries like pybreaker and tenacity provide this; if building from scratch, the state machine is the complexity, not the decorator shell.
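The state machine can be sketched in a few dozen lines. This is a minimal in-process version under simplifying assumptions (no per-exception policy, no metrics; names like CircuitOpenError are illustrative), not a substitute for pybreaker:

```python
import threading
import time
from functools import wraps

class CircuitOpenError(RuntimeError):
    pass

def circuit_breaker(failure_threshold: int = 5, reset_timeout: float = 30.0):
    """Open after `failure_threshold` consecutive failures; after
    `reset_timeout` seconds allow a single probe call (half-open).
    A successful probe closes the circuit again."""
    def decorator(func):
        lock = threading.Lock()
        state = {"failures": 0, "opened_at": None}

        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                opened_at = state["opened_at"]
                if opened_at is not None:
                    if time.monotonic() - opened_at < reset_timeout:
                        raise CircuitOpenError(f"{func.__name__}: circuit open")
                    # Cool-off elapsed: fall through as a half-open probe.
            try:
                result = func(*args, **kwargs)
            except Exception:
                with lock:
                    state["failures"] += 1
                    if state["failures"] >= failure_threshold:
                        state["opened_at"] = time.monotonic()
                raise
            else:
                with lock:
                    state["failures"] = 0
                    state["opened_at"] = None
                return result
        return wrapper
    return decorator
```

Note the use of time.monotonic() for state transitions — wall-clock time can jump backward and reopen or close the circuit spuriously.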
Rate Limiter Decorator
Rate limiting is a common requirement when calling external APIs or protecting shared resources. The decorator below uses a token bucket approach: calls are allowed up to calls times per period seconds, with excess calls blocked until a slot is available.
import time
import threading
from collections import deque
from functools import wraps
def rate_limit(calls: int, period: float):
"""Allow at most `calls` invocations per `period` seconds."""
def decorator(func):
lock = threading.Lock()
call_times: deque[float] = deque()
@wraps(func)
def wrapper(*args, **kwargs):
while True:
with lock:
now = time.monotonic()
# Remove timestamps outside the current window
while call_times and call_times[0] <= now - period:
call_times.popleft()
if len(call_times) < calls:
call_times.append(now)
break
# Compute wait time, then release the lock before sleeping
# so other threads are not blocked during the delay.
sleep_for = max(0.0, period - (now - call_times[0]))
time.sleep(sleep_for)
return func(*args, **kwargs)
return wrapper
return decorator
@rate_limit(calls=5, period=1.0)
def call_api(endpoint: str) -> str:
return f"response from {endpoint}"
# Will process 5 calls immediately, then wait before the 6th
for i in range(8):
result = call_api(f"/api/resource/{i}")
print(f"Call {i + 1}: {result}")
The sliding-window implementation above is in-process: the deque of timestamps lives in memory on one machine and is not shared across instances. In a horizontally scaled service, each process enforces its own limit independently — meaning ten processes each allowing five calls per second collectively produce fifty calls per second to the upstream API. The production extension is a distributed rate limiter backed by a shared store, typically Redis, using atomic Lua scripts or the INCR/EXPIRE approach to enforce a global window. A further refinement is per-key rate limiting: isolating limits by caller ID, tenant, or endpoint so one aggressive caller cannot exhaust the budget for others. The decorator shell stays identical; the storage backend changes from a local deque to a Redis client operation, with the same lock-and-sleep logic wrapped around it.
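The per-key variant can be sketched in-process before moving the storage to Redis. This version rejects over-limit calls instead of blocking, so one aggressive key cannot starve the others (RateLimitExceeded and the default key function are illustrative choices):

```python
import threading
import time
from collections import defaultdict, deque
from functools import wraps

class RateLimitExceeded(RuntimeError):
    pass

def rate_limit_per_key(calls: int, period: float, key_fn=lambda *a, **k: a[0]):
    """Sliding window per key; in-process only (not distributed)."""
    def decorator(func):
        lock = threading.Lock()
        windows = defaultdict(deque)  # key -> timestamps inside the window

        @wraps(func)
        def wrapper(*args, **kwargs):
            key = key_fn(*args, **kwargs)
            now = time.monotonic()
            with lock:
                window = windows[key]
                while window and window[0] <= now - period:
                    window.popleft()      # expire old timestamps
                if len(window) >= calls:
                    raise RateLimitExceeded(f"rate limit hit for key {key!r}")
                window.append(now)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rate_limit_per_key(calls=2, period=60.0)
def query(tenant_id: str) -> str:
    return f"result for {tenant_id}"

query("acme"); query("acme")   # within budget for key "acme"
query("globex")                # separate budget: allowed
# A third query("acme") inside the window would raise RateLimitExceeded
```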
Type Enforcement Decorator
Python's type hints are annotations, not runtime constraints. A type enforcement decorator bridges that gap — it inspects the function signature at decoration time and validates argument types on every call, raising TypeError with a descriptive message when a mismatch is detected.
import inspect
from functools import wraps
from typing import Any, Callable, TypeVar, get_type_hints
F = TypeVar("F", bound=Callable[..., Any])
def enforce_types(func: F) -> F:
"""Validate argument types against the function's annotations at call time."""
sig = inspect.signature(func)
# get_type_hints() resolves string annotations (PEP 563 / from __future__ import annotations)
# and forward references that func.__annotations__ would leave as raw strings.
try:
hints = get_type_hints(func)
except Exception:
hints = func.__annotations__
@wraps(func)
def wrapper(*args, **kwargs):
bound = sig.bind(*args, **kwargs)
bound.apply_defaults()
for param_name, value in bound.arguments.items():
if param_name in hints and param_name != "return":
expected = hints[param_name]
# Skip parameterized generics (e.g. list[float], Optional[str])
# and Union types — isinstance() raises TypeError for these.
if not isinstance(expected, type):
continue
if not isinstance(value, expected):
raise TypeError(
f"{func.__name__}() argument '{param_name}' "
f"must be {expected.__name__}, got {type(value).__name__}"
)
return func(*args, **kwargs)
return wrapper # type: ignore[return-value]
@enforce_types
def calculate_discount(price: float, rate: float, label: str) -> float:
"""Apply a discount rate to a price."""
return price * (1.0 - rate)
print(calculate_discount(99.99, 0.15, "member")) # 84.99149999999999
try:
calculate_discount("free", 0.10, "promo")
except TypeError as e:
print(e)
# calculate_discount() argument 'price' must be float, got str
This pattern works well for boundary enforcement in library APIs and data pipelines. For full runtime type checking in large codebases, consider beartype or typeguard, which handle union types, generics, and Optional transparently and with better performance.
The enforce_types implementation above intentionally skips parameterized generics — list[float], Optional[str], Union[int, str] — because isinstance() cannot handle them. A production-grade extension would use typing.get_origin() and typing.get_args() to unpack parameterized generics and recursively check container contents. This is what beartype does at decoration time (not call time), making it significantly faster. A separate but related pattern is the idempotency decorator: it hashes the function's arguments, records which calls have already succeeded in a persistent store, and skips re-execution for repeated calls with the same inputs. This is critical for payment processing, infrastructure provisioning, and any operation where duplicate execution causes data corruption. The decorator signature is identical to a cache decorator, but the backend is a write-ahead log rather than an in-memory dictionary, and eviction is based on explicit acknowledgment rather than LRU policy.
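As a sketch of the get_origin()/get_args() approach, the helper below (hypothetical, handling only a few common generic forms) shows the recursive unpacking that a full checker would generalize:

```python
from typing import Optional, Union, get_args, get_origin

def check_type(value, expected) -> bool:
    """Structural check for a handful of parameterized generics (sketch)."""
    origin = get_origin(expected)
    if origin is None:                       # plain class: int, str, ...
        return isinstance(value, expected)
    if origin is Union:                      # Union[int, str], Optional[...]
        return any(check_type(value, arg) for arg in get_args(expected))
    if origin in (list, set, frozenset):
        item_type = (get_args(expected) or (object,))[0]
        return isinstance(value, origin) and all(
            check_type(item, item_type) for item in value
        )
    if origin is dict:
        key_type, val_type = get_args(expected) or (object, object)
        return isinstance(value, dict) and all(
            check_type(k, key_type) and check_type(v, val_type)
            for k, v in value.items()
        )
    return isinstance(value, origin)         # fallback: check the container only

assert check_type([1.0, 2.5], list[float])
assert not check_type([1.0, "x"], list[float])
assert check_type(None, Optional[str])
assert check_type({"a": 1}, dict[str, int])
```

Note that tuple[int, str] (per-position element types) deliberately falls through to the container-only check here; handling it correctly is one of the many cases that make libraries like beartype worth using over a hand-rolled checker.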
Class-Based Stateful Decorators
When a decorator needs to carry state across calls — call counts, cumulative timing, call history — a class-based implementation is cleaner than a closure with mutable variables. The class implements __init__ to store the function and __call__ to act as the wrapper. Using try/finally in __call__ ensures timing is recorded even when the wrapped function raises an exception — only successful calls increment call_count, but all calls contribute to total_time.
import time
from functools import update_wrapper

class Profiler:
    """Track cumulative call count and total execution time.

    Timing is recorded via try/finally so partial runs caused by exceptions
    still contribute to total_time. Only successful calls increment call_count.
    """

    def __init__(self, func):
        update_wrapper(self, func)
        self.func = func
        self.call_count = 0
        self.total_time = 0.0

    def __call__(self, *args, **kwargs):
        start = time.perf_counter()
        try:
            result = self.func(*args, **kwargs)
            self.call_count += 1
            return result
        finally:
            # Always record elapsed time, even if the function raised.
            self.total_time += time.perf_counter() - start

    @property
    def average_time(self) -> float:
        if self.call_count == 0:
            return 0.0
        return self.total_time / self.call_count

    def stats(self) -> dict[str, object]:
        return {
            "function": self.func.__name__,
            "calls": self.call_count,
            "total_s": round(self.total_time, 6),
            "avg_s": round(self.average_time, 6),
        }

@Profiler
def process_record(record: dict) -> dict:
    """Simulate record processing with variable work."""
    time.sleep(0.001)
    return {k: str(v).upper() for k, v in record.items()}

records = [{"id": i, "name": f"user_{i}"} for i in range(20)]
for r in records:
    process_record(r)

print(process_record.stats())
# {'function': 'process_record', 'calls': 20, 'total_s': 0.02..., 'avg_s': 0.001...}
The Profiler class above collects timing data in memory on a single instance. In production, that data needs to reach a metrics backend — Prometheus, Datadog, OpenTelemetry — not just a local dictionary. The extension is a decorator that emits structured spans or histogram observations directly, using the function name, argument cardinality, and outcome (success vs. exception type) as metric labels.

A deeper concern is deadline propagation: rather than simply measuring elapsed time, a deadline-aware decorator accepts a deadline timestamp from the calling context (often via a thread-local or contextvars.ContextVar) and raises a DeadlineExceeded error before invoking the function if the remaining budget is already exhausted. This prevents cascading latency — a pattern that is standard in gRPC service implementations and increasingly common in async Python services using asyncio.timeout() (added in Python 3.11).
Universal Async/Sync Timer
A decorator that works on both synchronous and asynchronous functions requires runtime detection of whether the wrapped callable is a coroutine function. Using inspect.iscoroutinefunction() at decoration time, the factory returns the correct wrapper variant. asyncio.iscoroutinefunction() was deprecated in Python 3.14 (CPython issue gh-122875, contributed by Jiahao Li and Kumar Aditya) and is scheduled for removal in Python 3.16; inspect.iscoroutinefunction() has been the canonical form since Python 3.5 and is the correct choice for all new code.
import asyncio
import inspect
import time
from functools import wraps

def timer(func):
    """Measure and report execution time for sync and async functions."""
    # inspect.iscoroutinefunction is the canonical form; asyncio.iscoroutinefunction
    # was deprecated in Python 3.14 and is scheduled for removal in Python 3.16.
    if inspect.iscoroutinefunction(func):
        @wraps(func)
        async def async_wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = await func(*args, **kwargs)
            print(f"[async] {func.__name__} -> {time.perf_counter() - start:.4f}s")
            return result
        return async_wrapper
    else:
        @wraps(func)
        def sync_wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            print(f"[sync] {func.__name__} -> {time.perf_counter() - start:.4f}s")
            return result
        return sync_wrapper

@timer
def compute_sum(limit: int) -> int:
    return sum(range(limit))

@timer
async def fetch_data(url: str) -> str:
    await asyncio.sleep(0.05)
    return f"data from {url}"

compute_sum(10_000_000)  # [sync] compute_sum -> 0.2133s
asyncio.run(fetch_data("https://api.example.com/data"))  # [async] fetch_data -> 0.0501s
Decorator Stacking Order
When multiple decorators are stacked on a function, they apply from innermost (closest to the function) to outermost (furthest from the function) during decoration, but execute from outermost to innermost at call time. Understanding this order prevents subtle bugs when combining decorators like @retry and @timer. For a complete reference on Python decorator stacking order including edge cases and common pitfalls, see the dedicated guide.
# Decoration order (bottom-up): enforce_types applied first, then timer, then retry
# Call order (top-down): retry wraps timer wraps enforce_types wraps the function
import random

@retry(max_attempts=3, delay=0.1, backoff=2.0)
@timer
@enforce_types
def unstable_compute(value: float, scale: float) -> float:
    if random.random() < 0.4:
        raise ConnectionError("simulated transient failure")
    return value * scale

# retry sees the timer-wrapped version
# timer sees the enforce_types-wrapped version
# enforce_types sees the original function
# Equivalent to:
# unstable_compute = retry(max_attempts=3, delay=0.1, backoff=2.0)(timer(enforce_types(unstable_compute)))
A decorated function shows up as 'wrapper' in log output and stack traces. What is wrong?

import time

def retry(max_attempts=3, delay=1.0):
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(max_attempts=4, delay=0.5)
def fetch_data(url: str) -> dict:
    """Fetch JSON from the given URL."""
    import urllib.request, json
    with urllib.request.urlopen(url) as r:
        return json.loads(r.read())
The cause: @functools.wraps(func) is missing from the inner wrapper. Without it, the wrapper function's own name and empty docstring replace the original function's metadata. Every log line, stack trace, monitoring dashboard, and Sphinx autodoc page that reads __name__ will show 'wrapper' instead of 'fetch_data' — making production debugging significantly harder and documentation meaningless.

The fix: add @functools.wraps(func) to every inner wrapper in every decorator you write. It costs one line and pays for itself the first time you need to read a stack trace.
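With the one-line fix applied, the metadata survives. A sketch of the corrected decorator (the stub function body is illustrative; the original example performed real network I/O):

```python
import functools
import time

def retry(max_attempts=3, delay=1.0):
    def decorator(func):
        @functools.wraps(func)   # the one-line fix
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(max_attempts=2, delay=0.0)
def fetch_data(url: str) -> dict:
    """Fetch JSON from the given URL."""
    # Illustrative body: returns a stub instead of performing real I/O.
    return {"url": url}

# Metadata now reflects the original function, not the wrapper.
print(fetch_data.__name__)              # fetch_data
print(fetch_data.__doc__)               # Fetch JSON from the given URL.
print(fetch_data.__wrapped__.__name__)  # fetch_data
```

The __wrapped__ attribute, set automatically by wraps, is what lets inspect.unwrap() and debugging tools reach through the decorator to the original function.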
cached_property is supposed to cache an expensive computation per instance, but the first attribute access fails with a TypeError complaining about a missing __dict__. What is the structural cause?

from functools import cached_property

class DataPipeline:
    __slots__ = ('_data',)

    def __init__(self, data: list[float]):
        self._data = data

    @cached_property
    def summary(self) -> dict:
        print("computing summary...")
        return {
            'count': len(self._data),
            'total': sum(self._data),
            'mean': sum(self._data) / len(self._data),
        }

pipeline = DataPipeline([1.0, 2.0, 3.0])
print(pipeline.summary)
# TypeError: No '__dict__' attribute on 'DataPipeline' instance to cache 'summary' property.

The cause: __slots__ is defined without including '__dict__'. cached_property works by writing the computed result directly into the instance's __dict__ on first access. When __slots__ is declared without '__dict__', the instance has no __dict__ at all, so the descriptor raises TypeError instead of caching.

The fix: adding '__dict__' to the slot list restores the mutable mapping that cached_property requires. If you need the memory savings of __slots__ without __dict__, drop cached_property entirely and use a regular method with a manual cache attribute stored in a named slot.
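The slot-backed alternative mentioned in the fix can be sketched as follows. The _summary slot name and the None sentinel are illustrative choices; a computation that can legitimately return None would need a distinct sentinel object:

```python
class DataPipeline:
    # Slot-only class: lean instances, but no __dict__ for cached_property.
    __slots__ = ('_data', '_summary')

    def __init__(self, data: list[float]):
        self._data = data
        self._summary = None        # sentinel: not yet computed

    def summary(self) -> dict:
        if self._summary is None:   # compute exactly once
            self._summary = {
                'count': len(self._data),
                'total': sum(self._data),
                'mean': sum(self._data) / len(self._data),
            }
        return self._summary

pipeline = DataPipeline([1.0, 2.0, 3.0])
print(pipeline.summary())                        # computed on first call
print(pipeline.summary() is pipeline.summary())  # True: cached dict reused
```

The trade-off is explicit: summary becomes a method call rather than an attribute, but the instance keeps the fixed memory layout that motivated __slots__ in the first place.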
A singledispatch function registers an int handler, and the author expects True to fall through to the generic fallback, but the output for True is always "integer: True". What is happening?

from functools import singledispatch

@singledispatch
def process(value):
    return f"unknown: {value}"

@process.register
def _(value: int) -> str:
    return f"integer: {value}"

print(process(True))  # expected: unknown: True
# actual: integer: True

The cause: bool is a subclass of int. When singledispatch resolves dispatch for True and finds no exact registration for bool, it walks the MRO [bool, int, object], finds int registered, and selects the int handler.

The fix: register a bool-specific implementation with @process.register. An exact registration for a type always takes precedence over an MRO-based match, regardless of registration order. You can verify the resolution with process.dispatch(bool) — it returns the function that will be invoked, making it easy to catch this mistake before it reaches production.
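A self-contained sketch of the corrected registration, with a dispatch check at the end:

```python
from functools import singledispatch

@singledispatch
def process(value):
    return f"unknown: {value}"

@process.register
def _(value: int) -> str:
    return f"integer: {value}"

@process.register
def _(value: bool) -> str:
    return f"boolean: {value}"

# An exact registration beats an MRO-based match, so True now reaches
# the bool handler even though int is also registered.
print(process(True))   # boolean: True
print(process(7))      # integer: 7

# dispatch() returns the implementation that will run, without calling it.
assert process.dispatch(bool) is not process.dispatch(int)
```

The dispatch() check makes a good unit-test assertion: it pins down which handler each type resolves to without exercising the handler bodies.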
How to Use Python Specialized Utility Decorators
The following steps cover the practical application sequence for the decorators in this article. Each step corresponds to a distinct use case you will encounter in real Python codebases.
- Apply @lru_cache to a pure function with hashable arguments. Import lru_cache from functools and decorate any pure function whose inputs are hashable. Set maxsize to a power of two for optimal performance on long-running services, or use @cache when the input space is finite and known. After running the function, call .cache_info() to inspect hit and miss counts, and .cache_parameters() (Python 3.9+) to verify the active maxsize and typed settings programmatically.
- Use @cached_property for expensive per-instance computed values. Import cached_property from functools. Define a zero-argument method (beyond self) and decorate it. The result is computed once on first access and written to the instance's __dict__. If the class uses __slots__, include '__dict__' in the slot list or the first access will fail with a TypeError.
- Convert isinstance chains to @singledispatch. Import singledispatch from functools and decorate the base fallback function. Register type-specific implementations using @func.register with type annotations on the first argument. Register subclasses explicitly when they must behave differently from their base type (bool versus int, for example); an exact registration always takes precedence over an MRO-based match.
- Generate comparison methods with @total_ordering. Import total_ordering from functools and decorate the class. Provide __eq__ and at least one ordering method. The decorator derives the remaining comparison methods automatically. Return NotImplemented rather than False for unrecognized operand types. Be aware that total_ordering carries a performance cost: the derived methods add call overhead on each comparison. If profiling shows comparisons as a bottleneck, implement all six methods directly.
- Apply @functools.wraps inside every custom decorator. Place @functools.wraps(func) on the inner wrapper function in every decorator you write. This preserves __name__, __doc__, __module__, __qualname__, and __annotations__ from the original function, and sets __wrapped__ for introspection tools.
- Create generator-based context managers with @contextmanager. Import contextmanager from contextlib. Write a generator function with setup code before the single yield and cleanup code in a finally block after it. The yielded value becomes the target of the as clause. For coroutine-based code, use @asynccontextmanager instead.
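The last step can be sketched with a hypothetical timed_section manager; the stats dictionary and the "load" label are illustrative:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed_section(label: str):
    """Yield a mutable dict and record elapsed time into it on exit."""
    stats = {"label": label, "elapsed": None}
    start = time.perf_counter()
    try:
        yield stats              # this value is bound by the `as` clause
    finally:
        # Cleanup runs even if the body raises.
        stats["elapsed"] = time.perf_counter() - start

with timed_section("load") as stats:
    time.sleep(0.01)             # simulated work inside the managed block

print(stats["label"])            # load
print(stats["elapsed"] > 0)      # True
```

Because the cleanup sits in a finally block, the elapsed time is recorded even when the body raises, mirroring the try/finally discipline used in the Profiler class earlier.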
Frequently Asked Questions
What is the difference between functools.lru_cache and functools.cache?
functools.lru_cache (added in Python 3.2) stores up to a configurable maxsize of recent results and evicts the least recently used entry when capacity is reached. functools.cache (added in Python 3.9) is an unbounded cache equivalent to lru_cache(maxsize=None), but faster because it skips the LRU tracking overhead. Use lru_cache with a maxsize for long-running processes where memory must stay bounded; use cache for finite computation problems like dynamic programming.

When should a computed value use cached_property instead of property?
Use cached_property (added in Python 3.8) when the computed value is expensive and does not change after the instance is initialized. Unlike @property, which recomputes on every access, cached_property stores the result in the instance's __dict__ on first access, making every subsequent read a simple dictionary lookup. It does not work with classes that use __slots__ without including '__dict__' as a slot, and it is not thread-safe.

What is functools.singledispatch and when should I use it?
functools.singledispatch (added in Python 3.4 via PEP 443) transforms a function into a generic function that selects an implementation based on the type of the first argument. It is the preferred alternative to long isinstance() chains when you need different behavior per type, especially when implementations need to be added from outside the original module. For method dispatch inside a class, use functools.singledispatchmethod (added in Python 3.8).

Why does every custom decorator need @functools.wraps?
Without @functools.wraps, the wrapper function overwrites the original function's __name__, __doc__, __module__, __qualname__, and __annotations__ attributes. This breaks documentation generators, test framework introspection, type checkers, and any code that relies on function metadata. @functools.wraps also sets the __wrapped__ attribute on the wrapper, allowing tools like inspect.unwrap() to traverse the decorator chain and access the underlying function directly.

What is the difference between contextmanager and asynccontextmanager?
contextlib.contextmanager converts a regular generator function with a single yield into a synchronous context manager. contextlib.asynccontextmanager (added in Python 3.7) does the same for async generator functions, creating asynchronous context managers suitable for use in async with statements. Both can be used as function decorators: contextmanager via ContextDecorator (since Python 3.2), asynccontextmanager since Python 3.10.
What does functools.total_ordering do?
functools.total_ordering is a class decorator that automatically generates the missing rich comparison methods from a class that defines __eq__ and at least one of __lt__, __le__, __gt__, or __ge__. This eliminates writing all six comparison methods by hand. The decorator does not override comparison methods already defined in the class or its superclasses.
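The derivation described in this answer can be sketched with a hypothetical Version class that writes only __eq__ and __lt__:

```python
from functools import total_ordering

@total_ordering
class Version:
    """Orderable (major, minor) version; only __eq__ and __lt__ are written."""
    def __init__(self, major: int, minor: int):
        self.major, self.minor = major, minor

    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented   # let the other operand try the comparison
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return (self.major, self.minor) < (other.major, other.minor)

# __le__, __gt__, and __ge__ are derived automatically by the decorator.
print(Version(1, 2) <= Version(1, 3))  # True
print(Version(2, 0) > Version(1, 9))   # True
```

Returning NotImplemented (not False) for foreign types matters: it gives the other operand's reflected method a chance to run before Python falls back to its default behavior.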
In what order do stacked decorators execute?
When multiple decorators are stacked, they are applied from innermost (closest to the function definition) to outermost during decoration. At call time, execution flows from outermost to innermost. With @retry on top, @timer in the middle, and @enforce_types closest to the function, retry wraps the timer-wrapped version, which wraps the enforce_types-wrapped version of the original function. This is equivalent to: func = retry()(timer(enforce_types(func))).
Key Takeaways
- Match the caching decorator to the data lifetime: use @lru_cache(maxsize=N) for long-running processes where memory must be bounded, @cache for finite computation trees, and @cached_property for per-instance computed attributes that do not change after first access.
- Prefer @singledispatch over isinstance chains: type dispatch via singledispatch is more maintainable, follows the open/closed principle, and allows extension from outside the original module without modifying the base function.
- Always apply @functools.wraps in custom decorators: omitting it silently corrupts function metadata, breaks introspection, and causes hard-to-trace failures in documentation generators, test frameworks, and type checkers. It costs nothing to include.
- Use @contextmanager for simple resource management; reach for a class when managing complex state: generator-based context managers are concise for setup/teardown with minimal branching. When __exit__ logic becomes complex or the context manager needs to store mutable state across uses, a class with explicit methods is clearer.
- Design custom utility decorators with production concerns in mind from the start: thread safety (rate limiter), observable state (profiler), and clean stacking behavior (sync/async timer) should be first-class requirements, not afterthoughts.
Decorators are at their strongest when they encapsulate a single, well-defined cross-cutting concern — caching, validation, retry, timing — and leave the decorated function free to express pure business logic. The standard library's utility decorators embody this principle, and well-designed custom decorators follow the same contract: transparent, composable, and respectful of the original function's identity through @functools.wraps.