Understanding what a decorator is and knowing where to use one are two different skills. The @decorator syntax was introduced in Python 2.4 via PEP 318 as a cleaner way to apply transformations to functions at the point of definition. This article covers ten practical decorator patterns that solve real problems in Python codebases: measuring performance, logging function calls, caching expensive computations, validating inputs, limiting call frequency, enforcing permissions, retrying failures, restricting instantiation, flagging deprecated code, and tracing execution flow. Each example includes a complete, copy-ready implementation.
"The current method … places the actual transformation after the function body."
— PEP 318, Python Software Foundation — the original problem decorator syntax solved
Every decorator in this article uses functools.wraps to preserve the original function's metadata. Without it, the decorated function loses its __name__, __qualname__, __doc__, __module__, and __annotations__, and they are replaced by those of the wrapper — breaking debuggers, documentation generators, and introspection tools. functools.wraps also adds a __wrapped__ attribute pointing back to the original function, which enables stack unwrapping and cache bypassing (Python docs: functools). Simple decorators use two nesting levels: an outer function receives the target function, and a wrapper replaces it. Parameterized decorators add a third level: the outermost function captures configuration and returns the actual decorator. The differences between all ten examples are in what the wrapper does before, after, or around the original call — the skeleton is identical.
"Without the use of this decorator factory, the name … would have been 'wrapper'."
— Python docs: functools.wraps, Python Software Foundation
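The effect is easy to verify interactively. A minimal sketch (the passthrough decorator and greet function are illustrative, not from the examples below):

```python
import functools

def passthrough(func):
    """Apply no behavior change; exists only to show what wraps preserves."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@passthrough
def greet(name):
    """Return a greeting."""
    return f"Hello, {name}"

print(greet.__name__)     # greet (not 'wrapper')
print(greet.__doc__)      # Return a greeting.
print(greet.__wrapped__)  # the original, undecorated greet function
```

Comment out the @functools.wraps(func) line and the first two prints change to 'wrapper' and None.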
1. Execution Timing
Measuring how long a function takes to run is the single simplest useful decorator. It wraps the call with a start timestamp, executes the function, computes the elapsed time, and reports it. This is valuable during development for identifying bottlenecks and in production for feeding performance metrics to monitoring systems.
import time
import functools

def timer(func):
    """Log execution time of the decorated function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} completed in {elapsed:.4f}s")
        return result
    return wrapper

@timer
def parse_logfile(path):
    with open(path) as f:
        return [line.strip() for line in f if "ERROR" in line]

errors = parse_logfile("/var/log/app.log")
# parse_logfile completed in 0.0312s
time.perf_counter() is preferred over time.time() because it uses the system's highest-resolution performance counter clock, is monotonic (never goes backward), and is unaffected by system clock adjustments such as NTP synchronization. It was introduced in Python 3.3 via PEP 418 specifically for short-duration benchmarking. For sub-microsecond work where floating-point precision matters, time.perf_counter_ns(), added in Python 3.7, returns an integer nanosecond count. Use time.monotonic() for timeouts and delays; reserve perf_counter() for benchmarking.
"
— PEP 418, Python Software Foundationtime.perf_counter()should be used … to get the most precise performance counter."
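A quick comparison of the two counters, as a sketch (the measured loop is arbitrary):

```python
import time

start_s = time.perf_counter()      # float seconds
start_ns = time.perf_counter_ns()  # int nanoseconds, no float rounding
total = sum(range(100_000))
elapsed_ns = time.perf_counter_ns() - start_ns
elapsed_s = time.perf_counter() - start_s
print(f"{elapsed_ns} ns ({elapsed_s:.6f} s)")
```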
2. Call Logging
A logging decorator creates an audit trail of every function invocation. It records the function name, the arguments it received, and the value it returned. This is especially useful in API-heavy applications and data pipelines where tracing the sequence of calls matters.
import logging
import functools

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def log_calls(func):
    """Log function name, arguments, and return value."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args_repr = [repr(a) for a in args]
        kwargs_repr = [f"{k}={v!r}" for k, v in kwargs.items()]
        signature = ", ".join(args_repr + kwargs_repr)
        logger.info("Calling %s(%s)", func.__name__, signature)
        result = func(*args, **kwargs)
        logger.info("%s returned %r", func.__name__, result)
        return result
    return wrapper

@log_calls
def calculate_tax(income, rate=0.25):
    return income * rate

calculate_tax(85000, rate=0.30)
# INFO:__main__:Calling calculate_tax(85000, rate=0.3)
# INFO:__main__:calculate_tax returned 25500.0
The repr() calls ensure that string arguments are displayed with quotes, making it easy to distinguish "42" from 42 in the log output. For production use, you would route these records through structured logging (JSON fields rather than interpolated strings) into your observability stack.
3. Memoization (Result Caching)
Memoization stores the result of a function call keyed by its arguments. When the same arguments appear again, the cached result is returned without re-executing the function. This transforms expensive recursive algorithms from exponential to linear time and eliminates redundant API calls or database queries.
import functools

def memoize(func):
    """Cache function results based on arguments."""
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args in cache:
            return cache[args]
        result = func(*args)
        cache[args] = result
        return result
    wrapper.cache = cache
    wrapper.cache_clear = cache.clear
    return wrapper

@memoize
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(80))         # 23416728348467685
print(len(fibonacci.cache))  # 81 entries cached
Python's standard library provides functools.lru_cache, which does the same thing with additional features: a configurable maximum cache size, automatic eviction of the least recently used entries, and thread-safe cache storage. As the Python docs note, "the cache is threadsafe so that the wrapped function can be used in multiple threads." For simple cases, the custom version above is instructive. For production, prefer lru_cache:
import functools
import time

@functools.lru_cache(maxsize=256)
def expensive_query(user_id, date_range):
    time.sleep(0.5)  # simulates a slow database call
    return {"user_id": user_id, "records": 142}

# First call: 0.5s. Second call with same args: instant.
result = expensive_query("u_8837", "2026-Q1")
Memoization only works for functions with hashable arguments. Lists, dictionaries, and sets cannot be used as cache keys. If your function accepts mutable arguments, convert them to tuples or frozensets before caching.
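One way to lift that restriction is to normalize arguments into hashable equivalents before the cache lookup. A sketch (the freeze helper and memoize_frozen name are illustrative, not part of the decorator above):

```python
import functools

def freeze(value):
    """Convert common mutable containers into hashable equivalents."""
    if isinstance(value, list):
        return tuple(freeze(v) for v in value)
    if isinstance(value, dict):
        return frozenset((k, freeze(v)) for k, v in value.items())
    if isinstance(value, set):
        return frozenset(freeze(v) for v in value)
    return value

def memoize_frozen(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        key = tuple(freeze(a) for a in args)  # hashable even for list/dict args
        if key not in cache:
            cache[key] = func(*args)
        return cache[key]
    return wrapper

@memoize_frozen
def total(values):          # accepts a list -- normally unusable as a cache key
    return sum(values)

print(total([1, 2, 3]))     # 6 -- computed
print(total([1, 2, 3]))     # 6 -- served from cache
```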
Consider calling fibonacci(10) with the custom @memoize decorator: the cache ends up with 11 entries (n = 0 through 10). Call fibonacci(10) again and the function body executes zero times. The wrapper checks if args in cache first and returns cache[args] directly, so execution never reaches func(*args); there is no verification step, and the cached value is returned unconditionally. That is the entire point of memoization: eliminate redundant computation entirely.
4. Input Validation
A validation decorator checks that a function's arguments meet expected criteria before the function body executes. This catches errors at the boundary of the function call rather than deep inside the implementation, producing clearer error messages and preventing invalid state from propagating.
import functools
import inspect

def validate_types(**expected_types):
    """Validate argument types against the types given as keyword arguments."""
    def decorator(func):
        sig = inspect.signature(func)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for param_name, expected in expected_types.items():
                if param_name in bound.arguments:
                    value = bound.arguments[param_name]
                    if not isinstance(value, expected):
                        raise TypeError(
                            f"{func.__name__}() parameter '{param_name}' "
                            f"expected {expected.__name__}, got {type(value).__name__}"
                        )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@validate_types(name=str, age=int)
def create_user(name, age, email=None):
    return {"name": name, "age": age, "email": email}

create_user("Kandi", 30)        # Works
create_user("Kandi", "thirty")  # TypeError: parameter 'age' expected int, got str
The inspect.signature and bind combination maps positional and keyword arguments to their parameter names regardless of how the caller passes them. This means create_user("Kandi", age=30) and create_user("Kandi", 30) are both validated correctly.
5. Rate Limiting
A rate limiting decorator prevents a function from being called more frequently than a specified threshold. This is critical when consuming third-party APIs that enforce call quotas -- exceeding the limit can result in temporary bans or elevated costs. For production API throttling strategies including sliding windows and token buckets, see the guide to fixed window vs sliding window vs token bucket rate limiting.
import time
import functools

def rate_limit(calls_per_second=1):
    """Throttle function calls to a maximum frequency."""
    min_interval = 1.0 / calls_per_second
    def decorator(func):
        last_called = [0.0]
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            elapsed = time.monotonic() - last_called[0]
            if elapsed < min_interval:
                time.sleep(min_interval - elapsed)
            last_called[0] = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rate_limit(calls_per_second=2)
def fetch_stock_price(symbol):
    # Simulates an API call
    return {"symbol": symbol, "price": 182.63}

# These calls will be spaced at least 0.5s apart
for sym in ["AAPL", "GOOG", "MSFT", "AMZN"]:
    print(fetch_stock_price(sym))
The last_called value is stored in a list rather than a plain variable because closures in Python can read but not rebind variables from enclosing scopes without the nonlocal keyword. Using a mutable container like a list sidesteps this restriction. Alternatively, you could use nonlocal last_called with a plain float.
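For comparison, here is the same throttle written with nonlocal and a plain float. The behavior is identical; the choice is purely stylistic:

```python
import time
import functools

def rate_limit(calls_per_second=1):
    """Throttle function calls, tracking the last call in a plain float."""
    min_interval = 1.0 / calls_per_second
    def decorator(func):
        last_called = 0.0  # plain float; nonlocal lets the wrapper rebind it
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal last_called
            elapsed = time.monotonic() - last_called
            if elapsed < min_interval:
                time.sleep(min_interval - elapsed)
            last_called = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorator
```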
6. Access Control
An access control decorator checks authorization before allowing a function to execute. This pattern appears throughout web frameworks -- Flask uses @login_required, Django uses @permission_required -- but the underlying mechanism is the same: inspect the caller's credentials and either proceed or reject.
import functools

def require_permission(permission):
    """Block execution unless the user has the required permission."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            user_perms = user.get("permissions", [])
            if permission not in user_perms:
                raise PermissionError(
                    f"User '{user.get('name')}' lacks '{permission}' permission"
                )
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("delete")
def remove_document(user, doc_id):
    print(f"Document {doc_id} removed by {user['name']}")

admin = {"name": "Kandi", "permissions": ["read", "write", "delete"]}
viewer = {"name": "Guest", "permissions": ["read"]}

remove_document(admin, "doc_991")   # Document doc_991 removed by Kandi
remove_document(viewer, "doc_991")  # PermissionError raised
The parameterized structure -- require_permission("delete") returning a decorator -- allows different functions to require different permissions while sharing the same enforcement logic.
7. Automatic Retry
A retry decorator re-executes a function when it raises a specified exception. This is essential for network calls, database connections, and any operation subject to transient failures. Adding exponential backoff prevents overwhelming a recovering service.
import time
import random
import functools

def retry(max_tries=3, delay=1.0, backoff=2, exceptions=(Exception,)):
    """Retry with exponential backoff on specified exceptions."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            current_delay = delay
            last_exc = None
            for attempt in range(1, max_tries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
                    if attempt < max_tries:
                        print(f"Retry {attempt}/{max_tries} for "
                              f"{func.__name__} in {current_delay:.1f}s")
                        time.sleep(current_delay)
                        current_delay *= backoff
            raise last_exc
        return wrapper
    return decorator

@retry(max_tries=4, delay=0.5, exceptions=(ConnectionError, TimeoutError))
def fetch_weather(city):
    if random.random() < 0.7:
        raise ConnectionError("Service unavailable")
    return {"city": city, "temp_c": 22}
For production retry logic with jitter, async support, and composable stop conditions, consider the tenacity library. A custom decorator like this one is appropriate when you want zero dependencies or need to understand the mechanism from the ground up.
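As a taste of what such libraries add, jitter is a small change to the custom decorator: randomize each sleep so that many clients retrying at once do not synchronize their attempts. A sketch (the retry_with_jitter name and "full jitter" policy are illustrative, not tenacity's API):

```python
import random
import time
import functools

def retry_with_jitter(max_tries=3, delay=0.5, backoff=2):
    """Retry with "full jitter": sleep a uniform random fraction of the backoff."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            current_delay = delay
            for attempt in range(1, max_tries + 1):
                try:
                    return func(*args, **kwargs)
                except ConnectionError:
                    if attempt == max_tries:
                        raise
                    time.sleep(random.uniform(0, current_delay))  # full jitter
                    current_delay *= backoff
        return wrapper
    return decorator
```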
The decorator below is supposed to retry a function up to 3 times with a 1-second delay on ConnectionError. It runs without crashing — but it never actually retries. Study the code and identify the problem.
import time, functools

def retry(max_tries=3, delay=1.0, exceptions=(Exception,)):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            current_delay = delay
            for attempt in range(1, max_tries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    if attempt == max_tries:
                        time.sleep(current_delay)
                        current_delay *= 2
                    else:
                        raise exc
        return wrapper
    return decorator

@retry(max_tries=3, delay=1.0, exceptions=(ConnectionError,))
def fetch(url):
    raise ConnectionError("timeout")
What is the bug?
The bug is an inverted condition. It should read if attempt < max_tries: sleep and continue on every attempt except the last, then let the final exception propagate. As written, the first failure falls into the else: raise exc branch and exits immediately; the sleep is only reached on the final attempt, after which the loop ends anyway. (Two things that are not bugs: an except clause may take a variable holding an exception tuple, and current_delay is a local variable reassigned within the same function scope, so no nonlocal is needed.) The inversion type-checks and runs cleanly, which makes it easy to miss. The corrected branch:
                except exceptions as exc:
                    if attempt < max_tries:  # ← was: == max_tries
                        time.sleep(current_delay)
                        current_delay *= 2
                    else:
                        raise exc
8. Singleton Pattern
The singleton decorator ensures a class produces only one instance. Subsequent calls to the constructor return the existing instance instead of creating a new one. This is useful for connection pools, configuration managers, and logger instances that should exist exactly once.
import functools

def singleton(cls):
    """Ensure only one instance of the decorated class exists."""
    instances = {}
    @functools.wraps(cls, updated=[])
    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance

@singleton
class DatabasePool:
    def __init__(self, host, port=5432):
        self.host = host
        self.port = port
        print(f"Pool created for {host}:{port}")

    def query(self, sql):
        return f"Executing: {sql}"

pool_a = DatabasePool("db.example.com")  # Pool created for db.example.com:5432
pool_b = DatabasePool("db.example.com")  # No output -- returns existing instance
print(pool_a is pool_b)                  # True
Note that this decorator is applied to a class, not a function. The @singleton syntax replaces the class with get_instance, so calling DatabasePool(...) routes through the decorator's instance cache. The updated=[] argument in functools.wraps stops it from copying the class's __dict__ onto the wrapper; the default behavior would dump every class attribute, including method descriptors, into the function's namespace.
This implementation is not thread-safe. Two threads can both evaluate the cls not in instances check simultaneously and each proceed to create a new instance, violating the singleton guarantee. For concurrent applications, protect the creation block with a threading.Lock. Use the pattern above for single-threaded code or as a conceptual reference.
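The thread-safe variant is a small extension: guard creation with a lock and re-check inside it, the classic double-checked locking pattern. A sketch under the same structure as above:

```python
import functools
import threading

def singleton(cls):
    """Thread-safe singleton: double-checked locking around instance creation."""
    instances = {}
    lock = threading.Lock()
    @functools.wraps(cls, updated=[])
    def get_instance(*args, **kwargs):
        if cls not in instances:            # fast path: no lock on cache hits
            with lock:
                if cls not in instances:    # re-check inside the lock
                    instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance
```

The outer check keeps the hot path lock-free; the inner check closes the window where two threads both saw an empty cache before either acquired the lock.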
9. Deprecation Warnings
When you need to phase out a function but cannot remove it immediately, a deprecation decorator warns callers that they should migrate to a replacement. Python's built-in warnings module integrates with this pattern to issue alerts that can be silenced, escalated to errors, or filtered by category. Python 3.13 introduced warnings.deprecated as a first-party decorator via PEP 702, which also signals static type checkers (mypy, pyright) at usage sites. The custom implementation below works on all Python versions and illustrates the mechanism:
import warnings
import functools

def deprecated(reason="", replacement=""):
    """Emit a DeprecationWarning when the decorated function is called."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            msg = f"{func.__name__}() is deprecated."
            if reason:
                msg += f" Reason: {reason}."
            if replacement:
                msg += f" Use {replacement}() instead."
            warnings.warn(msg, category=DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated(reason="uses legacy auth", replacement="authenticate_v2")
def authenticate(username, password):
    return username == "admin" and password == "secret"

authenticate("admin", "secret")
# DeprecationWarning: authenticate() is deprecated.
# Reason: uses legacy auth. Use authenticate_v2() instead.
The stacklevel=2 parameter is critical. With the default stacklevel=1, the warning would point to the line inside the wrapper where warnings.warn() is called — which is inside your decorator code and unhelpful to the developer reading it. With stacklevel=2, Python skips the wrapper's frame and points to the caller of the deprecated function, which is exactly where the change needs to be made. As the Python docs note, omitting the correct stacklevel causes the warning to point to the wrong source — which defeats its purpose entirely.
"This makes the warning refer to deprecated_api's caller, rather than to the source."
— Python docs: warnings module, Python Software Foundation
10. Debug Tracing
A tracing decorator records the full lifecycle of a function call: the arguments going in, the return value coming out, and any exception that gets raised. This is more comprehensive than the logging decorator -- it captures failure paths as well as successes and formats the output for easy scanning during debugging sessions.
import functools

def trace(func):
    """Trace function calls, returns, and exceptions."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        call_id = id(args) % 10000
        args_str = ", ".join(
            [repr(a) for a in args] +
            [f"{k}={v!r}" for k, v in kwargs.items()]
        )
        print(f"[TRACE {call_id}] -> {func.__name__}({args_str})")
        try:
            result = func(*args, **kwargs)
            print(f"[TRACE {call_id}] <- {func.__name__} returned {result!r}")
            return result
        except Exception as exc:
            print(f"[TRACE {call_id}] !! {func.__name__} raised "
                  f"{type(exc).__name__}: {exc}")
            raise
    return wrapper

@trace
def divide(a, b):
    return a / b

divide(10, 3)
# [TRACE 8821] -> divide(10, 3)
# [TRACE 8821] <- divide returned 3.3333333333333335
divide(10, 0)
# [TRACE 4412] -> divide(10, 0)
# [TRACE 4412] !! divide raised ZeroDivisionError: division by zero
The call_id correlates the entry and exit lines for the same call, making it easier to match which return belongs to which invocation in concurrent or recursive code. Note that id(args) % 10000 uses the identity of the arguments tuple — in CPython this is its memory address — so IDs can theoretically collide or repeat across calls once objects are garbage collected. For more reliable correlation in production tracing, generate a uuid.uuid4() or an incrementing counter instead.
Stacking Decorators: Application Order vs. Call Order
When you apply more than one decorator to a function, two different orderings are in play and confusing them is one of the most common decorator mistakes — for a full frame-by-frame trace of how Python resolves chained decorator execution order, the dedicated guide walks through the stack in detail. Application order is bottom-up: the decorator closest to the function definition wraps first. Call order for the "before" code is top-down: the outermost wrapper's preamble runs first when the function is invoked. The two orderings are mirror images of each other.
import functools

def bold(func):
    @functools.wraps(func)
    def wrapper(*a, **kw):
        print("bold: BEFORE")
        result = func(*a, **kw)
        print("bold: AFTER")
        return result
    return wrapper

def italic(func):
    @functools.wraps(func)
    def wrapper(*a, **kw):
        print("italic: BEFORE")
        result = func(*a, **kw)
        print("italic: AFTER")
        return result
    return wrapper

@bold    # applied second → outermost wrapper
@italic  # applied first → innermost wrapper
def greet(name):
    print(f"Hello, {name}")

greet("world")
# bold: BEFORE    ← outermost "before" runs first
# italic: BEFORE  ← innermost "before" runs second
# Hello, world    ← function body
# italic: AFTER   ← innermost "after" runs third
# bold: AFTER     ← outermost "after" runs last
The equivalent without decorator syntax is greet = bold(italic(greet)). italic wraps first, then bold wraps the result — so bold's wrapper is what actually gets called. This bottom-up application, top-down call-entry model has a direct consequence for the ten patterns in this article: when you combine @timer and @retry, the order determines whether the timer measures only the successful final call or all attempts including retried ones.
# timer wraps retry: measures total time INCLUDING all retries and sleeps
@timer
@retry(max_tries=3, delay=0.5)
def call_external_api(endpoint):
    ...

# retry wraps timer: each individual attempt is timed separately
@retry(max_tries=3, delay=0.5)
@timer
def call_external_api(endpoint):
    ...
There is no universally correct order — it depends on what you want to measure. As a rule of thumb, place time-sensitive decorators like @timer outermost (top) when you want end-to-end measurement, and innermost (bottom) when you want per-attempt measurement. Place @log_calls outermost so it records the call as the caller sees it, not as the inner implementation sees it.
Class-Based Decorators
Any callable can be a decorator — not just a function. A class that implements __init__ and __call__ works identically to a function decorator but has significant advantages: state is stored in instance attributes instead of mutable closure variables, you can add public methods (like cache_clear()), and you can use inheritance to extend behavior. When Python evaluates @MyDecorator, it calls MyDecorator(func), producing an instance. When the decorated function is called, Python calls that instance's __call__ method.
import functools

class CallCounter:
    """Counts how many times the decorated function has been called."""
    def __init__(self, func):
        functools.update_wrapper(self, func)  # preserves __name__, __doc__, etc.
        self.func = func
        self.call_count = 0

    def __call__(self, *args, **kwargs):
        self.call_count += 1
        return self.func(*args, **kwargs)

    def reset(self):
        self.call_count = 0

@CallCounter
def process_record(record_id):
    return f"processed {record_id}"

process_record("a")
process_record("b")
process_record("c")
print(process_record.call_count)  # 3 — accessible as an attribute
process_record.reset()
print(process_record.call_count)  # 0
Note the use of functools.update_wrapper(self, func) rather than @functools.wraps. Inside a class, @functools.wraps cannot be applied in the usual way: the function whose metadata needs preserving (self.func) only arrives in __init__, and the target of the copy is the instance itself, not a wrapper function. update_wrapper is the lower-level call that functools.wraps delegates to, and it works correctly here.
Use a function decorator when the behavior is stateless or state can be expressed cleanly with a closure variable. Use a class-based decorator when you need public methods to inspect or mutate state (cache_clear(), reset()), when you want to use inheritance to share logic across decorator variants, or when the decorator wraps multiple methods on the same object.
Decorating Methods: The self Problem
The decorators in this article are written for module-level functions. Applying them unchanged to instance methods introduces two problems. First, self is passed as the first positional argument, so args[0] inside the wrapper is the instance — not a data argument. That is usually harmless. Second, and more seriously, the memoization decorator stores a tuple of all arguments as the cache key. When that tuple includes self, the cache holds a strong reference to every instance that ever called the method, preventing garbage collection for the lifetime of the decorator.
import functools

class DataProcessor:
    def __init__(self, name):
        self.name = name

    # Correct: functools.cached_property caches per instance, GC-safe
    @functools.cached_property
    def expensive_config(self):
        print(f"Computing config for {self.name}")
        return {"threshold": 0.95, "retries": 3}

    # Works with a caveat: self becomes part of the cache key, so caching is
    # per-instance, but the cache holds a strong reference to self until the
    # entry is evicted — keep maxsize bounded, or use weakrefs for long-lived objects
    @functools.lru_cache(maxsize=256)
    def score(self, value: float) -> float:
        return value * self.expensive_config["threshold"]

p = DataProcessor("model-a")
print(p.score(1.0))  # Computing config for model-a, then 0.95
print(p.score(1.0))  # Cached — no recomputation
For caching instance methods, functools.cached_property (Python 3.8+) is the right tool when the result depends only on self — it stores the result in the instance's own __dict__, which means garbage collection works normally. For methods with arguments, functools.lru_cache with a bounded maxsize limits the memory impact. The custom unbounded memoize decorator from section 3 should not be applied to instance methods without modification.
Decorators That Work With and Without Parentheses
A common frustration: you write @retry(max_tries=3) and it works, but then a colleague writes @retry without parentheses and gets a TypeError at call time. The two forms are structurally different — @retry(max_tries=3) calls retry(max_tries=3) and uses the result as a decorator, while @retry passes the function directly to retry. A single decorator can handle both forms with a small conditional trick using a keyword-only sentinel.
import time
import functools

def retry(_func=None, *, max_tries=3, delay=1.0):
    """Retry decorator usable as @retry or @retry(max_tries=5).

    The leading underscore on _func signals it is internal: callers
    should never pass it explicitly. The * forces max_tries and delay
    to be keyword arguments, so @retry(3, 0.5) is a TypeError rather
    than a silent misuse.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(1, max_tries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    if attempt < max_tries:
                        time.sleep(delay)
            raise last_exc
        return wrapper
    # Called as @retry (no parentheses): _func is the decorated function
    if _func is not None:
        return decorator(_func)
    # Called as @retry(...) (with parentheses): return the decorator
    return decorator

@retry  # works — uses defaults: max_tries=3, delay=1.0
def fetch_a():
    ...

@retry(max_tries=5, delay=0.2)  # works — custom settings
def fetch_b():
    ...
The _func=None sentinel is positional; all other parameters are keyword-only (enforced by the bare *). When Python evaluates @retry, it calls retry(fetch_a), so _func receives the function and the decorator is applied immediately. When Python evaluates @retry(max_tries=5), it calls retry(max_tries=5), so _func is None and the function returns the inner decorator to be applied next. The * also rejects @retry(3, 0.5) with an immediate TypeError; note that @retry(5) alone still slips through, binding 5 to _func, and fails at the definition site with a cryptic "'int' object is not callable".
Decorator Performance Overhead
Every decorator adds at least one function call per invocation: Python must enter the wrapper, evaluate any pre-call logic, call the original function, evaluate any post-call logic, and return. For functions called millions of times in a tight loop — sorting keys, numerical kernels, inner rendering loops — this overhead is measurable. For I/O-bound or infrequently called code it is negligible.
import timeit
import functools

def noop_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def add(a, b):
    return a + b

@noop_decorator
def add_decorated(a, b):
    return a + b

n = 1_000_000
bare = timeit.timeit("add(1, 2)", globals=globals(), number=n)
wrapped = timeit.timeit("add_decorated(1, 2)", globals=globals(), number=n)
print(f"bare:    {bare:.3f}s over {n:,} calls")
print(f"wrapped: {wrapped:.3f}s over {n:,} calls")
print(f"overhead per call: ~{(wrapped - bare) * 1e9 / n:.0f} ns")
# bare:    0.041s over 1,000,000 calls
# wrapped: 0.094s over 1,000,000 calls
# overhead per call: ~53 ns
A minimal pass-through wrapper costs roughly 50–100 nanoseconds per call on a modern CPU — the overhead of two extra Python frame creations. For functions that run in microseconds or faster and are called in hot loops, this matters. The standard mitigations are: (1) use __slots__ in class-based decorators to reduce attribute lookup cost; (2) move guard conditions to module-level constants and short-circuit early; (3) in truly performance-critical paths, apply the decorator conditionally, for example only when __debug__ is true, so that running with -O or PYTHONOPTIMIZE=1 skips the wrapping entirely. The patterns in this article — retry, rate limiting, memoization — all operate on I/O-bound or infrequently-invoked functions where the overhead is irrelevant compared to network or disk latency.
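Conditional application can be as simple as selecting the decorator at import time. A sketch gated on __debug__, which is False when Python runs with -O or PYTHONOPTIMIZE=1 (the timer is repeated from section 1 to keep the example self-contained):

```python
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__}: {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

def identity(func):
    return func  # zero overhead: the function is returned untouched

# __debug__ is True normally, False under python -O / PYTHONOPTIMIZE=1
maybe_timer = timer if __debug__ else identity

@maybe_timer
def hot_loop(n):
    return sum(range(n))
```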
Async-Safe Decorator State with contextvars
Every closure-based decorator in this article stores state in a Python variable — a dict, a float, a list. That works perfectly in single-threaded synchronous code. In asyncio, it silently breaks.
When you run asyncio.gather() or asyncio.create_task(), multiple coroutines execute concurrently on the same OS thread. A closure variable is not per-task — it is per-decorator-instance. Two concurrent tasks calling the same rate-limited function share the same last_called list. Task A updates it; Task B reads the value Task A just wrote. The decorator throttles both tasks as if they were a single caller, which is not what you want if each task has its own independent call budget.
The standard library's contextvars module (Python 3.7+) solves this. A ContextVar stores an independent value per execution context. Each task created with asyncio.create_task() (and each thread spawned via asyncio.to_thread()) runs in a shallow copy of the current context, so its ContextVar values start from whatever was set at the point of task creation and then diverge independently.
import asyncio
import time
import contextvars
import functools

def async_rate_limit(calls_per_second: float):
    min_interval = 1.0 / calls_per_second
    # ContextVar created per decorator instance — each decorated function
    # gets its own independent per-context last-call tracker
    _last_called: contextvars.ContextVar[float] = contextvars.ContextVar(
        f'_last_called_{calls_per_second}', default=0.0
    )
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            now = time.monotonic()
            last = _last_called.get()
            wait = min_interval - (now - last)
            if wait > 0:
                await asyncio.sleep(wait)  # yields control; does not block the thread
            _last_called.set(time.monotonic())
            return await func(*args, **kwargs)
        return wrapper
    return decorator

@async_rate_limit(calls_per_second=2)
async def fetch(url: str) -> str:
    await asyncio.sleep(0.01)  # simulate I/O
    return f"response from {url}"

async def main():
    # Each task gets its own context copy — rate limits do not bleed across tasks
    results = await asyncio.gather(
        fetch("https://api.example.com/a"),
        fetch("https://api.example.com/b"),
    )
    print(results)

asyncio.run(main())
When the simpler closure approach is still correct
If your intent is to enforce a global rate limit shared across all concurrent callers — for example, a third-party API that caps your account's total request rate — then a closure variable or a threading.Lock-protected counter is exactly right. Use ContextVar when each caller's context should be isolated. Use a shared variable when the limit is global.
threading.local() is not a substitute here. It isolates by OS thread, not by asyncio task. All coroutines on the same event loop thread share the same threading.local() value. ContextVar is the correct abstraction for asyncio-aware state isolation.
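The shared, global variant described above can be sketched with a lock-protected closure. This is a minimal illustration, not code from an earlier section; the names global_rate_limit and ping are hypothetical.

```python
import functools
import threading
import time

def global_rate_limit(calls_per_second: float):
    """Sketch: one limit shared across ALL threads and callers of the function."""
    min_interval = 1.0 / calls_per_second
    lock = threading.Lock()
    last_called = [0.0]  # one shared timestamp, deliberately not per-context

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Holding the lock while sleeping makes concurrent waiters queue up,
            # which is exactly the behavior a global cap requires.
            with lock:
                wait = min_interval - (time.monotonic() - last_called[0])
                if wait > 0:
                    time.sleep(wait)
                last_called[0] = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorator

@global_rate_limit(calls_per_second=100)
def ping() -> str:
    return "pong"
```

Because the timestamp and lock live in the decorator's closure, every caller — on every thread — competes for the same budget.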
The Registration Pattern
Every decorator in this article returns a wrapper function that replaces the original. The registration pattern does something different: it stores a reference to the function in a registry at decoration time and returns the function completely unchanged. No wrapper. No overhead. No functools.wraps needed because the original function is the return value.
This is the mechanism behind Flask's @app.route, pytest's @pytest.fixture, and functools.singledispatch. At module import time, the decorator fires, inserts the function into a dict, and hands the function back untouched. The registry accumulates entries as the module loads. A dispatch() method later selects the right handler by key.
from collections.abc import Callable

class CommandRegistry:
    """
    Registers handler functions by command name.
    Handlers are stored at decoration time and invoked by name at runtime.
    """
    def __init__(self):
        self._registry: dict[str, Callable] = {}

    def register(self, name: str):
        """Decorator factory: register func under the given command name."""
        def decorator(func):
            if name in self._registry:
                raise ValueError(f"Command '{name}' is already registered.")
            self._registry[name] = func
            return func  # original function returned unchanged
        return decorator

    def dispatch(self, name: str, *args, **kwargs):
        """Call the registered handler for name, raising KeyError if absent."""
        if name not in self._registry:
            raise KeyError(f"No handler registered for command '{name}'.")
        return self._registry[name](*args, **kwargs)

    def commands(self) -> list[str]:
        return list(self._registry.keys())

cli = CommandRegistry()

@cli.register("build")
def handle_build(target: str) -> str:
    return f"Building {target}..."

@cli.register("test")
def handle_test(suite: str) -> str:
    return f"Running test suite: {suite}"

@cli.register("deploy")
def handle_deploy(env: str) -> str:
    return f"Deploying to {env}..."

# Dispatch by name — no if/elif chain, no manual mapping table
print(cli.dispatch("build", "release"))  # Building release...
print(cli.commands())                    # ['build', 'test', 'deploy']
The key insight is that the decorator's job here is entirely a side effect: populate the registry. The function body is irrelevant to the decorator; the decorator does not care what the function does, only what name to file it under. This separates handler registration from handler logic, eliminates a parallel data structure that must be kept in sync with the function definitions, and makes the set of available commands self-documenting — you read the @cli.register lines, not a separate mapping dict somewhere else in the module.
Connection to functools.singledispatch
functools.singledispatch uses this pattern to build a type-dispatch table. Each @dispatch.register(SomeType) call stores the function under that type at decoration time. The main dispatcher inspects the first argument's type at call time and routes to the correct implementation. The same structure — decorator stores, dispatcher retrieves — drives the entire mechanism.
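The same store-then-dispatch structure is visible in a few lines of functools.singledispatch. The describe function here is an illustrative example, not one from the article:

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # fallback for types with no registered handler
    return f"unhandled type: {type(value).__name__}"

@describe.register(int)   # stored in the dispatch table at decoration time
def _(value):
    return f"integer: {value}"

@describe.register(list)
def _(value):
    return f"list of {len(value)} items"

print(describe(42))      # integer: 42
print(describe([1, 2]))  # list of 2 items
print(describe(3.5))     # unhandled type: float
```

Each register() call files the function under a type; the dispatcher inspects the first argument's type at call time and routes accordingly.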
Protocol Enforcement with Class Decorators
A class decorator receives the class object itself (not an instance) immediately after the class body executes. This fires at import time — before __init__ is ever called, before any instance exists. That timing is what makes it useful for protocol enforcement.
abc.ABCMeta enforces abstract methods at instantiation time, and only on classes that actually inherit from your base. In a plugin system that loads third-party code, you cannot guarantee the plugin author will subclass your base at all. A class decorator enforces the interface at class-definition time without requiring inheritance:
import inspect

def requires_interface(*method_specs: tuple[str, int]):
    """
    Class decorator that verifies required methods exist on the decorated class
    and accept the specified number of parameters (excluding self).

    Usage: @requires_interface(("process", 1), ("validate", 2))

    Fires at import time — before any instance is created.
    Raises TypeError immediately if the class does not satisfy the interface.
    """
    def decorator(cls):
        for method_name, param_count in method_specs:
            method = getattr(cls, method_name, None)
            if method is None:
                raise TypeError(
                    f"{cls.__name__} is missing required method '{method_name}'"
                )
            sig = inspect.signature(method)
            # subtract 1 for 'self'
            actual = len([
                p for p in sig.parameters.values()
                if p.default is inspect.Parameter.empty
                and p.kind not in (
                    inspect.Parameter.VAR_POSITIONAL,
                    inspect.Parameter.VAR_KEYWORD,
                )
            ]) - 1
            if actual != param_count:
                raise TypeError(
                    f"{cls.__name__}.{method_name} must accept exactly "
                    f"{param_count} required parameter(s) (excluding self), "
                    f"found {actual}"
                )
        return cls
    return decorator

# Interface contract: process(payload) and validate(payload, schema)
@requires_interface(("process", 1), ("validate", 2))
class JSONPlugin:
    def process(self, payload: dict) -> dict:
        return {k: str(v) for k, v in payload.items()}

    def validate(self, payload: dict, schema: dict) -> bool:
        return all(k in payload for k in schema)

# This class violates the interface — TypeError raised at import time, not at runtime
# @requires_interface(("process", 1), ("validate", 2))
# class BrokenPlugin:
#     def process(self, payload):
#         pass
#     # validate is missing entirely
The critical difference from abc.ABCMeta: the error surfaces the moment the class body finishes executing at import, not when someone constructs an instance later in a different module. In a plugin architecture where you load third-party classes by name at startup, catching the violation at load time prevents silent runtime failures deep inside business logic.
When to use abc.ABCMeta instead
If you control the base class and can require inheritance, abc.ABCMeta is the cleaner tool — it has editor support, type checker integration, and a well-understood contract. The class decorator approach is the right choice when you are enforcing an interface on classes you do not control, when the plugin author may not use your base class, or when you need to check method signatures rather than just method existence.
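For contrast, here is a minimal abc.ABC sketch showing that the equivalent check fires only at instantiation. PluginBase and IncompletePlugin are hypothetical names for illustration:

```python
import abc

class PluginBase(abc.ABC):
    @abc.abstractmethod
    def process(self, payload: dict) -> dict: ...

class IncompletePlugin(PluginBase):
    pass  # missing process, yet the class definition itself succeeds

# The violation surfaces only when an instance is constructed:
try:
    IncompletePlugin()
except TypeError as exc:
    print(exc)  # Can't instantiate abstract class IncompletePlugin ...
```

Note that the class statement for IncompletePlugin raises nothing; a plugin loader that imports but never instantiates it would not notice the missing method.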
Python Points to Remember
- Decorators are syntactic sugar for function replacement. The @decorator syntax, introduced in Python 2.4 via PEP 318, is equivalent to writing func = decorator(func) after the definition. Nothing magical happens at the @ sign — Python evaluates the decorator expression and passes the function to it. The result replaces the original name in the enclosing scope.
- Simple and parameterized decorators have different nesting depths. A simple decorator (like @timer) uses two levels: an outer function receives the target, and a wrapper replaces it. A parameterized decorator (like @retry(max_tries=4)) uses three levels: the outermost function captures configuration and returns the actual decorator, which in turn returns the wrapper. Confusing these two structures is the single most common decorator bug.
- Always use functools.wraps — and understand what it copies. Per the Python docs, functools.wraps copies __module__, __name__, __qualname__, __annotations__, and __doc__ from the wrapped function to the wrapper (these are WRAPPER_ASSIGNMENTS). It also adds a __wrapped__ attribute pointing to the original function, which lets introspection tools unwrap decorator chains and lets functools.lru_cache bypass the wrapper when needed. Without it, debuggers, documentation generators, and serialization libraries will see the wrapper, not the function you intended.
- Decorator state lives in the closure — and that has consequences. The memoization cache, the rate limiter's last-call timestamp, and the singleton's instance dictionary all persist between calls because they are defined in the decorator's enclosing scope and captured by the wrapper's closure. This also means state is shared across all callers of the same decorated function. For example, @rate_limit applied to fetch_stock_price throttles all calls to that function globally, not per-caller. Design accordingly.
- The singleton decorator is not thread-safe without a lock. The cls not in instances check and the subsequent assignment are not atomic. Two threads can both pass the check before either completes the assignment, creating two instances. Add a threading.Lock around the creation block in concurrent applications. This limitation applies to the decorator pattern; the __new__-based singleton and metaclass approaches have the same issue unless explicitly locked.
- stacklevel=2 in warnings.warn() is not optional. Inside a decorator wrapper, the call stack has an extra frame from the wrapper itself. With stacklevel=1 (the default), the deprecation warning points to the line inside your wrapper where warnings.warn() is called — which developers cannot act on. With stacklevel=2, it points to the caller of the deprecated function. As the Python docs explain, omitting this "would defeat the purpose of the warning message." If your decorator itself is called from another helper, you may need stacklevel=3.
- Python 3.13 added warnings.deprecated as a first-party decorator. PEP 702 introduced @warnings.deprecated(message), which emits a DeprecationWarning at runtime and also signals static type checkers (mypy, pyright) to emit diagnostics at usage sites — without running the code. The custom implementation in section 9 remains valid for Python versions below 3.13 and for understanding the mechanism.
- Decorator stacking order is bottom-up on application, top-down on call entry. @bold @italic def f() means bold(italic(f)). The outermost wrapper runs its before-code first when the function is called. This determines whether @timer @retry measures total elapsed time (including retries) or only individual attempts. Always reason through the equivalent nested form before stacking decorators that interact.
- Class-based decorators are the right tool when state needs public methods. Store the wrapped function in __init__, implement the wrapping logic in __call__, and call functools.update_wrapper(self, func) rather than using @functools.wraps (which cannot decorate __call__ directly for this pattern). Add inspection or mutation methods like reset() or cache_clear() as normal instance methods.
- The custom memoize decorator is not safe on instance methods. Using it on a method causes the cache to hold strong references to every self that ever called the method, blocking garbage collection. Use functools.cached_property for computed attributes or functools.lru_cache with bounded maxsize for methods that take arguments.
- Choose custom decorators or libraries based on what actually varies. A custom @retry decorator is the right tool when you understand the failure modes and need zero dependencies. When requirements grow — async support, jitter, composable stop conditions, structured logging — the tenacity library provides tested solutions. Similarly, functools.lru_cache is thread-safe and bounded in ways the custom memoize decorator is not. The right question is not "should I write it or use a library?" but "what properties do I actually need?"
- Use ContextVar for async-safe decorator state, not closure variables. Closure variables are shared across all concurrent asyncio tasks on the same thread. contextvars.ContextVar (Python 3.7+) gives each task an independent copy of the value, created automatically when the task is spawned with asyncio.create_task(). threading.local() does not solve this — it isolates by OS thread, not by coroutine context.
- The registration pattern returns the original function unchanged. Unlike every other decorator in this article, a registry decorator's job is a side effect at decoration time: insert the function into a dict, then return it unmodified. There is no wrapper, no overhead, and no functools.wraps needed. This is how Flask's @app.route, pytest's @pytest.fixture, and functools.singledispatch all work under the hood.
- A class decorator fires at import time, before any instance exists. This is what makes it the right tool for protocol enforcement on plugin classes you do not control. The check runs the moment the class body finishes executing — not when an instance is constructed. abc.ABCMeta catches missing methods at instantiation; a class decorator catches them at class-definition time, which is earlier and more predictable in a plugin-loading architecture.
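The locked singleton mentioned in the points above might be sketched like this. Config is a hypothetical example class, and the unlocked fast-path read assumes CPython's thread-safe dict operations:

```python
import functools
import threading

def singleton(cls):
    """Sketch of a thread-safe singleton decorator: a lock guards check-and-create."""
    instances = {}
    lock = threading.Lock()

    @functools.wraps(cls)
    def get_instance(*args, **kwargs):
        if cls not in instances:            # fast path: no lock once created
            with lock:                      # serialize creation
                if cls not in instances:    # re-check: another thread may have won
                    instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance

@singleton
class Config:
    def __init__(self):
        self.settings = {}

assert Config() is Config()
```

The second membership test inside the lock is the essential part: without it, two threads that both pass the unlocked check would still create two instances.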
Each of the ten patterns in this article solves a problem that appears repeatedly across Python codebases. The mechanism is always the same: a wrapper function intercepts the call and adds behavior without touching the original. Once you can read and write the three-level structure fluently, any new cross-cutting concern becomes a single decorator definition and a single line at the call site.
Frequently Asked Questions
- What are the most useful Python decorator use cases?
The ten most practical Python decorator use cases are: execution timing (time.perf_counter()), function call logging, result caching via memoization, argument type validation using inspect.signature, rate limiting, access control and authorization, automatic retry with exponential backoff, the singleton design pattern, deprecation warnings, and debug tracing. Each adds behavior before, after, or around a function call without modifying the function body.

- Why must you use functools.wraps in a Python decorator?
Without functools.wraps, the decorated function loses its original __name__, __qualname__, __doc__, __module__, and __annotations__ attributes. These are replaced by those of the wrapper function, which breaks introspection tools, documentation generators, debuggers, and any library that inspects function metadata. functools.wraps also adds a __wrapped__ attribute pointing to the original function, enabling stack unwrapping and cache bypassing. Per the Python docs, WRAPPER_ASSIGNMENTS covers __module__, __name__, __qualname__, __annotations__, and __doc__.

- How do you build a memoization decorator in Python?
A memoization decorator stores return values in a dictionary keyed by the function's arguments (as a tuple). On subsequent calls with the same arguments, it returns the cached value without re-executing the function. Python's standard library provides functools.lru_cache for bounded caching, but a custom version using a dict closure is appropriate when you need unbounded caching, cache inspection, or manual cache clearing. Note that all arguments must be hashable for the tuple key to work.

- How does a rate limiting decorator work in Python?
A rate limiting decorator records the timestamp of the last call using time.monotonic(). Before each invocation, it calculates elapsed time since the last call. If that elapsed time is less than the minimum interval (1 / calls_per_second), the decorator sleeps for the remaining gap before proceeding. This ensures the function is never called faster than the specified rate. The last-called timestamp is stored in a mutable list rather than a plain float because Python closures can read but not rebind enclosing-scope variables without the nonlocal keyword.

- Is the singleton decorator pattern thread-safe in Python?
No. The decorator-based singleton implementation that uses a plain dictionary for instance storage is not thread-safe. Two threads can both evaluate the cls not in instances check simultaneously and each proceed to create a new instance, violating the singleton guarantee. For thread safety, the creation block must be protected with a threading.Lock. The pattern shown in this article is appropriate for single-threaded code and is a clean illustration of the mechanism; add a lock before using it in concurrent applications.

- What changed with deprecation warnings in Python 3.13?
Python 3.13 introduced warnings.deprecated as a built-in decorator via PEP 702. It marks functions, classes, and overloads as deprecated so that static type checkers (mypy, pyright) emit diagnostics at usage sites, not just at runtime. Prior to 3.13, the standard approach was to call warnings.warn(..., category=DeprecationWarning, stacklevel=2) inside a custom decorator wrapper. The stacklevel=2 argument is critical: it makes the warning point to the caller's line rather than the line inside the decorator itself.

- What does stacklevel=2 do in warnings.warn?
The stacklevel parameter tells warnings.warn how many levels up the call stack to attribute the warning. With stacklevel=1 (the default), the warning points to the line inside the decorator where warnings.warn() is called — which is unhelpful because that line is inside your decorator code. With stacklevel=2, the warning skips the decorator's frame and points to the caller of the decorated function, which is exactly where the developer needs to make the change. This is the pattern recommended in the Python standard library documentation.

- When should you use a custom decorator versus a library?
Write a custom decorator when you need zero external dependencies, when the behavior is specific to your codebase, or when you want full understanding of the mechanism. Use a library when requirements grow beyond a simple wrapper: for retry logic with jitter and async support, consider tenacity; for bounded caching with thread-safe LRU eviction, use functools.lru_cache; for production rate limiting across distributed processes, use a library built on a shared store. The custom implementations in this article are production-viable for single-process, synchronous use cases.

- What order do stacked Python decorators execute in?
Decorators are applied bottom-up: the decorator closest to the def wraps first. On invocation, the outermost decorator's before-code runs first (top-down entry), then after-code exits innermost-first (bottom-up). Writing @bold @italic def f() is equivalent to bold(italic(f)). Order matters in practice: @timer @retry measures total elapsed time including retries and sleep delays, while @retry @timer measures each individual attempt separately.

- How do you write a decorator that works with and without parentheses?
Use a _func=None positional sentinel with keyword-only arguments enforced by a bare *. When Python evaluates @retry, it calls retry(func), so _func is the function and you apply the decorator immediately. When Python evaluates @retry(max_tries=3), it calls retry(max_tries=3), so _func is None and you return the inner decorator. The * also prevents the silent misuse @retry(5).

- Why does the memoization decorator cause memory leaks on instance methods?
The custom memoize decorator stores argument tuples as cache keys. When applied to an instance method, self is part of every tuple. The cache holds a strong reference to each instance, preventing garbage collection for the decorator's lifetime (typically the module lifetime). Use functools.cached_property (Python 3.8+) for per-instance computed attributes, and functools.lru_cache with bounded maxsize for methods with arguments.
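Those two standard-library replacements can be sketched on a hypothetical Report class:

```python
import functools

class Report:
    def __init__(self, data: list[int]):
        self.data = data

    @functools.cached_property
    def total(self) -> int:
        # Computed once, then stored on the instance itself, so the cached
        # value is garbage-collected together with the instance.
        return sum(self.data)

    @functools.lru_cache(maxsize=32)
    def scaled(self, factor: int) -> list[int]:
        # Bounded cache: at most 32 (self, factor) entries are retained, which
        # caps (but does not fully eliminate) references held to instances.
        return [x * factor for x in self.data]

r = Report([1, 2, 3])
assert r.total == 6
assert r.scaled(2) == [2, 4, 6]
```

cached_property sidesteps the leak entirely because its cache lives in the instance's __dict__; lru_cache on a method still keys on self, so the maxsize bound is what limits retained references.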