In Python, an instance attribute and a class method can share the same name. When that happens, the instance attribute blocks access to the class method — not by deleting it, but by standing in front of it. That is called shadowing, and with @classmethod, it happens without any warning. Understanding why requires going one level deeper: into Python's descriptor protocol and the C-level lookup machinery that drives every attribute access.
When you access an attribute on a Python object, Python does not just look in one place. It searches in a specific order, and whichever location produces a result first wins. Understanding that order is the key to understanding what shadowing is, why it happens, and when it matters.
How Python looks up attributes
When you write obj.name, Python does not immediately hand you whatever is stored in the class. It follows a sequence of lookups, defined by the descriptor protocol and the object's __dict__. The Python Data Model reference frames the priority chain precisely: instance lookup gives "data descriptors the highest priority, followed by instance variables, then non-data descriptors." That ordering — not a convention but a guarantee in CPython's object.__getattribute__ — governs every attribute access you write. The three tiers are:
- Data descriptors defined on the class (or its parents) — things like property objects, which define __get__, __set__, and __delete__; but any descriptor that defines __set__ or __delete__ qualifies as a data descriptor even without __get__
- Instance attributes, stored in the instance's own __dict__
- Non-data descriptors defined on the class, and other class-level attributes
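The three tiers can be observed directly. The sketch below uses an illustrative Demo class (all names are made up): a tier-2 instance entry beats the tier-3 plain function, but loses to the tier-1 property. Note that the entries are written straight into __dict__, bypassing any __set__ guard, so both shadow attempts actually land in the instance dictionary.

```python
class Demo:
    @property
    def guarded(self):          # tier 1: property is a data descriptor
        return "from property"

    def method(self):           # tier 3: plain function, a non-data descriptor
        return "from class"

d = Demo()
d.__dict__['method'] = "from instance"    # tier-2 entry shadows the tier-3 function
d.__dict__['guarded'] = "shadow attempt"  # tier-2 entry loses to the tier-1 property

print(d.method)    # from instance
print(d.guarded)   # from property
```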
The distinction between a data descriptor and a non-data descriptor is the crux of this entire topic. A data descriptor defines __set__ and/or __delete__ (with or without __get__). Because it controls assignment or deletion, Python gives it priority over the instance's own dictionary. A non-data descriptor defines only __get__ — no __set__, no __delete__. It has no say over what gets stored in __dict__, so instance attributes outrank it. The Python Data Model states this precisely:
"If an object definesThe guide continues: anything that implements only__set__()or__delete__(), it is considered a data descriptor."
__get__() is classified as a non-data descriptor. That language comes directly from the Descriptor HowTo Guide (opens in new window) authored by Raymond Hettinger and maintained by the Python Software Foundation. In practice, data descriptors typically define both __get__ and __set__ — that is the normal pattern — but a class that defines only __set__ is still a data descriptor and still outranks instance attributes. This precision matters when you build a ProtectedClassMethod wrapper later: the wrapper earns tier-1 status by defining __set__, regardless of whether it also defines __get__.
Regular functions defined in a class body are also non-data descriptors. They implement __get__, which is how Python binds them to the instance when you call obj.method(). Instance attributes can shadow plain methods by the same mechanism described here. Per the Python Data Model, standard Python methods — including @staticmethod and @classmethod — are implemented as non-data descriptors, which is precisely why instance-level shadowing is possible for all three.
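That binding step can be invoked by hand. The sketch below (greet and User are illustrative names) calls a function's __get__ directly, which is what Python does behind the scenes when a method lookup reaches tier 3:

```python
def greet(self):
    return f"hello from {self!r}"

class User:
    pass

u = User()

# What obj.method does internally when the function is found on the class:
bound = greet.__get__(u, User)
print(bound())   # hello from <__main__.User object at 0x...>

# Functions implement __get__ but not __set__ — they are non-data descriptors.
print(hasattr(greet, '__get__'))   # True
print(hasattr(greet, '__set__'))   # False
```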
Inside PyObject_GenericGetAttr
The three-tier lookup is not just a conceptual model — it maps directly to CPython's C implementation. When Python evaluates obj.name, it ultimately calls type(obj).__getattribute__(obj, "name"). Unless you have overridden __getattribute__ yourself, this resolves to PyObject_GenericGetAttr, the C function in CPython that implements standard attribute access for all user-defined classes.
Raymond Hettinger's Descriptor HowTo Guide in the official Python documentation provides a pure Python equivalent that mirrors the C logic exactly. It uses a sentinel object (null = object()) to distinguish a missing attribute from one whose value happens to be None, avoiding the ambiguity that arises when None is used as a default:
# Emulate PyObject_GenericGetAttr() — the C function behind object.__getattribute__
# Source: docs.python.org/3/howto/descriptor.html (Raymond Hettinger)

def find_name_in_mro(cls, name, default):
    "Emulate _PyType_Lookup() in Objects/typeobject.c"
    for base in cls.__mro__:
        if name in vars(base):
            return vars(base)[name]
    return default

def object_getattribute(obj, name):
    "Emulate PyObject_GenericGetAttr() in Objects/object.c"
    null = object()
    objtype = type(obj)
    # Raw MRO scan — returns the class-level object without invoking its __get__
    cls_var = find_name_in_mro(objtype, name, null)
    descr_get = getattr(type(cls_var), '__get__', null)
    # Step 1: data descriptor on the type wins
    if descr_get is not null:
        if (hasattr(type(cls_var), '__set__')
                or hasattr(type(cls_var), '__delete__')):
            return descr_get(cls_var, obj, objtype)   # data descriptor
    # Step 2: instance __dict__
    if hasattr(obj, '__dict__') and name in vars(obj):
        return vars(obj)[name]                        # instance variable
    # Step 3: non-data descriptor or plain class variable
    if descr_get is not null:
        return descr_get(cls_var, obj, objtype)       # non-data descriptor
    if cls_var is not null:
        return cls_var                                # plain class variable
    raise AttributeError(name)
The logic is explicit: the MRO scan returns the raw class-level attribute if found, or the sentinel if not. A data descriptor at step 1 is identified by the presence of __get__ together with either __set__ or __delete__; when both conditions hold, its __get__ is called immediately. The instance dictionary is only consulted at step 2, after data descriptors have been ruled out. A classmethod object, which has __get__ but not __set__, passes through step 1 without matching — so any name in the instance's __dict__ stops the search at step 2.
The actual C function is PyObject_GenericGetAttr in Objects/object.c, with the MRO walk handled by _PyType_Lookup.
_PyType_Lookup uses a per-type cache
In production CPython, _PyType_Lookup does not re-walk the MRO on every attribute access. It caches the result of the MRO scan in a per-type, version-tagged cache. Whenever a class or any of its bases is modified — a new method added, a descriptor replaced, a class attribute set — CPython updates the type's version tag, invalidating cached entries for that type and every type that inherits from it. Note that the class-level scan still runs on every access, because even a shadowed read must first rule out a data descriptor at step 1; a cache hit simply makes that scan a constant-time check rather than a walk. The instance __dict__ lookup at step 2 never involves the cache at all, and assigning to an instance modifies no type, so establishing a shadow invalidates nothing. The cost of an instance-level shadow on attribute-read performance is therefore minimal — you pay for a cache miss once, on the first access after a type is modified, not on every access thereafter.
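The cache itself is invisible from Python, but its invalidation guarantee is observable: a class-level change takes effect on the very next lookup, even through instances created beforehand. A minimal sketch (Service is an illustrative name):

```python
class Service:
    def status(self):
        return "v1"

s = Service()
print(s.status())   # v1

# Replacing the method on the class invalidates the type's cached MRO scan;
# the next lookup through the existing instance sees the new function.
Service.status = lambda self: "v2"
print(s.status())   # v2
```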
tp_descr_get, tp_descr_set, and PyDescr_IsData()
At the C level, CPython maps the Python descriptor protocol to slots in the PyTypeObject struct. __get__ corresponds to the tp_descr_get slot; __set__ corresponds to tp_descr_set. The macro PyDescr_IsData(descr) — defined in Include/descrobject.h — simply tests whether Py_TYPE(descr)->tp_descr_set != NULL. That single null check is all that separates a data descriptor from a non-data descriptor inside PyObject_GenericGetAttr. As the CPython internals analysis at tenthousandmeters.com puts it: "Descriptors have their special behavior only when used as type variables." When a descriptor is placed in an instance dictionary — which is exactly what shadowing does — CPython bypasses descriptor invocation entirely and returns the value directly. For classmethod, the PyClassMethod_Type struct in Objects/funcobject.c sets tp_descr_get (to cm_descr_get) but leaves tp_descr_set as NULL. That null tp_descr_set is the exact reason PyDescr_IsData returns false for a classmethod, which is why step 1 passes it by and the instance dictionary is checked at step 2 instead. Understanding this slot layout explains why adding __set__ to a wrapper class immediately promotes it to data descriptor status — you are, in effect, setting the tp_descr_set slot to a non-null value.
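The quoted rule is easy to verify from Python: the same descriptor object is invoked when found on the class, but returned inert when found in an instance dictionary. A sketch with a toy descriptor (Ten and Holder are illustrative names):

```python
class Ten:
    """A toy non-data descriptor: defines __get__ only."""
    def __get__(self, obj, objtype=None):
        return 10

class Holder:
    x = Ten()          # class variable: lookup invokes __get__

h = Holder()
h.__dict__['y'] = Ten()   # instance variable: descriptor protocol is bypassed

print(h.x)   # 10 — __get__ was called
print(h.y)   # <__main__.Ten object at 0x...> — returned as-is, never invoked
```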
What @classmethod is — and is not
@classmethod wraps a function so that, when accessed through the descriptor protocol, the first argument it receives is the class itself rather than an instance. The decorator replaces the function with a classmethod object, and that object implements __get__ — but not __set__.
That makes @classmethod a non-data descriptor. The Python object model classifies standard methods — including @staticmethod and @classmethod — as non-data descriptors, which is precisely what allows instances to define attributes that override them. The pure Python equivalent of classmethod, as documented by Raymond Hettinger in the Descriptor HowTo Guide, shows exactly why:
# Pure Python equivalent of classmethod
# Source: docs.python.org/3/howto/descriptor.html (Raymond Hettinger)
import functools
from types import MethodType

class ClassMethod:
    def __init__(self, f):
        self.f = f
        functools.update_wrapper(self, f)

    def __get__(self, obj, cls=None):
        if cls is None:
            cls = type(obj)
        return MethodType(self.f, cls)

    # No __set__ defined — this is a non-data descriptor.
Because it sits in the third position in Python's lookup order, any instance attribute with the same name takes precedence when the lookup is performed on that specific instance. The class method is still there on the class — nothing has been deleted or overwritten at the class level — but this particular instance will never reach it.
Python 3.13 removed support for chained classmethod descriptors — a pattern that allowed @classmethod to wrap other descriptors such as @property. This feature was added in Python 3.9 (via gh-63272), deprecated in 3.11, and removed in 3.13 (contributed by Raymond Hettinger in gh-89519). The core design was considered flawed. If your codebase used @classmethod to wrap @property, the __wrapped__ attribute added in Python 3.10 is the documented path forward. This change does not affect the instance-level shadowing behavior described in this article — @classmethod remains a non-data descriptor, and the three-tier lookup order is unchanged.
class Connection:
    @classmethod
    def create(cls):
        return cls()

# Accessing through the class works fine
conn = Connection.create()  # returns a Connection instance

# Inspecting the class method directly shows it is a bound method on the class
print(Connection.create)  # <bound method Connection.create of <class '__main__.Connection'>>
So far, nothing unusual. create is a class method on the class, and it behaves as expected when called through the class or through a clean instance.
Shadowing in action
Shadowing occurs when you assign a value to an instance attribute that shares its name with the class method. Python stores that value in the instance's __dict__. On the next attribute lookup for that name on that instance, Python finds the instance attribute in step 2 and stops — it never reaches the non-data descriptor in step 3.
class Connection:
    @classmethod
    def create(cls):
        return cls()

c = Connection()

# Assign an instance attribute with the same name as the class method
c.create = "some_string"

# Now accessing .create on this instance returns the string, not the method
print(c.create)  # some_string

# But the class method is untouched on the class itself
print(Connection.create)  # <bound method Connection.create ...>
The class method has not been removed. It is still perfectly accessible through Connection.create, and through any other instance that does not have a create key in its own __dict__. Only c is affected, because only c has that key in its instance dictionary.
Python raises no error and produces no warning when this happens. The shadowing is completely silent. If the class method is expected to be callable but is now an arbitrary value, any subsequent code that calls c.create() will raise a TypeError — and the error message will not mention shadowing at all.
You can confirm what is happening by inspecting the instance's dictionary directly:
c = Connection()
c.create = "some_string"

# The instance dictionary now holds the shadowing value
print(c.__dict__)  # {'create': 'some_string'}

# The class dictionary still holds the class method
print(Connection.__dict__['create'])  # <classmethod object at 0x...>

# Deleting the instance attribute restores access
del c.create
print(c.create)  # <bound method Connection.create ...>
Notice the last step: deleting the instance attribute from __dict__ restores the original behavior. The class method was never gone — it was simply hidden.
Contrast with a data descriptor
To illustrate why @classmethod is vulnerable but @property is not, consider this contrast:
class Account:
    def __init__(self, balance):
        self._balance = balance

    @property
    def balance(self):
        return self._balance

a = Account(100)

# Attempting to shadow a property raises an AttributeError
# because @property is a data descriptor — it defines __set__
try:
    a.balance = 999
except AttributeError as e:
    print(e)  # property 'balance' of 'Account' object has no setter
@property defines __set__, which places it in the data descriptor tier — step 1. Python reaches it before it even gets to the instance dictionary. The assignment fails loudly with an AttributeError naming the property and the class. With @classmethod, there is no __set__, so the assignment goes straight into __dict__ without resistance.
The AttributeError message shown — property 'balance' of 'Account' object has no setter — is the format used in Python 3.11 and later. Python 3.10 and earlier produce the shorter message can't set attribute without naming the property or class. Both are the same underlying error; only the wording differs between versions.
A real-world non-data descriptor by design: functools.cached_property
functools.cached_property (introduced in Python 3.8) is a non-data descriptor intentionally. It defines only __get__, which allows it to store the computed result directly into the instance's __dict__ on first access. On subsequent accesses, the instance dictionary entry is found at step 2 — before the non-data descriptor in step 3 — so the cached value is returned without ever invoking the getter again. That is the caching mechanism. The same property that makes @classmethod vulnerable to shadowing is what makes functools.cached_property work.
import functools

class DataSet:
    def __init__(self, rows):
        self.rows = rows

    @functools.cached_property
    def total(self):
        # Expensive computation; result cached after first call
        return sum(r["value"] for r in self.rows)

d = DataSet([{"value": 10}, {"value": 20}])

# First access: calls the getter, stores result in d.__dict__
print(d.total)  # 30
print(d.__dict__)  # {'rows': [...], 'total': 30}

# Second access: d.__dict__['total'] is found at step 2 — getter never called again
print(d.total)  # 30 (cached)

# Cache invalidation: delete the instance entry so the descriptor runs again on next access
del d.total
print(d.total)  # 30 (recomputed)
This is a case where the non-data descriptor placement in the lookup order is an explicit design choice rather than a vulnerability. The contrast matters: with @classmethod, an instance attribute landing in the same name is an accident. With functools.cached_property, the instance dictionary write is the mechanism. Both follow the same lookup rules — what differs is whether the result is intentional. Note that cache invalidation is straightforward: del d.total removes the instance dictionary entry, so the descriptor runs again on the next access.
functools.cached_property is not thread-safe for concurrent first access in Python 3.12 and later: the write into __dict__ is unsynchronized, so two threads can both call the getter before either stores the result, computing the value twice. (Prior to 3.12 it held an undocumented lock, which serialized first access across all instances of a class and was removed for that reason.) The Python standard library documentation states this explicitly and recommends adding your own lock if thread safety is required. Additionally, functools.cached_property only works on classes with a writable per-instance __dict__ — on a class that defines __slots__ without a '__dict__' slot, it raises TypeError at first access, because the instance dictionary it relies on does not exist. This is the flip side of the same non-data descriptor story: the mechanism that makes it work also constrains where it can be used.
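The __slots__ constraint is easy to reproduce. A sketch, assuming CPython's stock functools (Slotted is an illustrative name):

```python
import functools

class Slotted:
    __slots__ = ('rows',)          # no per-instance __dict__ to cache into

    def __init__(self, rows):
        self.rows = rows

    @functools.cached_property
    def total(self):
        return sum(self.rows)

s = Slotted([1, 2, 3])
try:
    s.total
except TypeError as e:
    print(e)   # complains there is no __dict__ on the instance to cache the property in
```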
The most common trigger: shadowing inside __init__
The examples so far have set the shadowing attribute explicitly on an already-constructed instance. In practice, the far more common scenario is that the collision happens silently inside __init__ — the very moment an object is created.
class Report:
    @classmethod
    def load(cls, path):
        # Factory: reads a file and returns a populated Report
        obj = cls()
        obj._path = path
        return obj

    def __init__(self, data=None, load=None):  # 'load' is also a parameter name
        self.data = data
        self.load = load  # shadows the class method immediately on construction

r = Report(data={"key": "value"}, load="quarterly")

# The class method is now unreachable on every instance created this way
print(r.load)  # 'quarterly' — not the class method

# Calling it raises a TypeError, not an AttributeError
try:
    r.load("some_path")
except TypeError as e:
    print(e)  # 'str' object is not callable
This pattern appears in real codebases when a class evolves over time. A factory method named load, build, or parse gets added to an existing class whose __init__ already accepts a same-named keyword argument for configuration state. The two names collide, every newly constructed instance arrives already shadowed, and the factory method becomes entirely unreachable through instances. The class itself still has the method, but ClassName.load() is easy to forget when the rest of the codebase calls it on instances.
When __init__ is the source of the shadow, the problem affects every instance of that class created through the normal constructor — not just one stray object. If you also pass a non-callable as the attribute value (a string, a boolean, a dict), any call site that tries to use the name as a method will raise a TypeError with a message that gives no indication of where the shadow was introduced.
Can you shadow at the class level?
The examples above all create the conflict at the instance level — an attribute placed in the instance's __dict__ that hides the class method from that instance. A different question worth asking is: what happens when you assign to the name on the class itself?
class Connection:
    @classmethod
    def create(cls):
        return cls()

# Assign directly to the class attribute — this replaces the classmethod object
Connection.create = "overwritten"

# The class method is now gone at the class level
print(Connection.create)  # 'overwritten'

c = Connection()
print(c.create)  # 'overwritten' — all instances see the replacement
This is categorically different from instance-level shadowing. Assigning to a class attribute does not hide the class method — it replaces the entry in Connection.__dict__ entirely. The classmethod object is gone. All instances, including ones created before the assignment, now see the replacement value. There is no del trick that restores it, because the original object is no longer referenced.
This distinction matters when reading tracebacks. If the error appears on one specific instance and other instances work correctly, the problem is in the instance's __dict__. If the error appears on every instance and on the class itself, the class attribute was reassigned.
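That reading of tracebacks can be automated. The helper below (diagnose_shadow is a hypothetical name, not a standard function) distinguishes the three cases: an instance-level shadow, a class-level replacement, and an intact method:

```python
def diagnose_shadow(obj, name):
    """Report whether `name` is shadowed on the instance, replaced on the
    class, or intact as a class-level method."""
    if name in vars(obj):
        return f"instance-level shadow: {vars(obj)[name]!r} in the instance __dict__"
    for klass in type(obj).__mro__:
        if name in vars(klass):
            entry = vars(klass)[name]
            if isinstance(entry, (classmethod, staticmethod)):
                return f"intact: {type(entry).__name__} on {klass.__name__}"
            return f"class-level replacement on {klass.__name__}: {entry!r}"
    return "not found anywhere"

class Connection:
    @classmethod
    def create(cls):
        return cls()

c = Connection()
print(diagnose_shadow(c, 'create'))   # intact: classmethod on Connection
c.create = "some_string"
print(diagnose_shadow(c, 'create'))   # instance-level shadow: 'some_string' in the instance __dict__
```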
A brief note on __slots__
There is one mechanism that prevents this class of problem entirely at the class definition level: __slots__. When a class declares __slots__, Python does not create a per-instance __dict__ for that class's instances. Instead, each named slot is backed by a member descriptor — which is a data descriptor — stored on the class itself.
class Connection:
    __slots__ = ('host', 'port')  # only these names can be instance attributes

    @classmethod
    def create(cls):
        return cls()

c = Connection()

# This raises AttributeError — 'create' is not a declared slot
try:
    c.create = "some_string"
except AttributeError as e:
    print(e)  # 'Connection' object attribute 'create' is read-only
Because there is no instance __dict__, there is nowhere for an arbitrary assignment to land. An attempt to set an undeclared attribute raises AttributeError immediately — in Python 3.10 and later the message reads 'Connection' object attribute 'create' is read-only. The class method is never threatened.
__slots__ is not a general recommendation for preventing shadowing — it changes memory layout and breaks multiple inheritance in ways that are often undesirable. But if you are working with a class that is instantiated in very high volume and where you want strict attribute control, understanding that __slots__ closes this vulnerability by removing the instance __dict__ entirely is useful.
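One subtlety worth knowing: adding '__dict__' to __slots__ (sometimes done to allow arbitrary extra attributes) restores the instance dictionary and, with it, the shadowing vector. A sketch:

```python
class Connection:
    __slots__ = ('host', '__dict__')   # the '__dict__' slot brings the instance dict back

    @classmethod
    def create(cls):
        return cls()

c = Connection()
c.create = "some_string"   # succeeds again — there is a __dict__ for it to land in
print(c.create)            # some_string
```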
Does a subclass inherit the shadow?
When you subclass a class that has instance-level shadowing, the answer depends on where the shadow lives. Because shadowing is an instance-level phenomenon — the conflicting value is stored in the specific instance's __dict__ — a subclass definition does not inherit it. What it does inherit is the risk.
class Base:
    @classmethod
    def create(cls):
        return cls()

class Child(Base):
    def __init__(self, name):
        self.name = name
        # No collision here — 'name' is not the name of any classmethod

c = Child("example")
print(c.create)  # <bound method Base.create of <class '__main__.Child'>>
# The class method is inherited normally and accessible on Child instances

# Now introduce the collision in __init__
class BrokenChild(Base):
    def __init__(self, create=None):
        self.create = create  # shadows the inherited class method

bc = BrokenChild(create="data_loader")
print(bc.create)  # 'data_loader' — class method hidden
print(Base.create)  # <bound method Base.create of <class '__main__.Base'>> — unaffected
print(BrokenChild.create)  # <bound method Base.create of <class '__main__.BrokenChild'>> — unaffected
A few things are worth noting here. First, the inherited class method is accessible on BrokenChild and on Base — only the individual instance is affected. Second, when the class method is inherited rather than defined directly on the child class, Python still finds it through the MRO walk during the non-data descriptor check at step 3. The lookup scans each class in the MRO in order, so inheritance does not change the tier assignments — it only affects which class's __dict__ provides the descriptor.
Third: if you define a method named create directly on the subclass, that is not instance shadowing — it is straightforward method overriding, which is intentional and expected. The scenario that causes unexpected bugs is when an instance attribute collides with an inherited class method, because the instance attribute is stored at step 2 while the inherited method sits at step 3, and the lookup always stops at the first match.
Shadowing in dataclasses
@dataclass (introduced in Python 3.7) generates an __init__ automatically from the class's field definitions. That generated constructor assigns every field as an instance attribute — by name, without any knowledge of class-level methods. If any field name collides with a @classmethod, every instance produced by the generated __init__ arrives already shadowed. The problem is identical in mechanism to the hand-written __init__ case, but harder to spot because the offending assignment is not visible in the source.
from dataclasses import dataclass

class Base:
    @classmethod
    def validate(cls, data):
        """Return True if data is a valid input dict."""
        return isinstance(data, dict) and "name" in data

@dataclass
class Record(Base):
    name: str
    validate: bool = True  # field name collides with the inherited classmethod
    # The generated __init__ assigns self.validate = validate on every construction.
    # The field default also replaces the inherited classmethod at the Record class level.

r = Record(name="test")
print(r.validate)  # True — the bool field value, not the classmethod
print(Base.validate)  # <bound method Base.validate ...> — still on Base
print(Record.validate)  # True — classmethod gone at the Record class level too

# Any call site that expected validate() to work now gets TypeError:
try:
    r.validate({"name": "test"})
except TypeError as e:
    print(e)  # 'bool' object is not callable
The generated __init__ assigns self.validate = validate on every construction — but the annotation validate: bool = True also replaces the inherited classmethod at the class level itself, not just on instances. Record.validate is True, meaning the classmethod is gone from Record entirely, not merely hidden on individual instances. This is class-level replacement by the field default, compounded by instance assignment from the generated constructor. The fix is to rename either the field or the inherited classmethod so the names no longer collide. If renaming is not practical, move the factory method to a separate helper class outside the dataclass's MRO, where no field name can reach it. (The standard library's dataclasses.field has no alias parameter, so the rename must happen at the attribute level; field aliasing is a feature of third-party libraries such as attrs and pydantic.)
Factory class methods named load, create, build, or parse are common patterns. Dataclass field names drawn from configuration schemas, API payloads, or ORM column names are equally common. The two pools of names overlap more than you might expect. Auditing a dataclass for classmethod name collisions before shipping is worth the few seconds it takes.
# Audit target: can you spot the collision before reading the comments?
from dataclasses import dataclass
from typing import ClassVar

@dataclass
class Pipeline:
    name: str
    stages: list
    build: str = "release"          # field named 'build'...
    MAX_STAGES: ClassVar[int] = 10

    @classmethod
    def build(cls, config: dict) -> "Pipeline":   # ...collides with the field above
        """Construct a Pipeline from a config dict."""
        return cls(
            name=config["name"],
            stages=config.get("stages", []),
        )

    def run(self) -> None:
        print(f"Running {self.name} ({self.build} build)")
How to prevent shadowing
There is no single universally correct prevention strategy — the right approach depends on how much control you have over the class design. These options range from naming conventions that cost nothing to structural changes that make the problem impossible.
Naming conventions
The simplest prevention is to give factory class methods names that cannot plausibly collide with instance state. A method named create competes with any attribute that describes a creation source or creation mode. A method named from_file, from_dict, or from_env is unlikely to collide with any instance attribute, because attributes describing data origin are typically stored under different keys. The from_ prefix is a common Python convention for alternative constructors precisely because it is unlikely to shadow anything.
class Connection:
    # Prefer names that cannot collide with instance state
    @classmethod
    def from_url(cls, url):
        ...

    @classmethod
    def from_env(cls):
        ...

    def __init__(self, host, port, timeout):
        self.host = host
        self.port = port
        self.timeout = timeout
        # None of these collide with from_url or from_env
A custom __setattr__ guard
If you own the class and want a runtime guard that raises immediately when a shadowing assignment occurs — rather than silently succeeding — you can override __setattr__ to check whether the name being set already exists as a class-level descriptor.
class ShadowGuard:
    """Mixin that raises AttributeError if an instance attribute would shadow
    a classmethod or staticmethod defined on the class."""

    def __setattr__(self, name, value):
        for klass in type(self).__mro__:
            if name in klass.__dict__:
                existing = klass.__dict__[name]
                if isinstance(existing, (classmethod, staticmethod)):
                    raise AttributeError(
                        f"Cannot set instance attribute {name!r}: "
                        f"it would shadow a {type(existing).__name__} "
                        f"defined on {klass.__name__}."
                    )
                break
        super().__setattr__(name, value)

class Connection(ShadowGuard):
    @classmethod
    def create(cls):
        return cls()

    def __init__(self, timeout=30):
        self.timeout = timeout  # fine — no classmethod named 'timeout'

c = Connection()
try:
    c.create = "some_value"
except AttributeError as e:
    print(e)
    # Cannot set instance attribute 'create': it would shadow a classmethod
    # defined on Connection.
The guard works by walking the MRO at the point of assignment and checking whether the target name maps to a classmethod or staticmethod object. If it does, the assignment is rejected with a clear, actionable error message that names both the attribute and the class where the conflict lives. The break after the first match ensures the check stops at the first class in the MRO that defines the name — mirroring the precedence Python itself uses.
This guard catches assignments that happen after construction as well as those inside __init__, since __setattr__ intercepts all instance attribute writes. It does not prevent class-level reassignment (MyClass.name = value), because that bypasses __setattr__ entirely and operates directly on the class __dict__.
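The class-level gap can be closed separately with a metaclass, since assignments to class attributes go through type(cls).__setattr__ rather than the instance hook. A sketch (GuardMeta is an illustrative name):

```python
class GuardMeta(type):
    """Metaclass that refuses to replace classmethods on the class itself."""
    def __setattr__(cls, name, value):
        existing = cls.__dict__.get(name)
        if isinstance(existing, (classmethod, staticmethod)):
            raise AttributeError(
                f"refusing to replace {type(existing).__name__} {name!r} on {cls.__name__}"
            )
        super().__setattr__(name, value)

class Connection(metaclass=GuardMeta):
    @classmethod
    def create(cls):
        return cls()

try:
    Connection.create = "overwritten"
except AttributeError as e:
    print(e)   # refusing to replace classmethod 'create' on Connection
```

Combined with the ShadowGuard mixin for instance writes, this covers both assignment vectors; it still cannot stop direct writes into an instance's __dict__.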
__init_subclass__ for inherited enforcement
If you are building a base class and want to enforce the no-shadow rule on all subclasses without requiring them to inherit the __setattr__ mixin explicitly, __init_subclass__ lets you inspect each subclass at class definition time and raise if any class-body attribute would shadow a classmethod defined on the base.
class Base:
    @classmethod
    def create(cls):
        return cls()

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for name, value in cls.__dict__.items():
            if name.startswith('__'):
                continue
            # Allow intentional classmethod or staticmethod overrides
            if isinstance(value, (classmethod, staticmethod)):
                continue
            # Flag non-descriptor values that collide with an inherited classmethod
            for ancestor in cls.__mro__[1:]:
                if name in ancestor.__dict__ and isinstance(ancestor.__dict__[name], classmethod):
                    raise TypeError(
                        f"{cls.__name__}.{name} shadows the classmethod "
                        f"{ancestor.__name__}.{name}. Rename one of them."
                    )

# Caught at class definition time — not at instance construction:
try:
    class BrokenChild(Base):
        create = "a plain class attribute that shadows the classmethod"
except TypeError as e:
    print(e)
    # BrokenChild.create shadows the classmethod Base.create. Rename one of them.

# Legitimate classmethod override is allowed:
class BetterChild(Base):
    @classmethod
    def create(cls):
        return cls()  # intentional override — no TypeError raised
This approach catches class-body collisions at definition time, which is earlier and more explicit than a runtime guard. The guard skips names whose subclass value is itself a classmethod or staticmethod, so intentional overrides — a subclass redefining a factory method — are permitted without error. It does not catch __init__-level shadowing, because the assignments inside __init__ happen at instance construction, not at class definition. For complete coverage, combining __init_subclass__ (class-body check) with the __setattr__ mixin (instance assignment check) covers both vectors.
A data descriptor wrapper that makes the name structurally unshadowable
All of the approaches above are defensive — they detect or report the collision. There is a structural approach that makes shadowing impossible at the Python level without relying on __slots__: promote the name to a data descriptor by wrapping the classmethod in a class that defines __set__ (and __get__, to preserve callable behavior). Because data descriptors occupy step 1 in the lookup order, an instance attribute assignment to the same name will never reach the instance __dict__. A descriptor that defines only __set__ qualifies as a data descriptor per the Python Data Model — both __get__ and __set__ are included here so the method remains callable, but the tier-1 status comes from __set__ alone.
class ProtectedClassMethod:
"""A data descriptor that wraps a classmethod and blocks instance shadowing.
Because it defines __set__, Python places it at step 1 (data descriptor tier)
in the attribute lookup order. Any attempt to assign the same name on an instance
raises AttributeError immediately — the assignment never reaches __dict__.
"""
def __init__(self, func):
self._cm = classmethod(func)
self._owner = None
def __set_name__(self, owner, name):
self._name = name
self._owner = owner
def __get__(self, obj, objtype=None):
# When accessed via the class (Connection.create), obj is None and
# objtype is the class. When accessed via an instance (conn.create),
# obj is the instance and objtype is type(obj). Using self._owner as
# the fallback is safer than type(obj) when obj is None.
if objtype is None:
objtype = self._owner if self._owner is not None else type(obj)
return self._cm.__get__(obj, objtype)
def __set__(self, obj, value):
raise AttributeError(
f"Cannot assign to {self._name!r}: "
f"it is a protected class method and cannot be shadowed."
)
class Connection:
@ProtectedClassMethod
def create(cls):
"""Factory: return a new Connection instance."""
return cls()
def __init__(self, timeout=30):
self.timeout = timeout
conn = Connection.create()
print(conn) # <__main__.Connection object at 0x...>
print(conn.create) # <bound method Connection.create of <class '__main__.Connection'>> — still accessible
try:
conn.create = "some_value"
except AttributeError as e:
print(e)
# Cannot assign to 'create': it is a protected class method and cannot be shadowed.
The key mechanism is __set_name__, called automatically by Python when a descriptor is assigned to a class body attribute. It receives the owner class and the attribute name, letting the descriptor store its own name for the error message without any manual registration. The result is a drop-in replacement for @classmethod whose name is structurally immune to instance shadowing — no MRO walk at runtime, no __setattr__ hook, no reliance on caller discipline.
ProtectedClassMethod does not forward __doc__, __name__, or other function metadata automatically. For production use, add functools.update_wrapper(self, func) in __init__ after wrapping, or copy the relevant attributes manually. This preserves introspection and documentation toolchain compatibility.
Enforcing protection through a metaclass
If you need the unshadowable guarantee across an entire class hierarchy without modifying individual method declarations, a metaclass can enforce it at class creation time. The metaclass's __new__ rewrites the class namespace before the class object exists, replacing every classmethod with a ProtectedClassMethod wrapper automatically — no per-method decorator required, and the protection applies to every class that names the metaclass or inherits from one that does.
import functools
class ProtectedClassMethod:
def __init__(self, func):
self._cm = classmethod(func)
self._owner = None
functools.update_wrapper(self, func)
def __set_name__(self, owner, name):
self._name = name
self._owner = owner
def __get__(self, obj, objtype=None):
if objtype is None:
objtype = self._owner if self._owner is not None else type(obj)
return self._cm.__get__(obj, objtype)
def __set__(self, obj, value):
raise AttributeError(
f"Cannot assign to {self._name!r}: protected class method."
)
class ProtectedMeta(type):
"""Metaclass that automatically wraps every @classmethod as a ProtectedClassMethod."""
def __new__(mcs, name, bases, namespace):
for attr, value in list(namespace.items()):
if isinstance(value, classmethod):
namespace[attr] = ProtectedClassMethod(value.__func__)
return super().__new__(mcs, name, bases, namespace)
class Base(metaclass=ProtectedMeta):
@classmethod
def create(cls):
return cls()
@classmethod
def from_dict(cls, d):
return cls()
# All classmethods on Base are automatically protected — no decorator needed:
b = Base()
print(b.create) # <bound method Base.create of <class '__main__.Base'>>
try:
b.create = "shadow attempt"
except AttributeError as e:
print(e) # Cannot assign to 'create': protected class method.
try:
b.from_dict = "shadow attempt"
except AttributeError as e:
print(e) # Cannot assign to 'from_dict': protected class method.
The metaclass approach trades explicitness for breadth: every classmethod defined anywhere in the hierarchy is protected without any per-method annotation. The tradeoff is that metaclasses interact with inheritance and third-party libraries in ways that require care — a class cannot simultaneously use two unrelated metaclasses; one must be a subclass of the other. For most application code, the ProtectedClassMethod descriptor used selectively on high-risk names is the more practical choice. The metaclass version suits framework or base-class authors who want the guarantee to be implicit and universal.
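The metaclass-conflict constraint can be sketched concretely. ProtectedMeta here is a bare stand-in for the wrapping metaclass above (logic elided), and ProtectedABCMeta is a hypothetical combined name:

```python
import abc

class ProtectedMeta(type):
    """Bare stand-in for the wrapping metaclass above (logic elided)."""

class OtherMeta(type):
    """An unrelated metaclass, e.g. from a third-party library."""

# One metaclass must derive from the other; a combined subclass satisfies that:
class ProtectedABCMeta(ProtectedMeta, abc.ABCMeta):
    pass

class Service(metaclass=ProtectedABCMeta):
    @abc.abstractmethod
    def run(self): ...

class Worker(Service):
    def run(self):
        return "done"

print(type(Service).__name__)   # ProtectedABCMeta
print(Worker().run())           # done

# Two unrelated metaclasses cannot be combined implicitly:
try:
    class Conflicted(Service, metaclass=OtherMeta):
        pass
except TypeError as e:
    print(e)   # metaclass conflict: ...
```

Defining the combined metaclass once and using it throughout the hierarchy is the standard resolution when a protected base class must also be abstract.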
Using typing.ClassVar annotation to signal and enforce class-level intent
Python's typing.ClassVar (PEP 526) is a type annotation that signals to both humans and type checkers that a name is a class-level variable, not an instance attribute. Mypy enforces this at the type level: mypy.readthedocs.io documents that assigning to a ClassVar-annotated name through an instance is reported as an error at lint time, not at runtime. This is lint-time prevention that costs nothing at runtime and integrates with any CI pipeline running mypy or pyright.
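A minimal sketch of what the annotation buys — Connection and pool_size are illustrative names, and the flagged line is perfectly legal at runtime, which is exactly the point:

```python
from typing import ClassVar

class Connection:
    # Class-level only, per the annotation; type checkers enforce this.
    pool_size: ClassVar[int] = 10

    def __init__(self, timeout: int = 30) -> None:
        self.timeout = timeout

conn = Connection()
conn.timeout = 60    # fine: ordinary instance attribute

# mypy/pyright flag the next line (assigning to a ClassVar via an instance).
# At runtime it still succeeds — the check is lint-time only:
conn.pool_size = 20
print(Connection.pool_size)  # 10 — the class attribute is untouched
print(conn.pool_size)        # 20 — an instance attribute now shadows it
```

The runtime behavior is the familiar shadowing from earlier sections; ClassVar simply moves detection into the type checker, before the code ever runs.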
The pattern goes further when combined with __init_subclass__: if a base class annotates its classmethods as ClassVar-like protected names in a registry and __init_subclass__ checks each new subclass's annotations for collisions, you get class-definition-time enforcement without a metaclass. This is precisely how Pydantic V2 detects shadowing — it raises a NameError at class construction if a subclass field name matches a parent-class attribute — and how attrs prevents field names from conflicting with validator methods.
from __future__ import annotations
from typing import ClassVar, get_type_hints
class ProtectedBase:
"""Base class that combines ClassVar annotations with __init_subclass__
enforcement to prevent subclass instance fields from shadowing classmethods."""
# Annotate protected names as ClassVar so mypy flags instance assignments
# at lint time — this costs nothing at runtime.
#
# The runtime guard in __init_subclass__ catches collisions even for code
# that lacks type annotations entirely (e.g. dynamically generated classes).
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
# Collect all classmethod names defined anywhere in the MRO up to ProtectedBase
protected = set()
for ancestor in cls.__mro__:
if ancestor is cls:
continue
for name, obj in ancestor.__dict__.items():
if isinstance(obj, (classmethod, staticmethod)):
protected.add(name)
        # Collect annotations visible on the new subclass (get_type_hints also
        # merges inherited annotations; for this check only the names matter)
try:
own_hints = get_type_hints(cls, include_extras=True)
except Exception:
own_hints = {}
for name in own_hints:
if name in protected:
raise TypeError(
f"{cls.__name__}: annotated field {name!r} would shadow "
f"a classmethod or staticmethod inherited from a base class. "
f"Rename the field or annotate the base-class method as ClassVar."
)
class Base(ProtectedBase):
@classmethod
def create(cls):
return cls()
@classmethod
def validate(cls, data):
return isinstance(data, dict)
# This raises TypeError at class definition time — not at instance construction:
try:
class BadModel(Base):
create: str # annotation matches an inherited classmethod
name: str = "" # fine — 'name' is not a classmethod
except TypeError as e:
print(e)
# BadModel: annotated field 'create' would shadow a classmethod or staticmethod
# inherited from a base class. Rename the field or annotate the base-class
# method as ClassVar.
# Legitimate subclass works normally
class GoodModel(Base):
name: str = ""
record_id: int = 0
Pydantic V2 raises a NameError at class construction if a field name shadows a parent-class attribute. The error message even suggests using a field alias. The attrs library similarly validates field names against the inherited namespace at class creation time. Both frameworks converged on class-definition-time enforcement rather than instance-construction-time enforcement because catching the problem earlier — before any instance exists — makes the error message unambiguous: the class body is still on screen in the traceback, not some distant __init__ call buried in user code.
Moving conflicting state off the instance with a class-level WeakKeyDictionary registry
Sometimes renaming is genuinely not possible — the conflicting name comes from an external protocol, a serialization format, or a third-party interface you cannot change. In these cases, the problem is not that the shadow exists, but that conflicting state is stored on the instance's __dict__. The structural fix is to relocate that state off the instance entirely, into a class-level registry keyed by instance identity. The standard tool for this is weakref.WeakKeyDictionary: it maps instances to their associated state without placing anything in the instance's __dict__, and it releases the stored state automatically when the instance is garbage collected.
import weakref
from typing import ClassVar
class Report:
# External protocol requires instances to expose a 'create' key, but
# the class also defines a @classmethod named 'create' that we cannot rename.
# Solution: store the per-instance 'create' value in a class-level registry
# instead of in self.__dict__, so the classmethod is never shadowed.
_create_registry: ClassVar[weakref.WeakKeyDictionary] = weakref.WeakKeyDictionary()
@classmethod
def create(cls, source: str):
"""Factory: build a Report from a source string."""
obj = cls()
Report._create_registry[obj] = source
return obj
@property
def creation_source(self):
"""Access the per-instance creation value through a non-conflicting name."""
return Report._create_registry.get(self)
@creation_source.setter
def creation_source(self, value):
Report._create_registry[self] = value
r = Report.create("quarterly_data")
# The classmethod is accessible normally — nothing in r.__dict__ shadows it
print(r.create) # <bound method Report.create ...>
print(r.__dict__) # {} — no shadow
print(r.creation_source) # 'quarterly_data'
# WeakKeyDictionary releases the registry entry when `r` is garbage collected —
# no memory leak even when large numbers of Report instances are created.
import gc
del r
gc.collect()
# Registry entry is automatically removed.
The WeakKeyDictionary approach is the correct solution when the constraint is external — when you truly cannot rename either the classmethod or the conflicting attribute. The Python standard library documentation notes that WeakKeyDictionary is suited precisely for "associating additional data with an object owned by other parts of an application without adding attributes to those objects." That is the exact scenario: state that must logically belong to an instance, stored externally to avoid a naming conflict with the class interface.
Not all Python objects can be weakly referenced. Plain int, str, and tuple instances cannot be keys in a WeakKeyDictionary — neither can instances of classes that define __slots__ without including '__weakref__' in the slot list. If your class uses __slots__, add '__weakref__' to the slots to re-enable weak reference support. Also note that WeakKeyDictionary is not thread-safe; wrap accesses in a threading.Lock if instances are created or accessed from multiple threads concurrently.
A read-only sentinel descriptor as a zero-cost name reservation
The ProtectedClassMethod wrapper described above promotes a classmethod to a data descriptor by wrapping it. There is a lighter variant for cases where you want to reserve a name as unshadowable without routing reads through a wrapper: a standalone read-only sentinel descriptor that raises AttributeError on any write. Because it defines only __set__, it qualifies as a data descriptor and occupies tier 1. The catch — demonstrated below — is that a class attribute can hold only one object, so the sentinel cannot share a name with the classmethod it is meant to protect; on its own, it reserves the name at the cost of the callable.
class ReadOnlySentinel:
"""Minimal data descriptor that blocks instance assignment.
Unlike ProtectedClassMethod, this does NOT wrap the classmethod — its
sole purpose is to occupy tier 1 (data descriptor) by defining __set__,
making instance assignment impossible. The classmethod handles reads.
"""
def __set_name__(self, owner, name):
self._name = name
def __set__(self, obj, value):
raise AttributeError(
f"Cannot assign {self._name!r} on an instance: "
f"this name is reserved at the class level."
)
    # No __get__ defined. Defining __set__ alone qualifies this as a data
    # descriptor (tier 1). With no __get__, reads fall back to the instance
    # dict and then to the sentinel object itself — see the caveat below.
# IMPORTANT: In a class body, names are processed top-to-bottom.
# If you write 'create = ReadOnlySentinel()' THEN '@classmethod def create',
# the classmethod assignment REPLACES the sentinel — the name ends up holding
# only the classmethod, and the protection is lost.
#
# The correct pattern is to apply the sentinel AFTER class creation,
# or use ProtectedClassMethod which combines both behaviors in one descriptor.
class Connection:
@classmethod
def create(cls):
return cls()
def __init__(self, timeout: int = 30) -> None:
self.timeout = timeout
# Apply sentinel post-class to reserve the name at tier 1. Note that
# __set_name__ is only invoked automatically during class creation, so a
# post-class assignment must call it manually:
_sentinel = ReadOnlySentinel()
Connection.create = _sentinel
_sentinel.__set_name__(Connection, "create")
# NOTE: this overwrites the classmethod at the class level — the sentinel is now
# the sole occupant of 'create'. Accessing Connection.create now returns
# the sentinel object itself (no __get__ defined), not a bound method.
# This demonstrates the tradeoff: pure name-reservation at the cost of losing
# the callable. For callable + protection, use ProtectedClassMethod.
# The practical pattern: __init_subclass__ guards class-body collisions:
class ProtectedConnection:
@classmethod
def create(cls):
"""Factory: return a new ProtectedConnection instance."""
return cls()
def __init_subclass__(cls, **kwargs: object) -> None:
super().__init_subclass__(**kwargs)
# Guard against class-body collisions in subclasses
if "create" in cls.__dict__ and not isinstance(cls.__dict__["create"], classmethod):
raise TypeError(
f"{cls.__name__}.create must be a classmethod, not "
f"{type(cls.__dict__['create']).__name__}."
)
# Subclass body collision is caught at class-definition time:
try:
class BrokenSub(ProtectedConnection):
create = "override_attempt" # raises immediately
except TypeError as e:
print(e)
# BrokenSub.create must be a classmethod, not str.
conn = ProtectedConnection.create()
print(conn) # <__main__.ProtectedConnection object at 0x...>
# Instance assignment is NOT blocked by this approach (no __set__ on the classmethod).
# For full instance-level protection, use ProtectedClassMethod.
conn.create = "instance_value" # silently succeeds — guard is class-body only
print(conn.create) # 'instance_value' — reminder: __init_subclass__ ≠ instance guard
In a class body, names are processed top-to-bottom. If you write create = ReadOnlySentinel() on one line and @classmethod def create(cls): ... on a later line, the classmethod assignment replaces the sentinel in the namespace, not the other way around. The sentinel must be the last assignment to that name in the class body, or it must be applied via a post-class mechanism like __init_subclass__, a class decorator, or a metaclass. This is why ProtectedClassMethod — which wraps both behaviors in one object — is the simpler production pattern. The standalone sentinel is documented here as a conceptual building block: understanding why __set__ alone earns tier-1 status clarifies exactly how the more complex wrappers work.
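A minimal demonstration of the ordering pitfall, reusing a trimmed ReadOnlySentinel — the sentinel assigned first is silently replaced by the later classmethod:

```python
class ReadOnlySentinel:
    def __set_name__(self, owner, name):
        self._name = name

    def __set__(self, obj, value):
        raise AttributeError(f"{self._name!r} is reserved at the class level.")

class Demo:
    create = ReadOnlySentinel()   # bound first...

    @classmethod                  # ...then the name is REBOUND to the
    def create(cls):              # classmethod; the sentinel never reaches
        return cls()              # the finished class

print(type(Demo.__dict__["create"]).__name__)  # classmethod — sentinel is gone
d = Demo()
d.create = "shadow"   # silently succeeds: no __set__ ever fires
print(d.create)       # shadow
```

Nothing errors and nothing warns — the class body is an ordinary sequence of namespace writes, and the last write to a name wins.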
A __prepare__ namespace guard that blocks collisions before the class body executes
All of the approaches above react after a class is defined or after an instance attribute is assigned. There is an earlier interception point: the metaclass __prepare__ hook, which is called before the class body runs and returns the mapping object Python uses to collect the namespace. Returning a custom mapping that raises TypeError when a key is written that would shadow a reserved name catches the collision the moment the class body is interpreted — before __new__, before any instance exists.
class _GuardedNamespace(dict):
"""A dict-like namespace that prevents plain attributes from overwriting
a classmethod already registered in the same class body."""
def __setitem__(self, key, value):
existing = self.get(key)
if existing is not None and isinstance(existing, classmethod) \
and not isinstance(value, (classmethod, staticmethod)):
raise TypeError(
f"Class body error: {key!r} was already declared as a "
f"@classmethod. Assigning a plain value here shadows it."
)
super().__setitem__(key, value)
class GuardedMeta(type):
@classmethod
def __prepare__(mcs, name, bases, **kwargs):
return _GuardedNamespace()
class Pipeline(metaclass=GuardedMeta):
@classmethod
def build(cls, config):
return cls()
# This line raises TypeError at class definition time — not at runtime:
# build = "release" <-- would raise immediately
The distinction from __init_subclass__ is timing. __init_subclass__ fires after the class body has been fully assembled and type.__new__ has run. __prepare__ fires before a single line of the class body executes. That means a collision between a @classmethod declared on line 3 and a plain attribute assigned on line 10 of the same class body is caught the moment line 10 is reached — not after the entire class is constructed. For complex class bodies where the collision might be buried deep in a generated or dynamically written class, this is meaningfully earlier feedback.
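Wrapping a guarded class definition in try/except shows the timing concretely; this redefines the GuardedMeta machinery from above so the snippet stands alone:

```python
class _GuardedNamespace(dict):
    def __setitem__(self, key, value):
        # Reject a plain value rebinding a name already holding a classmethod.
        if (isinstance(self.get(key), classmethod)
                and not isinstance(value, (classmethod, staticmethod))):
            raise TypeError(f"{key!r} was already declared as a @classmethod.")
        super().__setitem__(key, value)

class GuardedMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases, **kwargs):
        return _GuardedNamespace()

try:
    class Pipeline(metaclass=GuardedMeta):
        @classmethod
        def build(cls, config):
            return cls()

        build = "release"   # raises HERE, while the body is still executing
except TypeError as e:
    print(e)                # 'build' was already declared as a @classmethod.
```

The exception propagates out of the class statement itself — type.__new__ is never reached, and no class object named Pipeline is ever created.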
__prepare__ only guards within a single class body. It does not intercept instance-level assignments (those happen at runtime via __setattr__) and does not examine inherited names from base classes. Combining it with the __init_subclass__ check covers cross-class inheritance collisions, and the ShadowGuard mixin covers runtime instance assignments. Together, all three vectors are blocked.
Monitoring attribute assignment on classes you do not own
The approaches above all require modifying the class — adding a metaclass, a mixin, or a descriptor. In some situations you do not own the class at all: it comes from a third-party library, a generated ORM model, or a deserialization layer whose source you cannot touch. A common misconception is that Python's sys.addaudithook can intercept attribute assignment. In CPython, ordinary attribute writes — via obj.attr = val, setattr(), or object.__setattr__() — do not fire any built-in audit event. The sys.audit system covers security-sensitive operations such as exec, import, file opens, and socket connections, not general attribute mutation.
The correct approach for monitoring a class you do not own is to monkey-patch __setattr__ on that class at import time, chaining the original method non-destructively. This requires write access to the class object but not to its source file, making it viable for test fixtures and development environments without touching library code.
# Ordinary attribute writes fire no built-in audit event in CPython (see the
# misconception note above), so monkey-patching __setattr__ on the class is
# the practical interception point. The pattern below chains any existing
# __setattr__ non-destructively:
import warnings
def _install_shadow_monitor(cls):
"""Patch cls.__setattr__ to warn when an instance attribute would shadow
a classmethod. Installs non-destructively by chaining the original method."""
original_setattr = cls.__setattr__ if "__setattr__" in cls.__dict__ else None
def _monitored_setattr(self, name, value):
for klass in type(self).__mro__:
if name in klass.__dict__ and isinstance(klass.__dict__[name], classmethod):
warnings.warn(
f"Instance attribute {name!r} on {type(self).__name__!r} "
f"shadows @classmethod {klass.__name__}.{name!r}.",
stacklevel=2,
)
break
if original_setattr is not None:
original_setattr(self, name, value)
else:
object.__setattr__(self, name, value)
cls.__setattr__ = _monitored_setattr
return cls
# Usage: apply to a class you do not own at import time or in test setup.
# Gate behind an environment variable for development/CI use only.
import os
if os.getenv("MONITOR_SHADOWS"):
import third_party_module # hypothetical
_install_shadow_monitor(third_party_module.SomeClass)
# For classes you own, the ShadowGuard __setattr__ mixin is the cleaner solution.
The monkey-patch approach modifies __setattr__ on the target class at import time, which means it requires write access to the class — it is not zero-modification in the strictest sense, but it does not require editing the original source file. Gate it behind an environment variable (MONITOR_SHADOWS=1) so it only activates in development and CI. For classes you own, the ShadowGuard __setattr__ mixin described above is a cleaner and more explicit alternative, since it documents the intent in the class definition itself rather than in setup code.
Static analysis: catching the collision before the code runs
Runtime guards all share a limitation: they fire only after the code executes. In a CI pipeline, a static check that catches the problem at lint time — before any tests run, before any deployment — is earlier and cheaper, though it cannot see attribute names computed dynamically at runtime. Two practical approaches exist in Python's mainstream static-analysis ecosystem.
The first is a custom mypy plugin. Mypy's plugin API (mypy.plugin.Plugin) exposes hooks such as get_class_decorator_hook, get_base_class_hook, and get_attribute_hook, but none of them maps one-to-one onto "inspect every class body" — a real plugin has to traverse mypy's typed AST itself, collecting the names bound to classmethods in a class body and then flagging any __init__ or other method that assigns self.<name> using one of those names. The mypy documentation describes the full plugin interface at mypy.readthedocs.io.
# mypy_shadow_plugin.py — minimal skeleton
# Register in mypy.ini:  plugins = mypy_shadow_plugin
from mypy.plugin import Plugin

class ShadowCheckPlugin(Plugin):
    def get_base_class_hook(self, fullname):
        # A full implementation returns a callback receiving a
        # ClassDefContext; it would walk the class body, collect names bound
        # to classmethods, and flag self.<name> assignments that collide.
        return None

def plugin(version):
    return ShadowCheckPlugin
A complete mypy plugin requires implementing that traversal against mypy's internal node types, which goes beyond a quick skeleton. For most teams, the more immediately practical approach is a custom pylint checker: pylint's checker API is simpler, and the AST it exposes — astroid, a thin inference layer over Python syntax — is much closer to the standard ast module than mypy's typed IR.
"""classmethod_shadow_checker.py — pylint checker plugin (pylint >= 2.14).
Register in .pylintrc or pyproject.toml:
load-plugins = classmethod_shadow_checker
Requires: pip install pylint astroid
Note: pylint passes astroid nodes, NOT Python ast nodes.
"""
from astroid import nodes as astroid_nodes
from pylint.checkers import BaseChecker
# pylint >= 2.14 removed IAstroidChecker and __implements__.
# Modern checkers inherit BaseChecker only — no interface declaration needed.
class ClassmethodShadowChecker(BaseChecker):
name = "classmethod-shadow"
msgs = {
"W9901": (
"Instance attribute %r shadows @classmethod of the same name",
"classmethod-shadow",
"An instance attribute assignment shadows a @classmethod "
"defined on the same class. Rename one of them.",
),
}
def visit_classdef(self, node: astroid_nodes.ClassDef) -> None:
"""Collect @classmethod names, then flag self.<n> = assignments."""
# pylint passes astroid.nodes.FunctionDef — NOT ast.FunctionDef.
cm_names: set[str] = set()
for child in node.body:
if not isinstance(child, astroid_nodes.FunctionDef):
continue
decs = child.decorators.nodes if child.decorators else []
for dec in decs:
if isinstance(dec, astroid_nodes.Name) and dec.name == "classmethod":
cm_names.add(child.name)
if not cm_names:
return
# In astroid, attribute-assignment targets are AssignAttr nodes.
# Use nodes_of_class() to walk the method body efficiently.
for child in node.body:
if not isinstance(child, astroid_nodes.FunctionDef):
continue
for assign_attr in child.nodes_of_class(astroid_nodes.AssignAttr):
if (assign_attr.attrname in cm_names
and isinstance(assign_attr.expr, astroid_nodes.Name)
and assign_attr.expr.name == "self"):
self.add_message(
"classmethod-shadow",
node=assign_attr,
args=(assign_attr.attrname,),
)
def register(linter: object) -> None:
linter.register_checker(ClassmethodShadowChecker(linter))
With this checker registered, running pylint --load-plugins=classmethod_shadow_checker yourmodule.py emits a W9901 warning for every instance attribute assignment that shadows a @classmethod in the same class. This integrates cleanly into CI — pylint's exit code is non-zero when warnings are present, and the check runs in milliseconds on entire codebases without executing any Python. Combined with a pre-commit hook, this can catch the collision before a pull request is even opened, which is earlier than any runtime guard can reach.
Can __getattribute__ intercept a shadow?
The article has covered __getattr__ (it cannot intercept shadowing because it only fires on lookup failure) and the runtime guards that raise at assignment time. There is a third option that goes deeper: overriding __getattribute__ itself. Unlike __getattr__, which is a fallback, __getattribute__ is called on every attribute access — it is the entry point to the entire lookup chain. Overriding it gives you the ability to intercept a read even when a shadow is in place.
class ShadowInterceptor:
"""Override __getattribute__ to detect and optionally block classmethod shadows."""
def __getattribute__(self, name):
# Access the instance dict directly via object.__getattribute__ to avoid
# infinite recursion (calling self.__dict__ would re-enter this method).
try:
instance_dict = object.__getattribute__(self, '__dict__')
except AttributeError:
instance_dict = {}
if name in instance_dict:
# The name exists in the instance dict — check whether it shadows
# a classmethod on the class or any ancestor in the MRO.
for klass in type(self).__mro__:
if name in klass.__dict__ and isinstance(klass.__dict__[name], classmethod):
raise AttributeError(
f"Access to {name!r} blocked: instance attribute shadows "
f"classmethod {klass.__name__}.{name}. "
f"Delete the instance attribute to restore access."
)
# Fall through to normal lookup for all other cases.
return object.__getattribute__(self, name)
class Connection(ShadowInterceptor):
@classmethod
def create(cls):
return cls()
def __init__(self, timeout=30):
self.timeout = timeout
c = Connection()
# Normal attribute access works fine
print(c.timeout) # 30
print(c.create) # <bound method Connection.create ...>
# Introducing a shadow now raises AttributeError on READ, not just on write
c.__dict__['create'] = "injected_string" # bypass __setattr__ to test
try:
_ = c.create
except AttributeError as e:
print(e)
# Access to 'create' blocked: instance attribute shadows classmethod Connection.create.
The critical implementation detail is accessing __dict__ via object.__getattribute__(self, '__dict__') rather than self.__dict__. The latter would call your override recursively until the stack overflows. Delegating to object.__getattribute__ for the base case is how every correct __getattribute__ override avoids that trap.
Every attribute access on every instance invokes your override — including accesses to self.timeout, self._cache, and every other attribute. An MRO walk on each access to an instance-dict attribute adds real overhead in tight loops. This approach is best reserved for debugging harnesses or test fixtures, not production code paths. For production, a __setattr__ guard that blocks shadowing at write time is cheaper because it only fires once per assignment rather than on every read.
Using inspect.getattr_static() to see through a shadow
The standard getattr() function runs the full descriptor protocol — it returns whatever the lookup chain produces, which means it returns the shadowing instance attribute rather than the classmethod. There is a less-known function in the standard library that does something different: inspect.getattr_static(), introduced in Python 3.2, retrieves an attribute without triggering the descriptor protocol or any dynamic lookup. It peeks at the raw object stored in the namespace at that name. This makes it the cleanest single call for verifying what is actually in the instance dictionary versus what the class holds.
import inspect
class Connection:
@classmethod
def create(cls):
return cls()
c = Connection()
c.create = "shadowing_string" # instance attribute shadows classmethod
# getattr() returns the shadow — the descriptor protocol hands you the instance attr
print(getattr(c, 'create')) # 'shadowing_string'
# inspect.getattr_static() bypasses the descriptor protocol and instance __dict__ priority
# It returns whatever lives in the first namespace it finds the name in.
# For an instance, it searches instance.__dict__ first.
raw_instance = inspect.getattr_static(c, 'create')
print(raw_instance) # 'shadowing_string'
# To see the class-level classmethod object, call it on the class directly:
raw_class = inspect.getattr_static(Connection, 'create')
print(raw_class) # <classmethod object at 0x...>
# Comparing the two reveals the shadow:
def is_shadowing(obj, name):
"""Return True if name is an instance attribute that shadows a class-level descriptor."""
inst_val = inspect.getattr_static(obj, name, None)
class_val = inspect.getattr_static(type(obj), name, None)
if inst_val is None or class_val is None:
return False
# Shadow exists when the instance has a non-descriptor value and the class has a descriptor
return (name in vars(obj)) and isinstance(class_val, (classmethod, staticmethod))
print(is_shadowing(c, 'create')) # True
print(is_shadowing(c, 'timeout')) # False (not shadowing a classmethod)
The inspect documentation notes that getattr_static() may return descriptor objects rather than the values they would produce — that is exactly what you want here. Calling it on the class gives you back the raw classmethod object, not the bound method that normal attribute access would produce. This makes it a precise diagnostic tool: if inspect.getattr_static(obj, name) returns a plain value but inspect.getattr_static(type(obj), name) returns a classmethod, a shadow is active on that instance.
Using super() and type(self) to reach a shadowed classmethod
Once a shadow is in place on an instance, the standard self.method_name access returns the shadowing value. But the classmethod is not gone — it is still accessible through the class. Inside the class body itself, there are two clean escape hatches. The first is to call the method through the class name directly. The second, in a subclass context, is to use super().
class Base:
@classmethod
def create(cls):
return cls()
class Child(Base):
def __init__(self, create=None):
# self.create here shadows the inherited classmethod on this instance
self.create = create
def make_copy(self):
# self.create is the shadowing instance attribute — cannot call it as a factory.
# WRONG: return self.create() # TypeError if self.create is not callable
# Option 1: access through the class name directly — always reaches the classmethod,
# regardless of what is in any instance's __dict__.
return Base.create()
def make_via_type(self):
# type(self).create looks up 'create' on the class, bypassing the instance __dict__.
# The shadow only blocks access through the instance — class-level access is unaffected.
return type(self).create() # returns a Child instance
def make_via_super(self):
# super() works too — it creates a proxy that skips the instance __dict__
# and walks the MRO from the class above the current one.
# This is the cleanest form inside a subclass method.
return super().create() # returns an instance of type(self)
c = Child(create="quarterly_report")
print(c.create) # 'quarterly_report' — shadow in place on this instance
print(Child.create) # <bound method Base.create of <class '__main__.Child'>> — intact
print(c.make_via_type()) # <__main__.Child object at 0x...> — factory works via type()
The key insight is that the shadow only blocks access when you go through the instance. The classmethod is always reachable through the class, through type(self), or through super() inside a subclass method. In production code where you encounter this situation, the least invasive fix is calling the factory through ClassName.classmethod_name() while you track down and rename the conflicting attribute.
Python 3.12's typing.override and classmethod shadowing
Python 3.12 introduced typing.override (PEP 698) — a decorator that signals to type checkers that the decorated method is an intentional override of a method in a base class. Type checkers that support @override will raise an error if no matching method exists in any ancestor class, catching stale overrides after a refactor. This has a direct relationship to classmethod shadowing: applying @override to a classmethod that overrides a parent classmethod is the explicit, type-checker-verified signal that the override is intentional rather than accidental.
from typing import override # Python 3.12+
class Base:
@classmethod
def create(cls, data: dict):
return cls()
class GoodChild(Base):
@override # type checker verifies Base.create exists — intentional override
@classmethod
def create(cls, data: dict):
validated = {k: v for k, v in data.items() if v is not None}
return super().create(validated)
class BadChild(Base):
@override # type checker flags this: no 'load' on Base — likely a typo
@classmethod
def load(cls, data: dict): # intended to override 'create', but name is wrong
return super().create(data)
# At runtime, @override is a no-op — it just sets __override__ = True on the function.
# The safety guarantee comes from static analysis: type checkers that implement
# PEP 698, such as mypy and pyright, will catch the BadChild error before the code ever runs.
Per PEP 698 (opens in new window), @override is purely a static analysis hint — at runtime it is a no-op that sets the __override__ attribute on the decorated function to True. Its value is entirely in CI: a type checker running in strict mode can flag every intentional classmethod override that lacks @override, and flag every @override-decorated method that does not match any ancestor name. Combined with the pylint checker described above, this gives you two orthogonal layers — one catching accidental same-name instance attributes, the other catching intentional but misnamed overrides.
When combining @override with @classmethod, order matters: place @override above @classmethod. Decorators apply bottom-up, so the classmethod descriptor is built first and @override then marks the resulting object. Placed below, @override would mark the raw function before @classmethod wraps it, and type checkers may not recognize the override correctly. PEP 698 discusses how @override interacts with other decorators such as @classmethod and @property.
Does a classmethod shadow survive pickling and copy.deepcopy()?
This is one of the least-documented consequences of classmethod shadowing in production Python. When pickle serializes an instance, it serializes the instance's __dict__ — which includes any shadowing attribute stored there. When that pickled object is deserialized, the shadow is reconstituted: the new instance arrives with the same conflicting key in its __dict__, and the classmethod is invisible through that instance exactly as it was before serialization. The bug survives a round-trip through pickle, through copy.deepcopy(), and through any serialization mechanism that reconstructs from __dict__.
import pickle, copy
class Report:
@classmethod
def load(cls, path):
obj = cls()
obj._path = path
return obj
def __init__(self, load=None):
self.load = load # shadows the classmethod on every instance
r = Report(load="quarterly_data.json")
print(r.load) # 'quarterly_data.json' — shadow in place
# Pickle round-trip
pickled = pickle.dumps(r)
r2 = pickle.loads(pickled)
print(r2.__dict__) # {'load': 'quarterly_data.json'} — shadow reconstituted
print(r2.load) # 'quarterly_data.json' — classmethod still unreachable on r2
# deepcopy has identical behavior
r3 = copy.deepcopy(r)
print(r3.load) # 'quarterly_data.json' — shadow survives deep copy too
# The classmethod is still intact at the class level — but r2 and r3 cannot reach it
print(Report.load) # <bound method Report.load ...>
There is a corollary that is equally important: if the shadowing value is itself non-picklable — a lambda, an open file handle, a database cursor, a threading lock — then pickle.dumps() will raise a PicklingError or TypeError when it encounters that value. The error names the unpicklable object or its type (for example, TypeError: cannot pickle '_thread.lock' object) rather than the attribute that holds it, but pickle failing on a value stored under what should be a classmethod name is itself a strong hint that a shadow is the cause. To prevent the shadow from serializing, implement __getstate__ to exclude it from the pickled state, or use __reduce__ to control reconstruction entirely:
class Report:
@classmethod
def load(cls, path):
obj = cls()
obj._path = path
return obj
def __init__(self, load=None):
self.load = load
def __getstate__(self):
"""Return pickling state, excluding any shadowing classmethod names."""
state = self.__dict__.copy()
# Remove the shadowing attribute from the pickled state.
# On unpickling, the classmethod will be accessible normally.
state.pop('load', None)
return state
def __setstate__(self, state):
self.__dict__.update(state)
r = Report(load="quarterly_data.json")
pickled = pickle.dumps(r)
r2 = pickle.loads(pickled)
print('load' in r2.__dict__) # False — shadow not reconstituted
print(r2.load) # <bound method Report.load ...> — classmethod accessible again
This pattern — stripping conflicting keys in __getstate__ — is the pragmatic fix when renaming is not immediately possible. It does not fix the underlying design problem (the name collision), but it prevents the shadow from propagating through serialization boundaries and across process restarts when instances are stored in queues, caches, or databases.
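For completeness, the __reduce__ route mentioned above can also strip the shadow by controlling reconstruction entirely. This is a minimal sketch under assumed names (ReduceReport and _rebuild are hypothetical, not from the original example):

```python
import pickle

class ReduceReport:
    @classmethod
    def load(cls, path):
        obj = cls()
        obj._path = path
        return obj

    def __init__(self, load=None):
        self.load = load  # shadows the classmethod on every instance

    def __reduce__(self):
        # Rebuild from a filtered state dict so the shadowing key never survives.
        state = {k: v for k, v in self.__dict__.items() if k != 'load'}
        return (_rebuild, (type(self), state))

def _rebuild(cls, state):
    obj = object.__new__(cls)   # bypass __init__ so the shadow is not re-created
    obj.__dict__.update(state)
    return obj

r = ReduceReport(load="quarterly_data.json")
r2 = pickle.loads(pickle.dumps(r))
print('load' in r2.__dict__)    # False
print(r2.load)                  # the bound classmethod, reachable again
```

Note the reconstruction function must live at module level so pickle can serialize it by reference; __getstate__ is usually the lighter-weight choice when all you need is to drop a key.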
The ORM and deserialization vector
The most common production source of classmethod shadowing is not a hand-written __init__ — it is ORM hydration and deserialization, where arbitrary keys from external data are mapped directly to instance attributes via setattr. Any data source whose keys happen to match a classmethod name will silently shadow that method on every object hydrated from that data.
class Record:
@classmethod
def from_dict(cls, data: dict):
obj = cls()
for key, value in data.items():
setattr(obj, key, value) # Any key matching a classmethod silently shadows it
return obj
@classmethod
def validate(cls, data: dict):
return bool(data.get("id"))
# Imagine this JSON came from an API endpoint:
api_response = {
"id": 42,
"name": "example",
"validate": False, # This key matches the classmethod name — shadow incoming
}
record = Record.from_dict(api_response)
print(record.validate) # False — the bool field, not the classmethod
print(Record.validate) # <bound method Record.validate ...> — intact at class level
try:
record.validate({"id": 1}) # TypeError: 'bool' object is not callable
except TypeError as e:
print(e)
# The fix: filter keys before hydration, or use a field allow-list
SAFE_FIELDS = {'id', 'name'} # explicit allow-list excludes method names
class SafeRecord(Record):
@classmethod
def from_dict(cls, data: dict):
obj = cls()
for key, value in data.items():
if key in SAFE_FIELDS:
setattr(obj, key, value)
return obj
The pattern of using an allow-list rather than a block-list is the safer choice in any hydration layer. Block-lists require knowing in advance every classmethod name — and every future classmethod name added by any team member. Allow-lists invert the burden: only explicitly permitted field names can become instance attributes, making it structurally impossible for an unexpected key to shadow a method.
SQLAlchemy, Django ORM, and Pydantic all have their own attribute assignment layers that can hit this problem. SQLAlchemy's mapped attributes (InstrumentedAttribute objects, which the mapper installs in place of Column definitions) are data descriptors, so they occupy tier 1 and cannot be shadowed by instance assignments — that is a feature, not an accident. Pydantic validates field names at model definition time and rejects names that collide with existing class attributes. If you are writing a raw from_dict hydration method without a framework, implementing the allow-list or the ProtectedClassMethod wrapper described in the prevention section is the equivalent safeguard.
Why this matters in practice
Shadowing is rarely intentional. The scenarios where it occurs unintentionally tend to follow a pattern: the class method has a generic, descriptive name — create, load, reset, parse — and somewhere in the class or in code that operates on instances, that same name is used as an attribute to store state.
| Scenario | What happens | Visible error |
|---|---|---|
| Instance attribute set with same name as @classmethod | Instance attribute takes priority; class method is hidden for that instance | None until you try to call it |
| Calling the shadowed name as a method | TypeError: 'str' object is not callable (or similar) | Runtime error, confusing message |
| Accessing through the class directly | Class method is returned normally | None — class is unaffected |
| Accessing through a different instance | Class method is returned normally | None — only the affected instance is shadowing |
| Deleting the instance attribute | Class method becomes accessible again on that instance | None — the issue fully resolves |
The most confusing moment is when the error surfaces far from where the shadowing occurred. You set an attribute during initialization or in a data-loading step, and a different part of the codebase later tries to call the method by name on an instance. The TypeError it produces points at the call site, not the assignment site.
How to detect shadowing when debugging
When a TypeError: 'X' object is not callable appears on what should be a method call, the first diagnostic step is to check the instance dictionary directly. vars(obj) is an alias for obj.__dict__ and is often more readable in a REPL session:
# vars(obj) is an alias for obj.__dict__ — useful in a REPL
c = Connection()
c.create = "some_string"
print(vars(c)) # {'create': 'some_string'}
# Walk the MRO to check whether the name also exists as a class-level attribute.
# Checking only vars(type(c)) misses classmethods defined on parent classes.
in_instance = "create" in vars(c)
in_class = any("create" in klass.__dict__ for klass in type(c).__mro__)
print(in_instance and in_class)
# True — both levels have a 'create' entry, confirming a shadow
def diagnose_shadow(obj, name):
"""Check whether an attribute name is being shadowed on an instance."""
in_instance = hasattr(obj, '__dict__') and name in obj.__dict__
# Walk the full MRO so inherited classmethods are found, not just the immediate class.
class_owner = None
class_val_raw = None
for klass in type(obj).__mro__:
if name in klass.__dict__:
class_owner = klass
class_val_raw = klass.__dict__[name]
break
if in_instance and class_owner is not None:
instance_val = obj.__dict__[name]
# Resolve via getattr so the descriptor protocol runs — classmethod descriptor
# objects are not directly callable; the bound method they produce is.
class_val_resolved = getattr(type(obj), name, None)
print(f"SHADOW DETECTED: '{name}' exists in both __dict__ levels")
print(f" Instance value : {instance_val!r}")
print(f" Class value : {class_val_raw!r} (defined on {class_owner.__name__})")
print(f" Callable check : instance={callable(instance_val)}, class={callable(class_val_resolved)}")
elif in_instance:
print(f"'{name}' is only on the instance (no class attribute with that name)")
elif class_owner is not None:
print(f"'{name}' is only on the class — no instance shadow")
else:
print(f"'{name}' not found on instance or class")
class Connection:
@classmethod
def create(cls):
return cls()
c = Connection()
c.create = "some_string"
diagnose_shadow(c, "create")
# SHADOW DETECTED: 'create' exists in both __dict__ levels
# Instance value : 'some_string'
# Class value : <classmethod object at 0x...> (defined on Connection)
# Callable check : instance=False, class=True
For classes with deep inheritance, you may need to walk the full MRO to find where the class-level descriptor originates. inspect.getmro(type(obj)) returns the full chain, and checking each class's __dict__ in order tells you exactly which ancestor owns the descriptor that is being shadowed.
If a test suite passes but production throws a TypeError on a method call, look for code paths that construct objects with dynamic data — deserialization, ORM hydration, configuration loading — where key names from external input are mapped directly to instance attributes. A config key or JSON field whose name collides with a class method will shadow silently every time that data is loaded.
@staticmethod is also a non-data descriptor — it defines __get__ but not __set__. An instance attribute with the same name as a @staticmethod will shadow it by exactly the same mechanism. The practical difference is that static methods are called without cls or self, so the TypeError message when you accidentally call the shadowing value may look slightly different, but the root cause is identical.
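A minimal demonstration of the same mechanism with @staticmethod (the Parser class here is an illustrative name, not from earlier examples):

```python
class Parser:
    @staticmethod
    def parse(text):
        return text.strip().lower()

p = Parser()
p.parse = "leftover value"        # shadows the staticmethod on this instance only
shadowed = p.parse                # 'leftover value'
try:
    p.parse("  HELLO ")           # the string is not callable
except TypeError as exc:
    print(exc)
del p.parse                       # delete the shadow; the staticmethod returns
restored = p.parse("  HELLO ")    # 'hello'
print(shadowed, restored)
```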
If you need to guarantee that a class method cannot be shadowed on an instance, one option is to use a data descriptor — a custom class that defines __set__ (which alone is enough to qualify; __get__ is also included to preserve callable behavior) — to enforce that the name stays bound to the method. In practice, this is rarely necessary. The more practical guard is naming discipline: do not give instance attributes the same names as class-level callables, and document the constraint in your class if the risk is real.
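If you do need the structural guarantee, a sketch of such a wrapper might look like this (ProtectedClassMethod is an illustrative name, one possible implementation rather than a standard API):

```python
class ProtectedClassMethod:
    """Data descriptor wrapping a classmethod: defining __set__ alone earns
    tier-1 priority, so instance assignment fails loudly instead of shadowing."""

    def __init__(self, func):
        self._cm = classmethod(func)
        self._name = func.__name__

    def __set_name__(self, owner, name):
        self._name = name  # remember the bound name for clear error messages

    def __get__(self, obj, objtype=None):
        # Delegate binding to the real classmethod descriptor.
        return self._cm.__get__(obj, objtype)

    def __set__(self, obj, value):
        raise AttributeError(
            f"cannot shadow classmethod {self._name!r} on "
            f"{type(obj).__name__} instances"
        )

class Connection:
    @ProtectedClassMethod
    def create(cls):
        return cls()

c = Connection()
print(Connection.create())   # still callable as a normal classmethod
try:
    c.create = "oops"        # blocked at assignment time, not at call time
except AttributeError as exc:
    print(exc)
```

Because the blocked assignment never reaches __dict__, the failure happens at the assignment site, exactly where the bug was introduced.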
When naming class methods, prefer names that describe an action clearly tied to construction or class-level behavior — from_dict, from_file, build_default. These are less likely to collide with instance attributes, which tend to hold state rather than describe operations.
Frequently Asked Questions
What does it mean to shadow a class method in Python?
Shadowing occurs when an instance attribute is assigned a name that is already used by a class method decorated with @classmethod. Because @classmethod is a non-data descriptor — it defines __get__ but not __set__ — it sits below instance attributes in Python's lookup order. Python finds the instance attribute first and never reaches the class method on that specific instance.
Why can an instance attribute shadow @classmethod but not @property?
@property is a data descriptor because it defines __set__ (and __delete__). Any descriptor that defines __set__ or __delete__ occupies the first tier of Python's lookup order, taking priority over instance attributes. @classmethod defines only __get__, making it a non-data descriptor at tier 3. An instance attribute at tier 2 beats it. Assigning to a property raises AttributeError because __set__ intercepts the assignment; assigning to a classmethod name silently writes to __dict__.
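The asymmetry can be shown side by side; the Account class below is an illustrative example:

```python
class Account:
    @property
    def balance(self):        # data descriptor: property defines __set__ too
        return 100

    @classmethod
    def create(cls):          # non-data descriptor: __get__ only
        return cls()

a = Account()
try:
    a.balance = 50            # property's __set__ intercepts and rejects this
    property_blocked = False
except AttributeError:
    property_blocked = True

a.create = "silent shadow"    # nothing intercepts: lands straight in a.__dict__
print(property_blocked)       # True
print(a.__dict__)             # {'create': 'silent shadow'}
```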
Does a data descriptor have to define both __get__ and __set__?
No. The Python Data Model states that defining __set__ or __delete__ is sufficient to make a descriptor a data descriptor and earn it tier-1 priority in the lookup order. __get__ is not required. In practice, data descriptors almost always also define __get__ so they return a useful value on access, but the tier-1 status comes from __set__ or __delete__ alone. This matters when building a ProtectedClassMethod wrapper: the __set__ definition is what promotes it to tier 1, regardless of whether __get__ is present.
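A set-only descriptor makes the point concrete. With no __get__, a read falls through the data-descriptor tier and ultimately returns the descriptor object itself as a plain class attribute, yet assignment is still intercepted (SetOnlyGuard is an illustrative name):

```python
class SetOnlyGuard:
    """Data descriptor with __set__ only: no __get__ at all."""
    def __set__(self, obj, value):
        raise AttributeError("write-protected name")

class Model:
    field = SetOnlyGuard()

m = Model()
try:
    m.field = 1               # __set__ fires: tier-1 status comes from __set__ alone
    blocked = False
except AttributeError:
    blocked = True

# Reading the name returns the descriptor object itself, since nothing
# defines __get__ and the instance __dict__ is empty.
print(blocked, m.field)
```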
What error does Python raise when you try to call a shadowed class method?
Python raises no error at the moment of shadowing. The assignment is completely silent. The error appears later, when code calls the attribute expecting a callable. If the shadowing value is a string, the call raises TypeError: 'str' object is not callable. The error message points to the call site, not the assignment site where the shadow was introduced — which is what makes this problem difficult to debug.
How does PyObject_GenericGetAttr implement the three-tier attribute lookup?
PyObject_GenericGetAttr is the C function in CPython that implements object.__getattribute__. It first walks the class MRO via _PyType_Lookup, checking for a data descriptor (anything whose type has a non-null tp_descr_set slot, tested by the PyDescr_IsData() macro). If found, it calls __get__ and returns. If not, it checks the instance __dict__. If the name is there, it returns that value. Finally, it falls back to a non-data descriptor or plain class attribute. @classmethod has a null tp_descr_set, so it falls to tier 3, below instance attributes at tier 2.
Can shadowing happen inside __init__?
Yes, and this is the most common scenario in practice. If a class defines a @classmethod and the __init__ method assigns self.name = value using the same name, every instance constructed through that constructor arrives already shadowed. The class method becomes unreachable on those instances from the moment of creation, with no error raised.
What is the difference between instance-level shadowing and class-level reassignment?
Instance-level shadowing stores a conflicting value in one specific instance's __dict__, leaving the class method intact in the class __dict__ and accessible through the class or any other instance. Class-level reassignment (for example, MyClass.method = 'value') overwrites the entry in the class __dict__ itself, removing the class method for all instances. The fix for instance shadowing is del instance.attr; the fix for class-level reassignment requires restoring the original method.
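The two fixes look like this in practice (Service is an illustrative class name):

```python
class Service:
    @classmethod
    def reset(cls):
        return "reset done"

s = Service()
s.reset = "instance state"        # instance-level shadow: only s is affected
shadowed = s.reset
del s.reset                       # the fix: remove the __dict__ entry
restored = s.reset()              # classmethod reachable again on s

Service.reset = "class clobber"   # class-level reassignment: gone for everyone
print(shadowed, restored, Service.reset)
```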
Does a subclass inherit a shadowed class method?
Shadows are per-instance and are not inherited by subclasses. A subclass definition carries no shadow from a parent instance. However, a subclass is equally vulnerable if its own constructor assigns an instance attribute whose name collides with an inherited class method. The inherited class method sits at tier 3 regardless of which ancestor class defined it, so the collision mechanism is identical.
Can @staticmethod be shadowed the same way as @classmethod?
Yes. @staticmethod is also a non-data descriptor — it defines __get__ but not __set__. An instance attribute with the same name as a @staticmethod will shadow it by exactly the same mechanism. The practical difference is only in the TypeError message when the shadowing value is called, since static methods take no implicit first argument.
How does functools.cached_property relate to classmethod shadowing?
functools.cached_property (introduced in Python 3.8) is a non-data descriptor by deliberate design. It defines only __get__, which lets it write the computed result directly into the instance __dict__ on first access. On subsequent accesses, the instance dictionary entry is found at tier 2, before the descriptor at tier 3, so the cached value is returned without re-invoking the getter. This is the same mechanism as classmethod shadowing, but intentional. Note that functools.cached_property is not thread-safe and does not work on classes that define __slots__ without a __dict__ slot.
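The intentional self-shadowing is easy to observe; the compute_count counter below is added purely to make the caching visible:

```python
from functools import cached_property

class Dataset:
    def __init__(self):
        self.compute_count = 0

    @cached_property
    def stats(self):
        self.compute_count += 1   # runs only on the first access
        return {"mean": 42}

d = Dataset()
print('stats' in vars(d))         # False: nothing cached yet
first = d.stats                   # getter runs, result lands in d.__dict__
second = d.stats                  # served from __dict__ at tier 2; getter skipped
print('stats' in vars(d), d.compute_count)   # True 1
```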
What did Python 3.13 change about classmethod?
Python 3.13 removed support for chained classmethod descriptors — the ability to wrap other descriptors such as @property with @classmethod. This feature was added in Python 3.9, deprecated in 3.11, and removed in 3.13 (contributed by Raymond Hettinger in gh-89519). The core design was considered flawed. The __wrapped__ attribute added in Python 3.10 is the documented alternative. This removal does not affect how classmethod shadowing works: @classmethod remains a non-data descriptor, and the three-tier lookup order is unchanged.
Can a dataclass field shadow a @classmethod?
Yes, and the effect can be broader than plain instance shadowing. When a @dataclass subclass declares a field whose name matches a @classmethod inherited from a base class, the generated __init__ assigns that field on every instance, shadowing the method on all of them from the moment of construction. If the field also declares a default value, the default is additionally stored as a class attribute on the subclass, replacing the inherited classmethod at the subclass level as well. Rename the field or the classmethod to resolve it.
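A small example with a defaulted field shows both effects at once (RecordBase and Row are illustrative names):

```python
from dataclasses import dataclass

class RecordBase:
    @classmethod
    def create(cls):
        return cls()

@dataclass
class Row(RecordBase):
    create: str = "cell value"    # the default also rebinds 'create' on the class

r = Row()
print(r.create)                   # 'cell value': shadowed on every instance
print(Row.create)                 # 'cell value': class-level name replaced too
print(RecordBase.create())        # the base classmethod itself is untouched
```

With no default value, only the per-instance shadowing from the generated __init__ occurs; the class-level name still resolves to the inherited classmethod.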
How do I write a classmethod that cannot be shadowed by an instance attribute?
The structural solution is to wrap the classmethod in a data descriptor — a class that defines __set__ (which alone is sufficient to earn tier-1 status) and __get__ to preserve callable behavior. Data descriptors occupy tier 1 in Python's lookup order, so any assignment to the same name on an instance raises AttributeError before reaching __dict__. Use __set_name__ for clear error messages. Simpler alternatives include __slots__, a __setattr__ mixin, or __init_subclass__ for class-body collision detection.
Does __getattr__ protect against classmethod shadowing?
No. Python only calls __getattr__ when normal attribute lookup raises AttributeError — that is, when all three tiers fail to find the name. Shadowing succeeds at tier 2 because the instance __dict__ contains the name. Lookup succeeds and returns the shadowing value without raising AttributeError, so __getattr__ is never invoked. A custom __getattr__ cannot intercept, override, or detect the shadow.
Can __getattribute__ intercept or prevent classmethod shadowing?
Yes, but with important caveats. Overriding __getattribute__ intercepts every attribute access, including reads of shadowed names. Unlike __getattr__ (which only fires on lookup failure), __getattribute__ is called unconditionally, so it can detect and block access to a shadowed classmethod even after the shadow is in place. The implementation must access __dict__ through object.__getattribute__(self, '__dict__') to avoid infinite recursion. The downside is performance: the MRO walk fires on every attribute read, not only on shadowing cases, making it suitable for debugging rather than production code paths.
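A debugging-oriented sketch of such an override, assuming a hypothetical ShadowAlert mixin (it reports the shadow but still returns the shadowing value, so behavior is unchanged):

```python
class ShadowAlert:
    """Debugging mixin: flag reads of instance attributes that shadow
    a class-level classmethod or staticmethod."""

    def __getattribute__(self, name):
        # Go through object.__getattribute__ for the raw dict to avoid recursion.
        inst_dict = object.__getattribute__(self, '__dict__')
        if name in inst_dict:
            for klass in type(self).__mro__:
                if name in klass.__dict__ and isinstance(
                        klass.__dict__[name], (classmethod, staticmethod)):
                    print(f"warning: {name!r} shadows a method from {klass.__name__}")
                    break
        return object.__getattribute__(self, name)

class Job(ShadowAlert):
    @classmethod
    def run(cls):
        return cls()

job = Job()
job.run = "shadow"
value = job.run           # prints the warning, then returns the shadowing value
print(value)              # 'shadow'
```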
What is inspect.getattr_static() and how does it help with shadowing?
inspect.getattr_static(), introduced in Python 3.2, retrieves an attribute without invoking the descriptor protocol or dynamic lookup. Calling it on an instance returns the raw value from the instance dictionary for shadowed names. Calling it on the class returns the raw classmethod object itself, not the bound method that normal access would produce. Comparing the two results — instance versus class — gives a precise diagnostic: if the instance holds a non-descriptor value and the class holds a classmethod object at the same name, a shadow is active.
Does classmethod shadowing survive pickle and copy.deepcopy()?
Yes. Both pickle and copy.deepcopy() serialize and reconstruct the instance __dict__, which includes any shadowing attribute. The deserialized object arrives with the same conflicting key in its dictionary, and the classmethod remains inaccessible through that instance exactly as before serialization. The shadow survives the round-trip. To prevent this, implement __getstate__ to exclude shadowing keys from the pickled state, so deserialized instances do not carry the conflict. If the shadowing value is itself non-picklable (a lambda, a lock, a file handle), pickle.dumps() will raise a PicklingError or TypeError; the message names the unpicklable type rather than the attribute, but pickle failing on a value stored under a classmethod name can serve as an indirect diagnostic.
How do ORMs and deserialization layers create classmethod shadows?
Any hydration layer that maps dictionary keys to instance attributes via setattr(obj, key, value) will shadow a classmethod whenever an incoming key matches a classmethod name. This includes hand-written from_dict methods, JSON deserializers that populate model objects, and any code that iterates an external payload and writes fields to an object without filtering. The fix is to use an allow-list of permitted field names rather than a block-list of forbidden names. Allow-lists are structurally safe regardless of what classmethods are added to the class in future; block-lists require keeping pace with every new method added by every team member.
What is typing.override and how does it relate to classmethod shadowing?
typing.override, introduced in Python 3.12 (PEP 698), is a decorator that tells type checkers a method is an intentional override of an ancestor method. At runtime it is a no-op that sets __override__ = True on the function. When applied to a classmethod, type checkers verify that an ancestor class actually defines a method with that name, catching stale overrides after refactors. Applied to a classmethod that genuinely overrides a parent classmethod, it acts as a machine-verifiable signal that the naming collision is intentional — the opposite of the accidental shadowing this article describes. The decorator must be placed above @classmethod, not below it, because decorators apply bottom-up.
How can I call a classmethod that is shadowed on an instance?
The shadow only blocks access through the instance. The classmethod remains fully accessible through the class directly (ClassName.method_name()) or through type(self).method_name() inside an instance method. In a subclass, super() can also reach the method by skipping the instance dictionary. These are the immediate escape hatches while you track down and rename the conflicting instance attribute. The permanent fix remains renaming one of the two names so they no longer collide.
Can typing.ClassVar prevent classmethod shadowing at lint time?
Yes, indirectly. Mypy treats ClassVar-annotated names as class-level only and raises a type error when code attempts to assign them through an instance. Combined with __init_subclass__ that inspects subclass type annotations at class-creation time, you get both lint-time and class-definition-time enforcement with no runtime overhead. This is the pattern Pydantic V2 uses: it raises a NameError at class construction when a subclass field annotation shadows a parent-class attribute, with an error message naming the conflict and suggesting an alias.
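The runtime half of that pattern can be sketched with __init_subclass__ inspecting subclass annotations. This is one possible implementation under assumed names (SafeBase is illustrative), not Pydantic's actual code:

```python
from typing import ClassVar, get_origin

class SafeBase:
    @classmethod
    def create(cls):
        return cls()

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Reject any non-ClassVar field annotation that collides with
        # a classmethod defined anywhere up the MRO.
        for name, ann in cls.__dict__.get('__annotations__', {}).items():
            if get_origin(ann) is ClassVar:
                continue
            for parent in cls.__mro__[1:]:
                if isinstance(parent.__dict__.get(name), classmethod):
                    raise TypeError(
                        f"annotation {name!r} would shadow the classmethod "
                        f"inherited from {parent.__name__}"
                    )

caught = None
try:
    class Bad(SafeBase):
        create: str        # collides with the inherited classmethod
except TypeError as exc:
    caught = exc
print(caught)              # the collision is rejected at class definition time
```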
What is the WeakKeyDictionary approach and when should I use it?
weakref.WeakKeyDictionary is a class-level registry that maps instances to associated state without placing anything in the instance's __dict__. When renaming is not possible — because the conflicting name is dictated by an external protocol or serialization format — relocating the per-instance state into a WeakKeyDictionary removes it from the instance dictionary entirely, leaving the classmethod unobstructed. The registry entry is released automatically when the instance is garbage collected, so there is no memory leak. Classes that use __slots__ must include '__weakref__' in the slot list to support this pattern.
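A minimal sketch of the side-table pattern (QueueItem and load_path are illustrative names):

```python
import weakref

class QueueItem:
    # Class-level side table: per-instance 'load' state lives here instead of
    # the instance __dict__, so the classmethod is never obscured.
    _load_state = weakref.WeakKeyDictionary()

    @classmethod
    def load(cls, path):
        obj = cls()
        cls._load_state[obj] = path     # no 'load' key enters obj.__dict__
        return obj

    @property
    def load_path(self):
        return type(self)._load_state.get(self)

item = QueueItem.load("quarterly_data.json")
print(item.load_path)            # 'quarterly_data.json'
print('load' in vars(item))      # False: nothing shadows the classmethod
print(item.load)                 # still the bound classmethod
```

The WeakKeyDictionary entry disappears when the instance is garbage collected, so the side table never keeps objects alive.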
What is a read-only sentinel descriptor and how does it differ from ProtectedClassMethod?
A read-only sentinel is a minimal data descriptor that defines only __set__ — enough to earn tier-1 placement in the lookup order and block instance assignment — without wrapping the classmethod's callable behavior. ProtectedClassMethod wraps both __get__ (to forward calls to the classmethod) and __set__ (to block writes) in one object. The sentinel is the conceptual primitive: it demonstrates that __set__ alone is the mechanism that creates tier-1 status, and that the callable behavior comes from the classmethod's own __get__. In production use, ProtectedClassMethod is simpler because it avoids the class-body ordering problem that arises when a standalone sentinel and a classmethod share the same name.
Key Takeaways
- Shadowing is a lookup-order effect: Python finds the instance attribute before it reaches the class method. Nothing is deleted or overwritten at the class level.
- @classmethod is a non-data descriptor: It only defines __get__ — no __set__, no __delete__. That is the exact condition for a non-data descriptor: only __get__, nothing else. A descriptor qualifies as a data descriptor the moment it also defines __set__ or __delete__ — either one is sufficient. This places @classmethod in the lowest tier of Python's attribute lookup. Instance attributes outrank it. This is confirmed explicitly in the Python Data Model (opens in new window) and the Descriptor HowTo Guide (opens in new window).
- The shadow is per-instance: Other instances and the class itself continue to see the class method normally. Only the instance that carries the conflicting attribute is affected.
- No error fires at the moment of shadowing: Python assigns the instance attribute silently. The problem surfaces later, when code attempts to call the now-hidden method.
- __init__ is the most common source: A constructor parameter or attribute assignment that shares a name with a class method shadows it on every instance created through that constructor — not just one stray object.
- Class-level reassignment is not shadowing — it is replacement: Assigning to a name on the class itself overwrites the entry in the class __dict__. The class method is gone for all instances, not merely hidden on one.
- Subclasses do not inherit shadows, but they inherit the risk: A shadow is stored in a specific instance's __dict__. Subclass definitions carry no shadow. But an inherited class method is just as vulnerable as a locally defined one if a subclass constructor introduces the collision.
- Detection is straightforward: Comparing obj.__dict__ against type(obj).__dict__ (and the full MRO for inherited methods) reveals any name collision directly.
- __slots__ prevents it structurally: A class with __slots__ has no instance __dict__, so arbitrary attribute assignment fails at the point of assignment rather than silently succeeding.
- @staticmethod shares the same vulnerability: It is also a non-data descriptor and can be shadowed by an instance attribute by the same mechanism.
- functools.cached_property uses this intentionally: The same non-data descriptor placement that makes @classmethod vulnerable to shadowing is what enables functools.cached_property to cache its result in the instance __dict__. Understanding one explains the other.
- Python 3.13 removed chained classmethods: Wrapping @property with @classmethod no longer works as of Python 3.13. The core descriptor behavior described in this article — the three-tier lookup and non-data descriptor status of @classmethod — is unchanged.
- Naming discipline is the practical fix: Avoiding name collisions between instance attributes and class methods prevents this entirely. The issue is not a bug in Python — it is a natural consequence of how PyObject_GenericGetAttr implements the three-tier descriptor lookup.
- Dataclasses are equally vulnerable: The @dataclass decorator generates an __init__ that assigns every field by name. A field whose name matches a @classmethod shadows that method on every instance produced by the generated constructor — silently, with no visible assignment line in the source.
- Prevention ranges from conventions to structural guarantees: Naming conventions (the from_ prefix) reduce collision probability. A __setattr__ mixin raises on the first shadowing attempt. __init_subclass__ catches class-body collisions at definition time. A __prepare__ namespace guard intercepts collisions even earlier — before the class body finishes executing. A data descriptor wrapper (ProtectedClassMethod) makes shadowing structurally impossible. A metaclass applies that protection hierarchy-wide. A sys.audithook detects shadowing on classes you do not own. A custom pylint checker or mypy plugin catches the collision at lint time, before any test runs.
- __getattr__ does not interact with shadowing: Python only calls __getattr__ when normal attribute lookup fails. Because shadowing succeeds at step 2 (instance __dict__), lookup never fails — and __getattr__ is never invoked. A custom __getattr__ cannot intercept or override the shadow.
The descriptor protocol is one of the more precise corners of Python's object model. Understanding why @classmethod sits where it does in the lookup order — and what that means for the names you choose — is the kind of knowledge that prevents subtle bugs from appearing in production long before any test can catch them. The same applies to @dataclass field names and to any code that constructs instances from external data. For the authoritative reference, Raymond Hettinger's Descriptor HowTo Guide (opens in new window) in the official Python documentation covers the full protocol with pure Python equivalents for every built-in descriptor type. If this sparked an interest in how Python's internals drive everyday behavior, explore more Python tutorials covering the object model, decorators, and the standard library.