
AttributeError: Object Has No Attribute — What It Really Means and How to Fix It


If you have spent more than a day writing Python, you have seen this error. It arrives without ceremony, usually at the worst possible moment, and reads something like: AttributeError: 'NoneType' object has no attribute 'append'. The AttributeError is one of the most frequently encountered exceptions in the Python language. It fires every time you try to access an attribute — a method, a property, a variable — on an object that does not have it. That sounds simple, but the causes run deep. They range from a one-character typo to a fundamental misunderstanding of how Python resolves attribute lookups at runtime. This article takes the error apart completely. We walk through the interpreter's attribute resolution chain, examine the dunder methods that control it, cover all nine causes including __slots__ and type drift, look at the Python Enhancement Proposals that have shaped how AttributeError behaves, work through two real-world debugging scenarios, and show you how to control attribute access in your own classes. No hand-waving. No copy-paste band-aids.

How Python Resolves Attributes Under the Hood

Before you can fix an AttributeError, you need to understand the machinery that produces it. Every time you write obj.x, Python does not simply look up x in a dictionary. It follows a specific resolution order, codified in the data model documentation.

The official Python data model documentation explains that __getattr__ serves as a fallback: it is invoked only after the standard attribute access mechanism has already failed with an AttributeError — either because __getattribute__() could not find the name in the instance or class tree, or because a property's __get__() raised the error.

The full lookup chain works like this:

  1. Python calls object.__getattribute__() on the instance. This is the unconditional entry point — it runs on every single attribute access.
  2. Inside __getattribute__, Python checks for data descriptors in the class hierarchy (objects that define both __get__ and __set__). Data descriptors get highest priority.
  3. Next, it checks the instance's __dict__ — the object's own namespace.
  4. Then it checks for non-data descriptors and other class-level attributes in the MRO (method resolution order).
  5. If all of that fails, and the class defines __getattr__, Python calls it as a last resort.
  6. If __getattr__ is not defined or itself raises AttributeError, the exception propagates to you.

Raymond Hettinger's Descriptor HowTo Guide in the official Python documentation lays out the lookup priority order: data descriptors beat instance variables, which beat non-data descriptors, which beat class variables, with __getattr__() as the final option if the class provides one.

This means an AttributeError is the final outcome of a multi-step search that came up empty at every level. Knowing this helps you diagnose where in the chain the lookup failed.

Here is the resolution chain in code form:

class Resolver:
    """Simplified model of Python's attribute resolution."""

    class_var = "I live on the class"

    def __init__(self):
        self.instance_var = "I live on the instance"

    def __getattr__(self, name):
        # Called ONLY if __getattribute__ raises AttributeError
        raise AttributeError(
            f"'{type(self).__name__}' object has no attribute '{name}'",
            name=name,
            obj=self
        )

obj = Resolver()

# These succeed at different stages of the chain:
print(obj.instance_var)   # Found in instance __dict__
print(obj.class_var)      # Found in class namespace

# This fails through the entire chain:
print(obj.nonexistent)    # AttributeError

How CPython Caches Attribute Lookups — and What Invalidates the Cache

Every attribute lookup you write — obj.x — eventually passes through PyObject_GenericGetAttr, which delegates to _PyObject_GenericGetAttrWithDict. That function calls _PyType_Lookup to walk the MRO looking for descriptors. Walking the MRO on every access would be slow for a language used as heavily as Python, so CPython maintains a global method cache to short-circuit repeated lookups.

The cache is a fixed-size hash table of exactly 4,096 entries (MCACHE_SIZE_EXP = 12 in Objects/typeobject.c). Each entry is keyed by XORing the type's tp_version_tag (a 32-bit integer that uniquely identifies the type's current definition) with the attribute name's internal memory address — not its hash, its address, exploiting the fact that Python interns frequently used strings. Attribute names longer than 100 characters (MCACHE_MAX_ATTR_SIZE) are never cached at all.

There is a second caching layer on top of this: Python 3.11's adaptive specializing interpreter (PEP 659) rewrites the LOAD_ATTR bytecode instruction itself after a code path executes enough times. A hot obj.x where obj is always the same type gets rewritten to LOAD_ATTR_SLOT (for __slots__ attributes) or LOAD_ATTR_WITH_HINT (for instance attributes with a known dictionary index), completely bypassing _PyType_Lookup. This is why attribute access in tight loops is significantly faster in Python 3.11+ than in earlier versions.

What breaks the cache

Any modification to a class at runtime — adding a method, assigning a class variable, changing a base class — causes CPython to call PyType_Modified(), which zeroes the tp_version_tag of that type and every subclass of it. This invalidates every entry in the 4,096-slot cache that referenced those types, and also de-optimizes all the specialized LOAD_ATTR bytecodes that were keyed to those types. Dynamic class modification in tight loops is therefore doubly expensive: it both defeats the global type cache and forces re-specialization of bytecodes. If you must modify a class at runtime, do it once during initialization, not repeatedly during execution.

There is one more consequence worth knowing: adding __getattr__ to a class forces CPython to use slot_tp_getattr_hook instead of the bare PyObject_GenericGetAttr for all attribute accesses on that class. The hook does additional work on every access — it must be prepared to call __getattr__ as a fallback. This means that defining __getattr__ on a frequently-accessed class carries a measurable overhead on every attribute access, not just the ones that actually fall back to __getattr__.
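Whether that overhead matters for your workload is easy to check with timeit. A minimal sketch (the class names are illustrative, and the exact numbers vary by machine and Python version):

```python
import timeit

class Plain:
    def __init__(self):
        self.x = 1

class Hooked:
    def __init__(self):
        self.x = 1

    def __getattr__(self, name):
        # Only called for missing attributes, but its mere presence
        # routes every attribute access through the slower hook path.
        raise AttributeError(name)

plain, hooked = Plain(), Hooked()

t_plain = timeit.timeit(lambda: plain.x, number=1_000_000)
t_hooked = timeit.timeit(lambda: hooked.x, number=1_000_000)
print(f"plain:  {t_plain:.3f}s")
print(f"hooked: {t_hooked:.3f}s")
```

On most builds the hooked class is measurably slower even though `__getattr__` never actually fires for `x`.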

Inspecting AttributeError Programmatically: .name and .obj

Since Python 3.10, every AttributeError instance carries two machine-readable attributes: .name (the string name that was looked up) and .obj (the object on which the lookup failed). CPython populates them automatically in _PyObject_GenericGetAttrWithDict via the internal set_attribute_error_context() function, and the same release added matching name= and obj= keyword-only arguments to AttributeError.__init__ so that code raising the error by hand can supply them.

This makes it possible to write error handlers that branch precisely on which attribute was missing, without parsing the error message string:

try:
    result = obj.compute()
except AttributeError as e:
    print(e.name)   # e.g. 'compute' — the attribute that was missing
    print(e.obj)    # the actual object, not just its type name
    print(type(e.obj).__name__)  # the class name

# Practical use: library code that needs to handle multiple missing attributes
# differently without string matching on the error message
try:
    value = config.timeout
except AttributeError as e:
    if e.name == 'timeout':
        value = 30   # supply a default
    else:
        raise        # re-raise if it was something unexpected
Implementation note

When you raise AttributeError yourself in __getattr__ or elsewhere, pass the name= and obj= keyword arguments so that your exceptions integrate cleanly with this system: raise AttributeError("...", name=name, obj=self). Without them, e.name and e.obj will be None on the caught exception, losing information that callers may need.


The Nine Most Common Causes

1. Typos and Case Sensitivity

Python is case-sensitive. my_list.Append(5) and my_list.append(5) are not the same call. This is the single most common trigger for the error, and it bites beginners and veterans alike.

data = {"users": [1, 2, 3]}

# Wrong: Python dict method is .keys(), not .Keys()
data.Keys()
# AttributeError: 'dict' object has no attribute 'Keys'

The fix is obvious once you spot it, but in a 500-line file, it can take time. Use your editor's autocomplete. Trust it more than your memory.
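You can build a rough version of the interpreter's later "Did you mean?" behaviour yourself with difflib from the standard library. A sketch (suggest_attribute is our own helper name, not a stdlib function):

```python
import difflib

def suggest_attribute(obj, name):
    """Return the closest real attribute name to a mistyped one, or None."""
    matches = difflib.get_close_matches(name, dir(obj), n=1, cutoff=0.6)
    return matches[0] if matches else None

print(suggest_attribute({}, "Keys"))    # keys
print(suggest_attribute([], "apend"))   # append
```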

2. Operating on NoneType

This is arguably the most frustrating variant. A function that was supposed to return a list or object returned None instead, and you tried to call a method on the result.

def find_user(user_id):
    users = {"alice": {"name": "Alice", "role": "admin"}}
    return users.get(user_id)  # Returns None if not found

user = find_user("bob")
print(user.upper())
# AttributeError: 'NoneType' object has no attribute 'upper'

The root cause is not the .upper() call — it is that find_user returned None and the calling code did not account for that. This pattern shows up constantly with dictionary .get() calls, database queries, and regex matches (re.search() returns None on no match).

# Defensive approach
user = find_user("bob")
if user is not None:
    print(user["name"].upper())
else:
    print("User not found")
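The same discipline applies to re.search, which returns None when nothing matches. The walrus operator (Python 3.8+) keeps the guard compact; the log line and pattern here are illustrative:

```python
import re

log_line = "ERROR 2024-05-01 disk full"

# Guard the match object before touching .group(): re.search returns None on no match
if (match := re.search(r"ERROR (\d{4}-\d{2}-\d{2})", log_line)):
    print(match.group(1))      # 2024-05-01
else:
    print("no error date found")
```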

3. Wrong Type Entirely

You expected a list but got an integer. You expected a string but got a dictionary. This happens when variables get reassigned or when function return types are not what you assumed. Here is the abstract version:

count = 10
count.append(11)
# AttributeError: 'int' object has no attribute 'append'

Trivial in isolation. In real code, the same failure looks like this — a variable named result starts as a list, gets silently reassigned to the integer return value of a function, and the reassignment is three lines away from the crash:

def process_batch(items):
    result = []
    for item in items:
        result = item.get('count', 0)   # BUG: reassigns result to an int
    result.append({'total': 'done'})    # AttributeError: 'int' has no 'append'

process_batch([{'count': 5}, {'count': 3}])

The traceback points at result.append(), but the real bug is the assignment inside the loop. type(result) inserted before the .append() call would reveal the int immediately. This pattern — using the same variable name for two semantically different things — is where type annotations earn their keep:

def process_batch(items: list[dict]) -> list[dict]:
    result: list[dict] = []
    for item in items:
        count: int = item.get('count', 0)   # separate variable, clear intent
        result.append({'count': count})
    return result

A static type checker like mypy would catch the original version at lint time, before the code ever runs.

4. Accessing Attributes Before They Exist

Instance attributes in Python are created at assignment time, not at class definition time. If your __init__ method conditionally creates an attribute, code that assumes the attribute always exists will fail.

class Connection:
    def __init__(self, host, use_ssl=False):
        self.host = host
        if use_ssl:
            self.certificate = load_cert()

conn = Connection("example.com", use_ssl=False)
print(conn.certificate)
# AttributeError: 'Connection' object has no attribute 'certificate'

The fix: always initialize all attributes in __init__, even if their initial value is None.

class Connection:
    def __init__(self, host, use_ssl=False):
        self.host = host
        self.certificate = load_cert() if use_ssl else None

If you are using the @dataclass decorator, the same rule applies — every field that might not be set must have a default value. A field with no default and no value passed at construction time will raise TypeError, not AttributeError, but a field only conditionally set inside a __post_init__ method produces the same conditional-existence bug:

from dataclasses import dataclass

@dataclass
class Connection:
    host: str
    use_ssl: bool = False
    certificate: str | None = None  # always present, conditionally valued

    def __post_init__(self):
        if self.use_ssl:
            self.certificate = load_cert()

5. Module Shadowing

This one has confused Python developers for years. You create a file named random.py or math.py in your project directory, and suddenly import random imports your file instead of the standard library module. The result is an AttributeError on functions that should exist.

# random.py (your local file)
import random
print(random.randint(1, 10))

$ python random.py
AttributeError: module 'random' has no attribute 'randint'

The random module Python loaded was your local random.py, not the standard library one. The function randint does not exist on your file because you never defined it.
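When you suspect shadowing, inspect the module's __file__ to see exactly which file Python actually loaded. A quick diagnostic:

```python
import random
import sys

# If this prints a path inside your project rather than the standard
# library, a local file is shadowing the real module.
print(random.__file__)

# The search order that makes shadowing possible: the script's directory
# comes before site-packages and the standard library.
print(sys.path[0])
```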

Starting with Python 3.13, the error message now explicitly tells you about this scenario, thanks to work by Pablo Galindo Salgado and other CPython contributors. The improved message reads:

AttributeError: module 'random' has no attribute 'randint'
(consider renaming '/home/you/random.py' if it has the same
name as a library you intended to import)

This improvement alone has saved countless hours of debugging for beginners.

The module shadowing problem has a more dangerous cousin in the real world. In supply chain attacks, threat actors publish malicious packages to PyPI with names that closely resemble legitimate ones — a technique called typosquatting. Once installed, these packages shadow or intercept the import you intended, executing attacker-controlled code with full access to your runtime environment. The QuickLens supply chain attack is a recent example of how this plays out against real users at scale.

Python 3.13 C-API: PyObject_GetOptionalAttr

A less-documented addition in Python 3.13 that matters to extension authors and library maintainers: the C API gained two new functions, PyObject_GetOptionalAttr and PyObject_GetOptionalAttrString, contributed by Serhiy Storchaka (gh-106521). These are alternatives to the older PyObject_GetAttr family.

The difference matters at the C level: PyObject_GetAttr raises AttributeError in C when an attribute is absent, requiring the caller to check and clear the exception manually. The new optional variants instead return 0 with a NULL output and no exception set when the attribute simply does not exist, reserving -1 for genuine errors. This makes the common pattern of "try to get an attribute, fall back to a default" significantly cleaner and faster in C extensions and in CPython's own internal code. If you write C extensions or embed Python, prefer these in 3.13+ when absent attributes are a normal condition rather than an error.

6. Name Mangling on Private Attributes

Attributes prefixed with double underscores (__name) undergo name mangling. Python transforms __name into _ClassName__name to prevent accidental conflicts in subclasses. If you access __name from outside the class, it will not resolve.

class Vault:
    def __init__(self):
        self.__secret = "hidden"

v = Vault()
print(v.__secret)
# AttributeError: 'Vault' object has no attribute '__secret'

# The attribute exists, but under a mangled name:
print(v._Vault__secret)  # "hidden"
Tip

Name mangling is a convention, not a security mechanism. If you just want to signal "this is internal," use a single underscore prefix (_name) instead. The single underscore is a universal Python convention that signals "private by agreement" without triggering the mangling behavior. Developers who treat name mangling as an access control boundary tend to build systems that are brittle in exactly the ways real attackers probe — access control belongs at the architecture level, not the naming level. For a grounded look at how threat actors actually bypass trust boundaries, the Scattered Spider 2025 attack chain is instructive.
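The purpose of mangling becomes visible when a subclass reuses the same double-underscore name: the two attributes coexist instead of colliding. A short demonstration:

```python
class Base:
    def __init__(self):
        self.__token = "base"        # stored as _Base__token

class Child(Base):
    def __init__(self):
        super().__init__()
        self.__token = "child"       # stored as _Child__token, so no collision

c = Child()
print(c._Base__token)    # base
print(c._Child__token)   # child
```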

7. Attributes Blocked by __slots__

When a class defines __slots__, Python replaces the per-instance __dict__ with a fixed set of slot descriptors. Any attempt to assign an attribute not listed in __slots__ raises an AttributeError — not just on reads, but on writes.

class Point:
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
p.z = 3
# AttributeError: 'Point' object has no attribute 'z'

# Also fails — __dict__ doesn't exist on slotted instances:
p.__dict__
# AttributeError: 'Point' object has no attribute '__dict__'

This trips up developers in two situations. First, when adding a new attribute to a class that already has __slots__ defined — you must add the name to __slots__ too, or the assignment fails. Second, when using vars() or libraries that rely on __dict__ (like some serialisation frameworks): since slotted instances have no __dict__, those tools raise AttributeError even though the instance itself is perfectly valid.

The fix for the first case is to add the attribute to __slots__. The fix for the second is to iterate __slots__ manually or add '__dict__' explicitly to __slots__ — though doing so largely defeats the memory savings that motivated using slots in the first place.
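A sketch of the manual-iteration approach (slots_vars is our own helper name): it walks the MRO collecting slot values and skips slots that were declared but never assigned. It assumes each __slots__ is a tuple of names, the common case.

```python
def slots_vars(obj):
    """A vars()-like dict for slotted instances (illustrative helper)."""
    out = {}
    for cls in type(obj).__mro__:
        for name in getattr(cls, '__slots__', ()):
            try:
                out[name] = getattr(obj, name)
            except AttributeError:
                pass   # slot declared but never assigned
    return out

class Point:
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x = x
        self.y = y

print(slots_vars(Point(1, 2)))   # {'x': 1, 'y': 2}
```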

Tip

__slots__ is not inherited automatically. If a subclass does not define its own __slots__, it gets a __dict__ back and the memory savings are lost. If a subclass defines __slots__, it inherits the parent slots plus its own — but only if the parent also used __slots__.
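The inheritance rule in action, as a sketch with illustrative class names:

```python
class Slotted:
    __slots__ = ('x',)

class Unslotted(Slotted):
    pass                     # defines no __slots__, so instances regain a __dict__

class AlsoSlotted(Slotted):
    __slots__ = ('y',)       # inherits slot 'x', adds slot 'y'

u = Unslotted()
u.anything = 1               # works: __dict__ is back (memory savings lost)

a = AlsoSlotted()
a.x, a.y = 1, 2              # both slots usable
try:
    a.z = 3
except AttributeError as e:
    print("blocked:", e)     # assignment outside the slots is rejected
```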

8. Failed Writes and Deletes: __setattr__ and __delattr__

AttributeError is not only a read-time error. It also fires when you try to write or delete an attribute that cannot be set or removed. Two situations produce this reliably: writing to a read-only property, and deleting a slot or a property that has no deleter defined.

class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        return self._radius

    # No setter defined

c = Circle(5)
print(c.radius)   # 5 — read works fine
c.radius = 10
# AttributeError: can't set attribute 'radius'
# (Python 3.11+: AttributeError: property 'radius' of 'Circle' object has no setter)

The fix is to define a setter if mutation is intentional, or to access the backing attribute directly (c._radius = 10) if you understand what you are bypassing.
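Adding a setter when mutation is intended looks like this (the validation rule is illustrative):

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        # A setter is also the natural place for validation
        if value <= 0:
            raise ValueError("radius must be positive")
        self._radius = value

c = Circle(5)
c.radius = 10        # now allowed: routed through the setter
print(c.radius)      # 10
```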

The delete case is equally common. Calling del obj.attr on a property with no deleter, or on a slot that currently holds no value, raises an AttributeError whose message is easy to misread as "the attribute does not exist"; the real meaning is "the attribute cannot be deleted through this interface."

del c.radius
# AttributeError: property 'radius' of 'Circle' object has no deleter

# __slots__ also blocks deletion:
class Point:
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
del p.x      # succeeds — the slot is now empty
print(p.x)   # AttributeError: 'Point' object has no attribute 'x'

9. Missing super().__init__() in Subclasses

This is one of the most disorienting variants because the traceback points to a line that looks perfectly reasonable. When a subclass overrides __init__ without calling super().__init__(), any attributes the parent class creates in its own __init__ are never set. Code that later expects those parent attributes to exist raises AttributeError.

class Animal:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def describe(self):
        return f"{self.name} is {'alive' if self.alive else 'not alive'}"

class Dog(Animal):
    def __init__(self, name, breed):
        # BUG: forgot to call super().__init__(name)
        self.breed = breed

d = Dog("Rex", "Labrador")
print(d.describe())
# AttributeError: 'Dog' object has no attribute 'name'

The traceback points to self.name inside describe() on the Animal class. It is not obvious that the real bug is three lines away in Dog.__init__. The fix is always to call super().__init__() with the expected arguments:

class Dog(Animal):
    def __init__(self, name, breed):
        super().__init__(name)   # parent attributes are now created
        self.breed = breed

d = Dog("Rex", "Labrador")
print(d.describe())   # Rex is alive

This variant is especially common in multiple-inheritance hierarchies. If any class in a cooperative inheritance chain fails to call super().__init__(), attributes from classes further up the MRO silently go uncreated. The rule is: in cooperative inheritance, always call super().__init__() even if you think the parent does nothing interesting.
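A sketch of the cooperative pattern with illustrative class names: each __init__ forwards unknown keyword arguments up the MRO, so every class in the chain gets to run and set its attributes.

```python
class Engine:
    def __init__(self, power=100, **kwargs):
        super().__init__(**kwargs)   # keeps the chain going up the MRO
        self.power = power

class Radio:
    def __init__(self, station="FM", **kwargs):
        super().__init__(**kwargs)
        self.station = station

class Car(Engine, Radio):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)   # runs Engine.__init__, then Radio.__init__
        self.wheels = 4

car = Car(power=150, station="AM")
print(car.power, car.station, car.wheels)   # 150 AM 4
```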


PEPs That Shaped AttributeError

Several Python Enhancement Proposals have directly influenced how AttributeError behaves, how it is reported, and how developers can customize it.

PEP 562 — Module __getattr__ and __dir__ (Python 3.7)

Authored by Ivan Levkivskyi and accepted in December 2017, PEP 562 introduced the ability to define __getattr__ at the module level. Before this PEP, customizing attribute access on a module required replacing the module object in sys.modules with a custom class — an awkward hack.

Under PEP 562, when a module attribute lookup fails through normal means, Python checks whether the module's __dict__ contains a callable named __getattr__. If one is present, Python calls it with the attribute name as its argument. The function must either return the requested value or raise AttributeError to signal that the attribute genuinely does not exist.

The primary motivation was managing deprecation warnings gracefully. Library authors can now deprecate module-level names without removing them immediately:

# mylib.py
from warnings import warn

new_function = lambda: "use this"

def __getattr__(name):
    if name == "old_function":
        warn(f"{name} is deprecated, use new_function", DeprecationWarning, stacklevel=2)
        return new_function
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

Another practical application is lazy loading of submodules. Heavy imports can be deferred until the submodule is actually accessed, which significantly improves startup time for large packages. The PEP includes an explicit example of this pattern using importlib.import_module inside the module-level __getattr__.
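The lazy-loading pattern can be sketched as follows. To keep the sketch self-contained it builds a synthetic module object ("lazypkg" is an invented name) rather than a real package on disk; in a real package this __getattr__ would simply live in __init__.py. PEP 562 looks the hook up by the name __getattr__ in the module's dict, which is why the assignment key matters rather than the function's own name.

```python
import importlib
import sys
import types

# Stand-in package module so the sketch runs without a real package on disk.
pkg = types.ModuleType("lazypkg")
sys.modules["lazypkg"] = pkg

_LAZY = {"json": "json"}   # attribute name -> module to import on first access

def _lazy_getattr(name):
    if name in _LAZY:
        mod = importlib.import_module(_LAZY[name])
        setattr(pkg, name, mod)        # cache: future accesses skip the hook
        return mod
    raise AttributeError(f"module {pkg.__name__!r} has no attribute {name!r}")

pkg.__getattr__ = _lazy_getattr        # the PEP 562 hook, found by name in the module dict

import lazypkg
print(lazypkg.json.dumps({"ok": True}))   # json is imported only at this point
```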

Guido van Rossum participated directly in the discussion around this PEP. In a November 2017 python-dev thread discussing the implementation details, he engaged with how Module.__getattribute__ should be structured and how the except AttributeError clause should invoke the new __getattr__ hook.

PEP 657 — Fine Grained Error Locations in Tracebacks (Python 3.11)

Authored by Pablo Galindo Salgado, Batuhan Taskaya, and Ammar Askar, PEP 657 transformed how tracebacks identify the source of errors. Before this PEP, a traceback pointed to a line. If that line contained multiple attribute accesses, you had no idea which one failed.

The PEP itself uses AttributeError as a motivating example. Consider this code:

foo(a.name, b.name, c.name)

Before PEP 657 (Python 3.10 and earlier):

Traceback (most recent call last):
  File "test.py", line 19, in <module>
    foo(a.name, b.name, c.name)
AttributeError: 'NoneType' object has no attribute 'name'

Which of the three objects — a, b, or c — was None? The traceback did not say.

After PEP 657 (Python 3.11+):

Traceback (most recent call last):
  File "test.py", line 17, in <module>
    foo(a.name, b.name, c.name)
                ^^^^^^
AttributeError: 'NoneType' object has no attribute 'name'

The carets under b.name make it immediately clear. The PEP accomplished this by mapping each bytecode instruction to start and end column offsets, a technique inspired by Java's JEP 358, which addressed the same ambiguity problem for NullPointerException. PEP 657 contrasts the two approaches: Java's JEP 358 solution needed to reverse-engineer compiled bytecode to recover column information, while Python could record column offsets directly during compilation without any post-hoc analysis.

"Did You Mean?" Suggestions (Python 3.10+)

While not a standalone PEP, the "Did you mean?" feature added to AttributeError messages in Python 3.10 deserves special attention. Contributed by Pablo Galindo Salgado (tracked as bpo-38530 on the CPython bug tracker), this feature uses string similarity matching to suggest corrections when you mistype an attribute name.

>>> import collections
>>> collections.namedtoplo
AttributeError: module 'collections' has no attribute 'namedtoplo'.
Did you mean: 'namedtuple'?

The implementation, refined by Dennis Sweeney in a follow-up commit, uses a multi-strategy approach: it first checks for an exact case-insensitive match, then applies Levenshtein distance with a 50% similarity threshold, and finally checks whether the sorted underscore-separated segments match.

Python 3.13 extended the "Did you mean?" system further — it now fires on instance method typos too, not just module attributes:

>>> "hello".spit()
AttributeError: 'str' object has no attribute 'spit'. Did you mean: 'split'?

In Python 3.12, the suggestion system was further extended. A common mistake inside a method body is referencing an instance attribute by its bare name — for example, writing blech instead of self.blech. Python 3.12 now raises a NameError in this case and suggests self.blech when an instance attribute by that exact name exists. This is distinct from an AttributeError but addresses the same root cause: forgetting the self. prefix.

Debugging Strategies and Deeper Solutions

How to Debug Python AttributeError: Step by Step

  1. Read the full error message

    Every AttributeError tells you two things: the type of the object and the attribute name it could not find. Read both before touching any code. On Python 3.11+, caret markers in the traceback highlight exactly which access on a chained line failed.

  2. Confirm the object's actual type with type()

    Call type(obj) on the object before the failing line. If it returns NoneType or a different class than expected, the bug is in the code that produced the object, not at the line where the error appears.

  3. Inspect available attributes with dir() and vars()

    Use dir(obj) to see all attributes including inherited ones. Use vars(obj) for the instance __dict__ only. Compare against the name in the error — look for typos, case differences, or a missing __init__ assignment.

  4. Use hasattr() or inspect.getattr_static() for safe inspection

    hasattr(obj, 'name') returns True or False without raising. Use getattr(obj, 'name', default) when you also want a fallback value. For properties with side effects or buggy getters, use inspect.getattr_static(obj, 'name') to inspect the raw stored value without executing any descriptor code.

  5. Check for module shadowing if the error is on an import

    If the AttributeError appears on a standard library module, check whether you have a local file with the same name — for example random.py or math.py. Rename or move the local file. Python 3.13 includes an explicit hint about this in the error message automatically.
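The checklist above condenses into a small triage helper (diagnose is our own illustrative name, not a stdlib function):

```python
import inspect

def diagnose(obj, name):
    """Print a quick triage for an AttributeError on obj.<name>."""
    print("actual type:   ", type(obj).__name__)
    print("in dir()?      ", name in dir(obj))
    try:
        raw = inspect.getattr_static(obj, name)
        print("static lookup: ", raw)    # found without running any getter
    except AttributeError:
        print("static lookup:  not found anywhere in the class tree")

diagnose(None, "append")   # the classic NoneType case
diagnose([], "apend")      # a typo case
```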

Use dir() and vars() to Inspect What Exists

When you get an AttributeError, the first question is: what attributes does this object have? The techniques below are specific to this error, but for a broader look at the full toolkit, see our guide to debugging Python code.

obj = some_function()

# See all attributes (including inherited ones)
print(dir(obj))

# See only instance attributes
print(vars(obj))

# Check for a specific attribute
print(hasattr(obj, 'target_attribute'))

Use type() to Confirm the Object's Class

Half the time, the object is not the type you think it is.

result = some_api_call()
print(type(result))  # <class 'NoneType'>  -- there is your problem

Use hasattr() for Defensive Access

The built-in hasattr(obj, name) function returns True or False without raising an exception. Under the hood, it simply attempts the attribute access and catches AttributeError.

if hasattr(response, 'json'):
    data = response.json()
else:
    data = response.text

The Hidden Cost of hasattr(): Side Effects and the Buggy Property Problem

A detail that surprises experienced developers: hasattr(obj, name) is not cheaper than getattr(obj, name). Under the hood, hasattr() simply calls getattr() and returns False if an AttributeError is raised — the full attribute lookup runs either way. This matters in two practical situations.

First, if the attribute is a property, the property's getter runs when you call hasattr(). Any side effects in that getter — database queries, network calls, state mutations — execute during your "safe" existence check.

Second, and more insidiously: if a property's getter raises AttributeError internally due to a bug, hasattr() silently returns False. The error is swallowed completely. The property exists, the getter ran, it crashed — and your code proceeds as though the attribute simply wasn't there:

class DataModel:
    @property
    def summary(self):
        # Bug: self._records was never initialized
        return self._records[0].title   # raises AttributeError internally

model = DataModel()

# hasattr silently swallows the AttributeError from inside the property:
print(hasattr(model, 'summary'))   # False — but the property EXISTS
                                   # The real error (missing _records) is gone

# getattr reveals the truth:
try:
    model.summary
except AttributeError as e:
    print(e)   # 'DataModel' object has no attribute '_records'

In Python 3, hasattr() was changed to catch only AttributeError; earlier versions swallowed all exceptions, including SystemExit and KeyboardInterrupt. Even so, the AttributeError blind spot remains, and note that getattr(obj, name, default) shares it: the default comes back whenever an AttributeError escapes the lookup, including one raised by a buggy getter. When you are writing code against objects you do not fully control, access the attribute directly inside try/except (as above) so the real traceback surfaces, or use inspect.getattr_static() to retrieve the raw attribute without executing any descriptor code.
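inspect.getattr_static retrieves the raw class-level object without triggering the descriptor protocol, so the buggy property from the example above is at least visible to it:

```python
import inspect

class DataModel:
    @property
    def summary(self):
        return self._records[0]   # bug: self._records never initialized

model = DataModel()

print(hasattr(model, 'summary'))                 # False (misleading)
print(inspect.getattr_static(model, 'summary'))  # the property object itself
```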

Use getattr() With a Default

getattr(obj, name, default) lets you provide a fallback value if the attribute does not exist, avoiding the error entirely. It is the programmatic equivalent of dict.get() for objects.

# Single attribute with fallback
timeout = getattr(config, 'timeout', 30)

# Useful when reading optional attributes from multiple sources
for attr in ('host', 'port', 'timeout', 'retries'):
    value = getattr(config, attr, None)
    if value is not None:
        print(f"{attr}: {value}")

Use try/except AttributeError for Duck Typing

Python's duck-typing philosophy favours trying the operation and handling failure over checking first. The canonical idiom is EAFP: Easier to Ask Forgiveness than Permission (for a broader look at Python's exception handling system, see our dedicated guide). This is often the right choice when you expect the attribute to exist in the common case and absence is genuinely exceptional:

# LBYL (Look Before You Leap) — the hasattr style
if hasattr(obj, 'render'):
    obj.render()

# EAFP (Easier to Ask Forgiveness than Permission) — the duck-typing style
try:
    obj.render()
except AttributeError:
    pass  # obj doesn't support rendering; that's fine

The EAFP approach is preferred when the attribute is expected to be present on the vast majority of objects — checking with hasattr on every iteration of a hot loop adds overhead compared to a single try/except that almost never fires. Use LBYL when absence is the normal case, not the exception.

Use breakpoint() for Interactive Inspection

When the error is hard to reproduce in isolation, drop a breakpoint() call directly before the failing line. This invokes pdb, Python's built-in debugger, and lets you inspect the live object interactively:

result = some_complex_function()
breakpoint()        # drops you into pdb at this line
result.process()    # the line that raises AttributeError

Inside pdb, run type(result) and dir(result) against the live object before the crash. The commands p result (print) and pp vars(result) (pretty-print the instance dict) give you a complete picture. breakpoint() was added in Python 3.7 as a cleaner alternative to import pdb; pdb.set_trace().

Distinguishing AttributeError from Related Errors

Four exceptions are frequently confused with each other because they all surface when you try to access something that is not there. The distinction matters for both debugging and for writing correct exception handlers.

Error          | What triggered it                                                | Example
AttributeError | Accessing a non-existent attribute on an object                  | obj.missing
KeyError       | Accessing a non-existent key in a dict using []                  | d['missing']
NameError      | Using a name that has not been defined in the current scope      | print(undefined_var)
TypeError      | Calling something that is not callable, or wrong argument types  | 42() or len(42)

The most common confusion is between AttributeError and KeyError. If you have a dictionary d and write d['key'], a missing key raises KeyError. If you write d.key, Python interprets this as an attribute access on the dict object itself, not a key lookup — and raises AttributeError: 'dict' object has no attribute 'key'. Neither is wrong syntactically, but they mean very different things and require different fixes.

d = {"name": "Alice"}

d['name']    # "Alice" — dict key access, KeyError if missing
d.name       # AttributeError — Python looks for a dict attribute named 'name'
d.get('name')  # "Alice" — dict method, returns None if missing (no exception)

The AttributeError / NameError confusion is most common when you forget self. inside a method. Writing name instead of self.name inside a method body raises NameError: name 'name' is not defined — not an AttributeError — because Python is looking up a local or global variable, not an attribute. Python 3.12 specifically added a hint for this case, suggesting self.name when a matching instance attribute exists.
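A small sketch of the forgotten-self mistake (Greeter is a hypothetical class):

```python
class Greeter:
    def __init__(self):
        self.name = "Alice"

    def greet(self):
        # Forgot self., so Python looks for a local or global variable
        # named 'name' and raises NameError, not AttributeError
        return "Hello, " + name

g = Greeter()
try:
    g.greet()
except NameError as e:
    print(type(e).__name__, "-", e)
# On Python 3.12+ the message also suggests: Did you mean: 'self.name'?
```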

When writing exception handlers, catch the most specific error you can:

# Catch only AttributeError — do not use a bare except
try:
    value = obj.optional_field
except AttributeError:
    value = default_value

# Both AttributeError and KeyError can legitimately need catching together
# when working with objects that might be either dicts or class instances:
try:
    value = obj['key']
except (AttributeError, KeyError, TypeError):
    value = default_value

Deeper Solutions Most Articles Do Not Cover

The standard advice — use hasattr(), add a try/except, check type() — addresses symptoms. The following tools address structure, and experienced Python developers tend to reach for them once the simpler approaches stop scaling.

inspect.getattr_static() — Bypass Descriptors Entirely

The standard getattr() runs the full resolution chain including descriptor __get__ calls, which means calling a property's getter, which means side effects. inspect.getattr_static(obj, name) bypasses all descriptor protocol calls and returns the raw object stored in the class or instance __dict__ directly. This is the correct tool when you need to introspect what is stored without executing any code:

import inspect

class Model:
    @property
    def summary(self):
        raise AttributeError("broken getter")  # simulated bug

m = Model()

# getattr() triggers the getter — AttributeError surfaces
# hasattr() swallows it silently — returns False

# getattr_static bypasses the getter entirely:
raw = inspect.getattr_static(m, 'summary', None)
print(raw)           # <property object at 0x...> — confirms it exists
print(type(raw))     # <class 'property'>
# Call the getter explicitly if needed:
print(type(m).summary.fget(m))  # now you see the real error

getattr_static() is also essential for debugging metaclass-heavy frameworks, ORM field definitions, and any codebase where attribute access carries heavy side effects. It works on instances, classes, and modules.

typing.Protocol — Structural Contracts Instead of hasattr() Chains

Code that uses multiple hasattr() checks to verify an object before using it is usually trying to enforce an interface without saying so. This is the essence of duck typing vs structural typing in Python. typing.Protocol (Python 3.8+) lets you declare that interface explicitly, and isinstance() checks work against it at runtime without requiring inheritance:

from typing import Protocol, runtime_checkable

@runtime_checkable
class Renderable(Protocol):
    def render(self) -> str: ...
    def get_width(self) -> int: ...

# Before: multiple hasattr() calls, no documentation of intent
def draw(obj):
    if hasattr(obj, 'render') and hasattr(obj, 'get_width'):
        obj.render()

# After: explicit, checkable, documented
def draw(obj: Renderable) -> None:
    if not isinstance(obj, Renderable):
        raise TypeError(f"Expected Renderable, got {type(obj).__name__}")
    obj.render()

Static type checkers like mypy enforce Protocol compliance at lint time, catching missing attributes before any code runs. This eliminates an entire class of runtime AttributeError — not by catching the error, but by making it structurally impossible.
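To make the structural aspect concrete, here is a self-contained sketch: neither class below inherits from Renderable, yet isinstance() matches on shape alone. Note that @runtime_checkable checks only that the methods exist, not their signatures; Button and Label are hypothetical examples.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Renderable(Protocol):
    def render(self) -> str: ...
    def get_width(self) -> int: ...

class Button:                      # no inheritance from Renderable
    def render(self) -> str:
        return "[Button]"
    def get_width(self) -> int:
        return 8

class Label:                       # missing get_width
    def render(self) -> str:
        return "Label"

print(isinstance(Button(), Renderable))   # True (structural match)
print(isinstance(Label(), Renderable))    # False (get_width missing)
```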

__init_subclass__ — Enforce Attribute Contracts at Class Creation Time

If you are building a base class and child classes must define certain attributes, __init_subclass__ lets you validate the contract when the subclass is defined — not when it is first instantiated. Errors surface at import time, before any object is ever created:

class Plugin:
    required_attrs = ('name', 'version', 'execute')

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for attr in cls.required_attrs:
            if not hasattr(cls, attr):
                raise AttributeError(
                    f"{cls.__name__} must define '{attr}'. "
                    f"Missing required Plugin attribute.",
                    name=attr,
                    obj=cls
                )

class ValidPlugin(Plugin):
    name = "my-plugin"
    version = "1.0"
    def execute(self): pass   # all three present — no error

class BrokenPlugin(Plugin):
    name = "broken"
    # version and execute missing
    # AttributeError raised here, at class definition time

This technique is used by Django's model metaclass, SQLAlchemy's declarative base, and Pydantic's model system — all of them validate attribute structure when the class is created, not when an instance is first used. The result is dramatically earlier error detection.

contextlib.suppress(AttributeError) — Intentional EAFP Without Noise

When you genuinely want to attempt an attribute access and silently move on if it is absent — and only in that case — contextlib.suppress communicates intent far more clearly than a try/except … pass block. It is also tightly scoped: only the named exceptions raised inside the with block are suppressed:

from contextlib import suppress

# Verbose EAFP — communicates nothing about intent
try:
    obj.cleanup()
except AttributeError:
    pass

# Explicit EAFP — communicates "this is intentionally optional"
with suppress(AttributeError):
    obj.cleanup()

# Particularly clean for optional initialization:
with suppress(AttributeError):
    self._cache.clear()   # only called if _cache exists

object.__getattribute__ Directly — Bypassing Hook Overhead

In performance-critical code on a class that defines __getattr__, you can call object.__getattribute__(self, name) directly to bypass slot_tp_getattr_hook and go straight to the generic lookup. This avoids the overhead of the hook machinery on accesses you know will succeed:

class LazyProxy:
    def __init__(self):
        self._cache = {}

    def _load(self, name: str) -> object:
        # Override in subclass: fetch the value for `name` from your data source
        raise NotImplementedError(f"_load not implemented for {name!r}")

    def __getattr__(self, name):
        if name not in self._cache:
            self._cache[name] = self._load(name)
        return self._cache[name]

    def get_cache_direct(self):
        # Bypass __getattr__ hook — _cache always exists after __init__
        # This avoids the slot_tp_getattr_hook overhead on a known attribute
        return object.__getattribute__(self, '_cache')

The same technique is used inside __setattr__ and __getattribute__ implementations to avoid infinite recursion — but it is also legitimately useful in tight loops where the hook overhead is measurable. Profile first: __getattr__ overhead only matters at millions of accesses per second.
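A rough way to measure the hook machinery's cost on your own interpreter; the classes are hypothetical and the difference is typically small, so profile before optimizing:

```python
import timeit

class WithHook:
    def __init__(self):
        self.x = 1
    def __getattr__(self, name):       # presence of this hook changes the lookup path
        raise AttributeError(name)

class NoHook:
    def __init__(self):
        self.x = 1

a, b = WithHook(), NoHook()
n = 500_000
# 'x' exists on both, so __getattr__ is never actually called;
# any difference comes from the hook dispatch machinery itself.
t_hook = timeit.timeit(lambda: a.x, number=n)
t_plain = timeit.timeit(lambda: b.x, number=n)
print(f"with __getattr__: {t_hook:.4f}s  without: {t_plain:.4f}s")
```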


Controlling AttributeError in Your Own Classes

__getattr__ — The Fallback Hook

Define __getattr__ when you want to handle lookups that fail through normal resolution. It is only called after __getattribute__ raises AttributeError, which means it will never intercept access to attributes that actually exist.

class FlexibleConfig:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __getattr__(self, name):
        return None  # Return None for any missing config key

config = FlexibleConfig(debug=True, host="localhost")
print(config.debug)    # True
print(config.port)     # None (handled by __getattr__)
Antipattern Warning

Returning None silently for every missing attribute swallows typos. config.timeoout returns None instead of raising, so the bug propagates silently until something downstream fails with a confusing error. Unless you genuinely want every missing key to be None (a config with sparse defaults), raise AttributeError for unknown names:

class StrictConfig:
    _VALID_KEYS = {'host', 'port', 'timeout', 'debug'}

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __getattr__(self, name):
        if name in self._VALID_KEYS:
            return None   # valid key, just not set
        raise AttributeError(
            f"'{type(self).__name__}' has no config key '{name}'. "
            f"Valid keys: {sorted(self._VALID_KEYS)}",
            name=name,
            obj=self
        )

config = StrictConfig(host="localhost")
print(config.port)      # None  — valid key, not set
print(config.timeoout)  # AttributeError with a helpful message

The __getattr__ + Property Bug Trap

There is a subtle and dangerous interaction between __getattr__ and properties that even experienced Python developers regularly fall into. When a property's getter raises AttributeError internally — due to a bug, not because the property is absent — CPython's slot_tp_getattr_hook cannot distinguish "the attribute does not exist" from "the attribute exists but its getter crashed." It treats both outcomes identically: it clears the exception and calls __getattr__ with the property's name instead.

The result is that your __getattr__ fallback is invoked for an attribute that actually exists, and the real error is silently discarded:

class Proxy:
    def __init__(self, data):
        self._data = data

    @property
    def value(self):
        # Bug: AttributeError raised inside the property getter
        return self._data.missing_field   # AttributeError here

    def __getattr__(self, name):
        # This gets called for 'value' because the property raised AttributeError.
        # The real error (missing_field on _data) is gone.
        return f"fallback for {name}"

p = Proxy(object())
print(p.value)
# Prints: "fallback for value"
# The property exists — but its internal AttributeError triggered __getattr__.
Why this is hard to debug

This failure mode produces no traceback for the real error. The code appears to work — it returns a value — but it returns the wrong one. The bug inside the property is completely invisible. The reliable diagnostic is to call the property directly: type(obj).value.fget(obj), which invokes the getter bypassing the __getattr__ fallback and surfaces the real exception.

This is a known CPython behavior tracked in issue #103936. The core difficulty is that the lookup machinery cannot distinguish a legitimate "attribute absent" signal from an accidental one raised inside a getter. If you define __getattr__ on a class, be aware that it will silently absorb AttributeErrors from any of your properties' getter code.
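The diagnostic in action, as a self-contained sketch that reuses the Proxy class from above:

```python
class Proxy:
    def __init__(self, data):
        self._data = data

    @property
    def value(self):
        return self._data.missing_field   # bug inside the getter

    def __getattr__(self, name):
        return f"fallback for {name}"

p = Proxy(object())
print(p.value)                    # prints "fallback for value"; the bug is masked

try:
    type(p).value.fget(p)         # call the getter directly, bypassing __getattr__
except AttributeError as e:
    print("real error:", e)       # the missing_field failure finally surfaces
```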

Raising AttributeError Intentionally

Library authors and framework developers need to raise AttributeError themselves — for example, when implementing proxy objects, lazy-loading wrappers, or read-only attributes. The convention is to match the format Python itself uses: type name, then attribute name, in a way that is immediately actionable.

# Match Python's own message format
raise AttributeError(
    f"'{type(obj).__name__}' object has no attribute '{name}'"
)

# For a read-only property, match the property descriptor message format
raise AttributeError(
    f"property '{name}' of '{type(obj).__name__}' has no setter"
)

If you are implementing __getattr__ as a lazy loader or proxy, there is one critical rule: always re-raise as AttributeError, never swallow the exception and return None. Code that calls hasattr() relies on AttributeError being raised to determine that the attribute is absent. If your __getattr__ returns None instead of raising, hasattr() returns True for every name, which breaks any caller that uses hasattr() to branch on capability:

class BrokenProxy:
    def __getattr__(self, name):
        return None  # BAD: hasattr() now returns True for everything

class CorrectProxy:
    def __getattr__(self, name):
        raise AttributeError(
            f"'{type(self).__name__}' object has no attribute '{name}'",
            name=name,
            obj=self
        )

broken = BrokenProxy()
correct = CorrectProxy()

print(hasattr(broken, 'nonexistent'))   # True  — misleading
print(hasattr(correct, 'nonexistent'))  # False — correct

Python 3.11 introduced an additional optional keyword argument: AttributeError now accepts name and obj keyword arguments that power the improved traceback display. If you are on 3.11+, passing these makes your custom errors integrate cleanly with Python's traceback machinery:

# Python 3.11+ — name and obj kwargs feed the improved error display
raise AttributeError(
    f"'{type(self).__name__}' object has no attribute '{name}'",
    name=name,
    obj=self
)

__getattribute__ — Total Control (Use With Caution)

__getattribute__ intercepts every attribute access, whether the attribute exists or not. This is powerful but dangerous — it is trivially easy to create infinite recursion.

class AuditedObject:
    def __init__(self):
        object.__setattr__(self, '_log', [])
        object.__setattr__(self, 'value', 42)

    def __getattribute__(self, name):
        if name != '_log':
            log = object.__getattribute__(self, '_log')
            log.append(f"Accessed: {name}")
        return object.__getattribute__(self, name)

obj = AuditedObject()
print(obj.value)       # 42
print(obj._log)        # ['Accessed: value']
Common Pitfall

Notice the explicit calls to object.__getattribute__ to avoid infinite recursion. If you wrote self._log inside __getattribute__, it would call __getattribute__ again, which would try to access self._log, and so on forever.
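A deliberately broken sketch shows the failure mode:

```python
class Broken:
    def __getattribute__(self, name):
        # BUG: self.__dict__ re-enters __getattribute__ for '__dict__', forever
        return self.__dict__[name]

b = Broken()
try:
    b.anything
except RecursionError:
    print("RecursionError: __getattribute__ kept calling itself")
```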



Real-World Debugging Walkthroughs

Scenario 1: The Chained None

Let us work through a scenario that combines multiple causes. You have a data processing pipeline:

import csv

class DataPipeline:
    def __init__(self, filepath):
        self.filepath = filepath
        self.records = []

    def load(self):
        with open(self.filepath) as f:
            reader = csv.DictReader(f)
            self.records = list(reader)

    def transform(self):
        for record in self.records:
            record['full_name'] = record['first'] + ' ' + record['last']
            record['email'] = record['full_name'].replace(' ', '.').lower()

    def get_emails(self):
        return [r.get('email') for r in self.records]


pipeline = DataPipeline("users.csv")
pipeline.load()
pipeline.transform()
emails = pipeline.get_emails()
print(emails.sort())
# AttributeError?  No -- but a subtle bug.
# list.sort() returns None, not the sorted list.
# So 'emails' is sorted in place, but print() shows None.

# Now suppose someone writes:
print(emails.sort().join(', '))
# AttributeError: 'NoneType' object has no attribute 'join'

The chain of events: .sort() returns None because it sorts in place. Calling .join() on None produces an AttributeError. The fix depends on what you actually wanted:

# Option 1: Sort in place, then use the list
emails.sort()
print(', '.join(emails))

# Option 2: Get a sorted copy
print(', '.join(sorted(emails)))

This pattern — calling a method that returns None and then chaining another call onto it — is one of the most common sources of NoneType AttributeError in Python. Other methods that return None include list.append(), list.extend(), dict.update(), and set.add().

Scenario 2: The API Response That Isn't What You Think

This one appears constantly in code that talks to HTTP APIs. The requests library's response.json() returns a parsed Python object — but whether that object is a dict, a list, or something else depends entirely on what the server sent back.

import requests

resp = requests.get("https://api.example.com/users")
data = resp.json()

# Developer assumes API returns {"users": [...]}
print(data['users'][0].get('email'))

# But the API returned an error body:
# {"error": "unauthorized", "code": 401}
# So data['users'] raises KeyError, not AttributeError.

# More subtle version — API returns a list on success:
# [{"id": 1, "email": "user@example.com"}, ...]
# Now data is a list, and data['users'] raises:
# TypeError: list indices must be integers, not str

# But this raises AttributeError:
print(data.get('users'))
# AttributeError: 'list' object has no attribute 'get'

The fix is to inspect the response before assuming its shape:

resp = requests.get("https://api.example.com/users")
resp.raise_for_status()   # raises HTTPError on 4xx/5xx before you touch the body
data = resp.json()

if isinstance(data, list):
    users = data
elif isinstance(data, dict):
    users = data.get('users', [])
else:
    raise ValueError(f"Unexpected API response type: {type(data)}")

for user in users:
    print(user.get('email'))

Using isinstance() to confirm the type before branching your logic is the correct pattern when you do not control the API contract. It turns a confusing AttributeError: 'list' object has no attribute 'get' into an explicit, readable conditional.

This kind of defensive type validation is also a secure coding practice, not just a debugging habit. Unvalidated API responses are an attack surface: a server you do not control can return malformed or manipulated data, and code that blindly trusts the shape of that response can be made to behave unexpectedly.

Scenario 3: The Unpickled Object Whose Class Has Changed

This one appears in production and almost never in development, which makes it painful to diagnose. You serialize an object with pickle, deploy new code that adds or renames attributes on the class, and then deserialize the old data. Python reconstructs the object by calling __new__ (bypassing __init__) and then populating __dict__ directly from the pickle data. Attributes that exist in the new class definition but were not present when the object was originally pickled simply do not exist on the deserialized instance.

import pickle

# Version 1 of the class — what was pickled
class UserProfile:
    def __init__(self, username):
        self.username = username

user = UserProfile("alice")
data = pickle.dumps(user)

# Later: Version 2 adds a new required attribute
class UserProfile:
    def __init__(self, username):
        self.username = username
        self.is_verified = False   # new in v2

# Deserializing old v1 data under v2 class definition
restored = pickle.loads(data)
print(restored.is_verified)
# AttributeError: 'UserProfile' object has no attribute 'is_verified'

The standard fix is to implement __setstate__, which pickle calls after reconstruction. Use it to supply defaults for any attribute that might be missing from older serialized data:

class UserProfile:
    def __init__(self, username):
        self.username = username
        self.is_verified = False

    def __setstate__(self, state):
        # Restore the pickled attributes
        self.__dict__.update(state)
        # Supply defaults for anything missing in old data
        self.__dict__.setdefault('is_verified', False)

restored = pickle.loads(data)
print(restored.is_verified)   # False — default applied cleanly

The same pattern applies to any serialization library that bypasses __init__ during reconstruction — including copy.copy() and copy.deepcopy() when combined with classes that rely on __init__ side effects to create attributes. If you use dataclasses or Pydantic models, serialization frameworks handle this more gracefully through schema versioning, but pure pickle workflows require manual migration handling via __setstate__.


Key Takeaways

The AttributeError is not a mystery. It is the predictable outcome of Python's attribute resolution chain finding nothing at any level. The error always tells you two things: the type of the object and the name it could not find. Read both pieces of information carefully before touching any code.

Always initialize all instance attributes in __init__, even to None. Never name your files after standard library modules. If a class uses __slots__, remember that new attributes must be declared there and that __dict__ will not exist. Use dir() and type() when debugging. Drop a breakpoint() before the failing line to inspect the live object. Leverage hasattr() and getattr() for defensive programming, or use the EAFP pattern with try/except AttributeError when absence is genuinely exceptional rather than routine.

When writing __getattr__, resist the temptation to return None for everything unknown — it turns typos into silent bugs. Raise AttributeError with a useful message for names that are truly not recognised. And if you are on Python 3.11 or later, pay close attention to the caret indicators and "Did you mean?" suggestions — they represent years of careful work by CPython contributors like Pablo Galindo Salgado, Dennis Sweeney, and others to make this error less painful.

The error will never go away entirely. In a dynamically typed language, it is the price you pay for flexibility. But understanding why it happens — really understanding the resolution chain, the dunder methods, and the common pitfalls — means the error becomes a signpost rather than a roadblock.

That is the difference between reading the error and comprehending it.

Frequently Asked Questions

What causes an AttributeError in Python?

Python's AttributeError fires when the entire attribute resolution chain comes up empty. The most common causes include: a typo or case error in the attribute name; operating on a NoneType value returned by a function; type drift where a variable gets reassigned to a different type; accessing an instance attribute before it has been created in __init__; module shadowing where a local file shares the name of a standard library module; name mangling on double-underscore attributes accessed from outside the class; and attributes blocked by __slots__ when a class uses fixed slot descriptors instead of __dict__.

What does "'NoneType' object has no attribute" mean?

This means a function or method returned None instead of the expected object. Common sources include dict.get() when the key is absent, re.search() when no match is found, and in-place list methods like list.sort() and list.reverse() which always return None. Use type(result) to confirm before chaining calls. Guard against None with if result is not None, or use getattr(obj, 'name', default) as a safe fallback.
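A compact illustration of the re.search() case:

```python
import re

m = re.search(r"\d+", "no digits here")
print(m)          # None: there is no match object to call methods on
try:
    m.group()
except AttributeError as e:
    print(e)      # 'NoneType' object has no attribute 'group'
```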

How does Python resolve an attribute lookup?

When you write obj.x, Python follows this chain: (1) object.__getattribute__() is called first; (2) data descriptors in the class hierarchy get highest priority; (3) the instance's __dict__ is checked; (4) non-data descriptors and class-level attributes are checked in MRO order; (5) if everything fails and __getattr__ is defined, it is called as a last resort; (6) if __getattr__ is not defined or also raises AttributeError, the exception propagates. An AttributeError is the result of the entire chain finding nothing.
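The priority of data descriptors over the instance __dict__ (step 2 beating step 3) can be demonstrated directly:

```python
class C:
    @property                        # a property is a data descriptor
    def x(self):
        return "from descriptor"

c = C()
c.__dict__['x'] = "from instance"    # plant a value in the instance dict
print(c.x)                           # prints "from descriptor": the property wins

del C.x                              # remove the descriptor from the class
print(c.x)                           # prints "from instance": now the dict entry is found
```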

What is the difference between __getattr__ and __getattribute__?

__getattr__ is a fallback hook called only after normal attribute lookup fails — it never intercepts attributes that already exist. __getattribute__ intercepts every single attribute access whether the attribute exists or not. Use __getattr__ for optional or computed attributes. Use __getattribute__ only when you need auditing or access control on every access, and always delegate to object.__getattribute__ explicitly to avoid infinite recursion.

How do I inspect an object when debugging an AttributeError?

Use dir(obj) to see all attributes including inherited ones. Use vars(obj) for the instance __dict__ only. Use type(obj) to confirm the actual class — it is often not what you expect. Use hasattr(obj, 'name') to safely test before accessing. On Python 3.10+, the error message may include a "Did you mean?" suggestion. On Python 3.11+, caret markers in the traceback highlight exactly which attribute access on a chained line failed.

How have recent Python versions improved AttributeError?

Python 3.10 added "Did you mean?" suggestions using string similarity matching (bpo-38530). Python 3.11 introduced fine-grained column offsets in tracebacks via PEP 657. Python 3.12 extended the suggestion system to NameError inside methods, suggesting self.attr when a bare name matches an instance attribute. Python 3.13 added a module shadowing hint in the error message and introduced PyObject_GetOptionalAttr in the C API as an alternative to PyObject_GetAttr that does not raise AttributeError when an attribute is absent.

Can AttributeError be caught with try/except?

Yes. AttributeError is a standard Python exception and can be caught with try/except AttributeError like any other. This is the basis of the EAFP idiom and also how hasattr() works internally — it calls getattr() and catches AttributeError to return True or False. When writing your own handlers, catch AttributeError specifically rather than using a bare except, and only catch it when absence is a genuinely expected condition. If you are catching it to suppress a programming error (a typo, a wrong type), fix the bug instead.

How is AttributeError different from NameError?

AttributeError fires when you access a name on an object using dot notation — obj.name — and that attribute does not exist. NameError fires when you use a name directly in the current scope and Python cannot find it as a local variable, an enclosing variable, a global, or a builtin. The practical confusion arises inside class methods: writing name instead of self.name raises NameError, not AttributeError, because Python is looking in the local scope, not on the object. Python 3.12 added a specific hint for this: if name matches a known instance attribute, the NameError message will suggest self.name.

Why does unpickling an object raise AttributeError?

When Python unpickles an object, it reconstructs the instance by calling __new__ (skipping __init__) and then restoring __dict__ from the serialized data. If the class definition has changed since the object was pickled — for example, a new attribute was added — that attribute will not be in the old serialized data and will not exist on the restored instance. Any code that accesses it raises AttributeError. The fix is to implement __setstate__ on the class and use self.__dict__.setdefault('new_attr', default) to supply defaults for attributes missing from old pickle data.

How is AttributeError different from KeyError?

AttributeError is raised when dot-notation access fails on an object: obj.missing. KeyError is raised when bracket-notation key lookup fails on a mapping: d['missing']. The confusion arises because both look like "something is not there." On a dictionary, writing d.key instead of d['key'] produces AttributeError because Python interprets it as looking for a dict method named key, not a stored value. Use d['key'] for dict values and d.get('key') for safe fallback access. Use dot notation only for object attributes and methods.

References

  1. Python Software Foundation. Data Model — Customizing Attribute Access. Python 3 Documentation. docs.python.org/3/reference/datamodel.html
  2. Ivan Levkivskyi. PEP 562 — Module __getattr__ and __dir__. Accepted December 2017 (Python 3.7). peps.python.org/pep-0562/
  3. Pablo Galindo Salgado, Batuhan Taskaya, Ammar Askar. PEP 657 — Include Fine Grained Error Locations in Tracebacks. Final (Python 3.11). peps.python.org/pep-0657/
  4. Python Software Foundation. What's New In Python 3.10 — Better Error Messages (bpo-38530, contributed by Pablo Galindo; refined by Dennis Sweeney). docs.python.org/3/whatsnew/3.10.html
  5. Python Software Foundation. What's New In Python 3.12 — Improved NameError Suggestions for Instances (gh-99139, contributed by Pablo Galindo). docs.python.org/3/whatsnew/3.12.html
  6. Python Software Foundation. What's New In Python 3.13 — Module Shadowing Hint in AttributeError. docs.python.org/3/whatsnew/3.13.html
Certificate of Completion
Final Exam
Pass mark: 80% · Score 80% or higher to receive your certificate

Enter your name as you want it to appear on your certificate, then start the exam. Your name is used only to generate your certificate and is never transmitted or stored anywhere.
