Decorators - Advanced Topics - Learning Python (2013)

Part VIII. Advanced Topics

Chapter 39. Decorators

In the advanced class topics chapter of this book (Chapter 32), we met static and class methods, took a quick look at the @ decorator syntax Python offers for declaring them, and previewed decorator coding techniques. We also met function decorators briefly in Chapter 38, while exploring the property built-in’s ability to serve as one, and in Chapter 29 while studying the notion of abstract superclasses.

This chapter picks up where this previous decorator coverage left off. Here, we’ll dig deeper into the inner workings of decorators and study more advanced ways to code new decorators ourselves. As we’ll see, many of the concepts we studied earlier—especially state retention—show up regularly in decorators.

This is a somewhat advanced topic, and decorator construction tends to be of more interest to tool builders than to application programmers. Still, given that decorators are becoming increasingly common in popular Python frameworks, a basic understanding can help demystify their role, even if you’re just a decorator user.

Besides covering decorator construction details, this chapter serves as a more realistic case study of Python in action. Because its examples grow somewhat larger than most of the others we’ve seen in this book, they better illustrate how code comes together into more complete systems and tools. As an extra perk, some of the code we’ll write here may be used as general-purpose tools in your day-to-day programs.

What’s a Decorator?

Decoration is a way to specify management or augmentation code for functions and classes. Decorators themselves take the form of callable objects (e.g., functions) that process other callable objects. As we saw earlier in this book, Python decorators come in two related flavors, neither of which requires 3.X or new-style classes:

§ Function decorators, added in Python 2.4, do name rebinding at function definition time, providing a layer of logic that can manage functions and methods, or later calls to them.

§ Class decorators, added in Python 2.6 and 3.0, do name rebinding at class definition time, providing a layer of logic that can manage classes, or the instances created by later calls to them.

In short, decorators provide a way to insert automatically run code at the end of function and class definition statements—at the end of a def for function decorators, and at the end of a class for class decorators. Such code can play a variety of roles, as described in the following sections.

Managing Calls and Instances

In typical use, this automatically run code may be used to augment calls to functions and classes. It arranges this by installing wrapper (a.k.a. proxy) objects to be invoked later:

Call proxies

Function decorators install wrapper objects to intercept later function calls and process them as needed, usually passing the call on to the original function to run the managed action.

Interface proxies

Class decorators install wrapper objects to intercept later instance creation calls and process them as required, usually passing the call on to the original class to create a managed instance.

Decorators achieve these effects by automatically rebinding function and class names to other callables, at the end of def and class statements. When later invoked, these callables can perform tasks such as tracing and timing function calls, managing access to class instance attributes, and so on.

Managing Functions and Classes

Although most examples in this chapter deal with using wrappers to intercept later calls to functions and classes, this is not the only way decorators can be used:

Function managers

Function decorators can also be used to manage function objects, instead of or in addition to later calls to them—to register a function to an API, for instance. Our primary focus here, though, will be on their more commonly used call wrapper application.

Class managers

Class decorators can also be used to manage class objects directly, instead of or in addition to instance creation calls—to augment a class with new methods, for example. Because this role intersects strongly with that of metaclasses, we’ll see additional use cases in the next chapter. As we’ll find, both tools run at the end of the class creation process, but class decorators often offer a lighter-weight solution.

In other words, function decorators can be used to manage both function calls and function objects, and class decorators can be used to manage both class instances and classes themselves. By returning the decorated object itself instead of a wrapper, decorators become a simple post-creation step for functions and classes.

Regardless of the role they play, decorators provide a convenient and explicit way to code tools useful both during program development and in live production systems.

Using and Defining Decorators

Depending on your job description, you might encounter decorators as a user or a provider (you might also be a maintainer, but that just means you straddle the fence). As we’ve seen, Python itself comes with built-in decorators that have specialized roles—static and class method declaration, property creation, and more. In addition, many popular Python toolkits include decorators to perform tasks such as managing database or user-interface logic. In such cases, we can get by without knowing how the decorators are coded.

For more general tasks, programmers can code arbitrary decorators of their own. For example, function decorators may be used to augment functions with code that adds call tracing or logging, performs argument validity testing during debugging, automatically acquires and releases thread locks, times calls made to functions for optimization, and so on. Any behavior you can imagine adding to—really, wrapping around—a function call is a candidate for custom function decorators.

On the other hand, function decorators are designed to augment only a specific function or method call, not an entire object interface. Class decorators fill the latter role better—because they can intercept instance creation calls, they can be used to implement arbitrary object interface augmentation or management tasks. For example, custom class decorators can trace, validate, or otherwise augment every attribute reference made for an object. They can also be used to implement proxy objects, singleton classes, and other common coding patterns. In fact, we’ll find that many class decorators bear a strong resemblance to—and in fact are a prime application of—the delegation coding pattern we met in Chapter 31.

Why Decorators?

Like many advanced Python tools, decorators are never strictly required from a purely technical perspective: we can often implement their functionality instead using simple helper function calls or other techniques. And at a base level, we can always manually code the name rebinding that decorators perform automatically.

That said, decorators provide an explicit syntax for such tasks, which makes intent clearer, can minimize augmentation code redundancy, and may help ensure correct API usage:

§ Decorators have a very explicit syntax, which makes them easier to spot than helper function calls that may be arbitrarily far-removed from the subject functions or classes.

§ Decorators are applied once, when the subject function or class is defined; it’s not necessary to add extra code at every call to the class or function, which may have to be changed in the future.

§ Because of both of the prior points, decorators make it less likely that a user of an API will forget to augment a function or class according to API requirements.

In other words, beyond their technical model, decorators offer some advantages in terms of both code maintenance and consistency. Moreover, as structuring tools, decorators naturally foster encapsulation of code, which reduces redundancy and makes future changes easier.

Decorators do have some potential drawbacks, too—when they insert wrapper logic, they can alter the types of the decorated objects, and they may incur extra calls when used as call or interface proxies. On the other hand, the same considerations apply to any technique that adds wrapping logic to objects.

We’ll explore these tradeoffs in the context of real code later in this chapter. Although the choice to use decorators is still somewhat subjective, their advantages are compelling enough that they are quickly becoming best practice in the Python world. To help you decide for yourself, let’s turn to the details.

NOTE

Decorators versus macros: Python’s decorators bear similarities to what some call aspect-oriented programming in other languages—code inserted to run automatically before or after a function call runs. Their syntax also very closely resembles (and is likely borrowed from) Java’s annotations, though Python’s model is usually considered more flexible and general.

Some liken decorators to macros too, but this isn’t entirely apt, and might even be misleading. Macros (e.g., C’s #define preprocessor directive) are typically associated with textual replacement and expansion, and designed for generating code. By contrast, Python’s decorators are a runtime operation, based upon name rebinding, callable objects, and often, proxies. While the two may have use cases that sometimes overlap, decorators and macros are fundamentally different in scope, implementation, and coding patterns. Comparing the two seems akin to comparing Python’s import with a C #include, which similarly confuses a runtime object-based operation with text insertion.

Of course, the term macro has been a bit diluted over time—to some, it now can also refer to any canned series of steps or procedure—and users of other languages might find the analogy to macros useful anyhow. But they should probably also keep in mind that decorators are about callable objects managing callable objects, not text expansion. Python tends to be best understood and used in terms of Python idioms.

The Basics

Let’s get started with a first-pass look at decoration behavior from a symbolic perspective. We’ll write real and more substantial code soon, but since most of the magic of decorators boils down to an automatic rebinding operation, it’s important to understand this mapping first.

Function Decorators

Function decorators have been available in Python since version 2.4. As we saw earlier in this book, they are largely just syntactic sugar that runs one function through another at the end of a def statement, and rebinds the original function name to the result.

Usage

A function decorator is a kind of runtime declaration about the function whose definition follows. The decorator is coded on a line just before the def statement that defines a function or method, and it consists of the @ symbol followed by a reference to a metafunction—a function (or other callable object) that manages another function.

In terms of code, function decorators automatically map the following syntax:

@decorator                    # Decorate function
def F(arg):
    ...

F(99)                         # Call function

into this equivalent form, where decorator is a one-argument callable object that returns a callable object with the same number of arguments as F (if not F itself):

def F(arg):
    ...

F = decorator(F)              # Rebind function name to decorator result
F(99)                         # Essentially calls decorator(F)(99)

This automatic name rebinding works on any def statement, whether it’s for a simple function or a method within a class. When the function F is later called, it’s actually calling the object returned by the decorator, which may be either another object that implements required wrapping logic, or the original function itself.

In other words, decoration essentially maps the first of the following into the second—though the decorator is really run only once, at decoration time:

func(6, 7)

decorator(func)(6, 7)

This automatic name rebinding accounts for the static method and property decoration syntax we met earlier in the book:

class C:
    @staticmethod
    def meth(...): ...        # meth = staticmethod(meth)

class C:
    @property
    def name(self): ...       # name = property(name)

In both cases, the method name is rebound to the result of a built-in function decorator, at the end of the def statement. Calling the original name later invokes whatever object the decorator returns. In these specific cases, the original names are rebound to a static method router and property descriptor, but the process is much more general than this—as the next section explains.
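To see the mapping concretely, here is a minimal runnable sketch (the classes and values are illustrative, not from the book's files); both spellings behave identically:

```python
class C1:
    @staticmethod                     # Decorator form
    def meth(x):
        return x * 2

class C2:
    def meth(x):
        return x * 2
    meth = staticmethod(meth)         # What the @ line does automatically

print(C1.meth(3), C2.meth(3))         # Both print 6
```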

Implementation

A decorator itself is a callable that returns a callable. That is, it returns the object to be called later when the decorated function is invoked through its original name—either a wrapper object to intercept later calls, or the original function augmented in some way. In fact, decorators can be any type of callable and return any type of callable: any combination of functions and classes may be used, though some are better suited to certain contexts.

For example, to tap into the decoration protocol in order to manage a function just after it is created, we might code a decorator of this form:

def decorator(F):
    # Process function F
    return F

@decorator
def func(): ...               # func = decorator(func)

Because the original decorated function is assigned back to its name, this simply adds a post-creation step to function definition. Such a structure might be used to register a function to an API, assign function attributes, and so on.
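As a concrete sketch of that registration idea (the registry name and functions here are hypothetical, not part of any real API):

```python
registry = {}                         # Hypothetical API table

def register(F):
    registry[F.__name__] = F          # Post-creation step: record the function
    return F                          # No wrapper: later calls run the original

@register
def spam(x):
    return x * 2

print(registry['spam'](3))            # 6: fetched from the registry
print(spam(3))                        # 6: the name still refers to the original
```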

In more typical use, to insert logic that intercepts later calls to a function, we might code a decorator to return a different object than the original function—a proxy for later calls:

def decorator(F):
    # Save or use function F
    # Return a different callable: nested def, class with __call__, etc.

@decorator
def func(): ...               # func = decorator(func)

This decorator is invoked at decoration time, and the callable it returns is invoked when the original function name is later called. The decorator itself receives the decorated function; the callable returned receives whatever arguments are later passed to the decorated function’s name. When coded properly, this works the same for class-level methods: the implied instance object simply shows up in the first argument of the returned callable.

In skeleton terms, here’s one common coding pattern that captures this idea—the decorator returns a wrapper that retains the original function in an enclosing scope:

def decorator(F):             # On @ decoration
    def wrapper(*args):       # On wrapped function call
        # Use F and args
        # F(*args) calls original function
    return wrapper

@decorator                    # func = decorator(func)
def func(x, y):               # func is passed to decorator's F
    ...

func(6, 7)                    # 6, 7 are passed to wrapper's *args

When the name func is later called, it really invokes the wrapper function returned by decorator; the wrapper function can then run the original func because it is still available in an enclosing scope. When coded this way, each decorated function produces a new scope to retain state.
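Filling in the skeleton's comment lines yields a minimal working version, assuming a simple print-and-pass-through role (the trace and add names are illustrative):

```python
def trace(F):                         # On @ decoration: F saved in enclosing scope
    def wrapper(*args):               # On later calls
        print('calling %s' % F.__name__)
        return F(*args)               # Run the original, pass back its result
    return wrapper

@trace
def add(x, y):
    return x + y

print(add(6, 7))                      # Prints "calling add", then 13
```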

To do the same with classes, we can overload the call operation and use instance attributes instead of enclosing scopes:

class decorator:
    def __init__(self, func): # On @ decoration
        self.func = func
    def __call__(self, *args): # On wrapped function call
        # Use self.func and args
        # self.func(*args) calls original function

@decorator
def func(x, y):               # func = decorator(func)
    ...                       # func is passed to __init__

func(6, 7)                    # 6, 7 are passed to __call__'s *args

When the name func is later called now, it really invokes the __call__ operator overloading method of the instance created by decorator; the __call__ method can then run the original func because it is still available in an instance attribute. When coded this way, each decorated function produces a new instance to retain state.

Supporting method decoration

One subtle point about the prior class-based coding is that while it works to intercept simple function calls, it does not quite work when applied to class-level method functions:

class decorator:
    def __init__(self, func): # func is method without instance
        self.func = func
    def __call__(self, *args): # self is decorator instance
        # self.func(*args) fails! # C instance not in args!

class C:
    @decorator
    def method(self, x, y):   # method = decorator(method)
        ...                   # Rebound to decorator instance

When coded this way, the decorated method is rebound to an instance of the decorator class, instead of a simple function.

The problem with this is that the self in the decorator’s __call__ receives the decorator class instance when the method is later run, and the instance of class C is never included in *args. This makes it impossible to dispatch the call to the original method—the decorator object retains the original method function, but it has no instance to pass to it.
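A quick runnable demonstration of the failure, with illustrative names: because the C instance never reaches the original method, the dispatch attempt shifts all arguments by one and raises a TypeError here.

```python
class decorator:
    def __init__(self, func):
        self.func = func
    def __call__(self, *args):        # self is the decorator instance
        return self.func(*args)       # C's instance is not in args

class C:
    @decorator
    def method(self, x, y):
        return x + y

X = C()
try:
    X.method(6, 7)                    # __call__ gets only (6, 7): X is lost
except TypeError as e:
    print('fails:', e)                # method's y argument goes unfilled
```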

To support both functions and methods, the nested function alternative works better:

def decorator(F):             # F is func or method without instance
    def wrapper(*args):       # class instance in args[0] for method
        # F(*args) runs func or method
    return wrapper

@decorator
def func(x, y):               # func = decorator(func)
    ...

func(6, 7)                    # Really calls wrapper(6, 7)

class C:
    @decorator
    def method(self, x, y):   # method = decorator(method)
        ...                   # Rebound to simple function

X = C()
X.method(6, 7)                # Really calls wrapper(X, 6, 7)

When coded this way, wrapper receives the C class instance in its first argument, so it can dispatch to the original method and access state information.

Technically, this nested-function version works because Python creates a bound method object and thus passes the subject class instance to the self argument only when a method attribute references a simple function; when it references an instance of a callable class instead, the callable class’s instance is passed to self to give the callable class access to its own state information. We’ll see how this subtle difference can matter in more realistic examples later in this chapter.
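The binding difference can be demonstrated directly; in this illustrative sketch, the same class stores both a simple function and a callable instance as attributes:

```python
class CallableObj:
    def __call__(self, *args):
        return args                   # Returns whatever is actually passed

def simple(*args):
    return args

class C:
    method1 = simple                  # Simple function: fetch makes bound method
    method2 = CallableObj()           # Callable instance: no binding occurs

X = C()
print(X.method1(1))                   # (X, 1): instance passed automatically
print(X.method2(1))                   # (1,): instance not passed
```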

Also note that nested functions are perhaps the most straightforward way to support decoration of both functions and methods, but not necessarily the only way. The prior chapter’s descriptors, for example, receive both the descriptor and subject class instance when called. Though more complex, later in this chapter we’ll see how this tool can be leveraged in this context as well.

Class Decorators

Function decorators proved so useful that the model was extended to allow class decoration as of Python 2.6 and 3.0. They were initially resisted because of role overlap with metaclasses; in the end, though, they were adopted because they provide a simpler way to achieve many of the same goals.

Class decorators are strongly related to function decorators; in fact, they use the same syntax and very similar coding patterns. Rather than wrapping individual functions or methods, though, class decorators are a way to manage classes, or wrap up instance construction calls with extra logic that manages or augments instances created from a class. In the latter role, they may manage full object interfaces.

Usage

Syntactically, class decorators appear just before class statements, in the same way that function decorators appear just before def statements. In symbolic terms, for a decorator that must be a one-argument callable that returns a callable, the class decorator syntax:

@decorator                    # Decorate class
class C:
    ...

x = C(99)                     # Make an instance

is equivalent to the following—the class is automatically passed to the decorator function, and the decorator’s result is assigned back to the class name:

class C:
    ...

C = decorator(C)              # Rebind class name to decorator result
x = C(99)                     # Essentially calls decorator(C)(99)

The net effect is that calling the class name later to create an instance winds up triggering the callable returned by the decorator, which may or may not call the original class itself.

Implementation

New class decorators are coded with many of the same techniques used for function decorators, though some may involve two levels of augmentation—to manage both instance construction calls, as well as instance interface access. Because a class decorator is also a callable that returns a callable, most combinations of functions and classes suffice.

However it’s coded, the decorator’s result is what runs when an instance is later created. For example, to simply manage a class just after it is created, return the original class itself:

def decorator(C):
    # Process class C
    return C

@decorator
class C: ...                  # C = decorator(C)

To instead insert a wrapper layer that intercepts later instance creation calls, return a different callable object:

def decorator(C):
    # Save or use class C
    # Return a different callable: nested def, class with __call__, etc.

@decorator
class C: ...                  # C = decorator(C)

The callable returned by such a class decorator typically creates and returns a new instance of the original class, augmented in some way to manage its interface. For example, the following inserts an object that intercepts undefined attributes of a class instance:

def decorator(cls):           # On @ decoration
    class Wrapper:
        def __init__(self, *args): # On instance creation
            self.wrapped = cls(*args)
        def __getattr__(self, name): # On attribute fetch
            return getattr(self.wrapped, name)
    return Wrapper

@decorator
class C:                      # C = decorator(C)
    def __init__(self, x, y): # Run by Wrapper.__init__
        self.attr = 'spam'

x = C(6, 7)                   # Really calls Wrapper(6, 7)
print(x.attr)                 # Runs Wrapper.__getattr__, prints "spam"

In this example, the decorator rebinds the class name to another class, which retains the original class in an enclosing scope and creates and embeds an instance of the original class when it’s called. When an attribute is later fetched from the instance, it is intercepted by the wrapper’s __getattr__ and delegated to the embedded instance of the original class. Moreover, each decorated class creates a new scope, which remembers the original class. We’ll flesh out this example into some more useful code later in this chapter.

Like function decorators, class decorators are commonly coded as either “factory” functions that create and return callables, classes that use __init__ or __call__ methods to intercept call operations, or some combination thereof. Factory functions typically retain state in enclosing scope references, and classes in attributes.

Supporting multiple instances

As for function decorators, some callable type combinations work better for class decorators than others. Consider the following invalid alternative to the class decorator of the prior example:

class Decorator:
    def __init__(self, C):    # On @ decoration
        self.C = C
    def __call__(self, *args): # On instance creation
        self.wrapped = self.C(*args)
        return self
    def __getattr__(self, attrname): # On attribute fetch
        return getattr(self.wrapped, attrname)

@Decorator
class C: ...                  # C = Decorator(C)

x = C()
y = C()                       # Overwrites x!

This code handles multiple decorated classes (each makes a new Decorator instance) and will intercept instance creation calls (each runs __call__). Unlike the prior version, however, this version fails to handle multiple instances of a given class—each instance creation call overwrites the prior saved instance. The original version does support multiple instances, because each instance creation call makes a new independent wrapper object. More generally, either of the following patterns supports multiple wrapped instances:

def decorator(C):             # On @ decoration
    class Wrapper:
        def __init__(self, *args): # On instance creation: new Wrapper
            self.wrapped = C(*args) # Embed instance in instance
    return Wrapper

class Wrapper: ...
def decorator(C):             # On @ decoration
    def onCall(*args):        # On instance creation: new Wrapper
        return Wrapper(C(*args)) # Embed instance in instance
    return onCall

We’ll study this phenomenon in a more realistic context later in the chapter too; in practice, though, we must be careful to combine callable types properly to support our intent, and choose state policies wisely.
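For reference, the factory-function pattern fleshed out into a runnable sketch confirms that instances stay independent (the Account class here is purely illustrative):

```python
def decorator(C):                     # On @ decoration
    class Wrapper:
        def __init__(self, *args):    # A new Wrapper per instance-creation call
            self.wrapped = C(*args)
        def __getattr__(self, name):
            return getattr(self.wrapped, name)
    return Wrapper

@decorator
class Account:                        # Illustrative subject class
    def __init__(self, balance):
        self.balance = balance

a = Account(10)
b = Account(20)                       # Does not overwrite a's state
print(a.balance, b.balance)           # 10 20: independent wrapped objects
```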

Decorator Nesting

Sometimes one decorator isn’t enough. For instance, suppose you’ve coded two function decorators to be used during development—one to test argument types before function calls, and another to test return value types after function calls. You can use either independently, but what to do if you want to employ both on a single function? What you really need is a way to nest the two, such that the result of one decorator is the function decorated by the other. It’s irrelevant which is nested, as long as both steps run on later calls.

To support multiple nested steps of augmentation this way, decorator syntax allows you to add multiple layers of wrapper logic to a decorated function or method. When this feature is used, each decorator must appear on a line of its own. Decorator syntax of this form:

@A
@B
@C
def f(...):
    ...

runs the same as the following:

def f(...):
    ...

f = A(B(C(f)))

Here, the original function is passed through three different decorators, and the resulting callable object is assigned back to the original name. Each decorator processes the result of the prior, which may be the original function or an inserted wrapper.

If all the decorators insert wrappers, the net effect is that when the original function name is called, three different layers of wrapping object logic will be invoked, to augment the original function in three different ways. The last decorator listed is the first applied, and is the most deeply nested when the original function name is later called (insert joke about Python “interior decorators” here).

Just as for functions, multiple class decorators result in multiple nested function calls, and possibly multiple levels and steps of wrapper logic around instance creation calls. For example, the following code:

@spam
@eggs
class C:
    ...

X = C()

is equivalent to the following:

class C:
    ...

C = spam(eggs(C))
X = C()

Again, each decorator is free to return either the original class or an inserted wrapper object. With wrappers, when an instance of the original C class is finally requested, the call is redirected to the wrapping layer objects provided by both the spam and eggs decorators, which may have arbitrarily different roles—they might trace and validate attribute access, for example, and both steps would be run on later requests.

For instance, the following do-nothing decorators simply return the decorated function:

def d1(F): return F
def d2(F): return F
def d3(F): return F

@d1
@d2
@d3
def func():                   # func = d1(d2(d3(func)))
    print('spam')

func()                        # Prints "spam"

The same syntax works on classes, as do these same do-nothing decorators.

When decorators insert wrapper function objects, though, they may augment the original function when called—the following concatenates to its result in the decorator layers, as it runs the layers from inner to outer:

def d1(F): return lambda: 'X' + F()
def d2(F): return lambda: 'Y' + F()
def d3(F): return lambda: 'Z' + F()

@d1
@d2
@d3
def func():                   # func = d1(d2(d3(func)))
    return 'spam'

print(func())                 # Prints "XYZspam"

We use lambda functions to implement wrapper layers here (each retains the wrapped function in an enclosing scope); in practice, wrappers can take the form of functions, callable classes, and more. When designed well, decorator nesting allows us to combine augmentation steps in a wide variety of ways.

Decorator Arguments

Both function and class decorators can also seem to take arguments, although really these arguments are passed to a callable that in effect returns the decorator, which in turn returns a callable. By nature, this usually sets up multiple levels of state retention. The following, for instance:

@decorator(A, B)              # Decorate function
def F(arg):
    ...

F(99)

is automatically mapped into this equivalent form, where decorator is a callable that returns the actual decorator. The returned decorator in turn returns the callable run later for calls to the original function name:

def F(arg):
    ...

F = decorator(A, B)(F)        # Rebind F to result of decorator's return value
F(99)                         # Essentially calls decorator(A, B)(F)(99)

Decorator arguments are resolved before decoration ever occurs, and they are usually used to retain state information for use in later calls. The decorator function in this example, for instance, might take a form like the following:

def decorator(A, B):
    # Save or use A, B
    def actualDecorator(F):
        # Save or use function F
        # Return a callable: nested def, class with __call__, etc.
        return callable
    return actualDecorator

The outer function in this structure generally saves the decorator arguments away as state information, for use in the actual decorator, the callable it returns, or both. This code snippet retains the state information argument in enclosing function scope references, but class attributes are commonly used as well.

In other words, decorator arguments often imply three levels of callables: a callable to accept decorator arguments, which returns a callable to serve as decorator, which returns a callable to handle calls to the original function or class. Each of the three levels may be a function or class and may retain state in the form of scopes or class attributes.

Decorator arguments can be used to provide attribute initialization values, call trace message labels, attribute names to be validated, and much more—any sort of configuration parameter for objects or their proxies is a candidate. We’ll see concrete examples of decorator arguments employed later in this chapter.
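As a small runnable sketch of the three-level structure (the trace label and spam function are illustrative):

```python
def trace(label):                     # Level 1: accepts decorator arguments
    def decorate(F):                  # Level 2: the actual decorator
        def wrapper(*args):           # Level 3: runs on later calls
            print(label, F.__name__)
            return F(*args)
        return wrapper
    return decorate

@trace('==>')                         # spam = trace('==>')(spam)
def spam(x):
    return x * 2

print(spam(3))                        # Prints "==> spam", then 6
```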

Decorators Manage Functions and Classes, Too

Although much of the rest of this chapter focuses on wrapping later calls to functions and classes, it’s important to remember that the decorator mechanism is more general than this—it is a protocol for passing functions and classes through any callable immediately after they are created. As such, it can also be used to invoke arbitrary post-creation processing:

def decorator(O):
    # Save or augment function or class O
    return O

@decorator
def F(): ...                  # F = decorator(F)

@decorator
class C: ...                  # C = decorator(C)

As long as we return the original decorated object this way instead of a proxy, we can manage functions and classes themselves, not just later calls to them. We’ll see more realistic examples later in this chapter that use this idea to register callable objects to an API with decoration and assign attributes to functions when they are created.
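As a sketch of the attribute-assignment idea, using hypothetical names (attrs is not a standard tool), combined with the decorator-arguments pattern from the prior section:

```python
def attrs(**kwargs):                  # Accepts decorator arguments
    def decorate(O):
        for name, value in kwargs.items():
            setattr(O, name, value)   # Annotate the object itself
        return O                      # Return the original, not a proxy
    return decorate

@attrs(author='anon', version=1)
def func():
    return 'spam'

print(func.version, func())           # Prints "1 spam": func is still the original
```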

Coding Function Decorators

On to the code—in the rest of this chapter, we are going to study working examples that demonstrate the decorator concepts we just explored. This section presents a handful of function decorators at work, and the next shows class decorators in action. Following that, we’ll close out with some larger case studies of class and function decorator usage—complete implementations of class privacy and argument range tests.

Tracing Calls

To get started, let’s revive the call tracer example we met in Chapter 32. The following defines and applies a function decorator that counts the number of calls made to the decorated function and prints a trace message for each call:

# File decorator1.py

class tracer:
    def __init__(self, func): # On @ decoration: save original func
        self.calls = 0
        self.func = func
    def __call__(self, *args): # On later calls: run original func
        self.calls += 1
        print('call %s to %s' % (self.calls, self.func.__name__))
        self.func(*args)

@tracer
def spam(a, b, c):            # spam = tracer(spam)
    print(a + b + c)          # Wraps spam in a decorator object

Notice how each function decorated with this class will create a new instance, with its own saved function object and calls counter. Also observe how the *args argument syntax is used to pack and unpack arbitrarily many passed-in arguments. This generality enables this decorator to be used to wrap any function with any number of positional arguments; this version doesn’t yet work on keyword arguments or class-level methods, and doesn’t return results, but we’ll fix these shortcomings later in this section.

Now, if we import this module’s function and test it interactively, we get the following sort of behavior—each call generates a trace message initially, because the decorator class intercepts it. This code runs as is under both Python 2.X and 3.X, as does all code in this chapter unless otherwise noted (I’ve made prints version-neutral, and decorators do not require new-style classes; some hex addresses have also been shortened to protect the sighted):

>>> from decorator1 import spam
>>> spam(1, 2, 3)                            # Really calls the tracer wrapper object
call 1 to spam
6
>>> spam('a', 'b', 'c')                      # Invokes __call__ in class
call 2 to spam
abc

>>> spam.calls                               # Number calls in wrapper state information
2
>>> spam
<decorator1.tracer object at 0x02D9A730>

When run, the tracer class saves away the decorated function, and intercepts later calls to it, in order to add a layer of logic that counts and prints each call. Notice how the total number of calls shows up as an attribute of the decorated function—spam is really an instance of the tracer class when decorated, a finding that may have ramifications for programs that do type checking, but is generally benign (decorators might copy the original function’s __name__, but such forgery is limited, and could lead to confusion).
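One common mitigation for such name forgery—shown here as a hedged sketch rather than the book’s code—is functools.wraps (available since Python 2.5), which copies the original function’s metadata onto a function-based wrapper:

```python
import functools

def tracer(func):
    @functools.wraps(func)                   # Copy __name__, __doc__, etc. to wrapper
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@tracer
def spam(a, b, c):
    "Add three arguments."
    return a + b + c

print(spam(1, 2, 3))                         # 6
print(spam.__name__)                         # 'spam', not 'wrapper'
print(spam.calls)                            # 1
```

This helps tools that introspect functions, though it does not change the fact that spam is no longer the original function object.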

For function calls, the @ decoration syntax can be more convenient than modifying each call to account for the extra logic level, and it avoids accidentally calling the original function directly. Consider a nondecorator equivalent such as the following:

calls = 0
def tracer(func, *args):
    global calls
    calls += 1
    print('call %s to %s' % (calls, func.__name__))
    func(*args)

def spam(a, b, c):
    print(a, b, c)

>>> spam(1, 2, 3)                            # Normal nontraced call: accidental?
1 2 3
>>> tracer(spam, 1, 2, 3)                    # Special traced call without decorators
call 1 to spam
1 2 3

This alternative can be used on any function without the special @ syntax, but unlike the decorator version, it requires extra syntax at every place where the function is called in your code. Furthermore, its intent may not be as obvious, and it does not ensure that the extra layer will be invoked for normal calls. Although decorators are never required (we can always rebind names manually), they are often the most convenient and uniform option.

Decorator State Retention Options

The last example of the prior section raises an important issue. Function decorators have a variety of options for retaining state information provided at decoration time, for use during the actual function call. They generally need to support multiple decorated objects and multiple calls, but there are a number of ways to implement these goals: instance attributes, global variables, nonlocal closure variables, and function attributes can all be used for retaining state.

Class instance attributes

For example, here is an augmented version of the prior example, which adds support for keyword arguments with ** syntax, and returns the wrapped function’s result to support more use cases (for nonlinear readers, we first studied keyword arguments in Chapter 18, and for readers working with the book examples package, some filenames in this chapter are again implied by the command-lines that follow their listings):

class tracer:                                # State via instance attributes
    def __init__(self, func):                # On @ decorator
        self.calls = 0                       # Save func for later call
        self.func = func
    def __call__(self, *args, **kwargs):     # On call to original function
        self.calls += 1
        print('call %s to %s' % (self.calls, self.func.__name__))
        return self.func(*args, **kwargs)

@tracer
def spam(a, b, c):                           # Same as: spam = tracer(spam)
    print(a + b + c)                         # Triggers tracer.__init__

@tracer
def eggs(x, y):                              # Same as: eggs = tracer(eggs)
    print(x ** y)                            # Wraps eggs in a tracer object

spam(1, 2, 3)                                # Really calls tracer instance: runs tracer.__call__
spam(a=4, b=5, c=6)                          # spam is an instance attribute

eggs(2, 16)                                  # Really calls tracer instance, self.func is eggs
eggs(4, y=4)                                 # self.calls is per-decoration here

Like the original, this uses class instance attributes to save state explicitly. Both the wrapped function and the calls counter are per-instance information—each decoration gets its own copy. When run as a script under either 2.X or 3.X, the output of this version is as follows; notice how the spam and eggs functions each have their own calls counter, because each decoration creates a new class instance:

c:\code> python decorator2.py
call 1 to spam
6
call 2 to spam
15
call 1 to eggs
65536
call 2 to eggs
256

While useful for decorating functions, this coding scheme still has issues when applied to methods—a shortcoming we’ll address in a later revision.

Enclosing scopes and globals

Closure functions—with enclosing def scope references and nested defs—can often achieve the same effect, especially for static data like the decorated original function. In this example, though, we would also need a counter in the enclosing scope that changes on each call, and that’s not possible in Python 2.X (recall from Chapter 17 that the nonlocal statement is 3.X-only).

In 2.X, we can still use either classes and attributes per the prior section, or other options. Moving state variables out to the global scope with declarations is one candidate, and works in both 2.X and 3.X:

calls = 0
def tracer(func):                            # State via enclosing scope and global
    def wrapper(*args, **kwargs):            # Instead of class attributes
        global calls                         # calls is global, not per-function
        calls += 1
        print('call %s to %s' % (calls, func.__name__))
        return func(*args, **kwargs)
    return wrapper

@tracer
def spam(a, b, c):                           # Same as: spam = tracer(spam)
    print(a + b + c)

@tracer
def eggs(x, y):                              # Same as: eggs = tracer(eggs)
    print(x ** y)

spam(1, 2, 3)                                # Really calls wrapper, assigned to spam
spam(a=4, b=5, c=6)                          # wrapper calls spam

eggs(2, 16)                                  # Really calls wrapper, assigned to eggs
eggs(4, y=4)                                 # Global calls is not per-decoration here!

Unfortunately, moving the counter out to the common global scope to allow it to be changed like this also means that it will be shared by every wrapped function. Unlike class instance attributes, global counters are cross-program, not per-function—the counter is incremented for any traced function call. You can tell the difference if you compare this version’s output with the prior version’s—the single, shared global call counter is incorrectly updated by calls to every decorated function:

c:\code> python decorator3.py
call 1 to spam
6
call 2 to spam
15
call 3 to eggs
65536
call 4 to eggs
256

Enclosing scopes and nonlocals

Shared global state may be what we want in some cases. If we really want a per-function counter, though, we can either use classes as before, or make use of closure (a.k.a. factory) functions and the nonlocal statement in Python 3.X, described in Chapter 17. Because this new statement allows enclosing function scope variables to be changed, they can serve as per-decoration and changeable data. In 3.X only:

def tracer(func):                            # State via enclosing scope and nonlocal
    calls = 0                                # Instead of class attrs or global
    def wrapper(*args, **kwargs):            # calls is per-function, not global
        nonlocal calls
        calls += 1
        print('call %s to %s' % (calls, func.__name__))
        return func(*args, **kwargs)
    return wrapper

@tracer
def spam(a, b, c):                           # Same as: spam = tracer(spam)
    print(a + b + c)

@tracer
def eggs(x, y):                              # Same as: eggs = tracer(eggs)
    print(x ** y)

spam(1, 2, 3)                                # Really calls wrapper, bound to func
spam(a=4, b=5, c=6)                          # wrapper calls spam

eggs(2, 16)                                  # Really calls wrapper, bound to eggs
eggs(4, y=4)                                 # Nonlocal calls _is_ per-decoration here

Now, because enclosing scope variables are not cross-program globals, each wrapped function gets its own counter again, just as for classes and attributes. Here’s the new output when run under 3.X:

c:\code> py −3 decorator4.py
call 1 to spam
6
call 2 to spam
15
call 1 to eggs
65536
call 2 to eggs
256

Function attributes

Finally, if you are not using Python 3.X and don’t have a nonlocal statement—or you want your code to work portably on both 3.X and 2.X—you may still be able to avoid globals and classes by making use of function attributes for some changeable state instead. In all Pythons since 2.1, we can assign arbitrary attributes to functions to attach them, with func.attr=value. Because a factory function makes a new function on each call, its attributes become per-call state. Moreover, you need to use this technique only for state variables that must change; enclosing scope references are still retained and work normally.

In our example, we can simply use wrapper.calls for state. The following works the same as the preceding nonlocal version because the counter is again per-decorated-function, but it also runs in Python 2.X:

def tracer(func):                            # State via enclosing scope and func attr
    def wrapper(*args, **kwargs):            # calls is per-function, not global
        wrapper.calls += 1
        print('call %s to %s' % (wrapper.calls, func.__name__))
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@tracer
def spam(a, b, c):                           # Same as: spam = tracer(spam)
    print(a + b + c)

@tracer
def eggs(x, y):                              # Same as: eggs = tracer(eggs)
    print(x ** y)

spam(1, 2, 3)                                # Really calls wrapper, assigned to spam
spam(a=4, b=5, c=6)                          # wrapper calls spam

eggs(2, 16)                                  # Really calls wrapper, assigned to eggs
eggs(4, y=4)                                 # wrapper.calls _is_ per-decoration here

As we learned in Chapter 17, this works only because the name wrapper is retained in the enclosing tracer function’s scope. When we later increment wrapper.calls, we are not changing the name wrapper itself, so no nonlocal declaration is required. This version runs in either Python line:

c:\code> py −2 decorator5.py
...same output as prior version, but works on 2.X too...

This scheme was almost relegated to a footnote, because it may be more obscure than nonlocal in 3.X and might be better saved for cases where other schemes don’t help. However, function attributes also have substantial advantages. For one, they allow access to the saved state from outside the decorator’s code; nonlocals can only be seen inside the nested function itself, but function attributes have wider visibility. For another, they are far more portable; this scheme also works in 2.X, making it version-neutral.
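For instance, because the counter rides on the wrapper itself, client code can inspect it—or even reset it—from outside; the reset below is an illustration of the wider visibility, not part of the book’s tool:

```python
def tracer(func):                            # Same scheme as decorator5.py
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0                        # Changeable state as a func attribute
    return wrapper

@tracer
def spam(a, b, c):
    return a + b + c

spam(1, 2, 3)
spam(4, 5, 6)
print(spam.calls)                            # Clients can read the state: 2
spam.calls = 0                               # ...and even reset it from outside
```

A nonlocal counter would support neither the read nor the reset without extra accessor functions.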

We will employ function attributes again in an answer to one of the end-of-chapter questions, where their visibility outside callables becomes an asset. As changeable state associated with a context of use, they are equivalent to enclosing scope nonlocals. As usual, choosing from multiple tools is an inherent part of the programming task.

Because decorators often imply multiple levels of callables, you can combine functions with enclosing scopes, classes with attributes, and function attributes to achieve a variety of coding structures. As we’ll see later, though, this sometimes may be subtler than you expect—each decorated function should have its own state, and each decorated class may require state both for itself and for each generated instance.

In fact, as the next section will explain in more detail, if we want to apply function decorators to class-level methods, too, we also have to be careful about the distinction Python makes between decorators coded as callable class instance objects and decorators coded as functions.

Class Blunders I: Decorating Methods

When I wrote the first class-based tracer function decorator in decorator1.py earlier, I naively assumed that it could also be applied to any method—decorated methods should work the same, I reasoned, but the automatic self instance argument would simply be included at the front of*args. The only real downside to this assumption is that it is completely wrong! When applied to a class’s method, the first version of the tracer fails, because self is the instance of the decorator class and the instance of the decorated subject class is not included in *args at all. This is true in both Python 3.X and 2.X.

I introduced this phenomenon earlier in this chapter, but now we can see it in the context of realistic working code. Given the class-based tracing decorator:

class tracer:
    def __init__(self, func):                # On @ decorator
        self.calls = 0                       # Save func for later call
        self.func = func
    def __call__(self, *args, **kwargs):     # On call to original function
        self.calls += 1
        print('call %s to %s' % (self.calls, self.func.__name__))
        return self.func(*args, **kwargs)

decoration of simple functions works as advertised earlier:

@tracer
def spam(a, b, c):                           # spam = tracer(spam)
    print(a + b + c)                         # Triggers tracer.__init__

>>> spam(1, 2, 3)                            # Runs tracer.__call__
call 1 to spam
6
>>> spam(a=4, b=5, c=6)                      # spam saved in an instance attribute
call 2 to spam
15

However, decoration of class-level methods fails (sequential readers may recognize this as an adaptation of our Person class, resurrected from the object-oriented tutorial in Chapter 28):

class Person:
    def __init__(self, name, pay):
        self.name = name
        self.pay = pay

    @tracer
    def giveRaise(self, percent):            # giveRaise = tracer(giveRaise)
        self.pay *= (1.0 + percent)

    @tracer
    def lastName(self):                      # lastName = tracer(lastName)
        return self.name.split()[-1]

>>> bob = Person('Bob Smith', 50000)         # tracer remembers method funcs
>>> bob.giveRaise(.25)                       # Runs tracer.__call__(???, .25)
call 1 to giveRaise
TypeError: giveRaise() missing 1 required positional argument: 'percent'
>>> print(bob.lastName())                    # Runs tracer.__call__(???)
call 1 to lastName
TypeError: lastName() missing 1 required positional argument: 'self'

The root of the problem here is in the self argument of the tracer class’s __call__ method—is it a tracer instance or a Person instance? We really need both as it’s coded: the tracer for decorator state, and the Person for routing on to the original method. Really, self must be the tracer object, to provide access to tracer’s state information (its calls and func); this is true whether decorating a simple function or a method.

Unfortunately, when our decorated method name is rebound to a class instance object with a __call__, Python passes only the tracer instance to self; it doesn’t pass along the Person subject in the arguments list at all. Moreover, because the tracer knows nothing about the Person instance we are trying to process with method calls, there’s no way to create a bound method with an instance, and thus no way to correctly dispatch the call. This isn’t a bug, but it’s wildly subtle.

In the end, the prior listing winds up passing too few arguments to the decorated method, and results in an error. Add a line to the decorator’s __call__ to print all its arguments to verify this—as you can see, self is the tracer instance, and the Person instance is entirely absent:

>>> bob.giveRaise(.25)
<__main__.tracer object at 0x02A486D8> (0.25,) {}
call 1 to giveRaise
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 9, in __call__
TypeError: giveRaise() missing 1 required positional argument: 'percent'

As mentioned earlier, this happens because Python passes the implied subject instance to self when a method name is bound to a simple function only; when it is an instance of a callable class, that class’s instance is passed instead. Technically, Python makes a bound method object containing the subject instance only when the method is a simple function, not when it is a callable instance of another class.
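The binding rule can be demonstrated in isolation; this is a sketch with hypothetical names, not the chapter’s code—it shows that attribute fetch makes a bound method only for a simple function, not for a callable instance of another class:

```python
import types

class CallableWrapper:                       # Hypothetical stand-in for tracer
    def __init__(self, func):
        self.func = func
    def __call__(self, *args):
        return self.func(*args)

def plain(self):
    return 'got %s' % self.__class__.__name__

class C:
    method1 = plain                          # Simple function: binding occurs
    method2 = CallableWrapper(plain)         # Callable instance: no binding

c = C()
print(isinstance(c.method1, types.MethodType))   # True: bound method created
print(isinstance(c.method2, types.MethodType))   # False: wrapper fetched as is
print(c.method1())                               # 'got C': self filled in
try:
    c.method2()                                  # self is never filled in: fails
except TypeError as err:
    print('TypeError:', err)
```

Because method1 is a plain function, fetching it through the instance triggers Python’s function descriptor machinery; method2 is just an ordinary attribute, so the instance is lost.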

Using nested functions to decorate methods

If you want your function decorators to work on both simple functions and class-level methods, the most straightforward solution lies in using one of the other state retention solutions described earlier—code your function decorator as nested defs, so that you don’t depend on a single self instance argument to be both the wrapper class instance and the subject class instance.

The following alternative applies this fix using Python 3.X nonlocals; recode this to use function attributes for the changeable calls to use in 2.X. Because decorated methods are rebound to simple functions instead of instance objects, Python correctly passes the Person object as the first argument, and the decorator propagates it on in the first item of *args to the self argument of the real, decorated methods:

# A call tracer decorator for both functions and methods

# A call tracer decorator for both functions and methods

def tracer(func):                            # Use function, not class with __call__
    calls = 0                                # Else "self" is decorator instance only!
    def onCall(*args, **kwargs):             # Or in 2.X+3.X: use [onCall.calls += 1]
        nonlocal calls
        calls += 1
        print('call %s to %s' % (calls, func.__name__))
        return func(*args, **kwargs)
    return onCall

if __name__ == '__main__':

    # Applies to simple functions

    @tracer
    def spam(a, b, c):                       # spam = tracer(spam)
        print(a + b + c)                     # onCall remembers spam

    @tracer
    def eggs(N):
        return 2 ** N

    spam(1, 2, 3)                            # Runs onCall(1, 2, 3)
    spam(a=4, b=5, c=6)
    print(eggs(32))

    # Applies to class-level method functions too!

    class Person:
        def __init__(self, name, pay):
            self.name = name
            self.pay = pay

        @tracer
        def giveRaise(self, percent):        # giveRaise = tracer(giveRaise)
            self.pay *= (1.0 + percent)      # onCall remembers giveRaise

        @tracer
        def lastName(self):                  # lastName = tracer(lastName)
            return self.name.split()[-1]

    print('methods...')
    bob = Person('Bob Smith', 50000)
    sue = Person('Sue Jones', 100000)
    print(bob.name, sue.name)
    sue.giveRaise(.10)                       # Runs onCall(sue, .10)
    print(int(sue.pay))
    print(bob.lastName(), sue.lastName())    # Runs onCall(bob), lastName in scopes

We’ve also indented the file’s self-test code under a __name__ test so the decorator can be imported and used elsewhere. This version works the same on both functions and methods, but runs in 3.X only due to its nonlocal:

c:\code> py −3 calltracer.py
call 1 to spam
6
call 2 to spam
15
call 1 to eggs
4294967296
methods...
Bob Smith Sue Jones
call 1 to giveRaise
110000
call 1 to lastName
call 2 to lastName
Smith Jones

Trace through these results to make sure you have a handle on this model; the next section provides an alternative to it that supports classes, but is also substantially more complex.

Using descriptors to decorate methods

Although the nested function solution illustrated in the prior section is the most straightforward way to support decorators that apply to both functions and class-level methods, other schemes are possible. The descriptor feature we explored in the prior chapter, for example, can help here as well.

Recall from our discussion in that chapter that a descriptor is normally a class attribute assigned to an object with a __get__ method run automatically whenever that attribute is referenced and fetched; new-style class object derivation is required for descriptors in Python 2.X, but not 3.X:

class Descriptor(object):
    def __get__(self, instance, owner): ...

class Subject:
    attr = Descriptor()

X = Subject()
X.attr                                       # Roughly runs Descriptor.__get__(Subject.attr, X, Subject)

Descriptors may also have __set__ and __del__ access methods, but we don’t need them here. More relevant to this chapter’s topic, because the descriptor’s __get__ method receives both the descriptor class instance and subject class instance when invoked, it’s well suited to decorating methods when we need both the decorator’s state and the original class instance for dispatching calls. Consider the following alternative tracing decorator, which also happens to be a descriptor when used for a class-level method:

class tracer(object):                        # A decorator+descriptor
    def __init__(self, func):                # On @ decorator
        self.calls = 0                       # Save func for later call
        self.func = func
    def __call__(self, *args, **kwargs):     # On call to original func
        self.calls += 1
        print('call %s to %s' % (self.calls, self.func.__name__))
        return self.func(*args, **kwargs)
    def __get__(self, instance, owner):      # On method attribute fetch
        return wrapper(self, instance)

class wrapper:
    def __init__(self, desc, subj):          # Save both instances
        self.desc = desc                     # Route calls back to deco/desc
        self.subj = subj
    def __call__(self, *args, **kwargs):
        return self.desc(self.subj, *args, **kwargs)  # Runs tracer.__call__

@tracer
def spam(a, b, c):                           # spam = tracer(spam)
    ...same as prior...                      # Uses __call__ only

class Person:
    @tracer
    def giveRaise(self, percent):            # giveRaise = tracer(giveRaise)
        ...same as prior...                  # Makes giveRaise a descriptor

This works the same as the preceding nested function coding. Its operation varies by usage context:

§ Decorated functions invoke only its __call__, and never invoke its __get__.

§ Decorated methods invoke its __get__ first to resolve the method name fetch (on I.method); the object returned by __get__ retains the subject class instance and is then invoked to complete the call expression, thereby triggering the decorator’s __call__ (on ()).

For example, the test code’s call to:

sue.giveRaise(.10) # Runs __get__ then __call__

runs tracer.__get__ first, because the giveRaise attribute in the Person class has been rebound to a descriptor by the method function decorator. The call expression then triggers the __call__ method of the returned wrapper object, which in turn invokes tracer.__call__. In other words, decorated method calls trigger a four-step process: tracer.__get__, followed by three call operations—wrapper.__call__, tracer.__call__, and finally the original wrapped method.

The wrapper object retains both descriptor and subject instances, so it can route control back to the original decorator/descriptor class instance. In effect, the wrapper object saves the subject class instance available during method attribute fetch and adds it to the later call’s arguments list, which is passed to the decorator’s __call__. Routing the call back to the descriptor class instance this way is required in this application so that all calls to a wrapped method use the same calls counter state information in the descriptor instance object.
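To make the four steps visible, here is a pared-down version of the same class pair instrumented with prints (a sketch adapted from the listing above; the numbered labels are illustrative):

```python
class tracer(object):
    def __init__(self, func):
        self.calls = 0
        self.func = func
    def __call__(self, *args, **kwargs):
        print('3: tracer.__call__')
        self.calls += 1
        return self.func(*args, **kwargs)
    def __get__(self, instance, owner):
        print('1: tracer.__get__')
        return wrapper(self, instance)

class wrapper:
    def __init__(self, desc, subj):
        self.desc = desc
        self.subj = subj
    def __call__(self, *args, **kwargs):
        print('2: wrapper.__call__')
        return self.desc(self.subj, *args, **kwargs)

class Person:
    def __init__(self, name):
        self.name = name
    @tracer
    def lastName(self):
        print('4: original method')
        return self.name.split()[-1]

bob = Person('Bob Smith')
print(bob.lastName())                        # Steps 1-4 print, then 'Smith'
```

The fetch of bob.lastName runs step 1; the trailing parentheses run steps 2 through 4.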

Alternatively, we could use a nested function and enclosing scope references to achieve the same effect—the following version works the same as the preceding one, by swapping a class and object attributes for a nested function and scope references. It requires noticeably less code, but follows the same four-step process on each decorated method call:

class tracer(object):
    def __init__(self, func):                # On @ decorator
        self.calls = 0                       # Save func for later call
        self.func = func
    def __call__(self, *args, **kwargs):     # On call to original func
        self.calls += 1
        print('call %s to %s' % (self.calls, self.func.__name__))
        return self.func(*args, **kwargs)
    def __get__(self, instance, owner):      # On method fetch
        def wrapper(*args, **kwargs):        # Retain both inst
            return self(instance, *args, **kwargs)    # Runs __call__
        return wrapper

Add print statements to these alternatives’ methods to trace the multistep get/call process on your own, and run them with the same test code as in the nested function alternative shown earlier (see file calltracer-descr.py for their source). In either coding, this descriptor-based scheme is also substantially subtler than the nested function option, and so is probably a second choice here. To be more blunt, if its complexity doesn’t send you screaming into the night, its performance costs probably should! Still, this may be a useful coding pattern in other contexts.

It’s also worth noting that we might code this descriptor-based decorator more simply as follows, but it would then apply only to methods, not to simple functions—an intrinsic limitation of attribute descriptors (and just the inverse of the problem we’re trying to solve: application to both functions and methods):

class tracer(object):                        # For methods, but not functions!
    def __init__(self, meth):                # On @ decorator
        self.calls = 0
        self.meth = meth
    def __get__(self, instance, owner):      # On method fetch
        def wrapper(*args, **kwargs):        # On method call: proxy with self+inst
            self.calls += 1
            print('call %s to %s' % (self.calls, self.meth.__name__))
            return self.meth(instance, *args, **kwargs)
        return wrapper

class Person:
    @tracer                                  # Applies to class methods
    def giveRaise(self, percent):            # giveRaise = tracer(giveRaise)
        ...                                  # Makes giveRaise a descriptor

@tracer                                      # But fails for simple functions
def spam(a, b, c):                           # spam = tracer(spam)
    ...                                      # No attribute fetch occurs here

In the rest of this chapter we’re going to be fairly casual about using classes or functions to code our function decorators, as long as they are applied only to functions. Some decorators may not require the instance of the original class, and will still work on both functions and methods if coded as a class—something like Python’s own staticmethod decorator, for example, wouldn’t require an instance of the subject class (indeed, its whole point is to remove the instance from the call).

The moral of this story, though, is that if you want your decorators to work on both simple functions and methods, you’re probably better off using the nested-function-based coding pattern outlined here instead of a class with call interception.

Timing Calls

To sample the fuller flavor of what function decorators are capable of, let’s turn to a different use case. Our next decorator times calls made to a decorated function—both the time for one call, and the total time among all calls. The decorator is applied to two functions, in order to compare the relative speed of list comprehensions and the map built-in call:

# File timerdeco1.py
# Caveat: range still differs - a list in 2.X, an iterable in 3.X
# Caveat: timer won't work on methods as coded (see quiz solution)

import time, sys
force = list if sys.version_info[0] == 3 else (lambda X: X)

class timer:
    def __init__(self, func):
        self.func = func
        self.alltime = 0
    def __call__(self, *args, **kargs):
        start = time.clock()
        result = self.func(*args, **kargs)
        elapsed = time.clock() - start
        self.alltime += elapsed
        print('%s: %.5f, %.5f' % (self.func.__name__, elapsed, self.alltime))
        return result

@timer
def listcomp(N):
    return [x * 2 for x in range(N)]

@timer
def mapcall(N):
    return force(map((lambda x: x * 2), range(N)))

result = listcomp(5)                         # Time for this call, all calls, return value
listcomp(50000)
listcomp(500000)
listcomp(1000000)
print(result)
print('allTime = %s' % listcomp.alltime)     # Total time for all listcomp calls

print('')
result = mapcall(5)
mapcall(50000)
mapcall(500000)
mapcall(1000000)
print(result)
print('allTime = %s' % mapcall.alltime)      # Total time for all mapcall calls

print('\n**map/comp = %s' % round(mapcall.alltime / listcomp.alltime, 3))

When run in either Python 3.X or 2.X, the output of this file’s self-test code is as follows—giving for each function call the function name, time for this call, and time for all calls so far, along with the first call’s return value, cumulative time for each function, and the map-to-comprehension time ratio at the end:

c:\code> py −3 timerdeco1.py
listcomp: 0.00001, 0.00001
listcomp: 0.00499, 0.00499
listcomp: 0.05716, 0.06215
listcomp: 0.11565, 0.17781
[0, 2, 4, 6, 8]
allTime = 0.17780527629411225

mapcall: 0.00002, 0.00002
mapcall: 0.00988, 0.00990
mapcall: 0.10601, 0.11591
mapcall: 0.21690, 0.33281
[0, 2, 4, 6, 8]
allTime = 0.3328064956447921

**map/comp = 1.872

Times vary per Python line and test machine, of course, and cumulative time is available as a class instance attribute here. As usual, map calls are almost twice as slow as list comprehensions when the latter can avoid a function call (or equivalently, map’s requirement of function calls can make it slower).

Decorators versus per-call timing

For comparison, see Chapter 21 for a nondecorator approach to timing iteration alternatives like these. As a review, we saw two per-call timing techniques there, homegrown and library—here deployed to time the 1M list comprehension case of the decorator’s test code, though incurring extra costs for management code including an outer loop and function calls:

>>> def listcomp(N): [x * 2 for x in range(N)]

>>> import timer                             # Chapter 21 techniques
>>> timer.total(1, listcomp, 1000000)
(0.1461295268088542, None)

>>> import timeit
>>> timeit.timeit(number=1, stmt=lambda: listcomp(1000000))
0.14964829430189397

In this specific case, a nondecorator approach would allow the subject functions to be used with or without timing, but it would also complicate the call signature when timing is desired—we’d need to add code at every call instead of once at the def. Moreover, in the nondecorator scheme there would be no direct way to guarantee that all list builder calls in a program are routed through timer logic, short of finding and potentially changing them all. This may make it difficult to collect cumulative data for all calls.

In general, decorators may be preferred when functions are already deployed as part of a larger system, and may not be easily passed to analysis functions at calls. On the other hand, because decorators charge each call to a function with augmentation logic, a nondecorator approach may be better if you wish to augment calls more selectively. As usual, different tools serve different roles.

NOTE

Timer call portability and new options in 3.3: Also see Chapter 21’s more complete handling and selection of time module functions, as well as its sidebar concerning the new and improved timer functions in this module available as of Python 3.3 (e.g., perf_counter). We’re taking a simplistic approach here for both brevity and version neutrality, but time.clock may not be best on some platforms even prior to 3.3, and platform or version tests may be required outside Windows.
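As a version-spanning sketch (an adaptation, not the book’s code), a program can select the best available timer function at import time; note that time.clock was deprecated in 3.3 and removed entirely in Python 3.8:

```python
import sys, time

if hasattr(time, 'perf_counter'):            # 3.3 and later: high-resolution
    timefunc = time.perf_counter
elif sys.platform[:3] == 'win':              # Older Windows Pythons
    timefunc = time.clock
else:                                        # Older Unix Pythons
    timefunc = time.time

start = timefunc()
sum(x * 2 for x in range(100000))            # Some work to time
print('elapsed: %.6f' % (timefunc() - start))
```

The timer decorators in this chapter could use such a timefunc in place of time.clock directly to remain portable.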

Testing subtleties

Notice how this script uses its force setting to make it portable between 2.X and 3.X. As described in Chapter 14, the map built-in returns an iterable that generates results on demand in 3.X, but an actual list in 2.X. Hence, 3.X’s map by itself doesn’t compare directly to a list comprehension’s work. In fact, without wrapping it in a list call to force results production, the map test takes virtually no time at all in 3.X—it returns an iterable without iterating!

At the same time, adding this list call in 2.X too charges map with an unfair penalty—the map test’s results would include the time required to build two lists, not one. To work around this, the script selects a map enclosing function per the Python version number in sys: in 3.X, picking list, and in 2.X, using a no-op function that simply returns its input argument unchanged. This adds a very minor constant time in 2.X, which is probably fully overshadowed by the cost of the inner loop iterations in the timed function.

While this makes the comparison between list comprehensions and map more fair in either 2.X or 3.X, because range is also an iterator in 3.X, the results for 2.X and 3.X won’t compare directly unless you also hoist this call out of the timed code. They’ll be relatively comparable—and will reflect best practice code in each line anyhow—but a range iteration adds extra time in 3.X only. For more on all such things, see Chapter 21’s benchmark recreations; producing comparable numbers is often a nontrivial task.

Finally, as we did for the tracer decorator earlier, we could make this timing decorator reusable in other modules by indenting the self-test code at the bottom of the file under a __name__ test so it runs only when the file is run, not when it’s imported. We won’t do this here, though, because we’re about to add another feature to our code.

Adding Decorator Arguments

The timer decorator of the prior section works, but it would be nice if it were more configurable—providing an output label and turning trace messages on and off, for instance, might be useful in a general-purpose tool like this. Decorator arguments come in handy here: when they’re coded properly, we can use them to specify configuration options that can vary for each decorated function. A label, for instance, might be added as follows:

def timer(label=''):
    def decorator(func):
        def onCall(*args):            # Multilevel state retention:
            ...                       # args passed to function
            func(*args)               # func retained in enclosing scope
            print(label, ...          # label retained in enclosing scope
        return onCall
    return decorator                  # Returns the actual decorator

@timer('==>')                         # Like listcomp = timer('==>')(listcomp)
def listcomp(N): ...                  # listcomp is rebound to new onCall

listcomp(...)                         # Really calls onCall

This code adds an enclosing scope to retain a decorator argument for use on a later actual call. When the listcomp function is defined, Python really invokes decorator—the result of timer, run before decoration actually occurs—with the label value available in its enclosing scope. That is, timer returns the decorator, which remembers both the decorator argument and the original function, and returns the callable onCall, which ultimately invokes the original function on later calls. Because this structure creates new decorator and onCall functions, their enclosing scopes are per-decoration state retention.
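To make the three nested levels concrete, here is a minimal runnable variant of this skeleton; the square function and the printed message are illustrative additions, not part of the timer developed in this chapter:

```python
def timer(label=''):                          # Level 1: retains label
    def decorator(func):                      # Level 2: retains func
        def onCall(*args):                    # Level 3: runs on later calls
            result = func(*args)
            print(label, func.__name__)       # Both names from enclosing scopes
            return result
        return onCall
    return decorator

@timer('==>')                                 # Same as: square = timer('==>')(square)
def square(N):
    return N ** 2

print(square(4))                              # Prints '==> square', then 16
```

Decoration here runs timer('==>') first to make decorator, then calls decorator(square) to produce onCall, so each decorated function gets its own label and func in fresh enclosing scopes.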

We can put this structure to use in our timer to allow a label and a trace control flag to be passed in at decoration time. Here’s an example that does just that, coded in a module file named timerdeco2.py so it can be imported as a general tool; it uses a class for the second state retention level instead of a nested function, but the net result is similar:

import time

def timer(label='', trace=True):                  # On decorator args: retain args
    class Timer:
        def __init__(self, func):                 # On @: retain decorated func
            self.func = func
            self.alltime = 0
        def __call__(self, *args, **kargs):       # On calls: call original
            start = time.clock()
            result = self.func(*args, **kargs)
            elapsed = time.clock() - start
            self.alltime += elapsed
            if trace:
                format = '%s %s: %.5f, %.5f'
                values = (label, self.func.__name__, elapsed, self.alltime)
                print(format % values)
            return result
    return Timer

Mostly all we’ve done here is embed the original Timer class in an enclosing function, in order to create a scope that retains the decorator arguments per deployment. The outer timer function is called before decoration occurs, and it simply returns the Timer class to serve as the actual decorator. On decoration, an instance of Timer is made that remembers the decorated function itself, but also has access to the decorator arguments in the enclosing function scope.

Timing with decorator arguments

This time, rather than embedding self-test code in this file, we’ll run the decorator in a different file. Here’s a client of our timer decorator, the module file testseqs.py, applying it to sequence iteration alternatives again:

import sys
from timerdeco2 import timer
force = list if sys.version_info[0] == 3 else (lambda X: X)

@timer(label='[CCC]==>')
def listcomp(N):                            # Like listcomp = timer(...)(listcomp)
    return [x * 2 for x in range(N)]        # listcomp(...) triggers Timer.__call__

@timer(trace=True, label='[MMM]==>')
def mapcall(N):
    return force(map((lambda x: x * 2), range(N)))

for func in (listcomp, mapcall):
    result = func(5)                        # Time for this call, all calls, return value
    func(50000)
    func(500000)
    func(1000000)
    print(result)
    print('allTime = %s\n' % func.alltime)  # Total time for all calls

print('**map/comp = %s' % round(mapcall.alltime / listcomp.alltime, 3))

Again, to make this fair, map is wrapped in a list call in 3.X only. When run as is in 3.X or 2.X, this file prints the following—each decorated function now has a label of its own defined by decorator arguments, which will be more useful when we need to find trace displays mixed in with a larger program’s output:

c:\code> py -3 testseqs.py
[CCC]==> listcomp: 0.00001, 0.00001
[CCC]==> listcomp: 0.00504, 0.00505
[CCC]==> listcomp: 0.05839, 0.06344
[CCC]==> listcomp: 0.12001, 0.18344
[0, 2, 4, 6, 8]
allTime = 0.1834406801777564

[MMM]==> mapcall: 0.00003, 0.00003
[MMM]==> mapcall: 0.00961, 0.00964
[MMM]==> mapcall: 0.10929, 0.11892
[MMM]==> mapcall: 0.22143, 0.34035
[0, 2, 4, 6, 8]
allTime = 0.3403542519173618

**map/comp = 1.855

As usual, we can also test interactively to see how the decorator’s configuration arguments come into play:

>>> from timerdeco2 import timer
>>> @timer(trace=False)                 # No tracing, collect total time
... def listcomp(N):
...     return [x * 2 for x in range(N)]
...
>>> x = listcomp(5000)
>>> x = listcomp(5000)
>>> x = listcomp(5000)
>>> listcomp.alltime
0.0037191417530599152
>>> listcomp
<timerdeco2.timer.<locals>.Timer object at 0x02957518>

>>> @timer(trace=True, label='\t=>')    # Turn on tracing, custom label
... def listcomp(N):
...     return [x * 2 for x in range(N)]
...
>>> x = listcomp(5000)
        => listcomp: 0.00106, 0.00106
>>> x = listcomp(5000)
        => listcomp: 0.00108, 0.00214
>>> x = listcomp(5000)
        => listcomp: 0.00107, 0.00321
>>> listcomp.alltime
0.003208920466562404

As is, this timing function decorator can be used for any function, both in modules and interactively. In other words, it automatically qualifies as a general-purpose tool for timing code in our scripts. Watch for another example of decorator arguments in the section Implementing Private Attributes, and again in “A Basic Range-Testing Decorator for Positional Arguments”.

NOTE

Supporting methods: This section’s timer decorator works on any function, but a minor rewrite is required to be able to apply it to class-level methods too. In short, as our earlier section Class Blunders I: Decorating Methods illustrated, it must avoid using a nested class. Because this mutation was deliberately reserved to be a subject of one of our end-of-chapter quiz questions, though, I’ll avoid giving away the answer completely here.

Coding Class Decorators

So far we’ve been coding function decorators to manage function calls, but as we’ve seen, decorators have been extended to work on classes too as of Python 2.6 and 3.0. As described earlier, while similar in concept to function decorators, class decorators are applied to classes instead—they may be used either to manage classes themselves, or to intercept instance creation calls in order to manage instances. Also like function decorators, class decorators are really just optional syntactic sugar, though many believe that they make a programmer’s intent more obvious and minimize erroneous or missed calls.

Singleton Classes

Because class decorators may intercept instance creation calls, they can be used to either manage all the instances of a class, or augment the interfaces of those instances. To demonstrate, here’s a first class decorator example that does the former—managing all instances of a class. This code implements the classic singleton coding pattern, where at most one instance of a class ever exists. Its singleton function defines and returns a function for managing instances, and the @ syntax automatically wraps up a subject class in this function:

# 3.X and 2.X: global table

instances = {}
def singleton(aClass):                          # On @ decoration
    def onCall(*args, **kwargs):                # On instance creation
        if aClass not in instances:             # One dict entry per class
            instances[aClass] = aClass(*args, **kwargs)
        return instances[aClass]
    return onCall

To use this, decorate the classes for which you want to enforce a single-instance model (for reference, all the code in this section is in the file singletons.py):

@singleton                                      # Person = singleton(Person)
class Person:                                   # Rebinds Person to onCall
    def __init__(self, name, hours, rate):      # onCall remembers Person
        self.name = name
        self.hours = hours
        self.rate = rate
    def pay(self):
        return self.hours * self.rate

@singleton                                      # Spam = singleton(Spam)
class Spam:                                     # Rebinds Spam to onCall
    def __init__(self, val):                    # onCall remembers Spam
        self.attr = val

bob = Person('Bob', 40, 10)                     # Really calls onCall
print(bob.name, bob.pay())

sue = Person('Sue', 50, 20)                     # Same, single object
print(sue.name, sue.pay())

X = Spam(val=42)                                # One Person, one Spam
Y = Spam(99)
print(X.attr, Y.attr)

Now, when the Person or Spam class is later used to create an instance, the wrapping logic layer provided by the decorator routes instance construction calls to onCall, which in turn ensures a single instance per class, regardless of how many construction calls are made. Here’s this code’s output (2.X prints extra tuple parentheses):

c:\code> python singletons.py
Bob 400
Bob 400
42 42

Coding alternatives

Interestingly, you can code a more self-contained solution here if you’re able to use the nonlocal statement (available in Python 3.X only) to change enclosing scope names, as described earlier—the following alternative achieves an identical effect, by using one enclosing scope per class, instead of one global table entry per class. It works the same, but it does not depend on names in the global scope outside the decorator (note that the None check could use is instead of == here, but it’s a trivial test either way):

# 3.X only: nonlocal

def singleton(aClass):                          # On @ decoration
    instance = None
    def onCall(*args, **kwargs):                # On instance creation
        nonlocal instance                       # 3.X and later nonlocal
        if instance == None:
            instance = aClass(*args, **kwargs)  # One scope per class
        return instance
    return onCall

In either Python 3.X or 2.X (2.6 and later), you can also code a self-contained solution with either function attributes or a class instead. The first of the following codes the former, leveraging the fact that there will be one onCall function per decoration—the object namespace serves the same role as an enclosing scope. The second uses one instance per decoration, rather than an enclosing scope, function object, or global table. In fact, the second relies on the same coding pattern that we will later see is a common decorator class blunder—here we want just one instance, but that’s not usually the case:

# 3.X and 2.X: func attrs, classes (alternative codings)

def singleton(aClass):                          # On @ decoration
    def onCall(*args, **kwargs):                # On instance creation
        if onCall.instance == None:
            onCall.instance = aClass(*args, **kwargs)    # One function per class
        return onCall.instance
    onCall.instance = None
    return onCall

class singleton:
    def __init__(self, aClass):                 # On @ decoration
        self.aClass = aClass
        self.instance = None
    def __call__(self, *args, **kwargs):        # On instance creation
        if self.instance == None:
            self.instance = self.aClass(*args, **kwargs) # One instance per class
        return self.instance

To make this decorator a fully general-purpose tool, choose one coding, store it in an importable module file, and indent the self-test code under a __name__ check—steps we’ll leave as a suggested exercise. The final class-based version is a portable and arguably more explicit option, with extra structure that may better support later evolution, but OOP might not be warranted in all contexts.
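For reference, the module layout might be sketched as follows, here using the nonlocal coding; the file name singletondeco.py and the self-test class are assumptions for illustration:

```python
# File singletondeco.py (a hypothetical name)

def singleton(aClass):                       # On @ decoration
    instance = None
    def onCall(*args, **kwargs):             # On instance creation
        nonlocal instance                    # 3.X closure state
        if instance is None:
            instance = aClass(*args, **kwargs)
        return instance
    return onCall

if __name__ == '__main__':                   # Self-test: runs only when file is run
    @singleton
    class Spam:
        def __init__(self, val):
            self.attr = val

    X = Spam(42)
    Y = Spam(99)                             # Second call returns the first instance
    print(X.attr, Y.attr)                    # Prints: 42 42
```

Clients then simply import and apply: from singletondeco import singleton.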

Tracing Object Interfaces

The singleton example of the prior section illustrated using class decorators to manage all the instances of a class. Another common use case for class decorators augments the interface of each generated instance. Class decorators can essentially install on instances a wrapper or “proxy” logic layer that manages access to their interfaces in some way.

For example, in Chapter 31, the __getattr__ operator overloading method is shown as a way to wrap up entire object interfaces of embedded instances, in order to implement the delegation coding pattern. We saw similar examples in the managed attribute coverage of the prior chapter. Recall that __getattr__ is run when an undefined attribute name is fetched; we can use this hook to intercept method calls in a controller class and propagate them to an embedded object.

For reference, here’s the original nondecorator delegation example, working on two built-in type objects:

class Wrapper:
    def __init__(self, object):
        self.wrapped = object                   # Save object
    def __getattr__(self, attrname):
        print('Trace:', attrname)               # Trace fetch
        return getattr(self.wrapped, attrname)  # Delegate fetch

>>> x = Wrapper([1,2,3])            # Wrap a list
>>> x.append(4)                     # Delegate to list method
Trace: append
>>> x.wrapped                       # Print my member
[1, 2, 3, 4]

>>> x = Wrapper({"a": 1, "b": 2})   # Wrap a dictionary
>>> list(x.keys())                  # Delegate to dictionary method; use list() in 3.X
Trace: keys
['a', 'b']

In this code, the Wrapper class intercepts access to any of the wrapped object’s named attributes, prints a trace message, and uses the getattr built-in to pass off the request to the wrapped object. Specifically, it traces attribute accesses made outside the wrapped object’s class; accesses inside the wrapped object’s methods are not caught and run normally by design. This whole-interface model differs from the behavior of function decorators, which wrap up just one specific method.
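To see this inside/outside distinction at work, consider wrapping an instance whose method reads its own attributes: only the fetch made from outside is traced. The Account class here is a made-up example, and Wrapper is repeated from above so the snippet is self-contained:

```python
class Wrapper:                                  # As coded above
    def __init__(self, object):
        self.wrapped = object
    def __getattr__(self, attrname):
        print('Trace:', attrname)
        return getattr(self.wrapped, attrname)

class Account:                                  # Hypothetical class to wrap
    def __init__(self, balance):
        self.balance = balance
    def double(self):
        return self.balance * 2                 # self is the Account: not traced

acct = Wrapper(Account(100))
print(acct.double())                            # Traces 'double' only, then prints 200
```

The call routes through Wrapper.__getattr__ once to fetch the bound method, but the method's own self.balance access goes straight to the Account instance and produces no trace line.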

Tracing interfaces with class decorators

Class decorators provide an alternative and convenient way to code this __getattr__ technique to wrap an entire interface. As of both 2.6 and 3.0, for example, the prior class example can be coded as a class decorator that triggers wrapped instance creation, instead of passing a premade instance into the wrapper’s constructor (also augmented here to support keyword arguments with **kargs and to count the number of accesses made to illustrate changeable state):

def Tracer(aClass):                                   # On @ decorator
    class Wrapper:
        def __init__(self, *args, **kargs):           # On instance creation
            self.fetches = 0
            self.wrapped = aClass(*args, **kargs)     # Use enclosing scope name
        def __getattr__(self, attrname):
            print('Trace: ' + attrname)               # Catches all but own attrs
            self.fetches += 1
            return getattr(self.wrapped, attrname)    # Delegate to wrapped obj
    return Wrapper

if __name__ == '__main__':

    @Tracer
    class Spam:                                # Spam = Tracer(Spam)
        def display(self):                     # Spam is rebound to Wrapper
            print('Spam!' * 8)

    @Tracer
    class Person:                              # Person = Tracer(Person)
        def __init__(self, name, hours, rate): # Wrapper remembers Person
            self.name = name
            self.hours = hours
            self.rate = rate
        def pay(self):                         # Accesses outside class traced
            return self.hours * self.rate      # In-method accesses not traced

    food = Spam()                              # Triggers Wrapper()
    food.display()                             # Triggers __getattr__
    print([food.fetches])

    bob = Person('Bob', 40, 50)                # bob is really a Wrapper
    print(bob.name)                            # Wrapper embeds a Person
    print(bob.pay())

    print('')
    sue = Person('Sue', rate=100, hours=60)    # sue is a different Wrapper
    print(sue.name)                            # with a different Person
    print(sue.pay())

    print(bob.name)                            # bob has different state
    print(bob.pay())
    print([bob.fetches, sue.fetches])          # Wrapper attrs not traced

It’s important to note that this is very different from the tracer decorator we met earlier (despite the name!). In “Coding Function Decorators”, we looked at decorators that enabled us to trace and time calls to a given function or method. In contrast, by intercepting instance creation calls, the class decorator here allows us to trace an entire object interface—that is, accesses to any of the instance’s attributes.

The following is the output produced by this code under both 3.X and 2.X (2.6 and later): attribute fetches on instances of both the Spam and Person classes invoke the __getattr__ logic in the Wrapper class, because food and bob are really instances of Wrapper, thanks to the decorator’s redirection of instance creation calls:

c:\code> python interfacetracer.py
Trace: display
Spam!Spam!Spam!Spam!Spam!Spam!Spam!Spam!
[1]
Trace: name
Bob
Trace: pay
2000

Trace: name
Sue
Trace: pay
6000
Trace: name
Bob
Trace: pay
2000
[4, 2]

Notice how there is one Wrapper class with state retention per decoration, generated by the nested class statement in the Tracer function, and how each instance gets its own fetches counter by virtue of generating a new Wrapper instance. As we’ll see ahead, orchestrating this is trickier than you may expect.

Applying class decorators to built-in types

Also notice that the preceding decorates a user-defined class. Just like in the original example in Chapter 31, we can also use the decorator to wrap up a built-in type such as a list, as long as we either subclass to allow decoration syntax or perform the decoration manually—decorator syntax requires a class statement for the @ line. In the following, x is really a Wrapper again due to the indirection of decoration:

>>> from interfacetracer import Tracer
>>> @Tracer
... class MyList(list): pass      # MyList = Tracer(MyList)

>>> x = MyList([1, 2, 3])         # Triggers Wrapper()
>>> x.append(4)                   # Triggers __getattr__, append
Trace: append
>>> x.wrapped
[1, 2, 3, 4]

>>> WrapList = Tracer(list)       # Or perform decoration manually
>>> x = WrapList([4, 5, 6])       # Else subclass statement required
>>> x.append(7)
Trace: append
>>> x.wrapped
[4, 5, 6, 7]

The decorator approach allows us to move instance creation into the decorator itself, instead of requiring a premade object to be passed in. Although this seems like a minor difference, it lets us retain normal instance creation syntax and realize all the benefits of decorators in general. Rather than requiring all instance creation calls to route objects through a wrapper manually, we need only augment class definitions with decorator syntax:

@Tracer                                    # Decorator approach
class Person: ...
bob = Person('Bob', 40, 50)
sue = Person('Sue', rate=100, hours=60)

class Person: ...                          # Nondecorator approach
bob = Wrapper(Person('Bob', 40, 50))
sue = Wrapper(Person('Sue', rate=100, hours=60))

Assuming you will make more than one instance of a class, and want to apply the augmentation to every instance of a class, decorators will generally be a net win in terms of both code size and code maintenance.

NOTE

Attribute version skew note: The preceding tracer decorator works for explicitly accessed attribute names on all Pythons. As we learned in Chapter 38, Chapter 32, and elsewhere, though, __getattr__ intercepts built-ins’ implicit accesses to operator overloading methods like __str__ and __repr__ in Python 2.X’s default classic classes, but not in 3.X’s new-style classes.

In Python 3.X’s classes, instances inherit defaults for some, but not all of these names from the class (really, from the object superclass). Moreover, in 3.X, implicitly invoked attributes for built-in operations like printing and + are not routed through __getattr__, or its cousin, __getattribute__. In new-style classes, built-ins start such searches at classes and skip the normal instance lookup entirely.

Here, this means that the __getattr__ based tracing wrapper will automatically trace and propagate operator overloading calls for built-ins in 2.X as coded, but not in 3.X. To see this, display “x” directly at the end of the preceding interactive session—in 2.X the attribute __repr__ is traced and the list prints as expected, but in 3.X no trace occurs and the list prints using a default display for the Wrapper class:

>>> x                              # 2.X
Trace: __repr__
[4, 5, 6, 7]

>>> x                              # 3.X
<interfacetracer.Tracer.<locals>.Wrapper object at 0x02946358>

To work the same in 3.X, operator overloading methods generally must be redefined redundantly in the wrapper class, either by hand, by tools, or by definition in superclasses. We’ll see this at work again in a Private decorator later in this chapter—where we’ll also study ways to add the methods required of such code in 3.X.

Class Blunders II: Retaining Multiple Instances

Curiously, the decorator function in this example can almost be coded as a class instead of a function, with the proper operator overloading protocol. The following slightly simplified alternative works similarly because its __init__ is triggered when the @ decorator is applied to the class, and its __call__ is triggered when a subject class instance is created. Our objects are really instances of Tracer this time, and we essentially just trade an enclosing scope reference for an instance attribute here:

class Tracer:
    def __init__(self, aClass):                # On @decorator
        self.aClass = aClass                   # Use instance attribute
    def __call__(self, *args):                 # On instance creation
        self.wrapped = self.aClass(*args)      # ONE (LAST) INSTANCE PER CLASS!
        return self
    def __getattr__(self, attrname):
        print('Trace: ' + attrname)
        return getattr(self.wrapped, attrname)

@Tracer                                        # Triggers __init__
class Spam:                                    # Like: Spam = Tracer(Spam)
    def display(self):
        print('Spam!' * 8)
...

food = Spam()                                  # Triggers __call__
food.display()                                 # Triggers __getattr__

As we saw in the abstract earlier, though, this class-only alternative handles multiple classes as before, but it won’t quite work for multiple instances of a given class: each instance construction call triggers __call__, which overwrites the prior instance. The net effect is that Tracer saves just one instance—the last one created. Experiment with this yourself to see how, but here’s an example of the problem:

@Tracer
class Person:                                  # Person = Tracer(Person)
    def __init__(self, name):                  # Wrapper bound to Person
        self.name = name

bob = Person('Bob')                            # bob is really a Wrapper
print(bob.name)                                # Wrapper embeds a Person
sue = Person('Sue')
print(sue.name)                                # sue overwrites bob
print(bob.name)                                # OOPS: now bob's name is 'Sue'!

This code’s output follows—because this tracer only has a single shared instance, the second overwrites the first:

Trace: name
Bob
Trace: name
Sue
Trace: name
Sue

The problem here is bad state retention—we make one decorator instance per class, but not per class instance, such that only the last instance is retained. The solution, as in our prior class blunder for decorating methods, lies in abandoning class-based decorators.

The earlier function-based Tracer version does work for multiple instances, because each instance construction call makes a new Wrapper instance, instead of overwriting the state of a single shared Tracer instance; the original nondecorator version handles multiple instances correctly for the same reason. The moral here: decorators are not only arguably magical, they can also be incredibly subtle!

Decorators Versus Manager Functions

Regardless of such subtleties, the Tracer class decorator example ultimately still relies on __getattr__ to intercept fetches on a wrapped and embedded instance object. As we saw earlier, all we’ve really accomplished is moving the instance creation call inside a class, instead of passing the instance into a manager function. With the original nondecorator tracing example, we would simply code instance creation differently:

class Spam:                                # Nondecorator version
    ...                                    # Any class will do
food = Wrapper(Spam())                     # Special creation syntax

@Tracer
class Spam:                                # Decorator version
    ...                                    # Requires @ syntax at class
food = Spam()                              # Normal creation syntax

Essentially, class decorators shift special syntax requirements from the instance creation call to the class statement itself. This is also true for the singleton example earlier in this section—rather than decorating a class and using normal instance creation calls, we could simply pass the class and its construction arguments into a manager function:

instances = {}
def getInstance(aClass, *args, **kwargs):
    if aClass not in instances:
        instances[aClass] = aClass(*args, **kwargs)
    return instances[aClass]

bob = getInstance(Person, 'Bob', 40, 10)   # Versus: bob = Person('Bob', 40, 10)

Alternatively, we could use Python’s introspection facilities to fetch the class from an already created instance (assuming creating an initial instance is acceptable):

instances = {}
def getInstance(object):
    aClass = object.__class__
    if aClass not in instances:
        instances[aClass] = object
    return instances[aClass]

bob = getInstance(Person('Bob', 40, 10))   # Versus: bob = Person('Bob', 40, 10)

The same holds true for function decorators like the tracer we wrote earlier: rather than decorating a function with logic that intercepts later calls, we could simply pass the function and its arguments into a manager that dispatches the call:

def func(x, y):                            # Nondecorator version
    ...                                    # def tracer(func, args): ... func(*args)
result = tracer(func, (1, 2))              # Special call syntax

@tracer
def func(x, y):                            # Decorator version
    ...                                    # Rebinds name: func = tracer(func)
result = func(1, 2)                        # Normal call syntax

Manager function approaches like this place the burden of using special syntax on calls, instead of expecting decoration syntax at function and class definitions, but also allow you to selectively apply augmentation on a call-by-call basis.

Why Decorators? (Revisited)

So why did I just show you ways to not use decorators to implement singletons? As I mentioned at the start of this chapter, decorators present us with tradeoffs. Although syntax matters, we all too often forget to ask the “why” questions when confronted with new tools. Now that we’ve seen how decorators actually work, let’s step back for a minute to glimpse the big picture here before moving on to more code.

Like most language features, decorators have both pros and cons. For example, in the negatives column, decorators may suffer from three potential drawbacks, which can vary per decorator type:

Type changes

As we’ve seen, when wrappers are inserted, a decorated function or class does not retain its original type—it is rebound to a wrapper (proxy) object, which might matter in programs that use object names or test object types. In the singleton example, both the decorator and manager function approaches retain the original class type for instances; in the tracer code, neither approach does, because wrappers are required. Of course, you should avoid type checks in a polymorphic language like Python anyhow, but there are exceptions to most rules.

Extra calls

A wrapping layer added by decoration incurs the additional performance cost of an extra call each time the decorated object is invoked—calls are relatively time-expensive operations, so decoration wrappers can make a program slower. In the tracer code, both approaches require each attribute to be routed through a wrapper layer; the singleton example avoids extra calls by retaining the original class type.

All or nothing

Because decorators augment a function or class, they generally apply to every later call to the decorated object. That ensures uniform deployment, but can also be a negative if you’d rather apply an augmentation more selectively on a call-by-call basis.

That said, none of these is a very serious issue. For most programs, decorations’ uniformity is an asset, the type difference is unlikely to matter, and the speed hit of the extra calls will be insignificant. Furthermore, the latter of these occurs only when wrappers are used, can often be negated if we simply remove the decorator when optimal performance is required, and is also incurred by nondecorator solutions that add wrapping logic (including metaclasses, as we’ll see in Chapter 40).

Conversely, as we saw at the start of this chapter, decorators have three main advantages. Compared to the manager (a.k.a. “helper”) function solutions of the prior section, decorators offer:

Explicit syntax

Decorators make augmentation explicit and obvious. Their @ syntax is easier to recognize than special code in calls that may appear anywhere in a source file—in our singleton and tracer examples, for instance, the decorator lines seem more likely to be noticed than extra code at calls would be. Moreover, decorators allow function and instance creation calls to use normal syntax familiar to all Python programmers.

Code maintenance

Decorators avoid repeated augmentation code at each function or class call. Because they appear just once, at the definition of the class or function itself, they obviate redundancy and simplify future code maintenance. For our singleton and tracer cases, we need to use special code at each call to use a manager function approach—extra work is required both initially and for any modifications that must be made in the future.

Consistency

Decorators make it less likely that a programmer will forget to use required wrapping logic. This derives mostly from the two prior advantages—because decoration is explicit and appears only once, at the decorated objects themselves, decorators promote more consistent and uniform API usage than special code that must be included at each call. In the singleton example, for instance, it would be easy to forget to route all class creation calls through special code, which would subvert the singleton management altogether.

Decorators also promote code encapsulation to reduce redundancy and minimize future maintenance effort; although other code structuring tools do too, decorators add explicit structure that makes this natural for augmentation tasks.

None of these benefits completely requires decorator syntax to be achieved, though, and decorator usage is ultimately a stylistic choice. That said, most programmers find them to be a net win, especially as a tool for using libraries and APIs correctly.

NOTE

Historic anecdote: I can recall similar arguments being made both for and against constructor functions in classes—prior to the introduction of __init__ methods, programmers achieved the same effect by running an instance through a method manually when creating it (e.g., X=Class().init()). Over time, though, despite being fundamentally a stylistic choice, the __init__ syntax came to be universally preferred because it was more explicit, consistent, and maintainable. Although you should be the judge, decorators seem to bring many of the same assets to the table.

Managing Functions and Classes Directly

Most of our examples in this chapter have been designed to intercept function and instance creation calls. Although this is typical for decorators, they are not limited to this role. Because decorators work by running new functions and classes through decorator code, they can also be used to manage function and class objects themselves, not just later calls made to them.

Imagine, for example, that you require methods or classes used by an application to be registered to an API for later processing (perhaps that API will call the objects later, in response to events). Although you could provide a registration function to be called manually after the objects are defined, decorators make your intent more explicit.

The following simple implementation of this idea defines a decorator that can be applied to both functions and classes, to add the object to a dictionary-based registry. Because it returns the object itself instead of a wrapper, it does not intercept later calls:

# Registering decorated objects to an API

from __future__ import print_function      # 2.X

registry = {}
def register(obj):                         # Both class and func decorator
    registry[obj.__name__] = obj           # Add to registry
    return obj                             # Return obj itself, not a wrapper

@register
def spam(x):
    return(x ** 2)                         # spam = register(spam)

@register
def ham(x):
    return(x ** 3)

@register
class Eggs:                                # Eggs = register(Eggs)
    def __init__(self, x):
        self.data = x ** 4
    def __str__(self):
        return str(self.data)

print('Registry:')
for name in registry:
    print(name, '=>', registry[name], type(registry[name]))

print('\nManual calls:')
print(spam(2))                             # Invoke objects manually
print(ham(2))                              # Later calls not intercepted
X = Eggs(2)
print(X)

print('\nRegistry calls:')
for name in registry:
    print(name, '=>', registry[name](2))   # Invoke from registry

When this code is run, the decorated objects are added to the registry by name, but they still work as originally coded when they’re called later, without being routed through a wrapper layer. In fact, our objects can be run both manually and from inside the registry table:

c:\code> py -3 registry-deco.py
Registry:
spam => <function spam at 0x02969158> <class 'function'>
ham => <function ham at 0x02969400> <class 'function'>
Eggs => <class '__main__.Eggs'> <class 'type'>

Manual calls:
4
8
16

Registry calls:
spam => 4
ham => 8
Eggs => 16

A user interface might use this technique, for example, to register callback handlers for user actions. Handlers might be registered by function or class name, as done here, or decorator arguments could be used to specify the subject event; an extra def statement enclosing our decorator could be used to retain such arguments for use on decoration.
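For instance, such an event-keyed registry might be sketched as follows; the on decorator name, the event strings, and the handler functions here are illustrative assumptions, not part of the book's example:

```python
handlers = {}                                 # Maps event name to callable

def on(event):                                # Decorator argument: retain event name
    def register(func):                       # On @: add to registry
        handlers[event] = func
        return func                           # Return func itself: calls unaffected
    return register

@on('save')                                   # save_handler = on('save')(save_handler)
def save_handler(data):
    return 'saving %s' % data

@on('quit')
def quit_handler(data):
    return 'quitting'

print(handlers['save']('spam'))               # An API could dispatch by event name
```

This is the "extra def statement" pattern in miniature: the outer function retains the decorator argument, and the inner function performs the registration without inserting a wrapper.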

This example is artificial, but its technique is very general. For example, function decorators might also be used to process function attributes, and class decorators might insert new class attributes, or even new methods, dynamically. Consider the following function decorators—they assign function attributes to record information for later use by an API, but they do not insert a wrapper layer to intercept later calls:

# Augmenting decorated objects directly

# Augmenting decorated objects directly

>>> def decorate(func):
        func.marked = True              # Assign function attribute for later use
        return func

>>> @decorate
    def spam(a, b):
        return a + b

>>> spam.marked
True

>>> def annotate(text):                 # Same, but value is decorator argument
        def decorate(func):
            func.label = text
            return func
        return decorate

>>> @annotate('spam data')
    def spam(a, b):                     # spam = annotate(...)(spam)
        return a + b

>>> spam(1, 2), spam.label
(3, 'spam data')

Such decorators augment functions and classes directly, without catching later calls to them. We’ll see more examples of class decorations managing classes directly in the next chapter, because this turns out to encroach on the domain of metaclasses; for the remainder of this chapter, let’s turn to two larger case studies of decorators at work.

Example: “Private” and “Public” Attributes

The final two sections of this chapter present larger examples of decorator use. Both are presented with minimal description, partly because this chapter has hit its size limits, but mostly because you should already understand decorator basics well enough to study these on your own. Being general-purpose tools, these examples give us a chance to see how decorator concepts come together in more useful code.

Implementing Private Attributes

The following class decorator implements a Private declaration for class instance attributes—that is, attributes stored on an instance, or inherited from one of its classes. It disallows fetch and change access to such attributes from outside the decorated class, but still allows the class itself to access those names freely within its own methods. It’s not exactly C++ or Java, but it provides similar access control as an option in Python.

We saw an incomplete first-cut implementation of instance attribute privacy for changes only in Chapter 30. The version here extends this concept to validate attribute fetches too, and it uses delegation instead of inheritance to implement the model. In fact, in a sense this is just an extension to the attribute tracer class decorator we met earlier.

Although this example utilizes the new syntactic sugar of class decorators to code attribute privacy, its attribute interception is ultimately still based upon the __getattr__ and __setattr__ operator overloading methods we met in prior chapters. When a private attribute access is detected, this version uses the raise statement to raise an exception, along with an error message; the exception may be caught in a try or allowed to terminate the script.

Here is the code, along with a self test at the bottom of the file. It will work under both Python 3.X and 2.X (2.6 and later) because it employs version-neutral print and raise syntax, though as coded it catches built-ins’ dispatch to operator overloading method attributes in 2.X only (more on this in a moment):

"""

File access1.py (3.X + 2.X)

Privacy for attributes fetched from class instances.

See self-test code at end of file for a usage example.

Decorator same as: Doubler = Private('data', 'size')(Doubler).

Private returns onDecorator, onDecorator returns onInstance,

and each onInstance instance embeds a Doubler instance.

"""

traceMe = False

def trace(*args):

if traceMe: print('[' + ' '.join(map(str, args)) + ']')

def Private(*privates): # privates in enclosing scope

def onDecorator(aClass): # aClass in enclosing scope

class onInstance: # wrapped in instance attribute

def __init__(self, *args, **kargs):

self.wrapped = aClass(*args, **kargs)

def __getattr__(self, attr): # My attrs don't call getattr

trace('get:', attr) # Others assumed in wrapped

if attr in privates:

raise TypeError('private attribute fetch: ' + attr)

else:

return getattr(self.wrapped, attr)

def __setattr__(self, attr, value): # Outside accesses

trace('set:', attr, value) # Others run normally

if attr == 'wrapped': # Allow my attrs

self.__dict__[attr] = value # Avoid looping

elif attr in privates:

raise TypeError('private attribute change: ' + attr)

else:

setattr(self.wrapped, attr, value) # Wrapped obj attrs

return onInstance # Or use __dict__

return onDecorator

if __name__ == '__main__':

traceMe = True

@Private('data', 'size') # Doubler = Private(...)(Doubler)

class Doubler:

def __init__(self, label, start):

self.label = label # Accesses inside the subject class

self.data = start # Not intercepted: run normally

def size(self):

return len(self.data) # Methods run with no checking

def double(self): # Because privacy not inherited

for i in range(self.size()):

self.data[i] = self.data[i] * 2

def display(self):

print('%s => %s' % (self.label, self.data))

X = Doubler('X is', [1, 2, 3])

Y = Doubler('Y is', [-10, −20, −30])

# The following all succeed

print(X.label) # Accesses outside subject class

X.display(); X.double(); X.display() # Intercepted: validated, delegated

print(Y.label)

Y.display(); Y.double()

Y.label = 'Spam'

Y.display()

# The following all fail properly

"""

print(X.size()) # prints "TypeError: private attribute fetch: size"

print(X.data)

X.data = [1, 1, 1]

X.size = lambda S: 0

print(Y.data)

print(Y.size())

"""

When traceMe is True, the module file’s self-test code produces the following output. Notice how the decorator catches and validates both attribute fetches and assignments run outside of the wrapped class, but does not catch attribute accesses inside the class itself:

c:\code> py -3 access1.py
[set: wrapped <__main__.Doubler object at 0x00000000029769B0>]
[set: wrapped <__main__.Doubler object at 0x00000000029769E8>]
[get: label]
X is
[get: display]
X is => [1, 2, 3]
[get: double]
[get: display]
X is => [2, 4, 6]
[get: label]
Y is
[get: display]
Y is => [-10, -20, -30]
[get: double]
[set: label Spam]
[get: display]
Spam => [-20, -40, -60]

Implementation Details I

This code is a bit complex, and you’re probably best off tracing through it on your own to see how it works. To help you study, though, here are a few highlights worth mentioning.

Inheritance versus delegation

The first-cut privacy example shown in Chapter 30 used inheritance to mix in a __setattr__ to catch accesses. Inheritance makes this difficult, however, because differentiating between accesses from inside or outside the class is not straightforward (inside access should be allowed to run normally, and outside access should be restricted). To work around this, the Chapter 30 example requires inheriting classes to use __dict__ assignments to set attributes—an incomplete solution at best.

The version here uses delegation (embedding one object inside another) instead of inheritance; this pattern is better suited to our task, as it makes it much easier to distinguish between accesses inside and outside of the subject class. Attribute accesses from outside the subject class are intercepted by the wrapper layer’s overloading methods and delegated to the class if valid. Accesses inside the class itself (i.e., through self within its methods’ code) are not intercepted and are allowed to run normally without checks, because privacy is not inherited in this version.

Decorator arguments

The class decorator used here accepts any number of arguments, to name private attributes. What really happens, though, is that the arguments are passed to the Private function, and Private returns the decorator function to be applied to the subject class. That is, the arguments are used before decoration ever occurs; Private returns the decorator, which in turn “remembers” the privates list as an enclosing scope reference.

State retention and enclosing scopes

Speaking of enclosing scopes, there are actually three levels of state retention at work in this code:

§ The arguments to Private are used before decoration occurs and are retained as an enclosing scope reference for use in both onDecorator and onInstance.

§ The class argument to onDecorator is used at decoration time and is retained as an enclosing scope reference for use at instance construction time.

§ The wrapped instance object is retained as an instance attribute in the onInstance proxy object, for use when attributes are later accessed from outside the class.

This all works fairly naturally, given Python’s scope and namespace rules.

Using __dict__ and __slots__ (and other virtual names)

The __setattr__ method in this code relies on an instance object’s __dict__ attribute namespace dictionary in order to set onInstance’s own wrapped attribute. As we learned in the prior chapter, this method cannot assign an attribute directly without looping. However, it uses the setattr built-in instead of __dict__ to set attributes in the wrapped object itself. Moreover, getattr is used to fetch attributes in the wrapped object, since they may be stored in the object itself or inherited by it.

Because of that, this code will work for most classes—including those with “virtual” class-level attributes based on slots, properties, descriptors, and even __getattr__ and its ilk. By assuming a namespace dictionary for itself only and using storage-neutral tools for the wrapped object, the wrapper class avoids limitations inherent in other tools.

For example, you may recall from Chapter 32 that new-style classes with __slots__ may not store attributes in a __dict__ (and in fact may not even have one of these at all). However, because we rely on a __dict__ only at the onInstance level here, and not in the wrapped instance, this concern does not apply. In addition, because setattr and getattr apply to attributes based on both __dict__ and __slots__, our decorator applies to classes using either storage scheme. By the same reasoning, the decorator also applies to new-style properties and similar tools: delegated names will be looked up anew in the wrapped instance, irrespective of attributes of the decorator proxy object itself.
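To demonstrate, the following is a condensed, hypothetical variant of this section’s Private decorator (tracing is omitted, and the wrapped object is stored through __dict__ directly rather than via a __setattr__ guard), applied to a client class that keeps its attributes in __slots__ and therefore has no instance __dict__ at all. Delegation and privacy checks still work, because the proxy touches the wrapped object only through the storage-neutral getattr and setattr:

```python
def Private(*privates):                      # Condensed sketch of this section's tool
    def onDecorator(aClass):
        class onInstance:
            def __init__(self, *args, **kargs):
                self.__dict__['wrapped'] = aClass(*args, **kargs)   # Skip __setattr__
            def __getattr__(self, attr):
                if attr in privates:
                    raise TypeError('private attribute fetch: ' + attr)
                return getattr(self.__dict__['wrapped'], attr)      # Works for slots too
            def __setattr__(self, attr, value):
                if attr in privates:
                    raise TypeError('private attribute change: ' + attr)
                setattr(self.__dict__['wrapped'], attr, value)      # Storage-neutral
        return onInstance
    return onDecorator

@Private('secret')
class Slotted:
    __slots__ = ['public', 'secret']         # Instances have no __dict__
    def __init__(self):
        self.public = 1                      # Inside accesses run normally
        self.secret = 2

X = Slotted()
print(X.public)                              # Delegated fetch despite __slots__
X.public = 3                                 # Delegated assignment
print(X.public)
try:
    X.secret                                 # Privacy still enforced
except TypeError as exc:
    print(exc)
```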

Generalizing for Public Declarations, Too

Now that we have a Private implementation, it’s straightforward to generalize the code to allow for Public declarations too—they are essentially the inverse of Private declarations, so we need only negate the inner test. The example listed in this section allows a class to use decorators to define a set of either Private or Public instance attributes—attributes of any kind stored on an instance or inherited from its classes—with the following semantics:

§ Private declares attributes of a class’s instances that cannot be fetched or assigned, except from within the code of the class’s methods. That is, any name declared Private cannot be accessed from outside the class, while any name not declared Private can be freely fetched or assigned from outside the class.

§ Public declares attributes of a class’s instances that can be fetched or assigned from both outside the class and within the class’s methods. That is, any name declared Public can be freely accessed anywhere, while any name not declared Public cannot be accessed from outside the class.

Private and Public declarations are intended to be mutually exclusive: when using Private, all undeclared names are considered Public, and when using Public, all undeclared names are considered Private. They are essentially inverses, though undeclared names not created by a class’s methods behave slightly differently—new names can be assigned and thus created outside the class under Private (all undeclared names are accessible), but not under Public (all undeclared names are inaccessible).

Again, study this code on your own to get a feel for how this works. Notice that this scheme adds an additional fourth level of state retention at the top, beyond that described in the preceding section: the test functions used by the lambdas are saved in an extra enclosing scope. This example is coded to run under either Python 3.X or 2.X (2.6 or later), though it comes with a caveat when run under 3.X (explained briefly in the file’s docstring and expanded on after the code):

"""

File access2.py (3.X + 2.X)

Class decorator with Private and Public attribute declarations.

Controls external access to attributes stored on an instance, or

Inherited by it from its classes. Private declares attribute names

that cannot be fetched or assigned outside the decorated class,

and Public declares all the names that can.

Caveat: this works in 3.X for explicitly named attributes only: __X__

operator overloading methods implicitly run for built-in operations

do not trigger either __getattr__ or __getattribute__ in new-style

classes. Add __X__ methods here to intercept and delegate built-ins.

"""

traceMe = False

def trace(*args):

if traceMe: print('[' + ' '.join(map(str, args)) + ']')

def accessControl(failIf):

def onDecorator(aClass):

class onInstance:

def __init__(self, *args, **kargs):

self.__wrapped = aClass(*args, **kargs)

def __getattr__(self, attr):

trace('get:', attr)

if failIf(attr):

raise TypeError('private attribute fetch: ' + attr)

else:

return getattr(self.__wrapped, attr)

def __setattr__(self, attr, value):

trace('set:', attr, value)

if attr == '_onInstance__wrapped':

self.__dict__[attr] = value

elif failIf(attr):

raise TypeError('private attribute change: ' + attr)

else:

setattr(self.__wrapped, attr, value)

return onInstance

return onDecorator

def Private(*attributes):

return accessControl(failIf=(lambda attr: attr in attributes))

def Public(*attributes):

return accessControl(failIf=(lambda attr: attr not in attributes))

See the prior example’s self-test code for a usage example. Here’s a quick look at these class decorators in action at the interactive prompt; they work the same in 2.X and 3.X for attributes referenced by explicit name like those tested here. As advertised, non-Private or Public names can be fetched and changed from outside the subject class, but Private or non-Public names cannot:

>>> from access2 import Private, Public

>>> @Private('age')                         # Person = Private('age')(Person)
    class Person:                           # Person = onInstance with state
        def __init__(self, name, age):
            self.name = name
            self.age = age                  # Inside accesses run normally

>>> X = Person('Bob', 40)
>>> X.name                                  # Outside accesses validated
'Bob'
>>> X.name = 'Sue'
>>> X.name
'Sue'
>>> X.age
TypeError: private attribute fetch: age
>>> X.age = 'Tom'
TypeError: private attribute change: age

>>> @Public('name')
    class Person:
        def __init__(self, name, age):
            self.name = name
            self.age = age

>>> X = Person('bob', 40)                   # X is an onInstance
>>> X.name                                  # onInstance embeds Person
'bob'
>>> X.name = 'Sue'
>>> X.name
'Sue'
>>> X.age
TypeError: private attribute fetch: age
>>> X.age = 'Tom'
TypeError: private attribute change: age

Implementation Details II

To help you analyze the code, here are a few final notes on this version. Since this is just a generalization of the preceding section’s version, the implementation notes there apply here as well.

Using __X pseudoprivate names

Besides generalizing, this version also makes use of Python’s __X pseudoprivate name mangling feature (which we met in Chapter 31) to localize the wrapped attribute to the proxy control class, by automatically prefixing it with this class’s name. This avoids the prior version’s risk of collisions with a wrapped attribute that may be used by the real, wrapped class, and it’s useful in a general tool like this. It’s not quite “privacy,” though, because the mangled version of the name can still be used freely outside the class. Notice that we also have to use the fully expanded name string, '_onInstance__wrapped', as a test value in __setattr__, because that’s what Python changes the name to.
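A quick demonstration of the mangling itself (the Proxy class here is a hypothetical illustration): any __X name used in a class statement’s code is expanded to include the enclosing class’s name, which is why the __setattr__ test must spell out the full string:

```python
class Proxy:
    def __init__(self):
        self.__wrapped = 'hidden'            # Mangled to _Proxy__wrapped

X = Proxy()
print('_Proxy__wrapped' in X.__dict__)       # Only the expanded name exists
print(X._Proxy__wrapped)                     # And it is still reachable outside
```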

Breaking privacy

Although this example does implement access controls for attributes of an instance and its classes, it is possible to subvert these controls in various ways—for instance, by going through the expanded version of the wrapped attribute explicitly (bob.pay might not work, but the fully mangled bob._onInstance__wrapped.pay could!). If you have to explicitly try to do so, though, these controls are probably sufficient for normal intended use. Of course, privacy controls can generally be subverted in other languages if you try hard enough (#define private public may work in some C++ implementations, too). Although access controls can reduce accidental changes, much of this is up to programmers in any language; whenever source code may be changed, airtight access control will always be a bit of a pipe dream.

Decorator tradeoffs

We could again achieve the same results without decorators, by using manager functions or coding the name rebinding of decorators manually; the decorator syntax, however, makes this consistent and a bit more obvious in the code. The chief potential downsides of this and any other wrapper-based approach are that attribute access incurs an extra call, and instances of decorated classes are not really instances of the original decorated class—if you test their type with X.__class__ or isinstance(X, C), for example, you’ll find that they are instances of the wrapper class. Unless you plan to do introspection on objects’ types, though, the type issue is probably irrelevant, and the extra call may matter mostly during development; as we’ll see later, there are ways to remove decorations automatically if desired.
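The type-testing caveat can be seen with a stripped-down proxy (the proxy and class names here are hypothetical): instances of a decorated class are really instances of the wrapper, so isinstance tests against the original class fail even though delegation works:

```python
def proxy(aClass):                           # Minimal wrapper-based class decorator
    class Wrapper:
        def __init__(self, *args, **kargs):
            self.wrapped = aClass(*args, **kargs)
        def __getattr__(self, attr):
            return getattr(self.wrapped, attr)
    return Wrapper

class Spam:
    def greet(self):
        return 'hello'

Original = Spam                              # Keep a reference to the real class
Spam = proxy(Spam)                           # Same effect as @proxy decoration

X = Spam()
print(X.greet())                             # Delegation works as usual
print(X.__class__.__name__)                  # 'Wrapper', not 'Spam'
print(isinstance(X, Original))               # False: X only embeds a Spam
```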

Open Issues

As is, this example works as planned under both Python 2.X and 3.X for methods called explicitly by name. As with most software, though, there is always room for improvement. Most notably, this tool turns in mixed performance on operator overloading methods if they are used by client classes.

As coded, the proxy class is a classic class when run under 2.X, but a new-style class when run by 3.X. As such, the code supports any client class in 2.X, but in 3.X fails to validate or delegate operator overloading methods dispatched implicitly by built-in operations, unless they are redefined in the proxy. Clients that do not use operator overloading are fully supported, but others may require additional code in 3.X.

Importantly, this is not a new-style class issue per se; it’s a Python version issue—the same code runs differently and fails in 3.X only. Because the nature of the wrapped object’s class is irrelevant to the proxy, we are concerned only with the proxy’s own code, which works under 2.X but not 3.X.

We’ve met this issue a few times already in this book, but let’s take a quick look at its impact on the very realistic code we’ve written here, and explore a workaround to it.

Caveat: Implicitly run operator overloading methods fail to delegate under 3.X

Like all delegation-based classes that use __getattr__, this decorator works cross-version for normally named or explicitly called attributes only. When run implicitly by built-in operations, operator overloading methods like __str__ and __add__ work differently for new-style classes. Because this code is interpreted as a new-style class in 3.X only, such operations fail to reach an embedded object that defines them when run under this Python line as currently coded.

As we learned in the prior chapter, built-in operations look for operator overloading names in instances for classic classes, but not for new-style classes—for the latter, they skip the instance entirely and begin the search for such methods in classes (technically, in the namespace dictionaries of all classes in the instance’s tree). Hence, the __X__ operator overloading methods implicitly run for built-in operations do not trigger either __getattr__ or __getattribute__ in new-style classes; because such attribute fetches skip our onInstance class’s __getattr__ altogether, they cannot be validated or delegated.

Our decorator’s class is not coded as explicitly new-style (by deriving from object), so it will catch operator overloading methods if run under 2.X as a default classic class. In 3.X, though, because all classes are new-style automatically (and by mandate), such methods will fail if they are implemented by the embedded object—because they are not caught by the proxy, they won’t be passed on.

The most direct workaround in 3.X is to redefine redundantly in onInstance all the operator overloading methods that can possibly be used in wrapped objects. Such extra methods can be added by hand, by tools that partly automate the task (e.g., with class decorators or the metaclasses discussed in the next chapter), or by definition in reusable superclasses. Though tedious—and code-intensive enough to largely omit here—we’ll explore approaches to satisfying this 3.X-only requirement in a moment.

First, though, to see the difference for yourself, try applying the decorator to a class that uses operator overloading methods under 2.X; validations work as before, and both the __str__ method used by printing and the __add__ method run for + invoke the decorator’s __getattr__ and hence wind up being validated and delegated to the subject Person object correctly:

C:\code> c:\python27\python
>>> from access2 import Private
>>> @Private('age')
    class Person:
        def __init__(self):
            self.age = 42
        def __str__(self):
            return 'Person: ' + str(self.age)
        def __add__(self, yrs):
            self.age += yrs

>>> X = Person()
>>> X.age                       # Name validations fail correctly
TypeError: private attribute fetch: age
>>> print(X)                    # __getattr__ => runs Person.__str__
Person: 42
>>> X + 10                      # __getattr__ => runs Person.__add__
>>> print(X)                    # __getattr__ => runs Person.__str__
Person: 52

When the same code is run under Python 3.X, though, the implicitly invoked __str__ and __add__ skip the decorator’s __getattr__ and look for definitions in or above the decorator class itself; print winds up finding the default display inherited from the class type (technically, from the implied object superclass in 3.X), and + generates an error because no default is inherited:

C:\code> c:\python33\python
>>> from access2 import Private
>>> @Private('age')
    class Person:
        def __init__(self):
            self.age = 42
        def __str__(self):
            return 'Person: ' + str(self.age)
        def __add__(self, yrs):
            self.age += yrs

>>> X = Person()                # Name validations still work
>>> X.age                       # But 3.X fails to delegate built-ins!
TypeError: private attribute fetch: age
>>> print(X)
<access2.accessControl.<locals>.onDecorator.<locals>.onInstance object at ...etc>
>>> X + 10
TypeError: unsupported operand type(s) for +: 'onInstance' and 'int'
>>> print(X)
<access2.accessControl.<locals>.onDecorator.<locals>.onInstance object at ...etc>

Strangely, this occurs only for dispatch from built-in operations; explicit direct calls to overload methods are routed to __getattr__, though clients using operator overloading can’t be expected to do the same:

>>> X.__add__(10)                   # Though calls by name work normally
>>> X._onInstance__wrapped.age      # Break privacy to view result...
52

In other words, this is a matter of built-in operations versus explicit calls; it has little to do with the actual names of the methods involved. Just for built-in operations, Python skips a step for 3.X’s new-style classes.
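This skipped step can be isolated in a few lines, independent of the privacy code (the class here is a hypothetical illustration). Under 3.X, an explicit by-name call reaches __getattr__, while the equivalent built-in operation does not:

```python
class C:
    def __getattr__(self, attr):             # Runs for undefined names only
        print('getattr:', attr)
        return lambda *args: 'delegated'

X = C()
print(X.__add__(1))                          # Explicit call: routed to __getattr__
try:
    X + 1                                    # Built-in operation: skips __getattr__
except TypeError as exc:
    print(exc)
```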

Using the alternative __getattribute__ method won’t help here—although it is defined to catch every attribute reference (not just undefined names), it is also not run by built-in operations. Python’s property feature, which we met in Chapter 38, won’t help directly here either; recall that properties are automatically run code associated with specific attributes defined when a class is written, and are not designed to handle arbitrary attributes in wrapped objects.

Approaches to redefining operator overloading methods for 3.X

As mentioned earlier, the most straightforward solution under 3.X is to redundantly redefine operator overloading names that may appear in embedded objects in delegation-based classes like our decorator. This isn’t ideal because it creates some code redundancy, especially compared to 2.X solutions. However, it isn’t an impossibly major coding effort; it can be automated to some extent with tools or superclasses; it suffices to make our decorator work in 3.X; and it may allow operator overloading names to be declared Private or Public too, assuming overloading methods trigger the failIf test internally.

Inline definition

For instance, the following is an inline redefinition approach—add method redefinitions to the proxy for every operator overloading method a wrapped object may define itself, to catch and delegate. We’re adding just four operation interceptors to illustrate, but others are similar (the newly added methods follow the first comment in this listing):

def accessControl(failIf):
    def onDecorator(aClass):
        class onInstance:
            def __init__(self, *args, **kargs):
                self.__wrapped = aClass(*args, **kargs)

            # Intercept and delegate built-in operations specifically
            def __str__(self):
                return str(self.__wrapped)
            def __add__(self, other):
                return self.__wrapped + other              # Or getattr(x, '__add__')(y)
            def __getitem__(self, index):
                return self.__wrapped[index]               # If needed
            def __call__(self, *args, **kargs):
                return self.__wrapped(*args, **kargs)      # If needed
            # plus any others needed

            # Intercept and delegate by-name attribute access generically
            def __getattr__(self, attr): ...
            def __setattr__(self, attr, value): ...
        return onInstance
    return onDecorator

Mix-in superclasses

Alternatively, these methods can be inserted by a common superclass—given that there are dozens of such methods, an external class may be better suited to the task, especially if it is general enough to be used in any such interface proxy class. Either of the following mix-in class schemes (among likely others) suffices to catch and delegate built-in operations:

§ The first catches built-ins and forcibly reroutes down to the subclass __getattr__. It requires that operator overloading names be public per the decorator’s specifications, but built-in operation calls will work the same as both explicit name calls and 2.X’s classic classes.

§ The second catches built-ins and reroutes to the wrapped object directly. It requires and assumes a proxy attribute named _wrapped that gives access to the embedded object—which is less than ideal because it precludes wrapped objects from using the same name and creates a subclass dependency, but better than using the mangled and class-specific _onInstance__wrapped, and no worse than a similarly named method.

Like the inline approach, both of these mix-ins also require one method per built-in operation in general tools that proxy arbitrary objects’ interfaces. Notice how these classes catch operation calls rather than operation attribute fetches, and thus must perform the actual operation by delegating a call or expression:

class BuiltinsMixin:
    def __add__(self, other):
        return self.__class__.__getattr__(self, '__add__')(other)
    def __str__(self):
        return self.__class__.__getattr__(self, '__str__')()
    def __getitem__(self, index):
        return self.__class__.__getattr__(self, '__getitem__')(index)
    def __call__(self, *args, **kargs):
        return self.__class__.__getattr__(self, '__call__')(*args, **kargs)
    # plus any others needed

def accessControl(failIf):
    def onDecorator(aClass):
        class onInstance(BuiltinsMixin):
            ...rest unchanged...
            def __getattr__(self, attr): ...
            def __setattr__(self, attr, value): ...

class BuiltinsMixin:
    def __add__(self, other):
        return self._wrapped + other                 # Assume a _wrapped
    def __str__(self):                               # Bypass __getattr__
        return str(self._wrapped)
    def __getitem__(self, index):
        return self._wrapped[index]
    def __call__(self, *args, **kargs):
        return self._wrapped(*args, **kargs)
    # plus any others needed

def accessControl(failIf):
    def onDecorator(aClass):
        class onInstance(BuiltinsMixin):
            ...and use self._wrapped instead of self.__wrapped...
            def __getattr__(self, attr): ...
            def __setattr__(self, attr, value): ...

Either one of these superclass mix-ins will be extraneous code, but must be implemented only once, and seem much more straightforward than the various metaclass- or decorator-based tool approaches you’ll find online that populate each proxy class with the requisite methods redundantly (see the class augmentation examples in Chapter 40 for the principles behind such tools).

Coding variations: Routers, descriptors, automation

Naturally, both of the prior section’s mix-in superclasses might be improved with additional code changes we’ll largely pass on here, except for two variations worth noting briefly. First, compare the following mutation of the first mix-in—which uses a simpler coding structure but will incur an extra call per built-in operation, making it slower (though perhaps not significantly so in a proxy context):

class BuiltinsMixin:
    def reroute(self, attr, *args, **kargs):
        return self.__class__.__getattr__(self, attr)(*args, **kargs)
    def __add__(self, other):
        return self.reroute('__add__', other)
    def __str__(self):
        return self.reroute('__str__')
    def __getitem__(self, index):
        return self.reroute('__getitem__', index)
    def __call__(self, *args, **kargs):
        return self.reroute('__call__', *args, **kargs)
    # plus any others needed

Second, all the preceding built-in mix-in classes code each operator overloading method explicitly, and intercept the call issued for the operation. With an alternative coding, we could instead generate methods from a list of names mechanically, and intercept only the attribute fetch preceding the call by creating class-level descriptors of the prior chapter—as in the following, which, like the second mix-in alternative, assumes the proxied object is named _wrapped in the proxy instance itself:

class BuiltinsMixin:
    class ProxyDesc(object):                          # object for 2.X
        def __init__(self, attrname):
            self.attrname = attrname
        def __get__(self, instance, owner):
            return getattr(instance._wrapped, self.attrname)    # Assume a _wrapped

    builtins = ['add', 'str', 'getitem', 'call']      # Plus any others
    for attr in builtins:
        exec('__%s__ = ProxyDesc("__%s__")' % (attr, attr))

This coding may be the most concise, but also the most implicit and complex, and is fairly tightly coupled with its subclasses by the shared name. The loop at the end of this class is equivalent to the following, run in the mix-in class’s local scope—it creates descriptors that respond to initial name lookups by fetching from the wrapped object in __get__, rather than catching the later operation call itself:

__add__ = ProxyDesc("__add__")
__str__ = ProxyDesc("__str__")
...etc...

With such operator overloading methods added—either inline or by mix-in inheritance—the prior Private example client that overloaded + and print with __str__ and __add__ works correctly under 2.X and 3.X, as do subclasses that overload indexing and calls. If you care to experiment further, see files access2_builtins*.py in the book examples package for complete codings of these options; we’ll also employ the third of the mix-in options in a solution to an end-of-chapter quiz.

Should operator methods be validated?

Adding support for operator overloading methods is required of interface proxies in general, to delegate calls correctly. In our specific privacy application, though, it also raises some additional design choices. In particular, privacy of operator overloading methods differs per implementation:

§ Because they invoke __getattr__, the rerouter mix-ins require either that all __X__ names accessed be listed in Public decorations, or that Private be used instead when operator overloading is present in clients. In classes that use overloading heavily, Public may be impractical.

§ Because they bypass __getattr__ entirely, as coded here both the inline scheme and the self._wrapped mix-ins do not have these constraints, but they preclude built-in operations from being made private, and cause built-in operation dispatch to work asymmetrically from both explicit __X__ calls by name and 2.X’s default classic classes.

§ Python 2.X classic classes have the first bullet’s constraints, simply because all __X__ names are routed through __getattr__ automatically.

§ Operator overloading names and protocols differ between 2.X and 3.X, making truly cross-version decoration less than trivial (e.g., Public decorators may need to list names from both lines).

We’ll leave final policy here a TBD, but some interface proxies might prefer to allow __X__ operator names to always pass unchecked when delegated.

In the general case, though, a substantial amount of extra code is required to accommodate 3.X’s new-style classes as delegation proxies—in principle, every operator overloading method that is no longer dispatched as a normal instance attribute automatically will need to be defined redundantly in a general tool class like this privacy decorator. This is why this extension is omitted in our code: there are potentially more than 50 such methods! Because all its classes are new-style, delegation-based code is more difficult—though not necessarily impossible—in Python 3.X.

Implementation alternatives: __getattribute__ inserts, call stack inspection

Although redundantly defining operator overloading methods in wrappers is probably the most straightforward workaround to the Python 3.X dilemma outlined in the prior section, it’s not necessarily the only one. We don’t have space to explore this issue much further here, so deeper investigation will have to be relegated to a suggested exercise. Because one dead-end alternative illustrates class concepts well, though, it merits a brief mention.

One downside of the privacy example is that instance objects are not truly instances of the original class—they are instances of the wrapper instead. In some programs that rely on type testing, this might matter. To support such cases, we might try to achieve similar effects by inserting a __getattribute__ and a __setattr__ method into the original class, to catch every attribute reference and assignment made on its instances. These inserted methods would pass valid requests up to their superclass to avoid loops, using the techniques we studied in the prior chapter. Here is the potential change to our class decorator’s code:

# Method insertion: rest of access2.py code as before

def accessControl(failIf):

def onDecorator(aClass):

def getattributes(self, attr):

trace('get:', attr)

if failIf(attr):

raise TypeError('private attribute fetch: ' + attr)

else:

return object.__getattribute__(self, attr)

def setattributes(self, attr, value):

trace('set:', attr)

if failIf(attr):

raise TypeError('private attribute change: ' + attr)

else:

return object.__setattr__(self, attr, value)

aClass.__getattribute__ = getattributes

aClass.__setattr__ = setattributes # Insert accessors

return aClass # Return original class

return onDecorator

This alternative addresses the type-testing issue but suffers from others. For one thing, this decorator can be used by new-style class clients only: because __getattribute__ is a new-style-only tool (as is this __setattr__ coding), decorated classes in 2.X must use new-style derivation, which may or may not be appropriate for their goals. In fact, the set of classes supported is even further limited: inserting methods will break clients that are already using a __setattr__ or __getattribute__ of their own.

Worse, this scheme does not address the built-in operation attributes issue described in the prior section, because __getattribute__ is also not run in these contexts. In our case, if Person had a __str__ it would be run by print operations, but only because it was actually present in that class. As before, the __str__ attribute would not be routed to the inserted __getattribute__ method generically—printing would bypass this method altogether and call the class’s __str__ directly.

Although this is probably better than not supporting operator overloading methods in a wrapped object at all (barring redefinition, at least), this scheme still cannot intercept and validate __X__ methods, making it impossible for any of them to be private. Whether operator overloading methods should be private is another matter, but this structure precludes the possibility.

Much worse, because this nonwrapper approach works by adding a __getattribute__ and __setattr__ to the decorated class, it also intercepts attribute accesses made by the class itself and validates them the same as accesses made from outside. In other words, the class’s own methods won’t be able to use its private names either! This is a showstopper for the insertion approach.

In fact, inserting these methods this way is functionally equivalent to inheriting them, and implies the same constraints as our original Chapter 30 privacy code. To know whether an attribute access originated inside or outside the class, our methods might need to inspect frame objects on the Python call stack. This might ultimately yield a solution—implementing private attributes as properties or descriptors that check the stack and validate for outside accesses only, for example—but it would slow access further, and is far too dark a magic for us to explore here. (Descriptors seem to make all things possible, even when they shouldn’t!)
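For the curious, the call stack inspection alluded to here might look something like the following sketch, which uses sys._getframe to guess whether an access originated inside one of the object's own methods. All names are illustrative; this is fragile, slow, and shown only to make the idea concrete, not as a recommended technique:

```python
import sys

class Secret(object):
    def __init__(self):
        self._internal = 42                    # a "private" attribute

    def __getattribute__(self, name):
        if name.startswith('_internal'):
            # Rough heuristic: allow the fetch only if the calling frame's
            # local 'self' is this same instance (i.e., an inside access)
            caller = sys._getframe(1)
            if caller.f_locals.get('self') is not self:
                raise TypeError('private attribute fetch: ' + name)
        return object.__getattribute__(self, name)

    def reveal(self):
        return self._internal                  # inside access: allowed

s = Secret()
print(s.reveal())                              # inside fetch succeeds
try:
    s._internal                                # outside fetch is rejected
except TypeError as e:
    print('caught:', e)
```

Among other flaws, this heuristic is easily fooled by any outside function that happens to bind the instance to a local named self, which is part of why the text dismisses the approach as too dark a magic.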

While interesting, and possibly relevant for some other use cases, this method insertion technique doesn’t meet our goals. We won’t explore this option’s coding pattern further here because we will study class augmentation techniques in the next chapter, in conjunction with metaclasses. As we’ll see there, metaclasses are not strictly required for changing classes this way, because class decorators can often serve the same role.

Python Isn’t About Control

Now that I’ve gone to such great lengths to implement Private and Public attribute declarations for Python code, I must again remind you that it is not entirely Pythonic to add access controls to your classes like this. In fact, most Python programmers will probably find this example to be largely or totally irrelevant, apart from serving as a demonstration of decorators in action. Most large Python programs get by successfully without any such controls at all.

That said, you might find this tool useful in limited scopes during development. If you do wish to regulate attribute access in order to eliminate coding mistakes, or happen to be a soon-to-be-ex-C++-or-Java programmer, most things are possible with Python’s operator overloading and introspection tools.

Example: Validating Function Arguments

As a final example of the utility of decorators, this section develops a function decorator that automatically tests whether arguments passed to a function or method are within a valid numeric range. It’s designed to be used during either development or production, and it can be used as a template for similar tasks (e.g., argument type testing, if you must). Because this chapter’s size limits have been broached, this example’s code is largely self-study material, with limited narrative; as usual, browse the code for more details.

The Goal

In the object-oriented tutorial of Chapter 28, we wrote a class that gave a pay raise to objects representing people based upon a passed-in percentage:

class Person:

...

def giveRaise(self, percent):

self.pay = int(self.pay * (1 + percent))

There, we noted that if we wanted the code to be robust it would be a good idea to check the percentage to make sure it’s not too large or too small. We could implement such a check with either if or assert statements in the method itself, using inline tests:

class Person:

def giveRaise(self, percent): # Validate with inline code

if percent < 0.0 or percent > 1.0:

raise TypeError('percent invalid')

self.pay = int(self.pay * (1 + percent))

class Person: # Validate with asserts

def giveRaise(self, percent):

assert percent >= 0.0 and percent <= 1.0, 'percent invalid'

self.pay = int(self.pay * (1 + percent))

However, this approach clutters up the method with inline tests that will probably be useful only during development. For more complex cases, this can become tedious (imagine trying to inline the code needed to implement the attribute privacy provided by the last section’s decorator). Perhaps worse, if the validation logic ever needs to change, there may be arbitrarily many inline copies to find and update.

A more useful and interesting alternative would be to develop a general tool that can perform range tests for us automatically, for the arguments of any function or method we might code now or in the future. A decorator approach makes this explicit and convenient:

class Person:

@rangetest(percent=(0.0, 1.0)) # Use decorator to validate

def giveRaise(self, percent):

self.pay = int(self.pay * (1 + percent))

Isolating validation logic in a decorator simplifies both clients and future maintenance.

Notice that our goal here is different than the attribute validations coded in the prior chapter’s final example. Here, we mean to validate the values of function arguments when passed, rather than attribute values when set. Python’s decorator and introspection tools allow us to code this new task just as easily.

A Basic Range-Testing Decorator for Positional Arguments

Let’s start with a basic range test implementation. To keep things simple, we’ll begin by coding a decorator that works only for positional arguments and assumes they always appear at the same position in every call; they cannot be passed by keyword name, and we don’t support additional **args keywords in calls because this can invalidate the positions declared in the decorator. Code the following in a file called rangetest1.py:

def rangetest(*argchecks): # Validate positional arg ranges

def onDecorator(func):

if not __debug__: # True if "python -O main.py args..."

return func # No-op: call original directly

else: # Else wrapper while debugging

def onCall(*args):

for (ix, low, high) in argchecks:

if args[ix] < low or args[ix] > high:

errmsg = 'Argument %s not in %s..%s' % (ix, low, high)

raise TypeError(errmsg)

return func(*args)

return onCall

return onDecorator

As is, this code is mostly a rehash of the coding patterns we explored earlier: we use decorator arguments, nested scopes for state retention, and so on.

We also use nested def statements to ensure that this works for both simple functions and methods, as we learned earlier. When used for a class’s method, onCall receives the subject class’s instance in the first item in *args and passes this along to self in the original method function; argument numbers in range tests start at 1 in this case, not 0.

New here, notice this code’s use of the __debug__ built-in variable—Python sets this to True, unless it’s being run with the -O optimize command-line flag (e.g., python -O main.py). When __debug__ is False, the decorator returns the original function unchanged, to avoid extra later calls and their associated performance penalty. In other words, the decorator automatically removes its augmentation logic when -O is used, without requiring you to physically remove the decoration lines in your code.
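The flag's effect on __debug__ can be verified from a script by launching the same one-liner as a subprocess with and without -O (using sys.executable to locate the running Python):

```python
import subprocess, sys

# __debug__ reflects the -O switch of the launching command line:
# True in a normal run, False under "python -O"
normal = subprocess.check_output([sys.executable, '-c', 'print(__debug__)'])
optimized = subprocess.check_output([sys.executable, '-O', '-c', 'print(__debug__)'])
print(normal.decode().strip(), optimized.decode().strip())   # True False
```

Because __debug__ is fixed when the interpreter starts, the decorator's if test runs once at decoration time, not per call.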

This first iteration solution is used as follows:

# File rangetest1_test.py

from __future__ import print_function # 2.X

from rangetest1 import rangetest

print(__debug__) # False if "python -O main.py"

@rangetest((1, 0, 120)) # persinfo = rangetest(...)(persinfo)

def persinfo(name, age): # age must be in 0..120

print('%s is %s years old' % (name, age))

@rangetest([0, 1, 12], [1, 1, 31], [2, 0, 2009])

def birthday(M, D, Y):

print('birthday = {0}/{1}/{2}'.format(M, D, Y))

class Person:

def __init__(self, name, job, pay):

self.job = job

self.pay = pay

@rangetest([1, 0.0, 1.0]) # giveRaise = rangetest(...)(giveRaise)

def giveRaise(self, percent): # Arg 0 is the self instance here

self.pay = int(self.pay * (1 + percent))

# Comment lines raise TypeError unless "python -O" used on shell command line

persinfo('Bob Smith', 45) # Really runs onCall(...) with state

#persinfo('Bob Smith', 200) # Or persinfo if -O cmd line argument

birthday(5, 31, 1963)

#birthday(5, 32, 1963)

sue = Person('Sue Jones', 'dev', 100000)

sue.giveRaise(.10) # Really runs onCall(self, .10)

print(sue.pay) # Or giveRaise(self, .10) if -O

#sue.giveRaise(1.10)

#print(sue.pay)

When run, valid calls in this code produce the following output (all the code in this section works the same under Python 2.X and 3.X, because function decorators are supported in both, we’re not using attribute delegation, and we use version-neutral exception construction and printing techniques):

C:\code> python rangetest1_test.py

True

Bob Smith is 45 years old

birthday = 5/31/1963

110000

Uncommenting any of the invalid calls causes a TypeError to be raised by the decorator. Here’s the result when the last two lines are allowed to run (as usual, I’ve omitted some of the error message text here to save space):

C:\code> python rangetest1_test.py

True

Bob Smith is 45 years old

birthday = 5/31/1963

110000

TypeError: Argument 1 not in 0.0..1.0

Running Python with its -O flag at a system command line will disable range testing, but also avoid the performance overhead of the wrapping layer—we wind up calling the original undecorated function directly. Assuming this is a debugging tool only, you can use this flag to optimize your program for production use:

C:\code> python -O rangetest1_test.py

False

Bob Smith is 45 years old

birthday = 5/31/1963

110000

231000

Generalizing for Keywords and Defaults, Too

The prior version illustrates the basics we need to employ, but it’s fairly limited—it supports validating arguments passed by position only, and it does not validate keyword arguments (in fact, it assumes that no keywords are passed in a way that makes argument position numbers incorrect). Additionally, it does nothing about arguments with defaults that may be omitted in a given call. That’s fine if all your arguments are passed by position and never defaulted, but less than ideal in a general tool. Python supports much more flexible argument-passing modes, which we’re not yet addressing.

The mutation of our example shown next does better. By matching the wrapped function’s expected arguments against the actual arguments passed in a call, it supports range validations for arguments passed by either position or keyword name, and it skips testing for default arguments omitted in the call. In short, arguments to be validated are specified by keyword arguments to the decorator, which later steps through both the *pargs positionals tuple and the **kargs keywords dictionary to validate.

"""

File rangetest.py: function decorator that performs range-test

validation for arguments passed to any function or method.

Arguments are specified by keyword to the decorator. In the actual

call, arguments may be passed by position or keyword, and defaults

may be omitted. See rangetest_test.py for example use cases.

"""

trace = True

def rangetest(**argchecks): # Validate ranges for both+defaults

def onDecorator(func): # onCall remembers func and argchecks

if not __debug__: # True if "python -O main.py args..."

return func # Wrap if debugging; else use original

else:

code = func.__code__

allargs = code.co_varnames[:code.co_argcount]

funcname = func.__name__

def onCall(*pargs, **kargs):

# All pargs match first N expected args by position

# The rest must be in kargs or be omitted defaults

expected = list(allargs)

positionals = expected[:len(pargs)]

for (argname, (low, high)) in argchecks.items():

# For all args to be checked

if argname in kargs:

# Was passed by name

if kargs[argname] < low or kargs[argname] > high:

errmsg = '{0} argument "{1}" not in {2}..{3}'

errmsg = errmsg.format(funcname, argname, low, high)

raise TypeError(errmsg)

elif argname in positionals:

# Was passed by position

position = positionals.index(argname)

if pargs[position] < low or pargs[position] > high:

errmsg = '{0} argument "{1}" not in {2}..{3}'

errmsg = errmsg.format(funcname, argname, low, high)

raise TypeError(errmsg)

else:

# Assume not passed: default

if trace:

print('Argument "{0}" defaulted'.format(argname))

return func(*pargs, **kargs) # OK: run original call

return onCall

return onDecorator

The following test script shows how the decorator is used—arguments to be validated are given by keyword decorator arguments, and at actual calls we can pass by name or position and omit arguments with defaults even if they are to be validated otherwise:

"""

File rangetest_test.py (3.X + 2.X)

Comment lines raise TypeError unless "python -O" used on shell command line

"""

from __future__ import print_function # 2.X

from rangetest import rangetest

# Test functions, positional and keyword

@rangetest(age=(0, 120)) # persinfo = rangetest(...)(persinfo)

def persinfo(name, age):

print('%s is %s years old' % (name, age))

@rangetest(M=(1, 12), D=(1, 31), Y=(0, 2013))

def birthday(M, D, Y):

print('birthday = {0}/{1}/{2}'.format(M, D, Y))

persinfo('Bob', 40)

persinfo(age=40, name='Bob')

birthday(5, D=1, Y=1963)

#persinfo('Bob', 150)

#persinfo(age=150, name='Bob')

#birthday(5, D=40, Y=1963)

# Test methods, positional and keyword

class Person:

def __init__(self, name, job, pay):

self.job = job

self.pay = pay

# giveRaise = rangetest(...)(giveRaise)

@rangetest(percent=(0.0, 1.0)) # percent passed by name or position

def giveRaise(self, percent):

self.pay = int(self.pay * (1 + percent))

bob = Person('Bob Smith', 'dev', 100000)

sue = Person('Sue Jones', 'dev', 100000)

bob.giveRaise(.10)

sue.giveRaise(percent=.20)

print(bob.pay, sue.pay)

#bob.giveRaise(1.10)

#bob.giveRaise(percent=1.20)

# Test omitted defaults: skipped

@rangetest(a=(1, 10), b=(1, 10), c=(1, 10), d=(1, 10))

def omitargs(a, b=7, c=8, d=9):

print(a, b, c, d)

omitargs(1, 2, 3, 4)

omitargs(1, 2, 3)

omitargs(1, 2, 3, d=4)

omitargs(1, d=4)

omitargs(d=4, a=1)

omitargs(1, b=2, d=4)

omitargs(d=8, c=7, a=1)

#omitargs(1, 2, 3, 11) # Bad d

#omitargs(1, 2, 11) # Bad c

#omitargs(1, 2, 3, d=11) # Bad d

#omitargs(11, d=4) # Bad a

#omitargs(d=4, a=11) # Bad a

#omitargs(1, b=11, d=4) # Bad b

#omitargs(d=8, c=7, a=11) # Bad a

When this script is run, out-of-range arguments raise an exception as before, but arguments may be passed by either name or position, and omitted defaults are not validated. This code runs on both 2.X and 3.X. Trace its output and test this further on your own to experiment; it works as before, but its scope has been broadened:

C:\code> python rangetest_test.py

Bob is 40 years old

Bob is 40 years old

birthday = 5/1/1963

110000 120000

1 2 3 4

Argument "d" defaulted

1 2 3 9

1 2 3 4

Argument "c" defaulted

Argument "b" defaulted

1 7 8 4

Argument "c" defaulted

Argument "b" defaulted

1 7 8 4

Argument "c" defaulted

1 2 8 4

Argument "b" defaulted

1 7 7 8

On validation errors, we get an exception as before when one of the method test lines is uncommented, unless the -O command-line argument is passed to Python to disable the decorator’s logic:

TypeError: giveRaise argument "percent" not in 0.0..1.0

Implementation Details

This decorator’s code relies on both introspection APIs and subtle constraints of argument passing. To be fully general we could in principle try to mimic Python’s argument matching logic in its entirety to see which names have been passed in which modes, but that’s far too much complexity for our tool. It would be better if we could somehow match arguments passed by name against the set of all expected arguments’ names, in order to determine the positions at which arguments actually appear in a given call.

Function introspection

It turns out that the introspection API available on function objects and their associated code objects has exactly the tool we need. This API was briefly introduced in Chapter 19, but we’ll actually put it to use here. The set of expected argument names is simply the first N variable names attached to a function’s code object:

# In Python 3.X (and 2.6+ for compatibility)

>>> def func(a, b, c, e=True, f=None): # Args: three required, two defaults

x = 1 # Plus two more local variables

y = 2

>>> code = func.__code__ # Code object of function object

>>> code.co_nlocals

7

>>> code.co_varnames # All local variable names

('a', 'b', 'c', 'e', 'f', 'x', 'y')

>>> code.co_varnames[:code.co_argcount] # <== First N locals are expected args

('a', 'b', 'c', 'e', 'f')

And as usual, starred-argument names in the call proxy allow it to collect arbitrarily many arguments to be matched against the expected arguments so obtained from the function’s introspection API:

>>> def catcher(*pargs, **kargs): print('%s, %s' % (pargs, kargs))

>>> catcher(1, 2, 3, 4, 5)

(1, 2, 3, 4, 5), {}

>>> catcher(1, 2, c=3, d=4, e=5) # Arguments at calls

(1, 2), {'d': 4, 'e': 5, 'c': 3}

The function object’s API is available in older Pythons, but the func.__code__ attribute is named func.func_code in 2.5 and earlier; the newer __code__ attribute is also redundantly available in 2.6 and later for portability. Run a dir call on function and code objects for more details. Code like the following would support 2.5 and earlier, though the sys.version_info result itself is similarly nonportable—it’s a named tuple in recent Pythons, but we can use offsets on newer and older Pythons alike:

>>> import sys # For backward compatibility

>>> tuple(sys.version_info) # [0] is major release number

(3, 3, 0, 'final', 0)

>>> code = func.__code__ if sys.version_info[0] == 3 else func.func_code

Argument assumptions

Given the decorated function’s set of expected argument names, the solution relies upon two constraints on argument passing order imposed by Python (these still hold true in both 2.X and 3.X current releases):

§ At the call, all positional arguments appear before all keyword arguments.

§ In the def, all nondefault arguments appear before all default arguments.

That is, a nonkeyword argument cannot generally follow a keyword argument at a call, and a nondefault argument cannot follow a default argument at a definition. All “name=value” syntax must appear after any simple “name” in both places. As we’ve also learned, Python matches argument values passed by position to argument names in function headers from left to right, such that these values always match the leftmost names in headers. Keywords match by name instead, and a given argument can receive only one value.
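Both ordering rules are enforced by Python's compiler itself, the first at calls and the second at def statements; a quick check with compile on two illustrative snippets (f and g are hypothetical names here) confirms that violations are rejected as syntax errors before any call occurs:

```python
# Rule 1: at a call, all positionals must precede all keywords
try:
    compile("f(a=1, 2)", '<test>', 'eval')
except SyntaxError as e:
    print('call rule violated:', e.msg)

# Rule 2: in a def, all nondefault arguments must precede all defaults
try:
    compile("def g(a=1, b): pass", '<test>', 'exec')
except SyntaxError as e:
    print('def rule violated:', e.msg)
```

Because both violations fail at compile time, the decorator can safely assume any call it intercepts already obeys these orderings.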

To simplify our work, we can also make the assumption that a call is valid in general—that is, that all arguments either will receive values (by name or position), or will be omitted intentionally to pick up defaults. This assumption won’t necessarily hold, because the function has not yet actually been called when the wrapper logic tests validity—the call may still fail later when invoked by the wrapper layer, due to incorrect argument passing. As long as that doesn’t cause the wrapper to fail any more badly, though, we can finesse the validity of the call. This helps, because validating calls before they are actually made would require us to emulate Python’s argument-matching algorithm in full—again, too complex a procedure for our tool.

Matching algorithm

Now, given these constraints and assumptions, we can allow for both keywords and omitted default arguments in the call with this algorithm. When a call is intercepted, we can make the following assumptions and deductions:

1. Let N be the number of passed positional arguments, obtained from the length of the *pargs tuple.

2. All N positional arguments in *pargs must match the first N expected arguments obtained from the function’s code object. This is true per Python’s call ordering rules, outlined earlier, since all positionals precede all keywords in a call.

3. To obtain the names of arguments actually passed by position, we can slice the list of all expected arguments up to the length N of the *pargs passed positionals tuple.

4. Any arguments after the first N expected arguments either were passed by keyword or were defaulted by omission at the call.

5. For each argument name to be validated by the decorator:

a. If the name is in **kargs, it was passed by name—indexing **kargs gives its passed value.

b. If the name is in the first N expected arguments, it was passed by position—its relative position in the expected list gives its relative position in *pargs.

c. Otherwise, we can assume it was omitted in the call and defaulted, and need not be checked.

In other words, we can skip tests for arguments that were omitted in a call by assuming that the first N actually passed positional arguments in *pargs must match the first N argument names in the list of all expected arguments, and that any others must either have been passed by keyword and thus be in **kargs, or have been defaulted. Under this scheme, the decorator will simply skip any argument to be checked that was omitted between the rightmost positional argument and the leftmost keyword argument; between keyword arguments; or after the rightmost positional in general. Trace through the decorator and its test script to see how this is realized in code.
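The deductions above can also be packaged as a standalone helper to experiment with; classify_args is a hypothetical name coined here, but its logic is the same matching scheme that onCall applies:

```python
def classify_args(func, pargs, kargs):
    # For a hypothetical call func(*pargs, **kargs), classify each of the
    # function's expected arguments as passed by position, by keyword,
    # or omitted (defaulted), using the deduction steps in the text
    code = func.__code__
    allargs = code.co_varnames[:code.co_argcount]    # expected argument names
    positionals = list(allargs)[:len(pargs)]         # first N match by position
    modes = {}
    for name in allargs:
        if name in kargs:
            modes[name] = 'keyword'                  # step 5a
        elif name in positionals:
            modes[name] = 'position'                 # step 5b
        else:
            modes[name] = 'defaulted'                # step 5c
    return modes

def omitargs(a, b=7, c=8, d=9): pass

# a and b by position, d by keyword, c omitted and hence defaulted
print(classify_args(omitargs, (1, 2), dict(d=4)))
```

Feeding this helper the same call patterns as rangetest_test.py shows exactly which arguments the decorator would test and which it would skip.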

Open Issues

Although our range-testing tool works as planned, three caveats remain—it doesn’t detect invalid calls, doesn’t handle some arbitrary-argument signatures, and doesn’t fully support nesting. Improvements may require extension or altogether different approaches. Here’s a quick rundown of the issues.

Invalid calls

First, as mentioned earlier, calls to the original function that are not valid still fail in our final decorator. The following both trigger exceptions, for example:

omitargs()

omitargs(d=8, c=7, b=6)

These only fail, though, where we try to invoke the original function, at the end of the wrapper. While we could try to imitate Python’s argument matching to avoid this, there’s not much reason to do so—since the call would fail at this point anyhow, we might as well let Python’s own argument-matching logic detect the problem for us.

Arbitrary arguments

Second, although our final version handles positional arguments, keyword arguments, and omitted defaults, it still doesn’t do anything explicit about *pargs and **kargs starred-argument names that may be used in a decorated function that accepts arbitrarily many arguments itself. We probably don’t need to care for our purposes, though:

§ If an extra keyword argument is passed, its name will show up in **kargs and can be tested normally if mentioned to the decorator.

§ If an extra keyword argument is not passed, its name won’t be in either **kargs or the sliced expected positionals list, and it will thus not be checked—it is treated as though it were defaulted, even though it is really an optional extra argument.

§ If an extra positional argument is passed, there’s no way to reference it in the decorator anyhow—its name won’t be in either **kargs or the sliced expected arguments list, so it will simply be skipped. Because such arguments are not listed in the function’s definition, there’s no way to map a name given to the decorator back to an expected relative position.

In other words, as it is, the code supports testing arbitrary keyword arguments by name, but not arbitrary positionals that are unnamed and hence have no set position in the function’s argument signature. In terms of the function object’s API, here’s the effect of these tools in decorated functions:

>>> def func(*pargs, **kargs): pass

>>> code = func.__code__

>>> code.co_nlocals, code.co_varnames

(2, ('pargs', 'kargs'))

>>> code.co_argcount, code.co_varnames[:code.co_argcount]

(0, ())

>>> def func(a, b, *pargs, **kargs): pass

>>> code = func.__code__

>>> code.co_argcount, code.co_varnames[:code.co_argcount]

(2, ('a', 'b'))

Because starred-argument names show up as locals but not as expected arguments, they won’t be a factor in our matching algorithm—names preceding them in function headers can be validated as usual, but not any extra positional arguments passed. In principle, we could extend the decorator’s interface to support *pargs in the decorated function, too, for the rare cases where this might be useful (e.g., a special argument name with a test to apply to all arguments in the wrapper’s *pargs beyond the length of the expected arguments list), but we’ll pass on such an extension here.

Decorator nesting

Finally, and perhaps most subtly, this code’s approach does not fully support use of decorator nesting to combine steps. Because it analyzes arguments using names in function definitions, and the names of the call proxy function returned by a nested decoration won’t correspond to argument names in either the original function or decorator arguments, it does not fully support use in nested mode.

Technically, when nested, only the most deeply nested appearance’s validations are run in full; all other nesting levels run tests on arguments passed by keyword only. Trace the code to see why; because the onCall proxy’s call signature expects no named positional arguments, any to-be-validated arguments passed to it by position are treated as if they were omitted and hence defaulted, and are thus skipped.

This may be inherent in this tool’s approach—proxies change the argument name signatures at their levels, making it impossible to directly map names in decorator arguments to positions in passed argument sequences. When proxies are present, argument names ultimately apply to keywords only; by contrast, the first-cut solution’s argument positions may support proxies better, but do not fully support keywords.
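The root cause is visible in the proxy's own code object. Once a function is wrapped by any proxy-based decorator (anydecorator below is a hypothetical stand-in), the callable that an outer decorator introspects expects zero named arguments, so every positional it receives looks like an omitted default:

```python
def anydecorator(func):
    def onCall(*pargs, **kargs):          # generic proxy signature
        return func(*pargs, **kargs)
    return onCall

def giveRaise(self, percent): pass        # original: two named arguments

wrapped = anydecorator(giveRaise)
code = wrapped.__code__
# The proxy has no named positional arguments of its own
print(code.co_argcount, code.co_varnames[:code.co_argcount])   # 0 ()
```

An outer rangetest applied to wrapped would slice an empty expected-arguments list, so any argument passed by position could never be found there and would be silently skipped as "defaulted".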

In lieu of this nesting capability, we’ll generalize this decorator to support multiple types of validations in a single decoration in an end-of-chapter quiz solution, which also gives examples of the nesting limitation in action. Since we’ve already neared the space allocation for this example, though, if you care about these or any other further improvements, you’ve officially crossed over into the realm of suggested exercises.

Decorator Arguments Versus Function Annotations

Interestingly, the function annotation feature introduced in Python 3.X (3.0 and later) could provide an alternative to the decorator arguments used by our example to specify range tests. As we learned in Chapter 19, annotations allow us to associate expressions with arguments and return values, by coding them in the def header line itself; Python collects annotations in a dictionary and attaches it to the annotated function.

We could use this in our example to code range limits in the header line, instead of in decorator arguments. We would still need a function decorator to wrap the function in order to intercept later calls, but we would essentially trade decorator argument syntax:

@rangetest(a=(1, 5), c=(0.0, 1.0))

def func(a, b, c): # func = rangetest(...)(func)

print(a + b + c)

for annotation syntax like this:

@rangetest

def func(a:(1, 5), b, c:(0.0, 1.0)):

print(a + b + c)

That is, the range constraints would be moved into the function itself, instead of being coded externally. The following script illustrates the structure of the resulting decorators under both schemes, in incomplete skeleton code for brevity. The decorator arguments code pattern is that of our complete solution shown earlier; the annotation alternative requires one less level of nesting, because it doesn’t need to retain decorator arguments as state:

# Using decorator arguments (3.X + 2.X)

def rangetest(**argchecks):
    def onDecorator(func):
        def onCall(*pargs, **kargs):
            print(argchecks)
            for check in argchecks:
                pass                         # Add validation code here
            return func(*pargs, **kargs)
        return onCall
    return onDecorator

@rangetest(a=(1, 5), c=(0.0, 1.0))
def func(a, b, c):                           # func = rangetest(...)(func)
    print(a + b + c)

func(1, 2, c=3)                              # Runs onCall, argchecks in scope

# Using function annotations (3.X only)

def rangetest(func):
    def onCall(*pargs, **kargs):
        argchecks = func.__annotations__
        print(argchecks)
        for check in argchecks:
            pass                             # Add validation code here
        return func(*pargs, **kargs)
    return onCall

@rangetest
def func(a:(1, 5), b, c:(0.0, 1.0)):         # func = rangetest(func)
    print(a + b + c)

func(1, 2, c=3)                              # Runs onCall, annotations on func

When run, both schemes have access to the same validation test information, but in different forms—the decorator argument version’s information is retained in an argument in an enclosing scope, and the annotation version’s information is retained in an attribute of the function itself. In 3.X only, due to the use of function annotations:

c:\code> py -3 decoargs-vs-annotation.py
{'a': (1, 5), 'c': (0.0, 1.0)}
6
{'a': (1, 5), 'c': (0.0, 1.0)}
6

I’ll leave fleshing out the rest of the annotation-based version as a suggested exercise; its code would be identical to that of our complete solution shown earlier, because range-test information is simply on the function instead of in an enclosing scope. Really, all this buys us is a different user interface for our tool—it will still need to match argument names against expected argument names to obtain relative positions as before.

In fact, using annotation instead of decorator arguments in this example actually limits its utility. For one thing, annotation only works under Python 3.X, so 2.X is no longer supported; function decorators with arguments, on the other hand, work in both versions.

More importantly, by moving the validation specifications into the def header, we essentially commit the function to a single role—since annotation allows us to code only one expression per argument, it can have only one purpose. For instance, we cannot use range-test annotations for any other role.

By contrast, because decorator arguments are coded outside the function itself, they are both easier to remove and more general—the code of the function itself does not imply a single decoration purpose. Crucially, by nesting decorators with arguments, we can apply multiple augmentation steps to the same function; annotation directly supports only one. With decorator arguments, the function itself also retains a simpler, normal appearance.

Still, if you have a single purpose in mind, and you can commit to supporting 3.X only, the choice between annotation and decorator arguments is largely stylistic and subjective. As is so often true in life, one person’s decoration or annotation may well be another’s syntactic clutter!

Other Applications: Type Testing (If You Insist!)

The coding pattern we’ve arrived at for processing arguments in decorators could be applied in other contexts. Checking argument data types at development time, for example, is a straightforward extension:

def typetest(**argchecks):
    def onDecorator(func):
        ...
        def onCall(*pargs, **kargs):
            positionals = list(allargs)[:len(pargs)]
            for (argname, type) in argchecks.items():
                if argname in kargs:
                    if not isinstance(kargs[argname], type):
                        ...
                        raise TypeError(errmsg)
                elif argname in positionals:
                    position = positionals.index(argname)
                    if not isinstance(pargs[position], type):
                        ...
                        raise TypeError(errmsg)
                else:
                    ...                      # Assume not passed: default
            return func(*pargs, **kargs)
        return onCall
    return onDecorator

@typetest(a=int, c=float)
def func(a, b, c, d):                        # func = typetest(...)(func)
    ...

func(1, 2, 3.0, 4)                           # OK
func('spam', 2, 99, 4)                       # Triggers exception correctly

Using function annotations instead of decorator arguments for such a decorator, as described in the prior section, would make this look even more like type declarations in other languages:

@typetest
def func(a: int, b, c: float, d):            # func = typetest(func)
    ...                                      # Gasp!...

But we’re getting dangerously close to triggering a “flag on the play” here. As you should have learned in this book, this particular role is generally a bad idea in working code, and, much like private declarations, is not at all Pythonic (and is often a symptom of an ex-C++ programmer’s first attempts to use Python).

Type testing restricts your function to work on specific types only, instead of allowing it to operate on any types with compatible interfaces. In effect, it limits your code and breaks its flexibility. On the other hand, every rule has exceptions; type checking may come in handy in isolated cases while debugging and when interfacing with code written in more restrictive languages, such as C++.
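For the debugging exceptions, one hedged compromise is to test against a numeric category rather than a concrete type; the standard library's numbers module provides abstract base classes for this. A minimal sketch (halve is an illustrative name, not one of this chapter's examples):

```python
from numbers import Number                   # Abstract category, not a concrete type

def halve(x):
    if not isinstance(x, Number):            # Accepts int, float, Decimal, Fraction...
        raise TypeError('halve needs a number, got %r' % x)
    return x / 2

print(halve(10))                             # 5.0 (3.X true division)
print(halve(2.5))                            # 1.25
```

Testing against a category keeps more of the flexibility that a test against a literal int or float would throw away, though duck typing remains the more Pythonic default.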

Still, this general pattern of argument processing might also be applicable in a variety of less controversial roles. We might even generalize further by passing in a test function, much as we did to add Public decorations earlier; a single copy of this sort of code would then suffice for both range and type testing, and perhaps other similar goals. In fact, we will generalize this way in the end-of-chapter quiz coming up, so we’ll leave this extension as a cliffhanger here.

Chapter Summary

In this chapter, we explored decorators—both the function and class varieties. As we learned, decorators are a way to insert code to be run automatically when a function or class is defined. When a decorator is used, Python rebinds a function or class name to the callable object it returns. This hook allows us to manage functions and classes themselves, or later calls to them—by adding a layer of wrapper logic to catch later calls, we can augment both function calls and instance interfaces. As we also saw, manager functions and manual name rebinding can achieve the same effect, but decorators provide a more explicit and uniform solution.

As we also learned, class decorators can be used to manage classes themselves, rather than just their instances. Because this functionality overlaps with metaclasses—the topic of the next and final technical chapter— you’ll have to read ahead for the conclusion to this story, and that of this book at large. First, though, let’s work through the following quiz. Because this chapter was mostly focused on its examples, its quiz will ask you to modify some of its code in order to review. You can find the original versions’ code in the book’s examples package (see the preface for access pointers). If you’re pressed for time, study the modifications listed in the answers instead—programming is as much about reading code as writing it.

Test Your Knowledge: Quiz

1. Method decorators: As mentioned in one of this chapter’s notes, the timerdeco2.py module’s timer function decorator with decorator arguments that we wrote in the section Adding Decorator Arguments can be applied only to simple functions, because it uses a nested class with a __call__ operator overloading method to catch calls. This structure does not work for a class’s methods because the decorator instance is passed to self, not the subject class instance.

Rewrite this decorator so that it can be applied to both simple functions and methods in classes, and test it on both functions and methods. (Hint: see the section Class Blunders I: Decorating Methods for pointers.) Note that you will probably need to use function object attributes to keep track of total time, since you won’t have a nested class for state retention and can’t access nonlocals from outside the decorator code. As an added bonus, this makes your decorator usable on both Python 3.X and 2.X.

2. Class decorators: The Public/Private class decorators we wrote in module access2.py in this chapter’s first case study example will add performance costs to every attribute fetch in a decorated class. Although we could simply delete the @ decoration line to gain speed, we could also augment the decorator itself to check the __debug__ switch and perform no wrapping at all when the –O Python flag is passed on the command line—just as we did for the argument range-test decorators. That way, we can speed our program without changing its source, via command-line arguments (python –O main.py...). While we’re at it, we could also use one of the mix-in superclass techniques we studied to catch a few built-in operations in Python 3.X too. Code and test these two extensions.

3. Generalized argument validations: The function and method decorator we wrote in rangetest.py checks that passed arguments are in a valid range, but we also saw that the same pattern could apply to similar goals such as argument type testing, and possibly more. Generalize the range tester so that its single code base can be used for multiple argument validations. Passed-in functions may be the simplest solution given the coding structure here, though in more OOP-based contexts, subclasses that provide expected methods can often provide similar generalization routes as well.

Test Your Knowledge: Answers

1. Here’s one way to code the first question’s solution, and its output (though some methods may run too fast to register reported time). The trick lies in replacing nested classes with nested functions, so the self argument is not the decorator’s instance, and assigning the total time to the decorator function itself so it can be fetched later through the original rebound name (see the section “State Information Retention Options” of this chapter for details—functions support arbitrary attribute attachment, and the function name is an enclosing scope reference in this context). If you wish to expand this further, it might be useful to also record the best (minimum) call time in addition to the total time, as we did in Chapter 21’s timer examples.
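The core trick, keeping state as attributes on the wrapper function itself, can be sketched in isolation first (counter and double are illustrative names only, not part of the solution's files):

```python
def counter(func):                           # Nested functions, not a class
    def onCall(*args, **kargs):
        onCall.calls += 1                    # State lives on the wrapper function
        return func(*args, **kargs)
    onCall.calls = 0                         # Works in both 3.X and 2.X
    return onCall

@counter
def double(x):                               # double = counter(double)
    return x * 2

double(3)
double(4)
print(double.calls)                          # 2
```

Because the rebound name double references the wrapper, the attribute is reachable later as double.calls; the full solution below applies the same pattern to accumulate elapsed time.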

"""
File timerdeco.py (3.X + 2.X)
Call timer decorator for both functions and methods.
"""
import time

def timer(label='', trace=True):             # On decorator args: retain args
    def onDecorator(func):                   # On @: retain decorated func
        def onCall(*args, **kargs):          # On calls: call original
            start = time.clock()             # State is scopes + func attr
            result = func(*args, **kargs)
            elapsed = time.clock() - start
            onCall.alltime += elapsed
            if trace:
                format = '%s%s: %.5f, %.5f'
                values = (label, func.__name__, elapsed, onCall.alltime)
                print(format % values)
            return result
        onCall.alltime = 0
        return onCall
    return onDecorator

I’ve coded tests in a separate file here to allow the decorator to be easily reused:

"""
File timerdeco-test.py
"""
from __future__ import print_function       # 2.X
from timerdeco import timer
import sys
force = list if sys.version_info[0] == 3 else (lambda X: X)

print('---------------------------------------------------')
# Test on functions

@timer(trace=True, label='[CCC]==>')
def listcomp(N):                             # Like listcomp = timer(...)(listcomp)
    return [x * 2 for x in range(N)]         # listcomp(...) triggers onCall

@timer('[MMM]==>')
def mapcall(N):
    return force(map((lambda x: x * 2), range(N)))   # list() for 3.X views

for func in (listcomp, mapcall):
    result = func(5)                         # Time for this call, all calls, return value
    func(5000000)
    print(result)
    print('allTime = %s\n' % func.alltime)   # Total time for all calls

print('---------------------------------------------------')
# Test on methods

class Person:
    def __init__(self, name, pay):
        self.name = name
        self.pay = pay

    @timer()
    def giveRaise(self, percent):            # giveRaise = timer()(giveRaise)
        self.pay *= (1.0 + percent)          # tracer remembers giveRaise

    @timer(label='**')
    def lastName(self):                      # lastName = timer(...)(lastName)
        return self.name.split()[-1]         # alltime per class, not instance

bob = Person('Bob Smith', 50000)
sue = Person('Sue Jones', 100000)
bob.giveRaise(.10)
sue.giveRaise(.20)                           # Runs onCall(sue, .20)
print(int(bob.pay), int(sue.pay))
print(bob.lastName(), sue.lastName())        # Runs onCall(bob), remembers lastName
print('%.5f %.5f' % (Person.giveRaise.alltime, Person.lastName.alltime))

If all goes according to plan, you’ll see the following output in both Python 3.X and 2.X, albeit with timing results that will vary per Python and machine:

c:\code> py -3 timerdeco-test.py
---------------------------------------------------
[CCC]==>listcomp: 0.00001, 0.00001
[CCC]==>listcomp: 0.57930, 0.57930
[0, 2, 4, 6, 8]
allTime = 0.5793010457092784

[MMM]==>mapcall: 0.00002, 0.00002
[MMM]==>mapcall: 1.08609, 1.08611
[0, 2, 4, 6, 8]
allTime = 1.0861149923442373

---------------------------------------------------
giveRaise: 0.00001, 0.00001
giveRaise: 0.00000, 0.00001
55000 120000
**lastName: 0.00001, 0.00001
**lastName: 0.00000, 0.00001
Smith Jones
0.00001 0.00001

2. The following three files satisfy the second question. The first gives the decorator—it’s been augmented to return the original class in optimized mode (–O), so attribute accesses don’t incur a speed hit. Mostly, it just adds the debug mode test statements and indents the class further to the right:

"""
File access.py (3.X + 2.X)
Class decorator with Private and Public attribute declarations.
Controls external access to attributes stored on an instance, or
inherited by it from its classes in any fashion.

Private declares attribute names that cannot be fetched or assigned
outside the decorated class, and Public declares all the names that can.

Caveats: in 3.X catches built-ins coded in BuiltinsMixin only (expand me);
as coded, Public may be less useful than Private for operator overloading.
"""
from access_builtins import BuiltinsMixin    # A partial set!

traceMe = False
def trace(*args):
    if traceMe: print('[' + ' '.join(map(str, args)) + ']')

def accessControl(failIf):
    def onDecorator(aClass):
        if not __debug__:
            return aClass
        else:
            class onInstance(BuiltinsMixin):
                def __init__(self, *args, **kargs):
                    self.__wrapped = aClass(*args, **kargs)

                def __getattr__(self, attr):
                    trace('get:', attr)
                    if failIf(attr):
                        raise TypeError('private attribute fetch: ' + attr)
                    else:
                        return getattr(self.__wrapped, attr)

                def __setattr__(self, attr, value):
                    trace('set:', attr, value)
                    if attr == '_onInstance__wrapped':
                        self.__dict__[attr] = value
                    elif failIf(attr):
                        raise TypeError('private attribute change: ' + attr)
                    else:
                        setattr(self.__wrapped, attr, value)
            return onInstance
    return onDecorator

def Private(*attributes):
    return accessControl(failIf=(lambda attr: attr in attributes))

def Public(*attributes):
    return accessControl(failIf=(lambda attr: attr not in attributes))

I’ve also used one of our mix-in techniques to add some operator overloading method redefinitions to the wrapper class, so that in 3.X it correctly delegates built-in operations to subject classes that use these methods. As coded, the proxy is a default classic class in 2.X that routes these through __getattr__ already, but in 3.X is a new-style class that does not. The mix-in used here requires listing such methods in Public decorators; see earlier for alternatives that do not (but that also do not allow built-ins to be made private), and expand this class as needed:
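The 3.X behavior that motivates the mix-in is easy to demonstrate in isolation. In a minimal sketch (not part of the book's files), explicit attribute fetches route through __getattr__, but built-in operations skip it on new-style classes:

```python
class Proxy:                                 # A new-style class in 3.X
    def __init__(self, obj):
        self._obj = obj
    def __getattr__(self, name):             # Run for explicit name fetches only
        return getattr(self._obj, name)

p = Proxy([1, 2, 3])
p.append(4)                                  # Explicit fetch: routed to the list
print(p._obj)                                # [1, 2, 3, 4]
try:
    p[0]                                     # Built-in operation: __getattr__ not run in 3.X
except TypeError as exc:
    print(exc)                               # Proxy defines no __getitem__ of its own
```

Because 3.X looks up operator overloading methods on the class, not the instance, a proxy must redefine each such method itself, which is exactly what the following mix-in does.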

"""
File access_builtins.py (from access2_builtins2b.py)
Route some built-in operations back to proxy class __getattr__, so they
work the same in 3.X as direct by-name calls and 2.X's default classic classes.
Expand me as needed to include other __X__ names used by proxied objects.
"""
class BuiltinsMixin:
    def reroute(self, attr, *args, **kargs):
        return self.__class__.__getattr__(self, attr)(*args, **kargs)

    def __add__(self, other):
        return self.reroute('__add__', other)

    def __str__(self):
        return self.reroute('__str__')

    def __getitem__(self, index):
        return self.reroute('__getitem__', index)

    def __call__(self, *args, **kargs):
        return self.reroute('__call__', *args, **kargs)

    # Plus any others used by wrapped objects in 3.X only

Here too I split the self-test code off to a separate file, so the decorator could be imported elsewhere without triggering the tests, and without requiring a __name__ test and indenting:

"""
File: access-test.py
Test code: separate file to allow decorator reuse.
"""
import sys
from access import Private, Public

print('---------------------------------------------------------')
# Test 1: names are public if not private

@Private('age')                              # Person = Private('age')(Person)
class Person:                                # Person = onInstance with state
    def __init__(self, name, age):
        self.name = name
        self.age = age                       # Inside accesses run normally
    def __add__(self, N):
        self.age += N                        # Built-ins caught by mix-in in 3.X
    def __str__(self):
        return '%s: %s' % (self.name, self.age)

X = Person('Bob', 40)
print(X.name)                                # Outside accesses validated
X.name = 'Sue'
print(X.name)
X + 10
print(X)
try: t = X.age                               # FAILS unless "python -O"
except: print(sys.exc_info()[1])
try: X.age = 999                             # Ditto
except: print(sys.exc_info()[1])

print('---------------------------------------------------------')
# Test 2: names are private if not public
# Operators must be non-Private or Public if BuiltinsMixin is used

@Public('name', '__add__', '__str__', '__coerce__')
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age
    def __add__(self, N):
        self.age += N                        # Built-ins caught by mix-in in 3.X
    def __str__(self):
        return '%s: %s' % (self.name, self.age)

X = Person('bob', 40)                        # X is an onInstance
print(X.name)                                # onInstance embeds Person
X.name = 'sue'
print(X.name)
X + 10
print(X)
try: t = X.age                               # FAILS unless "python -O"
except: print(sys.exc_info()[1])
try: X.age = 999                             # Ditto
except: print(sys.exc_info()[1])

Finally, if all works as expected, this test’s output is as follows in both Python 3.X and 2.X—the same code applied to the same class decorated with Private and then with Public:

c:\code> py -3 access-test.py
---------------------------------------------------------
Bob
Sue
Sue: 50
private attribute fetch: age
private attribute change: age
---------------------------------------------------------
bob
sue
sue: 50
private attribute fetch: age
private attribute change: age

c:\code> py -3 -O access-test.py             # Suppresses the four access error messages

3. Here’s a generalized argument validator for you to study on your own. It uses a passed-in validation function, to which it passes the test’s criteria value coded for the argument in the decorator. This handles ranges, type tests, value testers, and almost anything else you can dream up in an expressive language like Python. I’ve also refactored the code a bit to remove some redundancy, and automated test failure processing. See this module’s self-test for usage examples and expected output. Per this example’s caveats described earlier, this decorator doesn’t fully work in nested mode as is—only the most deeply nested validation is run for positional arguments—but its arbitrary valuetest can be used to combine differing types of tests in a single decoration (though the amount of code needed in this mode may negate much of its benefits over a simple assert!).

"""
File argtest.py: (3.X + 2.X) function decorator that performs
arbitrary passed-in validations for arguments passed to any
function or method.  Range and type tests are two example uses;
valuetest handles more arbitrary tests on an argument's value.

Arguments are specified by keyword to the decorator.  In the actual
call, arguments may be passed by position or keyword, and defaults
may be omitted.  See self-test code below for example use cases.

Caveats: doesn't fully support nesting because call proxy args
differ; doesn't validate extra args passed to a decoratee's *args;
and may be no easier than an assert except for canned use cases.
"""
trace = False


def rangetest(**argchecks):
    return argtest(argchecks, lambda arg, vals: arg < vals[0] or arg > vals[1])

def typetest(**argchecks):
    return argtest(argchecks, lambda arg, type: not isinstance(arg, type))

def valuetest(**argchecks):
    return argtest(argchecks, lambda arg, tester: not tester(arg))


def argtest(argchecks, failif):              # Validate args per failif + criteria
    def onDecorator(func):                   # onCall retains func, argchecks, failif
        if not __debug__:                    # No-op if "python -O main.py args..."
            return func
        else:
            code = func.__code__
            expected = list(code.co_varnames[:code.co_argcount])

            def onError(argname, criteria):
                errfmt = '%s argument "%s" not %s'
                raise TypeError(errfmt % (func.__name__, argname, criteria))

            def onCall(*pargs, **kargs):
                positionals = expected[:len(pargs)]
                for (argname, criteria) in argchecks.items():    # For all to test
                    if argname in kargs:                         # Passed by name
                        if failif(kargs[argname], criteria):
                            onError(argname, criteria)
                    elif argname in positionals:                 # Passed by position
                        position = positionals.index(argname)
                        if failif(pargs[position], criteria):
                            onError(argname, criteria)
                    else:                                        # Not passed: default
                        if trace:
                            print('Argument "%s" defaulted' % argname)
                return func(*pargs, **kargs)                     # OK: run original call
            return onCall
    return onDecorator

130.

131.

if __name__ == '__main__':
    import sys
    def fails(test):
        try: result = test()
        except: print('[%s]' % sys.exc_info()[1])
        else: print('?%s?' % result)

    print('--------------------------------------------------------------------')
    # Canned use cases: ranges, types

    @rangetest(m=(1, 12), d=(1, 31), y=(1900, 2013))
    def date(m, d, y):
        print('date = %s/%s/%s' % (m, d, y))

    date(1, 2, 1960)
    fails(lambda: date(1, 2, 3))

    @typetest(a=int, c=float)
    def sum(a, b, c, d):
        print(a + b + c + d)

    sum(1, 2, 3.0, 4)
    sum(1, d=4, b=2, c=3.0)
    fails(lambda: sum('spam', 2, 99, 4))
    fails(lambda: sum(1, d=4, b=2, c=99))

    print('--------------------------------------------------------------------')
    # Arbitrary/mixed tests

    @valuetest(word1=str.islower, word2=(lambda x: x[0].isupper()))
    def msg(word1='mighty', word2='Larch', label='The'):
        print('%s %s %s' % (label, word1, word2))

    msg()                                    # word1 and word2 defaulted
    msg('majestic', 'Moose')
    fails(lambda: msg('Giant', 'Redwood'))
    fails(lambda: msg('great', word2='elm'))

    print('--------------------------------------------------------------------')
    # Manual type and range tests

    @valuetest(A=lambda x: isinstance(x, int), B=lambda x: x > 0 and x < 10)
    def manual(A, B):
        print(A + B)

    manual(100, 2)
    fails(lambda: manual(1.99, 2))
    fails(lambda: manual(100, 20))

    print('--------------------------------------------------------------------')
    # Nesting: runs both, by nesting proxies on original.
    # Open issue: outer levels do not validate positionals due
    # to call proxy function's differing argument signature;
    # when trace=True, in all but the last of these "X" is
    # classified as defaulted due to the proxy's signature.

    @rangetest(X=(1, 10))
    @typetest(Z=str)                         # Only innermost validates positional args
    def nester(X, Y, Z):
        return('%s-%s-%s' % (X, Y, Z))

    print(nester(1, 2, 'spam'))              # Original function runs properly
    fails(lambda: nester(1, 2, 3))           # Nested typetest is run: positional
    fails(lambda: nester(1, 2, Z=3))         # Nested typetest is run: keyword
    fails(lambda: nester(0, 2, 'spam'))      # <== Outer rangetest not run: positional
    fails(lambda: nester(X=0, Y=2, Z='spam'))    # Outer rangetest is run: keyword

This module’s self-test output in both 3.X and 2.X follows (some 2.X object displays vary slightly); as usual, correlate it with the source for more insights.

c:\code> py -3 argtest.py
--------------------------------------------------------------------
date = 1/2/1960
[date argument "y" not (1900, 2013)]
10.0
10.0
[sum argument "a" not <class 'int'>]
[sum argument "c" not <class 'float'>]
--------------------------------------------------------------------
The mighty Larch
The majestic Moose
[msg argument "word1" not <method 'islower' of 'str' objects>]
[msg argument "word2" not <function <lambda> at 0x0000000002A096A8>]
--------------------------------------------------------------------
102
[manual argument "A" not <function <lambda> at 0x0000000002A09950>]
[manual argument "B" not <function <lambda> at 0x0000000002A09B70>]
--------------------------------------------------------------------
1-2-spam
[nester argument "Z" not <class 'str'>]
[nester argument "Z" not <class 'str'>]
?0-2-spam?
[onCall argument "X" not (1, 10)]

Finally, as we’ve learned, this decorator’s coding structure works for both functions and methods:

# File argtest_testmeth.py
from argtest import rangetest, typetest

class C:
    @rangetest(a=(1, 10))
    def meth1(self, a):
        return a * 1000

    @typetest(a=int)
    def meth2(self, a):
        return a * 1000

>>> from argtest_testmeth import C
>>> X = C()
>>> X.meth1(5)
5000
>>> X.meth1(20)
TypeError: meth1 argument "a" not (1, 10)
>>> X.meth2(20)
20000
>>> X.meth2(20.9)
TypeError: meth2 argument "a" not <class 'int'>