SEO For Dummies, 6th Edition (2016)

Part II. Building Search Engine-Friendly Sites

Chapter 10. Dirty Deeds, Done Dirt Cheap

In This Chapter

· Examining the principles of tricking search engines

· Understanding how you can hurt your site

· Looking at doorway pages, redirects, cloaking, and more

· Understanding how you may be penalized

Everyone wants to fool the search engines — and the search engines know it. That’s why search engine optimization is such a strange business — a hybrid of technology and, oh, I dunno … industrial espionage, perhaps? Search engines don’t want you to know exactly how they rank pages because if you did, you would know exactly how to trick them into giving you top positions.

Now for a bit of history. When this whole search engine business started out, search engines just wanted people to follow some basic guidelines — make the Web site readable, provide a <TITLE> tag, provide a few keywords related to the page’s subject matter, and so on — and then the search engines would take it from there.

What happened, though, is that Web sites started jostling for position. For example, although the KEYWORDS meta tag seemed like a great idea, so many people misused it (by repeating words and using words that weren’t related to the subject matter) that it eventually became irrelevant to search engines. Eventually, the major search engines stopped giving much weight to the tag or just ignored it altogether.

Search engines try to hide their methods as much as they can, but it sometimes becomes apparent what the search engines want, and at that point, people start trying to give it to them in a manner the search engines regard as manipulative. This chapter discusses which things you should avoid doing because you risk upsetting the search engines and getting penalized — potentially even getting booted from a search engine for life!

Tricking Search Engines

Before getting down to the nitty-gritty details about tricking search engines, I focus on two topics: why you need to understand the dangers of using dirty tricks and the overriding principles on which search engine tricks are based.

Deciding whether to trick

Should you use the tricks in this chapter, and if not, why not? You’ll hear several reasons for not using tricks. The first, ethics, is one I don’t belabor because I’m not sure that the argument behind this reason is very strong. You’ll hear from many people that the tricks in this chapter are unethical, that those who use them are cheating and are one step on the evolutionary ladder above pond scum (or one step below pond scum, depending on the commentator).

Self-righteousness is in ample supply on the Internet. Maybe these people are right, maybe not. I do know that many people who try such tricks also have great reasons for doing so and are not the Internet’s equivalent of Pol Pot or Attila the Hun. They’re simply trying to put their best foot forward in a difficult technical environment.

Many people have tried search engine tricks because they invested a lot of money in Web sites that turn out to be invisible to search engines. These folks can’t afford to abandon their sites and start again. (See Chapter 9 for a discussion of why search engines sometimes can’t read Web pages.) You can, and rightly so, point out that these folks can deal with the problem in other ways, but that just means the people involved are misinformed, not evil. The argument made by these tricksters might go something like this: Who gave search engines the right to set the rules, anyway?

One could argue that doing pretty much anything beyond the basics is cheating. Over the past few years, I’ve heard from hundreds of people who have put into action many of the ideas in this book, with great results. So if smart Webmasters armed with a little knowledge can push their Web sites up in the ranks above sites that may be more appropriate for a particular keyword phrase, yet are owned by folks with less knowledge … is that fair?

Also, consider that the search engines have actually (although unintentionally) encouraged the use of some dirty tricks. For example, the search engines don’t like people to buy links (see Chapter 17). Yet the search engines have encouraged the purchasing of links by rewarding sites that do it. Buying links worked! (And, some would argue, still works today.) So businessman X looks at competitor Y, sees how well Y’s Web site is ranking, and then discovers that Y got there by purchasing links. Well, what is businessman X supposed to think?

Remember: Ethics aside, the really good reason for avoiding egregious trickery is that it may have the opposite effect and harm your search engine position. A corollary to that reason is that other, legitimate ways exist to get a good search engine ranking. (Unfortunately, they’re often more complicated and time-consuming.)

Figuring out the tricks

The idea behind most search engine tricks is simple: to confuse the search engines into thinking that your site is more appropriate for certain keyword phrases than they would otherwise believe. You do this generally by showing the search engine something that the site visitor doesn’t see.

Search engines want to see what site visitors see, yet they know they can’t. It will be a long time before search engines are able to see and understand the images in a Web page, for instance. Right now, they can’t even read the text in images, although that could change at some point. (Recent patents suggest that this is something Google is working on now — but I bet it will still be years before Google tries to read all the text it finds in the average Web site’s images, if it ever does.) But to view and understand the images as a real person sees them? Britney Spears could well be president of the United States before that happens.

The search engine designers have started with this basic principle:

What the search engine sees should be what the user sees except for certain things it’s not interested in — images, for instance — and certain technical issues that are not important to the visitor (such as the DESCRIPTION meta tag, which <H> tag has been applied to a heading, and so on).

Remember: Here’s one other important principle: The text on the page should be there for the benefit of the site visitor, not the search engines.

Ideally, search engine designers want Web designers to act as though search engines don’t exist. (Of course, this is exactly what many Web designers have done and the reason that so many sites rank poorly in search engines!) Search engine designers want their programs to determine which pages are the most relevant for a particular search query. They want you — the Web designer — to focus on creating a site that serves your visitors’ needs and let search engines determine which site is most appropriate for which searcher. At least that was the original theory. Over the years, the search engines realized that total ignorance of SEO simply wasn’t going to happen. Now they provide basic SEO advice.

Still, what search engines definitely don’t want is for you to show one version of a page to visitors and another version to search engines because you feel that version is what the search engine will like most.

Do these tricks work?

Tricks do work, at least in some circumstances for some search engines. On the other hand, over time, search engines have become better and better at spotting the more obvious tricks; you don’t find crudely keyword-stuffed pages and pages with hidden text ranking well very often these days, for instance, though only a few years ago you did.

Could you use sophisticated tricks and rank first for your keywords? Perhaps, but your rank may not last long, and the penalty it could incur can last for a long time. (In most cases, the pages will simply drop in rank as the search engines apply an algorithm that recognizes the trick, but in some cases, a search engine could decide to remove all pages from a particular site from the index; see Chapter 21 for a discussion of this disaster.)

Warning: These tricks can be dangerous. You may get caught in one of several ways:

· A search engine algorithm may discover your trickery, and your page or your entire site could be dropped from the search engine.

· A competitor might discover what you’re doing and report you to the search engines. Google has stated that it prefers to let its algorithms track down cheaters and uses reports of search engine spamming to tune these algorithms, but Google will take direct action in some cases; in fact, it does employ a team of people to examine suspected “spam” sites.

· Your trick may work well for a while, until a major search engine changes its algorithm to block the trickery — at which point your site’s ranking will drop like a rock.

Remember: If you follow the advice from the rest of this book, you’ll surpass 80 percent of your competitors. You don’t need tricks.

Concrete Shoes, Cyanide, TNT — An Arsenal for Dirty Deeds

The next few sections take a look at some search engine tricks employed on the Web.

Keyword stacking and stuffing

You may run across pages that contain the same word or term, or maybe several words or terms, repeated again and again, often in hidden areas of the page (such as the KEYWORDS tag), though also sometimes visible to visitors. This is one of the earliest and crudest forms of a dirty deed, one that search engines have been aware of for years and are pretty good at finding these days. It’s rare to find crudely keyword-stacked pages ranking well in the search results anymore.

Take a look at Figure 10-1. The Web designer has repeated the word glucosamine numerous times, each one in a hyperlink to give it a little extra oomph. I found this page in Google a few years ago by searching for the term glucosamine glucosamine glucosamine; more recently, the same search phrase didn’t pull up anything nearly as crude as this page.

Figure 10-1: The person creating this page stacked it with the word glucosamine.

Look at this tactic from the search engine’s perspective. Repeating the word glucosamine over and over isn’t of any use to a site visitor, so it’s understandable why search engine designers don’t appreciate this kind of thing. This sort of trick rarely works these days, so sites doing this are also becoming less abundant.

Technical Stuff: The terms keyword stacking and keyword stuffing are often used interchangeably, though some people regard keyword stuffing as something a little different — placing inappropriate keywords inside image ALT attributes and in hidden layers. (The term keyword stacking is far less common these days.)

Hiding (and shrinking) keywords

Another old (and very crude) trick is to hide text; that is, to hide it from the site visitor but make it visible to the search engine, allowing you to fill a simple page with keywords for the sake of search engine optimization. (Remember that search engines don’t want you to show them content that isn’t also visible to the site visitor.)

This trick, often combined with keyword stuffing, involves placing large amounts of text into a page and hiding it from view. For instance, take a look at Figure 10-2. I found this page in Google some time ago. It has hidden text at the bottom of the page.

Figure 10-2: This text is hidden at the bottom of the page.

If you suspect that someone has hidden text on a page, you can often make it visible by clicking inside text at the top of the page and dragging the mouse to the bottom of the page to highlight everything in between. You can also look in the page’s source code.

How did this designer make the text disappear? At the bottom of the source code (choose View ⇒ Source), I found this:

<FONT SIZE=7 COLOR="#ffffff"><H6>glucosamine glucosamine glucosamine glucosamine glucosamine emu oil emu oil emu oil kyolic kyolic kyolic wakunaga wakunaga wakunaga</H6></FONT>

Notice the COLOR="#ffffff" piece; #ffffff is the hexadecimal code for the color white. The page background is white, so — abracadabra — the text disappears.

Surprisingly, I still see this trick employed now and then. In fact, I sometimes get clients who have a sudden inspiration — “Hey, couldn’t we just put a bunch of keywords into the site and make it the same color as the background?” they ask, as if they have discovered something really clever. I have to explain that it’s the oldest trick in the book.

My opinion of this crude trick goes like this:

· It may help. The trick actually can work, though not nearly as often as it used to.

· It might not hurt. You still find pages using this trick, so clearly the search engines don’t always find it.

· … but it might hurt. Search engines do discover the trick frequently and may penalize your site for using it. The page may get dropped from the index, or you may have your entire site dropped.

· So why do it? It’s just way too risky … and unnecessary, too.

Here are some other tricks used for hiding text from the visitor while still making it visible to the search engine:

· Placing text inside <NOFRAMES> tags: Some designers do this even if the page isn’t a frame-definition document. I’ve seen this method work, too, but not in recent years.

· Placing text inside <NOSCRIPT> tags: <NOSCRIPT></NOSCRIPT> tags are used to put text on a page that can be read by browsers that don’t work with JavaScript. Some site owners use them to give more text to the search engines to read, and from what I’ve seen, the major search engines often do read this text, or at least they did a few years ago. The text inside these tags probably isn’t given as much weight as other text on a page, though, and over time will probably be given less and less. (That said, I recently read about an experiment from 2015 showing that Google still indexes the text in NOSCRIPT tags.)

· Using hidden fields: Sometimes designers hide words in a form’s hidden field (<INPUT TYPE="HIDDEN">). I doubt that this does much to help, anyway.

· Using hidden layers: Style sheets can be used to position a text layer underneath the visible layer or outside the browser. This trick is quite common and probably hard for search engines to figure out.

Some Web designers still stuff keywords into a page by using a very small font size. This trick is another one that search engines may look for and penalize.

Here’s another variation: Some Web designers make the text color just slightly different from the background color, so that it’s very hard for someone reading the page to see. The text is effectively invisible, especially if it’s at the bottom of the page preceded by several blank lines. Search engines can look for ranges of colors to determine whether this trick is being employed.
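To make these variations concrete, here’s a minimal, illustrative HTML sketch (an illustration of the trick, not a recommendation) of the hiding methods just described: a layer positioned off-screen with a style sheet, a tiny font size, and a text color nearly identical to a white background. The keyword is borrowed from the earlier example:

<!-- Text pushed far off-screen with a style sheet (the hidden-layer trick) -->
<div style="position:absolute; left:-5000px;">glucosamine glucosamine glucosamine</div>

<!-- Text in a near-invisible font size -->
<span style="font-size:1px;">glucosamine glucosamine glucosamine</span>

<!-- Text in a color almost identical to a white background -->
<p style="color:#fefefe;">glucosamine glucosamine glucosamine</p>

All three are trivially visible to anyone who views the page’s source code.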

And remember, it’s not just the search engines looking; it’s your competitors, too. They might just report you. (See the section “Paying the Ultimate Penalty,” later in this chapter.)

Hiding links

A variation on the old hidden-text trick is to hide links. As you discover in Chapters 16 through 18, links provide important clues to search engines about the site’s purpose. They also provide a way for search engines to discover pages. Thus, some Web designers create links that are specifically for search engines to find but are not intended for site visitors. Links can be made to look exactly like all the other text on a page or may even be hidden on punctuation marks — visitors are unlikely to click a link on a period, so the link can be made “invisible,” allowing a search engine to discover a page that the site visitor will never see. Links may be placed in transparent images or invisible layers, in small images, or in <NOFRAMES> or <NOSCRIPT> tags, or may be hidden in any of the ways discussed previously for hiding ordinary text.
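For instance, a link hidden on a period might look something like this minimal, illustrative snippet (the page name is a placeholder, and the inline style assumes the page’s ordinary text is black):

This widget is our best seller<a href="hidden-page.html" style="color:#000000; text-decoration:none;">.</a>

The visitor sees nothing but an ordinary period at the end of the sentence, while a searchbot finds and follows the link to hidden-page.html.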

When I think about tricks like this, though, I think, “What’s the point?” There are so many legitimate things you can do in SEO, why bother with something like this?

Duplicating pages and sites

If content with keywords is good, then twice as much content is better, and three times as much is better still, right? Some site developers have duplicated pages and even entire sites, making virtual photocopies and adding the pages to the site or placing duplicated sites at different domain names.

Technical Stuff: Sometimes called mirror pages or mirror sites, these duplicate pages are intended to help a site gain more than one or two entries in the top positions. If you can create three or four Web sites that rank well, you can dominate the first page of the search results.

Some people who use this trick try to modify each page just a little to make it harder for search engines to recognize duplicates. But search engines have designed tools to find duplication and often drop a page from their indexes if they find it’s a duplicate of another page at the same site. Duplicate pages found across different sites are often okay, which is why content syndication can work well if done right (see the discussion of duplicate content in Chapter 11), but entire duplicate sites are something that search engines frown on.

On the other hand, one strategy that works well, and is not necessarily a “dirty trick,” is to create multiple sites. A company selling widgets, for instance, may have a primary site, with information about all its widgets, plus secondary sites; one focused on widgets for use during the summer, another for winter widgets, and a third for travel-size widgets. I’ve seen this strategy used very successfully, often resulting in multiple Page 1 search results.

Page swapping and page jacking

Technical Stuff: Here are a couple of variations on the duplication theme:

· Page swapping: In this now little-used technique, one page is placed at a site and then, after the page has attained a good position, it’s removed and replaced with a less optimized page. One serious problem with this technique is that major search engines often reindex pages very quickly, and it’s impossible to know when the search engines will return.

· Page jacking: Some truly unethical search engine marketers have employed the technique of using other people’s high-ranking Web pages, in effect stealing pages that perform well, at least for a while. This is known as page jacking.

Doorway and Information Pages

Technical Stuff: A doorway page is created solely as an entrance from a search engine to your Web site. Doorway pages are sometimes known as gateway pages and ghost pages. The idea is to create highly optimized pages that are picked up and indexed by search engines and that, with luck, rank well and thus channel traffic to the site.

Search engines hate doorway pages because they break one of the cardinal rules: They’re intended for search engines, not for visitors. The sole purpose of a doorway page is to channel people from search engines to the real Web site.

Technical Stuff: One man’s doorway page is another man’s information page — or what some people call affiliate pages, advertising pages, or marketing pages. The difference between a doorway page and an information page is, perhaps, that the information page is designed for use by the visitor in such a manner that search engines will rank it well, whereas the doorway page is designed in such a manner that it’s utterly useless to the visitor because it’s intended purely for the search engine; in fact, originally doorway pages were stuffed full of keywords and duplicated hundreds of times.

Crude doorway pages don’t look like the rest of the site, having been created very quickly or even generated by some kind of program. Doorway pages are often part of other strategies, too: The pages used in redirects and cloaking (discussed in the next section) are, in effect, doorway pages.

Where do you draw the line between a doorway page and an information page? That’s a question I can’t answer here; it’s for you to ponder and remains a matter of debate in the search engine optimization field. If a client asks me to help him in the search engine race and I create pages designed to rank well in search engines but in such a manner that they’re still useful to the visitor, have I created information pages or doorway pages? Most people would say that I created legitimate information pages.

Suppose, however, that I create lots of pages designed for use by the site visitor — pages that, until my client started thinking about search engine optimization, would have been deemed unnecessary. Surely these pages are, by intent, doorway pages, aren’t they, even if one could argue that they’re useful in some way?

Varying degrees of utility exist, and I know people in the business of creating "information" pages that are useful to the visitor in the author’s opinion only! Also, a number of search engine optimization companies create doorway pages that they simply call information pages.

Still, an important distinction exists between the two types of pages, and creating information pages is a widely used strategy. Search engines don’t know your intent, so if you create pages that appear to be useful, are not duplicated dozens or hundreds of times, and don’t break any other rules, they’ll be fine.

Tip: Here’s a good reality check. Be honest: Are the pages you just created truly of use to your site visitors? If you submitted these pages to Yahoo! Directory or the Open Directory Project for review by a human, would the site be accepted? If the answer is no, the pages probably aren’t informational. The “trick,” then, is to find a way to convert the pages you created for search engine purposes into pages that are useful in their own right — or for which a valid argument, at least, for utility can be made.

Using Redirects and Cloaking

Redirecting and cloaking serve the same purpose. The intention is to show one page to the search engines but a completely different page to the site visitor. Why do people want to do this? Here are a few reasons:

· If a site has been built in a manner that makes it invisible to search engines, cloaking allows the site owner to deliver indexable pages to search engines while retaining the original site.

· The site may not have much textual content, making it a poor fit for the search engine algorithms. Although search engine designers might argue that this fact means that the site isn’t a good fit for a search, this argument clearly doesn’t stand up to analysis and debate.

· Each search engine prefers something slightly different. As long as search engines can’t agree on what makes a good search match, why should they expect site owners and developers to accept good results in some search engines and bad results in others?

I’ve heard comments such as the following from site owners, and I can understand their frustration: “Search engines are defining how my site should work and what it should look like, and if the manner in which I want to design my site isn’t what they like to see, that’s not my fault! Who gave them the right to set the rules of commerce on the Internet?!”

What might frustrate and anger site owners more is if they realized that for years, one major search engine did accept cloaking, as long as you paid for it. Yahoo!’s Submit Pro program was in effect just that, a way to feed content into the search engine directly but display different content to site visitors. So cloaking is a crime, but until fairly recently, and for a decade or so, one search engine said, “Pay us, and we’ll help you do it.” (Is that a fee, a bribe, or a protection-racket payment?)

Understanding redirects

A redirect is the automatic loading of a page without user intervention. You click a link to load a Web page into your browser, and within seconds, the page you loaded disappears, and a new one appears. Designers often create pages designed for search engines — optimized, keyword-rich pages — that redirect visitors to the real Web site, which is not so well optimized. Search engines read the page, but visitors never really see it.

Redirects can be carried out in various ways:

· By using the REFRESH meta tag. This is an old trick that search engines discovered long ago; most search engines don’t index a page that has a REFRESH tag that quickly bounces the visitor to another page.

· By using JavaScript to automatically load another page within a split second.

· By using JavaScript that’s tripped by a user action that is almost certain to occur. You can see an example of this method at work in Figure 10-3, a page I found long ago (the trick isn’t that common these days). The large button on this page has a JavaScript mouseover event associated with it; when users move their mice over the image — as they’re almost certain to do — the mouseover event triggers, loading the next page.

image

Figure 10-3: The mouse pointer triggers a JavaScript mouseover event on the image and loads another page.
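To illustrate the first two methods in the list above, here’s a minimal sketch of what a redirect page’s markup might contain. The URL is a placeholder, and this is shown only so you can recognize the technique, not as something to copy:

<!-- A REFRESH meta tag that bounces the visitor to another page after 0 seconds -->
<meta http-equiv="refresh" content="0; url=http://www.example.com/real-page.html">

<!-- A JavaScript redirect that fires as soon as the page loads -->
<script>
window.location.href = "http://www.example.com/real-page.html";
</script>

Because the delay is effectively zero, the visitor never really sees the optimized page, which is exactly why search engines treat such pages as mere way stations.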

You’re unlikely to be penalized for using a redirect. But a search engine may ignore the redirect page. That is, if the search engine discovers that a page is redirecting to another page, it simply ignores the redirect page and indexes the destination page. Search engines reasonably assume that redirect pages are merely way stations on the route to the real content.

Examining cloaking

Cloaking is a more sophisticated trick than using a redirect and harder for search engines to uncover than a basic REFRESH meta tag redirect. When browsers or searchbots request a Web page, they send information about themselves to the site hosting the page — for example, "I’m Version 11 of Internet Explorer," or "I’m Googlebot." The cloaking program quickly checks the requesting device against its list of searchbots. A cloaking program also has a list of IP numbers that it knows are used by searchbots; if the request comes from a matching IP number, it knows it’s a searchbot.

So, if the device or IP number isn’t found in the list of searchbots, the cloaking program tells the Web server to send the regular Web page, the one intended for site visitors. But if the device name or IP number is listed in the searchbot list — as it would be for Googlebot, for instance — the cloaking program sends a different page, one that the designer feels is better optimized for that particular search engine. (The cloaking program may have a library of pages, each designed for a particular search engine or group of engines.)
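Here’s a stripped-down sketch of that detection-and-serving logic, written as a tiny Node.js-style JavaScript server. It’s purely an illustration of how crude the mechanism is, not something to deploy; the user-agent strings, the IP address, and the page content are placeholder assumptions:

// Illustration only: the basic decision a cloaking program makes.
const http = require('http');

const BOT_USER_AGENTS = ['googlebot', 'bingbot'];  // placeholder list
const BOT_IP_ADDRESSES = ['66.249.66.1'];          // placeholder list

http.createServer((req, res) => {
  const userAgent = (req.headers['user-agent'] || '').toLowerCase();
  const requestIp = req.socket.remoteAddress;

  // Is this request coming from a known searchbot?
  const isSearchbot = BOT_USER_AGENTS.some(bot => userAgent.includes(bot)) ||
                      BOT_IP_ADDRESSES.includes(requestIp);

  res.writeHead(200, { 'Content-Type': 'text/html' });
  if (isSearchbot) {
    // Simple, keyword-heavy page intended for the search engine
    res.end('<h1>Keyword-laden page for searchbots</h1>');
  } else {
    // Graphics-heavy page intended for human visitors
    res.end('<img src="fancy-design.jpg" alt="Page for visitors">');
  }
}).listen(8080);

The whole “trick” boils down to a single if statement, which is exactly why it’s risky: anyone comparing what a browser receives with what Googlebot receives can spot the difference.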

Here’s how the two page versions differ:

· Pages provided to the search engine: Often much simpler; created in a way to make them easy for search engines to read; have lots of heavily keyword-laden text that would sound clumsy to a real person.

· Pages presented to visitors: Often much more attractive, graphics-heavy pages, with less text and more complicated structures and navigation systems.

Search engines don’t like cloaking. Conservative search engine marketers steer well clear of this technique. Here’s how Google defines cloaking:

The term “cloaking” is used to describe a Web site that returns altered Web pages to search engines crawling the site.

Well, that’s pretty clear — cloaking is cloaking is cloaking. But, wait a minute:

In other words, the Web server is programmed to return different content to Google than it returns to regular users, usually in an attempt to distort search engine rankings.

Hang on: These two definitions aren’t describing the same concept. The phrase “in an attempt to distort” is critical. If I “return altered pages” without intending to distort rankings, am I cloaking? Here’s more from Google:

This can mislead users about what they’ll find when they click on a search result. To preserve the accuracy and quality of our search results, Google may permanently ban from our index any sites or site authors that engage in cloaking to distort their search rankings.

Notice a few important qualifications: altered pages … usually in an attempt to distort search engine rankings … cloaking to distort their search rankings.

This verbiage is ambiguous and seems to indicate that Google doesn’t totally outlaw the use of cloaking; it just doesn’t like you to use cloaking to cheat. Some would say that using cloaking to present to Google dynamic pages that are otherwise invisible, for instance, or that are blocked from indexing, perhaps by the use of session IDs (see Chapter 9), would be an acceptable practice. And here’s another very common use for “cloaking”: Many, many sites display different content depending on the location of the person viewing the page (they use IP location to do this; see Chapter 12). Thus, they show different content to different people, including “searchbots.”

And here’s another form of “valid cloaking.” As discussed in Chapter 9, SWFObject is a JavaScript library that intentionally displays something different to the search engines; it shows a Flash animation to the visitor but alternate text to browsers that don’t read Flash and to searchbots. Google is aware of SWFObject and indeed has announced that it reads the script, so this sounds like a form of legitimate cloaking.
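For context, here’s roughly what a typical SWFObject setup looks like; a sketch only, with placeholder file names and element IDs:

<div id="flashBanner">
  <p>Alternative text describing the movie's content; this is what searchbots
  and non-Flash browsers see and index.</p>
</div>

<script src="swfobject.js"></script>
<script>
// Replace the div with the Flash movie only when Flash 9 or later is available.
swfobject.embedSWF("banner.swf", "flashBanner", "600", "200", "9.0.0");
</script>

The alternative text describes the same content as the movie, so there’s no attempt to distort rankings, which is the distinction Google’s definition hinges on.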

So there are many legitimate uses for cloaking. However, as I’ve pointed out, many in the business advise that you never use cloaking in any circumstance, just in case Google thinks your purpose is to “distort search engine rankings.” My theory is that in most cases it’s fine, unless your purpose is unclear, in which case there’s a risk that Google may misinterpret your aims.

Tricks Versus Strategies

When is a trick not a trick but merely a legitimate strategy? I don’t know, but I’ll tell you that there are many ways to play the SEO game, and what to one man is a trick might to another be the obvious thing to do.

Here’s an example: creating multiple Web sites. One client has two Web sites ranking in the top five on Google for his most important keyword. (No, I can’t tell you what the keyword is!) Another client at one point had around seven of the top ten results for one of his critical keywords; several of the links pointed to his own Web sites, and several pointed to his products positioned on other people’s Web sites.

Now, this is definitely a trick: Build a bunch of small Web sites that point links back to your “core” site and then link all those sites from various places. The aim (and it can sometimes work if it’s done correctly) is to boost the core site by creating many incoming links, but in many cases the “satellite” sites may also rank well. I’ve seen this technique work; I’ve also seen it fail miserably.

Is this a trick, though — building stand-alone sites with the intention of seeing which ones rank best? And doing so not merely as a way to create links to a core site but as a way to see what works best for the search engines? I don’t know. I’ll leave it for you to decide.

Link Tricks

I look at on-page tricks in this chapter, but there’s another category, off-page tricks, or link trickery — that is, creating what amounts to fake links pointing to your site, links that only exist for the purpose of pushing your site up in the search engines. In fact, link trickery is far more common these days than on-page tricks. You can find out more about links in Chapters 16 to 19 and about the most egregious link trick — buying links — in Chapter 17.

Paying the Ultimate Penalty

Just how much trouble can you get into by breaking the rules? The most likely penalty isn’t really a penalty. It’s just that your pages won’t work well with a search engine’s algorithm, so they won’t rank well.

Warning: You can receive the ultimate penalty, though: having your entire site booted from the index. Here’s what Google has to say about it:

We investigate each report of deceptive practices thoroughly and take appropriate action when abuse is uncovered. At minimum, we will use the data from each spam report to improve our site ranking and filtering algorithms. The result of this should be visible over time as the quality of our searches gets even better.

Google is describing what I just explained — that it tweaks its algorithm to downgrade pages that use certain techniques. But:

In especially egregious cases, we will remove spammers from our index immediately so they don’t show up in search results at all. Other steps will be taken as necessary.

Tip: One of the dangers of using tricks, then, is that someone — perhaps your competitors — might report you, and if the trick is bad enough, you get the boot. Where do they report you? Google provides a form (at www.google.com/webmasters/tools/spamreport), accessible through the Webmaster account (see Chapter 13). Bing used to provide an easily accessible form but has now hidden it away behind a long, convoluted URL; you can get to it here: http://j.mp/1NLNY3r.

Note, by the way, that a report titled Most SEOs Don’t Report Competitors to Google on SEORoundtable.com found that 28 percent of respondents to a survey stated that they had reported competitors for using spam techniques. That may not be most, but it’s still almost a third. Yes, this really does happen.

What do you do if you think you’ve been penalized? You can find out in Chapter 21.