Chad Perrin: SOB

25 August 2006

a small piece of the social networking puzzle

Filed under: Cognition,Geek,Metalog — apotheon @ 04:43

While this whole Web2.0 thing appears to be little more than a vast cloud — a miasma, almost — of hype and buzzwords (maybe a swarm of buzzing words), there is a core of solid concept to it that should not be overlooked. The parts of the Web2.0 phenomenon that actually work when applied as designed work exceedingly well.

The reason these things work is pretty simple, and shouldn’t be a surprise to anyone, unless you’re going to be surprised that people haven’t done this sooner. It’s just a matter of applying socioeconomic principles and game theory (in many cases, the same thing) to the problem of enhancing your web presence. More than in any other endeavor that springs immediately to mind, the strength of one’s web presence is directly linked to providing a valuable service or product to the general public: information. Information is distinct from data in that it is organized for maximum usefulness. The greater the degree of useful organization of the data you make available, the more generally useful the data itself, and the more such data you provide, the greater your power to inform — and the greater the use others can make of your website.

The Web places control over the user experience ultimately in the hands of the clients. They have the option at any time of going somewhere else. As such, they must be enticed with value. Shallow marketing ploys may draw in first-time viewers, but for the most part they will fail to grant you return visitors. Unless your revenue model is based on making money from ad clicks from people desperate to escape your ugly, useless website (don’t laugh, there are a couple of websites out there that make real money like this), you need return “customers”. You need people to not only return, but also direct others to you. The social networking characteristics of this Web2.0 phenomenon, where they are effectively employed, are exceedingly good at bringing in visitors: it’s then up to you to keep them coming back.

One example in particular of how to use Web2.0 methodologies to draw readers into your website, assuming for the sake of argument that we’re talking about a weblog, is the ultimate topic of this SOB entry: trackbacks and pingbacks.

Trackbacks and pingbacks are essentially the same thing. The difference is that trackbacks require you to jump through extra hoops to get them to work, if you’re the tracking party (as opposed to the backtracking party). It’s worth the effort, however, because it provides instant link love from others: when you link to someone else’s weblog post, a trackback or pingback inserts a link into that website that leads back to you. This draws interested parties to contribute on your weblog, and it provides the people reading the backtracking or backpinging weblog with more context for the topic of discussion to which it refers.
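For the curious, the entire pingback exchange boils down to a single XML-RPC call defined by the Pingback specification: pingback.ping(sourceURI, targetURI). Here is a minimal sketch of the sending side in Perl, assuming the XMLRPC::Lite module from CPAN; the URLs and endpoint below are hypothetical placeholders, and a real client would first discover the endpoint from the target’s X-Pingback header or link element.

#!/usr/bin/perl
# Hypothetical pingback client sketch: one XML-RPC call, per the
# Pingback 1.0 specification.
use strict;
use warnings;
use XMLRPC::Lite;   # distributed as part of SOAP::Lite on CPAN

my $source = 'http://sob.example.net/my-post';       # the post doing the linking
my $target = 'http://other.example.org/their-post';  # the post being linked to

my $reply = XMLRPC::Lite
    ->proxy('http://other.example.org/xmlrpc')       # discovered endpoint
    ->call('pingback.ping', $source, $target);

print $reply->fault
    ? 'pingback failed: ' . $reply->faultstring . "\n"
    : 'pingback accepted: ' . $reply->result . "\n";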

As an administrator of a weblog site, it is definitely in your best interest to provide trackback and pingback service. This typically manifests thus: someone reads something you had to say, writes something related in his own weblog, and links to what you said. Via either trackbacks or pingbacks, your weblog software automatically creates a link with some excerpted text from the other person’s weblog for context, preferably somewhere useful such as in the comments area on your weblog post.

Now, someone shortsighted and self-centered who is trying to get away with something rather than merely participating in a mutually beneficial economic relationship might look for a way to encourage incoming links without using trackbacks and pingbacks. Such a person might think of this as some kind of competition where people only “win” by making other people “lose”, and want to ensure that he is always at least breaking even with the other “winners”. Such a person might thus wish to deny links to others, preventing them from getting reciprocal traffic, and look for ways to “trap” readers at his own weblog. Such a person would in fact be doing himself a great disservice by exempting himself from the mutually beneficial sharing of link love made possible by trackbacks and pingbacks. Disallowing trackbacks and pingbacks only hurts you, because people are less likely to link to you when you are stingy about reciprocation. Self-centered market dominance tactics wherein one attempts to “win” by making others “lose” are more visible and obviously unpleasant than you might realize.

Allow pingbacks in your weblogs. I cannot stress this enough. The reduction in effort involved in taking advantage of pingbacks (where the incoming link is automatically detected and the reciprocal link is automatically inserted) is a distinct positive characteristic in encouraging others to link to you. Even if you’re not in this for the fame, you wouldn’t be publicly posting your thoughts to the web if you didn’t want complete strangers to come by and contribute to interesting discussion (or, if you would, you’re not very smart), and this is one of the most effective ways to get that.

Allowing trackbacks is an excellent idea as well. It places a greater effort burden than pingbacks do on people seeking reciprocal links from your weblog, because the person needs to explicitly specify a trackback URL for your weblog post for it to work. This means it is not as effective in encouraging incoming links, because people don’t like effort. They’d rather link to someone posting the same general ideas elsewhere, where pingbacks are implemented. On the other hand, when the pingback technology fails to notice an incoming link, trackbacks can provide a nice fall-back, a safety net if you will, so that the opportunity to mutually benefit from reciprocal linking is still there. With trackbacks, however, make darn well certain that the trackback URL is not hard to find. Make it clear and obvious. Don’t make people guess based on which weblogging software you use, or search throughout your website for it. Do not force people to guess what changes need to be made to a general-purpose trackback URL for it to work for a given post, either: ensure that the URL is available on the same page as the post to which someone might link, and that the URL is customized (automatically) for that page as much as necessary, so no real thinking need be done by the would-be linker.
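For reference, the sending side of a trackback is nothing more exotic than an HTTP POST of a few form-encoded fields to that published trackback URL, per the TrackBack specification. A rough sketch in Perl using LWP::UserAgent follows; every URL and field value here is a hypothetical placeholder.

#!/usr/bin/perl
# Hypothetical trackback client sketch: a form-encoded HTTP POST to
# the target weblog's published trackback URL.
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
my $response = $ua->post(
    'http://other.example.org/their-post/trackback/',   # published trackback URL
    [
        url       => 'http://sob.example.net/my-post',   # where the reciprocal link points
        title     => 'My related post',
        excerpt   => 'A snippet shown for context on the other weblog...',
        blog_name => 'SOB',
    ],
);

# The spec answers with a small XML document; <error>0</error> means success.
print $response->content;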

There’s a downside to trackbacks and pingbacks: spammers. Call them spingers or spacklers or whatever else you like, there’s a new breed of unregenerate marketing blood flukes attempting to suck nutrients out of your precious weblog fluids in acts of drive-by parasitism. It is for this reason, if no other, that you should implement post moderation for first-time trackbackers and pingbackers. When someone who has never before had a trackback or pingback approved on your weblog links to it, the notification should go into a moderation queue, and not directly onto the page. This requires a little more effort from you, of course, as you then need to examine what’s in the queue and determine whether it’s worthy of approval, but if your system is set up to allow you to provide blanket approval for a given source once you determine that the person in question is not a spammer (as WordPress does), you’re home free.

There is one exception to using trackbacks and pingbacks. Someone like Randy Charles Morin, who obsessively tracks down incoming linkers without the benefit of trackbacks and pingbacks (as far as I’m aware), and makes a specific effort to link back to them with a personal touch, actually provides a far better alternative to trackbacks and pingbacks — at least, better for someone like me, who (very) occasionally links to him. The effort involved in that, however, is well outside the realm of reasonability for me, with my schedule, so I rely on trackbacks and pingbacks instead.

By the way, Randy runs something like a dozen different weblogs (I’m sure that’s an exaggeration, but still) that are usually updated at least daily. The man makes Real Money at this “blogging” thing: I’m impressed to have grossed just over $20 since February via my blog, and he probably makes more than that every day on his Google ads. While I don’t necessarily agree 100% with everything he has to say about how to improve the effectiveness of your weblog, I do tend to agree at least 98%, and even when I disagree it gets me thinking about the matter in a way that is ultimately beneficial. He posts tips and tricks about such things in a number of venues quite regularly, and he is definitely worth the read if you want to benefit from his extensive weblogging experiences.

24 August 2006

Signs You’re Doing Something Wrong (Part Two)

Filed under: Cognition,Geek — apotheon @ 04:23

About three months ago, I composed Signs You’re Doing Something Wrong (Part One). I promised more. I did not know how long it would take to get there, but I certainly expected it to be sooner than three months.

Here it is, three months (and change) later.

There are, as I pointed out in the previous installment, many programmers out there who simply don’t know they suck. These are some indicators they (we?) can use to help categorize themselves (ourselves?). You may notice that these signs you’re doing something wrong are attitude-oriented, rather than methodology-oriented, but indicators of suck are no less transferable across activity domains for that. Arbitrary rules are also a sign that you’re doing something wrong when passing laws in the House of Representatives. Superficial productivity metrics are also a sign you’re doing something wrong when firing someone for procrastination. The inability to describe the benefits of your work without euphemizing its failings is also a sign you’re doing something wrong if you’re cooking the books to make your multi-thousand-dollar product package look cheap compared to its one-good-dinner-price competitor. By the same token, the following points apply to many realms of endeavor, though I present them in the language of programming.

  1. Bandaging: When you find a bug or a vulnerability, don’t just slap a band-aid on it. I don’t care how busy you are, don’t create a behavioral patch for your software (or legislation, or revolutionary new vacuum cleaner design) and attach it to the standard distribution. Temporary solutions last the longest when they’re applied directly to the problem. When shipped separately, however, there’s a little bit of breathing room. Take the opportunity to find the source of the problem and fix that instead for general distribution. Slapping a bandage on a nice deep cut may prevent you from bleeding to death, but you really need to get to a hospital and get that looked at by a professional. If you never hunt out the root causes of the problems you encounter, you’ll never be an expert, and if you never fix the root causes of your problems, you’ll only be replacing one problem now with four or five more problems later. The better you get at hunting down root causes and fixing them, the better you’ll be at avoiding problems in the first place later. If you’re using a magic number to solve an arithmetic bug in your program, you’re in for trouble later (see the sketch after this list).
  2. Optimism: You should always, when you’re programming, be asking yourself “What can possibly go wrong?” You should be asking yourself this because you should be planning for every eventuality. This is cynicism, which is not the same as pessimism: as I’m known to say from time to time, a real cynic is an idealist who learns from life experience. Be cynical. Expect the worst but, unlike a pessimist, do not take that as a foregone conclusion of defeat. Instead, take it as foreknowledge of the sort of thing for which you should be planning. Bad, unplanned outcomes are common, and you can best recover from them or even avoid them entirely in many cases by imagining unplanned outcomes and preparing for them. When you’re always “looking at the bright side” and trying to comfort yourself with silver linings, you’re overlooking the potential for things to get worse. Sometimes, they do get worse, and if you’ve overlooked that potential, it will take you by surprise. When that happens, you’re screwed. This is particularly valuable when driving a car in heavy traffic: if you always assume that any random one of the dimwits around you might cut you off without leaving enough room for you to slow down and avoid rear-ending him, you’ll always be able to prepare to the point where you will always be able to avoid rear-ending him. Funny how that works.
  3. Spelling: No, you don’t have to be an excellent speller. On the other hand, you do have to be able to hide your atrocious spelling from others — more specifically, from excellent spellers. Hint: the spell-checker that comes with MS Word will not suffice (for instance, it doesn’t recognize when the wrong homonym is used, only when you type an extra T and turn the word Two into TTwo). If you don’t care enough to double-check your spelling, you’ll write bad documentation, suffer from a lack of attention to detail, and fail utterly to appear professional to a great many people. More to the point, you’ll betray your contempt for correctness. The worse your spelling is, the more important it is to get it right: expert spellers can tell the difference between an expert speller who overlooked a typo and an atrocious speller who just doesn’t care enough to double-check. That’s the difference that matters — was it an overlooked typo, or are you both ignorant and irresponsible? One might legitimately ask how much attention you’ve paid to the correctness of your code if you never bothered to figure out whether the documentation should read “send an integer to method foo()” or “send an integer too method foo()”. Correctness matters.
  4. Tolerance: It has been very popular for many years to pretend that everyone has something to offer and everyone is valuable. It’s simply not true — at least, not for your purposes, whoever you are, and whatever your purposes might be. Incompetence should not be tolerated. I don’t mean that you should march into your project manager’s office and tell him what kind of useless, washed-up piece of crap he is. What I mean is that when your project manager sucks rocks, you should be polishing up your resume. You should be looking for someone who measures productivity by what your code does, and how well it does it, rather than by how many lines of it you’ve written this week, and when you find that person you should get him or her to hire you immediately. When you’re the project manager, don’t just accept corporate policy written by some idiot predecessor: work to change policy so that it makes sense. Tradition is not a justification for incompetence, and tolerance is not a justification for complacency. If you’re in a counterproductive environment, and you aren’t either looking for a better environment or trying to fix the environment in which you’re working, you are just dead weight. That means you’re doing something wrong.
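To make the point about bandaging concrete, here is a hypothetical Perl sketch of the difference. Both the data and the magic number are contrived for illustration: the bandaged version quietly skips the last list element and multiplies by a fudge factor that happens to produce the right answer for one set of test data, while the fixed version removes the actual bug.

#!/usr/bin/perl
# Contrived example: a band-aid versus a root-cause fix.
use strict;
use warnings;

sub mean_bandaged {
    my @n   = @_;
    my $sum = 0;
    $sum += $n[$_] for 0 .. $#n - 1;   # bug: quietly skips the last element
    return ($sum / @n) * 1.25;         # magic number tuned to one test case
}

sub mean_fixed {
    my @n   = @_;
    my $sum = 0;
    $sum += $_ for @n;                 # root cause fixed: sum the whole list
    return $sum / @n;
}

print mean_bandaged(8, 8, 8, 8, 8), "\n";    # 8   -- "works" for this data
print mean_bandaged(1, 2, 3, 4, 10), "\n";   # 2.5 -- the real mean is 4
print mean_fixed(1, 2, 3, 4, 10), "\n";      # 4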

I know some of this must seem downright heretical. Good. If I were telling you stuff that perfectly fits the “common wisdom”, you wouldn’t be learning anything new. Here’s a bonus sign you’re doing something wrong.

  1. Orthodoxy: If “nobody ever got fired for buying IBM” is the guiding principle of your professional life, you’ll never be anything but a mindless drone. While you may be able to come up with career-saving excuses all the time, you won’t be saving money and time. Nobody ever became great by pursuing a life of perfect averages.

23 August 2006

Linguistic Elegance Test: Closures and Lists

Filed under: Cognition,Geek — apotheon @ 04:05

I have talked about lexical closures a fair bit here at SOB. They factor heavily into what I’m about to discuss, so I’ll try defining them for those of you who might be unclear on the concept. This assumes some basic understanding of programming concepts, however: my target with this is programmers, after all.

If you visit Wikipedia at the Closure disambiguation page, you’ll find this definition:

an abstraction binding a function to its scope

From there, clicking on the link for Closure (computer science) takes you to a page whose first line of text defines closures thusly:

In programming languages, a closure is a function that refers to free variables in its lexical context.

It then goes on to say this:

A closure typically comes about when one function is declared entirely within the body of another, and the inner function refers to local variables of the outer function. At run time, when the outer function executes, a closure is formed. It consists of the inner function’s code and references to any variables in the outer function’s scope that the closure needs.

Closures are commonly used in functional programming to defer calculation, to hide state, and as arguments to higher-order functions.

A closure combines the code of a function with a special lexical environment bound to that function (scope). Closure lexical variables differ from global variables in that they do not occupy the global variable namespace. They differ from object oriented member variables in that they are bound to function invocations, not object instances.

The thoroughly excellent book Learning Perl Objects, References, and Modules, published in later editions under the name Intermediate Perl, offers the following explanations:

The kind of subroutine that can access all lexical variables that existed at the time it was declared is called a closure (a term borrowed from the world of mathematics).

Closures are “closed” only on lexical variables, since lexical variables eventually go out of scope. Because a package variable (which is global) never goes out of scope, a closure never closes on a package variable. All subroutines refer to the same single instance of the global variable.

In short, then, a lexical closure is (in my own words):

a block of code that can be passed around as a first-class function, containing data that achieves protection and encapsulation because it is assigned to a lexical variable in the closure’s enclosing scope that has subsequently gone out of scope, though the closure persists

I could probably make that prettier, with some effort, but there it is.
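A minimal Perl sketch of that definition, for the sake of concreteness (the counter example is mine, not from any of the sources quoted above):

#!/usr/bin/perl
# $count is a lexical variable in make_counter()'s scope; the anonymous
# sub closes over it, so it persists (privately) after make_counter()
# returns and its enclosing scope has otherwise gone away.
use strict;
use warnings;

sub make_counter {
    my $count = 0;
    return sub { ++$count };
}

my $tick = make_counter();
print $tick->(), "\n" for 1 .. 3;   # 1, 2, 3
my $tock = make_counter();
print $tock->(), "\n";              # 1 -- a fresh, independent $count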

A lexical variable, in case you’re not aware, is a variable with static local scope. That is to say that its scope is definable at compile time and is local to its nearest enclosing scope. In Perl, a variable is lexical if it is declared with my(). In Ruby and UCBLogo, all new variables are lexically scoped by default. Python does not provide proper lexical scoping, so far as I am aware, as its local variables are not fully accessible within scopes enclosed by the variable’s scope (they can be read, but not written to, effectively changing them from variables to constants for enclosed scopes). This limitation can apparently be short-circuited for single array elements, but this behavior looks more like a bug of the language design than a feature.
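To illustrate what “static local scope” means in practice, here is a quick Perl sketch; each my() declaration is bound to its nearest enclosing block at compile time:

#!/usr/bin/perl
use strict;
use warnings;

my $x = 'outer';
{
    my $x = 'inner';   # a new lexical that shadows the outer $x
    print "$x\n";      # prints "inner"
}
print "$x\n";          # prints "outer" -- the inner $x no longer exists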

Lexical closures (also known simply as “closures” for short) require two things:

  1. proper lexical scoping
  2. “anonymous” blocks of code that are first-class functions

Lexical scoping provides a number of benefits to programming languages that implement it. As time passes, more and more of the languages we encounter implement lexical scoping. Some languages, such as Perl, predate the common recognition of the benefits of lexical scoping, and incorporated it into their designs later: this applies to Lisp, as well. Other languages have incorporated lexical scoping from day one, and in the future I expect this number to grow prodigiously, as it is likely that any language designer worth a damn will be using lexical scope for years to come (at least until someone invents or discovers something even better).

Lexical scope contributes to cleaner code, reduces the likelihood of scope-related bugs, and because it is defined at compile time, runtime execution for a program with only lexically scoped variables is faster. More benefits than these apply, of course, but I’m sure you can learn more about the subject on your own.

The power and flexibility of functions that can be passed around as parameters for other functions should be fairly self-evident to even a thoroughly mediocre programmer, whether that programmer can immediately envision instances where it is directly useful or not.

By combining these two characteristics of a language — lexical scope and first-class functions — to create closures, we introduce protection and encapsulation. These are characteristics of data bound to code whose value is quite central to the entire object oriented programming paradigm.
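A small sketch of that claim in Perl (the bank-account metaphor is my own illustration): the lexical $balance below is reachable only through the two closures, much like a private member variable in an object oriented language.

#!/usr/bin/perl
use strict;
use warnings;

sub new_account {
    my $balance = shift || 0;        # lexical state, hidden from the caller
    return {
        deposit => sub { $balance += shift },
        balance => sub { $balance },
    };
}

my $acct = new_account(5);
$acct->{deposit}->(10);
print $acct->{balance}->(), "\n";    # 15 -- no way to touch $balance directly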

I have recently addressed the matter of elegance in programming here at SOB. Within that discussion, and in a follow-up comment to it, I explained and demonstrated the varied potential for elegance as I defined it within the designs of various programming languages. My initial, very simple, demonstration involved merely providing a series of “hello world” programs in a number of different languages so that the amount of scaffolding cruft necessary to achieve this simplest of operations — outputting a string — could be compared between languages. It was intentionally an extremely simple example, as the point was to demonstrate the lack of elegance necessary to achieve such simple results in some languages. This is, I believe, a very salient and useful means of comparing the elegance of the code a given set of languages provides for its users.

It is not, however, the only salient and useful comparison, and is by no means a comprehensive metric for judging the potential elegance within a language. In recognition of this, Colin Alston in his karnaugh.za.net weblog suggested that list processing might provide some useful data in a comparison between languages. He went further than merely suggesting it, however, and went on to provide such a demonstration in his weblog entry Elegance: a function of consistency.

(Note: Much of the following may seem confusing due to the fact that the “consistency” essay has been moved. As part of the move, it seems that all comments on the original weblog post there have been lost, thus eliminating much of the context for the following.)

Colin’s heart seems to rest with Python, one of a series of languages that I tend to hold in high esteem in general. Its closest relatives, it seems to me, are Perl and Ruby; a little further afield are the various Lisps. Python imposes some arbitrary limitations on the programmer, however, such as limiting anonymous first-class functions (called “lambdas” in Python, probably as a direct homage to the use of the term in Lisp, despite their limitations in Python) to a single expression. Another apparently arbitrary limitation is its restriction of scope (as mentioned above) so that variables that might otherwise be lexically scoped are read-only constants from the perspective of enclosed scopes.

This aside, I was surprised to see the length of the Python example provided as a solution to Colin’s list-processing comparison problem. It was not terribly long, of course — much shorter than the C, C++, and Java examples — but several syntactic elements longer than I expected. I rather suspect that, like the Ruby example provided by one of his readers, it can be shortened somewhat. The Python example provided contains nine distinct syntactic elements (where the list used to populate the array is taken as a single syntactic element).

The Ruby example provided by Colin’s reader “jerith”, meanwhile, contains eight distinct syntactic elements, but I have shortened it to a mere five elements. Surprisingly, Perl outperforms even Ruby handily: Colin’s reader Jonathan McKeown provided both a seven-element example (beating the previous Python and Ruby examples) and a four-element example (beating my own golfed Ruby example). One might make a case for each of the Perl examples from Jonathan McKeown containing an extra syntactic element in the form of the qw// used to define the list. That construct is a tool for making the code easier to read and write rather than a necessity, however: it is merely syntactic sugar for quote punctuation in the list definition. Even if it is counted as a distinct syntactic element, it only brings the Perl examples into a tie with Ruby.

My own Perl example, provided later in the discussion, further golfed the Perl down to three syntactic elements. That’s one third the number of distinct syntactic elements used in the Python example. Thus, my surprise.

I have here a third suggestion for a point of comparison between languages. In this comparison, I use a lexical closure that operates upon a list and returns the results of that operation. The results are output to STDOUT (generally, the command line). The list itself is defined as arguments to the otherwise noninteractive program, thus allowing the program to behave as a filter utility. Each element of the modified list is printed on a separate line in the program’s output. The input and output of the program should look like this:

# programname arg1 arg2 arg3
arg3
arg2
arg1

The number of arguments the program can take should be arbitrary. As one might guess from the above example, the list processing operation I’ve decided on is a simple reversal of the elements in the list. As with the “hello world” example, I deliberately chose a task simple enough that a programmer should be able to understand, at least in general terms, what is going on in examples even from languages with which (s)he is not familiar — though in this case there is some general programming knowledge that is not universal (lexical scoping, first-class functions, and the closure that results from these) in use; hopefully any shortfall in required knowledge has been addressed here.

I hope that quite a few of you will contribute examples that satisfy these requirements for a demonstration of linguistic potential for elegance. In the meantime, I will start things off with two examples of my own.

First, Perl:

#!/usr/bin/perl -l

sub foo { my @list = @_; sub { reverse @list }; }

$bar = foo(@ARGV); print for $bar->();

Unlike with Colin’s example, one need not agonize over the number of distinct syntactic elements at all in this case, but we are left with another potential point of contention: the -l argument in the shebang line. My take on it is that this is a directive to the interpreter, effectively redefining a particular core function of the language, and not a syntactic element. That being the case, this Perl example contains sixteen distinct syntactic elements, by my count. The closure generator itself contains nine, the function call that actually creates the closure to be used contains four, and the closure call statement where its results are printed to STDOUT contains three.

The closure generator suffers significantly from Perl’s heritage as a language without lexical scoping in earlier versions, and from the fact that in the addition of lexical scope, it was not made the default scope used for new variables. As a result, the @list array must be explicitly declared and assigned for it to have lexical scope, adding three otherwise unnecessary distinct syntactic elements. Similarly, the closure call and print statement contains a syntactic element that is not necessary in some other languages — the for loop. This is only necessary because of the requirement for printing each list element on a separate line, however. If the input list elements contained newline characters, this would not be necessary, and neither would the -l in the shebang line.

Second, Ruby:

#!/usr/bin/ruby

def foo (list)
  lambda { list.reverse }
end

bar = foo(ARGV)
puts bar.call

Because of Ruby’s puts() kernel method, there is no need for an interpreter directive as in the Perl example. This does provide for potentially increased elegance of code, if you accept as a given that requiring distinct syntactic treatment to append a newline to output constitutes gratuitous cruft, rather than simply a lack of syntactic sugar. As I went easy on Perl in this matter, however, I will do the same for Ruby: let us consider puts() a syntactic point of elegance.

That being the case, this program contains a grand total of thirteen syntactic elements (the language keyword end is not counted, being merely a delimiter, any more than braces are in Perl). The same number of distinct syntactic elements are in use in the call to the closure and output to STDOUT, and in the closure instantiation itself, but Ruby is a clear winner in the closure generator definition. While one might point to a number of superficially apparent reasons for this by examining the list of additional syntactic elements involved in the Perl example, these are all necessary only because of the difference between the two languages in how lexical scope is achieved. In Ruby, it is the default behavior of the language, while in Perl it must be explicitly declared.

