
Archive for April, 2011

jStat – A Statistical Library With JavaScript

10 Apr

jStat is a JavaScript library that enables you to perform advanced statistical operations without the need for a dedicated statistical language.

Simply put, it aims to be a real JavaScript-based alternative to languages like R and MATLAB.

The library is standalone; for the plotting functionality, however, it requires jQuery, jQuery UI, and the jQuery flot plugin.

jStat


 
 

This Time, You’ll Learn Node.js

08 Apr

Node.js is an amazing new technology, but, unless you’re specifically a JavaScript developer, the process of becoming acquainted with it can quickly become a bit overwhelming. But that’s why we’re here! If you want to really learn how to use Node.js, this set of articles and screencasts will do the trick.


An Introduction to Node.js



Screencast Transcript

Hi guys, my name is Christopher Roach, and I'll be your guide throughout this series of screencasts on Node.js. In this series we'll be using Node to create a simple blog engine, like the one made famous in the popular Ruby on Rails introductory video. The goal of this series is to give you, the viewer, a real feel for how Node works so that, even when working with any of the popular web development frameworks out there, such as Express or Geddy, you'll feel comfortable enough with the inner workings of Node to be able to drop down into its source and make changes to suit your needs as necessary.


Installation

Before we get into some of the details of what Node is and why you’d want to use it, I’d like to go ahead and get us started with the installation of Node, since, though super easy, it can take some time.

Node is still very young, and is in active development, so it’s best to install from the source.

That said, Node has very few dependencies, and so compilation is nowhere near as complicated as that of other projects you may have fought with in the past. To get the code, visit the Node.js website. If you scroll down the page to the download section, you'll find a couple of choices. If you have Git installed, you can clone the repository and install from there. Otherwise, there's a link to a tarball that you can download instead. In this video, I'll keep things simple and install from the tarball.

While this is downloading, now is a good time to mention that efforts are ongoing to provide a port of Node for Windows, and there are instructions for installing on Windows under Cygwin or MinGW. I believe there are even some binary packages out there that you can install from, but at the time of this writing, its primary environment is Unix- and Linux-based platforms. If you're on a Windows machine, you can click on the link for build instructions and follow the steps there for a Windows installation, or you can install a version of Linux, such as Ubuntu, and install Node there.

When it's finished downloading, simply untar the package with tar -xvf and cd into the directory it created. First we need to run ./configure, then make, and finally make install. That's going to take a little time to build, so I'll let that run in the background and take this opportunity to talk a bit more about Node, and why it's causing such a stir in the web development community.


Introduction to Node

Node is JavaScript on the server.

So, if this article and video is your first introduction to Node, you’re probably wondering just what it is and what makes it worth learning when there are already so many other web development frameworks out there to choose from. Well, for starters, one reason you should care is that Node is JavaScript on the server, and let’s face it, if you work on the web, love it or hate it, you’re going to have to work with JavaScript at some point. Using JavaScript as your backend language as well as for the client-side means a whole lot less context switching for your brain.

Oh, I know what you’re thinking: “so Node is JavaScript on the server, well that’s great, but there’ve been other JavaScript on the server attempts in the past that have basically just fizzled.”

What makes Node any different from the rest?

Well, the short answer is: Node is server-side JavaScript finally done right. Where other attempts have basically been ports of traditional MVC web frameworks to the JavaScript language, Node is something entirely different. According to its website, Node is evented I/O for V8 JavaScript, but what exactly does that mean? Let’s start with V8.

V8 is Google’s super fast JavaScript implementation that’s used in their Chrome browser.

Through some really ingenious application of “Just in Time” compilation, V8 is able to achieve speeds for JavaScript that make users of other dynamic languages, such as Python and Ruby, green with envy. Take a look at some of the benchmarks and I believe you'll be amazed. In many cases, V8 JavaScript is up there with JVM-based languages, such as Clojure and Java, and compiled languages, such as Go.

JavaScript’s ability to pass around closures makes event-based programming dead simple.

The other key phrase in that statement is evented I/O. This one is the biggie. When it comes to creating a web server you basically have two choices to make when dealing with multiple concurrent connection requests. The first, which is the more traditional route taken by web servers such as Apache, is to use threads to handle incoming connection requests. The other method, the one taken by Node and some extremely fast modern servers such as Nginx and Thin, is to use a single non-blocking thread with an event loop. This is where the decision to use JavaScript really shines, since JavaScript was designed to be used in a single threaded event loop-based environment: the browser. JavaScript’s ability to pass around closures makes event-based programming dead simple. You basically just call a function to perform some type of I/O and pass it a callback function and JavaScript automatically creates a closure, making sure that the correct state is preserved even after the calling function has long since gone out of scope. But this is all just technical jargon and I’m sure you’re dying to see some code in action. I’m going to fast forward a bit to the end of this install, so we can start playing around with our brand new, freshly minted copy of Node.
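To make that idea concrete, here is a minimal sketch of a callback preserving state through a closure, using setTimeout as a stand-in for a real I/O call (fetchGreeting is a hypothetical name for illustration, not a Node API):

```javascript
// setTimeout stands in for a slow I/O operation here. The callback
// we pass along still sees the caller's local state long after
// fetchGreeting itself has returned.
function fetchGreeting(name, callback) {
  setTimeout(function () {
    // `name` lives on in the closure even though the calling
    // function has long since gone out of scope
    callback('Hello, ' + name);
  }, 10);
}

fetchGreeting('Node', function (greeting) {
  console.log(greeting);
});
```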


Confirming the Installation

So, it looks like my build has finally finished; I want to quickly check and make sure that everything went well with the install. To do so, simply run node --version from the command line, and you should see some indication that you're running the latest version of Node which, at this time, is version 0.4.5. If you see a version print out, then you can rest assured that everything went swimmingly and you're ready to write your first Node app. So, let's cd back into our home directory and create a new folder to hold all of our work during the course of this series of screencasts. Here I'm simply going to call mine ‘blog’ and let's cd into that to get started.


Node – The Server Framework

Unlike other frameworks, Node is not strictly for web development. In fact, you can think of Node as a framework for server development of any kind. With Node you can build an IRC server, a chat server, or, as we'll see in this set of tutorials, an HTTP server. And since we can't have an introductory tutorial without the obligatory ‘Hello World’ application, we'll begin with that.


Hello World

Let's create a new file called app.js. Now, Node comes with a handful of libraries to make the development of event-based servers easy. To use one of the available libraries, you simply include its module using the require function. The require function returns an object representing the module that you pass into it, and you can capture that object in a variable. This effectively creates a namespace for the functionality of any required module. For the creation of an HTTP server, Node provides the http library. So let's go ahead and require that now and assign the returned object to the http variable.

Next, we’ll need to actually create our server. The http library provides a function called createServer that takes a callback function and returns a new server object.

The callback function is what Node calls a listener function and it is called by the server whenever a new request comes in.

Whenever an HTTP request is made, the listener function will be called and objects representing the HTTP request and response will be passed into the function. We can then use the response object inside of our listener function to send a response back to the browser. To do so, we’ll first need to write the appropriate HTTP headers, so let’s call the writeHead function on our response object.

The writeHead function takes a couple of arguments. The first is an integer value representing the status code of the request which for us will be 200, in other words, OK. The second value is an object containing all of the response headers that we’d like to set. In this example, we’ll simply be setting the Content-type to ‘text/plain’ to send back plain text.

Once we’ve set the headers, we can send the data. To do that, you’ll call the write function and pass in the data that you wish to send. Here, let’s call the write function on our response object and pass in the string “Hello World“.

To actually send the response, we need to signal to the server that we’re done writing the body of our response; we can do that by calling response.end. The end function also allows us to pass in data as well, so we can shorten up our server code by getting rid of the call to the write function that we made earlier and instead passing in the string “Hello World” to the end function, like so.
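Side by side, the long and short forms of the listener look like this (the function names are ours, purely for illustration):

```javascript
// Long form: write the headers, write the body, then end.
function listenerLong(request, response) {
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.write('Hello World');
  response.end();
}

// Short form: end() accepts the final chunk of data itself, so the
// separate write() call can be dropped.
function listenerShort(request, response) {
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello World');
}
```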

Now that we’ve created our server, we need to set it up to listen for new requests. That’s easy enough to do: call the listen function on our server object and pass in a port number for it to listen on; in this case I’ll be using port 8000. The listen function also takes an optional second parameter which is the hostname URL, but since we’re just running this locally, we can safely skip that parameter for now.

Finally, let’s print out a message to let us know that our server is running and on what port it’s listening for new requests. You can do that by calling console.log, just like we would in the browser, and passing in the string “Listening on http://127.0.0.1:8000“. There we go, now let’s run our app by calling node and passing to it the name of the file we want it to execute.


THE REPL

Before we bring this first article and video in the series to a close, let’s flip back over to the terminal and quickly take a look at Node’s REPL.

REPL, for those unfamiliar with the acronym, stands for Read-Eval-Print-Loop: a simple program that accepts commands, evaluates them, and prints their results.

It's essentially an interactive prompt that allows you to do pretty much anything you can do with regular Node, but without the overhead of creating a separate file, and it's great for experimentation. So let's play around a bit with the REPL and learn a bit more about Node.

We'll first need to stop our server application by hitting Ctrl-C. Then run node again; this time, however, without a filename. Running node without any arguments will bring up the REPL, as we can see here by the change in the prompt. The REPL is very simple: basically, you write JavaScript code and see the evaluation of that code. Despite its simplicity, though, the REPL does have a few commands that can come in handy, and you can get a look at each of these by calling the .help command at the prompt. Here (refer to screencast) we see a list of four commands, the first of which is the .break command. If you are writing some code that spans several lines and you find that you've made some type of mistake and need to break out for whatever reason, the .break command can be used to do so. Let's try it out now…

I'm going to create a function here, and I'll just call it foo, and open the function body and then hit enter. Notice that, on the next line, rather than seeing the typical greater-than symbol, we now see a set of three dots, or an ellipsis. This is Node's way of indicating to us that we have not yet finished the command on the previous line and that it is still expecting more from us before it evaluates the code that we've typed in. So, let's go ahead and add a line of code now: we'll do console.log and we'll print out the name of the function. Let's now hit enter and, again, notice that the ellipsis is being displayed once more. Node is still expecting us to finish the function at some point. Now let's assume that I've made a mistake and I just want to get back to a normal prompt. If I continue to hit enter, Node continues displaying the ellipsis. But if I call the .break command, Node will break us out of the current command and take us back to the normal prompt.

Next, we have the .clear command. This one will clear our current context. So if you've cluttered up the environment with the creation of several variables and functions and you want a clean slate, simply run the .clear command and, voila, everything magically disappears.

.exit and .help

Finally, there’s the .exit and .help commands. The .help command is fairly obvious, since it’s the command we used to see the list of commands in the first place. The .exit command is equally obvious: you essentially just call it to exit the REPL, like so.

So, that pretty much covers all of the functionality that the REPL provides outside of the evaluation of the code you enter. But before we leave the REPL completely, I’d like to take this opportunity to discuss some differences and similarities between JavaScript in the browser and Node’s flavor of JavaScript. So let’s run Node again and jump back into the REPL.

The first difference between client-side JavaScript and Node is that, in the browser, any function or variable created outside of a function or object is bound to the global scope and available everywhere. In Node though, this is not true. Every file, and even the REPL, has its own module level scope to which all global declarations belong. We’ll see this put to use later in the series when we discuss modules and create a few of our own. But for now, you can see the actual module object for the REPL by typing module at the prompt. Notice that there is a prompt attribute buried a few levels deep in our module object? This controls the prompt that we see when in the REPL. Let’s just change that to something slightly different and see what happens. There now, we have a brand new prompt.
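A quick way to see the difference is to run a couple of lines as a file (as opposed to the REPL, where top-level variables behave a bit differently):

```javascript
// In the browser, a top-level `var` becomes a property of the
// global object. In a Node module it stays local to the file.
var answer = 42;

console.log(typeof answer);         // visible inside this module
console.log(typeof global.answer);  // but not attached to the global object
console.log(typeof module);         // every file gets its own module object
```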

Another difference between Node JavaScript and browser JavaScript is that in the browser, you have a global window object that essentially ties you to the browser environment.

In Node, there is no browser and, hence, no such thing as a window object. Node does, however, have a counterpart that hooks you into the operating environment: the process object, which we can see by simply typing process into the REPL. Here you'll find several useful functions and pieces of information, such as the list of environment variables.
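For example, a few of the things the process object exposes:

```javascript
// process is Node's rough counterpart to the browser's window
// object: it hooks you into the operating environment.
console.log(process.pid);       // this process's id
console.log(process.platform);  // e.g. 'linux' or 'darwin'
console.log(Object.keys(process.env).length + ' environment variables');
```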

One similarity that is important to mention here is the setTimeout function. If you're familiar with client-side JavaScript, you've no doubt used this function a time or two. It basically lets you set up a function to be called at a later time. Let's go ahead and try that out now.

> function sayHello(seconds) {
...   console.log('Hello ');
...   setTimeout(function() {
...     console.log('World');
...   }, seconds * 1000);
... }

This will create a function that, when called, prints out the string ‘Hello’ and then, a few seconds later, prints the string ‘World’. Let's execute the function now to see it in action.

> sayHello(2);

There are a couple of important ideas to take notice of here. First, Ryan Dahl, the creator of Node, has done his best to make the environment as familiar as possible to anyone with client-side JavaScript experience. So the use of names such as setTimeout and setInterval rather than sleep and repeat, for example, was a conscious decision to make the server-side environment match, wherever it makes sense, the browser environment.

The second concept that I want you to be aware of is the really important one. Notice that, when we call sayHello, right after printing the first string, control is immediately given back to the REPL. In the time between when the first string is printed and the callback function executed, you can continue to do anything you want at the REPL's prompt. This is due to the event-based nature of Node. In Node, it's near impossible to call any function that blocks for any reason, and this holds true for the setTimeout function. Let's call our sayHello function again; this time, however, let's pass in a slightly longer timeout interval to give us enough time to play around a bit and prove our point. I believe 10 seconds should do the trick.

There we see the first string. Let’s go ahead and run some code of our own, how about 2 + 2. Great, we see that the answer is 4 and… there’s our second string being printed out now.


Conclusion

So that brings us to the close of the first episode in this series. I hope this has been a fairly informative introduction to Node for you, and I hope I’ve done a decent enough job of explaining why it’s so exciting, what it has to offer, and just how fun and simple it is to use. In the next episode, we’ll actually start writing some of the code for our blog engine; so I hope you’ll all join me again when things get a bit more hands on. See you then!

 
 

Majestic SEO Fresh Index

08 Apr

Majestic SEO has long had great link data, but their biggest issue has been usability. They sorta built with the approach of "let's give them everything" as a default, and then allowed advanced filtering to be done over the top to generate custom reports.

For advanced users this type of set up is ideal, because you are able to slice and dice it in many ways on your own terms. It allows you to spot nepotistic networks, pinpoint strategies quickly, and generally just get a good look at what is going on in ways that you wouldn't be able to if you couldn't get all the data in a table. There are so many valuable edge-case uses that can't practically be put in a single interface while keeping usability high for the average user.

But for people newer to the SEO game & those looking for a quick source of data, the level of options can be a bit overwhelming when compared against something like Open Site Explorer. A parallel analogy: when I want to spot-check rankings real quick, I rely on our rank checker, but if you want a variety of in-depth historical views, then something like Advanced Web Ranking can be quite a helpful tool.

In an attempt to improve the "at a glance" style functionality Majestic SEO announced their new site explorer, which puts more data at your fingertips without requiring you to open up an Excel spreadsheet:

How much can you use the Majestic Site Explorer?
The system is designed for silver users and above. Silver subscribers can query up to 10 different domains an HOUR. Gold subscribers can query up to 30 different domains an hour and Platinum subscribers can query up to 100 different domains an hour. All levels are subject to fair use terms.

These allow you to view data on a sitewide basis, at the subdomain level, or drill down to individual pages.

Here is an example of a site level report

and if you wanted data down to the URL level, here is an overview of the top few links (note that the report goes on for numerous pages of data)

This update helped Majestic SEO close the gap a bit with Open Site Explorer, but a couple more things they may want to consider doing are

  • adding result crowding / limit results to x per domain
  • allowing you to filter out internal link data

Those features are available via their advanced reports, but making it easier to do some of that stuff in the "at a glance" interface would allow Majestic SEO to serve as a best-in-breed solution for both the "at a glance" function and the "in-depth deep research" options.

Majestic SEO also announced their new fresh index, which lets you view link data as fresh as the past day. It doesn't require waiting for a monthly update; the link data is available right away. To help spread the word & give everyone a chance to see some of the new features, they gave us free discount voucher codes to give out for a 20% discount on your first month at any level.

If you have any questions about how Majestic SEO works, you can sign up & register your own site, which allows you to access many of their features for free. As a comparison, SEOmoz (which offers Open Site Explorer) is also running a free 1-month trial right now.

 
 

Google begins tablet version of Chrome OS

07 Apr
The browser-based operating system is headed for touch-screen tablets. But Chrome OS competes not just with Apple's iPad, but also Google's own Android OS for tablets.

Originally posted at Deep Tech

 
 

26 Beautiful Web Design Agency Portfolios

06 Apr
Web design is a growing profession popular amongst new-age businesses. This places powerful marketing power into the hands of digital artists and graphic designers for the web. Thus we have seen the development of dozens of design agencies and branding teams. The collection of agency designs below illustrates a bright picture of the current scape [...]
 
 

6 questions to prepare you for a social media crisis

04 Apr

On October 27, 1980, the ARPANET — the Internet’s earliest incarnation — had its first epic fail. I’m not talking about your garden-variety system glitch: I’m talking about a spectacular, network-wide outage. The entire network was offline for hours.

Today it’s hard to even comprehend the idea of the entire Internet crashing (and when I try, it makes me feel slightly nauseated). But we face other kinds of online disasters, and when they happen, we need our own strategies for rebooting.

In social media, the disasters people talk about most are fundamentally crises of public relations. These fall into two types: crises that originate in social media, and crises that originate offline. In the era of Twitter, YouTube and Facebook, both types of crisis require a rapid, social media response.

Looking at the most recent social media crises is one way to think about the kinds of challenges for which you need to prepare. But social media has a way of ensuring that each crisis is different from the last, so if you’re prepared to handle a YouTube meltdown, you’ll probably get served with a FourSquare nightmare.

That’s why it pays to look for principles of online crisis management that will be relevant in the long run. And by examining the 1980 ARPANET crash, we can do just that: identify the questions the ARPANET team might have asked 31 years ago, and which your team could answer today.

  1. In conventional histories, it’s a well-worn trope to talk about how the Internet was designed to withstand nuclear attack; how its entire design was based on ensuring that even if one part of the network went down, the others would survive. From this flow all sorts of near-religious beliefs about the Internet’s propensity for authentic, peer-to-peer communications and its resistance to central authority. But the ARPANET crash points us to a moment in living memory when the Internet was far from unstoppable. What beliefs about the Internet is your social media strategy based on? How do you know whether those beliefs are well-founded?
  2. It’s striking that 31 years after the ARPANET crash, Google Scholar doesn’t contain a single in-depth academic study focusing specifically on this historic crash (perhaps because by the time journal articles became digital, it had ceased to be a technically relevant case). I obviously can’t speak to the technical interest that the crash might or might not hold for today’s computer scientists, security experts and network administrators, but it’s hard to believe that this incident doesn’t hold social or historical significance. Even if the only thing we can learn from the 1980 crash is the thinking process that led early network administrators to overlook this potential vulnerability, it would seem well worthwhile to examine the social, organizational and cognitive context in which the ARPANET was able to fail. What crucial online mistakes have you left un- or underexamined, and what could you learn from them?
  3. Today, the crash of your individual computer (typically on the 11th page of a 12-page, unsaved document) falls somewhere between annoyance and bummer on the scale of human misery. The short-term crash of your company’s site or internal server usually falls somewhere between inconvenience and embarrassment. The crash or overload of a significant portion of the global internets is somewhere between distracting and worrying (depending on what it portends for network security). But the prospect of a global, system-wide network crash is only at one extreme or the other: laughable (because what could possibly crash the whole Internet?) or heart-stopping (because imagine what could possibly crash the whole Internet). What scope of failure can you tolerate in your social media presence? What level of misery would a failure induce?
  4. A 1981 analysis of the crash noted that the problem might have been prevented, but a prevention system would have required lots of processing power and memory and “[s]ince CPU cycles and memory are both potentially scarce resources, this did not seem to us to be a cost-effective way to deal with problems that arise, say, once per year.” This feels like an amusing explanation today, when processing power and memory are dirt cheap.  What a great reminder that every crisis prevention or problem-solving strategy is based on a set of resource constraints and assumptions. But our strategies often fail to evolve as quickly as the underlying assumptions may change. If you have a strategy for preventing or managing potential online problems — for example, handling critical tweets — what assumptions does your strategy rest on? And how often do you stop to assess whether those assumptions still hold — and if not, to update your strategy?
  5. When the network went down, administrators realized they had a system-wide problem when they got phone calls from ARPANET sites all over the country. In the absence of the network itself, phone was the alternative channel of first resort, and in 1980, the network was small enough that phone-based communication was a viable option for getting an overall picture of the network. In today's environment, you may have to cope with losing access to key tools for your online response. What is your alternative channel of first resort? How would you communicate during a social media crisis if you couldn't use social media tools to help?
  6. An error in a single bit brought the ARPANET to a halt. Call this the Death Star principle: if you focus only on preparing for the big problems, a tiny X-wing fighter can sneak in and blow up your entire space station. What tiny problems could occur for your social media activities? Which tiny problems could potentially blow up your whole strategy?

If you can answer these questions, you’ll have established the basic principles for your social media crisis management strategy. What questions would you add to the list?

 
 

On the Argument That Android Is Taking Over

04 Apr

Nice piece by Jon-Erik Storm on Henry Blodget’s and Fred Wilson’s arguments that Android is the new Windows:

Really? I can come up with three counterexamples. One, gaming consoles. There are three: XBox, Playstation, and Wii. There has almost always been more than one important gaming console. Two, there are several web browsers that people use. If IE were still the only one, standards like HTML5 and CSS wouldn’t matter. Three, is Facebook really the only social platform? What is Twitter then? Maybe iTunes would have been a better example, eh? And as for PCs, Apple seems content with it being the #1 laptop and #2 PC maker with its approximately 8% marketshare, but yet reaping more profits. But the point is these examples are unscientific and don’t explain why technology platforms stabilize that way (if they do) and why that will apply to smartphones.

That’s the question of the decade. Is mobile going to work out like the console market, with a handful of competing and roughly equal major platforms? Or is it going to work out like the PC, where a lower-cost inferior licensed OS grows to an overwhelmingly dominant monopoly position? (And, as Storm points out, Apple’s penalty for “losing” the PC war is that it is now the world’s most profitable PC maker.)

(Also worth noting about the console market: the lead has changed hands several times: Atari, Nintendo, Sony, Nintendo. And second-place has changed numerous times as well. It’s long been a healthy competitive market.)

Update: Another WordPress blog fireballed. Google has it cached.

 
 

“Anonymous” attacks Sony to protest PS3 hacker lawsuit

04 Apr

The hacker hordes of Anonymous have transferred their fickle attention to Sony. They are currently attacking the company's online Playstation store in retribution for Sony's lawsuit against PS3 hacker George Hotz (aka "GeoHot"). A denial of service attack has temporarily taken down playstation.com.

In a manifesto announcing the new operation, Anonymous railed against Sony for going after coders who seek to modify hardware that they own. The lawsuits are an "unforgivable offense against free speech and internet freedom, primary sources of free lulz (and you know how we feel about lulz)."

"Your corrupt business practices are indicative of a corporate philosophy that would deny consumers the right to use products they have paid for and rightfully own, in the manner of their choosing," continues the pronouncement. "Perhaps you should alert your customers to the fact that they are apparently only renting your products? In light of this assault on both rights and free expression, Anonymous, the notoriously handsome rulers of the internet, would like to inform you that you have only been 'renting' your web domains. Having trodden upon Anonymous' rights, you must now be trodden on."

Anonymous is rallying participants to voluntarily contribute to the denial of service attack on Sony. That attack is continuing, and it appears to be far more successful than recent hits on Angel Soft toilet paper. In Anonymous chat rooms, participants bash Sony but worry about how their actions will be perceived. "Guys, you need to talk to the gamers and explain to them that this does not affect their gameplay," wrote one.

Some even hope to take credit for a small drop in Sony's stock price: "We're already causing sony stock to drop!!!"

While most Anonymous attacks remain online-only hacks or protests, Operation Sony will feature a real world component. On April 16, Anonymous wants people to gather at their local Sony stores to complain in person—no doubt leading participants to rummage through their closets in order to dig out the old Guy Fawkes mask.

Read the comments on this post

 
 

10 ways spam taught us to focus our attention

02 Apr

DIGITAL WILL BE GIVING A PRODUCT PRESENTATION OF THE NEWEST MEMBERS OF THE DECSYSTEM-20 FAMILY; THE DECSYSTEM-2020, 2020T, 2060, AND 2060T. THE DECSYSTEM-20 FAMILY OF COMPUTERS HAS EVOLVED FROM THE TENEX OPERATING SYSTEM AND THE DECSYSTEM-10 COMPUTER ARCHITECTURE. BOTH THE DECSYSTEM-2060T AND 2020T OFFER FULL ARPANET SUPPORT UNDER THE TOPS-20 OPERATING SYSTEM. THE DECSYSTEM-2060 IS AN UPWARD EXTENSION OF THE CURRENT DECSYSTEM 2040 AND 2050 FAMILY. THE DECSYSTEM-2020 IS A NEW LOW END MEMBER OF THE DECSYSTEM-20 FAMILY AND FULLY SOFTWARE COMPATIBLE WITH ALL OF THE OTHER DECSYSTEM-20 MODELS.

You’ve just read the very first spam message. Sent by Carl Gartley on behalf of Gary Thuerk, this message went to several hundred ARPANET members on May 3, 1978. The message violated the until-then standard practice of e-mailing people individually (ah, those were the days!) and annoyed a whole lot of ARPANET users. It also sold some computers. And thus, the era of spam marketing was born.

It’s customary to curse the name of Thuerk, though Thuerk himself uses fatherespam as his LinkedIn profile URL, and prominently cites his role in creating spam as a professional credential. (Guess he decided to embrace it sometime after this interview.) But I think that Gary Thuerk is owed more than a sarcastic thank you.

After all, spam — now estimated at more than 75% of e-mail traffic — has been one of the major drivers of online innovation. To cope with “Pandora’s Inbox”, we’ve had to develop attention and information-management systems that prove crucial for surviving today’s communications-rich environment.

Spam is the vaccine for your attention span. It’s the toxin that has stimulated our immune system’s defenses. Thanks to spam, we’ve had to find technical, social and personal ways of keeping our eyes on the 22% of e-mail that isn’t pure junk, and to avoid the 78% that is.

Those tools and tactics turn out to serve us very well in the era of social media. Now that people generate content and communications in ways that go well beyond e-mail, we need to focus in ways that go far beyond a spam filter. We can thank Gary Thuerk and the spammers of the universe for helping us develop the following ways to focus our attention:

  1. Email filtering: Email filters, which were first created to deal with spam, have since turned into powerful tools for managing and organizing incoming email. I’m utterly dependent on Gmail filters in ways that go way beyond spam elimination. Without spam I might have to read and file my e-mails by hand (shudder).
  2. Attention filtering: Email filters have inspired analogous tools on other platforms. Twitter lists, the Facebook “hide” option and the entire idea of Path are all about filtering out extraneous content so we can focus our attention on a more limited circle of relationships or a more limited sphere of information.
  3. Texting and messaging: Spam made us impatient about the process of plowing through our inboxes. Texting, chat and Twitter are all instant communications tools that sidestep the whole inbox nightmare by coming to us in real time. (And better yet, by being incredibly short.) Learning to communicate in very brief increments is one of the legacies of spam, and in a world that connects us to hundreds or thousands of people through a wide range of social networks, we can be grateful that some of those conversations happen briefly.
  4. Pull: Email did a fantastic job of teaching us about the limits of push: content that gets pushed to you. As a result many of us have shifted much of our attention onto pull: content that we pull to us by choosing what to visit or subscribe to. For instance, instead of subscribing to e-newsletters, we might subscribe to blog RSS feeds. While e-newsletters are still alive and well, the shift to pull is an essential tool for people trying to manage a very high volume of information.
  5. FOAF: The friend-of-a-friend principle has driven a wide range of social networks in which your interactions are structured around networks of trusted contacts. Relying on networks of trust is a way of getting past the spam problem, by opening communication channels only along lines that mirror pre-existing social relationships. Just think about LinkedIn, which explicitly limits your ability to contact people based on how closely you are connected. That whole model of using social networks to construct boundaries around who gets our attention is in some part thanks to the problem of ungated attention first demonstrated by spam.
  6. Marketing with value: Spam’s assault on e-mail delivery and opening rates first forced marketers to think about what they could actually offer to make an e-mail worth reading. That consciousness and skill set has served marketers well in the social media era, where the competition for attention is even fiercer. If some online marketing now delivers real value to its targets — think the Dove Campaign for Real Beauty or Dell’s Ideastorm — that’s because marketers have learned that providing tangible value is one way to earn people’s attention.
  7. Opt in, opt out: To address the spam problem, many countries have laws that require all bulk e-mails to include an opt-out link, and/or to be sent only by people who have explicitly opted into the mailing list. (Of course, these laws are ignored by all kinds of illegitimate operations, which is why spam volumes remain so high.) This has given us the idea that you don’t demand the attention of someone who hasn’t asked for your content, and that losing someone’s attention is a routine and acceptable part of our communications ecosystem. You can see that principle extended into technologies and practices like the ever-evolving policies on what appears in your Facebook news feed, and the ease of unfollowing people on Twitter.
  8. Ignoring communications: Spam taught us that it was OK to ignore a lot of e-mail. We still have a ways to go in overcoming our notion that all e-mail deserves a reply, but to the extent that we’re asserting some sense of agency over how we allocate our attention, it builds on the foundations established by spam. Once you learn how to ignore offers from Nigerian princes, it gets a lot easier to ignore irrelevant office-wide memos.
  9. Getting rich quick: In a world that delivers daily messages about how you can get rich quick, it’s understandable that we’d lose our patience for long, slow empire-building. Maybe it’s overreaching to blame (or credit) spam for a generation of social media sites built on the business model of, “let’s build something that we can get Yahoo! or Google to buy.” But some of the startups that found their quick return through early acquisition have included some great tools for managing our information and communications (hello, delicious and Radian6).
  10. Penis talk: If we weren’t so constantly deluged by spam ads promoting Viagra, Cialis and penis enlargement, we might think that the size and engorgement of one’s genitalia were strictly personal matters. Thanks to spam, we now know how much people like to think and talk about penises, information that has helped to drive some of the Internet’s most successful entertainment sites. Imagine if we’d wasted all that attention on lady parts instead!
 
 

Top 5 Facebook Marketing Mistakes Small Businesses Make

02 Apr


This post originally appeared on the American Express OPEN Forum, where Mashable regularly contributes articles about leveraging social media and technology in small business.

While Facebook marketing is on the rise among small businesses, many are still struggling to master the basics.

“Many people have difficulty with just the basic Page set up,” says social media marketing consultant Nicole Krug. “For example, I still see people setting up their business as a profile page instead of a business Page. I have other clients who jumped into Groups when they came out and have divided their fan base.”

Here are five more common Facebook marketing mistakes to avoid:


1. Broadcasting


Ask any social marketing consultant what the number-one no-no is on Facebook, and he’ll likely tell you it’s “broadcasting” your messages instead of providing fans with relevant content and engaging on a continual basis.

“With Facebook, marketers of any size can do effective, word-of-mouth marketing at scale for the very first time. But Facebook is all about authenticity, so if your company is not being authentic or engaging with customers in a way that feels genuine, the community will see right through it,” says Facebook spokeswoman Annie Ta.

Peter Shankman, social media consultant, entrepreneur and author of “Customer Service: New Rules for a Social Media World,” agrees.

“Your job is to interact, not just to broadcast,” says Shankman. “Fans are looking for a reason to connect with you, and they’re showing you that by clicking ‘Like.’ Your job is to give them a reason to stay.”

According to Andy Smith, co-author of “The Dragonfly Effect: Quick, Effective and Powerful Ways to Use Social Media to Drive Social Change,” many businesses immediately ask how Facebook is going to make them money and have that be the focus, as opposed to trying to engage customers and provide a meaningful, authentic online experience. “Marketers need to recognize that people go to Facebook to make a connection or feel like part of a community,” says Smith.


2. Not Investing Adequate Time


Another common mistake is underestimating the amount of time a successful Facebook strategy entails. Many social media consultants report seeing a pervasive “set it and forget it” mentality among small businesses.

“Some small business owners are under the impression that if they set up a Page on Facebook, that’s all they have to do. They think people will just naturally come and want to be a fan of their product or service,” says Taylor Pratt of Raven Internet Marketing Tools. “But it takes much more of a commitment than that.”

It’s not just fan growth that will suffer from this approach — it may also hurt your relationships with existing fans, particularly customers who have come to expect timely responses to their posts and queries.

“Unlike traditional advertising methods such as a radio spot or a Yellow Pages listing, you can’t just create a Facebook Page and just let it run its course,” says Alex Levine, a social media strategist at Paco Communications. “Creating a Facebook Page is the first of many steps, but the Page needs to be updated and monitored constantly.”


3. Being Boring or Predictable


When they’re thinking about marketing, some business owners forget that Facebook is a social place where people share things they find funny, interesting or useful with their friends. Think about what kind of content your fans would actually want to share when planning your posts.

Shankman also cautions against becoming too predictable. “Status updates by themselves get boring. But then again, so do photos, videos and multimedia as a whole. Your job is to mix it up. The moment you become predictable, boring or annoying, they’ll hide you from their feed. So keep it varied and personal — a video here, a photo here, a tag of one of your fans here.”

Creating too much “filler” content by auto-publishing content from your blog or Twitter feed can also derail your efforts. Joseph Manna, community manager at Infusionsoft, recommends using Facebook’s native publishing tools to gain the most benefit from Facebook.

“Whatever you do, DON’T automate everything,” says Manna. “It’s nice to ‘set and forget,’ but the risk is two-fold: publishing systems sometimes have issues, and Facebook places low-priority on auto-published content.”


4. Failing to Learn About Facebook Mechanics and Tools


Since Facebook is a relatively new medium, some businesses have yet to explore all its functionality, and they’re missing out on creating an optimal brand experience.

“Many small businesses do not take advantage of the tools to introduce themselves to the Facebook audience,” says Krug. “For example, the ‘Info’ tab is rarely utilized well, and very few small businesses [create] a custom welcome page.”

Krug also sees frequent mistakes around one of the most basic elements of Facebook presence: the profile image. “Most companies upload a version of their logo, but the resulting thumbnail image that shows up in news feeds often only captures a few letters in the middle of their logo — this partial, meaningless image is then how they’re branded throughout Facebook,” says Krug.

Facebook Insights, Facebook’s built-in analytics system, is also often overlooked, and with it the opportunity to analyze post performance to see what types of content get the most engagement.


5. Violating Facebook’s Terms


Not only is it critical to know how Facebook works and what tools are available, it’s also important to know the rules of the road — something that many businesses miss.

“Every day I see organizations endangering the communities they are growing by violating the terms they agreed to when their Facebook presence was created,” says small business marketing consultant Lisa Jenkins.

What are the most common violations? Some build a community on a personal page instead of a proper Facebook Page. Others fail to abide by Facebook’s rules around running contests. And don’t even think about “tagging” people who are in an image without their permission.

“Tagging people to get their attention is not only a violation of Terms but can be reported by those you are tagging as abusive behavior on your part — which brings your violation to Facebook’s attention and opens your Page’s content to review,” warns Jenkins.

To avoid these common mistakes, invest time in learning about the Facebook platform, educate yourself on how to build and sustain an audience, and don’t forget to engage with people like you do in real life.

“What sets small businesses apart from large companies is their ability to make personal connections with customers,” says Ben Nesvig of FuzedMarketing. “They tend to forget this when they join Facebook, yet it’s their biggest strength and asset.”


More Facebook Resources from Mashable:


- 4 Ways to Set Up a Storefront on Facebook
- HOW TO: Add Social Sharing Buttons to Your Website
- The Future of Social Search
- 5 Creative Facebook Places Marketing Campaigns
- Dog: Man’s Best Facebook Friend, Too? [INFOGRAPHIC]
