Posts

 

Ajax Link Tracker 2.2 - The download

I have rewritten the Ajax Link Tracker to provide a more robust downloadable version.

One of the more interesting aspects of Ajax is the ability to track a user's interaction within the browser. I wanted to investigate navigation patterns, so I built the Ajax based link tracker. If you press the "Ctrl" and "X" keys you will be presented with an overlay which displays link usage by percentage. This functionality was created with JavaScript and a very simple API. The first version was an experiment; it was never my intention to create a finished tool, just to explore the idea and share the results.

Subsequently I have moved on to produce a hosted web application called MapSurface that includes link tracking. Although MapSurface has many advantages over the link tracker, I thought it would still be useful to produce a downloadable version. I have completely rebuilt the link tracker, adding a few new features, and it contains many of the lessons learnt from the construction and beta testing of MapSurface.

  • Ajax Link Tracker 2.2 + .Net/SQL Server Backend
  • Ajax Link Tracker 2.2 + PHP/MySQL Backend (Pierre Far)
  • Ajax Link Tracker 1.0 + ASP/MDB Backend (Dean Liou)

Pierre Far and Dean Liou have provided alternative backend systems. I would like to thank them for their efforts and everyone else who has provided feedback.

New features and updates

  • Records clicks on buttons. Tracks clicks on form buttons, including both submit and button input types.
  • Overlay is draggable. The overlay of click information is now draggable, which helps you to view data that may be overlapping. Thanks to the dom-drag script from Aaron Boodman.
  • Allows you to change what information is displayed in the overlay. The JavaScript can be configured to display the click count, the percentage of clicks, the label, or any combination of the three.
  • Allows you to change how many days of data are displayed in the overlay. The JavaScript can be configured to change the date range of data used by the overlay. This property is useful for sites with constantly changing layouts.
  • Records keyboard hyperlink interactions. Not everyone uses a mouse to click hyperlinks and buttons.
  • Can find hyperlink label text even in complex HTML. The label text is found even if the hyperlink contains HTML mark-up.
  • Object encapsulation. All the JavaScript is wrapped into two objects, which should stop any conflicts between the Ajax Link Tracker and other JavaScript code.
  • Does not override existing JavaScript event handlers. The JavaScript will not override any events already added to the page (see the sketch after this list).
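
One common way to illustrate that last point is to chain onto any handler the page has already set instead of overwriting it. This is only a minimal sketch of the idea, not the downloadable code itself.

function addLoadHandler(fn)
{
	// Keep any onload handler the page has already set and run it first,
	// rather than overwriting it.
	var existing = window.onload;
	if (typeof existing != 'function') {
		window.onload = fn;
	} else {
		window.onload = function() {
			existing();
			fn();
		};
	}
}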

Please remember that Ajax cannot work across domains. If you wish to make this code work within a website, you must include the two aspx pages and .Net classes within that project. You will also have to copy the appSettings XML block into the "Web.config" file of your application. Ajax Link Tracker 2 was built with Visual Studio 2003 and SQL Server 2005.

  • JavaScript
  • Projects

SXSWi 2006

Three weeks after the event, I must be the last person to write about SXSWi 2006. As I gained so much from the experience, I thought I should share a few words, however late.

Early this year I was talking to a director of a large UK publishing group and he passed the comment that it must be nice to work in an industry where people share knowledge. Events like SXSWi are why I answered simply: yes, it is. The conference had a real sense of community where people shared beers and ideas. Over the last few years I have fostered a growing scepticism about large industry conferences, but SXSWi provides a unique environment in which to learn and meet people.

Out of the vast number of talks and panel discussions, Rashmi Sinha's contribution to the "Tagging 2.0" panel (good notes) has stuck in my mind. Her concise deconstruction of the differences between tagging and categorization has really added to my understanding. Rashmi went on to talk about why tagging is so much easier than categorization for users. It was a shame she only had 10 minutes.

To me, SXSWi 2006 seemed to have a number of unscripted themes. The tagging discussion became part of a much wider theme centred on social network architectures and collective intelligence. The centrepiece was James Surowiecki's talk on "The Wisdom of Crowds". Surowiecki has put together a fascinating argument about the power of collective intelligence which goes to the heart of why so many social networks on the internet work well. If you have not read the book, I would recommend the podcast of his SXSWi talk.

One of the other strands of discussion was bootstrapping and building web applications. Jason Fried (37signals) talked about using the skills, passion and creativity of our industry to build new types of businesses. He used the phrase "the how of entrepreneurship" as a description of his talk. It's fascinating to see how entrepreneurship is being embraced by designers and developers. There were many evangelising the "bootstrapping business model": start small, keep it simple and don't take venture capital. Although, listening to the subtext of a few speakers, you should add the last step: "sell everything to Yahoo or Google and retire"!

Tantek Celik talked about BarCamps, which was a nice tonic to the entrepreneurship of Jason Fried. For those who do not know what a BarCamp is, I love the Wikipedia definition: "BarCamp was created as an open, welcoming, once-a-year event for geeks to camp out for a couple days with wifi and smash their brains together." The summer of love all over again, but with keyboards!

I hung out with the infamous Britpack. Travelling to Texas to meet people who live only a hundred miles away is a bit strange, but fun. There was also a sizeable contingent from Brighton, with both Andy and Jeremy giving talks. Unfortunately I missed Andy's superheroes talk, but I did see Jeremy's "How to bluff your way in DOM Scripting". Jeremy's talk had an air of an amateur dramatics production, not because of anything Jeremy said, but because of the old armchairs and props that had been placed on the stage.

I am already saving for next year.


  • Events

getAttribute href bug

Whilst working on the Ajax Link Tracker and MapSurface I have come across an inconsistency in how the href attribute is retrieved using DOM Scripting.

The href attribute is different to other element attributes in that the value set can be relative to the context of the page URL. If you set a link with a relative href attribute, for example:

    <a href="test1.html">test page</a>

the browser will look at the page's current URL and derive an absolute URL for the link:

http://www.glenn.jones.net/development/test1.html

This is the root of the problem: some browsers return the text of the attribute and others return the derived absolute URL. The results also differ according to the method you use to retrieve the href attribute. There are three common ways to access an attribute:

    linkobj.href;
    linkobj['href'];
    linkobj.getAttribute('href');

The linkobj.href and linkobj['href'] methods of accessing the attribute consistently return the derived absolute URL.

Microsoft has tried to address this problem by adding a second parameter to the getAttribute method. The second parameter can be set to 0, 1 or 2. If the parameter is set to 2, the method returns the attribute text. Any other setting will return the derived absolute URL.

    linkobj.getAttribute('href');
    linkobj.getAttribute('href',2);
Browser  Method                           Derived Absolute URL   Attribute Text
IE       linkobj.href;                    x
IE       linkobj.getAttribute('href');    x
IE       linkobj.getAttribute('href',2);                         x
Gecko    linkobj.href;                    x
Gecko    linkobj.getAttribute('href');                           x
Gecko    linkobj.getAttribute('href',2);                         x
Opera    linkobj.href;                    x
Opera    linkobj.getAttribute('href');                           x
Opera    linkobj.getAttribute('href',2);                         x

Get attribute test page. Tested on IE6, Firefox 1.5 and Opera 8.51.

So what should be returned by the getAttribute method? The W3C DOM Level 2 Core specification, which sets out the structure of the getAttribute method, does not cover this issue. It is not that either approach is wrong or right; on this point the specification is open to interpretation.

As a coder I would like to be able to access both values. The DOM Core specification should be updated to address the problem.

After a really good exchange with Jim in the comments below, I stand corrected. The specification does say that getAttribute should return the attribute value, not the absolute URL. The Microsoft approach is wrong.

For the time being I am using the old school object property method linkobj.href to return derived absolute URLs. It provides the most consistent results across all browsers.
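
If you also need the literal attribute text, a rough cross-browser helper along the following lines should work, assuming the behaviour shown in the table above. This is a sketch for illustration, not code taken from the link tracker.

function getHrefAbsolute(linkobj)
{
	// The object property is consistently the derived absolute URL.
	return linkobj.href;
}

function getHrefLiteral(linkobj)
{
	// IE returns the attribute text when the second parameter is 2;
	// Gecko and Opera ignore the extra parameter and already return
	// the literal attribute value.
	return linkobj.getAttribute('href', 2);
}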

URLs of interest

  • W3C REC DOM Level 2 Core specification for getAttribute
  • Gecko documentation for getAttribute
  • Microsoft documentation for getAttribute

As usual, just as I was finishing this post I found a bug report on the QuirksMode site which discusses the same subject: "getAttribute HREF is always absolute".

  • JavaScript

MapSurface - web page activity widget

MapSurface is a tool that tracks user activity within a web page. Its purpose is to provide an understanding of how users find, navigate and value web pages. It displays this information in a compact widget which sits above the web page.

If you press the “Alt” and “X” keys (try this on the Ajax Link Tracker page) the MapSurface Dashboard appears. The dashboard contains summary information such as the number of views and top referrers. There are two hyperlinks above the summary tables. Clicking on the “map” link will display an overlay of link usage by percentage. The “more >” link opens a second floating window called the ViewPlane which contains more detailed information. Example sites that are already configured with MapSurface include: www.glennjones.net, www.madgex.com and www.mapsurface.com

MapSurface is both an interface and web application design prototype. It is not yet a commercial or beta (Web 2.0) application. I was encouraged to build it by the positive response to the Ajax Link Tracker. I have posted this work to the web to get people's views on its worth and also because I wanted to share the technical journey of building an application like this. I have used a number of interesting techniques such as JSON data transfers, dynamic script loading and the module interface design pattern. Time permitting, I am hoping to write tutorials about each of these subjects.
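
As a small illustration of the module interface pattern mentioned above, here is a sketch; the object and method names are made up for the example and are not MapSurface internals.

var Tracker = (function()
{
	// Private state, hidden inside the closure.
	var clicks = [];

	function send(data)
	{
		// A real implementation would make its Ajax call here.
	}

	// Only the returned object is visible to the rest of the page.
	return {
		record: function(id, target) {
			clicks.push({ id: id, target: target });
			send(clicks);
		},
		count: function() {
			return clicks.length;
		}
	};
})();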

I have numerous ideas to take this concept forward. Although the current version only displays a small amount of information, it is gathering all it needs to match most current web statistics packages. The ViewPlane could be extended to include which browsers people are using and so on.

But I believe the most interesting direction in which to extend MapSurface would be to use the social bookmarking and tagging site APIs. With some thought you could provide interesting information about the context of a page within the whole web, measuring the popularity of a page through how many people have bookmarked or linked to it.

To help generate some feedback I have a limited number of test accounts for anyone who is interested in trying out MapSurface. If you would like to be considered for an account, please fill in the form on www.mapsurface.com. To use MapSurface you simply need an account at www.mapsurface.com and to add a JavaScript file link to any page you wish to track. The tracking file is currently 8K. The widget interface is loaded only when you press the key combination.
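
To give a flavour of the on-demand loading, here is an illustrative sketch. The script URL and the key handling are assumptions for the example, not the actual MapSurface code.

function loadWidget()
{
	// Append a script element so the widget code is only downloaded
	// when it is actually needed.
	var script = document.createElement('script');
	script.type = 'text/javascript';
	script.src = 'http://www.example.com/widget.js';  // hypothetical URL
	document.getElementsByTagName('head')[0].appendChild(script);
}

document.onkeydown = function(e)
{
	e = e || window.event;
	if (e.altKey && e.keyCode == 88) {  // Alt + X
		loadWidget();
	}
};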

Note: I will be developing different commercial applications from these concepts through my company Madgex. I am unsure about the path of the MapSurface prototype itself; if there is enough interest it could be developed into a commercial service. So for now I am not making the code open source.

  • JavaScript
  • Projects

What information do you really want to know about how people use your web site?

I am building a new and hopefully interesting way of displaying information about how people use web sites. I have long thought that most web statistics software is bloated with lots of information that is of no interest. The larger web statistics packages seem to be concerned only with advertising and e-commerce conversion rates. The rest are often very dry and technical. My friend Andy Budd just wrote a post, "Lies, Damn Lies and More Server Statistics", where I think his opening comments represent a lot of people's frustration with current offerings.

What I am more interested in is how people find my site, what they like about it and what they don’t. I would like information that is focused on helping me write and design better sites. So for me the following short list is of most interest:

  • How popular is a page
  • Did they think the content had value
  • How did people find the page
  • Did they move on to look at other pages on the site
  • How did they use the navigation
  • What did they use to view the page

My question is: what would you really like to know about how people use your site? If you had the choice of only a few measures, which are the most important to you?

14 January 2006 14:11

  • Projects

Ajax Link Tracker

There is now a new downloadable version: Ajax Link Tracker 2.2.

Take a look at the next generation, MapSurface, a modular JSON/On-Demand JavaScript interface that includes link tracking.

One of the more interesting aspects of Ajax is the ability to track a user's interaction within the browser. I wanted to investigate navigation patterns, so I have written an Ajax based link tracker. If you press the "Ctrl" and "X" keys you will be presented with an overlay which displays link usage by percentage. This functionality was created with JavaScript and a very simple API.

I used the JavaScript page-hijacking technique. On loading, the JavaScript finds all the links and attaches a mousedown event to each one. When a link is clicked the information is stored in a database using Ajax. The link usage overlay is produced dynamically from an Ajax call. I have used a cut-down version of the Prototype JavaScript Framework for the Ajax calls. The Prototype library is an excellent toolset, but the full version is a little too heavyweight for this site.

Links to the files:

  • prototype_ajax.js
  • linktracker.js

Attaching the Events

I have used Scott Andrew's cross-browser addEvent function to attach all events. A window onload event calls the addLinkTracker function when the page loads. This function adds the mousedown events to all the links on the page. If a link does not already have an id it is given one.


function addEvent(elm, evType, fn, useCapture)
{
	if (elm.addEventListener) {
		elm.addEventListener(evType, fn, useCapture);
		return true;
	} else if (elm.attachEvent) {
		var r = elm.attachEvent('on' + evType, fn);
		return r;
	} else {
		elm['on' + evType] = fn;
	}
}

addEvent(window, 'load', addLinkTracker, false);

function addLinkTracker()
{
    if (!document.getElementsByTagName) return false;

    var linksElements = document.getElementsByTagName('a');
    for (var i = 0; i < linksElements.length; i++)
    {
        addEvent(linksElements[i], 'mousedown', recordClick, true);
        if (!linksElements[i].getAttribute('id'))
            linksElements[i].setAttribute('id', "link_" + i);
    }
}

Link tracking events

When a user clicks on a link the recordClick function fires. The first half of the function deals with differences in the event model and the DOM structure when trying to identify the source element. Once the link element has been identified, the code extracts all the information required to make the Ajax call. The Ajax.Request object calls the passThrough function on successful completion, but this is only used for debugging.


function recordClick(e)
{
	if (typeof e == 'undefined')
		e = window.event;
	var source;
	if (typeof e.target != 'undefined')
	{
		source = e.target;
	} else if (typeof e.srcElement != 'undefined') {
		source = e.srcElement;
	} else {
		return true;
	}
	if (source.nodeType == 3)
		source = source.parentNode;
	var id, target, url, label;
	if (source.tagName == "IMG")
	{
		if (source.parentNode.tagName == "A")
		{
			id = source.parentNode.getAttribute('id');
			target = source.parentNode.getAttribute('href');
		}
		label = source.getAttribute("alt");
	} else {
		id = source.getAttribute('id');
		target = source.getAttribute('href');
		label = source.childNodes[0].nodeValue;
	}
	url = document.location.href;
	var pars = '';
	var apiurl = "http://localhost/blog/api/addClick.aspx?id=" + id + "&label=" + label + "&target=" + target + "&url=" + url + "&rand=" + Math.random();
	var ajaxRequest = new Ajax.Request(apiurl, {method: 'get', parameters: pars, onComplete: passThrough});
}

function passThrough(originalRequest)
{
	//Helps debug api errors
	//alert( originalRequest.responseText );
}

Creating the link usage overlay

The link usage overlay is created dynamically. When the JavaScript first loads, a keydown event is attached to the document.


addEvent(document, 'keydown', keyCheck, false);

Whenever a key is pressed a check is made by the keyCheck function. If the "Ctrl" and "X" keys are pressed and the overlay has not yet been created, the getClickThroughInfo function is called to collect the data using Ajax. The Ajax.Request object then calls the displayClickThroughs function on successful completion, which loops through the XML and creates labels for each of the ids it can match.

A simple display and hide mechanism is built into the keyCheck function to toggle the link labels on and off once they have been created.
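
The keyCheck and getClickThroughInfo functions are described rather than listed in this post, so the following is only a rough sketch of that wiring; the key handling and API call details are assumptions, not the exact linktracker.js code.

var labelsCreated = false;
var labelsDisplayed = false;

function keyCheck(e)
{
	if (typeof e == 'undefined')
		e = window.event;
	// Ctrl + X toggles the overlay.
	if (e.ctrlKey && e.keyCode == 88)
	{
		if (!labelsCreated)
			getClickThroughInfo();
		else
			toggleLabels(!labelsDisplayed);
	}
}

function getClickThroughInfo()
{
	// Ask the API for the click data for this page; displayClickThroughs
	// builds the labels when the response arrives.
	var apiurl = "http://localhost/blog/api/getClicks.aspx?url=" + document.location.href + "&rand=" + Math.random();
	new Ajax.Request(apiurl, {method: 'get', parameters: '', onComplete: displayClickThroughs});
}

function toggleLabels(show)
{
	var divs = document.getElementsByTagName('div');
	for (var i = 0; i < divs.length; i++)
	{
		if (divs[i].className == 'linklabel')
			divs[i].style.display = show ? 'block' : 'none';
	}
	labelsDisplayed = show;
}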


function displayClickThroughs(originalRequest)
{
    if (!document.getElementsByTagName) return false;
    var node, rootNode;
    if (originalRequest.responseXml)
        node = originalRequest.responseXml;
    else
        node = originalRequest.responseXML;
    //Helps debug api errors
    //alert( originalRequest.responseText );
    if (node.childNodes[0].nodeType == 7)
        rootNode = node.childNodes[1];
    else
        rootNode = node.childNodes[0];
    for (var i = 0; i < rootNode.childNodes.length; i++)
    {
        var linknode = rootNode.childNodes[i];
        var count = linknode.getAttribute('count');
        var percent = linknode.getAttribute('percent');
        var label = linknode.getAttribute('label');
        var id = linknode.childNodes[0].nodeValue;
        if (document.getElementById(id))
        {
            var eltLink = document.getElementById(id);
            var eltDiv = document.createElement('div');
            eltDiv.className = "linklabel";
            var eltText = document.createTextNode(percent + "% - " + label);
            eltDiv.appendChild(eltText);
            document.body.appendChild(eltDiv);
            var ileft = parseInt(getPageOffsetLeft(eltLink)) + 10;
            var itop = parseInt(getPageOffsetTop(eltLink)) + 10;
            eltDiv.style.left = ileft + "px";
            eltDiv.style.top = itop + "px";
        }
    }
    labelsCreated = true;
    labelsDisplayed = true;
}

The API

The API is made up of two methods. The interface does not really follow the REST model; I need to find or build a good REST implementation for .Net. For the moment I have created two URLs against which you can make your method calls. The request should be made as an HTTP GET request with a querystring of parameters.

If anyone is really interested I could include the .Net code and SQL Server scripts to recreate the API functionality. You can of course use the following information to create your own API.

Recording a click through

The addClick method takes four parameters as a querystring: "url", "target", "id" and "label". The "url" is the location of the page containing the link. The "target" is where the link leads to and the "label" is the text displayed by the link. The parameters are returned for test purposes.

Successful addClick call returns:

    http://www.glennjones.net/Post/804/UnobtrusiveJavaScriptandAjax.htm
    link_100
    http://www.glennjones.net/

Unsuccessful addClick call returns

Getting click through data for a URL

The getClicks method takes one parameter, "url". It returns either an "ok" or a "fail" status.

Successful getClicks call returns:

    link_0
    link_18
    Menu1
    Menu33
    Menu34
    Menu37
    Menu40

Unsuccessful getClicks call returns

Example API links

http://www.glennjones.net/api/addClick.aspx?id=menu37&label=about&url=http://www.glennjones.net/home/&target=http://www.glennjones.net/about/

http://www.glennjones.net/api/getClicks.aspx?url=http://www.glennjones.net/home/

  • JavaScript
  • Projects

Unobtrusive JavaScript and Ajax

Over the last couple of weeks I have been experimenting with unobtrusive JavaScript: building standard HTML pages and then dynamically enhancing the interaction. I have built a simple collapsible list example, using the 'page-hijacking' technique, for the links section of this site.
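
The code below is only a minimal sketch of that idea, not the exact script used on this site; the heading and list structure it looks for are made up for the example. The important point is that the list still works as plain HTML if the script never runs.

function makeCollapsible()
{
	if (!document.getElementsByTagName) return;
	var headings = document.getElementsByTagName('h3');
	for (var i = 0; i < headings.length; i++)
	{
		// Find the list that follows each heading, skipping whitespace nodes.
		var list = headings[i].nextSibling;
		while (list && list.nodeType != 1)
			list = list.nextSibling;
		if (!list || list.tagName != 'UL') continue;
		list.style.display = 'none';
		headings[i].onclick = (function(ul) {
			return function() {
				ul.style.display = (ul.style.display == 'none') ? 'block' : 'none';
			};
		})(list);
	}
}

// Attached with the cross-browser addEvent function shown in the Ajax Link Tracker post.
addEvent(window, 'load', makeCollapsible, false);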

This work is not fuelled by a wish to sharpen my JavaScript skills, but by a desire to see how far I can push the conventions of web interfaces before they become too disconcerting to the user.

Rich JavaScript interfaces are not new, but the promise of speed and state management provided by Ajax has made this design option much more attractive. Like many others, though, I harbour a growing unease about web standards and usability being thrown out of the window for unconventional Ajax interfaces.

Even if you take a web standards approach there are still some usability issues surfacing. One of the hot topics around Ajax is "breaking the back button"; this small problem hides a much more fundamental issue.

Current web design patterns are based around a model of distinct pages. Within the mental map we all create as we traverse the web each page is a navigational unit. This conceptual model or metaphor of pages is one of the foundation blocks of the web.

Some new Ajax based interfaces that span what would usually be a multi-page application blur this convention and disrupt the user's navigation framework. JavaScript and Ajax can be used lightly to enhance small independent elements of a larger page design, e.g. Google Suggest. The question is: at what point do more ambitious enhancements start to hinder rather than help usability?

Although some usability experts would have you think otherwise, the conventions of any medium or language are always in a state of change. Design is all a matter of context and relativity. As my design lecturer used to say, "It's about knowing when to follow the rules and when to break them". I would add a caveat to those words of wisdom: "if you break a rule, you had better have a good reason".

The answer to the question is that if there is true worth in Ajax enhanced interfaces then today's conventions will slowly change. Experiment, test and break the rules where you think it adds real value. I have just started my experiments.

  • JavaScript

d.Construct 2005 - Thoughts

I really enjoyed d.Construct 2005; it was thought-provoking and interesting.

Andy Budd's opening speech was well balanced, asking us all to look through the hype and buzzwords to find the real value of Web 2.0. I liked his idea that what we are seeing now is not a new technology, but the coming of age of many ideas which have been around for a while. This was well illustrated with the analogy that "the steam engine was a 1st century Greek toy, but only started to be used in anger during the industrial revolution".

It was also interesting how much time he invested in warning against making the same old mistakes. There does seem to be a growing unease about standards and usability being thrown out for unconventional Ajax led UIs.

I got to spend some time talking with Stuart Langridge who did a slot on DOM Scripting & Ajax. Stuart demonstrated how these techniques could be used to unobtrusively enhance the functionality of a site. The main thing I took away from his talk was that these types of enhancements can be small independent elements of a larger page design. DOM Scripting & Ajax can be used lightly, but with great effect.

Cory Doctorow and Ben Metcalfe's talks have left me with a whole series of questions about copyright, IP and the commercial use of APIs and RSS. Many organizations and companies have started to evangelise non-commercial reuse of content and services through APIs. These services have unclear usage policies, provide no service level agreements and some even carry IP issues for external developers. At the moment these services are great for hacking my blog, but are of no use for my day job. As an industry we need a clear community-owned legal framework for the commercial use of syndicated content or services (CC for pay-as-you-go APIs).

Some good round ups from Tim Beadle

d.Construct feed aggregator

  • Events

Web Developer just got better

The Web Developer extension just got better. In fact, it got better without even telling me; I opened Firefox and it had all these new menus. For anyone who has not come across this great little utility, it is an extension for Firefox/Mozilla. Web Developer allows you to interactively interrogate and debug web pages. If you want to find out how a web page works, or why it does not work, this is the tool. It often helps me cut hours off debugging CSS.

Some of the new features, like switching off individual stylesheets and displaying CSS by media type, are going to become indispensable. I also love the view JavaScript option, which displays all the JavaScript loaded by the page. The developer of this extension, Chris Pederick, has done a fantastic job.

Tagging - individualism and simplicity

What I really like about tagging is its freeform individualism and simplicity. Most categorisation systems contain vast centralised taxonomies. These systems tend either to be overly complicated or not granular enough to be of practical use.

Allowing users to create their own taxonomies using single words and then merging them into one public space is a great idea. Tagging throws away the hierarchical structures which are fundamental to other categorisation systems like directories. It also ditches the centralised vocabularies which so often seem important in creating a precise language for searching and sorting.

This simplicity is not a weakness but a strength, as it allows users to easily generate content for trusted networks. Network sites like Wikipedia, Flickr and del.icio.us can then grow at amazing rates, creating real depth of meaning.

Using tagging, anyone can design a series of words to describe content. While individualism is catered for, most people collaboratively re-align their choice of words.

There are many who believe that this type of collaborative categorization (folksonomy) has its drawbacks. Alex Wright wrote a nice piece about the debate in which he describes both the good and bad of collaborative categorization. Although the information architecture debate about precision and the authority of sources is interesting, it does not address interoperability.

Interoperability could be one factor which makes tagging very successful. Joining together traditional hierarchical taxonomies is very difficult, but the loosely coupled nature of tagging allows you to connect disparate systems without weeks of analysis and mapping.

The speed at which different systems can be joined together may be the one thing which counters the whole precision and authority debate. The true power of tagging can only be released by the API architecture of the sites providing these services.

I have started to use APIs to pull different tagging based services into my blog. The first is del.icio.us, which now provides the links section of this blog. In the future I am also going to integrate the tag APIs from Flickr and Technorati.
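
As a rough illustration of pulling a tagging service into a page with on-demand script loading, here is a sketch. The feed URL, the callback parameter and the shape of the returned data are hypothetical placeholders for the example, not the actual del.icio.us API.

function loadBookmarks()
{
	// Append a script element pointing at a JSON feed; the feed is assumed
	// to wrap its data in a call to showBookmarks.
	var script = document.createElement('script');
	script.type = 'text/javascript';
	script.src = 'http://www.example.com/feeds/json/username?callback=showBookmarks';
	document.getElementsByTagName('head')[0].appendChild(script);
}

function showBookmarks(posts)
{
	// Build the links section from the returned bookmarks; assumes an
	// element with the id "links" exists in the page.
	var list = document.getElementById('links');
	for (var i = 0; i < posts.length; i++)
	{
		var item = document.createElement('li');
		var link = document.createElement('a');
		link.href = posts[i].url;
		link.appendChild(document.createTextNode(posts[i].description));
		item.appendChild(link);
		list.appendChild(item);
	}
}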

  • User Experience Design
