Posts

 

Faster version of microformat-node

I have just uploaded a new version of microformat-node. The parser now takes between 30 and 40 milliseconds to parse an average page, about 8 times the speed of the previous version.

Features of new version

  • About 8x faster at parsing
  • Will now load onto Windows based hosting solutions correctly
  • Inbuilt cache system, which can be customised
  • Upgraded logging and trace options
  • Upgraded unit test system
  • Added support for JavaScript promises

 

Example: http://microformat-node.jit.su/
Code: https://github.com/glennjones/microformat-node

 

I have changed how the method calls work, so if you are using the last version you may have to update your code. The parse methods parseHtml and parseUrl now follow the standard node.js callback pattern and return an error and a data object rather than just the data object.

with URL


var microformat = require("microformat-node");

microformat.parseUrl('http://glennjones.net/about', function(err, data){
    // do something with data
});

or with raw html


var microformat = require('microformat-node');

// example hCard markup
var html = '<p class="vcard"><span class="fn">Glenn Jones</span></p>';

microformat.parseHtml(html, function(err, data){
    // do something with data
});

using a promise


var microformat = require("microformat-node");

microformat.parseUrl('http://glennjones.net/about').then(function(data){
    // do something with data
});
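
If the fetch or parse fails, the promise-based call will reject rather than passing an err argument to the success callback. A minimal sketch of handling both outcomes, assuming a standard Promises/A-style then(onFulfilled, onRejected) signature:


var microformat = require("microformat-node");

microformat.parseUrl('http://glennjones.net/about').then(
    function(data){
        // do something with data
    },
    function(err){
        // handle the fetch or parse error
    }
);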
  • JavaScript
  • Microformats
  • node.js

New node.js microformats parser - microformat-node

I have built a node.js microformats parser based on my previous JavaScript parsing code. It has been packaged up so it can easily be added to your projects using npm.

Source code: https://github.com/glennjones/microformat-node
Test server: http://microformat-node.jit.su

Install


npm install microformat-node

or


git clone http://github.com/glennjones/microformat-node.git
cd microformat-node
npm link

Use

with URL


var shiv = require("microformat-node");

shiv.parseUrl('http://glennjones.net/about', {}, function(data){
    // do something with data
});

or with raw html


var shiv = require('microformat-node');

// example hCard markup
var html = '<p class="vcard"><span class="fn">Glenn Jones</span></p>';

shiv.parseHtml(html, {}, function(data){
    // do something with data
});

with URL for a single format


var shiv = require("microformat-node");

shiv.parseUrl('http://glennjones.net/about', {'format': 'XFN'}, function(data){
    // do something with data
});

Supported formats

Currently microformat-node supports the following formats: hCard, XFN, hReview, hCalendar, hAtom, hResume, geo, adr and tag. It's important to use the right case when specifying the format query string parameter.

Response

This will return JSON. This is an example of two geo microformats found in a page.


{
    "microformats": {
        "geo": [{
            "latitude": 37.77,
            "longitude": -122.41
        }, {
            "latitude": 37.77,
            "longitude": -122.41
        }]
    },
    "parser-information": {
        "name": "Microformat Shiv",
        "version": "0.2.4",
        "page-title": "geo 1 - extracting singular and paired values test",
        "time": "-140ms",
        "page-http-status": 200,
        "page-url": "http://ufxtract.com/testsuite/geo/geo1.htm"
    }
}

Querying demo server

Start the server binary:


$ bin/microformat-node

Then visit the server URL


http://localhost:8888/

Using the server API

You need to provide the URL of the web page and the format(s) you wish to parse, as a single value or a comma-delimited list:


GET http://localhost:8888/?url=http%3A%2F%2Fufxtract.com%2Ftestsuite%2Fhcard%2Fhcard1.htm&format=hCard

You can also use the hash # fragment element of a URL to target only part of an HTML page. The hash is used to target the HTML element with the same id.
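
For example, a request like the one below would limit parsing to the element with the id 'hcard-profile' (the page URL and id here are made up purely for illustration; note the # is percent-encoded as %23 inside the url parameter):


GET http://localhost:8888/?url=http%3A%2F%2Fexample.com%2Fabout.htm%23hcard-profile&format=hCard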

Viewing the unit tests

The module includes a page which runs the ufxtract microformats unit test suite.


http://localhost:8888/unit-tests/

Notes for Windows install

microformat-node uses a module called ‘jsdom’, which in turn uses ‘contextify’, which requires a native code build.

There are a couple of things you normally need to do to compile node code on Windows.

  1. Install python 2.6 or 2.7, as the build scripts use it
  2. Run npm inside a Visual Studio shell, so for me, Start->Programs->Microsoft Visual Studio 2010->Visual Studio Tools->Visual Studio Command Prompt

If you have the standard release of node it will probably be x86 rather than x64; for x64 there is a different Visual Studio shell, but it is usually in the same place.
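
In practice, once Python is installed, the install is run from that prompt like any other npm module (a sketch assuming Visual Studio 2010; the prompt name varies with your Visual Studio version):


rem From the Visual Studio Command Prompt (2010)
cd my-project
npm install microformat-node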

  • JavaScript
  • Microformats
  • node.js
  • parser

Looking for new things to do

From this Friday I am looking for new things to do. I have pulled out of working full time at the company I co-founded. I will remain a director and major shareholder, and no, Madgex is not in trouble; in fact, quite the opposite: it has just had the most profitable quarter in its history. After helping restructure it over the last year I have the opportunity to do other things.

This has left me in the lucky position of being able to follow my passions. I want to try to take the product research and design I have done commercially for years and mix it with my interests in open web and standards development.

In the next few months I want to research and build projects in a number of areas:

  • Web Intents/Activities
  • Personal data stores using services like Dropbox
  • Possibility of semantic data reuse
  • Mobile web apps – the right way

I am sure within a few months I will want to work with teams again, as I always want to design and build products that impact people’s lives, and that usually means lots of collaboration, but at first I want some time to open my mind to new people and ideas.

No sitting on beaches for me. Watch this space, I am about to turn up the volume…

  • madgex
Mentions:

Awesome! Looking forward to the result of your investigation!

DevUp 2012 - Barcelona

Last Friday I went to DevUp 2012 in Barcelona. The event focused on HTML5, but interestingly for me it mixed two different development communities. As well as web developers, there is a whole world of games developers who are embracing HTML5, or more precisely Canvas. Darius Kazemi from Bocoup did a talk which made a side-by-side comparison of web and games development culture. As a games developer in a JavaScript company he had some nice insights.

I missed Javier Usobiaga's talk on Responsive Web Design, which is a shame, but managed to go to the session by Ibon Tolosana on CocoonJS. It takes HTML5 canvas-based games and boosts their performance by using an OpenGL ES execution environment. Throughout the day a recurring theme seemed to be that the current performance of Canvas on phones is just under what the gaming community is looking for, although, like petrol-heads, I feel they will always want just a few extra frames per second. The best example of this was Miguel Ángel Pastor's talk on cross-compiling JavaScript from C++. Not an approach I would take, but interesting.

I hope people found my talk “Beyond the page” on APIs of interest. The Web Intents demo did not work, although I am in good company as Paul Kinlan had the same trouble when he visited Barcelona. Maybe adding that one last demo at 1am the night before was not a good idea! Next time I am going to have backup screencasts.

The PeopleStore HTML5 app I showed on the day is unfortunately not online, but the codebits area of my site does have demos/code of some of the APIs it uses.

 


Sorry the video does not show the demos; there is an earlier screencast of some of these demos in a previous blog post.

I would like to thank Ideateca for inviting me to speak and for putting on such a good event; the live translation services and the high-quality video which was streamed live added to the event's success.

  • Data Portability
  • Events
  • Canvas
  • DevUp
  • DevUp12
  • html5
Mentions:

Congrats for the great talk and thank you for participating in HTML5 DevUp! It’s been a pleasure! Hope to see you again soon!
Best,
Isabel

Google are about to murder a good friend of mine

Let me start by saying the good friend is an API. Google have decided to close down the Social Graph API (SGAPI) on the 20th April 2012. I have spent the last couple of months thinking of a measured response, although I do somewhat agree with Jeremy Keith’s sentiment.

 

“Dear Google, Fuck you. Signed, the people who actually use your APIs”

 

The API provides two main features, the first of which lists our distributed identities across the web. So if I give it the URL of my Twitter profile it returns a list of profiles I have on other sites. I wrote an in-depth A List Apart article about how this feature works. The second feature tries to find links to profiles of people you have listed as friends/followers on social media sites.

Let’s be pragmatic about its true value

Brad Fitzpatrick built this API as a Google 20% project and it has never really lost its experimental roots. From the outset I was not a fan of the social graph friends listing. It's too problematic: a lot of friends' data is private and it's too complex to mark up and extract from web pages well. I personally wrote off using the social graph element of the API from the beginning. Evan Prodromou also made a good point that developers want to get authentication and social graph data together. I think lanyrd.com is a good example of this approach.

The identity aggregation element of the API was impressive, if a little too raw to be used on commercial sites. The results needed a degree of post-processing to increase their quality. Although I would love to say increasing the quality of the results could be done completely by parsing open standards like hCard or FOAF, you do need to connect to some sites' proprietary APIs to get profile data.

Google never tackled the quality issue or put the API on a commercial footing, both of which helped stop most people using it beyond experimental hacks.

Panda, schema.org and identity based authority in search

In the last couple of years I have lost my faith a little in the ideas which gave birth to the SGAPI. Development of the semantic web and the distributed open web seems to have drifted with the growth of monolithic services like Facebook, but things are changing.

Google's recent changes to search have breathed new life into the semantic web concept. As Google tries to increase its search quality it is moving towards identifying entities (the blocks of structured information within a page). The SEO industry is now adding vast amounts of semantic mark-up to the web. This is being done not because it is the right thing to do, but because the enhanced click-through rates provide the right commercial motivation.

More importantly, part of Panda's new ethos is the promotion of identity authority. We can see this both in Google's search listings that display recommendations from your friends and in authorship profiles. They are attempting to link people to other entities such as articles by using mark-up like rel=me and rel=author. Profiles and how they are interlinked are a small part of the Panda concept, but still important.


 

The future – Getting the food chain right

Today’s web apps are often more about building ecosystems of service relationships than technology. These relationships are often chained together and always need each actor to be rewarded for their part.

Web authors, or at least the SEO wing of our industry, are now seeing a real return for adding semantic structures for entities such as products and reviews. The mark-up of people and organisation entities still has rogue claim issues and may still not become a strong part of Google's search listings. Let's hope Google continues to support rel=me for identity authority and that it drives them to resolve these issues. At the moment they seem to be moving towards a walled garden approach.

Unfortunately, although Google may now have the ability to build a much better API based on its latest developments in parsing semantic information for search, it is unlikely one will be created. Google is now bringing together its services into a coherent whole and focusing on building its own monolithic social network, Google+. It just does not make commercial sense for it to support the open web without financial return.

Other companies have started to provide successful services in this area. Qwerly.com by Max Niederhofer was one of the most impressive identity aggregation APIs I have seen; it's now part of fliptop.com. Products like rapportive.com and thesocialcv.com use the same technique under the hood. These companies are providing the next generation of pay-as-you-go APIs, blending together the semantic web and snowflake APIs.

Let’s hope we see on-going development of this new generation of APIs.

So on 20th April I will have a drink and say goodbye to SGAPI

I would like to thank Brad for giving us the SGAPI and everyone else who has worked on it. Although I can understand the commercial rationale that has driven Google to murder my friend, I am not sure I can forgive them for it.

  • Data Portability
  • Identity
  • sgapi
Mentions:

Sorry.

FWIW, it was more than a 20% project developing and maintaining it. It was a full-time job for a while, until I moved on. The problem was that it continued to be a full-time job for somebody, and we could never find/justify that somebody (or group of people) to maintain and improve it. It still needed a lot of improving, like you found.

I also wish it could’ve matured more. :-/

Web Intents Design Push

After giving a talk at UXCamp Brighton about Web Intents, I ended up talking to a few designers about it and how they would love to help with its development and adoption. In fact we all felt that the design community was sometimes left out of the development process, when it has a lot to offer.

So, for the past couple of months I have been working with two of Brighton's leading UX people, Danny Hope and Andy Dennis, to organise a UX design event. We are trying a slightly new format that we've coined a "design push". The idea is to take a current technology or topic and focus a group of UX designers on a day of open collaboration, with the aim of positively adding to its traction with the wider community.

Web Intents Design Push, 25 February 2012 – Brighton, UK

The first “design push” will be on Web Intents, an idea about how to standardise buttons such as Tweet, Like, Share, etc. on the web. My last few posts will give you some background. I have also created a brief screencast introduction to Web Intents.

I had some fun creating the screencast; I built a presentation using HTML for the first time. It's worth a play as I built the demos of Web Intents directly into the presentation. Best to use Chrome if you want to play with the demos.

  • Data Portability
  • webintents

Beyond the page - Fullfrontal 2011

Well, what an amazing experience Fullfrontal was this year. I had lots of fun doing the talk once I got my computer to display the slides (sorry about that). A big thank you to Remy for giving me the opportunity to speak and for putting on Fullfrontal.


Drag and drop demos from the talk

Web Intents
I had lots of positive feedback about Web Intents. I am trying to put together a UX/design workshop in early January to help build community involvement in its development. I will be posting and tweeting more information about that in the near future.

The Web Intents demo

PeopleStore
Quite a few people asked me about PeopleStore: am I going to make it into a product or open source it? At the moment it is still just a side project and a playground for new ideas, so sorry, it's not public. That said, after such a lot of interest I am rethinking what to do with this project. If you're interested in code examples from PeopleStore take a look at the codebits section of this site; a lot of its features are explored there.

 

My highlights from this year's talks:

You gotta do what you gotta do
I was lucky enough to sit with Marcin Wichary during the speakers' meal and hear first-hand about his work at Google. I loved the slide deck built completely in HTML and the fact that he linked Safari and Chrome together over sockets to get the accelerometer data he needed for just one small part of his presentation. I think the most interesting thing about Marcin's talk was all the small insights he gave about the design process needed to come up with ideas for the Google search page.

CoffeeScript Design Decisions
Jeremy Ashkenas's talk on CoffeeScript really made me want to try it out. I have a fundamental dislike of any language/abstraction that is designed to write another language. Jeremy's overview of the architectural ethos of CoffeeScript made me believe it could be a possible exception to my rule. Some of the features they have added to CoffeeScript go to the heart of everyday problems I have with JavaScript. The talk also pointed out how much room there is for improvements to JavaScript itself.

Excessive Enhancement – Are we taking proper care of the Web?
Phil Hawksworth's talk focused on the central issue of just because we can do something does not mean we should. One of his other main themes is something I often talk about with designers/developers, which is having a respect for the medium we work in. At art college we used the term “go with the grain”; in its widest sense this meant to fully understand the properties of a medium and work to bring them out to their full potential. ubelly.com have done a write-up of Phil's talk which is worth a read.

Brendan Dawes has infectious passion
I have seen Brendan speak at a few events over the years and he has an infectious passion for innovation and experimentation that just has to be admired. It's so interesting to see his love for the design of physical objects move him in new directions. I hope I get to see him talk again as I never seem to get bored of hearing about his adventures in design. ubelly.com also interviewed Brendan.

Scalable JavaScript Application Architecture
Nicholas Zakas took us all on a tour of the concepts of module code architecture and how to loosely-couple JavaScript. These ideas are so important to any complex project, yet the nature of JavaScript use on the web today means we often cut many corners in our code design. Note to self: review this slide deck again before starting large JavaScript projects.

Code editing
Marijn Haverbeke and Rik Arends both did talks on various aspects of online code editing. I cannot even begin to imagine the effort that it takes to develop one of these code editing environments; they look very impressive. Might be time to put down Visual Studio and move over to a hosted application for code editing. Hope they have drag and drop support.


  • Data Portability
  • JavaScript
  • Microformats
  • webintents

Microformats and SEO

This Friday I gave a small talk on microformats and SEO at the web agency Fresh Egg. They focus heavily on SEO as part of their offering and have become interested in marking up semantic data in web pages. This year Google's rich snippets have brought a whole new group of web authors to microformats. Taking a look at Google's new recipe search, I can see why microformats have become a hot topic in the SEO industry.

Rather than just provide a general overview I decided it would be fun to mark up a Yorkshire Pudding recipe. You never know, one day it may appear at the top of Google's recipe search.

  • Microformats
  • google
  • seo

Choosing the Right Words - Web Intents

I ran a small session at UXCampBrighton last weekend about Web Intents. At the end of the session I was hoping for a discussion about the use of verbs in Web Intents, but the questions were a lot more wide-ranging.

As people grasped the concept they rightfully asked some questions of it. I thought it would be useful to document an aggregation of these conversations and my answers.

The slides

Will social media companies let go of branded buttons?

We discussed the value to companies of having buttons that both advertise and endorse a brand through the use of logos and trademarked phrases. I was asked the question: would social media companies provide services through Web Intents if it meant letting go of this visual branding?

The network effect of sharing social media outweighs any value gained from the promotion of visual identity on buttons.

That’s to say Twitter, Facebook etc. have expanded by linking people as they share social media objects (text, images etc.). Traditional visual branding is not as important as engaging users in the experience of using a service, in this case sharing social media objects.

In a small way we can already see this effect in action, as the social media/networking sites allow publishers to re-work their visual identities by removing the terminology and phrases they've carefully crafted and promoted. In the BBC example below, the designers did not use the buttons provided by the services; instead they have greyed out the logos and removed the terms ‘tweet’ and ‘recommend/like’.

[Image: BBC article sharing buttons with greyed-out logos]

Most large companies have strict visual guidelines for the legal use of their logos and trademarks. Take a look at the Twitter and LinkedIn guidelines as an example. I would suggest that in this context these companies don't care about small infringements of their visual identity as long as people are encouraged to share using their networks.

The caveat here is that the speed of visual recognition and trust engendered by some of these buttons/logos may help increase traffic to these services. Web Intents would need to create the same or greater levels of traffic to these services while using generic iconography and terminology. If it did not, I am sure the social media companies would not be as happy to embrace Web Intents.

Users will never get the concept

At one level Web Intents can be seen not as a new idea but as the standardisation of a pre-existing design pattern into the UI of the browser.

Earlier this year the StumbleUpon “Stumble” button passed 25 billion clicks. On the AddThis network StumbleUpon has a 1.69% market share (2 Oct 2011) across all the services it offers. This gives you some idea of the level of interaction which can be mapped to the type of design pattern Web Intents is trying to capture. As long as the user experience developed for Web Intents does not add a lot more complexity, it should be widely understood by the current users of services such as StumbleUpon.

Why don’t we just let Facebook/Twitter dominate – do users really want choice?

Yes, the AddThis statistics make stark reading, with 67% of market share in the hands of Facebook and Twitter. Even in light of this, there is a long tail of hundreds of other services that fit the design pattern of Web Intents. Maybe it is the inability to provide choice that defines the current usage patterns, not the other way around.

The current status quo has publishers coalescing around a handful of the biggest services, because they have no means of knowing which services an individual user would prefer. If a site could deliver individualised buttons/links based on a user's choice, there should be a substantial gain in utility for the user and traffic for the publisher and the service. This change may not inevitably reduce traffic to the large service providers but may instead increase traffic to smaller ones.

You also need to consider that different communities around the world embrace different services, and that sharing a web link is only one of many types of service.

The level of choice
Hick's Law, or the Hick–Hyman Law, roughly states that the time it takes to make a decision increases with the number and complexity of choices. As the decision time increases, the user experience suffers.

Satisfaction curve


I am interested in finding out how publishers define the optimum number of services to display to the user. What factors are foremost: is it purely about screen space, the market share of services, or is Hick's Law playing a part in these design decisions? More importantly, what results would I get if I could test user choice vs decision complexity in this context?

In page UI vs chrome UI

I believe that keeping the complexity of interactions to an absolute minimum will be a deciding factor in the success of Web Intents.

A browser is split into two heavily defined surfaces: the web content/page and the chrome. Building interactions that span these two areas is not easy, and building any type of browser UI that overlays the HTML content brings up some security concerns.

Taking into account all the above, I still think keeping all the UI contextual to the original area of interaction in the web content is very important. Popping windows away from the original click event or navigating between full pages will cause more mental load for the user. There are already working models we should look to such as current proprietary buttons for sharing and the OAuth UI flow. These have been heavily researched and tested in the real world and should form a starting point for any UI design. In the end it will be the browser development teams that frame the main UI flow.

Common iconography and verbs

I think it is obvious to most people that a common iconography and language is needed for Web Intents. The calls to action need to be both as understandable and as recognisable as their proprietary counterparts, i.e. Twitter's ‘tweet’ and Facebook's ‘like’.

It is unlikely that all the elements of language/iconography will be fully developed in the API specification process being undertaken by the Chrome and Mozilla teams, as they will be surfaced in the HTML of site publishers.

I would like to see a two-phased approach: engage the UX community in an open process to quickly define the visual language for calls to action, i.e. the buttons, then create a simple wizard on webintents.org to generate the code for people to use. Guidelines and wizards for site publishers are fundamental to generating uptake.

In conclusion

Although I was asked many questions, the overall response to Web Intents was positive. I was even approached by some UX designers about getting together a workshop/design meet-up to look at the whole UX flow.

The answers above are written from a personal viewpoint. If you're new to the topic, I would recommend reading the discussion forum to gain a broader insight.

  • Data Portability
  • User Experience Design
  • webintents
Mentions:

This is an awesome summary, let's plan the next event together so I can attend (damn travel).
How about setting something up at Google London? I might be able to get the devs in on a hangout.

Here are my thoughts.

Q: Will social media companies let go of branded buttons?

Honestly, I don't expect them to. But that is not a problem; I expect they will become a sink for the intent by providing an intent tag in their page and parsing it. And that is enough to get the process started.

Q: Users will never get the concept

If implemented correctly, users will never see “intents” or “activities”; they will just see a “share”, “edit” or “pick” button and a list of their services that they can use.

Q: Why don’t we just let Facebook/Twitter dominate – do users really want choice?

It is about more than user choice; it is about the developer not having to explicitly support new networks or remove old dead networks from their code.

It is more than just “share”, which undoubtedly is the biggest first use-case.

Q: In page UI vs chrome UI

You are correct, it is up to the UA to decide; however, I am pretty confident, and I know in Chrome's case (you can check the commits), that it will be outside of the page UI and away from the context of the click. We need to make sure that all of this is un-spoofable; the second that it is only in-page it becomes open to attack via spoofed code.

Q: Common iconography and verbs

I believe this is important, but can be managed outside the spec. I would like webintents.org to contain the de facto set of common verbs and have activity-streams maintain their set (and objects), but also let the wider community define their own verbs.

We have a basic mapping to “widgets” (http://widgets.webintents.org) that helps define the common look and feel, but I am very very open to this being contributed to by 3rd parties as it is outside my skill-set… In fact github.com/PaulKinlan/WebIntents is waiting for pull requests.

Really like the idea of a wizard; it is part of what I intended widgets.webintents.org to be.

Web Intents - Gluing web functionality together

There is a new concept forming at the moment called Web Intents. The name is a reference to the Android feature which allows applications to register their “intent” to handle certain types of actions.

[Screenshot: options for sharing an image on Android]

The screenshot above shows all the options for sharing an image on my Android phone. The underlying application does not have the ability to share images itself, so it asks the OS to list the applications that can. The user is then presented with the choices above and the two applications exchange the data required for the feature to work.

Intents work in a similar way to how applications on a desktop register their ability to open and modify a particular file type. An Intent takes things one stage further, registering not just a content type but also a verb to describe an action, i.e.

  • Post a Status
  • Edit an Image
  • Share a Bookmark
  • Reply to Post
  • etc

We already have perfect examples of this pattern in use today in the shape of the many social media buttons which are proliferating across the web. At the moment they are not built using the concept of Intents, but they could be.

On web sites we are often presented with a collection of buttons for services we do not use and, more importantly, not presented with the ones we do. Erin Jo Richey called these types of sites “button sluts”. There are some unsatisfactory user interface fixes for this issue, such as “AddThis”, which extend the user's choice by extending a broken pattern. I don't think these really solve the underlying problem.

Web Intents could provide a whole new generation of interactions on the web which would complement how it already works. They could give more user choice and de-clutter the web of the visual and mental load created by all these buttons. If widely implemented it would allow large publishers and individuals to provide services on an equal footing.

Two driving forces for the adoption of Web Intents

Social buttons
We already have multiple use cases for Web Intents in the shape of the many social media buttons. At the moment these buttons are a jumble of proprietary iframe code, which seems set to reinforce the NASCAR problem.

Currently, Twitter's Web Intents is the most progressive implementation of social buttons. Twitter's Web Intents API is not the same as the Web Intents API being developed by the Chrome and Mozilla teams, but it is an interesting architecture in its own right. I admire how they used a simple HTTP GET request to form a powerful API and then deployed JavaScript to hijack HTML links to create the buttons.
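
To illustrate the pattern, a Twitter Web Intents button boils down to a plain link to a GET endpoint, which Twitter's widget JavaScript then progressively enhances into a button. A minimal sketch (the text and url values here are invented for the example):


<a href="https://twitter.com/intent/tweet?text=Beyond%20the%20page&url=http%3A%2F%2Fglennjones.net%2F">Tweet</a>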

Web app/service discovery
The second driving force comes from the search to create a simple discovery mechanism for web apps/services. Michael Hanson of Mozilla Labs wrote a post at the beginning of this year about the growing need for a new way to glue web functionality together. The theoretical work around web app/service discovery and social buttons shares the same architectural patterns. Mozilla have called their approach “Web Activities” and you can see a demo in a video they posted in July.

Building it into the browsers

Paul Kinlan from Google conceived the idea of Web Intents. He has started documenting their development at http://webintents.org/. Most importantly, the Mozilla Labs and Chrome teams have started to work together on a common API to implement Web Intents in their browsers.

User interface and experience ideas

Having a design background I tend to think visually, and I built an early mental picture of what the user interface for Web Intents could look like. I visualised this before Paul Kinlan published the http://examples.webintents.org/ site and Mozilla Labs posted their video. In doing so I made some assumptions which I thought would be interesting to share, especially as they vary from Paul's initial demo. It is also worth taking a look at Erin Jo Richey's mock-ups, which provide a slightly different perspective on the user interface design.

Registration of Web Intents
As a user I would like to be prompted about the registration of a service, if nothing else to make sure I only register with services from sites I trust. The closest model for prompting the user to allow data/service access is the Geo API. It would not take much to reuse the pattern to provide a Web Intents acceptance prompt.
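
For comparison, this is the kind of call that already triggers that style of browser-level permission prompt with the Geo API (a minimal sketch of the existing pattern, not Web Intents code):


navigator.geolocation.getCurrentPosition(function(position){
    // the user accepted the prompt; use position.coords
}, function(error){
    // the user declined or the lookup failed
});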


The question of registering multiple services at once would need to be considered. I personally think site authors should be encouraged to create URLs where multiple Web Intents could be registered at once.

Managing Web Intents

[Wireframe: managing Web Intents in the browser]

Over time our preference for which services we use on the web will change, so browsers will need to provide an interface for us to manage this. In the wireframe above I used a 2-tier tree navigation. The first tier represents the action verbs like “share” and the other tier content types such as “link”. Technical names such as “mime-types” should not be used and should be exchanged for more friendly language.

I would also expect some fine-grained controls, such as the ability to enable and disable services and to change their order of display. With all of these controls we end up with a user interface not too dissimilar to those used to manage our browser plug-ins/extensions.

Using a Web Intent


Where a page subscribes to a Web Intent such as sharing a link, I believe the most functional interface would be a button with a drop-down menu. This would allow the user to quickly choose from a set of options. Any service which requires further input from the user, like the assignment of tags, would be handled through secondary windows.

At first sight the drop-down looks like we are recreating the NASCAR problem. What has to be remembered is that these options were chosen and curated by the user. It follows that they will be limited in number and always be meaningful options for that individual.

Silent registration – the most complex UX problem
The interface could either silently register the Intents or prompt the user. The designers of Android's OS took the decision to register silently, so my wish for more control may not be what the majority of users want.

The Web Intents architecture

The user interface for Web Intents really needs to be built into the browser, but for the time being Paul Kinlan has built a temporary JavaScript library which allows us to experience Web Intents. The code examples below are based on using that library. There are three parts to the architecture.

Registering a web intent (Site A)
Registering a new service is simple. Once you have included the webintents.js file from the JavaScript library, you add an intent tag into the header of a page.

[sourcecode language="html"]
<intent
    action="http://webintents.org/share"
    type="text/uri-list"
    href="http://examples.webintents.org/intents/share/sharelink.html"
    title="Kinlan's Link Share" />
[/sourcecode]

Subscribing to a web intent (Site B)
Again you need to include the webintents.js file from the JavaScript library and also build an Intent object. This object describes the type of Intent and also passes the data.

[sourcecode language="javascript"]
var intent = new Intent();
intent.action = "http://webintents.org/share";
intent.type = "text/uri-list";
intent.data = [ url ];

window.navigator.startActivity(intent);
[/sourcecode]

Collecting the data at the end point URL (Site A)
At the service endpoint you need a small piece of JavaScript to collect the data. The magic bit is how Web Intents passes the data between the windows as they are opened. The sites at both ends of this interaction do not need to know anything about each other. The browser, or in this case the JavaScript library, negotiates this for the two parties.
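
As a rough sketch of what that endpoint script does with the shim: the service page reads the payload from a window.intent object populated by webintents.js. The property names below are assumptions and should be checked against the current webintents.org reference:

[sourcecode language="javascript"]
window.addEventListener('load', function(){
    // window.intent is set up by the webintents.js shim (assumed API)
    if (window.intent) {
        var urls = window.intent.data; // the data passed by the calling page
        // do something with the shared link(s)
    }
}, false);
[/sourcecode]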

The API is still in flux as the Chrome and Mozilla teams align their work. The code examples above will change as this happens. You should visit the webintents.org site for the most up-to-date reference. If you wish to keep informed of how the development of the API is going I would suggest registering to the Google Group.

The Future

I think the future is bright for Web Intents, as it seems to have support from the Chrome and Mozilla teams. There are a number of technical issues outstanding, but none that should derail the project.

The role of brand equity
I have not heard many people talk about the role of brand equity in the rise of social buttons. I have no idea how much it would cost to put my company's logo on all the sites that host the Tweet or Follow button – it would be a lot.

Even considering social buttons as a form of advertising underplays their true value. In fact they are more like an endorsement from one brand to another. The Guardian newspaper carrying the Tweet button on all its articles is the equivalent of it saying, “we believe that Twitter is the premium sharing service”. One has to ask how interested some service providers will be in a feature which hides this type of brand promotion, even if it is in the best interests of their users.

Although I am wary of the commercial issues and the user experience has yet to be proven in the real world, Web Intents has compelling properties. I think it is well worth promoting.

Useful Links

Reference Sites
http://webintents.org/
http://examples.webintents.org/
http://usecases.webintents.org/
https://groups.google.com/group/web-intents

Blog Posts
Paul Kinlan – Web Intents a fresh look
Tantek Çelik – Web Actions a new building block (Google+ comments)
Erin Jo Richey – Button Sluts and Web Actions
Tom Gibara – Reservations about the Web Intents system

Chrome – Connecting Web Apps with Web Intents
Mozilla Labs – Web Apps Update – experiments in Web Activities, App Discovery

IndieWebCamp
Chris Messina – Session: Standardizing Web Intents
Ben Ward – Sessions: How the Indie Web Hooks into Hosted Communities

Reference Technologies
Android Intents
Web Introducer
Activitystreams
Twitter Web Intents API
OpenService Accelerators

  • Data Portability
  • JavaScript
  • User Experience Design
  • webintents
Mentions:

Great summary, this is exciting stuff and thanks for helping to push it forward.

Re your AddThis mention… Web operators do look to folks like us increasingly for better insight into the effectiveness of these tools, via analytics and other services, and help optimizing these kinds of experiences by offering appropriate and effective choices for users. While not all sites use them in this way, our own tools for example can automatically personalize options for individual users, present the tools that make sense for them, and give the operator metrics on all aspects of that (and a lot more to boot). And they can do all of this via APIs, on top of which they can build whatever UI they might want, including those that only include 3rd party buttons. Even in the simplest case of the ubiquitous “drop-down menus”, the options that are presented to users are based on extensive data analysis (see a glimpse). So there’s more going on here than may meet the eye initially.

On top of that, though, I'm psyched about efforts like Web Intents and frankly have been wanting these things to go forward even more quickly for some time now (we've been advocates for as long as anyone). The open stack should clearly encompass the types of core sharing operations we've been observing for some time, and our advocacy on things like OExchange, XRD-based service discovery on hosts for sharing, and the like, I think shows the support. While the UX angle is the most obvious, there are other implications for operators that should be part of the conversation as well, and that's where we're excited to help all of this push forward.

In any event, keep up the great advocacy, see you on the lists!
