Chris Khalil's Musings

My thoughts on work and life

Web 3.0, User Experience and Intelligent User Interfaces

If Web 2.0 was all about fostering social interconnectivity, then the loosely termed Web 3.0 appears to be about the intelligent web. It’s about, amongst other things, contextually aware user interfaces (UIs), hyperconnectivity, the semantic web and intelligent agents. These are all concepts that have existed for a very long time. Primitive implementations of intelligent UIs and knowledge-based expert systems have been around for decades. Successive generations have tried, and largely failed, to get these working, and so we’ve seen these technologies re-invented in waves. The failure was often due both to the primitive nature of the machine intelligence and to the unwillingness of users to accept some measure of control being surrendered to the machine.

The latest wave promises better things, and maybe we are on the cusp of a time where both machine and human are ready to make the leap. The increasing symbiosis between machine and human has seen many of the trust issues erode, as users come to accept that their lives could be made easier by allowing machines to take some degree of control. It may, therefore, be that we see an increase in the number of what Alan Kay termed ‘indirect management’ [1] interfaces augmenting the now omnipresent direct-manipulation interfaces, as the amount of information we have to process in our daily lives becomes too much to handle.

Indirect Management

Indirect Management means machines that learn our preferences through inference, and that leverage the collective knowledge of the web to help us manage information overload. Typically, software entities termed ‘agents’ would help manage our goals, tasks or activities.

I think the sheer volume and nature of the information out there, and the growing momentum behind the semantic web, might give this wave a better chance of success. The idea that we directly manipulate everything places too much cognitive load on users; machines need to take up some of that slack if we are to make sense of the digital world, especially as computing becomes more ubiquitous (ubicomp). This is a real challenge for those of us working in Human-Computer Interaction (HCI).

Example of Indirect Management

So, a typical example of how this might work, and something of a familiar metaphor, would be the process of booking a holiday. In the real world we might visit a travel agent and give them our general holiday preferences and budget. They may even know us and have tacitly learnt some of our preferences from past dealings (that we had a bad experience on a particular airline, or where we live and hence which airport suits us best). We then trust them to use their expertise to look around and come back with options for us to choose from.

Now if we transpose this example to the web, it may be that we have a trusted advisor agent/site/application on the web (an entity of some sort that we turn to). It would have learnt from its past dealings with us, can leverage the expertise and knowledge it has gained from other customers (and other agents), and is an expert in knowing where to find the best deals and sources of travel information.
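To make the idea concrete, here is a minimal, purely illustrative sketch of how such an advisor agent might score travel offers against preferences it has remembered about us. All the field names, weights and data below are invented for the example; a real agent would learn these rather than have them hard-coded.

```python
# Hypothetical sketch: a tiny "travel agent" that scores offers against
# learned user preferences. Names, weights and data are illustrative only.

def score_offer(offer, preferences):
    """Score an offer by how well it matches remembered preferences."""
    score = 0.0
    # Hard constraint learnt from a past bad experience: avoid this airline.
    if offer["airline"] in preferences["avoid_airlines"]:
        return float("-inf")
    # Prefer flights from the user's home airport.
    if offer["departs_from"] == preferences["home_airport"]:
        score += 1.0
    # Reward offers within budget; penalise over-budget ones proportionally.
    if offer["price"] <= preferences["budget"]:
        score += 1.0
    else:
        score -= (offer["price"] - preferences["budget"]) / preferences["budget"]
    return score

preferences = {
    "avoid_airlines": {"BadAir"},
    "home_airport": "SYD",
    "budget": 2000,
}
offers = [
    {"airline": "BadAir", "departs_from": "SYD", "price": 900},
    {"airline": "GoodAir", "departs_from": "SYD", "price": 1800},
    {"airline": "GoodAir", "departs_from": "MEL", "price": 1500},
]

# The agent proposes the best-matching offer for the user to confirm.
best = max(offers, key=lambda o: score_offer(o, preferences))
```

The point is not the scoring arithmetic but the division of labour: the agent filters and ranks indirectly, and the user still makes the final choice.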

Interaction Design Implications

So what does this mean for those of us who work in the User Experience and Usability fields? Well, it’s still early days, but it may mean we need to surrender some degree of control at the interaction design level. We are used to crafting interfaces with well-defined behaviours in mind. Indirect Management means we still design the touch points between the user and the machine, but also, perhaps, we need to create the rules and contracts that exist between human and machine below the interface: in effect, to define very primitive (rule-based?) levels of intelligence.
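One way to picture those designed "rules and contracts below the interface" is as a primitive rule engine the designer authors. The sketch below is an assumption-laden illustration, not a real framework; the rule names and event fields are invented.

```python
# Illustrative sketch of a primitive rule-based layer "below the interface":
# designers author (name, condition, action) rules; the agent fires whichever
# rules match an incoming event. All rule names and events are invented.

rules = [
    ("mute-late-night",
     lambda e: e["hour"] >= 22,
     lambda e: "hold non-urgent notifications"),
    ("urgent-passthrough",
     lambda e: e["priority"] == "urgent",
     lambda e: "notify immediately"),
]

def run_agent(event):
    """Return the actions of every rule whose condition matches the event."""
    return [action(event) for name, cond, action in rules if cond(event)]

# A late-night urgent message fires both rules; how the conflict is resolved
# at the interface is exactly the kind of contract a designer must specify.
actions = run_agent({"hour": 23, "priority": "urgent"})
```

Even in a toy like this, the interesting design work is in the conflicts: two perfectly sensible rules can disagree, and the behaviour the user experiences depends on how we arbitrate between them.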

Emergent behaviour may well dictate the overall system intelligence, and this is pretty hard to get a handle on. We can already see this sort of behaviour in numerous recommendation systems, such as Amazon’s and Last.fm’s, and their early ancestor, Firefly. But these are just the beginning; the real challenges and issues lie ahead. These are exciting times.
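The "users who bought X also bought Y" behaviour of such recommenders emerges from simple co-occurrence counting across many users, rather than from any single designed rule. A minimal sketch, on made-up data:

```python
# Minimal sketch of co-occurrence-based recommendation ("also bought").
# The purchase data is entirely made up for illustration.

from collections import Counter

purchases = {
    "alice": {"book_a", "book_b"},
    "bob":   {"book_a", "book_b", "book_c"},
    "carol": {"book_b", "book_c"},
}

def also_bought(item):
    """Count how often other items co-occur with `item` across all users."""
    counts = Counter()
    for basket in purchases.values():
        if item in basket:
            counts.update(basket - {item})
    return counts

# Recommend the item most often bought alongside book_a.
recommendation = also_bought("book_a").most_common(1)[0][0]
```

No individual entry in the data says "book_a goes with book_b"; the pairing emerges from the aggregate, which is precisely why the overall behaviour of these systems is so hard to predict at design time.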


[1] A. Kay, “User Interface: A Personal View,” in The Art of Human-Computer Interface Design, B. Laurel, ed., Addison-Wesley, Reading, Mass., 1990, pp. 191-207.

4 Comments

  1. Hi Christopher, the semantic web (or web 3.0) is a very interesting topic and I like what I see in your post, the fact that we engage in user-centered concerns early. I personally think that the idea of underlying intelligence per se could give us tools to highly improve usability (e.g. finding the correct results based on semantics) – but like you said, the touching points between human and machine remain.

    I have another question though: I’m currently working on a project and we are planning to do user testing during the design phase. Since our client is from Australia I was wondering if you had experiences with a) general differences between users in Australia & USA, b) doing remote user testing via video con?

    PS.: if you are interested, I’ve just published a blog post about web 2.0 and why it matters: http://www.mgitsolutions.com/blog/2008/11/what-is-web-20-and-why-it-really-matters/

    cheers, Mike

  2. Christopherkhalil

    November 21, 2008 at 10:59 am

    Mike,

Thanks for the comment. I’ve done some remote usability testing in the past, with participants from all over the world. Obviously, there are always differences in the semantics of language (a user in the US referring to a ‘bum’ means something completely different to a user in Australia/UK), colour and semiotics. But overall, I’ve found remote usability testing quite useful. You’ll definitely need a desktop-sharing tool, and video is useful too.

    Good luck with it all!

  3. These Web X.X labels kill me. LOL!

    I think that as the semantic web becomes more of a reality and we, designers, get it right, it will just mean a shift in the way we approach the design.

As you mention, we are now designing towards a specific interaction. With Web 3.0, we will need to think in terms of allowing users to select and set up more preferences, then follow and save/cache their selections and interactions as they use the site or application, so that when they return we deliver more of what they are most likely to be interested in doing.

    I find it both interesting and challenging. It is a major shift in paradigm for us though and it will require even closer communication with our users. It will also mean that the Business owners need to step further away from the design and let the professionals handle it because it will potentially be more complex to deal with and get set up correctly.

This sounds like an interesting topic, and since so many people are focused on it, it is of real concern. Thanks, Chris.
