If Web 2.0 was all about fostering social interconnectivity, then the loosely termed Web 3.0 appears to be about the intelligent web. It’s about, amongst other things, contextually aware user interfaces (UIs), hyperconnectivity, the semantic web and intelligent agents. These are all concepts that have existed for a very long time. Primitive implementations of intelligent UIs and knowledge-based expert systems have been around for decades. Successive generations have tried, and largely failed, to get these working, and so we’ve seen these technologies re-invented in waves. The failure was often due both to the primitive nature of the machine intelligence and to the unwillingness of users to accept some measure of control being surrendered to the machine.
The latest wave promises better things, and maybe we are on the cusp of a time when both machine and human are ready to make the leap. The increasing symbiosis between machine and human has seen many of the trust issues erode, as users come to accept that their lives could be made easier by allowing machines to take some degree of control. It may, therefore, be that we see an increase in the number of what Alan Kay termed ‘Indirect Management’ interfaces augmenting the now omnipresent direct manipulation interfaces, as the amount of information we have to process in our daily lives becomes too much to handle.
Indirect Management means machines that learn our preferences, using inference, and that leverage the collective unconsciousness/knowledge of the web to help us manage information overload. Typically, software entities termed ‘agents’ would help manage our goals, tasks or activities.
I think the sheer volume and nature of the information out there, and the growing momentum behind the semantic web, might give this wave a better chance of success. The idea that we directly manipulate everything places too much cognitive load on users; machines need to take up some of that slack if we are to make sense of the digital world, especially as computing becomes more ubiquitous (ubicomp). This is a real challenge for those of us working in Human-Computer Interaction (HCI).
Example of Indirect Management
So, a typical example of how this might work, and something of a familiar metaphor, would be the process of booking a holiday. In the real world we might visit a Travel Agent and give them our general holiday preferences and budget. They may even know us and have tacitly learnt some of our preferences over time (that we had a bad experience on a particular airline, say, or where we live, so they can pick the best airport). We then trust them to use their expertise to look around and come back with options for us to choose from.
Now, if we transpose this example to the web, it may be that we have a trusted advisor agent/site/application on the web (an entity of some sort that we turn to). It would have learnt from its past dealings with us, could leverage expertise and knowledge gained from talking to other customers (and other agents), and would be an expert in knowing where to find the best deals and sources of travel information.
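To make the metaphor concrete, here is a minimal sketch of such an advisor agent. Everything in it is hypothetical illustration, not a real system: the `Trip` and `TravelAgent` names, the rules (budget, airlines to avoid, home airport) and the sample data are all made up, standing in for preferences the agent would have learnt from past dealings with us.

```python
# Hypothetical sketch of an 'Indirect Management' travel advisor agent.
# It holds preferences learnt from past dealings with a user and filters
# and ranks candidate trips on their behalf.

from dataclasses import dataclass

@dataclass
class Trip:
    destination: str
    airline: str
    origin_airport: str
    price: int

class TravelAgent:
    def __init__(self, budget, home_airport, avoid_airlines=()):
        self.budget = budget                        # hard constraint from the user
        self.home_airport = home_airport            # tacitly learnt: where we live
        self.avoid_airlines = set(avoid_airlines)   # learnt: bad past experiences

    def acceptable(self, trip):
        # Rule-based filtering: the 'contract' between user and agent.
        return (trip.price <= self.budget
                and trip.airline not in self.avoid_airlines)

    def shortlist(self, trips):
        # Prefer trips from the user's home airport, then cheaper ones.
        ok = [t for t in trips if self.acceptable(t)]
        return sorted(ok, key=lambda t: (t.origin_airport != self.home_airport,
                                         t.price))

agent = TravelAgent(budget=800, home_airport="LHR", avoid_airlines={"BadAir"})
options = [
    Trip("Rome", "BadAir", "LHR", 300),     # filtered out: bad past experience
    Trip("Rome", "GoodAir", "LGW", 350),    # kept, but not the home airport
    Trip("Rome", "GoodAir", "LHR", 400),    # kept, ranked first
    Trip("Tokyo", "GoodAir", "LHR", 1200),  # filtered out: over budget
]
for t in agent.shortlist(options):
    print(t.destination, t.airline, t.origin_airport, t.price)
```

The point of the sketch is that the user never inspects the raw options: they delegate to the rules, and only the shortlist comes back to the interface for a final choice.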
Interaction Design Implications
So what does this mean for those of us who work in the User Experience and Usability fields? Well, it’s still early days, but it may mean we need to surrender some degree of control at the interaction design level. We are used to crafting interfaces with well-defined behaviours in mind. Indirect Management means we still design the touch points between the user and the machine, but also – perhaps – we need to create the rules and contracts that exist between human and machine below the interface: in effect, to define very primitive (rule-based?) levels of intelligence.
Emergent behaviour may well dictate the overall system intelligence, and this is pretty hard to get a handle on. We can already see this sort of behaviour in numerous recommendation systems, such as Amazon’s and Last.fm’s, and in their early ancestor, Firefly. But these are just the beginning; the real challenges and issues lie ahead. These are exciting times.
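The collective mechanism behind such recommendation systems can be sketched very simply: items are suggested because other users with overlapping tastes chose them, so the ‘intelligence’ emerges from the crowd rather than from any hand-crafted rule. The sketch below is an assumption-laden toy, not how Amazon or Last.fm actually work; the users and items are invented for illustration.

```python
# Toy user-based recommendation: score items liked by other users,
# weighted by how much their tastes overlap with ours. Data is made up.

from collections import Counter

# Each (hypothetical) user's set of liked items.
likes = {
    "ann":  {"book_a", "book_b", "book_c"},
    "bob":  {"book_b", "book_c", "book_d"},
    "cara": {"book_a", "book_c", "book_d"},
}

def recommend(user, likes):
    mine = likes[user]
    scores = Counter()
    for other, theirs in likes.items():
        if other == user:
            continue
        overlap = len(mine & theirs)      # shared tastes weight the vote
        for item in theirs - mine:        # only suggest what we don't have
            scores[item] += overlap
    return [item for item, _ in scores.most_common()]

print(recommend("ann", likes))  # → ['book_d']
```

Even at this scale the output is emergent: no rule says ‘recommend book_d’; it falls out of what the other users happen to like.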