March 2007 - Seminars Roll On
Microsoft Research Ponders HCI in Year 2020
Went to Steve Howard's presentation today in the IDG Seminar (our Interaction Design research Group's weekly meet), about his invited attendance a few days ago at 'HCI 2020', a think tank hosted by Microsoft Research in Seville, Spain, which brought together thought leaders in HCI (Human-Computer Interaction) to consider the not-too-distant future of HCI in the year 2020. Steve Howard is our research group leader (and new Head of Department in about one week).
The think tank's web page is here: http://research.microsoft.com/HCI2020/ and it states the intention of the event as: "By bringing together the world's leading thinkers on this topic we hope to discuss, debate and define a new agenda for HCI in the 21st century - one that puts human values at its core." A list of the people who attended, with a brief biosketch of each, is there too, as is the reading list set as 'homework' before the talk-fest.
The assembled experts were asked two questions: "What major changes would you want to make or see by 2020?" and "Was there a shared world view on core human values (2G values)?" They are going to compile their results as a book, some time later this year.
Before looking forward, they looked back:
1G - 1st generation - values in HCI were: efficiency, effectiveness, and productivity [I would have thought 'user satisfaction' too]. The HCI community has largely solved the 'unusable technology' problem - i.e. the once-common situation where completely unusable technology got to market. HCI has apparently gone from 'Know thy user' to 'the user is dead' - now it's about 'instrumental agency', about small opportunistic software - not the large apps of today. [Barbara Grosz quoted someone the other day as saying: "there are only two groups of people that call their customers 'users' - the illicit drug industry, and the software industry." ... well, they both deal with addiction, but only one of them is illegal!]
Five things the HCI community don't do well:
1. Patience (disruptive technology is not handled well)
2. Value must be greater than the pain e.g. SMS's unpredicted success.
3. Data. i.e. the community doesn't understand content/data. [Hear, hear! - how many times has my emphasis on the importance of good data analysis been dismissed in HCI-peopled conversations.]
4. Users are no longer the middle majority, they are the long tail ... e.g. inclusive of the aged, and all social groups. [There's that term 'user' again, in the new dialog - so, I think it's a case of the 'user' is dead, long live the User?]
5. Security is killing UeX (the user experience).
The "old psychology" approach has made little progress. HCI failures include: video conferencing, eBooks, intelligent tutors, ...
Looking Forward, in their collective attempt to answer question 2:
2G core human values are ...
The future scenario is pervasive, embedded, ubiquitous. We will have blended systems, meaning a mix of digital, built, and social systems.
"Stay calm" computing is not enough, "we want to engage, to learn".
They did focus group work on a number of dialectics:
Vitality & Fragility; Creative & Automation; Self Expression & Consumerism; Connectedness & Isolation; Restlessness & Industriousness; Aesthetics & Utilitarianism.
So, what is 2G in the near future talking about? Here are some snippets of their collective view that I took away from the IDG seminar - it's:
Not substitution (but is augmentation) ... [I certainly agree here, that's my whole philosophy regarding agents wrt humans];
Not task-oriented (instead - activity, artifacts as mediator not mechanisms) ... [I can see blind-spots in that one, if they disregard tasks and the goals that often drive them];
Not valueless (full of - value laden, care, goodwill) ... [neither was 1G];
Not usability (but rather - use, fun, engagement).
Not HCI (but - use-centred interaction design, design ethnography).
Not methods (but focusing on - hard arenas, lived, experienced).
Mmmm. Bit of a bulge in the attending cohort wrt Microsoft people and the Ubiquitous Journal folk, but otherwise a fairly wide range of people and backgrounds, representing the Thought Leaders. Although, no cognitivists nor formalists (user models, grammars, etc) is a bit of a surprise - hopefully either an oversight or under-attendance, rather than engineered... The book should be pretty interesting when it hits the shelves.
Barbara Grosz's second presentation in DIS, University of Melbourne
I attended Prof. Barbara Grosz's second presentation in our building today (see the entry of the 8th March below, for some details of her first talk and the tool 'Colored Trails'). This one was titled: It's Time to Talk: Timing Interruptions in Dynamic Human-Computer Multi-Agent Environments. It was primarily about how an agent could or should prompt a human for information (with the consequence of interrupting their work-flow), when it believed the human had extra information, beyond the system.
Near the end of this talk she showed some slides where Coloured Trails was used in a research application, where players on the squares were shown to represent things like an 'ambulance' and a 'fire truck' in emergency situations. This got me thinking that Coloured Trails is really the ultimately downsized dungeons'n'dragons game (this side of the text-only D'n'D), where you can move N, S, E or W but only if you have a coloured chip which matches the next square in that direction ... that is, very simple D'n'D! i.e. While you may possess all sorts of coloured chips and any number of them, there is a simple condition on a single state that governs further progress in any direction. I put the question: "What if a fire truck needed two things, e.g. at least half a tank of petrol and a full load of water, before it could advance to a new fire-outbreak (the goal)?" Ans: 'Well, you could represent those two conditions with a specific colour.'
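Just to make the contrast concrete, here is a tiny Python sketch of the two kinds of condition - my own illustrative code, not anything from Colored Trails itself, and all the names are my assumptions:

```python
# A sketch of the two kinds of movement condition (illustrative only,
# not Colored Trails' actual implementation - all names assumed).

def ct_can_move(square_colour, chips):
    """Colored Trails' rule: you need one chip matching the colour of
    the adjacent square you want to move onto."""
    return chips.get(square_colour, 0) > 0

def fire_truck_can_advance(state):
    """The compound condition from my question: at least half a tank
    of petrol AND a full load of water before advancing to the goal."""
    return state["petrol"] >= 0.5 and state["water"] >= 1.0

print(ct_can_move("red", {"red": 2, "blue": 1}))              # True
print(fire_truck_can_advance({"petrol": 0.4, "water": 1.0}))  # False
```

The suggested answer at the talk amounts to collapsing the second function into the first: reserve one colour to mean 'petrol-and-water satisfied'.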
Image #1 : A single D'n'D Interface
Hmm... I did a very simple D'n'D program in Java when I was teaching second-year students Java a few years ago (see Image #1). It is graphically simple, but it can hold multiple objects - icons representing a sword, potion, jewel, boot, etc - and you can place multiple conditions/constraints on passing a particular point. You can also advance well beyond the next square - you can effectively have a 'portal' to any square (see Image #2).
Image #2 : The underlying map for a specific D'n'D game.
What it didn't have was multiple players, and hence no exchange/collaboration feature enabling pieces to pass between players (e.g. Proposed offer, Proposed gain, Yes, No, Ignore, Counter-offer). If it did, then such a customisable D'n'D game would be very useful in these simulations in Mixed Human-Agent Systems. Colored Trails lets each player see what the other players possess - the colour and number of their chips. In D'n'D it would be useful to show 'some' of the artefacts one has, but 'hide' some others - at least at the meta-game level (when specifying a specific D'n'D) it would be useful to have the choice of whether or not to allow a player to hide some number of artefacts.
Exposing and Hiding Resources in a more Complex Memory State
This all reminds me of a more pressing issue I have in the Knowledge Tree of the DigitalFriend, about which resources (via the directories they are in) should be visible to certain processes. e.g. When sync'ing between devices, are there machines (e.g. the work PC) where certain directories shouldn't be copied? It's a similar issue to letting other users 'see' some sections of the Knowledge Tree, if the DigitalFriend was opened up to a group of users - e.g. the family, or a circle of friends, or even one other friend. This is an access issue, and access privileges can always be done in one of two primary ways: allocated on a hierarchy-of-trust basis; or, via the ability to allocate specific directories to specific individuals (or machines, or other 'subjects'). I always prefer the non-hierarchical approach when building access levels, as it is far more flexible for the user. But it does require significant 'keys', in terms of the number of bits that can be turned off or on, to represent all of the directories (in this case). Well, I'll make the first level 'public', meaning that everyone/every device has read access to resources in here. Then there are 8 directories at the second level of FUN, 64 at the third level and 512 at the fourth level. I'll cover down to the fourth level, which means I need 8 + 64 + 512 = 584 bits to represent individual YES or NO permissions on the whole visible dial's worth of directories. Beyond the fourth level, permissions are based on those of the parent at the fourth level (i.e. hierarchical access thereon down).
The good thing about the FUN interface is that I can set these keys visually, interactively. i.e. All 584 subdirectories/sub-entities have a visual presence on the one screen at the one time. Therefore, I can create a new access key by setting all tiles to black, representing 'no access' across the Knowledge Tree, then click on individual directories turning them 'on' to a bright colour, to grant access to this key. I then save such a key with each machine I sync with, and the syncing process will need to respect the key. When I make the DigitalFriend capable of sharing parts of a user's Knowledge Tree across groups (i.e. incorporating 'Social Worlds'), I'll just allow a user to create and issue these same keys. There will be update issues associated with changes in the Knowledge Tree directory structure. There's also the issue of needing to set up more than one access level, e.g. read, write, execute, etc. I could use multiple colours in each directory to do so - e.g. the eight colours of Octadial, for 8 levels of access.
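To make the numbers concrete, here is a throwaway Python sketch of such a 584-bit key, using a plain int as a bitset - purely illustrative, nothing to do with the actual DigitalFriend implementation, and the function names are mine:

```python
# A throwaway sketch of a 584-bit access key as a plain Python int used
# as a bitset - illustrative only, not the DigitalFriend's actual code.

LEVEL_SIZES = [8, 64, 512]      # directories at FUN levels 2, 3 and 4
TOTAL_BITS = sum(LEVEL_SIZES)   # 8 + 64 + 512 = 584

def make_key():
    """All tiles black: no access anywhere."""
    return 0

def grant(key, bit):
    """Turn a directory 'on' (the bright-colour click)."""
    return key | (1 << bit)

def revoke(key, bit):
    """Turn a directory back 'off' (black again)."""
    return key & ~(1 << bit)

def has_access(key, bit):
    """What the syncing process would check before copying a directory."""
    return bool((key >> bit) & 1)

key = grant(grant(make_key(), 3), 70)   # grant two directories
print(has_access(key, 3), has_access(key, 4), has_access(key, 70))
```

For multiple access levels (read, write, execute, ...), the same idea extends to a few bits per directory rather than one - e.g. 3 bits per directory would give the 8 levels that eight colours could display.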
Barbara Grosz is in Melbourne Town
Prof. Barbara Grosz is visiting our department for several weeks and she gave the first of several agent-oriented presentations in the context of 'Mixed Human-Computer Networks'.
She has done recent work on 'Group Decision Making' and rephrased it as "coordinating the update of intentions". She made the early point that when people meet as a group and discuss an agenda, they most commonly form 'Shared Plans for Collaborative Action'. Their deliberations can be grouped under four headings, each about agreeing upon:
- Intentions of the group
- Mutually agreed beliefs
- Individual (or sub-group) plan for sub-acts
- Intention that the collaboration will succeed (no conflicting intentions)
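Out of interest, the four headings could be caricatured as a toy data structure - purely my own reading of the list above, not Grosz's actual SharedPlans formalism:

```python
# A toy rendering of the four headings as a data structure - my own
# reading of the list above, not Grosz's SharedPlans formalism.
from dataclasses import dataclass, field

@dataclass
class SharedPlan:
    group_intentions: list = field(default_factory=list)  # intentions of the group
    mutual_beliefs: list = field(default_factory=list)    # mutually agreed beliefs
    sub_plans: dict = field(default_factory=dict)         # member -> plan for sub-acts

    def permits(self, member_intention):
        """The fourth heading: members are obliged not to commit to
        intentions that were not decided upon by the group."""
        return member_intention in self.group_intentions

plan = SharedPlan(group_intentions=["see a movie"], mutual_beliefs=["cinema is open"])
print(plan.permits("see a movie"), plan.permits("stay home"))
```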
She identified two primary constraints within teams: the 'common content' constraint (static), and the coordinated cultivated requirements (dynamic). She also emphasised that the 'intentions of the group' "also obliged individuals not to commit to intentions that were not decided upon".
This part of the talk reminded me of other agent research currently underway in the US wrt introducing AO software into the formal 'meetings' of organisations, including automatic capture of meeting minutes - very large budgets have been allocated, and hence a large cross-section of the US AO and HCI research community is hard at it. While it is certainly an area where AO software is likely to be able to make a contribution - even if only by formalising the meeting process and getting accurate minutes, for those organisations/meeting-chairs who currently run a loose, time-wasting meeting - it did surprise me that such vast sums of research money have been put into this space. But then, I guess you can view 'the company meeting' as the stadium where the 'corporate mind' can be viewed in action - its ongoing goals, beliefs and intentions. That will certainly put more emphasis on system security!
Anyway, Professor Grosz wasn't discussing corporate meetings; her example was simply two people deciding on going to a movie. Her research was less applied and more fundamentally about human collaboration and negotiation. She put it like this:
The Challenge for Mixed Human-Computer Networks is: to design agents that integrate seamlessly with humans.
This Requires: an understanding of human collaboration and negotiation behaviour.
Their (Barbara and colleagues') approach was to develop a testbed, a game they have called Colored Trails (CT), which provides a relational scenario with objectives, obligations and tasks. They used a game to sidestep the need to do much domain modelling ("the game has minimal domain modelling - no more than a path-finding algorithm"). They settled upon simple 2-player teams rather than the 10-person teams originally planned.
GAME RULES: CT is a simple but reportedly engaging multi-user game. It consists simply of a grid of different coloured squares. Each player is located somewhere on the grid. Each player has a set of coloured 'chips' which they can use to move around the grid. The goal is to get to a particular square on the grid, which has been set as the Goal position. A player can only advance one square if they have a chip that is the same colour as the adjacent square they intend to move to - when they do, the chip gets consumed. If they don't have the colour they need, they can swap one of their other coloured chips with a player who has it. Each player can see the number and colour of the chips that the other players have. There are four basic actions involved: offer a chip exchange, accept/decline, transfer the chip, move to the next square.
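The movement rule is simple enough to sketch in a few lines of Python - again my own illustrative code, not the actual CT implementation, with the board and chip representations being my assumptions:

```python
# A minimal sketch of the CT movement rule as described above
# (illustrative only, not the actual CT implementation).

def try_move(position, direction, board, chips):
    """Advance one square if a chip of the target square's colour is
    held (the chip gets consumed); otherwise stay put.
    board maps (row, col) -> colour; chips maps colour -> count."""
    deltas = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
    dr, dc = deltas[direction]
    target = (position[0] + dr, position[1] + dc)
    colour = board.get(target)
    if colour is not None and chips.get(colour, 0) > 0:
        chips[colour] -= 1      # the chip is consumed on a legal move
        return target
    return position             # no matching chip (or off the board)

board = {(0, 0): "red", (0, 1): "blue", (1, 0): "green"}
chips = {"blue": 1}
print(try_move((0, 0), "E", board, chips))  # (0, 1) - the blue chip is spent
print(try_move((0, 1), "E", board, chips))  # (0, 1) - no chip/square, stays put
```

The chip-exchange actions (offer, accept/decline, transfer) would sit on top of this, mutating the players' chip dictionaries by agreement.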
CT is available from her Harvard site at: www.eees.harvard.edu/ai/ct3.
They then did a bunch of experiments involving games with 5 human and 3 agent players - "the humans didn't know there were agents playing" (and vice versa ;). Scoring was based on: performance, distance to goal, number of chips left, and the path taken. Findings included: "people are not simply rational, they are not simple 'utility maximisers'." Nothing new there. [She described the "economic ultimatum game", where you have $100 to give away, but you must give it away to get it. How do you disperse it? Well, a 'utility maximiser' would give $1 to 100 different people, to spread their influence. But it turns out that many people wouldn't accept $1. Some people won't even accept $60 ... as it "leaves them with a feeling of obligation that they don't want." Mmm, I came across that in the IDEA Lab during a study that involved interviewing experienced data modellers versus those new to data modelling, to look for significant differences in the way they go about modelling a particular scenario. We were paying $50 to the industry data modellers - those who had more than 5 years of modelling experience. Most of them wouldn't accept any money for the hour of their time we consumed.] The sorts of 'benefits' they studied were: individual benefit; social benefit; what it does for trade; advantage of the outcome... It got very confusing here, as the theatre was double-booked and she rushed through the last part of the prepared talk in just a few hurried minutes. I'll wait for a paper or two. 'Kobi' Gal was mentioned as a recent PhD researcher whose work uses and describes CT3. Another site was given for some other nifty Harvard-originated software called IBAL: Program Probabilistic Models ... "gives you inference and learning algorithms". The site is at: www.eees.harvard.edu/~avi/IBAL/
On the way home in the car, same evening, I listened to a computer game review program on the local Triple-J radio station. They reviewed a new game for the Nintendo Wii, called either Mario for Wii, or was it Wario? Most of the discussion was about the various moves and gestures you have to do with the Wii wand (e.g. rowing a boat, sharpening a pencil), but they briefly mentioned a 'team mode' in which you hook up with another player to 'operate an object'. Sounds interesting. They didn't say what the objects were, but I pictured in my head a 'bicycle' with one person pedalling and the other steering, somewhere in Nintendo 3D land. Now there is another good domain for the study of team behaviour, where there is little necessity to model the domain - everyone knows what a bike is. And isn't that what CT is: a simple coloured grid 'object', with a handful of rules (but research-oriented enough to automatically capture the data), operated by several people, and some agents. So Wario for the Nintendo Wii may also be a very interesting test bed for studying team behaviour at some level.