Rodney Brooks is in Town - I, I, Robots
Tue, 26 Aug 2008 23:40:08 +1000
By: gosh'at'DigitalFriend.org (Steve Goschnick)
Rodney started his presentation with an iRobot TV advertisement that went to air recently in the US - in addition to still being a Professor at MIT's AI Lab, he is currently the CEO of the iRobot Corporation in the US (I wonder how much they paid for the company name? - surely someone held that title after Asimov's classic book 'I, Robot' long, long ago, and the much more recent movie of the same title. A side issue: the I, Robot movie - one of my recent favourites - went greatly under-rated by most critics. It's one of those movies that will be recognised more and more as a classic as time goes by. The subtlety of Will Smith's character is great. E.g. while he "has a documented history of violence against Robots" and gets stood down from his detective role due to suspected mental fatigue, he remains far from capable of irrational violence against people, even when taunted by those he suspects of some giant cover-up involving the opening murder. I.e. the 'people' in this movie are appropriately post-modern, and far from your usual Hollywood heroes.)
Image #1: Once they were just toys of fantasy, but that is no longer the case.
"In 2002 there were zero robots in homes in the US and zero in the military. By December 2007 there were more than 4,000,000 in US homes and more than 5000 in the military." They have of course been in industry for some time, but get too close to one of those and it will seriously injure you. What has been changing is who gets to interact with robots. The robots entering US homes are typically single-function, like small floor-cleaning robots.
Brooks (the inventor of the subsumption/reactive robot/agent architecture) is still not into 'deliberation' in his robots (although the Mars robots he was involved with started to ponder a little, according to his own description of them). He also avoided questions on 'inference' (Stephen Bird asked one regarding speech recognition) and another on 'team-work' - perhaps dodging them for commercial reasons? - one of the problems that arises as academics take on deeper commercial interests. The iRobot Corporation robots going into homes are single-button, not even a toggle switch (as per their first model), so as not to bamboozle the average home user, apparently.
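For readers unfamiliar with the subsumption idea, here is a minimal sketch of the core mechanism: purely reactive behaviour layers checked in priority order, with a higher layer 'subsuming' (suppressing) all the layers below it whenever its trigger fires. All the behaviour names and sensor keys here are my own illustrative inventions, not iRobot or Brooks code - the point is only that no deliberation or world model is involved.

```python
# Illustrative subsumption sketch: layers are checked from highest
# priority down; the first layer that produces an action suppresses
# ("subsumes") everything beneath it. No planning, no world model.
# All names are hypothetical, for illustration only.

def avoid(sensors):
    # Highest layer: back away from imminent collisions.
    if sensors.get("obstacle_cm", 999) < 10:
        return "reverse"
    return None  # no opinion; defer to lower layers

def follow_wall(sensors):
    if sensors.get("wall_cm", 999) < 30:
        return "turn_parallel_to_wall"
    return None

def wander(sensors):
    # Lowest layer: the default behaviour, always has an output.
    return "drive_forward"

LAYERS = [avoid, follow_wall, wander]  # priority order, highest first

def tick(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action  # this layer subsumes the rest

print(tick({"obstacle_cm": 5}))  # -> reverse
print(tick({"wall_cm": 20}))     # -> turn_parallel_to_wall
print(tick({}))                  # -> drive_forward
```

The appeal for a single-button home robot is obvious: each layer is simple, testable in isolation, and the whole thing reacts in real time with no expensive scene interpretation.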
He painted the picture of a vast future marketplace for domestic robots: the aging population bulge, and their need for help and social interaction - "happening faster in Japan, probably because of the traditional Japanese belief in spirit within objects, perhaps making it easier for them to accept social interaction with Robots." Some American elderly are outsourcing their lives to Mexico where they can afford to buy personal staff/help for much less money, and likewise with some elderly Japanese outsourcing their lives to Thailand.
He described what robots need to attain, capability-wise, before they will have a very large impact on our society: the visual perception of a 2-year-old child; the language skills of a 4-year-old; the physical dexterity of a 6-year-old; and the social sophistication of an 8-year-old.
He showed some original footage of his robot experiments in 1978, which was very instructive about the advance of technologies used in robotics: it consisted of a video camera ("which cost $50,000 in 1978") mounted on a robot as its eye. The robot advanced about one metre, then stopped for 6 hours while the computer 'digested' and interpreted the scene for obstacles and objects, then advanced another metre before stopping for another 6 hours of computation; and so on. Now, in the DARPA Grand Challenge robot-car rally, the successful unmanned vehicles travel 220 kilometres at about 25 kph and more, along a course not divulged before the start of the race. He showed a video of the visual scan of a system using 'real-time radar on a chip'. Why is DARPA running this rally? Because the US military wants 15% of its vehicles to be robotic (capable of being unmanned) by 2015.
In terms of robustness, he showed footage of small robots that US soldiers use in Iraq (little tank tracks about 2 foot long, and a camera on a four-bar-linkage stalk). They literally 'throw' the robot into the open window of a hostile building. It searches all rooms, relaying video back to soldiers outside with a laptop (it reminded me of the War of the Worlds movie - although the alien's reconnaissance robot was a one-eyed serpent's head and body extending itself through mid-air, structurally supported from somewhere outside, it nonetheless had the same functionality as those on the ground in current-day Iraq). At one stage one fell 10 feet after going over a ledge, righted itself, then continued on its way, virtually uninterrupted. The laptops have been revised with a standard game controller, to suit the recently grown-up kids, often from the poor side of town, who are operating these things.
He discussed Moore's Law - how a generation of Silicon Valley designers and product developers simply needed to look at a chart on the wall to see 'when' they needed to have their product ready for market - i.e. when the necessary computation would be available on the street. Moore's Law has been very useful to product designers, developers and producers.
He then borrowed from Moore's Law, adapting it to the disk space/secondary memory available in iPods: in 2003 you could get a 10 GB iPod for $400. For the same $400, the capacity has been doubling every year, arriving around July of each year. So, by 2013 an iPod will be able to hold all the world's published songs. By 2020 an iPod will be able to hold all the world's movies, including all those from Bollywood (all the 'good' movies by 2014). He used this progression in memory capacity as a lead-in to the amount of data that robots will have access to. If every object sold has an RFID tag, then a robot, having sensed such an object, can look it up to get the 'exact' details about the object, and then use those to determine how to deal with it. That sort of detail, combined with GoogleEarth down to a 6-inch patch, means that image recognition within robot vision is not going to need to be very sophisticated at all - contrary to what most AI researchers have previously believed. This obviously leans towards his subsumption architecture, where AI deliberation is not a consideration.
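The doubling arithmetic behind those projections is easy to check. A quick back-of-the-envelope sketch, starting from the 10 GB / $400 figure Brooks quoted for 2003 (the printed capacities are my own calculation from his doubling assumption, not figures from the talk):

```python
# Back-of-the-envelope check of Brooks's doubling claim:
# a $400 iPod held 10 GB in 2003, and capacity for the same
# price doubles every year.

def ipod_capacity_gb(year, base_year=2003, base_gb=10):
    """Projected capacity (GB) at a fixed $400 price point."""
    return base_gb * 2 ** (year - base_year)

print(ipod_capacity_gb(2013))  # -> 10240 GB, i.e. about 10 TB
print(ipod_capacity_gb(2020))  # -> 1310720 GB, well over a petabyte
```

Ten doublings from 2003 gives roughly 10 TB by 2013, and seventeen doublings gives over a petabyte by 2020 - which puts "all the world's published songs" and then "all the world's movies" within plausible reach, on that assumption.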
Another theme Brooks underlined for the audience was the fixed cost of mechanics. While computation for a fixed cost is increasing exponentially, mechanical parts have hardly fallen in price at all - therefore mechanics is being replaced with computation wherever possible (witness the modern car). So where does that place sophisticated robots? As high-cost, high-value manufactured goods that ought to be upgradeable, computationally, after initial purchase.
Some other points he made, but put briefly here:
1. Parallel processing: his (subsumption) robots don't need it.
2. Learning, once achieved, can be communicated to any number of robots. Therefore, most of the learning and 'slow stuff' will be between human and robot, not between robot and robot.
3. He didn't think pattern recognition had advanced very far in 20+ years, and neither does it need to, given the (above outlined) doubling of secondary memory (e.g. disk capacity) every year.
4. Rio Tinto is completely automating a mine in Western Australia, given the shortage of workers in WA. (This ties in with my long-held view about Australia in general: while it has long been considered a national weakness not to have a large population relative to such a vast continent of resources, once you get a significantly advanced robotic workforce, and when much of manufacturing is automated, that same small population becomes an advantage, as the sweatshops of the world will lose their commercial advantage, along with the semi-human slavery they are often built upon ... we just have to prevent all the wealth simply being diverted straight to overseas-based corporate entities - which will no doubt be easier said than done.)
5. He showed video footage of a robot experiment, where the robot had facial expressions, gestures, and other social cues - e.g. it looked where the human communicator was looking, lowered its head, raised its head and moved back when a person got too close, looked at objects the human held up for its attention, etc. Despite the fact that the robot had no understanding of English at all (a point that the human test subjects were deliberately left ignorant of), one guy had a 20-minute conversation with it!
6. Another experiment (video) showed the increasing dexterity that research robots are now capable of: the robot felt things with 'pressure' tactile feedback, picked up a fragile cardboard box very gingerly, and then put it down gently on a shelf some feet away.
7. When asked about his vision of robots and our immediate future, Rodney Brooks answered with three short-range practical domain issues only:
a). "There will be lots of single-purpose robots" (e.g. his iRobot Corporation's vacuuming robot).
b). "Underwater applications - the oceans of the world remain largely under-explored, Robots can help change that rapidly"
c). "The environment is of growing concern, and so robots that can test various aspects of the environment will find great favour."
Of course, he does have his own speculations, put elsewhere, such as in his 2002 book, Robot: The Future of Flesh and Machines.
Note: The Rodney Brooks presentation was put on by NICTA at their University of Melbourne premises, as part of their 'Big Picture' series of presentations. Even so, I think they under-estimated the drawing power of Rodney Brooks, as 'they were sitting in the aisles'.