I disagree with what Jose has to say about emergent systems: I don't think we know enough yet to be able to design a system such that a desired, pre-specified behavior results. It's a great idea. I just don't see anyone doing it.
OTOH, I really like his point that there just has not been a need for agents that can perform a wide variety of previously unspecified tasks: we can always build a specific task system. This is a tough AI nut to crack: essentially the automatic programming problem. Distributed.
During the nineties, at various times I made public critiques of our agents initiative. I critiqued the lack of an operational definition of "software agent": how can we do science if we don't agree upon the fundamental terms of discourse? Similarly, I critiqued the use of "intelligent agent" as essentially vacuous.
And how can we even do engineering when not only have we not identified a class of problems for which our specific technology, "agents", was particularly useful, but in fact we never had an operational definition that would distinguish our technologies from others?
T'would have been ok were agents a practical engineering initiative. Scruffiness is useful. But KQML and FIPA suffered from academics designing languages in a vacuum instead of studying applications to see what might be needed.
And nothing was ever needed. I gave an invited talk in Paris on the financial applications of software agents. I surveyed the literature and described a small set of applications for which it was commonly claimed agents would be good. Then I pointed out that each of these applications was being done already without agents.
And please, enough with the tired Travel Example. Haven't we proved that no one is persuaded?
It's not that I believed there was no use for agent technology, just perhaps not for general problem solving. In 2000, I gave a talk on "Agent-Based Software Engineering". Yes, Nick Jennings was in the audience. I tried to make the case that KQML plus a specialized "inner language" and a message platform with store-and-forward capability (JATLite) offered certain advantages in specific distributed applications: concurrent engineering in this case.
However, my last slide was titled "Future: the Vanishing Agent". Nick and others violently objected. My advice in that slide was to either throw away or hide all of this complicated (and to date unsuccessful) technology in favor of the newly emerging web service standards.
We have some good lessons from agent languages, such as: there should be a "sorry" message as well as an "error" message. And there should be a standard outer language with some kind of useful speech acts, as was the case in KQML and FIPA, as well as specialized inner languages for specific domains. We just need to get the details right by experimentation in specific domains rather than purely abstract thought informed by no specific domain at all.
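To make the outer/inner split and the sorry-vs-error distinction concrete, here is a minimal sketch. The performatives and field names are invented for illustration, loosely in the spirit of KQML; they are not taken from any real specification.

```python
# Toy inner language: a fact base keyed by query strings (illustrative only).
FACTS = {"price(widget)": "42 USD"}

def lookup(query):
    """Domain-specific (inner language) evaluation of a query."""
    return FACTS.get(query)

def handle(message):
    """Dispatch on the outer-language performative; the inner-language
    content is opaque to this transport layer."""
    performative = message.get("performative")
    if performative not in ("ask", "tell"):
        # Malformed outer language: the message itself is broken -> "error".
        return {"performative": "error",
                "reason": f"unknown performative: {performative!r}"}
    if performative == "ask":
        answer = lookup(message["content"])  # inner language, domain-specific
        if answer is None:
            # Well-formed request we simply cannot satisfy -> "sorry", not "error".
            return {"performative": "sorry",
                    "reason": "no answer for that query"}
        return {"performative": "tell", "content": answer}
    return {"performative": "sorry", "reason": "tell not handled here"}
```

The point of the split: a broker can route, log, and reply-to-sender using only the outer envelope, while "sorry" cleanly separates "I understood you but can't help" from "your message was garbage".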
But web services also offer a lesson for the agent community: constrain the functionality of any given "agent/service" so that it can be described in a standard way. What good is an agent that will accept any message when you have no idea what it will do with it?
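A sketch of that constraint, assuming an invented description format (not any real standard): the service publishes a fixed, machine-checkable list of the operations it accepts, and rejects everything else, rather than silently swallowing arbitrary messages.

```python
# Hypothetical service description: the only operations this service accepts,
# with declared input and output shapes. The format is invented for illustration.
DESCRIPTION = {
    "service": "quote-service",
    "operations": {
        "getQuote": {"input": {"symbol": str}, "output": {"price": float}},
    },
}

def invoke(operation, args):
    """Accept only operations declared in DESCRIPTION, with type-checked inputs."""
    spec = DESCRIPTION["operations"].get(operation)
    if spec is None:
        # A constrained service refuses anything outside its description.
        raise ValueError(f"undeclared operation: {operation}")
    for name, typ in spec["input"].items():
        if not isinstance(args.get(name), typ):
            raise TypeError(f"argument {name!r} must be {typ.__name__}")
    return {"price": 101.5}  # stubbed result, matching the declared output shape
```

Because the description is data, a composer can read it and know in advance what the service will and won't do, which is exactly what an "accept anything" agent denies it.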
Our job is to subvert the web services community, turning it into semantic web services, with composers that can offer new flexibility both to individuals and to businesses. We could build a "World Wide Wizard" that programs the world. This is a great opportunity for us.
Unfortunately, we seem to be going at it the same way as before, making many of the same mistakes. Why should formal service description languages be developed by academics in a vacuum again? And are we really going to bring up the Travel example again?
A fundamental problem is incentives. Academics get rewarded by doing something new and industry folk are told to keep secrets: neither is motivated to share and reuse. As a result, we converge on useful technologies very slowly, except for open systems.
This is why I work on the mostly unfunded open Semantic Web Services Challenge. We pose some small problems of a B2B nature, using as many de facto standards as possible. Then it is up to the participants to solve these problems using their own formalisms. We know if they really solve them by examining the messages they send. And we encourage everyone to steal from everyone else: reuse from the open services and ontologies sandbox.
At the very least, we certify which technologies work for which problems. And provide some evaluation of their software engineering efficacy: the workshop looks at each participant's code and judges how much change had to happen to go from one problem to the next.
In the best case, we converge upon useful formalisms that can be reused to good effect and might actually be useful in industry, if we can overcome some of the other tech transfer barriers, such as the perceived need to have a fixed legal contract with each supplier.
This is hard work. But industry is moving to services, and there is a need emerging for everything that we've learned about knowledge representation, reasoning, messaging for distributed computations, and AI planning - once public services start having effects in the world.