Maybe it's me, but I've not been able to exchange more than a few sentences with ALICE before getting annoyed.
I guess to provide good natural language capacities there are several choices to look at:
1. The talking head experiments by Luc Steels et al. showed very promising results. The main idea is to talk about things in view during a guessing game. The agents build up discrimination trees over subsequent games and so come to an increasingly good understanding of each other's utterances. What I like most about this approach is that
a) the agents can be trained by the players, so the devs don't have to do all the teaching, programming etc.,
b) the language will be the language used by the players and even multiple words (e.g. table, Tisch and mesa) can be used to reference the same thing and
c) synonyms, hyponyms etc. are sorted into a taxonomy naturally.
2. A (top-level) ontology like DOLCE, WordNet, SUMO, OpenCyc, BFO etc. can be incorporated and fleshed out to look up the meaning of things. This will definitely be a loooot of work, but it may provide the NPCs with a certain understanding of the players' utterances.
I'd prefer to use DOLCE+WordNet, since DOLCE is a very cleanly structured ontology and is provided in KIF and OWL. WordNet is quite large, and there is an adaptation of WordNet to DOLCE, discussed in Gangemi et al., "Sweetening Ontologies with DOLCE", 2002.
Also one should take a look at
http://wonderweb.semanticweb.org where the deliverables D17 and D18 are about the same topic.
But enough reference spamming for now...
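To make approach 1 a bit more concrete, here is a minimal, self-contained sketch of one discrimination game: an agent refines a tree over a sensory channel until some node matches the topic object but nothing else in the context. All names, the attribute channels, and the "always split in half" rule are my own toy simplifications, not Steels' actual implementation (which e.g. picks channels and split points differently and couples this to word learning):

```python
class Node:
    """A discrimination-tree node covering one interval of a sensory channel."""
    def __init__(self, channel, lo, hi):
        self.channel, self.lo, self.hi = channel, lo, hi
        self.left = self.right = None  # children refine this interval

    def matches(self, obj):
        return self.lo <= obj[self.channel] < self.hi

def discriminate(roots, topic, context):
    """Find a node matching the topic but no other object in the context."""
    stack = list(roots)
    while stack:
        node = stack.pop()
        if node.matches(topic) and not any(
                node.matches(o) for o in context if o is not topic):
            return node
        stack.extend(c for c in (node.left, node.right) if c)
    return None  # discrimination failed -> the tree must grow

def grow(roots, topic):
    """On failure, split the leaf that covers the topic (toy expansion step)."""
    for root in roots:
        node = root
        while node.matches(topic):
            if node.left is None:  # reached a leaf: refine it
                mid = (node.lo + node.hi) / 2
                node.left = Node(node.channel, node.lo, mid)
                node.right = Node(node.channel, mid, node.hi)
                return
            node = node.left if node.left.matches(topic) else node.right

# One toy scene: the topic is the only very large object in view.
roots = [Node("size", 0.0, 1.0), Node("shade", 0.0, 1.0)]
topic = {"size": 0.9, "shade": 0.2}
context = [topic, {"size": 0.8, "shade": 0.6}, {"size": 0.3, "shade": 0.7}]

winner = None
for game in range(10):  # repeated games, growing the trees on each failure
    winner = discriminate(roots, topic, context)
    if winner:
        break
    grow(roots, topic)
# winner now covers size in [0.875, 1.0) -- only the topic falls in there
```

In a real talking-heads run, the winning node would then be associated with the word the player used, which is how points a) and b) above fall out for free.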

Of course, the best would be to combine both approaches to make the NPCs really smart. AFAIK nobody has done this so far :-D
So much from me,
MC