IJCAI'99  Workshop on
 
NEURAL, SYMBOLIC, AND REINFORCEMENT METHODS
FOR
SEQUENCE LEARNING
 
 Co-chairs: Lee Giles and Ron Sun
 
Stockholm, Sweden
August 1, 1999
 


Sequence learning is an important component of learning in many task domains: inference, planning, reasoning, robotics, natural language processing, speech recognition, control, time series prediction, financial engineering, DNA sequencing, and so on.  There are many different approaches to sequence learning, resulting from the different perspectives taken in different task domains.  These approaches deal with somewhat differently formulated sequential learning problems (for example, some with actions and some without).

Sequence learning is a difficult task, and more powerful algorithms are needed in all of these domains.  A necessary first step is to understand the state of the art in the various disciplines that study this topic.  There is therefore a need to compare, contrast, and combine different techniques, approaches, and paradigms in order to develop more powerful algorithms.  These techniques and algorithms include recurrent neural networks, hidden Markov models, dynamic programming (reinforcement learning), graph-theoretical models, evolutionary computational models, AI planning models, rule-based models, and others.  We need a gathering that includes researchers from all of these orientations and disciplines, beyond narrowly focused topics such as reinforcement learning or neural networks for sequential processing.

The following questions and issues will be addressed:

1. underlying similarity and difference of different models
   1.1 problem formulation (ontological issues)
   1.2 mathematical comparisons
   1.3 task appropriateness
   1.4 performance analysis and bounds

2. new and old models: capabilities and limitations
   2.1 theory
   2.2 implementation
   2.3 performance
   2.4 empirical comparisons in various domains

3. hybrid models: approaches, theories and applications
   3.1 foundations for synthesis or hybridization
   3.2 necessity, advantages, problems, and issues

4. successful sequence learning applications and future extensions
   4.1 examples of successful applications
   4.2 generalization and transfer of successful applications
   4.3 what is needed to enhance performance


(6 invited and 11 contributed papers)

PROGRAM:

(Each speaker should leave 5 minutes of their allotted time for questions and discussion.
Each invited speaker with a 40-minute presentation should include 10 minutes for discussion.)
 

9:00-9:05      Opening remarks, Ron Sun and Lee Giles
 

1. RL and SDM:
 

10:25-10:50 break  

2. Sensory-Motor Sequences:
 

3. Poster Summaries: (5 minutes each; 12:10-12:30)

12:30-2:00 lunch
 

4. Neural Networks:
 

5. Application-Specific Models:
   
6. Panel Discussions and Conclusions:

7. Poster Presentations:
   (posters should be set up before the morning coffee break)


FURTHER POINTS

To encourage discussion, accepted contributions and discussion topics are published on the World Wide Web before the workshop.  As a consequence, the content of all the talks is known beforehand, so that presentations and discussions can focus on the technical questions.

Hardcopy "Working Notes" will be available at the workshop (and are also available here online). We are also considering publishing an edited book after the workshop with a major publisher.
 
 

Accessing the workshop papers in postscript:
from Lee Giles' page
from Ron Sun's page

If you want to participate in the workshop, see:
http://www.cs.cmu.edu/~ijcai99
 

Committee Members:

Jack Gelfand, Princeton University
Lee Giles, NEC Research Institute
Marco Gori, U. of Florence
M. Niranjan, Cambridge University
Ron Sun, U of Alabama/NEC RI
Gerry Tesauro, IBM
 

Dr. C. Lee Giles (co-chair)
NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA
609-951-2642, giles@research.nj.nec.com

Professor Ron Sun (co-chair)
Department of Computer Science, The University of Alabama, Tuscaloosa, AL 35487
609-951-2781, rsun@cs.ua.edu