Actually, I'm not in computer science or artificial intelligence. My doctoral work is interdisciplinary, primarily electrical engineering. I am building microelectrode arrays on silicon so that we can record the activity of many cells simultaneously, in vivo. I'm also doing the neurophysiology itself. My advisor, who runs the lab I work in day to day, is a neurophysiology professor with a strong interest and involvement in physiologically accurate computer models of neurons and their networks. We are collaborating with a silicon micromachining lab at Stanford to build the silicon devices; I spent most of 1994 up there building the devices that I'm currently using, with lots and lots of help from the folks in the Stanford lab. My undergraduate degree was in physics (minor in philosophy), and I actually have no real background in computer science or artificial intelligence! I pass on this AI symposium pointer more as a matchmaker than a suitor.
However, I want to encourage you, Don, or anyone else here, to check out this symposium and perhaps write to the organizer, Dolores Canamero. According to her bio on the Web, she was a postdoc in Rodney Brooks's lab at MIT, and I wonder whether she was one of those you mentioned who contacted you from MIT and then disappeared. I sense that there's perhaps a paradigm shift (uh-oh, buzz word!) going on here: maybe all those apparently failed early attempts at synergy across fields did have an effect after all, and maybe this symposium is in part a result of those early interactions.
And finally, they seem open and interested in synergy with people from other fields. To quote from the symposium Web site:
"Contributions from fields others than AI, ALife, and robotics (e.g., arts, biology, humanities, social sciences), are also strongly encouraged."
This looks like a relatively highly interactive symposium, so there may be a very good opportunity to cross-fertilize. Check out the Web site! :-)
A couple more thoughts:
1) I looked on the Web for information on the Meck striatum work. I balked when the NY Times page wanted me to register. I did a general Web search and came up with one relevant hit:
http://www.synapcom.com/meck.htm
This looks relevant and interesting. However, it was posted in February 1996 and doesn't mention Matthew Matell, the research assistant mentioned in the NYT article, so there's probably newer information that I haven't found on the Web. I didn't find anything at the Duke Web site except for Meck's email address:
meck@psych.duke.edu
Neither Meck's home page nor a literature search that I did turned up any journal publications by him involving the striatum. Lots on the psychology and pharmacology of timing, though.
2) Don, you wrote:
"I have pointed out over and over that it would be relatively easy to build a computational system that varied the gradients and densities of input to resemble the conditions of stimulus increase, stimulus decrease, and stimulus level that we believe to be the evolved triggers for innate affect. Our central concept that innate affect is an analogic amplifier of its stimulus conditions surely cries out for involvement with the world of computational science."
I do believe that it would be very interesting to build a computational system that mimics your understanding of innate affect, in the way you describe. When I think a bit about how I'd go about building such a system, however, I immediately come up with these questions:
How do you measure stimulus density?
Or, what is stimulus density?
Is stimulus density a scalar (single-valued at any given moment in one individual) or a vector (multiple-valued, like somatic, visual, aural, internal...)? If it's a vector, what are its components, and how do the various innate affects get triggered by stimulus density in various vector directions?
I'm presuming here that stimulus gradient, whether an increase or a decrease, is just the time derivative of stimulus density. If stimulus density is a vector, though, then the gradient could also involve derivatives within the stimulus-density space itself, I suppose. Mind you, all of these ideas and questions arise from an off-the-top-of-my-head mechanistic model of what might be going on.
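To make those questions concrete, here is a minimal sketch, in Python, of how one might start. Everything in it is my own guesswork rather than your theory: I assume stimulus density is a vector of per-channel magnitudes (somatic, visual, aural, internal), I approximate the gradient as a finite-difference time derivative, and the channel list, the thresholds, and the rule for collapsing the vector to a single number are all placeholders I made up.

import numpy as np

# Hypothetical sensory channels; the real decomposition is exactly what is in question.
CHANNELS = ["somatic", "visual", "aural", "internal"]

# Placeholder thresholds -- these would have to come from the theory, not from me.
RISE_THRESHOLD = 0.2    # per-second increase that counts as "stimulus increase"
FALL_THRESHOLD = -0.2   # per-second decrease that counts as "stimulus decrease"
LEVEL_THRESHOLD = 0.8   # density that counts as a "high stimulus level"

def classify_affect_trigger(density_prev, density_now, dt):
    """Classify one time step as stimulus increase, decrease, high level, or no trigger.

    density_prev, density_now: vectors of stimulus density, one value per channel.
    dt: time step in seconds.
    Returns the trigger class and the per-channel gradient. Purely a sketch.
    """
    density_prev = np.asarray(density_prev, dtype=float)
    density_now = np.asarray(density_now, dtype=float)

    # "Stimulus gradient" taken as the time derivative of density (finite difference).
    gradient = (density_now - density_prev) / dt

    # Collapse the vector to a scalar for the trigger decision -- one choice among many,
    # and exactly the scalar-versus-vector question raised above.
    overall_gradient = gradient.mean()
    overall_level = density_now.mean()

    if overall_gradient >= RISE_THRESHOLD:
        trigger = "stimulus increase"
    elif overall_gradient <= FALL_THRESHOLD:
        trigger = "stimulus decrease"
    elif overall_level >= LEVEL_THRESHOLD:
        trigger = "sustained stimulus level"
    else:
        trigger = "no trigger"
    return trigger, gradient

if __name__ == "__main__":
    # Made-up numbers: a sudden jump in the aural channel between two samples 0.1 s apart.
    before = [0.1, 0.2, 0.1, 0.3]
    after = [0.1, 0.2, 0.9, 0.3]
    print(classify_affect_trigger(before, after, dt=0.1))

Even this toy forces the choices I'm asking about: how to weight the channels when collapsing the vector to a scalar, how fast a rise or fall has to be to count, and what counts as a high level. Those are exactly the things a working model would have to nail down.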
These are the simplest sorts of questions that come up when you try to build an artificial system that models your theory of how a biological system works. Typically, once you get into the nitty-gritty details, if you do your job right, all sorts of more subtle questions come up that you have to explore just to get the thing working at all. This is what I meant in my last note about how building an artificial system can help you understand the natural system better. And this is why I think a collaboration with suitably open and communicative artificial intelligence scientists could be a boon for you. OK?
In reading your writings on this forum and on the SSTI site, Don, I get the strong impression that you have a wealth of understanding, and that naturally only a portion of it can come across in any one piece of writing. So I'm wondering: do you understand stimulus density well enough to model it in an artificial computational system? Perhaps stimulus density is well established in psychology, but I've come across the idea several times in various forms, and it always seems to be a label for an intuitive concept -- I like the idea, but it seems quite fuzzy. If you tried to instantiate it in a machine, you would have to nail it down better, at least better than I understand it.
Sincerely, David Kewley