-
Thursday morning
2003-12-11 15:02 in /tech/nips
Waiting for the invited talk to start. Technical difficulties, sadly with the first presentation given from a Mac. Depressingly, it's not a shortcoming of the machine; rather, she's trying to do something fancy (multiple monitors, one with the presentation and one with her notes), but it doesn't seem to be compatible with the projector system. I figure it's probably a resolution mismatch. Sigh...
Hmm... maybe it's working now...
"Statistical Language Learning in Human Infants and Adults"
Word boundary detection -- based on the conditional probability of adjacent syllables (a toy sketch of the idea is below). Demonstrated with artificial languages in adults, babies, and tamarins. Also works for tone sequences and visual sequences. Not the whole story for language acquisition, though. (Tamarins don't have language.) Non-local features, hierarchical features, etc.
- non-adjacent syllables -- hard
- non-adjacent consonants -- easy (Hebrew)
- non-adjacent vowels -- easy (Turkish)
(Tamarins different: syllables and vowels, but not consonants)
Similar in music: you can't interleave two melodies and still hear them separately.
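(The transitional-probability idea is simple enough to sketch. This is my own toy illustration in Python, not the speakers' procedure: the nonsense words and the 0.9 cutoff are invented, and the real experiments use a long familiarization stream rather than a threshold rule.)

    from collections import defaultdict

    def transitional_probabilities(syllables):
        # Estimate P(next syllable | current syllable) from adjacent pairs.
        pair_counts = defaultdict(int)
        first_counts = defaultdict(int)
        for a, b in zip(syllables, syllables[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
        return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

    def segment(syllables, threshold=0.9):
        # Hypothesize a word boundary wherever the transitional probability dips.
        tp = transitional_probabilities(syllables)
        words, current = [], [syllables[0]]
        for a, b in zip(syllables, syllables[1:]):
            if tp[(a, b)] < threshold:
                words.append("".join(current))
                current = []
            current.append(b)
        words.append("".join(current))
        return words

    # An unsegmented stream built from three made-up two-syllable-per-chunk "words".
    stream = ("bidaku" + "padoti" + "golabu" + "bidaku" + "golabu" + "padoti") * 20
    syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]
    print(sorted(set(segment(syllables))))  # recovers ['bidaku', 'golabu', 'padoti']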
Correcting for inconsistent input. For example, children learning sign language from late-learning parents who sign with poor accuracy: the children surpass their parents substantially, even with only one, very inaccurate, parent to learn from.
"Algorithmic vs. Subjective randomness"
Also pretty interesting. The basic idea is that humans have certain ideas about randomness which aren't quite right. For example, "HHTHT" seems more random than "HHHTT", which in turn seems more random than "HHHHH". There are several ways to try to formalize this.
Kolmogorov complexity measures the length of the shortest program a universal Turing machine (UTM) needs to produce a sequence. But this isn't a good model of subjective randomness, because humans don't notice all patterns. For example, the binary representation of the Fibonacci sequence looks pretty random, but is easy to generate.
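(The Fibonacci point is easy to check for yourself. Here's a tiny sketch of one reading of it, concatenating the binary expansions of successive Fibonacci numbers; my example, not the authors'.)

    def fib_bits(n):
        # Concatenate the binary representations of the first n Fibonacci numbers.
        a, b, bits = 1, 1, ""
        for _ in range(n):
            bits += format(a, "b")
            a, b = b, a + b
        return bits

    print(fib_bits(12))  # looks patternless at a glance, yet is generated by a four-line loop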
At the other extreme, there is a simple measure of subjective randomness which counts the number of repeating and alternating subsequences required to build a sequence. However, this misses the fact that features like mirror symmetry also make a sequence seem less random.
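(To make that concrete, here's my own guess at how such a count might be operationalized: the fewest constant or alternating blocks needed to cover the sequence. The paper's actual measure surely differs in its details, so treat this as a sketch of the flavor, not their definition.)

    def is_simple(block):
        # A "simple" block is a constant run (e.g. HHH) or a strict alternation (e.g. HTHT).
        constant = all(c == block[0] for c in block)
        alternating = all(a != b for a, b in zip(block, block[1:]))
        return constant or alternating

    def block_count(seq):
        # Fewest simple blocks needed to cover seq, found by dynamic programming.
        n = len(seq)
        best = [0] + [n] * n  # best[i] = fewest blocks covering seq[:i]
        for i in range(1, n + 1):
            for j in range(i):
                if is_simple(seq[j:i]):
                    best[i] = min(best[i], best[j] + 1)
        return best[n]

    for s in ["HHHHH", "HHHTT", "HHTHT"]:
        print(s, block_count(s))  # 1, 2, 2 -- this crude version can't separate the last two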
The authors examined machines intermediate between these two extremes (the UTM and a finite-state machine), considering pushdown and queue automata as well as readable-stack automata, and looked at how well each modeled "subjectively random" sequences. They compared the case where people are shown a full sequence of 8 coin flips against the case where each result is revealed sequentially. The result was that the readable-stack automaton is a good model when people can examine the full sequence, while the queue automaton is a good model in the sequential case (because it captures limited memory).
One thing I think this work doesn't capture yet is that a sequence like "HHTHT" seems more random than "HTHTT". (At least, I think it does...) So subjective randomness doesn't seem to be reversal invariant, whereas the computational models they use all are. (I'm now noticing that the asymmetry is not as apparent when you see the whole sequence. If you read them out, though, I think you will agree that in the sequential case, "HTHTT" seems less random.)
-
Weds Afternoon
2003-12-11 12:22 in /tech/nips
"All Learning is Local" -- multiple agents collaborating with some global goal, with limited local information availible to each one. Applications for load balancing? Maybe not... fairly easy to get global information.
After that, another interesting talk about a strategy for combining strategies in games. It seems to work well for the Prisoner's Dilemma, but I was disappointed to see that they don't seem to have done any real testing in practice, for example in Rock-Paper-Scissors competitions.