-
Thursday morning
2003-12-11 15:02 in /tech/nips
Waiting for the invited talk to start. Technical difficulties, sadly with the first presentation given from a Mac. Depressingly, it's not a shortcoming of the machine; rather, she's trying to do something fancy (multiple monitors, one with the presentation and one with her notes), but it doesn't seem to be compatible with the projector system. I figure it's probably a resolution mismatch. Sigh...
Hmm... maybe it's working now...
"Statistical Language Learning in Human Infants and Adults"
Word boundary detection -- based on the conditional probability of adjacent syllables (a sketch of the idea follows these notes). Demonstrated with artificial languages in adults, babies, and tamarins. Also works for tone sequences and visual sequences. Not the whole story for language acquisition, though. (Tamarins don't have language.) Non-local features, hierarchical features, etc.
- non-adjacent syllables -- hard
- non-adjacent consonants -- easy (Hebrew)
- non-adjacent vowels -- easy (Turkish)
(Tamarins are different: they pick up syllables and vowels, but not consonants.)
Something similar happens in music: you can't interleave two melodies and still hear them separately.
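Roughly, the word-segmentation idea from the talk, as I understand it. This toy sketch (the syllable stream, names, and threshold are my own inventions, not from the talk) estimates P(next syllable | current syllable) from the stream and posits word boundaries where that probability dips:

    from collections import Counter

    def transitional_probabilities(syllables):
        """Estimate P(next | current) for each adjacent syllable pair in the stream."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

    def segment(syllables, threshold=0.7):
        """Posit a word boundary wherever the transitional probability dips below threshold."""
        tp = transitional_probabilities(syllables)
        words, current = [], [syllables[0]]
        for a, b in zip(syllables, syllables[1:]):
            if tp[(a, b)] < threshold:
                words.append("".join(current))
                current = []
            current.append(b)
        words.append("".join(current))
        return words

    # Toy stream: the made-up words "tupiro" and "golabu" in random order.
    stream = "tu pi ro go la bu tu pi ro tu pi ro go la bu go la bu".split()
    print(segment(stream))  # ['tupiro', 'golabu', 'tupiro', 'tupiro', 'golabu', 'golabu']

Within-word transitions here always occur, so their probability is 1.0; cross-word transitions are diluted across several possible successors, which is exactly the cue.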
Correcting for inconsistent input: for example, children learning sign language from late-learning parents who sign with poor accuracy. The children surpass their parents substantially, even when their only input is a single, very poor signer.
"Algorithmic vs. Subjective randomness"
Also pretty interesting. The basic idea is that humans have certain intuitions about randomness which aren't quite right. E.g., "HHTHT" seems more random than "HHHTT", which seems more random than "HHHHH". There are some ways to try to describe this.
Kolmogorov complexity measures the size of the description a universal Turing machine (UTM) needs to produce a sequence. But this isn't a good model of human judgment, because humans don't notice all patterns. For example, the binary representation of the Fibonacci sequence looks pretty random, but is easy to generate.
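A quick illustration of that point (my own toy example):

    # The first dozen Fibonacci numbers, written in binary and concatenated:
    # a very short program, but the output shows no obvious pattern.
    a, b = 1, 1
    bits = ""
    for _ in range(12):
        bits += format(a, "b")
        a, b = b, a + b
    print(bits)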
At the other extreme, there is a simple measure of subjective randomness which counts the number of repeating and alternating subsequences required to build a sequence. However, this misses the fact that things like mirror symmetry make a sequence seem less random.
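A sketch of that sort of measure (my own reconstruction, not the authors' code): greedily split a flip sequence into maximal blocks that are either constant runs or strict alternations, and score by block count:

    def block_count(seq):
        """Greedily split seq into maximal constant runs ("HHH") or alternations ("HTHT")."""
        blocks, i = 0, 0
        while i < len(seq):
            j = i + 1
            if j < len(seq) and seq[j] == seq[i]:          # extend a constant run
                while j < len(seq) and seq[j] == seq[i]:
                    j += 1
            else:                                          # extend a strict alternation
                while j < len(seq) and seq[j] != seq[j - 1]:
                    j += 1
            blocks += 1
            i = j
        return blocks

    for s in ["HHHHH", "HHHTT", "HHTHT"]:
        print(s, block_count(s))  # 1, 2, 2 -- more blocks, subjectively more random

Note that this crude version can't separate "HHHTT" from "HHTHT" at all, which is part of why something between an FSM and a UTM is wanted.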
The authors examined machines intermediate between these two (UTM and finite-state machine), considering pushdown and queue automata as well as readable-stack automata, and looked at how well each modeled "subjectively random" sequences. They compared the case where people are shown a full sequence of 8 coin flips with the case where each result is revealed sequentially. The result was that the readable-stack automaton is a good model when people can examine the full sequence, while the queue automaton is a good model in the sequential case (because it captures limited memory).
One thing I think this work doesn't capture yet is that a sequence like "HHTHT" seems more random than "HTHTT". (At least, I think it does...) So, subjective randomness doesn't seem to be reversal invariant, while the computational models they use all are. (I'm now noticing that the asymmetry is not as apparent when you see the whole sequence. If you read them out, though, I think you will agree that in the sequential case, "HTHTT" seems less random.)
-
Weds Afternoon
2003-12-11 12:22 in /tech/nips
"All Learning is Local" -- multiple agents collaborating with some global goal, with limited local information availible to each one. Applications for load balancing? Maybe not... fairly easy to get global information.
After that, another interesting talk about a strategy for combining strategies in games. It seems to work well for the Prisoner's Dilemma, but I was disappointed to see that they don't seem to have done any real testing in practice, for example in Rock-Paper-Scissors competitions.
-
Tues Afternoon
2003-12-09 17:11 in /tech/nips
Music similarity, e.g. for playlist generation. Problems -- lots of nodes, a sparse similarity matrix, and user collections that don't necessarily match editorial recommendations. Interesting, and it seems to produce reasonable results. The discussion afterwards was similar to the one after the recent talk at Overture: what makes a good playlist, and how much similarity is actually desired?
Hierarchical Topic Models -- learning hierarchies of topics from a corpus, with the possibility of growing the tree as new data is gathered. I want to understand this talk better. With only 20 minutes, it all went by too fast.
This conference is definitely a bit of a firehose. 20 minutes doesn't seem adequate for a lot of these talks unless you are already very familiar with the topic. However, this is a highly interdisciplinary conference so most people aren't, for any given talk. Plus, in addition to the 26 talks, there's something like 150 posters. I can't even slog through all the abstracts for the posters to figure out which ones I ought to visit.
-
Tuesday morning
2003-12-09 14:23 in /tech/nips
The invited talk this morning was from David Salesin, "The Need for Machine Learning in Computer Graphics". The basic issue is that realistic CG is a lot of work: for example, 100,000 man-hours for The Perfect Storm. It requires good modeling and human expertise. The question is, can machine learning improve this situation? Unfortunately, the talk didn't actually have much to say about that, and was more a laundry list of possible specific questions to investigate.
Later, a "Graphical Model for Recognizing Scenes and Objects" was interesting. They extended conventional object finders to examine the full scene to give context to features, which vastly improved the success at identifying certain types of objects, for example, keyboards.
I skipped out on most of the wetware talks. I did catch the tail end of what seemed like an interesting talk about obstacle avoidance using custom-designed analog chips, which show some analogies to insect vision.
-
A few gripes
2003-12-09 10:51 in /tech/nips
Something is definitely wrong with the wireless network. Web surfing seems to work just fine, but other protocols have issues. SSH, even to the same machines I can reach on the web, is incredibly slow; it can take 2-3 minutes to get an echo of my typing. ping and traceroute don't seem to work at all.
The elevators are another problem. Imagine 1000 people, all on the same schedule, trying to use 6 elevators to go back and forth between 30 floors.
-
Metric Skip Lists
2003-12-09 10:39 in /tech/nips
From yesterday, metric skip lists are the data structure presented for approximate nearest neighbor searching. I wanted to estimate the space required for this data structure a little more precisely. The space scaling is O(n log n): specifically, in the case considered, there are 16 = 4^2 pointers for each octave of distance for each point, assuming the data lies (in some sense) on a 2-D manifold.
So, if n = 1,000,000 ~ 2^20, we require 16 * 20 * 1,000,000 * 4 bytes ~ 1.3 GB.
If n = 10,000,000 ~ 2^23, we need 16 * 23 * 10,000,000 * 4 bytes ~ 15 GB.
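The same arithmetic as a quick script (assuming 4-byte pointers and log2(n) octaves of distance, as above; the function name is just mine):

    from math import log2

    def metric_skip_list_bytes(n, pointers_per_octave=16, pointer_bytes=4):
        """Rough space estimate: pointers_per_octave pointers per octave of distance per point."""
        octaves = log2(n)
        return pointers_per_octave * octaves * n * pointer_bytes

    for n in (10**6, 10**7):
        print(f"n = {n:>10,}: {metric_skip_list_bytes(n) / 1e9:.1f} GB")
    # n =  1,000,000: 1.3 GB
    # n = 10,000,000: 14.9 GB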
So, this doesn't actually seem so useful.
-
Algorithmic Tools
2003-12-08 18:05 in /tech/nips
For Session III, I picked "Algorithmic Tools Applied to Learning and Inference Problems". This started out quite interesting, but turned into a bit of a slog as he tried to cover four example cases, each of which could have been a one-hour talk, in 30 minutes apiece.
Of these, only the first looked like it might have some relevance for us: "A Data Structure for Nearest Neighbors on Manifolds", which provided a technique for approximate near-neighbor searching on a fixed set of points, with O(log n) query time (and O(n log n) space and O(n log n) time to build). I wonder whether our systems that perform related tasks provide any rigorous guarantees.
"Markov Decision Processes with Nonrecurring Rewards" talked about a traveling salesman-like problem with time-sensitive rewards associated with each node.
The other two talked about error-correcting codes and learning Markov models. My brain was pretty much full for these and I didn't get too much out of them.
Anyway, now I'm going to explore the hotel a little more, and also see if I can find Cem.
-
water? What's that?
2003-12-08 15:30 in /tech/nips
Now, I know that both programmers and scientists operate on caffeine, but never did I expect that asking, "Where can I get some water?" would cause me to be stared at like a space alien. "Um, let me check," and she disappears behind the curtain. I am not kidding when I tell you that I overheard someone say, "Ask him what he means by 'water'."
-
Realtime Object Recognition
2003-12-08 15:26 in /tech/nips
Unfortunately, we thought this session started at 1, not 1:30, so we arrived a bit late.
Corner detection -- not scale invariant (breaks under changes in resolution).
The search for scale-invariant features -- difference of successive Gaussian blurs (sketched below).
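A minimal sketch of the difference-of-Gaussians idea (my own illustration, not the speaker's code; the sigma and scale-step values are typical choices, not from the talk): blur an image at successive scales, subtract adjacent blurs, and treat extrema in the result as candidate scale-invariant features.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def difference_of_gaussians(image, sigma=1.6, k=2 ** 0.5, levels=4):
        """Return the stack of differences between successive Gaussian blurs."""
        sigmas = [sigma * k ** i for i in range(levels + 1)]
        blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
        return np.stack([blurred[i + 1] - blurred[i] for i in range(levels)])

    # Toy example: a bright blob on a dark background.
    img = np.zeros((64, 64))
    img[28:36, 28:36] = 1.0
    dog = difference_of_gaussians(img)
    print(dog.shape)  # (4, 64, 64); extrema across space and scale mark candidate features

Because the subtraction compares the image to itself at a slightly coarser scale, a feature that shrinks or grows with resolution still shows up as an extremum, just at a different level of the stack.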
Unfortunately again, I'm totally wiped out from getting up at 3:30AM. This was quite interesting, but I couldn't keep my eyes open and retreated to my room for a nap.
-
Brain-Computer Interfacing
2003-12-08 11:32 in /tech/nips
Have arrived at the hotel and checked in and registered. Now at the tail end of the first tutorial, "Towards Brain Computer Interfacing". Playing Tetris by thought alone!
Well, pretty much only got the conclusions:
- It works!
- Currently requires hundreds of hours to train the system and the user
- This might be okay for medical uses, but not general use
- Probably don't need to aim for near-instant usability. Some training is okay, e.g. learning Graffiti for the Palm, or training dictation software.