
Tuesday, July 28, 2009

Upcoming Conference

I just wanted to put up a quick note: at the end of this week the Max Planck Institute for Dynamics and Self-Organization (MPIDS) is going to be hosting a symposium entitled Nonlinear Dynamics and its Applications in Science. I will be helping out with the running of the conference as well as attending most of the talks (assuming I'm not crammed in my office furiously writing my research report). I will try to post a summary of the talks I manage to go to, but we will have to see how that goes... the next couple of weeks are going to be somewhat hectic.

Wednesday, July 8, 2009

"Bob Loblaw"

As I have mentioned a few times over the last couple of days, I gave a talk yesterday morning. I mentioned it so often because I was fairly nervous about it, and I was fairly nervous about it not because it was a talk of any great importance, but because I don't have a lot of experience giving technical talks to highly academic people. The talk was titled Phase Response as a Function of Graph Structure, and was essentially an overview of what I have spent my last month doing. The first half of the talk was a mathematical and intuitive development of the concept of phase response, and the second half covered how that relates to dynamical networks (primarily of identical weakly coupled oscillators). Robert left a comment on my post about being nervous, pointing out that no one was likely to remember my talk in ten years. As this was an intra-departmental talk to an audience of about ten people, I would be surprised if memories lasted even half that long. What was nice, though, was that I received several compliments on the talk, including from the head of the research group. What was less nice was that the Ph.D. student I've been working with and I discovered a handful of minor mistakes on the slides during my last practice run-through the morning of the talk, and my audience managed to spot all but one of them (at least it means they were paying attention). I guess that is what happens when you give a talk to an audience primarily composed of mathematicians and physicists... they actually pay attention to the equations you have on your slides!

I will now try to give a brief overview of what the subject Phase Response as a Function of Graph Structure actually means. If you take an arbitrary dynamical system (which is essentially a fancy term for a system that evolves through time) that has a stable periodic limit cycle (which means the system has a state that repeats after a period of time T, and small perturbations to the system will die away over time so that it settles back onto the periodic motion), then you can define something called the phase of the system as how far along in the period the system is. Phase is conventionally parameterized to lie between either 0 and 1 or 0 and 2π. I find the 0 to 1 parameterization more intuitive (it essentially translates to what fraction of the period has already passed, with 0.5 being 50% of the way through from whatever point is defined as the beginning of the period). The idea of phase can then be generalized to the basin of attraction around the limit cycle (which is essentially the region of your dynamical system's feature space that eventually settles onto the limit cycle), such that a point on the limit cycle and a point within the basin of attraction are considered to have the same phase if they evolve through time to the same point on the limit cycle. A rough picture of this idea is shown in Figure 1. This leads to the idea of an isochron (the dotted lines in Figure 1), which is the collection of points in your feature space that all share the same phase.
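To make this a little more concrete, here is a rough Python sketch of the idea using a toy oscillator (a so-called radial isochron clock, picked only because its isochrons are easy to describe; it is not the system drawn in Figure 1, and all of the names and numbers below are my own illustrative choices):

import numpy as np
from scipy.integrate import solve_ivp

# Toy "radial isochron clock": in polar coordinates dr/dt = r(1 - r^2) and
# dtheta/dt = 2*pi, so the limit cycle is the unit circle with period T = 1.
# Because the angular speed does not depend on r, every point with r > 0 has
# the same phase as the point on the circle directly "below" it, i.e. the
# isochrons are simply radial lines.
def rhs(t, state):
    x, y = state
    r2 = x**2 + y**2
    # Same equations written in Cartesian form: attraction to r = 1 plus rotation.
    return [x * (1 - r2) - 2 * np.pi * y,
            y * (1 - r2) + 2 * np.pi * x]

def phase(state):
    """Phase in [0, 1): the fraction of the period that has elapsed."""
    x, y = state
    return (np.arctan2(y, x) / (2 * np.pi)) % 1.0

on_cycle = [1.0, 0.0]    # a point on the limit cycle (phase 0)
off_cycle = [2.5, 0.0]   # a point in the basin of attraction, on the same isochron

sol_on = solve_ivp(rhs, (0, 5), on_cycle, rtol=1e-8, atol=1e-10)
sol_off = solve_ivp(rhs, (0, 5), off_cycle, rtol=1e-8, atol=1e-10)

# After a few periods the off-cycle trajectory has collapsed onto the cycle and
# the two trajectories end up at the same point: they started with the same phase.
print(phase(sol_on.y[:, -1]), phase(sol_off.y[:, -1]))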


Figure 1: A point on the limit cycle and a point off it that have the same phase. The mustard yellow curve represents the time evolution of a point off the limit cycle as it moves back to the cycle, while the green curve represents the evolution of a point that starts on the limit cycle. When the mustard yellow curve rejoins the limit cycle, it does so at the same point that the green curve reaches in an equivalent length of time. The two starting points are therefore said to have the same phase.

With phase now defined both on and off the limit cycle, one is able to develop the idea of phase response. If a perturbation (essentially, some sort of externally applied influence that drives the system away from its normal time evolution) is applied to a dynamical system with a stable limit cycle, the phases of both the unperturbed system and the perturbed system are defined (assuming the perturbation is small enough that the system remains within the basin of attraction of the limit cycle), and the change in phase resulting from the perturbation is the phase response of the system (see Figure 2).
Figure 2: The phase response (Δφ, where φ is the phase of the system) to a perturbation ε.
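Numerically, that definition translates almost directly into code: apply the kick, let the transient die away, and compare the asymptotic phases of a perturbed and an unperturbed copy of the system. Here is a rough sketch using the same toy oscillator as above (again, the model, the kick size, and the function names are purely illustrative and do not come from our actual work):

import numpy as np
from scipy.integrate import solve_ivp

# The same toy oscillator as in the previous sketch (period T = 1).
def rhs(t, state):
    x, y = state
    r2 = x**2 + y**2
    return [x * (1 - r2) - 2 * np.pi * y,
            y * (1 - r2) + 2 * np.pi * x]

def phase(state):
    x, y = state
    return (np.arctan2(y, x) / (2 * np.pi)) % 1.0

def phase_response(phi, eps=0.2, settle_time=10.0):
    """Shift in phase caused by a kick of size eps in the x-direction,
    applied when the oscillator is at phase phi on its limit cycle."""
    theta = 2 * np.pi * phi
    unperturbed = [np.cos(theta), np.sin(theta)]
    perturbed = [np.cos(theta) + eps, np.sin(theta)]
    # Run both copies until the transient caused by the kick has died away,
    # then compare their asymptotic phases.
    a = solve_ivp(rhs, (0, settle_time), unperturbed, rtol=1e-8, atol=1e-10).y[:, -1]
    b = solve_ivp(rhs, (0, settle_time), perturbed, rtol=1e-8, atol=1e-10).y[:, -1]
    return (phase(b) - phase(a) + 0.5) % 1.0 - 0.5   # wrap into [-0.5, 0.5)

# Sample the phase response curve at a handful of phases.
for phi in np.linspace(0.0, 0.9, 10):
    print(f"phi = {phi:.2f}  ->  delta phi = {phase_response(phi):+.4f}")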

Until now, I have left the discussion fairly open-ended about the properties of the dynamical system under analysis. The idea of phase response is usually applied to the analysis of single oscillators. An example of such a system would be the Hodgkin-Huxley model of a neuron exposed to a constant ambient current such that it is tonically firing at a set period. The feature space of the system is then the voltage across the membrane, the applied current, and the ionic concentrations (both intracellular and extracellular) of several key ions (such as potassium and sodium). What we have been investigating is the phase response of networks of oscillators coupled together, at which point the coupling relationships between oscillators become part of your feature space. A perturbation applied to one element of the network might elicit a different phase response than a perturbation applied to another element.
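As a crude illustration of that last point, here is a sketch using the simplest stand-in I know of for a network of weakly coupled oscillators: a handful of identical Kuramoto-type phase oscillators on a small directed graph. The coupling matrix is an arbitrary example I made up for this post, not anything from our work, but it shows that kicking different nodes of a synchronized network shifts the network's collective phase by different amounts:

import numpy as np
from scipy.integrate import solve_ivp

# d theta_i / dt = omega + sum_j K[i, j] * sin(theta_j - theta_i)
# This is the standard phase reduction of a network of identical, weakly
# coupled limit-cycle oscillators. K is an arbitrary asymmetric (directed)
# coupling matrix chosen purely for illustration.
omega = 2 * np.pi
K = np.array([[0.0, 1.0, 0.5],
              [0.2, 0.0, 1.0],
              [1.0, 0.3, 0.0]])
N = K.shape[0]

def rhs(t, theta):
    diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
    return omega + (K * np.sin(diff)).sum(axis=1)

def collective_shift(kicked_node, eps=0.3, settle_time=30.0):
    """Asymptotic shift of the network's collective phase when one node of
    the fully synchronized state is kicked by eps."""
    synced = np.zeros(N)                     # fully synchronized state
    kicked = synced.copy()
    kicked[kicked_node] += eps
    base = solve_ivp(rhs, (0, settle_time), synced, rtol=1e-8).y[:, -1]
    pert = solve_ivp(rhs, (0, settle_time), kicked, rtol=1e-8).y[:, -1]
    # After relaxation all the oscillators are in step again; the mean phase
    # difference is the network's phase response to that particular kick.
    return np.mean(pert - base)

for node in range(N):
    print(f"kick node {node}: collective phase shift = {collective_shift(node):+.4f}")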

On the surface, one might wonder what the point of all of this is. The thing is, coupled dynamical systems are found in all sorts of areas. Networks of neurons are an obvious example, but gene expression is another area of biological research where there are large systems of interacting biochemical pathways. There are examples outside of biology as well, but I am having a hard time thinking of one off the top of my head since my group tends to focus on the biological tie-in of our research. Therefore, having a better understanding of the phase response of networks will lead to a better understanding of these exceedingly complex systems.

Note: Figure 2 was pulled from Christoph Kirst's diploma thesis, Dynamics of Pulse-Coupled Neuronal Oscillators with Partial Reset. Figure 1 was a (rather shoddy) edit of Figure 2 that I made over the weekend using GIMP.

Friday, June 12, 2009

Intellectual Schmoozing

Lately I have been fairly bad about maintaining this blog, and I apologize for my lack of recent posting. A lot of that is due to a moderate degree of mental exhaustion; I have been spending long hours at the Max Planck Institute for Dynamics and Self-Organization (MPIDS), which, combined with the generally overwhelming feelings that accompany a move (even a temporary one) to a new country, has left me fairly drained. However, I cannot complain too greatly about being tired, because the work is interesting and I am learning a lot. I am also coming to view neurons from a different perspective, and that is a fun and fascinating thing.

It is not just work and dealing with a different culture that has been keeping me from blogging, though. Academia is as much a social institution as any other human endeavour, and, while at a large university like the University of Toronto it is easy to keep your head down and coast along without ever really being aware of the academic community (aside, of course, from the people teaching your classes), at a small institution like MPIDS the sense of community is much more universal (in general atmosphere, MPIDS has much in common with UTIAS. For those who have forgotten, you can reread my fan-boyish fawning over UTIAS here). So, there have been a couple nights where I have been out late either enjoying food, wine, and beer with a few people from the Institute or at the Institute itself sharing the aforementioned victuals with a larger portion of the general Institute population.

Last night was one of those nights at the Institute, with the party sparked by the coincidence of a guest speaker (an American professor who I believe currently holds a post at an Italian university) and a birthday (one of the Ph.D. students, who, oddly enough, is also Italian). The guest speaker was particularly popular because his presentation was not so much a serious one as a light-hearted, real-time demonstration of visual computation algorithms. The motivation was ostensibly related to the motion tracking capabilities of the retina, but I found that connection tenuous at best (after all, the problem of motion tracking at a low processing level like the retina is highly dependent upon hardware, and the hardware of the eye and a computer differ greatly). Though possessing only a weak connection to visual perception, the talk was essentially an artistic presentation with a strong mathematical basis, and that can be fun too.

After the talk came the pizza, cake, wine, and beer. It also meant intellectual schmoozing in a whole smattering of languages (primarily German and English, of course, but there was also a little bit of Italian, Russian, and Farsi flying around). I got to meet a large number of people from the Institute that I had not yet had a chance to talk to, and we had some very interesting discussions ranging from politics, history, and languages to neuroscience, mathematics, and evolutionary theory. I will end this post with one of the more interesting classical psychological question series (which is fun to bandy about at parties), and the most tongue-in-cheek response I have yet heard to the query (of course coming, as you will see, from an evolutionary biologist).

The question series goes like this:
You are standing next to a branch in a rail line. A train is hurtling down the track without brakes, and the track it is currently set to go down has five workers on it, oblivious to the danger. On the other track is a single worker. Do you throw the switch and kill the single worker, or let the train continue on its original course where it will kill five workers?

You are now standing on a bridge overlooking a set of train tracks. Once again, there are five workers on the tracks below, oblivious to a brakeless train hurtling toward them. On the bridge with his back to you is a large man unaware of your presence. If you sneak up behind him and push him off the bridge onto the tracks below, he has sufficient mass to cause the train to derail and come to a halt before it strikes the workers. Do you push the man off the bridge, or do you let the train continue on its path to kill the five workers?
The psychologically interesting thing is that the majority of people would throw the switch in the first formulation of the question but balk at the physical act of pushing the man in the second question off the bridge to his certain death. The snarky response I got from the evolutionary biologist was, "Am I related to any of them?" When I responded that he was not, he shrugged and said, "Oh, well, then it doesn't matter."

Sunday, April 19, 2009

"You, sir, are a mouthful"

There are two general 'facts' people know about the German language: it is harsh sounding, and it has extremely long words. I would actually disagree with the first part, or at least I think German tends to get a harsher representation than it deserves. This is because most German in popular culture is from war movies, and if people are running, shooting, and worried about killing other people or being killed themselves, they tend to be yelling rather harshly (especially when cast on the villainous side). There is a lot more to the German language than angry men shouting "Schneller! Schneller!" Perhaps Kari can weigh in here with her opinion (if she's still around...), as she has been living in Austria for almost a year.

That said, they do have some ridiculously long words. In their defence, that makes their sentences a lot less wordy, because the reason the words are so long is that German tends to simply stick words together to make new ones. Take, for example, the word for exceeding the speed limit:
Geschwindigkeitsüberschreitung
That is a pretty long word. However, what if you want to talk about the maximum speed limit?
Höchstgeschwindigkeitsbegrenzung
Those are pretty impressively long, but I came across a term while studying for my neuroanatomy exam that seems to give them a run for their money. It is the pontomesencephalotegmental complex. Why does it have such a ridiculous name? The answer, basically, is to describe where it is. The ponto part means it is located within the pons, while the mesencephalo part means it is within the midbrain (so it is located at the border between the pons and the midbrain), and the tegmental part means that it is located near the midline (within the tegmentum). The thing is, though, that people are fairly lazy. So, while the pontomesencephalotegmental complex is an informative name, nobody wants to have to say it (except perhaps when one is trying to be impressive at parties). It is therefore usually shortened to PMTC. Of course, this laziness is not unique to anatomy, but happens all over the sciences. People who write a lot of proofs get used to the fact that wrt = "with respect to", ow = "otherwise", and a small coloured-in square = QED = Latin for "I'm done". Likewise in anatomy, people get sick of saying "dorsal" and "ventral" all the time, so they become D and V, respectively.

This kind of shortening doesn't usually bother me, except when physiologists and anatomists get so comfortable with their acronyms that they forget to define them. I have had several lectures in physiology courses where I had only a vague idea of which part of the brain we might be talking about because everything was just an ugly jumble of capital letters. For example, SN is the subthalamic nucleus, but how is one supposed to know that it doesn't stand for the substantia nigra if one doesn't already know that the substantia nigra is usually abbreviated SNr? Therefore, if there are any physiology professors out there who read my blog, I urge you to double-check your lecture slides and see if you use any undefined acronyms. You might not even care if your students know that structure specifically, but I would bet you that somewhere out there is a student who doesn't know that you don't care and is therefore wasting a great deal of time trying to figure out what that small collection of letters means.

Note: I seem to have misplaced my German-English dictionary and my German is rather rusty, so the German word examples were pulled from this site.

Wednesday, April 15, 2009

More Musings on Computational Neuroscience Paradigms

A couple months ago I posted a brief description of two overarching paradigms in theoretical computational neuroscience. During the course of writing my final project report, I addressed the same subject in slightly more detail. Since I seem to have strayed from my purported task of publishing pertinent computational neuroscience posts, I thought I would reproduce the two paragraphs in question here. I already sent them to a friend of mine in biophysics who I know from one of my physiology courses, and he mentioned that I didn't address a couple of things that I had never heard of before... so please keep in mind that this is all relatively new content for me, and the paragraphs I post here might simply be the pedestrian musings of an undergraduate amateur. Of course, they could also be brilliantly insightful, but I think the amateur option is a little more likely.

Anyway, here are the paragraphs:
Within the field of theoretical computational neuroscience, there are two general forms in which the problem of cognitive function is mathematically cast: as an adaptive control system and as a dynamical system on the edge of chaos. As with many competing schools of academic thought, adherents of one camp often express disdain for the ideas of the other. Fundamentally, the two interpretations are quite similar, as an adaptive controller operates on a dynamical system. However, proponents of the view that the brain functions as a system on the verge of chaos argue that the well-behaved systems generally analysed within the context of control theory fail to take into account the entire activity of the brain and therefore fall short of the goal of generating an accurate physiological model of cognitive function. These proponents also point to the efficacy of applying mathematical techniques from chaotic and dynamical systems analysis to the interpretation of electroencephalogram (EEG) readings, which serves as support for the near-chaotic dynamical system interpretation of the brain.

I would argue, however, that while an adaptive control experiment such as the one being implemented here seeks to isolate and investigate a specific cognitive task irrespective of the rest of the neuronal activity (or, in the case of the simulated robots used in this study, assuming no other neuronal activity), such a blinkered approach is not necessarily done out of ignorance of the larger issues of overall cognitive interconnectivity. Rather, I posit that the near-chaotic nature of the global brain behaviour arises out of the necessity of having many simultaneous well-behaved and sometimes contradictory control loops operating as one. The phase transitions apparent in EEG readings could arise from the necessity of transitioning from one set of precedent control loops to another, and a full understanding of the underlying control loops themselves can thus still further our overall understanding of cognitive function. While admittedly ad hoc, I hope this reasoning may serve to at least somewhat mollify those detractors who would dismiss adaptive control as a convenient tool of engineering misapplied to neuroscience. Continued exploration of adaptive control and implicit supervision can therefore have benefits for the field of theoretical computational neuroscience in addition to direct practical benefits in robotics.
I have removed the references, but if anyone is interested in what I am basing the discussion on, let me know and I will send you the appropriate articles.

Friday, March 13, 2009

A Course I'd Like to Take

The other day while I was walking to school my mind was wandering between the uncertain future and reflections on my undergraduate education. As I find myself more and more drawn toward an interest in robotics and computational models of intelligence, with my neuroscience background serving in a more supplementary role, I was thinking about how the neuroscience courses I have taken have served my educational development. At a university there tend to be a few aspects of a subject which are more popular with the majority of professors, and this tends to be reflected in the available courses. Here at the University of Toronto (U of T), there are approximately three ways to study the brain: behavioural psychology, microbiology and genetics, and systems neurophysiology. Of those three, I find I prefer the systems approach despite the fact that it tends to be less research-oriented than the microbiology approach (as one may guess, behavioural psychology is the one I have the least time for). The reason I prefer the systems approach is that it takes a more global look at the brain and how it performs (though it tends to come at this from a more clinical diagnostic perspective than a theoretical modeling one), while the microbiology approach I find frustrating in its excessive detail. Thus, while the microbiology approach tends to be more research-oriented, it is in avenues of research which I find to be themselves far more clinically oriented (not that clinically oriented biomedical research is a bad thing - in fact, I expect it at some point will likely save my life. It is simply that I find the research itself mostly tedious and uninteresting).

I have gotten myself off on a tangent, however. What I intended to do was outline a course which does not exist (as far as I know) but which I would have found fascinating to take. As I mentioned, I find the systems approach to be the most appealing, but most of that approach is done at U of T with a clinical mind. When non-human animals are discussed, it is almost always in the context of a specific study, with a mind to extrapolate the information to what is applicable to understanding and diagnosing deficits in the human brain (despite the fact that we understand many of the widely used model organisms' nervous systems far better than our own). What I would find fascinating would be a course on comparative neurophysiology. For example, our cerebral cortex is, as I understand it, a mammalian novelty (and this is where most of our higher brain functions are found). Despite the avian lack of a neocortex, many birds have an odd similarity to primates in terms of cognitive function (with many extremely visual and social species). A course that examined in detail how the visual system of predatory birds, for example, compared to that of primates might be extremely illuminating in understanding visual processing techniques. Likewise, there are many non-human animals which show remarkable manual dexterity and spatial reasoning (elephants with their trunks and confounding cephalopods come to mind). While I would guess that the elephant motor cortex would likely closely resemble our own due to our shared mammality, looking at the motor control mechanisms of invertebrates as dexterous as an octopus could be quite fascinating. So, if any professors happen to be reading this and know someone who might be interested in setting up a course like that, I think it would be quite worthwhile (I just hope there are other students out there who would feel the same way if someone goes to the trouble of setting it up).

Note: I made the word mammality up. Is there an actual word that means what I was trying to say? Mammalianity?

Monday, February 23, 2009

The University of Toronto Effect

When I first started university I was far too cocky for my own good. I was entering the Engineering Science program at the University of Toronto (hereafter referred to as U of T), which was apparently the best engineering program at the top university in the country. Therefore, I must be, apparently, awesomely smart. I tried to remain humble, but more in a polite than an actual manner. Reality started to set in about two-thirds of the way through my first semester when we had our second calculus midterm. I skated by the first midterm on latent high school knowledge, pulling off an 85% with minimal studying. I was therefore entirely unprepared for the second midterm, and it showed with a 45%. The first thing I had ever failed was a music test in grade 5 when visiting a friend in New Zealand (a test which I maintain was the height of cruelty for someone who is utterly and completely tone deaf, but that is a story for another day). However, my second calculus midterm was the first thing that I can remember failing that actually mattered. Unfortunately, it didn't faze me as much as it should have, something which is reflected in my first year marks. I idled through first year roughly around the middle of the pack. Though I made an attempt at reform by the second semester, it was poorly executed as I didn't actually know how to study, and, aside from an A- in my computer science course (based on the fact that it was something I was actually and genuinely good at) and an A that I somehow pulled out of nowhere in physical chemistry, my grades remained at the mediocre level of the previous semester. It was a humbling experience.

Part of how I rationalized my mediocrity was to completely buy into what my peers and the faculty were selling us. I wasn't truly mediocre, it was just that I was middle of the road in a group of three hundred and fifty (two hundred and fifty by second year) of the best and brightest students in the country. We were told we were in the hardest, most challenging program in the university, and I bought it completely. Part of it was my naive acquiescence to scholastic authority, but, to be honest, part of it was also to validate my own self-image.

With my ragged confidence somewhat mollified but not wholly repaired, I went into second year with something to prove. It didn't help that second year we were saddled with a ridiculously time consuming design project on top of the normal course load, but that didn't matter. I became so sleep deprived I exhibited narcolepsy more blatantly than at any other point in my life (I even fell asleep mid-conversation a couple of times. My experiences second year were primarily what led me to eventually go to a neurologist and get diagnosed in third year), but it paid off scholastically. My grade point average and class rank rose significantly, and my clueless cockiness of first year was reformed as grim arrogance.

Of course, my world underwent further convolutions when I hit third year and met my girlfriend. However, this trip down memory lane and how I became the person I am today is already getting longer than it was intended to be. Perhaps I will elaborate on this story in a future post, but suffice it for now to say that my girlfriend helped me to realise that my education was my own and I should stop simply taking other people's (even professors') word without further reflection. Just because a professor tells his class they are the best of the best doesn't mean there are not others out there doing far more complicated things. Eventually I stopped really caring about the reputation of the program I was in and my standing within it, and started caring only about what I was learning. My grim arrogance was once again reformed, this time into the wry but confident trepidation of today (of course, when future me is looking back at how young and stupid current me used to be, my rather positive characterization 'wry but confident trepidation' will instead likely be something like 'unacknowledged hubris covered in a deceptive blanket of false humility'. But, that is future me's problem).

Despite my reformation into what I hope is a moderately wise and decent person, the flames of my past arrogance can still sometimes be stoked, which is what this post originally set out to relate (without my realising just how much rambling background I was going to give first). You see, a small part of my arrogant pride over being a student at U of T sustains itself off of little confirmatory anecdotes like the one that happened today. My neuroanatomy professor has taught neuroanatomy for a fair number of years, but I believe this is her first year teaching it to U of T undergraduates in neuroscience. Over this past week I was fairly proud of the fact that I had earned an 86% on our midterm, only to discover this afternoon that the average was apparently 86%. That is an inordinately high average for a midterm, which is something that invariably seems to happen on the first test given by instructors who are not used to teaching at U of T (I have had a couple professors in the past who were experienced lecturers but new to teaching at U of T). Noticing things like that is what keeps the tiny little worm of arrogance still squirming within my brain.

Reading over this post, the story about my midterm mark sounded much more amusing in my head. I am still going to publish this, but I apologize for that rather anticlimactic ending.

Saturday, January 31, 2009

A Brief Introduction to Computational Neuroscience Paradigms

Within computational neuroscience there seem to be two main theoretical paradigms. In the first, the brain is viewed as an elaborate and nested control system. This branch of investigation tends to use many of the same mathematical models as those used in the engineering discipline of control, albeit with an eye on the biological feasibility and possible neuronal configurations necessary for attaining such a control system. In the second, the brain is viewed as a dynamical system on the edge of chaos, and the analysis thus utilizes the mathematical tools found in dynamical systems analysis. I have to admit that I am rather fuzzy on the latter of these two paradigms, despite having taken (and done rather well in) a course on Chaos, Fractals, and Dynamics. I am not sure if my inability to fathom what a 'dynamical system on the verge of chaos' means is due to a lack of intellectual capacity on my part or a lack of substance underlying the fancy terms being thrown around on the part of those championing the dynamical system interpretation. My guess is that the two paradigms are not as mutually exclusive as some claim them to be, but I think I will have to gain a better understanding of the application of dynamics to physiology before I can be sure. In the meantime, the control systems approach speaks quite clearly to the (former) engineer in me, and I find it rather appealing. It is simple, elegant, and powerful.

Before I continue in this vein, however, I should mention a brief caveat. There is a third branch of thought which I have not included in this description, known as machine learning. While it could also be argued to be a paradigm of computational neuroscience (or at least of my interpretation of what computational neuroscience ought to be), I have not included it in this discussion because, to me, it is much more a branch of traditional approaches to artificial intelligence. Machine learning tends to focus more on function modeling through stochastic methods. While this provides many powerful tools (some of which are even utilized within the control systems approach), there is a lack of emphasis on the physiological feasibility which might provide for a general theory of intelligence. Of course, while I think many of the mathematical tricks used in machine learning (like principal components analysis (PCA)) will likely turn out to have neuronal correlates, in the sense that our brains somehow implement systems that achieve similar results, machine learning does not tend to be devoted to uncovering methods of cognition as its primary goal.
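(As a purely illustrative aside on what I mean by a mathematical trick, PCA itself boils down to very little: centre the data and take the eigenvectors of its covariance matrix. The data below are random numbers and mean nothing.)

import numpy as np

# Principal components analysis in a few lines: centre the data, form its
# covariance matrix, and take the eigenvectors. The data are random and
# purely illustrative.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3)) @ np.array([[3.0, 0.0, 0.0],
                                             [1.0, 1.0, 0.0],
                                             [0.0, 0.5, 0.2]])

centred = data - data.mean(axis=0)
cov = centred.T @ centred / (len(centred) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order

order = np.argsort(eigvals)[::-1]            # strongest component first
components = eigvecs[:, order]
explained = eigvals[order] / eigvals.sum()
print(explained)                             # fraction of variance per component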

Now that I have rambled about machine learning, I shall return to control theory. A control system is essentially any system designed to control a variable through time. The actual form the control system takes can be quite varied, including electronic control systems, mechanical ones, and, as I surmise our brains might be, electrochemical ones. They usually utilize some form of feedback (most often negative), since an open-loop control system (as those without feedback are called) is not really much good at controlling anything. However, I will go into more detail about control theory in another post. This post was simply meant to introduce the idea of the different paradigms, as well as the fact that I am currently more focused on control theory.
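To give a concrete (and decidedly non-neuronal) toy example of negative feedback, here is a sketch of a proportional controller nudging a simple first-order system toward a setpoint; all of the numbers are made up for illustration:

# A minimal negative-feedback (proportional) control loop: the controller
# measures the error between a setpoint and the plant's output and pushes
# back against it. All values are toy numbers.
dt = 0.01          # time step (s)
setpoint = 1.0     # desired value of the controlled variable
gain = 5.0         # proportional gain of the controller
tau = 0.5          # time constant of a simple first-order plant

x = 0.0            # plant output, starting away from the setpoint
for step in range(500):
    error = setpoint - x         # feedback: compare output to target
    u = gain * error             # control signal proportional to the error
    x += dt * (-x + u) / tau     # first-order plant: tau * dx/dt = -x + u
    if step % 100 == 0:
        print(f"t = {step * dt:4.2f} s, x = {x:.3f}")

# With the feedback in place the output settles near (though, with a purely
# proportional controller, not exactly at) the setpoint; with u = 0 it would
# simply decay to zero.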

Monday, January 26, 2009

Pronunciation

I sometimes find it interesting how a mispronunciation can propagate through academic circles. For example, the Greek letter Φ often gets called 'fai' by English speakers (and virtually every professor I have ever had) when it is technically the letter 'fee'. I have had two professors who did not do so. The first was my intellectual and erudite linear algebra professor from first year, who is extremely well read, interested in a huge variety of fields, and made a conscious effort to unlearn the 'fai' pronunciation. The other was my third year dynamics professor, who had quite poor English and therefore clearly had originally learned the correct pronunciation. Unfortunately, the poor fellow was so self-conscious about his poor English (which, admittedly, was quite poor, often rendering his tests and problem sets entirely incomprehensible or quite poorly worded, such as a ball pissing across a plane rather than passing) that he ended up changing his pronunciation when he noticed his students said 'fai' instead of 'fee'.

This post is not about the Greek letter, though. Today I sat through a neuroanatomy lecture and cringed every time our professor said Wernicke's area. Wernicke's area is one of the more well-known and famous areas of the brain due to its role in language comprehension. It is located toward the posterior end of the lateral fissure, surrounding the primary auditory cortex, and a person who has suffered damage to it (such as through stroke) develops a form of aphasia known as either Wernicke's aphasia or fluent aphasia. That person can speak fluidly and continuously, but their speech is mostly nonsensical. Their own comprehension of others is often likewise impaired, with the appearance of listening but very little apparent processing. As one might surmise from the name of the area, it was first described in detail by a man named Carl Wernicke. The thing is, he was a German physician. Thus, while an English speaker might be tempted to pronounce his name "Were-nick-ee", a much more correct and appropriate pronunciation would be "Ver-nick-eh" (where 'eh' is an 'uh' sort of sound, not the Canadian 'a'... I wish I knew how to do more symbols in html, but I've got to run soon so there is no time to look them up right now). Anyway, I know anatomy has a lot of strange names and it is hard to know how to pronounce them all, but this one is a major one. It just worries me that her pronunciation of all the other parts whose proper pronunciation I don't know is also wrong, and I will have no way of knowing this until years later when I embarrass myself at a party.

Tuesday, November 25, 2008

Graduate School Applications

People may have noticed a bit of a lapse in posting over the last week or so. With the first round of graduate school applications due on 1 December, my mind has been otherwise occupied. Between composing statements of purpose and tracking down transcripts, I haven't really felt like posting much of anything substantive.

The first two schools I am applying to are the University of California, Berkeley and the University of California, San Diego, both in neuroscience with a plan to do the computational specialization. I just managed to find some admission statistics this morning, so I'm a little worried since my GPA is a bit on the low side (but my general GRE scores are a bit on the high side, so maybe it will balance out?). Oh well, the California schools are all in my "stretch school" category, so I'll just have to see what happens. Since application deadlines extend into mid-January (with one additional one in mid-March), there might be periodic bouts of failure to maintain this blog as I concentrate on application completion. I apologize in advance.

Tuesday, November 11, 2008

How to Think About Science

I was listening to the opening segment of an interesting set of broadcasts collectively entitled "How to Think About Science" and I thought it had some very interesting points. The one I found most interesting was when Simon Schaffer pointed out that science, while normally celebrated as promoting skepticism and a reliance upon personal evidence and observation, is in reality a systematic organization of trust. You will have to listen to the broadcast for his full argument, but it is essentially that no one can practically witness evidence for everything one accepts as true, and where science excels is in giving a powerful framework for deciding who and what should be given credence.

I thought that was a very interesting and thought-provoking observation. It is quite simple and seems obvious after hearing it, but in many ways those are the best thoughts to have. I found myself thinking about it this morning as I read the news. So many of our world's problems, especially in the political sphere, are based on issues of trust. It is one of the exceptionally messy aspects of politics that makes me want to practically avoid the field. It is also why pseudoscientific things like creationism/intelligent design and alternative medicine continue to flourish outside of the scientific world (in the realm of the popular and political) where there is not that system of rigorous evaluation to keep them in check.

Monday, November 3, 2008

Scientist Appreciation: PZ Myers

I haven't done a scientist appreciation in a while, but I have been thinking about this one for about two weeks now. It is a slightly unorthodox scientist appreciation, in that it is primarily an appreciation of PZ Myers' promotional efforts for science rather than his own work specifically (not that he doesn't do good scientific work, it's just not exactly in my field), kind of like my appreciation of Isaac Asimov's contribution to science. Also similar to Isaac Asimov, in many ways it is not PZ Myers' status as a scientist which makes him most famous (although, instead of being famous for writing science fiction, he is famous for his lack of religious views and his willingness to write about such a lack on the internet).

This post, however, isn't about PZ Myers' religion (or lack thereof). It is about his commitment to furthering an appreciation for science and scientific issues. While much of his blog is centered on intermittently laughing and gnashing one's teeth at the outrageous idiocy of the anti-scientific, I think that is an important contribution to make. Exposing some of the ludicrous claims and fabrications of pseudoscientists might help people think twice about other statements when no supporting evidence is offered. PZ Myers provides an indefatigable stream of commentary, humour, and substantive science and science policy posts. While I may not always agree with his stance on things, when I do not it is very often due to a preconceived notion and ignorant crudity on my part. More importantly, though, the near constant stream of information provides a place on the internet where those who care about science and scientific issues can gather (if somewhat passively). Since mainstream news doesn't seem to care much about science (after all, look at how much this year's American election revolved around the discussion of science... other than the mocking of fruit fly research), PZ Myers has personally developed an extensive outlet of news and relevant links. Blogs run by skeptics and science enthusiasts may be a dime a dozen on the internet (case in point, look at mine!), but no one does it quite like PZ Myers. For that, he has earned this edition of Scientist Appreciation.

Tuesday, October 21, 2008

Project Background Part I: Mirror Neurons

As I mentioned at the beginning of the semester, I am taking a fourth year project course this year. It is a fairly open-ended project, especially since the focus is outside what my supervisor and his research group normally work on. However, my supervisor is a professor in one of the robotics research groups at UTIAS with a great variety of intellectual interests, and hence he was excited about the idea of letting me have a go at coming up with something interesting. It is slightly daunting, since it means a lot of what I am doing I am just kind of muddling through without direction from above, but at the same time it is exciting because it is my question and my problem to search for an answer to.

Anyway, I will get to the actual project in a little while. Rather than have a single giant post which would take ages to write and then likely not be read due to its length, I decided to break it up into some useful background posts first, followed by a post on how this background ties into the experimental question I am pursuing. Suffice it for now to say that it is centered around developing communication among multiple independent robots (all in simulation, of course, since physical robotic experiments are much more difficult, expensive, and time consuming to perform).

The first piece of necessary background knowledge for the project is a thing called mirror neurons. These are a special class of neurons first identified in macaque monkeys by a group led by Dr. Giacomo Rizzolatti at the University of Parma. These neurons are special in that they fire (neuroscience shorthand for saying they change their firing pattern and increase their firing frequency) not only when an individual performs a specific action, but also when that individual watches another perform the same action. For example, when you watch a game of soccer on television and see the slow motion replay of a brilliantly executed shot on goal, somewhere in your head a set of neurons is playing out the same pattern that it would were you on a field running up to a ball and letting fly your best kick.

The existence of such a system of neurons clearly has important implications for both motor learning and social interaction. It helps explain such simple psychological phenomena as why people often have the urge to smile when they see others smile, or feel a rush of testosterone-filled manliness when watching a mindless action movie (perhaps that's not an experience familiar to everyone, but I know it at least works on a childhood friend of mine who, if given the choice, only watches mindless action films. I don't know if he reads this blog or not, but if he does, he knows who he is). The implications for learning and social behaviour will be discussed in more detail when I talk about the project itself. In the meantime, stay tuned for Part II, which will discuss some different control theory models and their applicability to neuronal systems.

Monday, October 13, 2008

The CUPC Needs Help

The Canadian Undergraduate Physics Conference (CUPC) is the largest conference in North America organized entirely by undergraduate students and right now the 44th annual CUPC is in trouble. Due to several sources of funding falling through, there is not enough money available to cover the costs of the conference. If the conference cannot find adequate support, this will be the 44th and final CUPC, which will be a tremendous shame for science education. The CUPC brings together students from across Canada and the world studying a vast array of subject areas from mathematical and theoretical physics to medical biophysics to engineering and applied physics. This important event gives many students their first experience with academics outside of the classroom, and helps to cultivate an interest in research and higher study.

The conference is only a few short days away and in desperate need of funds. Please go to the website (http://cupc.ca) and donate (or click on the link below).

Thursday, October 2, 2008

The Most Useless Laboratory Report Ever Written

Tonight I am spending a sleep-deprived several hours cobbling together a lab report for my neuroscience laboratory course. I feel the need to vent a little, however, about the course. For one thing, it is very disorganized. I am constantly in a state of confusion as to what is expected of me, and, while I would normally think it was something I was doing wrong, when I talk to other students I discover they are just as bewildered. For just one example of the organizational incompetence, we have to turn assignments in to a website called turnitin.com to check for plagiarism. That is fine, but each course is set up with an ID code and a password to ensure that work you turn in goes to the course it is actually supposed to. However, it took me nearly fifteen minutes to actually register for the course on turnitin.com (and I was one of the fastest, I found out later, only because I thought "Wait, what if someone was a profound idiot?"). You see, the password that was chosen for our lab course consisted of three words (since I'm sure I'm not supposed to broadcast the password across the internet, let's just say the three words were "I hate reports"). On the course website where it tells you how to sign up for turnitin.com, the following instructions were posted:

Password: I Hate Reports
(note: passwords are case and space sensitive!)

Can you guess what the actual password was? ihatereports! After specifically reminding us that passwords were case and space sensitive, you would think they would actually report the password with the correct case/space combination...

Anyway, that isn't what I am really unhappy about with the course. What I am actually rather unhappy about right now is the fact that I am writing a report on data that is not my own. The course organizers decided that the data collected by the class didn't look particularly good, so instead of simply having us use the entire class's data rather than just our own (which would at least sort of make sense, since it gives a larger sample size for statistical analysis), they handed us a data set provided by one of the professors. I'm assuming it is something she and her grad students generated at some point, but who knows? Perhaps they simply fudged the numbers and wrote down what ought to have happened. It just doesn't seem right. Yes, I realise that many of the experiments we do in this lab are fiddly and prone to wild error, but that is part of science. Providing students with a prefabricated set of data and asking for a report on that is simply a test in applied statistics (how many different ways can you use Student's t-test on this data set?).

Well, I suppose I should stop complaining and get back to the drudgery of analysing my mysterious set of numbers.

Saturday, September 20, 2008

UTIAS

This past Tuesday I took a trip out to UTIAS to see a former professor of mine (and now fourth-year project supervisor, but more on that later). What does such a fancy acronym like UTIAS stand for, you might ask? It is the University of Toronto Institute for Aerospace Studies. Even though I am no longer in Aerospace Engineering, I still have very fond thoughts for UTIAS. I find the building very inspiring. The whole place has an ambience of intellectual excitement and scientific daring that makes me want to do something profound (while it is invigorating, that feeling alone, unfortunately, does not actually yield something profound... at least not yet). Unfortunately, UTIAS is rather difficult to get to, being an awkward 22km away from the University of Toronto main campus and not on a subway line (instead it is a rather long subway trip to the end of the line, and then a further bus ride from there). If you poke around the website for a while, you might see them claim that it is only a 30-45 minute commute by transit from the main campus to UTIAS, and you might think, "for a city the size of Toronto, that's not so bad". Well, I don't think that whoever wrote that part of the website actually took the trip from main campus to UTIAS by transit. The subway trip alone is 45 minutes, and that is assuming your train actually goes all the way to Downsview (the last station) and doesn't instead dump you at Wilson (the second to last stop, where my train for some reason decided it was as far north as it needed to go). Then, assuming you have perfect timing and manage to snag the bus just before it leaves the station, you have another 15-30 minutes (depending on traffic).

Anyway, aside from being hard to get to, you also cannot just show up and waltz about the place. Unlike the main campus, where very few places are locked up, you cannot access the main building without a key card or without buzzing the front desk and signing in. However, if you have a legitimate reason to be there, the institute is very nice. There are a ping-pong and pool table in the cafeteria, a very nice lounge, lots of offices, and even more fancy pictures of space and spacecraft (some real, others rather fanciful... although I haven't yet seen a picture of any of the various incarnations of Enterprise, I wouldn't be surprised if there is one somewhere in the building). Also, I'm not quite sure what it is, but the entire place exudes an ambience of the 50s-70s, when lots of money was spent on scientific endeavours and being a scientist was seen as important, daunting, and wonderful (this is how I still see science, but it doesn't seem to be a normal view. Of course, this is probably just a romanticized view of the Cold War era I have garnered through movies like October Sky).

While I know it would be disruptive for those working at UTIAS, I wish they offered public tours. Between the wind tunnels, the Mars dome, the giant flight simulators, the micro satellite lab, and all the other research areas, I think UTIAS would be a wonderful outing for a family visiting Toronto. So, if you are visiting Toronto and are interested in aerospace, try sending someone at UTIAS an email and ask if you can have a tour. You never know, if you sound excited enough they might let you in.

Monday, September 8, 2008

Last Year of Undergrad

This morning my last year as an undergraduate student started. It seems like this has been a very long time coming, due to the fact that I have spent two extra years being an undergraduate thanks to losing a year in the transfer from engineering to science and the year-long internship I did while still in engineering. Anyway, in preparation for this year, I decided I would post a list of the courses I am going to be taking. It might not be interesting to anyone else, but it's interesting to me.

Computational Complexity and Computability: The name pretty much sums up what this computer science course is about. I am not particularly excited about this course, but neither am I dreading it. It should be interesting enough, and I may even be pleasantly surprised if it turns out to be engaging beyond a passing interest.

Knowledge Representation and Reasoning: This is a fourth year/graduate computer science course that I was actually signed up to take last year, but ended up switching out of due to the fact that I would be taking it at the same time as its prerequisite (which I was also taking while skipping its prerequisite). The content of this course follows fairly traditional approaches to artificial intelligence (constructing knowledge bases of known facts and using logic to reason about further facts). While I find traditional AI quite intuitive, I also think it is impractical and unwieldy when faced with large-scale problems. I am looking forward to this course quite a bit because I like what I have seen of the professor, and he co-wrote the textbook which I also like from the bits that I have read so far.

Neuroscience Laboratory: While overall self-explanatory from the title, I'm not quite sure what to expect with the details of this one. I have heard fairly unfortunate reviews about it from other students I know who have taken it, but, while I would like to focus on theoretical work, I think a practical knowledge of the laboratory is a useful thing to learn. I haven't really done much wet-lab work (really just high-school chemistry and the summer I worked at the Columbia Brewery in quality control), so I'm a little nervous about some aspects of the course.

Neuroanatomy: Also a fairly descriptive title. While I was initially thinking of not taking this course because I tend to find the memorization of anatomy incredibly dull, I have heard very good things about the course and its instructor. There is also a laboratory component to this course, which should be interesting.

Linear Algebra II: I am not at all looking forward to taking this course. I know linear algebra is an important area of math, and I have been upset with myself for not learning it better in my first year of university those many years back, but that doesn't mean I enjoy it. Oh well, learning cannot always be fun, sometimes it is just necessary...

Chaos, Fractals, and Dynamics: This is my fun math course for the year. While I heard wonderful things about the professor who normally teaches it, he is unfortunately out on sabbatical for the year. I actually had my first lecture this morning, though, so I got a look at the professor who will be teaching the course. While he has an unfortunate tendency to mumble, he is interested in the material and doesn't give the impression of hating or looking down upon the students he is teaching, so I'm calling that a win (I know it is stereotyping, but it seems like hatred of and/or disdain for students is a little too common in the mathematics faculty. I won't mention names, but I can think of several examples).

Neuroscience I: Systems and Behaviour: A full year grad-listed physiology course, I think this should be a good one. It is taught by several of the same professors who did the motor control systems course I took last year, including the professor who I did research with this past summer. The material looks interesting and challenging.

For those who are counting, you might have noticed this is only seven courses. That is because Systems and Behaviour is a full year course, but even then I only have four courses per semester, rather than the normal full load of five. That is because I am still planning to add a project/undergraduate thesis course; I just need to finalize a professor under whom I will work. This has taken a fair bit longer than I had thought it would, mainly because almost the entire machine learning department is away on sabbatical this year (which has the added unfortunate side-effect that several of the courses I had planned to take are cancelled, and the remaining professors are too busy to take on a project student). Anyway, I have tentative contact with a professor in aerospace engineering (I do recognize the irony here) who does work in robotics, but we still have to find the time to meet in person rather than just through email.

EDIT: Oops, I forgot to add the other activity I am going to be doing this year. Combining my desperate need for physical activity, my dorky love of history, and my male aggressive instincts manifesting in the desire to play with weapons, I have decided to take a fencing class at the athletic centre.

Monday, June 30, 2008

SIWOTI Syndrome...

I had this article shared with me a while ago, and it bugged me enough that I thought it deserved a rebuttal. After all, how could I abide someone being wrong on the internet?

I think it is important to start off with a brief discussion of what graduate school is. To make sure I wasn't confused about my definition, I looked it up on Wikipedia to get the popularly held concept of graduate school. As I suspected, graduate school primarily refers to degrees earned following a bachelor's, and medical school, law school, and an MBA are only rarely referred to as graduate school. One way Penelope Trunk might have realised this, other than looking it up, would have been to think about the names of the standardized tests involved in getting into these different programs. Graduate school tends to require the GRE (Graduate Record Exam), whereas med school requires the MCAT, law school the LSAT, and business school the GMAT.

1.) Her confusion about what graduate school actually consists of is first flagged here. An argument may be made for the opportunity cost of graduate school (you are not working, and thus are missing out on two to five years of wages as well as possible promotions). However, most graduate students receive at least some money (either as a stipend or from being a teaching assistant) over and above the cost of tuition. Also, not everyone is constantly shifting careers as she seems to think. While it is true that those working in the corporate world often do change careers these days, that change is often as much a change of company as it is a change of vocation. Thus, while obviously graduate school is not for everyone, it is hardly invalidated and rendered obsolete by this objection.

2.) As previously mentioned, graduate school is not synonymous with an MBA. Usually it doesn't even mean an MBA. For the sake of argument, though, suppose she is referring to an MBA: sure, it might no longer be required, but that doesn't mean it isn't an asset. Anyway, I don't really know a lot about the world of business, since that isn't what I want to get into, but I think the fact that you can sometimes get a job without a degree does not mean the degree has no value.

3.) This is really an argument against professional degrees of all sorts, including some undergraduate disciplines such as engineering or a technical program at a college. I agree that it is a problem, but it doesn't mean there is a problem with professional degrees. It means there is a problem with how we educate young people in what options are out there.

4.) I'm not really sure what she is basing this argument on. Just because someone has a degree in something not directly related to a job they are applying for does not mean they are not interested in that job. The main thrust of this argument seems to rely on a graduate student accumulating a large amount of debt, thus forcing them into accepting work to pay it off. However, as was covered in (1), graduate school generally doesn't mean massive debt unless it is professional graduate school. Debt is something to consider, but not everyone plans to open a high-risk start-up company, so having that door closed by a school debt that needs to be paid off isn't really a drawback for everyone.

5.) This is really just argument 3 focused in a slightly more specific way. However, I think it is rather poorly posed even when focusing on the professional degrees that she means when referring to graduate degrees. Lawyers often go into politics or business. Engineers often go into management and business. An MBA is useful in a wide variety of business related jobs. At the worst a degree is ignored, but even an unrelated degree serves to display a level of intelligence and commitment. The only time it might serve as a drawback is when it puts one at a higher pay bracket than a company is willing to pay.

6.) This one is just silly. Ever heard of an employee review? That is far more detailed feedback than a set of marks. While it is true that performing graduate research with a professor often involves close work with a superior, that isn't always the case. Some professors run their labs like an assembly line. Additionally, courses tend to have very little feedback other than a (sometimes brutally low) number. Most jobs tend to have a direct sub-manager to deal with a small subset of employees, allowing everyone to get direct feedback from someone. This is especially true in project-based professional work, which this article struck me as focusing on. One last thing to note is that graduate work is, for the most part, not coursework, in which case it isn't kids doing "what teachers assign". Graduate work is meant to be research, and while it is guided by experienced specialists in the field, it is controlled and driven by the student. Even in my undergraduate research position this summer I have more control over what I do (or do not do) than when I worked in automation engineering. My professor is well experienced with classical measures and interpretations of EEG experiments, but it is up to me to develop and apply some of the more advanced mathematical techniques that he has never used for EEG analysis. Shockingly, he never even suggested I do this; it was just something that I thought might actually give the research we are doing some novelty and merit.

7.) I have had my quarter-life crisis, and graduate school is where it pointed me. I agree that graduate school should not be a default position, but that doesn't mean it is archaic and obsolete. Graduate school has exceptional merit for those who want to pursue knowledge and research. The fact that many people who go to graduate school (especially when one includes the rather generous definition of graduate school the article seemed to encompass) end up unhappy isn't a clinching argument. Many people in many careers are unhappy. I don't really know a way to fix that, but I believe less education is never the way. Virtually every day I realise just how lucky I was to meet someone who understands the world of academia and decided to take the time to impart that knowledge onto me. It is hard to know what is out there without trying it, but plunging into the working world isn't the only way to get an idea.

Anyway, admittedly I am still young and in many ways unworldly. If I have made blatant mistakes in my rebuttal, I would appreciate it being pointed out to me. That, of course, goes for everything I write, but I tend to be more confident in my statements when writing about scientific ideas than when writing about what life choices people should make.