
Showing posts with label Interdisciplinary Importance.

Monday, July 13, 2009

Timothy Williamson and the Philosophy of Philosophy

A couple of weeks ago, I went to a lecture at the Paulinerkirche given by Professor Timothy Williamson, a philosopher from Oxford, entitled Armchair Knowledge and the Philosophy of Philosophy. The talk was for a general audience, so I am sure Professor Williamson simplified his arguments and glossed over some of the supporting ideas that have helped spur his own, but I still felt somewhat dissatisfied with the overall presentation and thrust of his argument.

Professor Williamson started with a description of the general practice of philosophy as an exercise of cogitation one performs from the comfort of an armchair. He then gave a humorous anecdote about an Irish chemist being surprised that English universities regularly had philosophy departments (the chemist had assumed that Dublin's Trinity College philosophy department continued to exist as a matter of tradition rather than for any sort of pragmatic utility). Thus, Williamson set up the central conflict on which his talk centered: whether the criticism of philosophy as antiquated and made obsolete by experimental science was apt, and what that meant for the motivations and practice of philosophy (the philosophy of philosophy, if you will).

The central thrust of Williamson's argument started with the idea that no one is a pure empiricist, as basing all of one's beliefs on direct empirical evidence is impossible. In this, Williamson is certainly correct. Earning one's education is in many ways an exercise in academic trust, although, as was astutely pointed out in the How To Think About Science series, one of the inherent strengths of science rests not so much with its skeptical roots as with its ability to determine who and what to trust. One of the problems I continually run into in subjects outside of the realm of science (and even within some discourse that claims it is scientific - namely certain branches of psychology) is that the established basis for some discourse is not clearly defined or, in some cases, is clearly defined but erroneous (either in light of later discoveries, in which case it may be excusable, or simply because it was assumed true without empirical evidence, in which case it is less excusable). For example, as those who remember my reviews of a selection of historical treatises on political theory may recall, I was thoroughly disappointed with both Plato and Aristotle. I thought Hobbes did a much better job by specifically and carefully defining his terms and assumptions (some might claim that this was a little overly pedantic on his part, making his text more difficult to digest than one which skips over such dry discourse as careful definitions, but it is important nonetheless). Of course, I think Hobbes' analysis still ends up flawed, but it is much easier to follow his reasoning and in that way determine where I disagree thanks to his methodological approach. I seem to be getting carried away, however. Getting back to Williamson's talk, I grant that calling for a purely empirical framework for knowledge and belief is not feasible.
We do choose to trust knowledge disseminated by other sources, but I think the important point here rests on our determination of the trustworthiness of those sources. Basing trust on human charisma, while often the most common method, is unfortunately a highly flawed one, as it easily leaves one open to being taken advantage of. Modern science, a system built around the unbiased and rigorous verification of knowledge rooted entirely within the natural world, is the best I think we can currently hope for in the department of trust.

Continuing from the fact that everyone accepts knowledge not personally empirically derived, Williamson also brought up the fact that even empirical scientists further process empirical results with a set of mental reasoning tools which Williamson classified as akin to imagination. We are mentally capable of trying out ideas and following avenues of thought which have not explicitly been borne out in the real world. At this point, Williamson went on a slightly odd detour by ruminating on the origins of the human capacity for reasoning given our evolutionary past. He justified its survival advantage by giving an example of a person running from a tiger - the person would be able to gauge the appropriate response by running through possible future scenarios in their mind (such as hiding behind a rock, climbing a tree, and so on). Of course, it was a simplistic example, so I won't spend too long quibbling with it, but I do want to point out that any person who stopped to think so carefully while being chased by a tiger was going to be caught and eaten. Our capacity for rational thought serves more to modulate what sorts of behaviours we practice to increase our future survival capacity rather than serving us in speedy split-second survival decisions. Ignoring that nuance for the sake of the argument, however (and it does not particularly change the logic to go from reasoning about what sorts of behaviours are best to practice for future enactment to what sorts of behaviours one should execute in the current moment), it is true that we continually process empirical information (often in ways we are not even immediately aware of - see my series on top-down processing in vision).

Essentially, those two points are what led Williamson to his justification for philosophy. Philosophy is, according to my understanding of Williamson, simply engaging our capacity for hypothetical rational thought as a valid exercise in knowledge derivation. I would contend, however, that Williamson's version of philosophy is continuous with and enveloped by the combined fields of mathematics and science, and the areas of philosophy that remain outside of those fields still have no valid justification as sources of worldly knowledge. As I see it, there are two possible ways in which one can engage the rational faculties that Williamson established to exist. One can ruminate on the purely abstract, such as the field of logic. Philosophy of that sort, however, becomes indistinguishable from the field of mathematics. It can be a valuable avenue of thought, but it does not tell us directly about the world. When philosophy moves beyond the abstract and begins to make statements about reality, then I think it should be held to the same empirical accountability as any theoretical science. Philosophers may not be the ones gathering the empirical data, but that does not excuse them from being aware of the implications of that data. Far too often, people entirely ignorant of neurophysiology and even behavioural psychology embark on developing vast treatises in the philosophy of mind. Of course, philosophers often focus on different questions and aspects of a field, and in that I think they make their most valuable contributions (for example, fields like the philosophy of physics or the philosophy of mathematics, which often draw upon the larger philosophical field of epistemology, are exceedingly important for any scientific field and, in the same way that I think philosophers should make an effort to be aware of at least general trends of empirical results, so too should more scientists be aware of the philosophical underpinnings of their fields).
Fundamentally, though, all knowledge of our world is rooted in empirical data, and thus I think the philosophy of philosophy leads us to the same place as the philosophy of science and mathematics.

Tuesday, June 23, 2009

Holographic Stimulation

There hasn't been a lot of activity in the past few days for some reason, which is odd because I think I have actually been writing both more regularly and more substantively. Of course, it is summertime and apparently the rest of the northern hemisphere is actually enjoying some warm weather (people may complain about British weather, but it seems Germans don't have it much better), so perhaps people are just off having real-life fun instead of sitting inside reading my ramblings. Also, I haven't actually written a lot about science lately, so it is entirely possible that what I think have been substantive posts have simply been amateur attempts at besting the triviality that so readily consumes a blogger's body of work. Ah well, I guess what I am really trying to say is I am intellectually vain and enjoy it when people at least appear to be reading what I write, so you should all tell your friends about this site.

In the meantime, here is a quick return to science. We had a symposium at the Institute today with two rather interesting talks, so I will give a brief summary of each of them (the first talk tonight, the second talk gets a summary tomorrow).

The first talk was by Dr. Christoph Lutz from the Université Paris Descartes. He was describing a new technique his group has developed for more effectively stimulating neurons optically. This requires a bit of background, though. One rather interesting experimental technique for analyzing neuronal properties is optical stimulation (technically called photolysis excitation or inhibition depending, naturally, on whether you excite or inhibit the neurons). I believe it is a fairly recent technique, but I might be mistaken. The basic idea is that you bind a neurotransmitter (in the case of the experiment Dr. Lutz described, they chose the most common excitatory transmitter in the brain: glutamate) to a specific molecule which essentially prevents normal interactions with the transmitter (this is called 'caging' the neurotransmitter). You then bathe the neurons (in this case, a slice of tissue from a rat hippocampus) with the caged neurotransmitter. The inactivating molecule has been specifically selected, however, such that in the presence of a specific wavelength of light it releases the neurotransmitter, allowing you, simply by shining a laser onto the tissue, to release a targeted dose of neurotransmitter as though you had just activated a group of synapses.

What Dr. Lutz and his fellow researchers have done is extend the technique using optical techniques from holography. Up until now, experiments in optical stimulation have used a single column of laser light with various degrees of focus and targeting systems. Using a liquid crystal spatial light modulator, however, you can take a column of laser light and create multiple focus points, even at different focal lengths. Thus, Lutz was able to specifically stimulate along the length of a dendrite using a thin band of focused light without also activating the neurotransmitter farther away from the dendrite, as the normal circular column of light would (this extra activated neurotransmitter would then be free to diffuse through the local region, both weakly stimulating the membrane region under study for an extended period after the laser light was turned off and possibly interacting with other nearby dendritic branches). By only uncaging the neurotransmitter directly along the length of the dendritic branch, you can more carefully localize the activated neurotransmitter to much more realistically simulate synaptic input. Alternatively, you are able to simultaneously focus light on multiple branches of a neuron's dendritic tree, allowing you to look at how the post-synaptic electric potentials generated interact.
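I don't actually know which algorithm Lutz's group used to compute the phase pattern displayed on their spatial light modulator, but a standard approach for coaxing a phase-only SLM into producing multiple focal spots is the Gerchberg-Saxton algorithm, which bounces back and forth between the SLM plane and the focal plane via Fourier transforms. Here is a minimal sketch in Python; the grid size and spot positions are arbitrary choices of mine for illustration.

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50, seed=0):
    """Compute a phase-only SLM pattern whose far-field diffraction
    pattern approximates the desired focal-plane intensity."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    # start from a random guess for the focal-plane phase
    focal_phase = rng.uniform(0, 2 * np.pi, target_intensity.shape)
    for _ in range(n_iter):
        # impose the desired amplitude at the focal plane
        focal_field = target_amp * np.exp(1j * focal_phase)
        # propagate back to the SLM plane
        slm_field = np.fft.ifft2(focal_field)
        # the SLM can only modulate phase; the laser supplies uniform amplitude
        slm_phase = np.angle(slm_field)
        # propagate forward again and keep only the resulting phase
        focal_phase = np.angle(np.fft.fft2(np.exp(1j * slm_phase)))
    return slm_phase

# target: two bright spots (say, aimed at two dendritic branches)
n = 64
target = np.zeros((n, n))
target[16, 16] = 1.0
target[40, 48] = 1.0

phase_mask = gerchberg_saxton(target)
# simulated focal-plane intensity produced by the phase mask
recon = np.abs(np.fft.fft2(np.exp(1j * phase_mask))) ** 2
```

After a few dozen iterations, most of the laser power ends up concentrated at the two requested spots, which is exactly the "multiple focus points from a single column of light" trick described above (real setups would, of course, also need to account for lens focal lengths, pixel pitch, and so on).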

Essentially, Lutz and his fellow researchers have provided a novel application of well-understood concepts in physics to design a much more powerful experimental technique for probing the properties of neurons. Since the computational power of a neuron rests in the electrochemical dynamics of its cell membrane, this expanded ability to probe the membrane's reaction to targeted chemical stimuli is likely to provide valuable insight into the complicated world of neuronal computing.

Tomorrow: Robots with Organic Brains

Saturday, May 9, 2009

Top-down Processing in Visual Perception Part IV: Ramifications

This is the final instalment of my series on top-down processing in the visual system (links to part I introducing the topic, part II discussing faces and anthropomorphizing, and part III discussing artificial edges). While I find the topics of vision and optical illusions to be fascinating in their own right, I think the analysis of perception and cognition is also vitally important. This is by no means an original outlook, as David Hume made the statement in the introduction to his A Treatise of Human Nature:
'Tis evident, that all the sciences have a relation, greater or less, to human nature; and that however wide any of them may seem to run from it, they still return back by one passage or another. Even Mathematics, Natural Philosophy, and Natural Religion, are in some measure dependent on the science of Man; since they lie under the cognizance of men, and are judged by their powers and faculties.
While couched in somewhat archaic English, Hume's statement strikes me as remarkably astute. In many ways, our brains function as vast pattern-matchers. Understanding the underlying cognitive tricks we use to analyse perception is an important endeavour for making sense of our own observations, and avoiding mistakes in our interpretation of experimental results. Of course, the most pertinent application of perceptual understanding is in automated sensory processing applications (like machine vision which I have discussed before), but as Hume pointed out, it also matters in the way our thought processes interact with every other endeavour. We must be wary of our tendency to anthropomorphize, or to view causal connections that are not actually there. Realising our tendency to perform processing without being consciously aware of it helps reinforce the necessity of mathematical, logical, and statistical tools on which to rest one's theories.

Friday, November 14, 2008

Understanding Through Mathematical Concepts

My great aunt is a wonderful lady. A worldly intellectual in her own right, she can speak knowledgeably about Thucydides (which she read in the original Greek, not that wimpy translation stuff I read) and other literature of which I could not hope to compile an exhaustive list, as well as hold her own in a discussion of history and politics, especially if it involves Korea (where she was born and raised through much of her childhood before returning home to Canada). I am also a big fan of my great uncle, but since it was my aunt that made the comment I am going to discuss, I will have to wait for another day to sing his praises. I bring up my esteem for my aunt to put in context a comment she made one night when my girlfriend and I were at my aunt and uncle's for dinner, in which she stated something to the effect that she didn't understand how mathematics could hold any draw as a subject since it was such a dry and abstract thing. I think it was somewhat unfortunate for her that she made such a comment at a table with her husband (a retired aeronautical engineer), my girlfriend (who studies physics and mathematics), a Russian fellow who sails with my uncle and his wife (both of whom studied mathematics and computer science before moving to Canada), and me (a former aerospace student and now student of computational neuroscience), so she may have been a little unfairly outnumbered by those who had ties to mathematics. A great cry went up around the table and everyone tried to explain all at once that mathematics was, in fact, a wonderful thing. I don't think my aunt (a humanities graduate) was trying to be confrontational at all, but I think she really was baffled (and, unfortunately, I don't think any of our answers really cleared anything up at the time, since the best we came up with was simply that it helps you to see the world differently without really giving any examples). I also don't think my aunt is alone.
For many people, mathematics remains a dry and stuffy subject, handy for balancing the books and maybe work in research and design (but even then, there are a fair share of engineers who forsook mathematics upon achieving their degree and getting a job), but beyond that they don't have a concept of it.

While I am no mathematician, I still enjoy mathematics and dabble in it in my studies. I will therefore endeavour to give an example of how mathematical concepts can help explain aspects of the world, using a personal insight about another subject that also commonly baffles people: speciation in evolutionary biology. Among critics of evolution, one of the fallacious arguments commonly given is, "if evolution is true, why doesn't a dog give birth to a cat?" (or some other ridiculous combination). While that is probably the most ridiculous formulation of the argument, the basic idea that trips people up is understanding how one species can evolve into another. This lack of understanding often leads to the lamentable "middle of the road" half-cocked compromise in which a person accepts "microevolution" while claiming that he still doesn't believe in "macroevolution". To give some insight into how speciation works, at least from a conceptual standpoint, I turn to probability and calculus.

Take a circle with a spinning dial mounted in the middle. If you mark a spot on the circle (say the spot corresponding with '12' on a clock face) as the 0 mark, then you can spin the dial and it will land with some anticlockwise angle from 0 to 360 degrees. Since there are an infinite number of points on the circle, however, if you take your measurement to an arbitrary level of exactness (landing at 10.0000000000001 is different from landing at 10 exactly), the probability of landing on any one exact spot is 0. The only way to obtain a non-zero probability is to talk about a range of possible angles. The probability is then simply the length of that range divided by 360 (thus, having the dial land within the first 90 degrees has a probability of 90/360 = 1/4). Thus, the circle can be divided into regions, each one representing a range of possible angles and thus having a non-zero probability. However, at the borders we see that which region we are in becomes a harsh cut-off over a seemingly negligible difference. For example, if we divide our circle into four regions of equal size (each representing a 90-degree increment), 89.9999999999 would fall into region 1 while 90.0000000001 would fall into region 2, despite an arbitrarily small difference between the two of them. Take the idea of that circle and now morph it in your head to represent an evolutionary lineage. The population of organisms at each moment in time represents one single location on the circle. A region represents a species, and thus a species is said to evolve into another if its region precedes the other. But remember that the demarcation line of our regions was essentially an arbitrary cutoff, a boundary imposed to provide meaning to the system. There is no drastic change in the dial's position when we go from region 1 to region 2, but rather the change can be as infinitesimally small as we want.
Likewise, the change from species A to species B is not some drastic, single moment of monumental change such as a dog giving birth to a cat, but rather a collection of tiny bumps in the dial position as it gradually creeps along the circle from region 1 to region 2. However, when one compares dial positions from somewhere near the middle of each region, they look to be very far apart.
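The dial analogy is simple enough to sketch as a few lines of Python (note that the code numbers the four regions 0 through 3 rather than 1 through 4): each region's probability comes out to its arc length divided by 360, and the region boundary really is a harsh cut-off over an arbitrarily small difference in angle.

```python
import random

REGIONS = 4              # four equal 90-degree regions
WIDTH = 360 / REGIONS

def region(angle):
    """Which region an angle falls in: a harsh cut-off at each boundary."""
    return int(angle // WIDTH) % REGIONS

def estimate_region_probabilities(trials=100_000, seed=42):
    """Spin the dial many times and tally where it lands."""
    rng = random.Random(seed)
    counts = [0] * REGIONS
    for _ in range(trials):
        counts[region(rng.uniform(0.0, 360.0))] += 1
    return [c / trials for c in counts]
```

Each estimated probability comes out close to 90/360 = 1/4, while `region(89.9999999999)` gives 0 and `region(90.0000000001)` gives 1: the classification jumps even though the dial barely moved, which is exactly the point about species boundaries.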

This is no lofty or profoundly insightful thing I have come up with. I also recognize that I may have taken some liberties with the specific terms and workings of both mathematics and evolutionary biology, so for anyone who is actually in those fields and upset with me, I apologize (and you have full permission to admonish me in the comments). However, it is something that I have discovered a surprising number of people never really put together on their own. The concept of infinitesimal steps from calculus is a profound thing, and with it many other concepts in the world can be illuminated more fully. That, to me, is how mathematics is not dry or dull. Its concepts are wide-reaching, elegant, and profound. If a person understands mathematics, a huge variety of subjects suddenly becomes easier to grasp.

Friday, July 18, 2008

Scientist Appreciation: Paul L. Nunez

Things seem to have worked out nicely. It is Friday, and I just got a little further in the enjoyable book from the library I mentioned taking out yesterday. My appreciation for the book translates to the lead author of the book, and I am thus set to make another contribution to the (made up by me for the purposes of this blog) field of Scientist Appreciation.

Dr. Paul L. Nunez sounds like a pretty cool guy (in my somewhat biased opinion). Why is he so cool, you might ask? Well, he has a highly interdisciplinary background that is very similar to the path that I plan to take (albeit, he switched to neuroscience a fair bit later in his career). He got his PhD in Engineering Physics from the University of California at San Diego (Engineering Physics is the old name for the University of Toronto's Engineering Science, the program I started in, and I assume the two programs would be at least somewhat similar), but then did his post-doc in the neurosciences doing EEG studies. What makes it more interesting to me is that most of his engineering work was done in spacecraft propulsion and plasma physics, giving him a link to aerospace engineering (which is the program of specialization I started doing in Eng. Sci. before transferring into science).

I have talked about the importance of interdisciplinary understanding before, so you might correctly conjecture that it is something I feel is important. Hence I highly enjoyed the second to last section of the opening chapter of his book entitled "Philosophical Conflicts" which discusses some of the unfortunate gaps between scientific disciplines. I hadn't realised how much my scientific philosophy had already been moulded by my courses in mathematics and physics until I realised that many of the statements he was making were voicing in words the vague sense of frustration I have had with so many of my courses in the life sciences. For example, he gives the following ratio:

(Time spent in preparation and performance of an experiment)/(Time spent deciding which experiments are worth doing)

He then (correctly, I believe) points out that this ratio is much larger in EEG research (and, I think, in many areas of biology in general) than in the physical sciences. Pointing out these differences and helping illuminate their underlying causes is, I believe, an important pursuit. It helps one appreciate where researchers in other fields are coming from, hopefully mollifying tensions and fostering the synergistic exchange of knowledge to the betterment of both parties.

Another enjoyable aspect of this section of his book is that he makes his case for the importance of a strong theoretical understanding by way of looking at the history of aerodynamics and aircraft design. While this made me smile because I could reminisce about wind tunnel experiments and the Navier-Stokes equations, it also included some wonderful lines like "If we were mathematicians, we might first try to obtain solutions to these [Navier-Stokes] equations. However, we are not mathematicians, we are airplane designers."

Also, no discussion of aerodynamics would be complete without the inclusion of Prandtl (a man whose work in fluid mechanics is so seminal that John D. Anderson's text Fundamentals of Aerodynamics includes a section titled "Historical Note: Prandtl - The Man". I'm not sure if Anderson intended it to sound like he was colloquially calling Prandtl "the man" or instead intended simply to intimate that this section would focus on Prandtl as a person rather than his scientific works. While I think the latter is more likely, the former interpretation makes me chuckle, so I prefer it). True to form, Nunez closes this section by discussing how Prandtl, through his introduction of the concept of a boundary layer, managed to unify the more mathematically elegant, though practically useless, body of knowledge on frictionless liquids with the empirical knowledge of hydraulics developed by engineers, thereby allowing fluid mechanics to achieve far greater success as a field with practical applicability but based more solidly in theory.

Anyway, this post seems to have wandered a bit, so suffice to say that I am a fan of Nunez's writing (and, to be fair, Srinivasan's writing too, though I'm fairly certain this part was written primarily by Nunez). Now I should make myself some lunch and get back to reading.

Saturday, April 19, 2008

A Case for Inter-field Knowledge...

They say that the time of the generalist has come and gone, but to me that is a sad thing. Of course, every so often some new "hot" field develops from the merging of several other fields, and the buzz word "interdisciplinary" gets thrown around a lot these days. However, it really shouldn't be just a buzz word, and I have two short anecdotes that I think make that case.

The first is a horrifying tale from my first year psychology course last summer (which I had to take so I would be able to take some of the neuroscience courses I wanted to this year). Since it was a summer course it was not actually being taught by a professor, but rather by a PhD student in developmental psychology. The first couple of lectures were devoted to neuropsychology, the quaint name given to the "branch of psychology" devoted to studying the actual physical make-up of the human brain (as opposed to several of the other branches, which basically just make stuff up that sounds vaguely plausible. That, however, is a rant for another day). Now, I had spent the previous four months realising that intelligence was what I wanted to spend the rest of my life studying, so I had done some modest reading about the brain. It was nothing fancy; I think the least pop-science style book I had read pertaining to the brain was Oxford University Press's A Very Short Introduction to the Brain. I make this clear because I want to point out how absolutely rudimentary my knowledge of neurophysiology was at this point. Imagine my surprise, then, when this young lady got up in front of the class and proceeded to dazzle everyone with several blatantly false statements. While I was a little disgruntled when she told a student that withdrawal of a limb from a painful stimulus was not a reflex, but rather the turning and oral searching of a baby was what was meant by the term reflex (granted, she was a developmental psychologist, so she dealt with infant reflexes more than the withdrawal reflex, but she should still know the definition of the reflex arc and some of the common examples), I was far more horrified when she answered another student's question as to how the myelin sheath helped make signals travel down the axon faster by saying it was "superconducting".
Leaving out the blatant misunderstanding of what is meant by superconductivity, she clearly had no clue how myelin works. And this was the lady standing at the front of a class of several hundred students answering questions like she knew what she was talking about, because she was planning to get her PhD in a field that at least tangentially claims to study the brain.

How can a person possibly hope to bring insight to the question of how the brain develops and mediates thought without at least a rudimentary understanding of the underlying hardware? There is a reason they make computer science and software engineering students study digital electronics and lower-level languages than Java or Python: if you don't understand the underlying architecture of a system, you are left writing all your MATLAB scripts with for loops and uninitialized matrices rather than matrix algebra, and left wondering why your program takes hours or even days to run.
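The for-loops-versus-matrix-algebra point is easy to demonstrate. The same lesson holds in Python with NumPy as in MATLAB: an element-by-element loop pays interpreter overhead on every iteration, while a single vectorized call hands the whole array to compiled code. A quick sketch:

```python
import time
import numpy as np

def loop_sum_of_squares(x):
    # element-by-element Python loop: interpreter overhead on every step
    total = 0.0
    for i in range(len(x)):
        total += x[i] * x[i]
    return total

def vectorized_sum_of_squares(x):
    # one call into NumPy's compiled routines
    return float(np.dot(x, x))

x = np.arange(1_000_000, dtype=np.float64)

t0 = time.perf_counter()
slow = loop_sum_of_squares(x)
t1 = time.perf_counter()
fast = vectorized_sum_of_squares(x)
t2 = time.perf_counter()
```

Both compute the same number, but on a million elements the vectorized version is typically orders of magnitude faster; the gap exists precisely because of the underlying architecture (compiled inner loops, contiguous memory access) that the high-level syntax hides from you.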

Anyway, I had planned originally to also mention how my neuroscience textbook prompted this entire post because of its blatant reference to species selection at the beginning of the chapter on sexual selection, and I was going to spend some time dwelling on the importance of understanding evolution in all its brilliantly nuanced glory when doing anything in the life sciences (including neuroscience), but instead I seem to have gone on a little too long with my rant about the lack of knowledge of my psychology lecturer. Clearly, I still bear an intellectual grudge.

Perhaps, though, I am being a little too harsh, since this very weekend an intellectual travesty far worse than what I have just described is going to be gracing selected theatres across this continent.

Also, please note that the majority of my "scientific" links were to various Wikipedia pages. I recognise that Wikipedia most certainly is not a scientific source, but the point I was making was how non-specialized this knowledge really is.