Thursday, March 31, 2005

Vikon UI (Qt)

...swapping of sleep and work time....
A possible solution is to refresh the image with every movement of the camera in the 3D colon, rather than aiming to animate it smoothly like a movie.
The main thing to be concerned about is learning the proper communication mechanism (signals and slots) between the widgets, and then connecting the flatview and 3dview widgets. Observing the sliceview widgets should help.
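The signal/slot pattern that Qt's QObject::connect implements can be sketched in plain Python. This is only a conceptual illustration, not Qt itself; the class names (CameraView, FlatView) and the `moved` signal are hypothetical stand-ins for the real widgets:

```python
class Signal:
    """A signal holds a list of connected slots (callables) and calls
    each of them when emitted - the essence of Qt's mechanism."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)


class CameraView:
    """Stands in for the 3D-view widget; emits 'moved' on camera motion."""
    def __init__(self):
        self.moved = Signal()

    def move_camera(self, position):
        self.moved.emit(position)


class FlatView:
    """Stands in for the flattened-colon widget; its slot records the
    new position (a real widget would repaint here)."""
    def __init__(self):
        self.last_position = None

    def update_image(self, position):
        self.last_position = position


camera = CameraView()
flat = FlatView()
camera.moved.connect(flat.update_image)  # analogous to QObject::connect
camera.move_camera((10, 20, 30))
```

The key design point is that CameraView never references FlatView directly; the connection is made externally, which is what lets independently written widgets be wired together.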

After this, the main thing to be done is to convert the existing control panel into a toolbar or a pop-up menu. Got some hints for this from the canvas.cpp example (!).

Next, include the code for the volume overview and slice view (if provided) to get the 'final' design.

Wednesday, March 30, 2005

Moving images in QT

Aim: To move the flattened colon image in the Trolltech Qt toolkit

Possible solutions:

1. QMovie:

The problem seems to be that it accepts the MNG and GIF image formats, whereas the current flat image is a PNG.

2. QCanvasSprite:

" The QCanvasSprite class provides an animated moving pixmap on a QCanvas"
I have not used QCanvas in displaying the flattened image.

Still further classes have to be explored. If needed, the current pixmap on the widget has to be drawn by incorporating QCanvas in order to use the QCanvasSprite class.

Final result should be a real-time movement of the flat image, corresponding to the movement of the mouse in the 3D colon.
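The per-event refresh idea (redraw once per camera movement rather than running an animation loop) can be sketched as follows. The widget, the method names, and the depth-to-offset mapping are all hypothetical; only the event-driven structure is the point:

```python
class FlatImageView:
    """Hypothetical flattened-image widget: it repaints exactly once per
    camera-move event instead of driving a smooth animation timer."""
    def __init__(self, image_height):
        self.image_height = image_height
        self.offset = 0       # vertical scroll offset into the flat image
        self.repaints = 0     # counts repaint requests

    def on_camera_moved(self, colon_depth, colon_length):
        # Map the camera's depth along the colon centreline to a
        # proportional vertical offset in the flattened image.
        self.offset = int(colon_depth / colon_length * self.image_height)
        self.repaint()

    def repaint(self):
        self.repaints += 1    # a real widget would redraw the pixmap here


view = FlatImageView(image_height=2000)
view.on_camera_moved(colon_depth=50.0, colon_length=200.0)  # offset -> 500
```

Because each mouse movement in the 3D view triggers exactly one recompute-and-repaint, responsiveness tracks the input rate and no frame-rate bookkeeping is needed.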

Sunday, March 27, 2005

Difficult problem (partitions) studied by Ramanujan solved

Classic maths puzzle cracked at last

17:53, 21 March 2005 - NewScientist.com news service
Maggie McKee

A number puzzle originating in the work of self-taught maths genius Srinivasa Ramanujan nearly a century ago has been solved. The solution may one day lead to advances in particle physics and computer security.

Karl Mahlburg, a graduate student at the University of Wisconsin in Madison, US, has spent a year putting together the final pieces to the puzzle, which involves understanding patterns of numbers.

"I have filled notebook upon notebook with calculations and equations," says Mahlburg, who has submitted a 10-page paper of his results to the Proceedings of the National Academy of Sciences.

The patterns were first discovered by Ramanujan, who was born in India in 1887 and flunked out of college after just a year because he neglected his studies in subjects outside of mathematics.

But he was so passionate about the subject he wrote to mathematicians in England outlining his theories, and one realised his innate talent. Ramanujan was brought to England in 1914 and worked there until shortly before his untimely death in 1920 following a mystery illness.

Curious patterns

Ramanujan noticed that whole numbers can be broken into sums of smaller numbers, called partitions. The number 4, for example, contains five partitions: 4, 3+1, 2+2, 1+1+2, and 1+1+1+1.

He further realised that curious patterns - called congruences - occurred for some numbers in that the number of partitions was divisible by 5, 7, and 11. For example, the number of partitions for any number ending in 4 or 9 is divisible by 5.
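Both the partition counts and the mod-5 congruence described above are easy to check numerically; here is a short Python sketch using the standard dynamic-programming recurrence for the partition function:

```python
def partition_count(n):
    """Count the partitions of n: p[k] accumulates, one part size at a
    time, the number of ways to write k as a sum of parts <= part."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for k in range(part, n + 1):
            p[k] += p[k - part]
    return p[n]


print(partition_count(4))  # 5: the partitions 4, 3+1, 2+2, 2+1+1, 1+1+1+1
# Ramanujan's congruence: p(n) is divisible by 5 whenever n ends in 4 or 9
print(all(partition_count(n) % 5 == 0 for n in (4, 9, 14, 19, 24)))  # True
```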

"But in some sense, no one understood why you could divide the partitions of 4 or 9 into five equal groups," says George Andrews, a mathematician at Pennsylvania State University in University Park, US. That changed in the 1940s, when physicist Freeman Dyson discovered a rule, called a "rank", explaining the congruences for 5 and 7. That set off a concerted search for a rule that covered 11 as well - a solution called the "crank" that Andrews and colleague Frank Garvan of the University of Florida, US, helped deduce in the 1980s.

Patterns everywhere

Then in the late 1990s, Mahlburg's advisor, Ken Ono, stumbled across an equation in one of Ramanujan's notebooks that led him to discover that any prime number - not just 5, 7, and 11 - had congruences. "He found, amazingly, that Ramanujan's congruences were just the tip of the iceberg - there were really patterns everywhere," Mahlburg told New Scientist. "That was a revolutionary and shocking result."

But again, it was not clear why prime numbers showed these patterns - until Mahlburg proved the crank can be generalised to all primes. He likens the problem to a gymnasium full of people and a "big, complicated theory" saying there is an even number of people in the gym. Rather than counting every person, Mahlburg uses a "combinatorial" approach showing that the people are dancing in pairs. "Then, it's quite easy to see there's an even number," he says.

"This is a major step forward," Andrews told New Scientist. "We would not have expected that the crank would have been the right answer to so many of these congruence theorems."

Andrews says the methods used to arrive at the result will probably be applicable to problems in areas far afield from mathematics. He and Mahlburg note partitions have been used previously in understanding the various ways particles can arrange themselves, as well as in encrypting credit card information sent over the internet.

Friday, March 25, 2005

A New Company to Focus on Artificial Intelligence


Published: March 24, 2005

SAN FRANCISCO, March 23 - The technologist and the marketing executive who co-founded Palm Computing in 1992 are starting a new company that plans to license software technologies based on a novel theory of how the mind works.

Jeff Hawkins and Donna Dubinsky will remain involved with what is now called PalmOne, but on Thursday they plan to announce the creation of Numenta, a technology development firm that will conduct research in an effort to extend Mr. Hawkins's theories. Those ideas were initially sketched out last year in his book "On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines," co-written with Sandra Blakeslee, who also writes for The New York Times.

Dileep George, a Stanford University graduate student who has worked with Mr. Hawkins in translating his theory into software, is joining the firm as a co-founder.

Mr. Hawkins has long been interested in research in the field of intelligence, and in 2002 he founded the Redwood Neuroscience Institute. He now spends part of his time there while continuing to serve as chief technology officer of PalmOne.

Artificial intelligence, which first attracted computer scientists in the 1960's, was commercialized in the 1970's and 1980's in products like software that mimicked the thought process of a human expert in a particular field. But the initial excitement about machines that could see, hear and reason gave way to disappointment in the mid-1980's, when artificial intelligence technology became widely viewed as a failure in the real world.

In recent years, vision and listening systems have made steady progress, and Mr. Hawkins said that while he was uncomfortable with the term artificial intelligence, he believed that a renaissance in intelligent systems was possible.

Full Text at the New York Times

IBM simulates cortical columns?

Published: March 22, 2005, 5:23 PM PST
By Michael Kanellos
Staff Writer, CNET

IBM has devised a way to let computers think like vertebrates.

Charles Peck and James Kozloski of IBM's Biometaphorical Computing team say they have created a mathematical model that mimics the behavior of neocortical minicolumns, thin strands of tissue that aggregate impulses from neurons. Further research could one day lead to robots that can "see" like humans and/or make appropriate decisions when bombarded with sensory information.
A research paper on the model is expected to come out this week.

The brain consists of roughly 28 billion cells, Peck explained. The 200 million minicolumns essentially gather sensory data and organize it for higher parts of the brain. The minicolumns also communicate with each other through interconnections. Minicolumns are roughly 1/20 of a millimeter in diameter and extend through the cortex.

The mathematical model created at IBM simulates the behavior of 500,000 minicolumns connected by 400 million connections. With it, "we were able to demonstrate self-organization" and behavior similar to that seen in the real world, Peck said.

"What we are trying to do is study the brain at the highest level of abstraction without masking the underlying function," he said.

In a test outlined in the upcoming paper, the system was able to solve a pattern recognition problem that will cause errors on ordinary computers.

Ideally, the algorithm could one day help scientists more fully understand the underlying processing that takes place when people see things. In a nutshell, an image is received, decomposed into color, shape, texture and other attributes and then reassembled, prompting the animal to change its behavior. Not all parts of the process are fully understood, Peck said.

Over the past two years, researchers have increasingly looked toward nature as a model to emulate. Some companies, such as Cambrios, are trying to develop new compounds by exploiting proteins secreted by biological viruses. PalmOne founder Jeff Hawkins, meanwhile, is creating a company that will sell systems that use the same thought processes as the human brain. Intel co-founder Gordon Moore recently said that computers won't likely be able to think like humans unless they are redesigned.

Brains typically think by making predictions about future events by looking at a vast array of past experiences, Hawkins said in a speech Monday at an event unrelated to IBM. Hawkins showed off a prototype application that can recognize shapes it has "seen" in the past.

IBM is presenting the paper at the International Conference on Adaptive and Natural Computing Algorithms in Coimbra, Portugal.

Tuesday, March 22, 2005

OpenVIDIA : GPU accelerated Computer Vision Library

The OpenVIDIA project implements computer vision algorithms on computer graphics hardware, using OpenGL and Cg. The project provides useful example programs which run real-time computer vision algorithms on single or parallel graphics processing units (GPUs).

Brain hierarchy and the logic operators

Posted in the MindBrain Yahoo group by Chris

Hi all, some comments on tools of logic being 'hard wired'.

Through analysis of the manner in which our brains deal with paradox, using XOR to extract objects from complex patterns (the AND realm), as well as analysis of how, out of the neurology, we can elicit the qualities of the number types used in Mathematics, we find that all of the formal logic operators are usable in reflecting the development and maintenance of mental states.

I have emphasised before the notion of the IDM "Dimension of Precision" that reflects the general characteristics of information processing 'in here', with increasing precision in categorisation, and we can map onto that dimension all of the core logic operators showing a movement from the vague/integrated to the crisp/differentiated. Thus, the dimension of general-to-particular is mapped right-to-left and we overlay the operators:

particular ............ general

IMP ... XOR ... AND ... IOR

(missing are COIN, NOT, NAND, and NOR - all not necessary to make the point)
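Of the four operators mapped above, only IMP is asymmetric in its arguments (a property the post leans on later), and this can be checked directly over the Boolean truth tables:

```python
# Truth-table check that IMP is the only one of the four mapped
# operators that is asymmetric (i.e. argument order matters).
ops = {
    "IMP": lambda a, b: (not a) or b,  # a implies b
    "XOR": lambda a, b: a != b,
    "AND": lambda a, b: a and b,
    "IOR": lambda a, b: a or b,        # inclusive or
}

inputs = [(a, b) for a in (False, True) for b in (False, True)]
symmetric = {name: all(f(a, b) == f(b, a) for a, b in inputs)
             for name, f in ops.items()}
print(symmetric)  # IMP: False (asymmetric); XOR, AND, IOR: True
```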

In paradox processing the more 'obvious' of these is the AND-to-XOR dynamic, where we see a pattern in a complex line drawing and 'extract' that pattern, only to find it is 'paradoxical': what we SEE is two forms trying to share the one space (e.g. the Necker cube). No matter how much bandwidth we add, we cannot resolve the issue, and so we fall back onto the complement of bandwidth: time. So we experience 'oscillations' in the images as the stimulus/response elements of the brain try to resolve things one step at a time!

The XOR operator reflects the ability to extract parts from a whole, to clearly differentiate those parts, and as such reflects the development of a set of elements that make up the whole. It can have 'issues' when it sees in something more than what is there, such that the 'something' is interpretable as 'non-reducing'.

The AND operator applies to the integration of many into a 'whole' without the application of a label - the application of the label takes that implicit whole and makes it explicit in the form of a representation. IOW we differentiate WHOLES by labelling them and so REPRESENTING them in our XOR-realm, but not accommodating them in that realm.

The mapping of general to particular introduces a focus on mapping from the symmetric to the asymmetric, and the IMP (implies) operator is the only one of these operators that is asymmetric in form. The IMP operator reflects an in-built trait of using implication to derive information. Its asymmetry lies in the consequences of its actions. For example, let's look at the Cartesian coordinate system we use in Mathematics etc.

If I assert a dimension (+/-, and so a dichotomy), label it as the X axis, and assert it to be fixed, then I have set down an initial context. If I then assert another dimension, 'orthogonal' to the X axis, and call it Y, then GIVEN Y I can imply X - but given X I cannot imply Y.

As we add dimensions, each new dimension IMPLIES all that is 'before/below' it. As such, using the X, Y, Z dimensions (or, here, abstracted into the notion of SETS, with all points on a dimension being elements of the set - the set relations fall into the concept of subsets etc.) we can assert:

Z <= Y <= X, and in its purest form we have Z = Y = X. I can work backwards in deriving implications, but I cannot work forwards. There are QUALITATIVE differences here: even if, quantitatively, Z = Y = X, a hierarchic format is still present, such that I can NEVER imply Z given X, nor Y given X, but I will ALWAYS be able to imply Y from Z and X from Y.

For a more practical example, imagine walking down the street and picking up a piece of paper that has written on it:

"...Z axis was 5". From this snippet of information we can infer the definite presence of the Y axis and the X axis. If, on the other hand, the piece of paper said "...X axis was 6", I cannot definitely infer the presence of any dimension other than X.

Another example is more 'set theory' oriented: given the set of pencils, X, and the set of writing tools, Y, we have X < Y (X is a subset of Y). But also note the PRECISION issue: X is more 'particular' in its focus, Y more general. At best I can make X the set of writing tools, and so X = Y, but X can never 'transcend' Y (and so the sum of the parts cannot transcend the whole).

This perspective can move us into issues regarding 'completeness', where the truly complete is in the IOR state - the problem being that this state is more unconscious, more implicit than explicit, more immediate than partial, such that 'completeness' is sensed intuitively at best and so requires something Science has issues with - a leap of faith. To avoid that leap we maintain the concept of incompleteness/uncertainty and focus on IMP and the use of probabilistic reasoning (this probabilistic reasoning in fact appears to be hard-coded into the brain, reflecting the asymmetry).

In our brains, our more conscious, mediation-dominated states are closer to the IMP/XOR realm than to the AND/IOR realm. BUT integration DOES exist in the realm of the differentiating - it is just focused WITHIN what has been differentiated; through continuous differentiating, the AND/IOR elements become manifest not as senses of 'semantics' but as a more concentrated, and so intense, focus on what we label "syntax" - the focus of integrating is on one's position in the hierarchy of 'parts' that make up the 'whole', such that all that can matter, all that is 'meaningful', is focused on position.

This focus on POSITION applies to the "dimension of precision" itself in that, as covered in past comments, each point on that dimension can serve as a ground out of which to interpret reality - thus some individuals/collectives can be more "IMP" oriented, others more "XOR", others more "AND". IOW position affects perspective (implied in this is that the position of IMP contains the root of, is the ground for, imagination).

Note that all of the above operators are from ANALYTICAL logic and as such lack the notion of thermodynamic time - time is at best mechanised, but more often removed from perspectives - the analytical will focus on idealisation, the static, the universal. What is needed is the copying of the analytical terms and then their association with time to give us formal representations of the DIALECTICAL processes to aid in 'completing' logic as best we can.

Thus, the IMP operator represents "IF X THEN Y". However, the serial format is not tied to time elements; the focus is purely structural.

Thus in 'full spectrum' logic we would have "IF X, SO Y" to reflect the presence of structure Y given X, and then re-define "IF..THEN" as a purely temporal format, where "IF X, THEN Y" means that given X, Y will follow AFTER it (X = grey clouds, Y = rain). This latter format reflects the rigid sequence of events that in analytical logic is focused more on hierarchy, be it explicit (X < Y) or implicit (X = Y - they are the same 'quantitatively'). As such, in analytical logic any reference to time is in the content of the variables, not in the overall form - not explicitly represented as a universal itself.

The UNIVERSALS of analytical logic, being universals, lack meaning until used in a context (we can play around with the 'pure' forms WITHIN the realm of that universal, but their use is in mapping to local contexts). The lack of temporal references in a world dominated by the arrow of time is an issue - IOW the implicit relationship here is of time being 'secondary', a LOCAL concept to which we can apply the universals; but nature appears to show otherwise - thermodynamics is no local illusion, it applies universally.


Monday, March 21, 2005

The World as Albert Einstein sees it

A knowledge of the existence of something
we cannot penetrate,
of the manifestations of the profoundest reason
and the most radiant beauty.
It is this knowledge and this emotion
that constitute the truly religious attitude;
in this sense, and in this alone,
I am a deeply religious man.

The World as I See It
Albert Einstein

Saturday, March 19, 2005

What to do in college?

by Paul Graham, a Lisp hacker and co-founder of Viaweb, the company which went on to become Yahoo! Store

March 2005

(Parts of this essay began as replies to students who wrote to me with questions.)

Recently I've had several emails from computer science undergrads asking what to do in college. I might not be the best source of advice, because I was a philosophy major in college. But I took so many CS classes that most CS majors thought I was one. I was certainly a hacker, at least.


What should you do in college to become a good hacker? There are two main things you can do: become very good at programming, and learn a lot about specific, cool problems. These turn out to be equivalent, because each drives you to do the other.

The way to be good at programming is to work (a) a lot (b) on hard problems. And the way to make yourself work on hard problems is to work on some very engaging project.

Odds are this project won't be a class assignment. My friend Robert learned a lot by writing network software when he was an undergrad. One of his projects was to connect Harvard to the Arpanet; it was one of the original nodes, but by 1984 the connection had died. [1] Not only was this work not for a class, but because he spent all his time on it and neglected his studies, he was kicked out of school for a year. [2] It all evened out in the end, and now he's a professor at MIT. But you'll probably be happier if you don't go to that extreme; it caused him a lot of worry at the time.

Another way to be good at programming is to find other people who are good at it, and learn what they know. Programmers tend to sort themselves into tribes according to the type of work they do and the tools they use, and some tribes are smarter than others. Look around you and see what the smart people seem to be working on; there's usually a reason.

Some of the smartest people around you are professors. So one way to find interesting work is to volunteer as a research assistant. Professors are especially interested in people who can solve tedious system-administration type problems for them, so that is a way to get a foot in the door. What they fear are flakes and resume padders. It's all too common for an assistant to result in a net increase in work. So you have to make it clear you'll mean a net decrease.

Don't be put off if they say no. Rejection is almost always less personal than the rejectee imagines. Just move on to the next. (This applies to dating too.)

Beware, because although most professors are smart, not all of them work on interesting stuff. Professors have to publish novel results to advance their careers, but there is more competition in more interesting areas of research. So what less ambitious professors do is turn out a series of papers whose conclusions are novel because no one else cares about them. You're better off avoiding these.

I never worked as a research assistant, so I feel a bit dishonest recommending that route. I learned to program by writing stuff of my own, particularly by trying to reverse-engineer Winograd's SHRDLU. I was as obsessed with that program as a mother with a new baby.

Whatever the disadvantages of working by yourself, the advantage is that the project is all your own. You never have to compromise or ask anyone's permission, and if you have a new idea you can just sit down and start implementing it.

In your own projects you don't have to worry about novelty (as professors do) or profitability (as businesses do). All that matters is how hard the project is technically, and that has no correlation to the nature of the application. "Serious" applications like databases are often trivial and dull technically (if you ever suffer from insomnia, try reading the technical literature about databases) while "frivolous" applications like games are often very sophisticated. I'm sure there are game companies out there working on products with more intellectual content than the research at the bottom nine tenths of university CS departments.

If I were in college now I'd probably work on graphics: a network game, for example, or a tool for 3D animation. When I was an undergrad there weren't enough cycles around to make graphics interesting, but it's hard to imagine anything more fun to work on now.


When I was in college, a lot of the professors believed (or at least wished) that computer science was a branch of math. This idea was strongest at Harvard, where there wasn't even a CS major till the late 1980s; till then one had to major in applied math. But it was nearly as bad at Cornell. When I told the fearsome Professor Conway that I was interested in AI (a hot topic then), he told me I should major in math. I'm still not sure whether he thought AI required math, or whether he thought AI was nonsense and that majoring in something rigorous would cure me of such stupid ambitions.

In fact, the amount of math you need as a CS major is a lot less than most university departments like to admit. I don't think you need much more than high school math plus a few concepts from the theory of computation. (You have to know what an n^2 algorithm is if you want to avoid writing them.) Unless you're planning to write math applications, of course. Robotics, for example, is all math.
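The n^2 point is concrete: the same task written quadratically and then linearly. A small Python illustration (the shapes of the loops, not the timings, are the point):

```python
def has_duplicate_quadratic(items):
    """O(n^2): compares every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicate_linear(items):
    """O(n): a single pass with a hash set."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False


data = list(range(2000)) + [0]  # one duplicate at the end
print(has_duplicate_quadratic(data), has_duplicate_linear(data))  # True True
```

Both give the same answer; the difference only shows up as the input grows, which is exactly why you need to recognize the quadratic shape before you ship it.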

But while you don't literally need math for most kinds of hacking, in the sense of knowing 1001 tricks for differentiating formulas, math is a valuable source of metaphors and examples.[3] I wish I'd studied more math in college for that reason.

Like a lot of people, I was mathematically abused as a child. I learned to think of math as a collection of formulas that were neither beautiful nor had any relation to my life (despite attempts to translate them into "word problems"), but had to be memorized in order to do well on tests.

One of the most valuable things you could do in college would be to learn what math is really about. This may not be easy, because a lot of good mathematicians are bad teachers. And while there are many popular books on math, few seem good. The best I can think of are W. W. Sawyer's. And of course Euclid. [4]


Thomas Huxley said "Try to learn something about everything and everything about something." Most universities aim at this ideal.

But what's everything? To me it means, all that people learn in the course of working honestly on hard problems. All such work tends to be related, in that ideas and techniques from one field can often be transplanted successfully to others. Even others that seem quite distant. For example, I write essays the same way I write software: I sit down and blow out a lame version 1 as fast as I can type, then spend several weeks rewriting it.

Working on hard problems is not, by itself, enough. Medieval alchemists were working on a hard problem, but their approach was so bogus that there was little to learn from studying it, except possibly about people's ability to delude themselves. Unfortunately the sort of AI I was trying to learn in college had the same flaw: a very hard problem, blithely approached with hopelessly inadequate techniques. Bold? Closer to fraudulent.

The social sciences are also fairly bogus, because they're so much influenced by intellectual fashions. If a physicist met a colleague from 100 years ago, he could teach him some new things; if a psychologist met a colleague from 100 years ago, they'd just get into an ideological argument. Yes, of course, you'll learn something by taking a psychology class. The point is, you'll learn more by taking a class in another department.

The worthwhile departments, in my opinion, are math, the hard sciences, engineering, history (especially economic and social history, and the history of science), architecture, and the classics. A survey course in art history may be worthwhile. Modern literature is important, but the way to learn about it is just to read. I don't know enough about music to say.

You can skip the social sciences, philosophy, and the various departments created recently in response to political pressures. Many of these fields talk about important problems, certainly. But the way they talk about them is useless. For example, philosophy talks, among other things, about our obligations to one another; but you can learn more about this from a wise grandmother or E. B. White than from an academic philosopher.

I speak here from experience. I should probably have been offended when people laughed at Clinton for saying "It depends on what the meaning of the word 'is' is." I took about five classes in college on what the meaning of "is" is.

Another way to figure out which fields are worth studying is to create the dropout graph. For example, I know many people who switched from math to computer science because they found math too hard, and no one who did the opposite. People don't do hard things gratuitously; no one will work on a harder problem unless it is proportionately (or at least log(n)) more rewarding. So probably math is more worth studying than computer science. By similar comparisons you can make a graph of all the departments in a university. At the bottom you'll find the subjects with least intellectual content.

If you use this method, you'll get roughly the same answer I just gave.

Language courses are an anomaly. I think they're better considered as extracurricular activities, like pottery classes. They'd be far more useful when combined with some time living in a country where the language is spoken. On a whim I studied Arabic as a freshman. It was a lot of work, and the only lasting benefits were a weird ability to identify semitic roots and some insights into how people recognize words.

Studio art and creative writing courses are wildcards. Usually you don't get taught much: you just work (or don't work) on whatever you want, and then sit around offering "crits" of one another's creations under the vague supervision of the teacher. But writing and art are both very hard problems that (some) people work honestly at, so they're worth doing, especially if you can find a good teacher.


Of course college students have to think about more than just learning. There are also two practical problems to consider: jobs, and graduate school.

In theory a liberal education is not supposed to supply job training. But everyone knows this is a bit of a fib. Hackers at every college learn practical skills, and not by accident.

What you should learn to get a job depends on the kind you want. If you want to work in a big company, learn how to hack Blub on Windows. If you want to work at a cool little company or research lab, you'll do better to learn Ruby on Linux. And if you want to start your own company, which I think will be more and more common, master the most powerful tools you can find, because you're going to be in a race against your competitors, and they'll be your horse.

There is not a direct correlation between the skills you should learn in college and those you'll use in a job. You should aim slightly high in college.

In workouts a football player may bench press 300 pounds, even though he may never have to exert anything like that much force in the course of a game. Likewise, if your professors try to make you learn stuff that's more advanced than you'll need in a job, it may not just be because they're academics, detached from the real world. They may be trying to make you lift weights with your brain.

The programs you write in classes differ in three critical ways from the ones you'll write in the real world: they're small; you get to start from scratch; and the problem is usually artificial and predetermined. In the real world, programs are bigger, tend to involve existing code, and often require you to figure out what the problem is before you can solve it.

You don't have to wait to leave (or even enter) college to learn these skills. If you want to learn how to deal with existing code, for example, you can contribute to open-source projects. The sort of employer you want to work for will be as impressed by that as good grades on class assignments.

In existing open-source projects you don't get much practice at the third skill, deciding what problems to solve. But there's nothing to stop you starting new projects of your own. And good employers will be even more impressed with that.

What sort of problem should you try to solve? One way to answer that is to ask what you need as a user. For example, I stumbled on a good algorithm for spam filtering because I wanted to stop getting spam. Now what I wish I had was a mail reader that somehow prevented my inbox from filling up. I tend to use my inbox as a todo list. But that's like using a screwdriver to open bottles; what one really wants is a bottle opener.

Grad School

What about grad school? Should you go? And how do you get into a good one?

In principle, grad school is professional training in research, and you shouldn't go unless you want to do research as a career. And yet most people who get PhDs in CS don't go into research. I didn't go to grad school to become a professor. I went because I wanted to learn more.

So if you're mainly interested in hacking and you go to grad school, you'll find a lot of other people who are similarly out of their element. And if half the people around you are out of their element in the same way you are, are you really out of your element?

There's a fundamental problem in "computer science," and it surfaces in situations like this. No one is sure what "research" is supposed to be. A lot of research is hacking that had to be crammed into the form of an academic paper to yield one more quantum of publication.

So it's kind of misleading to ask whether you'll be at home in grad school, because very few people are quite at home in computer science. The whole field is uncomfortable in its own skin. So the fact that you're mainly interested in hacking shouldn't deter you from going to grad school. Just be warned you'll have to do a lot of stuff you don't like.

Number one will be your dissertation. Almost everyone hates their dissertation by the time they're done with it. The process inherently tends to produce an unpleasant result, like a cake made out of whole wheat flour and baked for twelve hours. Few dissertations are read with pleasure, especially by their authors.

But thousands before you have suffered through writing a dissertation. And aside from that, grad school is close to paradise. Many people remember it as the happiest time of their lives. And nearly all the rest, including me, remember it as a period that would have been, if they hadn't had to write a dissertation. [5]

The danger with grad school is that you don't see the scary part upfront. PhD programs start out as college part 2, with several years of classes. So by the time you face the horror of writing a dissertation, you're already several years in. If you quit now, you'll be a grad-school dropout, and you probably won't like that idea. When Robert got kicked out of grad school for writing the Internet worm of 1988, I envied him enormously for finding a way out without the stigma of failure.

On the whole, grad school is probably better than most alternatives. You meet a lot of smart people, and your glum procrastination will at least be a powerful common bond. And of course you have a PhD at the end. I forgot about that. I suppose that's worth something.

The greatest advantage of a PhD (besides being the union card of academia, of course) may be that it gives you some baseline confidence. For example, the Honeywell thermostats in my house have the most atrocious UI. My mother, who has the same model, diligently spent a day reading the user's manual to learn how to operate hers. She assumed the problem was with her. But I can think to myself "If someone with a PhD in computer science can't understand this thermostat, it must be badly designed."

If you still want to go to grad school after this equivocal recommendation, I can give you solid advice about how to get in. A lot of my friends are CS professors now, so I have the inside story about admissions. It's quite different from college. At most colleges, admissions officers decide who gets in. For PhD programs, the professors do. And they try to do it well, because the people they admit are going to be working for them.

Apparently only recommendations really matter at the best schools. Standardized tests count for nothing, and grades for little. The essay is mostly an opportunity to disqualify yourself by saying something stupid. The only thing professors trust is recommendations, preferably from people they know. [6]

So if you want to get into a PhD program, the key is to impress your professors. And from my friends who are professors I know what impresses them: not merely trying to impress them. They're not impressed by students who get good grades or want to be their research assistants so they can get into grad school. They're impressed by students who get good grades and want to be their research assistants because they're genuinely interested in the topic.

So the best thing you can do in college, whether you want to get into grad school or just be good at hacking, is figure out what you truly like. It's hard to trick professors into letting you into grad school, and impossible to trick problems into letting you solve them. College is where faking stops working. From this point, unless you want to go work for a big company, which is like reverting to high school, the only way forward is through doing what you love.


[1] No one seems to have minded, which shows how unimportant the Arpanet (which became the Internet) was as late as 1984.

[2] This is why, when I became an employer, I didn't care about GPAs. In fact, we actively sought out people who'd failed out of school. We once put up posters around Harvard saying "Did you just get kicked out for doing badly in your classes because you spent all your time working on some project of your own? Come work for us!" We managed to find a kid who had been, and he was a great hacker.

When Harvard kicks undergrads out for a year, they have to get jobs. The idea is to show them how awful the real world is, so they'll understand how lucky they are to be in college. This plan backfired with the guy who came to work for us, because he had more fun than he'd had in school, and made more that year from stock options than any of his professors did in salary. So instead of crawling back repentant at the end of the year, he took another year off and went to Europe. He did eventually graduate at about 26.

[3] Eric Raymond says the best metaphors for hackers are in set theory, combinatorics, and graph theory.

Trevor Blackwell reminds you to take math classes intended for math majors. "'Math for engineers' classes sucked mightily. In fact any 'x for engineers' sucks, where x includes math, law, writing and visual design."

[4] For those interested in graphic design, I especially recommend Byrne's Euclid.

[5] If you wanted to have the perfect life, the thing to do would be to go to grad school, secretly write your dissertation in the first year or two, and then just enjoy yourself for the next three years, dribbling out a chapter at a time. This prospect will make grad students' mouths water, but I know of no one who's had the discipline to pull it off.

[6] One professor friend says that 15-20% of the grad students they admit each year are "long shots." But what he means by long shots are people whose applications are perfect in every way, except that no one on the admissions committee knows the professors who wrote the recommendations.

So if you want to get into grad school in the sciences, you need to go to college somewhere with real research professors. Otherwise you'll seem a risky bet to admissions committees, no matter how good you are.

Which implies a surprising but apparently inevitable consequence: little liberal arts colleges are doomed. Most smart high school kids at least consider going into the sciences, even if they ultimately choose not to. Why go to a college that limits their options?

Thanks to Trevor Blackwell, Alex Lewin, Jessica Livingston, Robert Morris, Eric Raymond, and several anonymous CS professors for reading drafts of this, and to the students whose questions began it.

Friday, March 18, 2005

Sarasvati Project:

Sarasvati: A Sanskrit IME

Sanskrit input on Windows XP / 2000 using a regular QWERTY keyboard.

Siemens' Small-Picture Thinking

NEW YORK - This morning, the German industrial giant Siemens placed a $1 billion bet on the fast-growing molecular-imaging market. The company announced an agreement to spend that amount on CTI Molecular Imaging, a small 22-year-old company that posted earnings of just $58 million on sales of $402 million for the fiscal year ended Sept. 30, 2004.

Siemens to Buy U.S. Medical Imaging Company CTI for $1 Billion

March 18 (Bloomberg) -- Siemens AG, the world's second-largest maker of medical equipment, agreed to buy CTI Molecular Imaging Inc. in a transaction valued by the company at about $1 billion.

Siemens, based in Munich, Germany, will pay $20.50 for each share of the company, it said in a Business Wire statement today. The company expects the transaction to close in the second quarter. CTI shares closed yesterday at $17.53.

Thursday, March 17, 2005

Google courts open-source developers

By David Becker, CNET
Published on ZDNet News: March 17, 2005, 12:14 PM PT

Google has launched a new site intended to serve as a central resource for developers working on applications related to the popular search engine.

The new Google Code will be a repository for source code, application programming interfaces, or APIs, and other tools to assist developers working on Google-related projects, according to a welcome note on Thursday from Chris DiBona, Google's open-source program manager and former editor of "News for Nerds" site Slashdot.

The site also will profile current and ongoing projects, DiBona said, to give developers a better sense of what's happening in the Google universe. "One thing we really wanted to put up on Google Code was a way of bringing recognition to those people and groups who have created programs that use our APIs or the code we have released," he said.

Wednesday, March 16, 2005

Digital Telugu Project :

This project aims to become a repository of heterogeneous utilities for processing and displaying Telugu text using Unicode. Primarily, we would like to engage in collaborative efforts in developing utilities for inputting Telugu text using RTS.
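As a rough illustration of what such an input utility involves, here is a toy RTS-style transliterator in Python. The mapping tables cover only a handful of letters, and real RTS handles the full Telugu alphabet, conjunct consonants, and more, so treat this purely as a sketch of the core idea: consonants take either a dependent vowel sign (matra), the inherent "a", or a virama.

```python
# Toy subset of an RTS -> Telugu Unicode transliterator (illustrative only).
CONSONANTS = {"k": "క", "g": "గ", "t": "త", "m": "మ",
              "r": "ర", "l": "ల", "s": "స", "v": "వ"}
# Each vowel maps to (independent form, dependent sign after a consonant).
VOWELS = {"a": ("అ", ""), "A": ("ఆ", "ా"), "i": ("ఇ", "ి"),
          "I": ("ఈ", "ీ"), "u": ("ఉ", "ు"), "U": ("ఊ", "ూ")}
VIRAMA = "్"  # suppresses the inherent vowel on a bare consonant

def rts_to_telugu(text):
    out = []
    i = 0
    while i < len(text):
        c = text[i]
        if c in CONSONANTS:
            out.append(CONSONANTS[c])
            if i + 1 < len(text) and text[i + 1] in VOWELS:
                # Consonant + vowel: use the dependent vowel sign.
                out.append(VOWELS[text[i + 1]][1])
                i += 2
            else:
                # Bare consonant: add the virama.
                out.append(VIRAMA)
                i += 1
        elif c in VOWELS:
            out.append(VOWELS[c][0])  # independent vowel
            i += 1
        else:
            out.append(c)  # pass through anything unmapped
            i += 1
    return "".join(out)
```

For example, `rts_to_telugu("rAma")` yields రామ and `rts_to_telugu("amma")` yields అమ్మ; a usable IME would build on the same longest-match idea with the complete RTS tables.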

(From a message posted to the digitaltelugu Yahoo group.)

Tuesday, March 15, 2005

Global Technology

Global Tech (image posted via Hello)

3D Ray Tracing Card at CeBIT 2005

"As noted at Saarland University is showing a prototype of a 3D Raytracing Card at CeBIT2005. The FPGA is clocked at 90 MHz and is 3-5 times faster in raytracing then a Pentium4 CPU with 30 times more MHz. Besides game engines using raytracing there was a scene of a Boeing with 350 million polygons rendered in realtime.(slashdot)"

3D Chip from the Saar

"Wem die virtuellen 3D-Welten herkömmlicher Rasterizer-Grafikkarten nicht real genug erscheinen, sollte die Hallen 27 und 23, in denen ATI ausschweifend seinen 20. Geburtstag feiert beziehungsweise Nvidia seine Partner wie in einer Wagenburg gegen schwere Zeiten um sich schart, nach fünf Tagen World Cyber Games einmal verlassen und quer übers (reale) Gelände in Halle 9 hinten links beim Stand A60 vorbeischauen."(

SaarCOR - A Hardware Architecture for Real Time Ray Tracing

SaarCOR is a hardware architecture for generating highly realistic images of 3D environments in real time. It implements the well-known ray-tracing algorithm on a single chip and reaches performance comparable to today's graphics technology while using fewer hardware resources and less memory bandwidth. Possible applications include computer games and realistic visualization in the design and development of products such as cars, airplanes, and buildings.

For the first time, SaarCOR allows highly complex objects with millions of triangles to be displayed with photo-realistic image quality in real time. The architecture is highly flexible and scalable and can be optimized for different application scenarios. A first prototype of this graphics board, based purely on ray tracing, is available.
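At its core, the ray-tracing algorithm that SaarCOR puts in hardware repeats one primitive operation enormous numbers of times: intersecting a ray with scene geometry. A minimal software version of that innermost step, ray-sphere intersection, can be sketched in Python (purely illustrative; real ray tracers, and SaarCOR itself, work on triangles with acceleration structures):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest hit along the ray, or None.

    Assumes `direction` is a unit vector, so the quadratic
    a*t^2 + b*t + c = 0 has a == 1.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None  # nearest hit must be in front of the origin

# A ray down the z-axis hits a unit sphere centered at z=5 at distance 4.
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1)
```

The appeal of doing this in silicon is that each ray is independent, so a chip can evaluate many such intersection tests in parallel, which is exactly the parallelism the SaarCOR prototype exploits.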

Sunday, March 13, 2005

Silicon in my Blood....

.... a blog dedicated to the geeks who firmly believe that Si runs in their blood