Simulating the Human Mind
One great challenge that computing faces is building a computer that can reproduce human behaviour. A common goal of those pursuing artificial intelligence is to pass the Turing test, a test of whether a computer's conversational behaviour can be distinguished from a human's.
There are a number of optimists out there who think this is a feasible goal. One article of faith for them is the "Singularity", the projected point in time at which computer processing power exceeds that of the human brain.
There is of course a powerful point to be made that if a computer can think more and faster than a human, then ipso facto human intelligence has not just been artificially reproduced, but improved upon.
However, I think that this is a fallacious point, simply because it assumes that the human brain is solely digital and that the power of the mind can be calculated to an absolute value.
The problem with computers can be illustrated by the simple issue of facial recognition. A computer can analyse a digital representation of the human face without difficulty.
But take two pictures of the same face and ask the computer to determine whether they show the same person, and it will run into problems unless it has been specifically programmed to avoid them. That workaround arguably defeats the point of artificial intelligence, since a genuinely intelligent being would be able to learn on its own.
This problem arises because a computer sees everything in black and white: either something is or it is not. Humans, however, tend to think in shades of grey, and so they can see that the two pictures are of the same person despite, say, a different hairstyle.
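The contrast can be sketched in a few lines of Python. The feature vectors and the tolerance below are invented purely for illustration; a real face-recognition system would derive such vectors from images. The point is only the shape of the comparison: an exact, binary check fails on two slightly different photos of the same face, while a tolerance-based check succeeds.

```python
import math

# Hypothetical feature vectors for two photos of the same person.
# These numbers are invented for this sketch, not derived from real images.
photo_a = [0.91, 0.40, 0.27, 0.65]
photo_b = [0.89, 0.42, 0.25, 0.66]  # same face, different hairstyle/lighting

# "Black and white" comparison: exact equality. Any tiny difference fails.
exact_match = (photo_a == photo_b)  # False

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# "Shades of grey" comparison: close enough counts as a match.
TOLERANCE = 0.1  # an assumed threshold; in practice this must be tuned
fuzzy_match = distance(photo_a, photo_b) < TOLERANCE  # True

print(exact_match, fuzzy_match)
```

Of course, the tolerance itself has to be chosen by someone, which is exactly the kind of hand-tuning the essay argues sits uneasily with genuine intelligence.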
Learning presents another problem for artificial intelligence. Thus far it has proved difficult to build computers that can heuristically adapt to new situations, and the heuristics we do have are far from completely accurate.
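What "heuristically adapting" might mean can be illustrated with a deliberately crude sketch. The examples, learning rate, and starting threshold below are all invented; the toy simply nudges a match threshold whenever feedback shows it was wrong, which also shows why such heuristics remain far from completely accurate.

```python
# Toy heuristic learner: adjusts a match threshold from feedback.
# All numbers here are invented for illustration.
threshold = 0.5
LEARNING_RATE = 0.1

# Pairs of (distance between two face photos, whether they truly matched).
examples = [(0.55, True), (0.05, True), (0.65, False), (0.52, True), (0.80, False)]

for dist, is_same in examples:
    predicted_same = dist < threshold
    if predicted_same != is_same:
        # Nudge the threshold toward the example it got wrong:
        # raise it after a missed match, lower it after a false match.
        threshold += LEARNING_RATE if is_same else -LEARNING_RATE

print(round(threshold, 2))  # the adapted threshold
```

The learner improves with experience, but a single threshold can never perfectly separate overlapping cases, so some errors persist no matter how long it trains.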
These problems may be surmountable, and there is a reasonable case to be made for the optimists' view. After all, the human nervous system does operate digitally in one sense: a nerve is either firing or it is not.
A computer may be able to outthink a human in terms of calculations per second, but the question is whether we can harness that computing power and program it to overcome the problems associated with binary, digital thinking.
Anything is possible, of course — I could very well be proven wrong, especially considering the exponential rate of advance in the computing industry. But I tend to be skeptical about the potential for artificial intelligence, because if history is anything to go by, the problem is not really in creating computing power, but harnessing it and adapting it to the analog, "shades of grey" world we live in.