We'll have to agree to disagree. I do not see a need for the immaterial. The thought in my mind is just a biochemical reaction. In much the same way a machine may monitor itself, my ability to reflect upon a thought is also a biochemical reaction.
You are talking about a thought and a 'me' ('my mind'). So it is 'you' who is experiencing the thought, right? If the thought is just a biochemical reaction, what exactly is the 'you' that is experiencing it? If it is nothing but a biochemical reaction, there is no 'you' left which could consciously experience it.
I think you're slightly out of your realm of knowledge here. A computer is a machine that can perform computations like we could via paper and pencil. However, that does not imply that it can't do everything a human being can. What's more, computers are not just calculators or time savers; they are UNIVERSAL machines that can perform computation equivalent to any machine we could ever construct (see the Church-Turing thesis). This is irrespective of the material they are built of, so I could build a computer out of quantum bits or out of pigeon droppings and it would still be able to compute the same things (although at markedly different speeds).
Side note: I can assure you that I am not at all out of my realm of knowledge (I have studied computer science). Not that I think much education is necessary to understand the rather simple principle of computers you have described:
Everything you have said here is exactly what I have said: all computers are basically the same (Turing machines), and everything they 'do' can also be calculated with a pencil and paper. I agree that this alone does not prove that they would not be a sufficient model for the human mind. But you will agree that there is not much consciousness or intelligence in a pencil and a piece of paper. The only 'intelligence' that is added to the pencil and the paper is a program made by man, and of course only the man is intelligent, not the program itself.
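To make concrete what I mean by 'pencil and paper', here is a minimal sketch of my own (in Python; the machine and its rule table are invented purely for this illustration): a Turing-machine-style computation in which everything the machine 'does' is a mechanical lookup in a man-made rule table that one could just as well follow by hand.

    # Toy Turing-machine sketch (my own illustration, not part of the quoted argument).
    # Rules: (state, symbol) -> (new state, symbol to write, head movement).
    # This made-up machine simply flips every bit on the tape and then halts.
    RULES = {
        ("scan", "0"): ("scan", "1", 1),
        ("scan", "1"): ("scan", "0", 1),
        ("scan", "_"): ("halt", "_", 0),   # blank cell reached: stop
    }

    def run(tape, state="scan", head=0):
        tape = list(tape)
        while state != "halt":
            symbol = tape[head] if head < len(tape) else "_"
            state, write, move = RULES[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            head += move
        return "".join(tape)

    print(run("010011"))  # prints "101100"

The table produces behaviour, but every bit of 'intelligence' in it was put there by its author; the machine only applies the rules.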
Machines do react, examine and make decisions. They can do so deterministically according to pre-programmed rules, they can do so by learning, or they can be completely non-deterministic and make choices seemingly at random. I don't see how this differs from a human being.
The problem is that the terms you use imply human abilities. E.g. you say 'they make decisions'. 'They' implies a 'me', which is not there in a machine, since all that happens in a machine is the passive processing of programs. Secondly, something that is deterministic or pre-programmed cannot make a decision, simply because it is pre-programmed. Non-determinism in computers can only be introduced via a random number generator, in which case you can add a die to the pencil and the paper. The 'learning' of computers is nothing more than a paraphrase for certain programming techniques, all of which can again be reduced to a simple Turing machine. The same goes for all AI-related technologies such as fuzzy logic, neural networks, self-modifying programs, etc.
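Two of these points can be made concrete with another sketch of my own (a toy Python example, not a claim about any particular AI system): the 'learning' of a machine is a fixed, pre-programmed arithmetic update rule applied over and over, and the only non-determinism a program can have is an explicitly added random source, i.e. the dice next to the pencil and the paper.

    import random

    def perceptron_learn(examples, epochs=10, lr=0.1):
        # A single perceptron: every "learning" step is the same fixed formula,
        # applied mechanically to the weights. Nothing here decides anything.
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for x, target in examples:
                pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
                err = target - pred
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                b += lr * err
        return w, b

    # The program "learns" the logical AND function from four examples.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(perceptron_learn(data))

    # The only non-determinism available to a program: an added random source (the dice).
    print(random.choice(["heads", "tails"]))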
Science does not fully explain the material world, but by its nature it comes asymptotically closer with each new finding or discovery. You're saying that there are things out there which definitely cannot be explained by science right now, nor can they ever be. I don't see any evidence that suggests I should believe that. Some of the pre-Ionian ancient Greeks believed in animism, that is, that everyday natural processes like the rising of the sun and the blooming of flowers were directly controlled by God. We eventually came to understand these happenings as the result of natural processes. Aren't you making the same mistake with human thought?
Very good point and question. Again, the question is whether we are talking about scientific aspects. The scientific explanation of, e.g., sunrise does not compete with a theological explanation of a divine creation. I agree that this competition was and still is a misunderstanding on both sides. We can get more into this if you want, but the size of this post is getting out of hand already.

Here's a thought experiment and a question for you. Let's say I could make a complete replica of your brain and body, down to the last molecule. This being is identical to you down to the position of every atom and electrical potential across each synapse. You are positing that there's something inherently missing from that being. Why?
If consciousness somehow finds a way to inhabit this Frankenstein Monster, it would probably be like an identical twin. If not, it would probably be dead immediately. But just like you, I honestly don't know.
The problem with this thought experiment is that what you have in mind is not really a complete replica but an already reduced replica consisting only of the scientific aspects of the body according to the current scientific model (atoms, synapses, electrical potentials, etc.). Thus, the correct experiment would be to simulate the body according to the current scientific models in a computer (which should be sufficient according to your beliefs). In this case, I am sure that this 'program' would not be conscious or intelligent, for all the reasons I have already presented.
BTW, very interesting discussion so far!