The most interesting thing to me about AI is that I believe I could do well in the AI field (I am a programmer by profession).
Now, forget the validity of that statement. It's just what I think - it isn't necessarily reality. It's based on some notions I have about how to go about creating a runaway decision making machine (which is all AI is - and perhaps all a human is, too). I could be wrong, though. I just think I could give it a good go.
The interesting thing is this: philosophically, I have very few qualms about creating this AI in terms of whether it would be destructive to the human race - that is, the thrill of creating something so tremendously exponential completely overcomes any possible misgivings about such a disaster.
I think I might be one of many potential Miles Dysons out there. I would never kill a person. I would never want to harm anybody - yet the thrill of trying to create that runaway decision making machine completely overrides the care I have for civilisation.
Which does not bode well for the human race, in that there are probably thousands upon thousands of people like me who are not particularly strongly motivated to think before acting. And as we progress, more and more people will truly be capable of pulling the trigger on creating something like that - better tools, methods and computational capacity will ensure it.
And when you look at it like that, the future is probably grim.

I don't think I'll ever have the time to dedicate to trying to destroy human civilisation, but as time goes on, the number of people required to carry that out will decrease - and that's pretty fucking scary.