David J. Chalmers wrote The Singularity: A Philosophical Analysis. He goes on about what might cause the Singularity, what we should do to prepare for it, etc. (the Singularity being the idea that machine intelligence could one day rapidly surpass our own; he also gets into whether humans might upload their minds, and therefore their beings, into the virtual cloud).
Steve Omohundro, Ph.D. wrote Rationally-Shaped Artificial Intelligence, concerning, well, rationally shaped AI (kind of obvious on that one).
Eliezer Yudkowsky wrote Artificial Intelligence as a Positive and Negative Factor in Global Risk, which goes on about the potential dangers we may one day face when our computers finally do become self-aware.
I hope that's the kind of information you were looking for concerning the philosophies behind AI. I enjoyed reading them. I forget where I even first heard of them, but I still had them on my computer, so they must have been online at some point...
Actually, here's Chalmers: http://fragments.consc.net/djc/2010/04/the-singularity-a-philosophical-analysis.html
And Omohundro has... well, I can't find the article I downloaded, not sure why, but he does have a website up.
Also not sure where I grabbed the last one. LINK 1
One final resource: http://selfawaresystems.com/ is one I found through Omohundro's website. They have quite a few papers on the topic of AI, ranging from the how (technical) to the why (philosophical).
PS: While searching for links for you, I found this as well: a new one for me to read that I hadn't noticed before (or maybe it came out after I last checked, shrugs). Anywho: LINK 2
Hopefully this gets you off to a good start.