How it works
What’s cool about esearch is that you can tell it to save a history of the articles found by your query, and then use another utility called efetch to download that history. This is done by adding &usehistory=y to your search, which will include this tag in the returned XML (in addition to some other XML tags):
<WebEnv> NCID_1_90555773_130.14.18.48_5553_1335519406_1226217114 </WebEnv>
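For illustration, here’s a minimal R sketch of this step (not the post’s exact code; the query term "stroke" is just a placeholder): it runs an esearch query with usehistory=y and extracts WebEnv, QueryKey and the total article count from the response using the XML package.

library(XML)

search_url <- paste0(
  "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
  "?db=pubmed&term=stroke&usehistory=y"
)

# Read the raw response and parse it as XML
raw_xml <- paste(readLines(search_url, warn = FALSE), collapse = "")
doc     <- xmlParse(raw_xml, asText = TRUE)

web_env   <- xpathSApply(doc, "//WebEnv", xmlValue)
query_key <- xpathSApply(doc, "//QueryKey", xmlValue)
count     <- as.numeric(xpathSApply(doc, "//Count", xmlValue)[1])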
Once we have extracted the WebEnv string, we just tell PubMed’s efetch to send us the articles saved in WebEnv. There’s one complication, though: PubMed “only” allows us to fetch 10,000 articles in one go, so my code includes a loop that downloads the data in batches and pastes the pieces together into valid XML (see the sketch below). The XML cutting and pasting is done with gsub, since the unparsed XML data is just a long string. It’s not the most beautiful solution, but it seems to work.
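Continuing the sketch above, the batch loop might look something like this (illustrative, not the author’s exact code): each pass fetches up to 10,000 records from the history server, the per-batch XML header and root tags are stripped with sub/gsub, and the pieces are concatenated and re-wrapped in a single <PubmedArticleSet> root.

batch_size <- 10000
records    <- ""

for (start in seq(0, count - 1, by = batch_size)) {
  fetch_url <- paste0(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi",
    "?db=pubmed&WebEnv=", web_env,
    "&query_key=", query_key,
    "&retstart=", start, "&retmax=", batch_size,
    "&retmode=xml"
  )
  chunk <- paste(readLines(fetch_url, warn = FALSE), collapse = "")

  # Cut away everything outside the <PubmedArticleSet> root so the
  # batches can be pasted together as one valid document
  chunk <- sub("^.*?<PubmedArticleSet>", "", chunk, perl = TRUE)
  chunk <- sub("</PubmedArticleSet>.*$", "", chunk)

  records <- paste0(records, chunk)
}

all_xml <- paste0("<PubmedArticleSet>", records, "</PubmedArticleSet>")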
Now that all the XML data is saved in one object, we just need to parse it and extract whatever PubMed field(s) we’re interested in. I’ve included a function that will parse the XML and extract journal counts, although you could use the same method to extract any field.
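As a rough sketch of that parsing step (the XPath follows PubMed’s DTD, where the journal name sits in MedlineCitation/Article/Journal/Title):

doc      <- xmlParse(all_xml, asText = TRUE)
journals <- xpathSApply(doc, "//MedlineCitation/Article/Journal/Title", xmlValue)

# Tabulate how many of the fetched articles each journal published
journal_counts <- sort(table(journals), decreasing = TRUE)
head(journal_counts)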
Original link: http://www.r-bloggers.com/how-to-download-complete-xml-records-from-pubmed-and-extract-data