
Retrieving web pages with urllib

24 February, 2015 - 10:49

While we can manually send and receive data over HTTP using the socket library, there is a much simpler way to perform this common task in Python by using the urllib library.

Using urllib, you can treat a web page much like a file. You simply indicate which web page you would like to retrieve and urllib handles all of the HTTP protocol and header details.

The equivalent code to read the romeo.txt file from the web using urllib is as follows:

import urllib

fhand = urllib.urlopen('http://www.py4inf.com/code/romeo.txt')
for line in fhand:
    print line.strip()

Once the web page has been opened with urllib.urlopen we can treat it like a file and read through it using a for loop.

When the program runs, we only see the output of the contents of the file. The headers are still sent, but the urllib code consumes the headers and only returns the data to us.

But soft what light through yonder window breaks
It is the east and Juliet is the sun
Arise fair sun and kill the envious moon
Who is already sick and pale with grief
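The key idea here is that the for loop does not care where the lines come from; any file-like object that yields lines works the same way. A minimal sketch of the same pattern, using io.StringIO as a local stand-in for the urllib response so no network connection is needed:

```python
import io

# io.StringIO wraps a string in a file-like object, standing in
# here for the object returned by urllib.urlopen
fhand = io.StringIO('But soft what light through yonder window breaks\n'
                    'It is the east and Juliet is the sun\n')

lines = []
for line in fhand:              # iterate line by line, just like a file
    lines.append(line.strip())  # strip() removes the trailing newline

print(lines[0])
```

Because the loop only relies on the line-by-line protocol, swapping io.StringIO for a real urllib handle changes nothing in the loop body.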

As an example, we can write a program to retrieve the data for romeo.txt and compute the frequency of each word in the file as follows:

import urllib

counts = dict()
fhand = urllib.urlopen('http://www.py4inf.com/code/romeo.txt')
for line in fhand:
    words = line.split()
    for word in words:
        counts[word] = counts.get(word,0) + 1
print counts

Again, once we have opened the web page, we can read it like a local file.
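The counting idiom itself is independent of urllib. A short sketch applying the same counts.get technique to a made-up local string, so it can be run without fetching anything:

```python
counts = dict()

# A hypothetical sample string standing in for lines read from the web
text = 'the east and the sun and the moon'

for word in text.split():
    # get() returns the current count, or 0 if the word is new
    counts[word] = counts.get(word, 0) + 1

print(counts)
```

The get(word, 0) call is what lets the loop handle first-time words and repeat words with a single line, instead of an if/else that checks whether the key already exists.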