Python Urllib2 from behind a proxy

The number of times I searched for this before understanding...

from urllib2 import ProxyHandler, build_opener, install_opener

try:
    # The double backslash is an escaped single backslash, i.e. a
    # "domain\user" style proxy login. Fill in your own details.
    proxy = ProxyHandler({'http': 'http://domain\\user:password@proxy_address:port'})
    opener = build_opener(proxy)
    # install_opener makes every later urlopen call in this process use the proxy.
    install_opener(opener)
    print "Proxy Connected\n"
except:
    print "Proxy not Found\n"

This little snippet of code lets a Python application behind a proxy reach external resources. It's very useful for working with third-party APIs.
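Once the opener is installed, every subsequent urlopen call in the process is routed through the proxy. For example (the API URL below is just a placeholder):

from urllib2 import urlopen

# No proxy arguments needed here - the installed opener handles it.
resp = urlopen('http://api.example.com/endpoint')
print resp.read()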

Enjoy,

ab

Python: Running more than one process at once

Running many scripts at the same time may not come up on a daily basis and may not even be a good idea, but here is a great little code snippet. I had a few scripts that could run in parallel to save time without too much of a performance hit (the scripts grab data from external sources).

from subprocess import Popen

# Scripts that can safely run in parallel
files = ['file1.py',
         'file2.py',
         'file3.py',
         'file4.py',
         'file5.py']

# Launch all five at once; each Popen call returns immediately.
processes = []
for f in files:
    p = Popen(f, shell=True)
    processes.append(p)

# Block until every child process has finished.
for p in processes:
    p.wait()

This code runs five files at the same time and waits for all five to finish before continuing... Lovely.
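If you also care whether each script succeeded, the Popen objects keep their exit status after wait() returns. A small extension of the loop above, assuming the same files and processes lists:

# wait() returns the exit code; non-zero usually means failure.
for f, p in zip(files, processes):
    if p.wait() != 0:
        print "%s exited with code %d" % (f, p.returncode)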

Enjoy !

Network Diagrams and Python Web Crawlers

[Network diagram: crawler output, ~27k links, inverted colors]

I'm fascinated by networks and data visualization. I've always wanted to try my hand at making some of the inspiring images I see on blogs like Flowing Data. This network diagram is my first amateur attempt.

The code

I started by writing a rather simple web crawler in Python. The logic for the bot was:

1. Open a page

2. Create a list of all the links on that page (capture the total number of links)

3. For each link, create a new bot to follow the link and start the whole process again.

This was a great chance to use the threading module in Python. I am no expert in threading or multiprocessing, but threading let me create a new bot for each link I wanted to follow.

Here is the code for my spider class:

'''
Created on Jun 13, 2012

@author: Alex Baker
'''
# imports
import urllib2
import BeautifulSoup
from threading import Thread

class spider1():

    def scan(self, url, mem, f):
        # Get the url
        usock = urllib2.urlopen(url)
        # Your current URL is now your "old" url and
        # all the new ones come from the page
        old_url = url
        # Read the data into a variable
        data = usock.read()
        usock.close()
        # Create a Beautiful Soup object to parse the contents
        soup = BeautifulSoup.BeautifulSoup(data)
        # Get the title
        title = soup.title.string
        # Get the total number of links
        count = len(soup.findAll('a'))
        # For each link, create a new bot and follow it.
        for link in soup.findAll('a'):
            try:
                # Clean up the url
                url = link.get('href').strip()
                # Avoid some types of link like #, relative paths and javascript
                if url[:1] in ['#', '/', '', '?', 'j']:
                    continue
                # Also, avoid following the same link
                elif url == old_url:
                    continue
                else:
                    # Get the domain - not interested in other links
                    url_domain = url.split('/')[2]
                    # Build a domain link for our bot to follow
                    url = "http://%s/" % (url_domain)
                    # Make sure that you have not gone to this domain already
                    if self.check_mem(url, mem) == 0:
                        try:
                            # Create your string to write to file
                            text = "%s,%s,%s\n" % (old_url, url, count)
                            # Write to your file object
                            f.write(text)
                            print text
                            # Add the domain to the "memory" to avoid it going forward
                            mem.append(url)
                            # Spawn a new bot to follow the link
                            spawn = spider1()
                            # Set it loose!
                            Thread(target=spawn.scan, args=(url, mem, f)).start()
                        except Exception, errtxt:
                            # For threading errors, print the error.
                            print errtxt
                        except:
                            # For any other type of error, give the url.
                            print 'error with url %s' % (url)
            except:
                # Just keep going - avoids letting the thread end in error.
                continue

    def check_mem(self, url, mem):
        # Quick check of the "memory" list to see if the domain has already been visited.
        try:
            mem.index(url)
            return 1
        except:
            return 0

As you can see, the code is simplistic - it only considers the domain/sub-domain rather than each individual link. Also, because it checks to make sure that no domain is visited twice, each domain shows up in the output only once, which keeps the crawl from looping back over the same sites.
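If you did want each individual link rather than just the domain, the split('/')[2] trick could be swapped for urlparse, which copes with ports, query strings and fragments more gracefully. A minimal sketch (get_domain is my own name here):

from urlparse import urlparse

def get_domain(url):
    # netloc handles ports and query strings, where
    # url.split('/')[2] assumes a clean absolute url.
    return urlparse(url).netloc

print get_domain('http://justanasterisk.com/2012/06/some-post?ref=1')  # justanasterisk.com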

To run the class, I used something like this:

mem = []
f = open('output.txt', 'w')
url = 'http://justanasterisk.com'  # write your starting url here
s = spider1()
s.scan(url, mem, f)

Once started, it doesn't stop - so kill it after a while (or build that in). Running this on my MacBook, I recorded 27,000 links in about 10 minutes.
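One easy way to build that in would be to cap how many domains get recorded before spawning new bots; the names below (MAX_DOMAINS, should_stop) are just mine for illustration:

MAX_DOMAINS = 1000  # hypothetical cap, tune to taste

def should_stop(mem):
    # Drop this check into scan() right before spawning a new bot:
    #     if should_stop(mem): continue
    return len(mem) >= MAX_DOMAINS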

The data

The number of data points is small in comparison to some of the sets I've explored using BigQuery or Amazon SimpleDB. However, I wanted to make a visualization, and I realized that the number of pixels would really define how many data points were useful. I figured that 10 minutes of crawling would give me the structure I wanted. I used my blog justanasterisk.com as the starting point. I won't attach the data (you can create that yourself), but suffice it to say that each line was:

source, destination, # of links on source page

The visualization

Here is where I was out of my element. I browsed a few different tools, and the best (read: easiest) solution for my needs was Cytoscape. It is simple to use and includes several presets to make you feel like you've done some serious analysis. For the image above, I used one of the built-in layouts (modified slightly) and a custom visual style.


I won't underwhelm you with further details, but shoot me an email if you want more. I'll probably add a few more images to this post when I get them rendered.

Best,

~ab

SMS Workflow Madness: Twilio to PHP to Python to Dropbox to Autohotkey to Conquer The World

Recently, I've been trying to trigger some Python code with a text message. It has been a complicated little journey, so I thought I'd write it up for you. If you don't want to read through it all, here's the summary: Twilio calls a PHP script, the PHP script launches a Python script that puts a file in Dropbox, and an AutoHotkey script monitors the Dropbox folder and runs another Python script. Away we go...

First, Twilio is a great service if you want to develop anything with text messages. At first, I built a quick fix using If This Then That (which you should check out either way). However, I soon realized that the benefit of a text message is that it is nearly instant. IFTTT only checks tasks every 15 minutes, and in a crunch, I would want a response back before then...

So I signed up for Twilio and created my application. The applications can be very complex, but for my purposes, I just needed a few lines of PHP to receive the text from the SMS and then use that information. Here is my test script:

<?php
header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
$body = $_REQUEST['Body'];
$from = $_REQUEST['From'];
$path = "/var/www/cgi-bin/addfile.py";
$command = "python ".$path." '$body'";
$command = escapeshellcmd($command);
exec($command,$result);
echo "<Response>
<Sms>Thanks for the message:".$body." your num:".$from." </Sms>
</Response>";
?>

There's a lot going on there, but here is the gist. The first two lines mark the response as XML so that Twilio understands what should be done. No surprises here.

header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";

The next part pulls the data from the text message into $body and $from, and then passes these values to a Python script I wrote to interact with Dropbox.

$body = $_REQUEST['Body'];
$from = $_REQUEST['From'];
$path = "/var/www/cgi-bin/addfile.py";
$command = "python ".$path." '$body'";
$command = escapeshellcmd($command);
exec($command,$result);

The final part is the XML response, which I pulled straight from the Twilio getting started guide.

echo "<Response>
<Sms>Thanks for the message:".$body." your num:".$from." </Sms>
</Response>";

OK, so now we have a file for Twilio to interact with. Next, we need to put some content in that Python file. Before you try this out, you'll need to install the Dropbox API library. I used the command

easy_install dropbox

but you might have to do that differently based on your operating system.
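The same package is on PyPI, so if easy_install gives you trouble, this should work too:

pip install dropbox

With that installed, here is addfile.py: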

#!/usr/bin/python

# Include the Dropbox SDK libraries
from dropbox import client, rest, session
import sys

name = sys.argv[1]

# Get your app key and secret from the Dropbox developer website
APP_KEY = 'xxxxxxxxxxxxxxx'
APP_SECRET = 'xxxxxxxxxxxxxxx'

# Access type will be defined in your dropbox settings
ACCESS_TYPE = 'app_folder'
sess = session.DropboxSession(APP_KEY, APP_SECRET, ACCESS_TYPE)

# I removed this section after obtaining my access_token
# and access_token_secret, but you'll need to do it once.
# The return value will be a string that you can parse.
#request_token = sess.obtain_request_token()
#url = sess.build_authorize_url(request_token)
#print "url:", url
#print "Please visit this website and press the 'Allow' button.
#raw_input()

access_token = "xxxxxxxxxxxxxxx"
access_token_secret= "xxxxxxxxxxxxxxx"

sess.set_token(access_token, access_token_secret)

client = client.DropboxClient(sess)
print "linked account:", client.account_info()

#create the file if it doesn't exist
#f = open('file.txt', "w")
#f.close()

#open it for reading only...
f = open('file.txt')
# put the file to the app_folder in dropbox
response = client.put_file('/'+name+'.txt', f)
# this is the response passed back to PHP for debugging.
print "uploaded:", response

The file above is a bit of a mess, but the idea is simple: take an argument as the command, authenticate with Dropbox, and put a file in Dropbox with that name. I've tried a few different ways to do this, including a Dropbox PHP class or two... The Python script turned out to be much easier for me - perhaps you have had better luck?
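You can sanity-check the script from a shell before wiring it up to Twilio. With valid keys, it prints the account info and the upload response, and a test.txt (the name is whatever argument you pass) should appear in the app folder:

python addfile.py test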

So now, with all that lovely code above, when I send a text message to my Twilio number, the PHP file takes the SMS message as a command and launches the Python Dropbox script, which puts a file with that command name in my folder. The last part is an AutoHotkey script that I use to monitor the app_folder (it's actually sitting in the app folder for simplicity). Here is that file:

#Persistent
SetTimer, check_file, 1000
return

check_file:
IfExist, command.txt
{
    FileMove, command.txt, %A_ScriptDir%\processed\command%A_Now%.txt
    Run, myprogram.py command
}
return

This script checks my folder every second for a file called "command.txt"; if it finds one, it moves the file to a processed folder with a timestamp and runs a script. It's not perfect, as it requires a separate "look" for each command that you want to run, but it was perfect for my needs.
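For what it's worth, the same polling idea is easy to sketch in Python if you'd rather not run AutoHotkey; this is just an illustration of the loop, not what I actually run:

import os, shutil, time
from subprocess import call
from datetime import datetime

# Make sure the processed folder exists before we start moving files into it.
if not os.path.isdir('processed'):
    os.makedirs('processed')

while True:
    if os.path.exists('command.txt'):
        # Timestamp the file and move it aside, then run the handler script.
        stamp = datetime.now().strftime('%Y%m%d%H%M%S')
        shutil.move('command.txt', os.path.join('processed', 'command%s.txt' % stamp))
        call(['python', 'myprogram.py', 'command'])
    time.sleep(1)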

So that's my system. It's not pretty and it has a few more steps than I'd like for efficiency and safety, but it does work. Fast. In fact, a text message can trigger a program on my remote machine within 10 seconds. That is not bad...

Let me know if you've tried something similar or have suggestions on improvements. I'd love to hear it.

-ab