Python and SAP: Part 2 - Getting data from SAP

So now you're connected to SAP (see part 1 here), but what do you do with it?

Well, as the name implies, PyRFC allows you to perform Remote Function Calls from Python. In theory, any valid Function Module in SAP should work. Use Transaction SE37 to view your function module and check the inputs. 

In my case, I wanted to read data from SAP. Therefore, the function module that interested me was RFC_READ_TABLE. In SE37, you can see the inputs for this function module (but I've done the work for you below).
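To give a sense of what such a call looks like before we wrap it in a class, here is a bare-bones, illustrative example (it assumes conn is the pyrfc Connection object from part 1, and MEAN is just an example table):

# A minimal, illustrative call - conn is an existing pyrfc Connection
result = conn.call("RFC_READ_TABLE", QUERY_TABLE="MEAN", ROWCOUNT=5)
print(result["FIELDS"])  # descriptions of the table's columns
print(result["DATA"])    # the first five rows, each as a single string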

Also, worth noting: You can do a search and see the general fear this solution produces. People get quite alarmist about anything that "queries" SAP. However, let's remember that there are serious controls that you can put in place (see this link). With a good basis team, calling functions and reading data from SAP can be locked down tight. It's clearly a concern, but completely manageable. 

On to the fun part! Let's create a little class for getting data out of SAP. We're going to use a class, rather than a script, so that we can build on it in the next lesson.

The connection class

First, we need to initialize our connection again. 

from pyrfc import Connection
import re

class main():
    def __init__(self):
        ASHOST='my.server.name'
        CLIENT='000'
        SYSNR='00'
        USER='mouser'
        PASSWD='mypassword'
        self.conn = Connection(ashost=ASHOST, sysnr=SYSNR, client=CLIENT, user=USER, passwd=PASSWD)

Ok - pretty simple. When we initialize the class, we connect to SAP. You could pass the login parameters to the class instead of hard-coding them - it might even be a good idea.
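For example, a quick sketch of that idea - the same keyword arguments, just supplied by the caller instead of hard-coded:

from pyrfc import Connection

class main():
    def __init__(self, ashost, sysnr, client, user, passwd):
        # Same connection as before, but the login details come from the caller
        self.conn = Connection(ashost=ashost, sysnr=sysnr, client=client,
                               user=user, passwd=passwd)

# s = main('my.server.name', '00', '000', 'myuser', 'mypassword')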

Now we want to create a function to call RFC_READ_TABLE. To do this, we need to pass:

  • Fields: the fields we want to return
  • SQLTable: The table we want to query
  • Where: Any WHERE conditions
  • MaxRows: The maximum number of rows to return
  • FromRow: The starting row of the result set to use

Therefore, our query function should look like this:


    def qry(self, Fields, SQLTable, Where='', MaxRows=50, FromRow=0):
        """A function to query SAP with RFC_READ_TABLE"""

        # By default, if you send a blank value for fields, you get all of them
        # Therefore, we add a select-all option, to better mimic SQL.
        if Fields[0] == '*':
            Fields = []  # an empty FIELDS table tells RFC_READ_TABLE to return every field
        else:
            Fields = [{'FIELDNAME': x} for x in Fields]  # notice the format

        # the WHERE part of the query is called "OPTIONS"
        # Where should be a list of condition strings, e.g. ['MATNR <> 0']
        options = [{'TEXT': x} for x in Where]  # again, notice the format

        # we set a maximum number of rows to return, because it's easy to do and
        # greatly speeds up testing queries.
        rowcount = MaxRows

        # Here is the call to SAP's RFC_READ_TABLE
        tables = self.conn.call("RFC_READ_TABLE", QUERY_TABLE=SQLTable, DELIMITER='|',
                                FIELDS=Fields, OPTIONS=options, ROWCOUNT=rowcount,
                                ROWSKIPS=FromRow)

        # We split out fields and fields_name to hold the data and the column names
        fields = []
        fields_name = []

        data_fields = tables["DATA"]  # the rows of the result set
        data_names = tables["FIELDS"]  # the field descriptions of the result set

        headers = [x['FIELDNAME'] for x in data_names]  # the column names
        long_fields = len(data_fields)  # the number of rows returned
        long_names = len(data_names)  # the number of columns, if you want it

        # now parse each row of data into a list
        for line in range(0, long_fields):
            fields.append(data_fields[line]["WA"].strip())

        # for each row, split the string on the '|' delimiter
        fields = [x.strip().split('|') for x in fields]

        # return the 2D list and the headers
        return fields, headers

Easy, right? The comments should give you an idea of what is happening in each step. The result from SAP comes back as text, so much of the function is just parsing the data.
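To make the parsing steps easier to follow, here is roughly the shape of what RFC_READ_TABLE hands back (the values below are placeholders, not real data):

# Roughly the structure of the dictionary returned by the call above
# (placeholder values only - real FIELDS entries also carry offsets,
# lengths and types):
#
# tables = {
#     'FIELDS': [{'FIELDNAME': 'MATNR', ...}, {'FIELDNAME': 'EAN11', ...}],
#     'DATA':   [{'WA': 'MATNR_VALUE_1|EAN_VALUE_1'},
#                {'WA': 'MATNR_VALUE_2|EAN_VALUE_2'}]
# }
#
# Each row comes back as one pipe-delimited string in 'WA', which is why
# the function splits on '|' to build the 2D list it returns.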

So if you wanted to use your new class, you could do something like this:

# Init the class and connect
# I find this can be very slow to do... 
s = main() 

# Choose your fields and table
fields = ['MATNR', 'EAN11']
table = 'MEAN'
# you need to put a where condition in there... could be anything
where = ['MATNR <> 0']

# max number of rows to return
maxrows = 10

# starting row to return
fromrow = 0

# query SAP
results, headers = s.qry(fields, table, where, maxrows, fromrow)

print(headers)
print(results)

You can download the full class from here. I suggest you modify it and learn what each portion does before you do any testing on your own system. I've left out error checking and assertions for simplicity.
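If you do want to add some basic error checking, a minimal sketch might look like the following. RFCError is pyrfc's base exception class; the more specific subclasses (logon, communication, ABAP errors) vary a little between versions, so check your pyrfc documentation.

from pyrfc import RFCError

s = main()
try:
    results, headers = s.qry(['MATNR', 'EAN11'], 'MEAN', ['MATNR <> 0'], 10, 0)
except RFCError as e:
    # Catches anything pyrfc raises for the call itself (bad table name,
    # authorization failures, connection problems, and so on).
    print('RFC call failed: %s' % e)
else:
    print(headers)
    print(results)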

That's it for now. In the next segment, I'll show you how to write standard SQL and have it converted to the RFC format. In the meantime, ping me in the comments if you have questions.

-ab

Using a Redirect Manager is like Hosting a Guestbook for Despicable People

I maintain several websites for my work at shift8, and I recently put in place a piece of software to record redirects on my site. Each time someone typed in a URL like "http://shift8solutions.com/[REQUEST]", my redirect manager would record the attempt and the IP of the user. The goal was to find broken links and maybe, just maybe, understand whether people expected a page to exist that I hadn't created.

How naive am I?

The true result was a log of all the bungled attempts to gain entry to my site. This made me happy because 1) I love data, 2) they did not succeed, and 3) I learned a ton. Here is a breakdown of my wannabe hackers by country (top ten offenders):

[Image: breakdown of attempts by country, top ten offenders]

In general, the goal seemed to be direct access to phpMyAdmin and various RSS feeds (that do not exist). There were also some very specific references to products that I do not own - most likely probes for security issues in other software platforms. I recognized elements from WordPress, Drupal, and Joomla. Mostly it was fishing rather than real hacking, and it wasn't very sophisticated. I did learn a bit about what outsiders can see of my software choices.

For fun, using Google tables, I made a heat map of the above data by country - notice that most of these attempts were made from the US, but Argentina and most of Europe had activity as well.

[Image: heat map of attempts by country]

It's bizarre to look through these logs - there is far more information tracked, but it wouldn't be nice to post it. I guess I should say thanks to these guys - it made for an interesting bit of research and my site is stronger for it.

Thanks goons!

~ab

Network Diagrams and Python Web Crawlers

[Image: network diagram rendered from roughly 27,000 crawled links] I'm fascinated by networks and data visualization. I've always wanted to try my hand at making some of the inspiring images I see on blogs like Flowing Data. This network diagram is my first amateur attempt.

The code

I started by writing a rather simple web crawler in Python. The logic for the bot was:

1. Open a page

2. Create a list of all the links on that page (capture the total number of links)

3. For each link, create a new bot to follow the link and start the whole process again.

This was a great chance to use Python's threading module. I am not an expert in threading or multiprocessing, but threading allowed me to create a new bot for each link I wanted to follow.

Here is the code for my spider class:

'''
Created on Jun 13, 2012

@author: Alex Baker
'''

# imports
import urllib2, BeautifulSoup
from threading import Thread

class spider1():

    def scan(self, url, mem, f):
        # Get the url
        usock = urllib2.urlopen(url)
        # Your current URL is now your "old" url and
        # all the new ones come from the page
        old_url = url
        # Read the data to a variable
        data = usock.read()
        usock.close()
        # Create a Beautiful Soup object to parse the contents
        soup = BeautifulSoup.BeautifulSoup(data)
        # Get the title
        title = soup.title.string
        # Get the total number of links
        count = len(soup.findAll('a'))
        # For each link, create a new bot and follow it.
        for link in soup.findAll('a'):
            try:
                # Clean up the url
                url = link.get('href').strip()
                # Avoid some types of link: anchors (#), relative paths (/),
                # empty hrefs, query strings (?) and javascript (j)
                if url[:1] in ['#', '/', '', '?', 'j']:
                    continue
                # Also, avoid following the same link
                elif url == old_url:
                    continue
                else:
                    # Get the domain - not interested in other links
                    url_domain = url.split('/')[2]
                    # Build a domain link for our bot to follow
                    url = "http://%s/" % (url_domain)
                    # Make sure that you have not gone to this domain already
                    if self.check_mem(url, mem) == 0:
                        try:
                            # Create your string to write to file
                            text = "%s,%s,%s\n" % (old_url, url, count)
                            # Write to your file object
                            f.write(text)
                            print text
                            # Add the domain to the "memory" to avoid it going forward
                            mem.append(url)
                            # Spawn a new bot to follow the link
                            spawn = spider1()
                            # Set it loose!
                            Thread(target=spawn.scan, args=(url, mem, f)).start()
                        except Exception, errtxt:
                            # For threading errors, print the error.
                            print errtxt
                        except:
                            # For any other type of error, give the url.
                            print 'error with url %s' % (url)
            except:
                # Just keep going - avoids allowing the thread to end in error.
                continue

    def check_mem(self, url, mem):
        # Quick function to check in the "memory" if the domain has already been visited.
        try:
            mem.index(url)
            return 1
        except:
            return 0

As you can see, the code is simplistic - it only considers the domain/sub-domain rather than each individual link. Also, because it checks to make sure that no domain is used twice, each domain appears in the output only once and the crawl keeps fanning out to new sites instead of looping back on itself.
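If you wanted the domain handling to be a bit more robust than url.split('/')[2], Python 2's urlparse module is one option. This is just a small sketch - the get_domain() helper is not part of the original spider:

from urlparse import urlparse

def get_domain(url):
    # urlparse copes with schemes, ports and odd paths more gracefully
    # than splitting the raw string on '/'
    parsed = urlparse(url)
    if parsed.scheme not in ('http', 'https') or not parsed.netloc:
        return None  # skip mailto:, javascript:, relative links, etc.
    return "http://%s/" % parsed.netloc

print get_domain('https://example.com/some/page?x=1')  # prints http://example.com/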

To run the class, I used something like this:

mem = []
f = open('output.txt', 'w')
url = 'http://justanasterisk.com'  # write the url here
s = spider1()
s.scan(url, mem, f)

Once started, it doesn't stop - so kill it after a while (or build that in; one way to do so is sketched below). Running this on my MacBook, I recorded 27,000 links in about 10 minutes.
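If you did want to build a stopping condition in, one possibility is a shared, thread-safe counter with a cap. The MAX_LINKS value and the record() helper below are purely illustrative additions - they are not part of the original class. You would call record(f, text) inside scan() in place of f.write(text) and skip spawning a new Thread whenever it returns False.

import threading

MAX_LINKS = 1000  # illustrative cap on how many links to record
_lock = threading.Lock()
_count = {'links': 0}

def record(f, text):
    # Write one line to the output file, but stop once MAX_LINKS is reached.
    # Returns False when the cap is hit so the caller can stop spawning bots.
    with _lock:
        if _count['links'] >= MAX_LINKS:
            return False
        f.write(text)
        _count['links'] += 1
        return True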

The data

The number of data points is small in comparison to some of the sets I've explored using BigQuery or Amazon SimpleDB. However, I wanted to make a visualization, and I realized that the number of pixels would really define how many data points were useful. I figured that 10 minutes of crawling would give me the structure I wanted. I used my blog justanasterisk.com as the starting point. I won't attach the data (you can create that yourself), but suffice it to say that each line was:

source, destination, # of links on source page

The visualization

Here is where I was out of my element. I browsed a few different tools and the best (read: easiest) solution for my needs was Cytoscape. It is simple to use and has several presets included to make you feel like you've done some serious analysis. For the image above, I used one of the built-in layouts (modified slightly) and a custom visual style.

[Images: Cytoscape screenshots showing the chosen layout and custom visual style settings]

I won't underwhelm you with further details, but shoot me an email if you want more. I'll probably add a few more images to this post when I get them rendered.

Best,

~ab