Saturday, December 13, 2014

Say, what are you doing this afternoon?

Bus Stop - Under The Rain by Leonid Afremov
Here's a happy little afternoon project for new users who want to play with a new
language or script and get their feet wet in the open-source ecosystem.

Your phone's handy weather app depends upon the goodwill of a for-profit data provider and their often-opaque API (14 degrees? Where was it observed? When?). That's a shame, because most of that data collection is paid for by you, the taxpayer.

Let's take the profit-from-data out of that system. Several projects have tried to do this before (including libgweather), but each tried to do too much and replicate the one-data-provider-to-rule-them-all model. And most ran aground on that complexity.


Here's where you come in

One afternoon, look up your weather service's online data sources. And knock together a script to publish them in a uniform format.

Here's the one I did for the United States:
Worldwide METAR observation sites
US DOD and NOAA weather radars
US Forecast/Alert zones

  • I'm looking for data on non-METAR (non-airport) observation stations, weather radar sites, and whatever forecast and alert areas your country uses.

  • Use the same format I did: Lat (deg.decimal), Lon (deg.decimal), Location Code, Long Name. Use the original source's data, even if it's wrong. Areas and zones should use the lat/lon of their centroid.

  • The format is simple CSV, easy to parse and publish (see the sketch after this list).

  • Publish on GitHub, for easy version control, permalinking, free storage, and uptime.

  • Here's the key: Your data must be automatically updated. Your program must regularly check the original source and refresh your published version (here's how I did it with a cron job). Publish both the data and your method on GitHub.

  • When you have published, drop me an e-mail so I can link to your data and source.
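
To make the idea concrete, here is a minimal sketch of such a publishing script. The source URL and its field names are hypothetical placeholders; only the output format (lat, lon, location code, long name) follows the convention above.

#!/usr/bin/env python3
# Sketch: fetch a (hypothetical) national station list and republish it
# as lat,lon,code,name CSV. Adapt the URL and field mapping to your
# weather service's real data source.

import csv
import io
import urllib.request

SOURCE_URL  = "https://weather.example.gov/stations.csv"   # placeholder
OUTPUT_FILE = "stations.csv"

def fetch_source(url):
    """Download the source table and return it as a list of dict rows."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

def publish(rows):
    """Write rows in the shared format: lat, lon, location code, long name."""
    with open(OUTPUT_FILE, "w", newline="") as outfile:
        writer = csv.writer(outfile)
        for row in rows:
            # The source column names here are assumptions, not a real schema.
            writer.writerow([row["latitude"], row["longitude"],
                             row["station_id"], row["station_name"]])

if __name__ == "__main__":
    publish(fetch_source(SOURCE_URL))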

If you do it right, it takes one afternoon to set up your country's self-updating database. Not a bad little project: you learn a little, and you help make the world a better place.


My country doesn't have online weather data

Sorry to hear that. You're missing some great stuff.

If you live in a country with a reasonably free press and reasonably fair elections, make a stink about it. You are probably already paying for it through taxes; why can't you have it?

If you live somewhere else, then next time you have a revolution or coup, add 'open data' to the long list of needed reforms.


What will this accomplish?

This will create a free, sustainably updated, uniform, crowdsourced set of accurate worldwide data that will be easy to compile into a single global database. If you drop out, your online code will ensure another volunteer can step in.

This is one fundamental tool that other free-weather projects have lacked. And any weather project can use this.

The global database of locations is really small by most database standards. Small enough to easily fit on a phone. Small enough to be bundled with apps that can request data directly from original sources...once they can look up the correct source to use.


How will this change the world?

It's about simple tools that make it easy to create free, cool software. And it's about ensuring free access to data you already paid for.

Not bad for one afternoon's contribution.

Friday, December 12, 2014

How to set up automatic GitHub uploads

Back to basics

For years, I have been playing with US Weather Data.

A couple of years ago, I came up with a lookup server for US weather locations and, of course, had all kinds of plans to expand it to more services and more countries.

So, of course, it went nowhere.

This week, I took a look at the issue again, and came up with a simple set of lookup tables (yes, a tiny database) to replace the server. It reduced the amount of code by about 70%, and it's finally in a maintainable, expandable form. It's simple, and ready for a few drive-by patches.

Which leads to the next question...how to share it with the world?

Well, in this decade, sharing software and data means putting it on GitHub.


Creating the GitHub repositories

We have a Python 3 script that creates the database, another Python 3 example script that consumes the database, and the three CSV files that make up the database tables. Most consumers will probably just want the data for their own purposes; they really don't need the scripts (and it would be quite rude for millions of users to hit NOAA servers to produce their own databases each week).

So I created two repositories on GitHub, one for the data and another for the optional scripts.

If you haven't used GitHub, you first go online and create your account and create the repositories (with initial README file) in a web browser. Then:

cd /tmp
git clone https://github.com/ian-weisser/data   # Create a clone of the online repository
cd data
cp /home/somewhere/else/data/* ./               # Add existing data files into the local clone
git add ./*                                     # Tell git to track the new data files
git commit -m "Add initial data"
git push origin master                          # Push the changes back to GitHub
  Username for 'https://github.com':
  Password for 'https://me@github.com': 
cd /tmp 
rm -rf /tmp/data                                # Clean up

Overall, it's pretty simple. For the next iteration:
  • We must automate the upload credentials so it doesn't prompt for a username and password each time.
  • We must add a cron job that creates the data for upload.
  • Let's move this project from /tmp on my laptop to the server where it will live permanently. This also means that the server should pull the latest Python script...which is also hosted on GitHub.

Let's set up the server:


sudo apt-get install git    # On this server, about 3 MB of download
sudo apt-get install python3-httplib2  # Script dependency
mkdir uploads
cd uploads
git clone https://github.com/ian-weisser/data          # Setup the local 'data' repository
git clone https://github.com/ian-weisser/weather       # Setup the local 'software' repository
weather/nws_database_creator.py                        # Use the script to update CSVs in ~/uploads/data
cd data
git config --global user.email "my_email@example.com"  # First time setup
git config --global user.name "My Name"                # First time setup
git config credential.helper store                     # First time using this repo setup
git add -A
git commit -m "Test update"
git push origin master
  Username for 'https://github.com': my-github-login-name   # First time using this repo setup
  Password for 'https://my-github-login-name@github.com':   # First time using this repo setup


Automating the process

Subsequently, without all the first-time setup, the weekly update script looks like this:

#!/bin/sh

dir=/home/ian/uploads
hub="https://github.com/ian-weisser"
now=$(/bin/date +%F)

cd $dir/weather
/usr/bin/git pull "$hub"/weather      # Update the database_creator script

cd $dir/data
/usr/bin/git pull "$hub"/data         # Need to pull to be able to push later

$dir/weather/nws_database_creator.py  # Do the update
/usr/bin/git add -A                   # Record any changes
/usr/bin/git commit -m "Scheduled update $now"
/usr/bin/git push origin master
exit 0


And now we can run the shell script weekly (at 06:05 every Sunday, in this case) from a user cron job:

5 6 * * 7 /home/ian/uploads/cron/weekly_weather > /home/ian/uploads/weather.log 2>&1


A couple of tests, and it works.

It's completely automatic.

Once a week, my server downloads the latest make-a-database script, builds the database, and uploads the database to GitHub for the world to use.


The data


If you want to see the final product (the lat/lon lookup tables for US weather), you can get them at:
https://raw.githubusercontent.com/ian-weisser/data/master/metar.csv
https://raw.githubusercontent.com/ian-weisser/data/master/radar.csv
https://raw.githubusercontent.com/ian-weisser/data/master/zone.csv

They will, of course, be automatically updated weekly.
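
If you just want to consume these tables in your own program, a short Python 3 sketch will do. It assumes the column order described above (lat, lon, location code, long name); check the actual file before relying on it.

#!/usr/bin/env python3
# Sketch: find the METAR station nearest a point using the published metar.csv.

import csv
import io
import math
import urllib.request

METAR_URL = "https://raw.githubusercontent.com/ian-weisser/data/master/metar.csv"

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_station(lat, lon):
    """Return (code, name, distance in km) of the closest listed station."""
    with urllib.request.urlopen(METAR_URL) as response:
        text = response.read().decode("utf-8")
    best = None
    for row in csv.reader(io.StringIO(text)):
        try:
            slat, slon = float(row[0]), float(row[1])
        except (ValueError, IndexError):
            continue                      # skip headers or malformed lines
        d = distance_km(lat, lon, slat, slon)
        if best is None or d < best[2]:
            best = (row[2], row[3], d)
    return best

if __name__ == "__main__":
    print(nearest_station(43.04, -87.91))  # somewhere near Milwaukee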

Tuesday, July 22, 2014

Etherape as root

This has popped up as an occasional question in the forums.
If you monitor network activity using Etherape, it's worthless as a user-level application; it must be run as root. But running it as root requires opening a terminal and entering a password.

Here's how to change the settings in Ubuntu 14.04 to create an Etherape (as root) launcher in Unity that won't require a password. It has only been tested with 14.04. Unity development moves fast - this method may not work for a different version of Ubuntu.

It requires a minor edit to the /etc/sudoers file, and minor edits to two .desktop files. It's easy to undo if you change your mind.


1) I like to document my changes. I have three changes to keep track of, in case I change my mind, so I'm adding a new subdirectory to hold the documentation for this change.

mkdir my_customizations/etherape_launcher




2) Edit the sudoers file. This is the dangerous part. Pay close attention.

You need to know your login account name (fred, jane, dog, etc.).
You need to know your hostname, as shown in /etc/hostname.

$ sudo visudo


At the bottom of the file, add one line. Spacing, spelling, and CAPS are important.

account_name hostname = NOPASSWD: /usr/bin/etherape


Use your account_name and hostname, of course.

Do a CTRL+X to exit, and hit Y to confirm the exit.
Now look carefully at the screen.

If visudo is happy with your changes, it will print no output, but simply return you to a command prompt.

If the screen shows an error, type 'edit' and go right back in to fix the error. If you don't know how to fix the error, delete your added line and save. You MUST NOT exit with a broken sudoers file - a broken sudoers file is 100% guaranteed to break your system! It's very important that you pay close attention to any error messages visudo prints when you try to exit.



3) Test: Open a NEW terminal window, and try 'sudo etherape'.
It should no longer ask for a password.

If you get sudo errors instead, you have broken your system! Stop! Save all your data! Reboot into a recovery console (since you cannot use sudo) and edit /etc/sudoers to undo your changes. Go to an Ubuntu help channel and ask for help on how to do this!



4) Now that we know the sudo change works, move the sudoers line to a separate file

We used visudo to ensure the correct syntax and a working sudoers file. But it's not a great idea to leave undocumented system changes laying around in system-installed (and system-rewritable) files. So let's move the customization to a separate file so we can keep track of it.

$ sudo nano /etc/sudoers.d/etherape

File contents:

# Allows 'sudo etherape' to be run, including from a launcher, without a password

my_account my_hostname = NOPASSWD: /usr/bin/etherape


Remember to change my_account and my_hostname to the proper values!

Test your change using 'sudo etherape'. It should work without a password, and without an error message.

Go back into visudo and delete the etherape line at the bottom.

Test your change once more using 'sudo etherape'. It should work without a password, and without an error message.


Now /etc/sudoers is back to its original state, and our change is safely in a separate file. If we want to delete the change, we simply delete /etc/sudoers.d/etherape.



5) Let's link this new sudoers file to our documentation so we can find it again.

$ sudo ln /etc/sudoers.d/etherape \
          my_customizations/etherape_launcher/




6) Edit the etherape-root .desktop file

$ sudo nano /usr/share/applications/etherape-root.desktop


File contents:

[Desktop Entry]
Name=EtherApe (as root)
Type=Application
Categories=GNOME;System;Network;
Comment=Graphical Network Monitor
Comment[es]=Monitor Gráfico de Red
#TryExec=su-to-root                            <-- Comment out
#Exec=su-to-root -X -c /usr/bin/etherape        <-- Comment out
Exec=/usr/bin/sudo /usr/bin/etherape -i wlan0     <-- Add line
Icon=etherape
Terminal=false




7) Edit the etherape (user) .desktop file to hide it. We don't want to delete it; the package manager will expect it to be there.

$ sudo nano /usr/share/applications/etherape.desktop


#[Desktop Entry]           <-- Comment out
Version=1.0
Encoding=UTF-8
Name=EtherApe
GenericName=EtherApe
Comment=Graphical Network Monitor
Comment[es]=Monitor Gráfico de Red
Exec=etherape
Terminal=false
Type=Application
Icon=etherape
Categories=GNOME;System;Network;




8) Let's save links to the changed desktop files so I can find them again someday:

$ sudo ln /usr/share/applications/etherape-root.desktop \
          my_customizations/etherape_launcher/
$ sudo ln /usr/share/applications/etherape.desktop \
          my_customizations/etherape_launcher/




9) Test the new .desktop file.

Open Unity's Dash.
Close any existing search.
Open a new search for EtherApe. There should be only one option now: EtherApe (as root).
Open it. It should show all interfaces without asking for a password or raising an error.
Lock it to the Launcher (right-click).
Close EtherApe. The launcher icon should remain.
Open it again from the launcher. It should work.

How to recover from a broken sudo

You cannot fix a broken sudoers file while logged in. You would need access to sudo for that...and it's broken. You can't get to the root account because it's locked in normal use.

So you need to reboot into a condition where root is unlocked and available.

1) Save all your work and then reboot.

2) Hold down the left SHIFT key to get to the GRUB menu.
If you get a pretty splash screen instead, reboot and hold down the key earlier.

3) At the Grub prompt, select Advanced Options

4) At the Advanced Options prompt, select any Recovery Console

5) Watch the boot technobabble run across your screen.

6) At the Recovery Console prompt, select Drop To A Root Shell

7) You should see a warning about a read-only filesystem.
The prompt should look like '#'

8) Remount the root filesystem as read-write instead of read-only. We want to make changes that stick.

# mount -o remount,rw /


9) Try a test write. It should not cause an error:

# date > /tmp/test


10) Okay, now go into visudo and fix your mistake.

11) When complete, reboot into normal Ubuntu.

# shutdown -r now

Tuesday, July 8, 2014

Simple geolocation in Ubuntu 14.04

Geolocation means 'figuring out where a spot on the Earth is'.

Usually, it's the even more limited question 'where am I?'


GeoClue


The default install of Ubuntu includes GeoClue, a dbus service that checks IP address and GPS data. Since 2012, when I last looked at GeoClue, it's changed a bit, and it has more backends available in the Ubuntu Repositories.

 

Privacy

Some commenters on the interwebs have claimed that GeoClue is privacy-intrusive. It's not. It merely tries to figure out your location, which can be handy for various services on your system. It doesn't share or send your location to anybody else.

dbus introspection and d-feet

You would expect that a dbus application like GeoClue would be visible using a dbus introspection tool like d-feet (provided by the d-feet package).

But there's a small twist: d-feet can only see dbus applications that are currently running, whether they are actively doing something or simply idling as daemons.

It's possible (and indeed preferable in many circumstances) to write a dbus application that is not a daemon - it starts at first connection, terminates when complete, and restarts at the next connection. D-feet cannot see these when they are not running.

Back in 2012, GeoClue was an always-on daemon, and always visible to d-feet.
But in 2014 GeoClue is (properly) no longer a daemon, and d-feet won't see GeoClue if it's not active.

This simply means we must trigger a connection to GeoClue to make it visible.
Below are two ways to do so: The geoclue-test-gui application, and a Python3 example.




geoclue-test-gui


One easy way to see GeoClue in action, and to make it visible to d-feet, is to use the geoclue-test-gui application (included in the geoclue-examples package).

$ sudo apt-get install geoclue-examples
$ geoclue-test-gui





GeoClue Python3 example


Once GeoClue is visible in d-feet (look in the 'session' tab), you can see the interfaces and try them out.

Here's an example of the GetAddress() and GetLocation() methods using Python3:

>>> import dbus

>>> dest           = "org.freedesktop.Geoclue.Master"
>>> path           = "/org/freedesktop/Geoclue/Master/client0"
>>> addr_interface = "org.freedesktop.Geoclue.Address"
>>> posn_interface = "org.freedesktop.Geoclue.Position"

>>> bus        = dbus.SessionBus()
>>> obj        = bus.get_object(dest, path)
>>> addr_iface = dbus.Interface(obj, addr_interface)
>>> posn_iface = dbus.Interface(obj, posn_interface)

>>> addr_iface.GetAddress()
(dbus.Int32(1404823176),          # Timestamp
 dbus.Dictionary({
     dbus.String('locality')   : dbus.String('Milwaukee'),
     dbus.String('country')    : dbus.String('United States'),
     dbus.String('countrycode'): dbus.String('US'),
     dbus.String('region')     : dbus.String('Wisconsin'), 
     dbus.String('timezone')   : dbus.String('America/Chicago')}, 
     signature=dbus.Signature('ss')),
 dbus.Struct(                 # Accuracy
     (dbus.Int32(3),
      dbus.Double(0.0),
      dbus.Double(0.0)),
      signature=None)
)

>>> posn_iface.GetPosition()
(dbus.Int32(3),               # Num of fields
 dbus.Int32(1404823176),      # Timestamp
 dbus.Double(43.0389),        # Latitude
 dbus.Double(-87.9065),       # Longitude
 dbus.Double(0.0),            # Altitude
 dbus.Struct((dbus.Int32(3),  # Accuracy
              dbus.Double(0.0),
              dbus.Double(0.0)),
              signature=None))

>>> addr_dict = addr_iface.GetAddress()[1]
>>> str(addr_dict['locality'])
'Milwaukee'

>>> posn_iface.GetPosition()[2]
dbus.Double(43.0389)
>>> posn_iface.GetPosition()[3]
dbus.Double(-87.9065)
>>> lat = float(posn_iface.GetPosition()[2])
>>> lon = float(posn_iface.GetPosition()[3])
>>> lat,lon
(43.0389, -87.9065)

Note: Geoclue's accuracy codes



Ubuntu GeoIP Service


When you run geoclue-test-gui, you discover that only one backend service is installed with the default install of Ubuntu - the Ubuntu GeoIP service.

The Ubuntu GeoIP service is provided by the geoclue-ubuntu-geoip package, and is included with the default install of Ubuntu 14.04. It simply queries an ubuntu.com server, and parses the XML response.

You can do it yourself, too:

$ wget -q -O - http://geoip.ubuntu.com/lookup

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Ip>76.142.123.22</Ip>
  <Status>OK</Status>
  <CountryCode>US</CountryCode>
  <CountryCode3>USA</CountryCode3>
  <CountryName>United States</CountryName>
  <RegionCode>WI</RegionCode>
  <RegionName>Wisconsin</RegionName>
  <City>Milwaukee</City>
  <ZipPostalCode></ZipPostalCode>
  <Latitude>43.0389</Latitude>
  <Longitude>-87.9065</Longitude>
  <AreaCode>414</AreaCode>
  <TimeZone>America/Chicago</TimeZone>
</Response>
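
The same lookup is easy to do from a program. Here's a minimal Python 3 sketch that fetches the XML and pulls out a few fields; the tag names are taken from the sample response above.

#!/usr/bin/env python3
# Sketch: query the Ubuntu GeoIP service and extract a few fields.

import urllib.request
import xml.etree.ElementTree as ET

def geoip_lookup():
    with urllib.request.urlopen("http://geoip.ubuntu.com/lookup") as response:
        root = ET.fromstring(response.read())
    # Tag names match the sample response shown above.
    return {"city":      root.findtext("City"),
            "region":    root.findtext("RegionName"),
            "country":   root.findtext("CountryName"),
            "latitude":  float(root.findtext("Latitude")),
            "longitude": float(root.findtext("Longitude")),
            "timezone":  root.findtext("TimeZone")}

if __name__ == "__main__":
    print(geoip_lookup())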




GeoIP


The default install of Ubuntu 14.04 also includes the (confusingly-named) GeoIP. While it has the prefix 'Geo', it's not a geolocator, and it's completely unrelated to the Ubuntu GeoIP service. Instead, GeoIP is a database of the IP addresses assigned to each country, provided by the geoip-database package. Knowing the country of origin of a packet or server or connection can be handy.

geoip-database has many bindings, including Python 2.7 (but sadly not Python 3). The easiest way to use it is the command line, provided by the additional geoip-bin package.

$ sudo apt-get install geoip-bin
$ geoiplookup 76.45.203.45
GeoIP Country Edition: US, United States




GeocodeGlib


Back in 2012, I compared the two methods of geolocation in Ubuntu: GeoClue and GeocodeGlib. GeocodeGlib was originally intended as a smaller, easier-to-maintain replacement for GeoClue. But as we have already seen, GeoClue has thrived instead of withering; the only two packages that seem to require GeocodeGlib in 14.04 are gnome-core-devel and gnome-clocks.
GeocodeGlib, provided by the libgeocode-glib0 package, is no longer included with a default Ubuntu installation, but it is easily available in the Software Center.

sudo apt-get install gir1.2-geocodeglib-1.0


That is the GObject introspection package for GeocodeGlib, and it pulls in libgeocode-glib0 as a dependency. The introspection package is necessary to use the library from Python.

Useful documentation and code examples are non-existent. My Python code sample from 2012 no longer works. It's easy to create a GeocodeGlib.Place() object and to assign various values to it (town name, postal code, state), but I can't figure out how to get GeocodeGlib to automatically determine and fill in the other properties. So even though it seems maintained, I'm not recommending it as a useful geolocation service.

Friday, June 13, 2014

Intermediate GTFS: How to prevent stop_times.txt from hogging all your RAM

In a GTFS file for a large provider (example: Chicago), the stop_times.txt file can be huge.

Here's an example:

$ unzip -v 20140611.cta.gtfs

 Archive:  20140611.cta.gtfs
  Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
 --------  ------  ------- ---- ---------- ----- --------  ----
      235  Defl:N      153  35% 2014-06-10 22:49 6dfdcbd7  agency.txt
     4889  Defl:N      554  89% 2014-06-10 22:49 38544274  calendar.txt
     2139  Defl:N      335  84% 2014-06-10 22:49 f69f2cb4  calendar_dates.txt
    13000  Defl:N     2279  83% 2014-06-10 22:49 59bf453e  routes.txt
 29896363  Defl:N  7158190  76% 2014-06-10 22:54 21c7d003  shapes.txt
  1332298  Defl:N   264714  80% 2014-06-10 23:00 48a736df  stops.txt
355900629  Defl:N 57437701  84% 2014-06-10 23:24 e13831ff  stop_times.txt
     2514  Defl:N      575  77% 2014-06-10 23:24 51dca8cf  transfers.txt
  5654714  Defl:N   656156  88% 2014-06-10 23:25 7056aa15  trips.txt
       42  Defl:N       39   7% 2014-06-10 22:49 87676593  frequencies.txt
    18959  Defl:N     6180  67% 2011-06-20 11:07 bcf138d1  developers_license_agreement.htm
 --------          -------  ---                            -------
392825782         65526876  83%                            11 files

Uncompressed, the stop_times.txt file is 356 MB, 91% of the entire archive.

You can see right away that loading that behemoth into memory will cause a pause (30 seconds, on my hardware) that a user will probably notice. And the way Python dicts use memory, 356 MB of CSV data can easily explode into 3 or 4 GB of a comparable Python dict.

That leaves us with a problem: we can't load the file on demand because it takes too darn long, and we can't keep it in an easily-accessible form in memory because it's too darn big. While we could keep it in CSV format in RAM, that's not a format we can do anything with; every query would need to iterate through the entire file, which is sort of the worst of both worlds.




So what can we do?


First, we can filter data immediately upon load, and discard big chunks of it right away. This means the structure of the program must change from:

cal = load_table(calendar.txt)
rou = load_table(routes.txt)
tri = load_table(trips.txt)
sto = load_table(stops.txt)
tim = load_table(stop_times.txt)
svcid = valid_service_ids_for_today(cal)
trips = trips_that_stop_there_today(tri, svcid, tim)

to something more like:

cal = load_table(calendar.txt)
rou = load_table(routes.txt)
tri = load_table(trips.txt)
sto = load_table(stops.txt)
tim = load_and_filter_stop_times_table(stop_times.txt, filter_criteria)
svcid = valid_service_ids_for_today(cal)
trips = trips_that_stop_there_today(tri, svcid, tim)

Instead of loading the entire table and holding it in memory, a customized loader filters the data right away and chucks out the 90% we don't actually want.


Second, we can read the file in blocks, using the size hint to readlines(), to prevent a spike in RAM usage (and the consequent slowdown of the entire system).

For example, normally you read data with something like:

with gtfs_file.open('stop_times.txt', mode='r') as infile:
    input = infile.read()
output = do_stuff_to(input.decode())
return output


And we can rearrange it to something like:

eof    = False
output = {}
with gtfs_file.open('stop_times.txt', mode='r') as infile:
    while eof == False:
        input_block  = infile.readlines(4096)
        if len(input_block) == 0:
            eof = True
            continue

        output_block = do_stuff_to(input_block)
        output.update(output_block)

return output

The EOF flag controls the process, and an empty read triggers the flag.
The input RAM requirements are tiny, since each block re-uses the same memory over and over. Only the filtered output grows.

The performance penalty of this kind of loading seems to be about 30% (10 seconds) longer than a single read(), but now the process can run happily in the background without slowing other processes. Filtering the huge file down to a manageable size and searchable format also makes subsequent use of the data _much_ faster.




Working examples

Here's working sample code of a generic load_table(), which I called map_gtfs_table_to_dict()

Here's working sample code of a load_stop_times(), which I called map_stop_times_to_dict()


Let's use the generic sample code to load trips.txt from a CTA GTFS file:

>>> from map_gtfs_table_to_dict import map_gtfs_table_to_dict as load_table
>>> import zipfile
>>>
>>> # You must provide your own GTFS file, of course!
>>> gtfs_path = "/home/ian/gtfs/20140611.cta.gtfs"
>>> gtfs_file = zipfile.ZipFile(gtfs_path, mode='r')
>>> all_trips = load_table(gtfs_file, 'trips.txt')
>>>
>>> len(all_trips)
90428
>>>
>>> for trip_id in list(all_trips)[0:5]:
...      print(trip_id, all_trips[trip_id])
... 
45072053995 {'route_id': 'Y', 'direction_id': '0', 'wheelchair_accessible': '1',
'service_id': '104509', 'direction': '0', 'shape_id': '304500033',
'block_id': '45007747580', 'schd_trip_id': 'R501'}
437076941189 {'route_id': '147', 'direction_id': '1', 'wheelchair_accessible': '1',
'service_id': '43701', 'direction': 'North', 'shape_id': '4374519',
'block_id': '437008159276', 'schd_trip_id': '76941189'}
437076941185 {'route_id': '97', 'direction_id': '1', 'wheelchair_accessible': '1',
'service_id': '43701', 'direction': 'East', 'shape_id': '4374369',
'block_id': '437008158984', 'schd_trip_id': '76941185'}
437076941184 {'route_id': '97', 'direction_id': '1', 'wheelchair_accessible': '1',
'service_id': '43701', 'direction': 'East', 'shape_id': '4374369',
'block_id': '437008159008', 'schd_trip_id': '76941184'}
437076941186 {'route_id': '97', 'direction_id': '1', 'wheelchair_accessible': '1',
'service_id': '43701', 'direction': 'East', 'shape_id': '4374369',
'block_id': '437008158986', 'schd_trip_id': '76941186'}
>>> 

There we are, all 90,000 trips in a few seconds. Each line of the GTFS table broken into a dict for easy access.



Now, let's try using the same generic load function to load stop_times.txt

>>> from map_gtfs_table_to_dict import map_gtfs_table_to_dict as load_table
>>> import zipfile
>>>
>>> # You must provide your own GTFS file, of course!
>>> gtfs_path = "/home/ian/gtfs/20140611.cta.gtfs"
>>> gtfs_file = zipfile.ZipFile(gtfs_path, mode='r')
>>> all_stops = load_table(gtfs_file, 'stop_times.txt')
Traceback (most recent call last):
  File "", line 1, in 
  File "./test.py", line 56, in map_gtfs_table_to_dict
    line_dict[column] = line.split(',')[columns[column]].strip('" ')
MemoryError
>>> 

It took a few minutes to consume all that memory and fail, and everything else on the system slowed to a crawl.

Finally, let's use the custom load_stop_times() to load-and-filter stop_times.txt without reducing the rest of the system to a memory-starved crawl:

>>> from map_stop_times_to_dict import map_stop_times_to_dict as load_table
>>> import zipfile
>>>
>>> gtfs_path = "/home/ian/gtfs/20140611.cta.gtfs"
>>> gtfs_file = zipfile.ZipFile(gtfs_path, mode='r')
>>> stop_list = ['3661', '3698', '15026', '17433']
>>> filtered_stops = load_table(gtfs_file, stop_list)
>>> len(filtered_stops)
4379
>>> for line in list(filtered_stops)[0:5]:
...     print(line, filtered_stops[line])
... 
4521987 {'stop_headsign': '79th Red Line', 'departure_time': '13:53:57',
'shape_dist_traveled': '22219', 'arrival_time': '13:53:57',
'pickup_type': '0', 'stop_id': '3661', 'stop_sequence': '35',
'trip_id': '437077075442'}
4538374 {'stop_headsign': '79th Red Line', 'departure_time': '15:04:01',
'shape_dist_traveled': '37847', 'arrival_time': '15:04:01',
'pickup_type': '0', 'stop_id': '3680', 'stop_sequence': '58',
'trip_id': '437077077228'}
4325384 {'stop_headsign': '127th/Lowe', 'departure_time': '17:26:00',
'shape_dist_traveled': '27644', 'arrival_time': '17:26:00',
'pickup_type': '0', 'stop_id': '5992', 'stop_sequence': '42',
'trip_id': '437077000628'}
475149 {'stop_headsign': '95th Red Line', 'departure_time': '18:01:18',
'shape_dist_traveled': '14028', 'arrival_time': '18:01:18',
'pickup_type': '0', 'stop_id': '15026', 'stop_sequence': '20',
'trip_id': '436073915060'}
4112403 {'stop_headsign': '95th Red Line', 'departure_time': '22:08:53',
'shape_dist_traveled': '29370', 'arrival_time': '22:08:53',
'pickup_type': '0', 'stop_id': '15026', 'stop_sequence': '42',
'trip_id': '437076995010'}
>>> 

It's still slow to load the entire file. On my equipment, I can load stop_times.txt in about 30 seconds this way. Varying the block size has some benefit, but you can play with that yourself.

In the final example, the 'line' variable -- the key to that subdict of one trip serving a stop -- is just a counter, the (approximate) CSV row number. stop_times.txt has no unique item on each line that can be used as a dict key.
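
For reference, here is a stripped-down sketch of how such a filtering loader can work. This is not the linked map_stop_times_to_dict() code, just an illustration using the same assumptions as the samples above: naive comma splitting, a header row naming the columns, and a plain row counter as the dict key.

import zipfile

def load_stop_times(gtfs_file, wanted_stop_ids, block_size=4096):
    """Load stop_times.txt from an open zipfile.ZipFile, keeping only rows
    whose stop_id is in wanted_stop_ids. Keys are a running row counter,
    since stop_times.txt has no unique field per line."""
    wanted = set(wanted_stop_ids)
    output = {}
    row_number = 0
    with gtfs_file.open('stop_times.txt', mode='r') as infile:
        header   = infile.readline().decode('utf-8-sig').strip().split(',')
        columns  = {name.strip('" '): i for i, name in enumerate(header)}
        stop_col = columns['stop_id']
        eof = False
        while not eof:
            block = infile.readlines(block_size)   # read a small block of lines
            if not block:
                eof = True
                continue
            for raw_line in block:
                row_number += 1
                fields = raw_line.decode().strip().split(',')
                if len(fields) <= stop_col:
                    continue                       # malformed or blank line
                if fields[stop_col].strip('" ') not in wanted:
                    continue                       # not a stop we care about
                output[row_number] = {name: fields[i].strip('" ')
                                      for name, i in columns.items()
                                      if i < len(fields)}
    return output

# Usage, mirroring the session above:
#   gtfs  = zipfile.ZipFile('/home/ian/gtfs/20140611.cta.gtfs', mode='r')
#   stops = load_stop_times(gtfs, ['3661', '3698', '15026', '17433'])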


Thursday, May 15, 2014

ssmtp to email cron output

Cron output in Ubuntu gets discarded. The default install of Ubuntu does not install a Mail Transport Agent (MTA), so cron cannot mail you the output.

You can install an MTA (several are easily available in the Ubuntu repositories), but configuration is not trivial. MTAs date from before the IMAP- and SMTP-based e-mail systems we use today, which makes them highly flexible and configurable...but not necessarily the easiest solution.

ssmtp is a limited-purpose MTA: a minimal sendmail replacement that hands all outgoing mail to an SMTP mailserver. It is provided by the 'ssmtp' package.

Install


sudo apt-get install ssmtp

Configuration file #1: /etc/ssmtp/ssmtp.conf

This tells ssmtp how to send email to my account.
It's not the best idea to store your e-mail password in world-readable plain text. See how to protect the password properly.

# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
root=me@example.com

# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
mailhub=my_mail_server.net:465
AuthUser=me@example.com
AuthPass=my_email_password
UseTLS=Yes

# Where will the mail seem to come from?
rewriteDomain=example.com

# The full hostname
hostname=hey_its_my_computer

# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES
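
Before wiring this into cron, it may be worth sanity-checking the mailhub, port, and credentials. Here's a minimal Python 3 check using placeholder values matching the config above; port 465 expects SSL from the start of the connection, which is why SMTP_SSL is used.

#!/usr/bin/env python3
# Sketch: verify the SMTP settings before trusting them to ssmtp and cron.
# All values are placeholders - use your real mailhub and account.

import smtplib

MAILHUB  = "my_mail_server.net"
PORT     = 465                    # SSL-on-connect
USER     = "me@example.com"
PASSWORD = "my_email_password"

with smtplib.SMTP_SSL(MAILHUB, PORT) as server:
    server.login(USER, PASSWORD)  # raises SMTPAuthenticationError if wrong
    print("Login OK - these settings should work")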


Configuration File #2: /etc/ssmtp/

This directs root mail to my account.

# sSMTP aliases
#
# Format:       local_account:outgoing_address:mailhub
#
# Example: root:your_login@your.domain:mailhub.your.domain[:port]
# where [:port] is an optional port number that defaults to 25.
root:me@example.com:my_mail_server.net:465



And now cron output lands in my normal inbox.