elfs: (Default)

I wish I’d known this a long time ago.  Django’s request object includes a dictionary of key/value pairs passed into the request via POST or GET methods.  That dictionary, however, works in a counter-intuitive fashion.  If a URL reads http://foo.com?a=boo, then the expected content of request.GET['a'] would be 'boo', right?  And most of us who’ve used other URL parsers in the past would expect that for http://foo.com?a=boo&a=hoo, the content of request.GET['a'] would be ['boo', 'hoo'].

Except it isn’t.  It’s just 'hoo'.  Digging into the source code, I learn that in Django’s MultiValueDict, __getitem__(self, key) has been redefined to return the last item of the list.  I have no idea why.  Maybe they wanted to ensure that a scalar was always returned.  The way to get the whole list (necessary when doing an ‘in’ request) is to call request.GET.getlist('a').
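
Django’s actual MultiValueDict lives in django.utils.datastructures; here is a toy pure-Python sketch of the surprising behavior (not Django’s real implementation, and the class name is mine):

```python
# Toy sketch of Django's MultiValueDict behavior (not the real code):
# every key maps to a list, but plain indexing returns only the LAST value.
class MultiValueDictSketch(dict):
    def __getitem__(self, key):
        return dict.__getitem__(self, key)[-1]   # a scalar: the last item

    def getlist(self, key):
        return dict.__getitem__(self, key)       # the whole list

# Simulating http://foo.com?a=boo&a=hoo
params = MultiValueDictSketch({'a': ['boo', 'hoo']})
print(params['a'])          # 'hoo', not ['boo', 'hoo']
print(params.getlist('a'))  # ['boo', 'hoo']
```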

Lesson learned, an hour wasted.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

It drives me nuts that we in the Django community rely on Solr or Haystack to provide us with full-text search when MySQL provides a perfectly functional full-text search feature, at least at the table level and for modest projects. I understand that not every app runs on MySQL, but mine do, and I’m sure many of you are running exactly that, and could use this technique without modification.

Well, after much digging, I found an article on MercuryTide’s website covering custom QuerySets with FULLTEXT and relevance, and built this library around it.

I used this rather than Django’s internal filter keyword search because this technique adds an additional aggregated value: the relevance of each result to the search terms. This is useful for sorting the results, something not automatically provided by the QuerySet.filter() mechanism.
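
Under the hood, the relevance trick rests on MySQL’s MATCH … AGAINST operator, which returns a floating-point score you can both select and sort on. A hand-written sketch of the kind of query involved (the exact SQL the library emits may differ) looks roughly like:

```sql
-- Hypothetical sketch of the underlying MySQL full-text query:
-- MATCH ... AGAINST yields a per-row relevance score.
SELECT id, title, summary,
       MATCH (title, summary) AGAINST ('metamorphosis') AS relevance
FROM books_book
WHERE MATCH (title, summary) AGAINST ('metamorphosis')
ORDER BY relevance DESC;
```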

You must create the indexes against which the search will be conducted. For performance reasons, if you’re importing a massive collection of data, it’s better to import all of the data and then create the index. More importantly, when you declare a SearchManager to be used by a Model, you declare it thusly:

class Book(models.Model):
    ...
    objects = SearchManager(('title', 'summary'))

When you do, you must add an index that corresponds to that list of fields:

CREATE FULLTEXT INDEX book_text_index ON books_book (title, summary)

Notice how the contents of the index correspond with the contents of the Search Manager.  Or you can automate the process with South:

    def forwards(self, orm):
        db.execute('CREATE FULLTEXT INDEX book_text_index ON books_book (title, summary)')

    def backwards(self, orm):
        db.execute('DROP INDEX book_text_index on books_book')

Using the library is fairly trivial. If there is only one index (which can encompass several columns) on the table, you call:

books = Book.objects.search('The Metamorphosis').order_by('-relevance')

If there’s more than one index, you specify the index by the list of fields:

books = Book.objects.search('The Metamorphosis', ('title', 'summary')).order_by('-relevance')

Note that the field list is a tuple, and must be.

If you specify fields that are not part of a FULLTEXT index, the error message will include lists of viable indices.   It will also tell you if there are no indices.  (Getting that to work was tricky, as it involved database introspection and the decoration of methods, so I’m especially proud of it.)

The library is fully available on my github account: django_mysqlfulltextsearch

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

Last night at the Django meetup, we also talked about unit testing.  Someone mentioned continuous integration, and we all discussed our favorites.  At one point, the fellow at whose offices we were holding the meeting mentioned that his team used Hudson and pulled up an example on the overhead projector.  I mentioned that Hudson was my favorite as well, and he said, “Yes, I think it was your blog entries that led me to this solution.”

That’s the second time this year I’ve walked into a meeting and someone’s said, “I’ve used something you wrote.”  Kinda cool.  Wish it was addictive.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

Inspired by Five Web Files That Will Improve Your Website, I decided this morning to implement OpenSearch on the Indieflix Website. (It’s not up yet, we’re still beta’ing it, and it’s along with a massive list of changes that still need testing, so don’t go looking for it.) OpenSearch is a way to turn your site’s search feature into an RSS feed: you define for (other) search engines how and what you search on your site, and it automagically creates relationships so that your site’s search can be included in remote results. Normally, the results would be returned in an XML container for RSS or Atom, but HTML’s fine for some applications.

As a trivial example, I’m going to add your website’s search engine to Firefox’s drop-down of search engines. I’m going to assume that you already have a search engine enabled on Django. Haystack, or Djapian, or Merquery, or something along those lines.

First, you need a file called opensearch.xml. I put this into my template base directory:

( Code behind the cut )
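
The original file is behind the cut; for reference, a minimal OpenSearch description document looks something like this (the ShortName, Description, image, and template URL are placeholders, not the Indieflix values):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>My Site</ShortName>
  <Description>Search My Site</Description>
  <Image height="16" width="16" type="image/x-icon">http://www.example.com/favicon.ico</Image>
  <Url type="text/html" template="http://www.example.com/search/?q={searchTerms}"/>
</OpenSearchDescription>
```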

Obviously, there are things to modify here: your real short name, description, image and search paths.   You know where those go.

This goes into your base urls.py (if you haven’t imported direct_to_template, now is the right time):

( More code behind the cut )
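
The cut hides the original; a sketch of what such a urls.py entry looked like in that era of Django (the names and paths are illustrative):

```python
# Django 1.x-era urls.py sketch (direct_to_template was removed in
# later Django versions):
from django.conf.urls.defaults import patterns
from django.views.generic.simple import direct_to_template

urlpatterns = patterns('',
    (r'^opensearch\.xml$', direct_to_template, {
        'template': 'opensearch.xml',
        'mimetype': 'application/opensearchdescription+xml',
    }),
    # ... the rest of your URLs ...
)
```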

Again, if you changed the names or locations of the template, you have no one but yourself to blame if stuff blows up.   The name opensearch.xml is not particularly important or required, in either the template (obviously) or in the deployed URL.   To make external spiders and browsers know about your website, this goes into your base.html or site_base.html, somewhere in the headers with your metainformation and style sheets and so forth.

( Final cut )
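
The final cut is the autodiscovery tag; the conventional form is a link element in the page head (the title and href here are placeholders):

```html
<link rel="search"
      type="application/opensearchdescription+xml"
      title="My Site Search"
      href="/opensearch.xml" />
```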

And that’s it.

To test this, load up your home page in Firefox, then click on the search bar’s drop-down. You should see your “short name” offered as a search plug-in. Select it and type some search terms, and Firefox will know to use your search engine, and the results will be from your site.

Pretty cool.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

As I’ve been working on a project at Indieflix, I’ve been evaluating other people’s code, including drop-ins, and for the past couple of days a pattern has emerged that, at first, bugged the hell out of me. Django has these lovely things called context processors– they allow you to attach specific elements of code to the request context before the router and page managers are invoked; the idea is that there are global needs you can attach to the context in which your request will ultimately be processed, and this context can be grown, organically, from prior contexts.

Nifty.  I kept noticing, however, an odd trend: programmers attaching potentially large querysets to the context.  Take, for example, the bookmarks app: it has a context processor that connects a user’s bookmarks to the context, so when time comes to process the request if the programmer wants the list of the user’s bookmarks, there it is.

It took me three days to realize that this is not wasteful.  It says so right there in the goddamn manual: QuerySets are lazy — the act of creating a QuerySet doesn’t involve any database activity. So building up the queryset doesn’t trigger a query until you actually need the first item of the queryset.  It just sits there, a placeholder for the request, until you invoke it in, say, a template.  Which means that you can attach all manner of readers to your contexts, and they won’t do a database hit or other I/O until you drop a for loop into a template somewhere. I’ve been avoiding this technique for no good reason.
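
The laziness is easy to demonstrate without Django at all. Here is a toy stand-in for a QuerySet (the names are mine) that counts its “database hits”:

```python
# Toy illustration of QuerySet-style laziness (not Django's code):
# constructing the object is free; iterating it triggers the "query."
class LazyQuerySketch:
    def __init__(self):
        self.hits = 0                  # simulated database round-trips

    def __iter__(self):
        self.hits += 1                 # the I/O happens here...
        return iter(['bookmark-1', 'bookmark-2'])

context = {'bookmarks': LazyQuerySketch()}   # context processor: free
print(context['bookmarks'].hits)   # 0 -- nothing queried yet
list(context['bookmarks'])         # a template's {% for %} loop
print(context['bookmarks'].hits)   # 1 -- now the query has run
```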

This also means that pure “display this list and provide links to controlling pages, but provide no controls of your own” pages are pure templates that can be wrapped in generic views.

Ah!  And this is why I didn’t get why generic views were such a big deal.  Information is often so context sensitive so how could a generic view possibly be useful?  The examples in the book are all tied to modifying the local context in urls.py, but that’s not really where the action is.  The action is in the context processors, which are building that context.

I feel like Yoda’s about to walk in and say, “Good, but much to learn you still have, padawan.”

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

We all know the drill with MySQL and Django.  You have a dev database, probably compressed, and you need to roll it out so your server’s in a “pristine” state before you start running migrations and adding stuff.  And the routine typically looks something like this: gzip -dc dev_database.gz | mysql -u djanguser -p djangodb.  It is also perfectly legitimate (and less error-prone) with Django to do this instead: gzip -dc dev_database.gz | ./manage.py dbshell.

Yes, that’s painfully obvious. But sometimes, we miss the painfully obvious.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

So, I got tired of the way Django-SocialAuth was borked and not working for me, so I forked the project and have put up my own copy at GitHub.

There are three things I noticed about the project right away: First, it forces you to use a broken templating scheme. I haven’t fixed that, but in the meantime I’ve ripped out all of the base.html calls to keep them from conflicting with those of other applications you may have installed. Really, the templates involved have very little meat on them, especially for the login.html page. These are components that would have been better written as templatetags. Second, the project is rife with spelling errors. (The most famous, of course, being that the original checkout was misspelled “Djano”). I am a fan of the notion that a project with spelling problems probably has other problems. I’ll make allowances for someone for whom English is a second language, but I was not filled with confidence. And third, the project violates Facebook’s TOS by storing the user’s first and last name. Along the way I discovered that the Facebook layer was completely non-functional if you, like three million other Facebook users, had clicked “Keep me logged in,” which zeros out the “login expires” field from Facebook. It would never accept you because your expiration date would then always be January 1, 1970, effectively before “now.”
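
That “January 1, 1970” failure mode is worth spelling out. A sketch of the broken check and its fix (the function names are mine, not SocialAuth’s):

```python
from datetime import datetime

# Facebook reports a session-expiry timestamp of 0 for users who
# clicked "Keep me logged in" (i.e. the session never expires).
def naive_session_valid(expires_timestamp):
    # Broken: reads 0 as 1970-01-01, which is always "already expired."
    return datetime.utcfromtimestamp(expires_timestamp) > datetime.utcnow()

def fixed_session_valid(expires_timestamp):
    # 0 means "no expiration," so treat it as valid.
    if expires_timestamp == 0:
        return True
    return datetime.utcfromtimestamp(expires_timestamp) > datetime.utcnow()

print(naive_session_valid(0))  # False: every "keep me logged in" user rejected
print(fixed_session_valid(0))  # True
```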

I’ve barely begun hacking on the beast, but already I’ve made some progress. Facebook now works the first time around. I’ve cleaned up much of the spelling and grammar in the documentation, such as it is, and I’ve clipped many of the template naming problems that I saw in my original use of the system. I’ve also revised setup.py so that it runs out of the box, although I’m tempted to go with a different plan, one like django-registration where it is your responsibility to cook up the templates for the provided views. And I’ve ripped out most of the Facebook specific stuff to replace it with calls to PyFacebook, which is now a dependency.

One thing I do want to get to is a middleware layer that interposes the right social authentication layer on those users who come in from the outside world: i.e. if the AuthMeta indicates you’re a facebook user, then request.user will be a lightweight proxy between you and Facebook for those fields that are, by definition, Facebook-only (and a violation of the TOS if you copy them). It might make more sense to have a decorator class, but only if you don’t have a gazillion views.

I haven’t gotten much further than a Facebook layer that satisfies my immediate needs. I haven’t had a need to branch out and explore the Twitter or Oauth components yet. What I needed at the moment was a simple authentication layer that allowed either local users (for testing purposes) or FacebookConnect users, and one that didn’t need to contact Facebook for absolutely every view, whether you wanted it or not, just to check “is this guy still a facebook user?”, which is how the DjangoFacebookConnect toolkit does things. I suppose, if you’re a Facebook app, that’s what you want, but I’m not writing a Facebook app, I’m writing an app that uses FacebookConnect to associate and authenticate my application users’ accounts via their Facebook accounts.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

Eddie Sullivan at Chickenwing Software has a fascinating post entitled The Facebook Platform is Dead. I agree with many of his comments. I don’t think there’s anything terrible about the “Facebook Certified Application” program; that’s a business decision, not a software policy decision. But Sullivan says one thing that set me off. He wrote:

The big companies can afford to hire someone full-time to test and re-test their apps against every change to the back-end, but the rest of us cannot.

To which my reaction is: shut up, and don’t be so damned lazy.

Install Celerity, and get headless testing with WATIR in a rapid-response environment. Install Hudson and get fully automated continuous integration. Install Git as your repository, and tell Hudson what your master is, and honor it. Install Cucumber so that when it fails, the failure reports are in clear and unambiguous English. Put this all on that archaic hunk of junk PC in your basement, give it a fresh hard drive and install Debian Linux. Give Hudson a mailserver so it can notify you when an automatic test run fails.

None of this, from building your own PC and installing Linux all the way up to installing Ruby, JRuby, Java, and all of the other tools necessary to support your build environment and make it work, ought to be beyond the ken of the average programmer.

Facebook is just a web application.  Treat it as such.  Test against it.  Get a few Facebook Test Accounts, write a few WATIR scripts to automate their Facebook relationships and friend graphs, write more to log in and go to the application, then test the Hell out of your application.

None of this is hard. If you spend one week teaching yourself how to set Hudson and Git up correctly, you’ll benefit forever from Kent Beck’s famous quote, “transmuting fear into boredom.”  Even better, by putting it on Hudson and Git, you get freedom from even the boredom, for the most part.  Instead, you get knowledge that your fixes don’t break anything, and the capability of backing out when they do.

What is hard is being in the habit of testing.  Of writing tests in terms of expectations. I’m fair at it, but I’m getting better.  I aspire to Beck’s mantra: I’m not a great programmer.  I’m a good programmer with great habits.   Test-driven development (and behavior-driven development) are great ideas (although a lot of TDD zealots go overboard, with the predictable backlash), but integrating them with continuous building and continuous testing is even better, and should improve all web application development.

Believe me, Facebook apps are in desperate need of two things: automated testing, and better graphic design.  I can at least contribute to one of these.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

I use Django signals a lot in my professional work, mostly to create specialized tables that track events in the ecosystem of social networking sites that I build.  For example, if I make a post on a social networking site, that causes an event that creates a signal.  That signal will be heard by, for example: (1) a reward mechanism, which might give me a badge/achievement/sticker/shiny rock/whatever to acknowledge my place in the social network hierarchy, (2) a news mechanism, to look up who my friends are and tell them what I’m doing, (3) a logging mechanism, which will be of interest to my investors, (4) a social media mechanism, which will analyze my relationships with other social networking sites and ping them, among (5, 6, 7) whatever else you can think of.

These are all unique, filtered views of an action I just took that might serve me as agents of attention, reputation, and illumination.

As I’ve been working in this space, I’ve learned three very important rules for Django:

(1) Any Django application (not project, application) that builds its tables via signals and business logic rulesets must only and ever build its tables via signals and rulesets.  It must not have its own views for doing so.  Its CUD is signals.  Only the R in CRUD may have views for the signal-built application.

(2) When dumping data for your project, never dump data from the signal-based applications.  When you want to reload this data (after the appropriate mangling/filtering/whatever), the objects in your ecosystem models will send out the appropriate signals to build those tables for you.  (Signal senders in your views that alter data?  Shame on you!)

(3) As a consequence of (2), your signal-built data tables must take their dates from their instances.  Otherwise, the signal-built tables become disordered with respect to the events they’re expected to monitor.

Of course there are exceptions to these rules, but this is a very solid way to think about doing signal-based development.
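
Rule (1) in miniature, with a toy dispatcher standing in for Django’s signal machinery (all of these names are mine, not Django’s):

```python
# Toy signal dispatcher illustrating "CUD happens only via signals":
# nothing writes to the rewards "table" except a connected receiver.
class Signal:
    def __init__(self):
        self.receivers = []

    def connect(self, receiver):
        self.receivers.append(receiver)

    def send(self, instance):
        for receiver in self.receivers:
            receiver(instance)

post_created = Signal()
rewards = []   # the signal-built "table"

def grant_badge(post):
    # Rule (3): take the date from the instance, not from "now", so
    # reloaded fixtures rebuild the table in the right order.
    rewards.append({'post': post['title'], 'when': post['created']})

post_created.connect(grant_badge)
post_created.send({'title': 'First post!', 'created': '2009-06-01'})
print(rewards)  # [{'post': 'First post!', 'when': '2009-06-01'}]
```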

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

I needed some Rails-like environment settings in Django.  This is the quick and easy way to get that done.  First, after building your “settings.py” file, create a directory under your project root named “env”.  Move your settings.py file into that directory and rename it “development.py”.

Now, in your root directory, open up a new file named “settings.py” and put this in:

import os
import os.path

PROJECT_ROOT = os.path.normpath(os.path.dirname(__file__))

local_import = "env/development.py"
if os.getenv("DJANGO_ENV") == 'TEST':
    local_import = "env/test.py"
elif os.getenv("DJANGO_ENV") == 'PRODUCTION':
    local_import = "env/production.py"

import_file = os.path.join(PROJECT_ROOT, local_import)
exec(open(import_file).read())

Bingo!  Now you have different versions of settings.py depending upon whether or not you’re starting the server in TEST, PRODUCTION, or the default, “development”.  Create production.py and test.py as needed.  You can start the server with:

DJANGO_ENV=TEST ./manage.py runserver

And it will load the correct environment.  Using a shell script to run a test server and then a test harness might not be the most elegant thing in the world, but at least it’s Un*x, and it makes development less stressy.
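
The selection logic itself is trivial enough to factor into a function, should you want to test it on its own (a sketch; the filenames follow the layout above):

```python
import os

# Map DJANGO_ENV to a settings file, defaulting to development.
def settings_path(env):
    return {
        'TEST': 'env/test.py',
        'PRODUCTION': 'env/production.py',
    }.get(env, 'env/development.py')

print(settings_path(os.getenv('DJANGO_ENV')))  # env/development.py if unset
```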

Sweet!

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

Ah, the bleeding edge.  It’s a war out there!

This morning, Facebook released fbwatir. I’ve just spent the past few hours knocking it around, and have come to the conclusion that it’s pretty mega-borked but it can be saved.

In fact, I now have it working with Cucumber and Firewatir. There are several major flaws in FBWatir, the biggest of which is that it assumes its own responsibility for the browser object. This is broken, and causes Cucumber to spawn innumerable Firefox windows. Commenting out the browser invocation in FBWatir and putting it into Cucumber’s own env.rb file is much better.

I also discovered a bug in Firewatir 1.6.2 that assumes “window zero” is always invalid; but Javascript indexes the windows starting with zero, so window zero is still a valid window ID.  Annoying as hell, but easily monkeypatched away.

I now have Cucumber working with FacebookConnect…

Given how little I know about Ruby, and given how my Ruby expert says I’m doing “inappropriate things” with the Ruby scope, that’s a freaking miracle.  We’re working together to make it work better.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

Sigh.

I’ve just spent the last few hours wandering around the various “open source” analytics programs trying to find the exact right fit for what I want.  I’m not finding it, which means that (headache ahead) I may have to write something myself.  There’s a django-analytics placeholder in GoogleCode, but it’s empty.  I at least have a model!

Basically, I have a distributed subscriber/producer package, and I want to be able to present individual producers with analysis specific to their work.  Because the work is long-form text, I want to be able to tell the viewer that the reader scrolled every paragraph into view (no, really!) and actually read the work, not just scanned it.  Both of these are more or less beyond the province of Piwik, Google Analytics, or OWA.

Time to put the research aside and concentrate on finishing the product.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

We frequently write little functions that populate the Django context, and sometimes we want that context to be site-wide, and we want every page and every Ajax handler, basically everything that takes a request and spews a response, in our application to have access to that information.  It might be the user’s authentication, or his authorization, or some profile information.  Or it might be environmental: a site might have figured out what time it is on the user’s site, and will adjust backgrounds and themes accordingly.

The context might be a simple variable.  I have an example right here: is the browser you’re using good enough?  (I know, this is considered Bad Form, but it’s what I have to work with.)  The function has the simple name need_browser_warning.  The context key may as well have the same name.  Using a constant for the context key is the usual pattern; this ensures the Django programmer won’t get it wrong more than once, at least on the view side.  (The template is another issue entirely.  Set your TEMPLATE_STRING_IF_INVALID in settings.py!)

I wanted something more clever in my context processor.  Here’s sickly clever:

import inspect
def need_browser_warning(request):
    return { inspect.currentframe().f_code.co_name:
        not adequate_browser(request.META.get('HTTP_USER_AGENT')) }

Yeah, that’s a little twisted.  It guarantees that the name of the context key is “need_browser_warning”, and the value is True or False depending upon what the function adequate_browser returns, which is what we want, so it’s all good.

Obviously, this isn’t good for everything.  Some context processors handle many, many values.  But for a one-key, this is a nifty way of ensuring name consistency.
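
The trick works anywhere in Python, not just in Django context processors. Stripped to its essence (adequate_browser omitted and the value hard-wired):

```python
import inspect

# The dict key is taken from the function's own name, so the two can
# never drift apart: rename the function and the key renames itself.
def need_browser_warning(request=None):
    return {inspect.currentframe().f_code.co_name: True}

print(need_browser_warning())  # {'need_browser_warning': True}
```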

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

Today’s little snippet: Filtering a loosely coupled many-to-many relationship.  As revealed earlier,  I don’t really “get” the difficulty with many-to-many relationships.  I don’t even get the difficulty with signals; if you define the many-to-many manually, handling signals on it is trivial compared to trying to do it manually in one of the referring classes.

Today, I was working on an issues tracker.  There are two classes at work here, the Profile and the Issue.  One profile may be interested in many Issues, and obviously one Issue may be of interest to many Profiles.

This calls for a ProfileIssue table that stands independent (in my development paradigm) of both Profiles and Issues.   As I was working on a dashboard, I realized that one of the things I wanted was not just a list of the issues the profile was following, but also a list of the issues that the profile was responsible for creating.  As it turned out, adding that query to the ProfileIssueManager is trivial, but requires a little knowledge:

class ProfileIssueManager(models.Manager):
    def from_me(self, *args, **kwargs):
        return self.filter(issue__creator__id = self.core_filters['profile__id'])

The secret here is knowing about the core_filters attribute in the RelatedManager.   It contains the remote relationship key that you can use;  calling from_me from a profile works, but calling it from anywhere else doesn’t.  The IssueRelatedManager won’t have a profile__id and this will blow up.  That’s okay; using it that way is an error, and this is a strong example of Crash Early, Crash Often.

I can hear some of you cry, “Now why, why would you need such a thing?” Well, the answer is pretty simple: templates. Having one of these allows me to write:

<p>Issues tracked: {{ profile.issues.count }}</p>
<p>Issues created: {{ profile.issues.from_me.count }}</p>

And everything will work correctly.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

Repeat after me:

  • Registration is not Authentication is not Authorization is not Utilization.
  • Registration is not Authentication is not Authorization is not Utilization.
  • Registration is not Authentication is not Authorization is not Utilization.

I’ll keep reminding myself of that until I figure out how to disentangle the four from this damned Facebook app.  Registering to use the app is not the same thing as authenticating to use the app, and it’s definitely not authorization to determine your level of access.  Nor is any of this related to callbacks to the social application network to give you things like lists of friends and writing on your wall; that’s outside the responsibility of SocialAuth anyway.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

If you’ve created Django Application A, and then Django Application B, it is acceptable (and even sometimes necessary) for Application B to reference Application A.  The canonical example is Django contrib.auth; everyone references that beast.  It is not acceptable for you to go and edit Application A to reference Application B.  That is officially Doin’ It Wrong.

In a smarter world, you will never use a Django ManyToMany field.  You will create an individual class referencing both objects of the many-to-many relationship.  You will inevitably need more smarts than a mere two-column table, and a separate class, however small and insignificant, will provide both self-documentation and a chance to define the __unicode__() method for administration. Django is smart enough to hook up the relationships under the hood.
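
In the Django of this era, that advice comes out looking like this sketch (pre-on_delete ForeignKey syntax and __unicode__; the Profile/Issue names echo the issue-tracker entry above, and the extra field is illustrative):

```python
from django.db import models

class Profile(models.Model):
    name = models.CharField(max_length=100)

class Issue(models.Model):
    title = models.CharField(max_length=200)

class ProfileIssue(models.Model):
    # The explicit join class: two foreign keys plus the "more smarts"
    # a bare two-column ManyToMany table cannot carry.
    profile = models.ForeignKey(Profile)
    issue = models.ForeignKey(Issue)
    followed_on = models.DateField()

    def __unicode__(self):
        return u"%s follows %s" % (self.profile, self.issue)
```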

Unit testing is goddamned hard when your application is married to FacebookConnect.  A smarter relationship uses the SocialAuth layer, with additional proxies for information and posting handlers.  That way, not only can your application send updates to Facebook walls, but it can also update its activity on Twitter, and allow authentication via Google, and so on.  By using the SocialAuth layer, you can create a pseudo-layer that handles testing.  You’re still beholden to testing the SocialAuth stuff yourself.

If you’re using SocialAuth, push all of your user-related smarts into the UserProfile object, and always refer to it.  Build your UserProfile object to own the proxy to the user’s authenticated relationships with social media.  After login, leave the user alone!  Better yet, use middleware to attach the profile to the request object automagically if and when it’s present, and live with it.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

The correct call for posting to a user’s facebook wall with Python and pyfacebook, after you’ve established both user authentication via FacebookConnect and gotten stream_publish permission, is:

request.facebook.stream.publish(
    message = render_to_string(template_path, fb_context),
    action_links = simplejson.dumps(
        [{'text': "Check Us Out!", 'href': "http://someurl.com"}]),
    target_id = 'nf')

See that ‘nf’ down there in target_id?  It’s not on any of the Facebook documentation pages, but that is the correct string to post to your user’s Facebook Newsfeed. (For that matter, the fact that you have to run the action_links through simplejson, and that they have to match the PublishUserAction API action_links spec, is also not documented; the documentation says it just needs an array of arrays.)  I have no idea how to post to some other user’s newsfeed, but at least I’m one step closer.

Oh, another important tip: in order to make my news “stories” consistent, I’m using a template to post them to Facebook.  The template must not have newlines within, or they will show up on Facebook and it’ll look all ugly.  Every paragraph should be one long line of text without line breaks.

This entry was automatically cross-posted from Elf's technical journal, ElfSternberg.com
elfs: (Default)

One of the nifty things that Django provides is the
{% url backreference %} syntax, which allows
you to name the targets in your list of URL objects and then refer to
them by an explicit name. You can sometimes use the function name
instead, and Django has a way of turning the function name back into
the URL. It works fine as long as the signature of the backreference
and the signature of the function match.

It’s very nice for RESTful interfaces. But what about AJAXy webpages?
AJAX is all about D(X)HTML and the rendering and animation of user
interfaces in a browser, and there’s a lot of Javascript that comes
along with all that HTML and CSS. And embedded AJAX often comes with
its own set of URL calls. I mean, seriously, what if you want to write
something like this:

<example.js>=
$.getJSON(
    "{% url results %}", {},
    function(resp) {
        $("#resultswrapper").html(resp.results);
    });

Here, I want to replace the string “url results” with the url
that returns the results and shoves their visuals into the
“resultswrapper” HTML object, whatever it is.

You could do this in Django, making this a templatized object
and spewing it out every time. But often enough this URL
never changes during the lifetime of the program. This is
effectively javascript with a macro embedded in it that you want
substituted once, preferably at start-up. Well, I haven’t done the
start-up for you. But here’s a nifty little chunk of code that’ll do
URL reverse lookups in any static files in
your STATIC_DOC_ROOT directory (by the way, that
STATIC_DOC_ROOT setting is pretty useful for development, if you’re
serving static media out of your Django server, as devs frequently
do):

<addition to settings.py>=
import os
DIRNAME = os.path.normpath(os.path.dirname(__file__))
STATIC_DOC_ROOT = os.path.normpath(os.path.join(DIRNAME, 'static')) + '/'

And here’s the routine. Note that I’ve made it a Django command:
put it wherever you want and use it wisely:

<rendertemplates.py>=
from django.core.management.base import NoArgsCommand
from django.core.management.base import CommandError

from settings import STATIC_DOC_ROOT
from django.template import Template, Context

import os
import os.path
import re

re_walker = re.compile(r'\.tmpl$')

class Command(NoArgsCommand):
    help = ("Run a series of templates through the Template handler to produce "
            "(semi) static files.  This is mostly useful for javascript "
            "handlers with Ajax calls in them that only need the urls "
            "defined when the application is first built or starts running.  It "
            "allows developers to use the {% url somethingorother %} syntax inside "
            "Ajax handlers without burdening the application at runtime.")

    def handle_noargs(self, **options):
        paths = [os.path.join(STATIC_DOC_ROOT, path[0], filename)
                 for path in os.walk(STATIC_DOC_ROOT)
                 for filename in path[2]
                 if re_walker.search(filename)]

        for fn in paths:
            fn_out = re_walker.sub('', fn)
            open(fn_out, 'w+').write(Template(open(fn, 'r').read()).render({}))

It’s not perfect (hey, I wrote it in about 20 minutes, mostly to fix this
problem in a quick and dirty fashion). Given that I didn’t know how the
innards of the Template class worked, and had never used the new os.walk()
function, this was pretty good.
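The walk-and-filter comprehension is easy to exercise on its own. A stdlib-only sketch (the directory and file names are throwaway, invented for the demonstration):

```python
import os
import re
import tempfile

re_walker = re.compile(r'\.tmpl$')

# Build a throwaway tree: two .tmpl files and one ordinary file.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'js'))
for name in ('app.js.tmpl', 'search.js.tmpl', 'plain.js'):
    open(os.path.join(root, 'js', name), 'w').close()

# Same shape as the command's comprehension: every *.tmpl under root.
paths = [os.path.join(path[0], filename)
         for path in os.walk(root)
         for filename in path[2]
         if re_walker.search(filename)]

print(sorted(os.path.basename(p) for p in paths))
# -> ['app.js.tmpl', 'search.js.tmpl']
```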

As always, I’ve provided the source code for this.

elfs: (Default)

One thing I see a lot of in professional Django is the importation of ugettext, Django’s internationalization library, which leverages the GNU gettext project’s toolkit for generating translation catalogs. Laying the groundwork for translation is important to larger projects intended for universal appeal. Because the underscore is a valid leading character in a function name in both C and Python, it’s also a valid function name in its own right, and gettext recognizes that single function, _(), as the wrapper for a string to be recorded in the translation catalog, with that string serving as the “key” for translations into other languages.

However, it gets a little old, after a while, typing the same “import-as” line in module after module. So I decided, to heck with it, I’m just going to make the gettext function (or in Django’s case, ugettext) available absolutely everywhere:

<__init__.py>=
from django.utils.translation import ugettext
import __builtin__
__builtin__.__dict__['_'] = ugettext

Put this into the __init__.py file in the root of your project (the same directory level as urls.py and settings.py); it installs _() as a global reference in the running Python VM, making it as universally available as int(), map(), or str().
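For what it’s worth, the same trick carries over to Python 3, where __builtin__ is spelled builtins, and the stdlib’s own gettext module will even do the installation for you. A minimal sketch with a stand-in translation function (fake_ugettext is hypothetical, standing in for Django’s ugettext):

```python
import builtins   # Python 3's name for Python 2's __builtin__
import gettext

# Stand-in for django.utils.translation.ugettext: just echo the message.
def fake_ugettext(message):
    return message

# Same effect as __builtin__.__dict__['_'] = ugettext in the post.
builtins._ = fake_ugettext
print(_("Hello"))          # callable from any module, no import required

# The stdlib offers the same convenience: with no catalog on disk,
# gettext.install() binds a pass-through _() into builtins.
gettext.install('myapp')
print(_("World"))
```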

This is, of course, controversial.  Modifying the Python global namespace to add a function can be considered maintenance-hostile.  But the gettext feature is so universal, at least to me, that __init__.py is where it belongs.

elfs: (Default)
So, I've put up my latest geek piece: Fixing an omission from Django's simplejson: iterators, generators, functors and closures, which is exactly what it says it is. Django's "simplejson", which bundles up server-side structures and exports them to the browser for interpretation and display in fat client applications, is okay, but it fails in some common ways.

For example, if I want to render a tree of data stored in a database as a JSON object, first on the server side I have to devolve the database into a massive dictionary-of-dictionaries or list-of-lists, and then pass the product to simplejson. The process of devolution usually involves recursing down the tree structure, and the process of rendering involves recursing down the dictionary-of-dictionaries structure.

Why not write a class or closure that describes the process, and pass that to the JSON renderer, which will build the JSON object in a single recursive pass? It’s a great idea: it eliminates the obscure, error-prone, and wasteful interim dictionary-of-dictionaries, and it describes the rendering process in clear, tight code.

Except, the JSON handler in Django has no idea how to handle a class or closure designed to do that. It does not understand the next recursive or iterative step when presented with one of those as it recurses. My post addresses this issue.
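The linked post has the simplejson fix itself. For flavor, the same gap in today’s standard-library json can be papered over through a custom encoder’s default() hook. This is a minimal sketch of the idea, with invented names (LazyEncoder, squares), not Django’s or the post’s implementation:

```python
import json

class LazyEncoder(json.JSONEncoder):
    """Teach json.dumps() about generators, iterators, and zero-argument
    callables by materializing them only when the encoder meets them."""
    def default(self, obj):
        if callable(obj):
            return obj()          # closure/functor: call it, encode the result
        try:
            return list(obj)      # iterator/generator: drain one level; any
        except TypeError:         # nested generators re-enter default() later
            return json.JSONEncoder.default(self, obj)

def squares(n):
    for i in range(n):
        yield i * i

doc = {'squares': squares(4), 'answer': lambda: 42}
print(json.dumps(doc, sort_keys=True, cls=LazyEncoder))
# -> {"answer": 42, "squares": [0, 1, 4, 9]}
```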

I confess that this was a sidelight on yesterday's research: figuring out how to create a list-of-lists data object in Dojo. That stymied me, and this is a better implementation. Still, I regret not being able to publish a class called "LOLTrees."
