01 Sep 2011

Winding down...

Posted by Jamis on Thursday, September 1

It’s been a good run, but my thrill for blogging has ebbed. Most of my online conversation occurs on Twitter these days, and my Buckblog has definitely felt the lack of attention.

So, it is with some sadness that I announce the end of the Buckblog. Nothing is actually going away: the articles and existing comments will all remain where they are, but I won’t be posting any more, and no more comments will be accepted on any of them. It’ll all be “read-only” from here on out.

I have no doubt that in coming months and years I’ll get the itch to post something that won’t fit on Twitter. I’m not sure yet how I’ll work that. Maybe I’ll create a Tumblr account for the purpose. Or maybe I’ll just post static HTML pages and link them up via Twitter. I figure I’ll cross that bridge when I come to it.

At any rate, here’s to the future!

Posted in Metablog | 0 comments

07 Jun 2011

Sharing the Inheritance Hierarchy

Posted by Jamis on Tuesday, June 7

I’ve been working more closely with Ruby 1.9 and Rails 3 lately, and while in general it’s been smooth going, there was one particularly disappointing road bump today.

Consider the following code, from a Rails functional test:

class SessionControllerTest < ActionController::TestCase
  tests SessionController
  # ...
end

class LoginTest < SessionControllerTest
  # ...
end

class LogoutTest < SessionControllerTest
  # ...
end

This works great with ruby 1.8.7. The “tests” call sets SessionController as the controller under test, and the subclasses gain access to that via the “inheritable attributes” feature of ActiveSupport.

Sadly, this does not work in ruby 1.9. Those tests have errors now, saying that the controller instance variable needs to be set in the setup method, because the inheritable attribute of the parent is no longer being inherited.

Some digging and experimenting helped me pare this down to a simpler example:

require 'minitest/unit'
require './config/application'

class A < MiniTest::Unit::TestCase
  write_inheritable_attribute "hi", 5
end
p A.read_inheritable_attribute("hi")

class B < A
end
p B.read_inheritable_attribute("hi")

If you run this example with ruby 1.9, it will print “5”, and then “nil”. And the most telling bit: if you comment out the superclass from A’s definition, the program will then print “5” and “5”. It was obviously something that MiniTest was doing.

Another 30 minutes later, I had my answer. MiniTest::Unit::TestCase was defining an inherited callback method, to be invoked every time it was subclassed:

def self.inherited klass # :nodoc:
  @@test_suites[klass] = true
end

ActiveSupport, too, is defining a version of the inherited method, monkeypatching the Class object directly so that subclasses can get a copy of their parent’s inheritable attribute collection. And because MiniTest::Unit::TestCase’s class-level definition sits earlier in the method lookup chain than ActiveSupport’s patch on Class, MiniTest’s version was shadowing ActiveSupport’s, so the inheritable attributes were never copied on inheritance.

Frustrating!

All it takes is the addition of one line—just a single word!—to “fix” MiniTest’s version:

def self.inherited klass # :nodoc:
  @@test_suites[klass] = true
  super  # <-- THIS RIGHT HERE
end

You can argue that ActiveSupport shouldn’t be monkeypatching system classes like that. Maybe, maybe not. The point is, it is, and it actually does it in a pretty “good neighbor” way. It saves a reference to any prior “inherited” definition, and calls it from within its version, chaining the calls together.
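To make that concrete, here’s a simplified sketch of the technique (my own illustration, not ActiveSupport’s actual code): stash the prior definition under a new name, then call it from the replacement.

```ruby
# A simplified sketch of "good neighbor" hook replacement: keep a
# reference to the prior definition of inherited, and chain to it.
class Class
  # Make sure there is something to chain to, then stash it under a new name.
  unless method_defined?(:inherited) || private_method_defined?(:inherited)
    def inherited(subclass); end
  end
  alias_method :inherited_without_tracking, :inherited

  def inherited(subclass)
    (@subclasses ||= []) << subclass      # our own bookkeeping...
    inherited_without_tracking(subclass)  # ...then chain to the old version
  end
end

class Parent; end
class Child < Parent; end

Parent.instance_variable_get(:@subclasses) # => [Child]
```

ActiveSupport’s real implementation is more involved, but the principle is the same: the old hook still runs, so any bookkeeping it was doing survives.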

Sadly, MiniTest’s assumption that it would be the only consumer of the inherited hook in its ancestor chain totally kills ActiveSupport’s attempts at working together. I’ve had to resort to calling the “tests” helper in each test subclass, explicitly. Not a huge deal, but I’d sure love to have back the two hours I spent debugging this.

The lesson? Always be a good neighbor. Never assume you are the only kid on the playset. Call super when you override a method. Create a call chain when you replace a method in situ. Think of the children!

Posted in Essays and Rants | 8 comments

17 Mar 2011

Maze Generation: More weave mazes

Posted by Jamis on Thursday, March 17

My previous post showed one way to create “weave mazes”, or mazes in which the passages weave over and under one another. The technique I showed was flexible, and could be implemented using several different maze algorithms, but it has (at least) one big drawback: it’s not predictable. You don’t know how many over/under crossings you’ll get until the maze has been generated, and for small mazes (5×5, for instance) you may very often get no crossings at all.

In the comment thread for that post, Robin Houston described an alternate algorithm. Instead of building the crossings as you generate the maze, you do it as a preprocessing step. You scatter crossings (either at random, or deliberately) across the maze, and then generate the maze so that they connect correctly.

Sounds like it could be tricky, yeah? You’ve got all these independent connections, like little graphs floating around, all disjoint... If only there were a way to generate a maze by connecting disjoint trees into a single graph…

What’s that? Oh, right! Kruskal’s algorithm does just that! Let’s take a look at how easy Kruskal’s makes this.

Click here to read the rest of this article.

Posted in Under the Hood | 18 comments

04 Mar 2011

Maze Generation: Weave mazes

Posted by Jamis on Friday, March 4

This is a “weave” maze:

weave maze

The name does not describe any specific algorithm for generating mazes, but rather simply identifies the class of maze, where some passages weave over or under other passages. It’s a type of 3D maze but it can be rendered nicely in 2D, as long as you conform to the following rules:

  • Passages may only move under or over each other when they are perpendicular to each other.
  • A passage may not terminate (dead-end) either under or over another passage.
  • Passages may not change direction either over or under another passage.

If you break any of those rules, you’ll wind up with hidden or ambiguous corridors when viewed as a 2D projection, and we don’t want that!

Let’s take a closer look at how the algorithms need to be adapted to support weave mazes.

Click here to read the rest of this article.

Posted in Under the Hood | 21 comments

28 Feb 2011

Weave Mazes: Your Take?

Posted by Jamis on Monday, February 28

Later this week I’m going to post an article about how you might implement “weave mazes”, like this one:

weave maze

This is a maze where some of the passages “weave” over or under other passages. It’s a type of 3D maze, but very constrained (mostly for esthetic reasons):

  • Passages may only move under or over each other when they are perpendicular to each other.
  • A passage may not terminate (dead-end) either under or over another passage.
  • Passages may not change direction either over or under another passage.

Before I post my implementation, though, I wanted to get you thinking: how would you implement a weave maze?

Posted in Under the Hood | 6 comments

22 Feb 2011

Programming Language Survey Results

Posted by Jamis on Tuesday, February 22

A couple of weeks ago (already!) I asked two simple questions of my Twitter followers: what are (up to) four programming languages that you know and love, and what are (up to) four programming languages that you would like to learn?

Around the time I asked the question, I had almost 4200 followers on Twitter. Of those 4200, 38 responded. That’s almost one percent, and it is most definitely not a large enough sample from which to draw meaningful conclusions! Even if all 4200 of my twitter followers responded, it would still have been a strongly biased survey: most people following me on twitter are (or were) Ruby/Rails programmers, or found me through some other Ruby/Rails programmer. Asking this group what their favorite languages are is bound to have some strong biases.

And, sure enough, the top ten “know-and-love” languages among the 38 respondents were:

  1. ruby (32)
  2. javascript (25)
  3. java (13)
  4. python (9)
  5. c (9)
  6. c# (7)
  7. php (6)
  8. c++ (5)
  9. perl (4)
  10. lisp (4)

Compare that to the top ten “want-to-learn” languages:

  1. erlang (22)
  2. objective-c (12)
  3. clojure (12)
  4. python (12)
  5. lisp (10)
  6. haskell (9)
  7. scala (9)
  8. lua (5)
  9. r (5)
  10. c (5)

Ruby didn’t even make that list! (It actually came in at #11.)

So, what I’m trying to say is, the data is strongly slanted toward the preferences of Rails programmers who enjoy Ruby as a programming language. Still, there are some interesting tidbits to find here. I am definitely not a statistician, but here are four points that emerged that I found enlightening:

Every respondent included at least one object-oriented language among their know-and-love’s. For most people, that was Ruby, but Java, Python, C++, Objective-C, and others were mentioned, too. 17 people mentioned two object-oriented languages, and 17 more mentioned three. Three people gave all four of their know-and-love languages as object-oriented languages. What do I take away from this? All we really know is that object-oriented languages are apparently widely taught/learned among my twitter followers. No surprise there!

Only 8 respondents indicated that they know-and-love a functional language. Six people gave only a single functional language among their four know-and-loves, one more gave two, and one more gave three. Is this because people don’t love functional languages? Definitely not. There is simply not enough data to draw any conclusions about why this number is smaller than OO languages, but it does make you wonder, doesn’t it? Are programmers with predominantly OO backgrounds intimidated by functional paradigms? As you’ll see in the next question, it’s definitely not a lack of interest!

The four functional languages that people gave as “know-and-love” are:

  1. lisp (4)
  2. haskell (3)
  3. erlang (3)
  4. clojure (1)

Now, compare that with the next point:

33 of the 38 respondents wanted to learn at least one functional language. That’s almost 87%, versus only 21% that already knew a functional language. There is some serious curiosity there; people know OO, but want to learn functional. And what functional languages are people curious about?

  1. erlang (22)
  2. clojure (12)
  3. lisp (10)
  4. haskell (9)
  5. scheme (2)
  6. ocaml (2)
  7. ml (1)
  8. f# (1)

Erlang, far and away, is the one most people are curious about. Interesting!

The last point: 32 respondents included at least one object-oriented language in their want-to-learn list. 12 people wanted to learn one OO language, 16 wanted to learn two, and four people wanted to learn three OO languages. Which languages?

  1. python (12)
  2. objective-c (12)
  3. scala (9)
  4. lua (5)
  5. ruby (4)
  6. c# (3)
  7. smalltalk (3)
  8. java (2)
  9. javascript (2)
  10. c++ (1)
  11. io (1)
  12. coffeescript (1)
  13. groovy (1)

Curiouser and curiouser! I love that a predominantly Ruby-centric group wants Python and Objective-C above all others. ObjC actually makes sense to me, even aside from the popularity of iOS as a platform. ObjC has many similarities to Ruby, and is a great language for Rubyists. Python, though…I love that it’s tied for #1. It shows a certain open-mindedness in the Ruby community, which gives me warm fuzzies.

Anyway, I’ll say it again: there really wasn’t a large enough sample set to draw meaningful conclusions here. But it’s still fun to look at seeming patterns in the data. Hopefully you’ve found something useful here, too!

Posted in Projects | 4 comments

19 Feb 2011

Kaleidoscope

Posted by Jamis on Saturday, February 19

It’s funny how knowledge leads to knowledge. You start digging deeper into one thing, and discover threads leading off into related fields. I began by researching maze algorithms, decided I wanted to see what mazes in more complex tessellations would look like, and after one thing or another found myself learning about Wythoff constructions.

The result is Kaleidoscope, a library for generating uniform tilings using Wythoff constructions.

gem install kaleidoscope

A uniform tiling is a tessellation of a plane using regular polygons (with a few other constraints that I won’t go into here). What this means is that, in essence, you give Kaleidoscope a few input parameters, and it hands you a bunch of regular polygons.

require 'kaleidoscope'

pattern = Kaleidoscope::Pattern.new(6, 3)

pattern.generate! do |point|
  point.x * point.x + point.y * point.y < 10
end

pattern.polygons.each do |polygon|
  # ...
end

The polygons are generated around the origin of the plane, within a region you specify via the block to the generate! method. Once done, you can translate and scale the polygons to wherever you like, for rendering purposes. Kaleidoscope will even include basic tricoloring data for the polygons, to make it easy to decorate your pattern.

Some obligatory pretty pictures:

Click here to read the rest of this article.

Posted in Announcements, Projects | 5 comments

09 Feb 2011

Mazes in CoffeeScript

Posted by Jamis on Wednesday, February 9

Several people have asked how I did the Javascript demos in my maze algorithm articles, and while I’ve answered them a couple of times in the comments, I thought it might be interesting enough to warrant its own post.

The demos are actually implemented in CoffeeScript, a really elegant little language that compiles to Javascript. CoffeeScript lets you ignore (most of) Javascript’s warts, while still enjoying all of Javascript’s strengths. I like.

So, the CoffeeScript sources for my maze demos are at https://github.com/jamis/csmazes.

I’ve tried to anticipate most questions about building, installation, and usage in the readme, but in a nutshell:

  1. You’ll need to install CoffeeScript if you want to do anything with the code.
  2. “cake build” will compile the sources to Javascript.
  3. “examples/index.html” has widgets for all the implemented algorithms for you to play with.
  4. “Maze.createWidget” is the quick-and-easy way to embed a maze somewhere.
  5. You can do it the hard way, too, if you need more control: instantiate a maze with the algorithm of your choice, then call Maze#step until the maze is generated (or Maze#generate to do it all at once).

Note that the implementations there are optimized for animating the algorithms; the source code is not a good place to learn how a typical maze implementation might look. Every algorithm is broken down so that it can be called piecewise, one step at a time. If you were going to implement any of these for any “serious” purpose, odds are you’d do it much more efficiently, and without all the ceremony that csMazes requires.

Still, if you just want to embed an animation of a maze algorithm on a web page, csMazes works quite well. Except for IE7. And probably other IE’s as well. (If you’re an IE guru, I’d appreciate patches, but please make sure your fixes don’t impact the animation performance on other browsers. I was able to make IE render the mazes okay, but then the animation performance on Chrome was abysmal.)

The code is in the public domain, so do with it what you will. If you do something fun with it, let me know!

Posted in Announcements, Projects | 1 comment

07 Feb 2011

Maze Generation: Algorithm Recap

Posted by Jamis on Monday, February 7

(Hello Minecrafters! If you’re looking for random mazes you can build in Minecraft, you might be better served by this page I wrote. It’ll give you block-wise schematics for the maze, and will require less mental translation than the demos here. Just don’t use IE—it won’t work right in that browser. If you want to learn more about random maze generation, though, read on!)

Over the last six weeks I documented eleven different maze generation algorithms. It’s been a blast. I’ve had so much fun researching and implementing these! It’s been great to see some of you taking my advice and applying these to learning a new programming language, as well; I really believe there is a lot of value to be had, there.

I intend to write more maze-related posts, too. Some possible topics include methods for rendering your mazes, ways to implement “weave” mazes (mazes where passages pass over and under other passages), non-rectangular tessellations, and so on. We’ll see where it all goes.

To wrap up the algorithm series, here’s a quick recap of each of the eleven algorithms:

Click here to read the rest of this article.

Posted in Under the Hood | 18 comments

03 Feb 2011

Maze Generation: Sidewinder algorithm

Posted by Jamis on Thursday, February 3

Last of the (eleven!) maze algorithms on my list is the Sidewinder algorithm. With a name like that, it’s got to be cool, right?

Well, possibly. It’s got its problems, but it’s quick and easy to implement, and allows arbitrarily tall mazes. It’s closely related to the Binary Tree algorithm, but manages to get away with only one side being spanned by a passage, instead of two.

In a nutshell, it goes like this:

  1. Work through the grid row-wise, starting with the cell at 0,0. Initialize the “run” set to be empty.
  2. Add the current cell to the “run” set.
  3. For the current cell, randomly decide whether to carve east or not.
  4. If a passage was carved, make the new cell the current cell and repeat steps 2-4.
  5. If a passage was not carved, choose any one of the cells in the run set and carve a passage north. Then empty the run set, set the next cell in the row to be the current cell, and repeat steps 2-5.
  6. Continue until all rows have been processed.

So, while it’s related to the Binary Tree algorithm, it’s a bit more complicated. However, words don’t do it justice; it really is a lot more straightforward than it sounds.
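To illustrate, here’s a rough Ruby sketch of those steps (the grid representation and names are mine; the article doesn’t prescribe an implementation):

```ruby
# Sidewinder sketch: grid[y][x] collects the directions carved from that cell.
N, E = :north, :east

def sidewinder(width, height, rng = Random.new)
  grid = Array.new(height) { Array.new(width) { [] } }

  height.times do |y|
    run = []                             # x-coordinates of the current run
    width.times do |x|
      run << x
      at_east_edge  = (x == width - 1)
      at_north_edge = (y == 0)

      # Decide whether to carve east; the top row can only carve east.
      carve_east = !at_east_edge && (at_north_edge || rng.rand(2) == 0)

      if carve_east
        grid[y][x] << E
      elsif !at_north_edge
        # Close the run: carve north from a random member, then start fresh.
        grid[y][run.sample(random: rng)] << N
        run.clear
      end
    end
  end
  grid
end
```

Note how the top row degenerates to a single corridor: it can never carve north, so every cell but the last carves east.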

Let’s walk through an example.

Click here to read the rest of this article.

Posted in Under the Hood | 5 comments

01 Feb 2011

Maze Generation: Binary Tree algorithm

Posted by Jamis on Tuesday, February 1

This next algorithm is crazy, crazy simple. It also is the only one of the algorithms I’ve covered with the ability to generate a perfect maze without keeping any state at all. It can build the entire maze by looking at only a single cell at a time.

Here’s how it works: for every cell in the grid, randomly carve a passage either north, or west.

That’s it. You can mix it up if you want, and choose between other diagonal sets: north/east, south/west, or south/east, but whichever diagonal set you select must be used consistently throughout the entire maze.
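Here’s roughly what that rule looks like in Ruby, using the north/west diagonal set (a sketch of my own, not code from the article):

```ruby
# Binary Tree sketch: each cell carves north or west at random.
N, W = :north, :west

def binary_tree(width, height, rng = Random.new)
  grid = Array.new(height) { Array.new(width) { [] } }
  height.times do |y|
    width.times do |x|
      dirs = []
      dirs << N if y > 0   # can carve north unless on the top row
      dirs << W if x > 0   # can carve west unless on the west column
      grid[y][x] << dirs.sample(random: rng) unless dirs.empty?
    end
  end
  grid
end
```

The north-west corner cell carves nothing, which is exactly what makes the result a tree: every other cell contributes exactly one passage.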

There’s another nifty attribute of this algorithm: if you can guarantee that for any given cell, you will always carve a particular direction, then you never need to keep any of the maze in memory. When you need to display some portion of the maze, you just “rebuild” it, trivially. This means you could create infinitely large mazes in very little memory.

It’s not all roses, though. A side-effect of this algorithm is that it has a strong diagonal bias. Also, two of the four sides of the maze will be spanned by a single corridor. But for some applications, that might be acceptable.

It’s almost too simple to bother, but let’s take a quick look at the algorithm in practice.

Click here to read the rest of this article.

Posted in Under the Hood | 2 comments

27 Jan 2011

Maze Generation: Growing Tree algorithm

Posted by Jamis on Thursday, January 27

Remember way back in the first article of this series, where I said that Recursive Backtracking was my favorite for generating mazes? Well, I changed my mind. My new favorite is the Growing Tree algorithm.

This algorithm is pretty cool. Configured one way, it mimics the behavior of the Recursive Backtracking algorithm. Configured another, it works almost exactly like Prim’s algorithm. Another trivial change, and you can generate mazes with attributes of both.

It’s really pretty slick. Here’s how it works:

  1. Let C be a list of cells, initially empty. Add one cell to C, at random.
  2. Choose a cell from C, and carve a passage to any unvisited neighbor of that cell, adding that neighbor to C as well. If there are no unvisited neighbors, remove the cell from C.
  3. Repeat #2 until C is empty.

Pretty straight-forward, really. But the fun lies in how you choose the cells from C, in step #2. If you always choose the newest cell (the one most recently added), you’ll get the recursive backtracker. If you always choose a cell at random, you get Prim’s. It’s remarkably fun to experiment with other ways to choose cells from C.
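As a sketch (my own names; the article is language-agnostic), with the selection strategy passed in as a block:

```ruby
# Growing Tree sketch: `passages` maps each cell to its connected neighbors.
def growing_tree(width, height, rng = Random.new, &choose)
  choose ||= ->(cells) { cells.last }  # default: newest cell (backtracker-like)
  passages = Hash.new { |h, k| h[k] = [] }
  visited  = {}
  start = [rng.rand(width), rng.rand(height)]
  visited[start] = true
  active = [start]

  until active.empty?
    cell = choose.call(active)
    x, y = cell
    unvisited = [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]
      .select { |nx, ny| nx.between?(0, width - 1) && ny.between?(0, height - 1) }
      .reject { |n| visited[n] }

    if unvisited.empty?
      active.delete(cell)          # no unvisited neighbors: retire the cell
    else
      nxt = unvisited.sample(random: rng)
      passages[cell] << nxt        # carve between cell and neighbor
      passages[nxt] << cell
      visited[nxt] = true
      active << nxt
    end
  end
  passages
end
```

Calling it as `growing_tree(10, 10) { |cells| cells.last }` mimics the recursive backtracker, while `{ |cells| cells.sample }` behaves like Prim’s.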

Let’s look at a simple example.

Click here to read the rest of this article.

Posted in Under the Hood | 22 comments

24 Jan 2011

Maze Generation: Hunt-and-Kill algorithm

Posted by Jamis on Monday, January 24

(Note: if you’re reading this in a feed reader, you’re going to be missing out on the illustrations and demonstrations.)

Alright, so the theme last week was “algorithms for generating uniform spanning trees.” If this week has a theme, it might be something like “algorithms that sometimes act like the recursive backtracker, but only kind of.”

Today’s algorithm is the “hunt-and-kill algorithm”. Sounds violent, doesn’t it? It’s actually quite tame. In a nutshell, it works like this:

  1. Choose a starting location.
  2. Perform a random walk, carving passages to unvisited neighbors, until the current cell has no unvisited neighbors.
  3. Enter “hunt” mode, where you scan the grid looking for an unvisited cell that is adjacent to a visited cell. If found, carve a passage between the two and let the formerly unvisited cell be the new starting location.
  4. Repeat steps 2 and 3 until the hunt mode scans the entire grid and finds no unvisited cells.
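Those four steps translate to Ruby fairly directly. A rough sketch (variable names are mine, not from the article):

```ruby
# Hunt-and-Kill sketch: random walk until stuck, then scan ("hunt") for a
# fresh starting point adjacent to the part of the maze carved so far.
def hunt_and_kill(width, height, rng = Random.new)
  passages = Hash.new { |h, k| h[k] = [] }
  visited  = {}
  neighbors = lambda do |(x, y)|
    [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]
      .select { |nx, ny| nx.between?(0, width - 1) && ny.between?(0, height - 1) }
  end
  carve = lambda do |a, b|
    passages[a] << b
    passages[b] << a
    visited[b] = true
  end

  current = [rng.rand(width), rng.rand(height)]
  visited[current] = true

  loop do
    # Random walk: carve into unvisited neighbors until stuck.
    while (unvisited = neighbors[current].reject { |n| visited[n] }).any?
      nxt = unvisited.sample(random: rng)
      carve[current, nxt]
      current = nxt
    end

    # Hunt: scan for an unvisited cell adjacent to a visited one.
    hunted = nil
    (0...height).each do |y|
      (0...width).each do |x|
        cell = [x, y]
        next if visited[cell]
        carved_neighbors = neighbors[cell].select { |n| visited[n] }
        if carved_neighbors.any?
          carve[carved_neighbors.sample(random: rng), cell]
          hunted = cell
          break
        end
      end
      break if hunted
    end
    break unless hunted  # the hunt found nothing: every cell is visited
    current = hunted
  end
  passages
end
```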

Let’s walk through an example.

Click here to read the rest of this article.

Posted in Under the Hood | 12 comments

20 Jan 2011

Maze Generation: Wilson's algorithm

Posted by Jamis on Thursday, January 20

The last algorithm I mentioned, Aldous-Broder, was (if you’ll recall) originally devised as a way to generate uniform spanning trees (UST). (Review: a spanning tree is a tree that connects all the vertices of a graph. A uniform spanning tree is any one of the possible spanning trees of a graph, selected randomly and with equal probability.)

Aldous-Broder was effective, but inefficient. For this (the seventh!) article in my series on maze algorithms, I’ll focus on Wilson’s algorithm for selecting uniform spanning trees.

The algorithm goes something like this:

  1. Choose any vertex at random and add it to the UST.
  2. Select any vertex that is not already in the UST and perform a random walk until you encounter a vertex that is in the UST.
  3. Add the vertices and edges touched in the random walk to the UST.
  4. Repeat 2 and 3 until all vertices have been added to the UST.
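One detail the summary glosses over: the walk is loop-erased before being added to the UST. A common trick is to record only the most recent exit taken from each cell during the walk, which erases loops implicitly. Here’s a rough Ruby sketch along those lines (names are mine):

```ruby
# Wilson's sketch: loop-erased random walks attach disjoint paths to the UST.
def wilsons(width, height, rng = Random.new)
  passages = Hash.new { |h, k| h[k] = [] }
  all = (0...width).to_a.product((0...height).to_a)
  in_maze = { all.sample(random: rng) => true }  # seed the UST with one cell

  neighbors = lambda do |(x, y)|
    [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]
      .select { |nx, ny| nx.between?(0, width - 1) && ny.between?(0, height - 1) }
  end

  while (start = all.find { |c| !in_maze[c] })
    # Random walk from `start`, remembering only the latest exit per cell;
    # overwriting earlier exits is what erases the loops.
    exits = {}
    cell = start
    until in_maze[cell]
      nxt = neighbors[cell].sample(random: rng)
      exits[cell] = nxt
      cell = nxt
    end

    # Retrace the loop-erased path and add it to the maze.
    cell = start
    until in_maze[cell]
      nxt = exits[cell]
      passages[cell] << nxt
      passages[nxt] << cell
      in_maze[cell] = true
      cell = nxt
    end
  end
  passages
end
```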

So, it’s still doing the random walk, but I think you’ll see that this algorithm converges much more rapidly than Aldous-Broder. It’s still annoying to watch an animation of the early stages of the process, but not nearly as bad as the other.

As before, here’s a simple example of the algorithm in action.

Click here to read the rest of this article.

Posted in Under the Hood | 13 comments

17 Jan 2011

Maze Generation: Aldous-Broder algorithm

Posted by Jamis on Monday, January 17

The next maze algorithm I’m going to cover, the Aldous-Broder algorithm, is one of the simplest imaginable. It’s also one of the least efficient. In fact, it is so inefficient that you might wonder why anyone would even bother implementing it, let alone discovering it and having their name attached to it.

D. Aldous and A. Broder were two researchers, working independently, who were investigating uniform spanning trees. A spanning tree, as you may or may not already know, is any tree that connects all the vertices in a graph. A uniform spanning tree is one chosen randomly (and with equal probability) among all the possible spanning trees of a given graph.

Riveting, isn’t it?

There are actually applications for this. Outside of recreational maze generation, I mean. I don’t know what they are myself, but Wikipedia informs me that they are in probability and mathematical physics. (If you know more specifically how uniform spanning trees are being applied in research, please share in a comment!)

Anyway: Aldous and Broder were researching these uniform spanning trees, and independently arrived at the following algorithm:

  1. Choose a vertex. Any vertex.
  2. Choose a connected neighbor of the vertex and travel to it. If the neighbor has not yet been visited, add the traveled edge to the spanning tree.
  3. Repeat step 2 until all vertices have been visited.

Remember: this algorithm is notable in that it selects from all possible spanning trees (i.e. mazes) of a given graph (i.e. field) with equal probability. The other algorithms I’ve covered don’t have this property.
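In code, the whole thing fits in a dozen lines or so. A minimal Ruby sketch (my own names; the algorithm itself is language-agnostic):

```ruby
# Aldous-Broder sketch: wander at random, carving only when entering
# a cell for the first time, until every cell has been visited.
def aldous_broder(width, height, rng = Random.new)
  passages = Hash.new { |h, k| h[k] = [] }
  visited  = {}
  cell = [rng.rand(width), rng.rand(height)]
  visited[cell] = true
  remaining = width * height - 1

  while remaining > 0
    x, y = cell
    nxt = [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]
      .select { |nx, ny| nx.between?(0, width - 1) && ny.between?(0, height - 1) }
      .sample(random: rng)

    unless visited[nxt]
      passages[cell] << nxt    # only carve when entering an unvisited cell
      passages[nxt] << cell
      visited[nxt] = true
      remaining -= 1
    end
    cell = nxt
  end
  passages
end
```

The inefficiency is plain to see: once most cells are visited, the walk blunders through already-visited territory for a long time before stumbling onto the last few unvisited cells.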

Let’s walk through a trivial example. If you haven’t spotted yet why the algorithm is so inefficient, hopefully the example will demonstrate it.

Click here to read the rest of this article.

Posted in Under the Hood | 1 comment