17 Mar 2011

Maze Generation: More weave mazes

Posted by Jamis on Thursday, March 17

My previous post showed one way to create “weave mazes”, or mazes in which the passages weave over and under one another. The technique I showed was flexible, and could be implemented using several different maze algorithms, but it has (at least) one big drawback: it’s not predictable. You don’t know how many over/under crossings you’ll get until the maze has been generated, and for small mazes (5×5, for instance) you may very often get no crossings at all.

In the comment thread for that post, Robin Houston described an alternate algorithm. Instead of building the crossings as you generate the maze, you do it as a preprocessing step. You scatter crossings (either at random, or deliberately) across the maze, and then generate the maze so that they connect correctly.

Sounds like it could be tricky, yeah? You’ve got all these independent connections, like little graphs floating around, all disjoint… If only there were a way to generate a maze by connecting disjoint trees into a single graph…

What’s that? Oh, right! Kruskal’s algorithm does just that! Let’s take a look at how easy Kruskal’s makes this.

Click here to read the rest of this article.

Posted in Under the Hood | 18 comments

04 Mar 2011

Maze Generation: Weave mazes

Posted by Jamis on Friday, March 4

This is a “weave” maze:

[image: a weave maze]

The name does not describe any specific algorithm for generating mazes; rather, it identifies a class of maze in which some passages weave over or under other passages. It’s a type of 3D maze, but it can be rendered nicely in 2D, as long as you conform to the following rules:

  • Passages may only move under or over each other when they are perpendicular to each other.
  • A passage may not terminate (dead-end) either under or over another passage.
  • Passages may not change direction either over or under another passage.

If you break any of those rules, you’ll wind up with hidden or ambiguous corridors when viewed as a 2D projection, and we don’t want that!

Let’s take a closer look at how the algorithms need to be adapted to support weave mazes.

Click here to read the rest of this article.

Posted in Under the Hood | 21 comments

28 Feb 2011

Weave Mazes: Your Take?

Posted by Jamis on Monday, February 28

Later this week I’m going to post an article about how you might implement “weave mazes”, like this one:

[image: a weave maze]

This is a maze where some of the passages “weave” over or under other passages. It’s a type of 3D maze, but very constrained (mostly for aesthetic reasons):

  • Passages may only move under or over each other when they are perpendicular to each other.
  • A passage may not terminate (dead-end) either under or over another passage.
  • Passages may not change direction either over or under another passage.

Before I post my implementation, though, I wanted to get you thinking: how would you implement a weave maze?

Posted in Under the Hood | 6 comments

07 Feb 2011

Maze Generation: Algorithm Recap

Posted by Jamis on Monday, February 7

(Hello Minecrafters! If you’re looking for random mazes you can build in Minecraft, you might be better served by this page I wrote. It’ll give you block-wise schematics for the maze, and will require less mental translation than the demos here. Just don’t use IE—it won’t work right in that browser. If you want to learn more about random maze generation, though, read on!)

Over the last six weeks I documented eleven different maze generation algorithms. It’s been a blast. I’ve had so much fun researching and implementing these! It’s been great to see some of you taking my advice and applying these to learning a new programming language, as well; I really believe there is a lot of value to be had, there.

I intend to write more maze-related posts, too. Some possible topics include methods for rendering your mazes, ways to implement “weave” mazes (mazes where passages pass over and under other passages), non-rectangular tessellations, and so on. We’ll see where it all goes.

To wrap up the algorithm series, here’s a quick recap of each of the eleven algorithms:

Click here to read the rest of this article.

Posted in Under the Hood | 18 comments

03 Feb 2011

Maze Generation: Sidewinder algorithm

Posted by Jamis on Thursday, February 3

Last of the (eleven!) maze algorithms on my list is the Sidewinder algorithm. With a name like that, it’s got to be cool, right?

Well, possibly. It’s got its problems, but it’s quick and easy to implement, and allows arbitrarily tall mazes. It’s closely related to the Binary Tree algorithm, but manages to get away with only one side being spanned by a passage, instead of two.

In a nutshell, it goes like this:

  1. Work through the grid row-wise, starting with the cell at 0,0. Initialize the “run” set to be empty.
  2. Add the current cell to the “run” set.
  3. For the current cell, randomly decide whether to carve east or not.
  4. If a passage was carved, make the new cell the current cell and repeat steps 2-4.
  5. If a passage was not carved, choose any one of the cells in the run set and carve a passage north. Then empty the run set, set the next cell in the row to be the current cell, and repeat steps 2-5.
  6. Continue until all rows have been processed.

So, while it’s related to the Binary Tree algorithm, it’s a bit more complicated. Don’t let the description put you off, though; it really is a lot more straightforward than it sounds.
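
To make the steps concrete, here’s a rough Python sketch of the algorithm (my own code, not from the article). It records each carved passage as an unordered pair of (row, col) cells, with row 0 as the northern row.

    import random

    def sidewinder(width, height):
        """Sidewinder maze generation on a width x height grid.

        Returns a set of frozensets, each holding the two (row, col) cells
        joined by a carved passage.  Row 0 is the northern (top) row.
        """
        passages = set()
        for row in range(height):
            run = []
            for col in range(width):
                run.append((row, col))
                at_east_edge = col == width - 1
                # The top row can never carve north, so it must keep running
                # east; everywhere else we flip a coin.
                carve_east = not at_east_edge and (row == 0 or random.random() < 0.5)
                if carve_east:
                    passages.add(frozenset({(row, col), (row, col + 1)}))
                else:
                    # Close out the run: pick one of its cells and carve north
                    # (skipped in the top row, which has nothing above it).
                    if row > 0:
                        r, c = random.choice(run)
                        passages.add(frozenset({(r, c), (r - 1, c)}))
                    run = []
        return passages

    if __name__ == "__main__":
        print(len(sidewinder(8, 8)), "passages")  # a perfect 8x8 maze has 63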

Let’s walk through an example.

Click here to read the rest of this article.

Posted in Under the Hood | 5 comments

01 Feb 2011

Maze Generation: Binary Tree algorithm

Posted by Jamis on Tuesday, February 1

This next algorithm is crazy, crazy simple. It also is the only one of the algorithms I’ve covered with the ability to generate a perfect maze without keeping any state at all. It can build the entire maze by looking at only a single cell at a time.

Here’s how it works: for every cell in the grid, randomly carve a passage either north, or west.

That’s it. You can mix it up if you want, and choose between other diagonal sets: north/east, south/west, or south/east, but whichever diagonal set you select must be used consistently throughout the entire maze.
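
Here’s roughly what the north/west variant might look like in Python. This is a minimal sketch of my own (not the article’s code), storing passages as unordered pairs of (row, col) cells.

    import random

    def binary_tree(width, height):
        """Binary Tree maze: every cell carves north or west (when it can)."""
        passages = set()
        for row in range(height):
            for col in range(width):
                neighbors = []
                if row > 0:      # cells in the top row cannot carve north
                    neighbors.append((row - 1, col))
                if col > 0:      # cells in the west column cannot carve west
                    neighbors.append((row, col - 1))
                if neighbors:    # the corner cell at (0, 0) carves nothing
                    passages.add(frozenset({(row, col), random.choice(neighbors)}))
        return passages

    if __name__ == "__main__":
        print(len(binary_tree(8, 8)), "passages")  # 63 for a perfect 8x8 maze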

There’s another nifty attribute of this algorithm: if you can guarantee that for any given cell, you will always carve a particular direction, then you never need to keep any of the maze in memory. When you need to display some portion of the maze, you just “rebuild” it, trivially. This means you could create infinitely large mazes in very little memory.

It’s not all roses, though. A side-effect of this algorithm is that it has a strong diagonal bias. Also, two of the four sides of the maze will be spanned by a single corridor. But for some applications, that might be acceptable.

It’s almost too simple to bother with an example, but let’s take a quick look at the algorithm in practice.

Click here to read the rest of this article.

Posted in Under the Hood | 2 comments

27 Jan 2011

Maze Generation: Growing Tree algorithm

Posted by Jamis on Thursday, January 27

Remember way back in the first article of this series, where I said that Recursive Backtracking was my favorite for generating mazes? Well, I changed my mind. My new favorite is the Growing Tree algorithm.

This algorithm is pretty cool. Configured one way, it mimics the behavior of the Recursive Backtracking algorithm. Configured another, it works almost exactly like Prim’s algorithm. Another trivial change, and you can generate mazes with attributes of both.

It’s really pretty slick. Here’s how it works:

  1. Let C be a list of cells, initially empty. Add one cell to C, at random.
  2. Choose a cell from C, and carve a passage to any unvisited neighbor of that cell, adding that neighbor to C as well. If there are no unvisited neighbors, remove the cell from C.
  3. Repeat #2 until C is empty.

Pretty straightforward, really. But the fun lies in how you choose the cells from C, in step #2. If you always choose the newest cell (the one most recently added), you’ll get the recursive backtracker. If you always choose a cell at random, you get Prim’s. It’s remarkably fun to experiment with other ways to choose cells from C.
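
As an illustration, here’s one possible Python sketch of the idea (mine, not the article’s), with the cell-selection strategy passed in as a choose function; the parameter name and representation are my own.

    import random

    def growing_tree(width, height, choose=lambda cells: cells[-1]):
        """Growing Tree maze.  choose picks which cell in C to work from:
        the default (newest cell) acts like the recursive backtracker, while
        passing random.choice makes it act like Prim's."""
        def neighbors(cell):
            r, c = cell
            return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < height and 0 <= nc < width]

        start = (random.randrange(height), random.randrange(width))
        in_maze = {start}
        cells = [start]                       # this is C
        passages = set()

        while cells:
            cell = choose(cells)
            unvisited = [n for n in neighbors(cell) if n not in in_maze]
            if unvisited:
                nxt = random.choice(unvisited)
                passages.add(frozenset({cell, nxt}))
                in_maze.add(nxt)
                cells.append(nxt)
            else:
                cells.remove(cell)            # no unvisited neighbors: retire it
        return passages

    if __name__ == "__main__":
        backtracker_like = growing_tree(8, 8)                  # newest cell
        prim_like = growing_tree(8, 8, choose=random.choice)   # random cell
        print(len(backtracker_like), len(prim_like))           # both print 63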

Let’s look at a simple example.

Click here to read the rest of this article.

Posted in Under the Hood | 22 comments

24 Jan 2011

Maze Generation: Hunt-and-Kill algorithm

Posted by Jamis on Monday, January 24

(Note: if you’re reading this in a feed reader, you’re going to be missing out on the illustrations and demonstrations.)

Alright, so the theme last week was “algorithms for generating uniform spanning trees.” If this week has a theme, it might be something like “algorithms that sometimes act like the recursive backtracker, but only kind of.”

Today’s algorithm is the “hunt-and-kill algorithm”. Sounds violent, doesn’t it? It’s actually quite tame. In a nutshell, it works like this:

  1. Choose a starting location.
  2. Perform a random walk, carving passages to unvisited neighbors, until the current cell has no unvisited neighbors.
  3. Enter “hunt” mode, where you scan the grid looking for an unvisited cell that is adjacent to a visited cell. If found, carve a passage between the two and let the formerly unvisited cell be the new starting location.
  4. Repeat steps 2 and 3 until the hunt mode scans the entire grid and finds no unvisited cells.
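
A rough Python sketch of those four steps might look like this (my own code, not the article’s; passages are stored as unordered pairs of (row, col) cells):

    import random

    def hunt_and_kill(width, height):
        """Hunt-and-kill maze generation on a width x height grid."""
        def neighbors(cell):
            r, c = cell
            return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < height and 0 <= nc < width]

        current = (random.randrange(height), random.randrange(width))
        visited = {current}
        passages = set()

        while current is not None:
            # Random-walk phase: carve into unvisited neighbors until the
            # current cell has none left.
            unvisited = [n for n in neighbors(current) if n not in visited]
            if unvisited:
                nxt = random.choice(unvisited)
                passages.add(frozenset({current, nxt}))
                visited.add(nxt)
                current = nxt
                continue
            # Hunt phase: scan for an unvisited cell adjacent to a visited
            # one, carve between them, and resume the walk from there.
            current = None
            for r in range(height):
                for c in range(width):
                    if (r, c) in visited:
                        continue
                    visited_neighbors = [n for n in neighbors((r, c)) if n in visited]
                    if visited_neighbors:
                        passages.add(frozenset({(r, c), random.choice(visited_neighbors)}))
                        visited.add((r, c))
                        current = (r, c)
                        break
                if current is not None:
                    break
        return passages

    if __name__ == "__main__":
        print(len(hunt_and_kill(8, 8)), "passages")  # 63 for a perfect 8x8 maze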

Let’s walk through an example.

Click here to read the rest of this article.

Posted in Under the Hood | 12 comments

20 Jan 2011

Maze Generation: Wilson's algorithm

Posted by Jamis on Thursday, January 20

The last algorithm I mentioned, Aldous-Broder, was (if you’ll recall) originally devised as a way to generate uniform spanning trees (UST). (Review: a spanning tree is a tree that connects all the vertices of a graph. A uniform spanning tree is any one of the possible spanning trees of a graph, selected randomly and with equal probability.)

Aldous-Broder was effective, but inefficient. For this (the seventh!) article in my series on maze algorithms, I’ll focus on Wilson’s algorithm for selecting uniform spanning trees.

The algorithm goes something like this:

  1. Choose any vertex at random and add it to the UST.
  2. Select any vertex that is not already in the UST and perform a random walk until you encounter a vertex that is in the UST.
  3. Add the vertices and edges touched in the random walk to the UST.
  4. Repeat 2 and 3 until all vertices have been added to the UST.

So, it’s still doing the random walk, but I think you’ll see that this algorithm converges much more rapidly than Aldous-Broder. It’s still annoying to watch an animation of the early stages of the process, but not nearly as bad as the other.
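
For the curious, here’s one possible Python sketch (mine, not the article’s). It uses the common loop-erasure trick of remembering only the last exit taken from each cell during the walk:

    import random

    def wilsons(width, height):
        """Wilson's algorithm: grow a uniform spanning tree with loop-erased
        random walks."""
        def neighbors(cell):
            r, c = cell
            return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < height and 0 <= nc < width]

        cells = [(r, c) for r in range(height) for c in range(width)]
        in_tree = {random.choice(cells)}          # step 1: seed the UST
        passages = set()

        for cell in cells:
            if cell in in_tree:
                continue
            # Random walk from this cell, remembering only the last exit taken
            # from each cell we pass through; that is what erases the loops.
            walk = {}
            current = cell
            while current not in in_tree:
                nxt = random.choice(neighbors(current))
                walk[current] = nxt
                current = nxt
            # Retrace the loop-erased path and add it to the tree.
            current = cell
            while current not in in_tree:
                nxt = walk[current]
                passages.add(frozenset({current, nxt}))
                in_tree.add(current)
                current = nxt
        return passages

    if __name__ == "__main__":
        print(len(wilsons(8, 8)), "passages")  # 63 for a perfect 8x8 maze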

As before, here’s a simple example of the algorithm in action.

Click here to read the rest of this article.

Posted in Under the Hood | 13 comments

17 Jan 2011

Maze Generation: Aldous-Broder algorithm

Posted by Jamis on Monday, January 17

The next maze algorithm I’m going to cover, the Aldous-Broder algorithm, is one of the simplest imaginable. It’s also one of the least efficient. In fact, it is so inefficient that you might wonder why anyone would even bother implementing it, let alone discovering it and having their name attached to it.

D. Aldous and A. Broder were two researchers, working independently, who were investigating uniform spanning trees. A spanning tree, as you may or may not already know, is any tree that connects all the vertexes in a graph. A uniform spanning tree is one chosen randomly (and with equal probability) among all the possible spanning trees of a given graph.

Riveting, isn’t it?

There are actually applications for this. Outside of recreational maze generation, I mean. I don’t know what they are myself, but Wikipedia informs me that they are in probability and mathematical physics. (If you know more specifically how uniform spanning trees are being applied in research, please share in a comment!)

Anyway: Aldous and Broder were researching these uniform spanning trees, and independently arrived at the following algorithm:

  1. Choose a vertex. Any vertex.
  2. Choose a connected neighbor of the vertex and travel to it. If the neighbor has not yet been visited, add the traveled edge to the spanning tree.
  3. Repeat step 2 until all vertexes have been visited.

Remember: this algorithm is notable in that it selects from all possible spanning trees (i.e. mazes) of a given graph (i.e. field) with equal probability. The other algorithms I’ve covered don’t have this property.
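
As a rough illustration (my own sketch, not the article’s code), the whole algorithm fits in a handful of lines of Python:

    import random

    def aldous_broder(width, height):
        """Aldous-Broder: random-walk the grid, keeping only first-entry edges."""
        def neighbors(cell):
            r, c = cell
            return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < height and 0 <= nc < width]

        current = (random.randrange(height), random.randrange(width))
        visited = {current}
        passages = set()
        remaining = width * height - 1

        while remaining:
            nxt = random.choice(neighbors(current))
            if nxt not in visited:
                # Only the edge used to reach a cell for the first time
                # becomes part of the spanning tree.
                passages.add(frozenset({current, nxt}))
                visited.add(nxt)
                remaining -= 1
            current = nxt            # keep walking whether or not we carved
        return passages

    if __name__ == "__main__":
        print(len(aldous_broder(6, 6)), "passages")  # 35 for a perfect 6x6 maze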

Let’s walk through a trivial example. If you haven’t yet spotted why the algorithm is so inefficient, hopefully the example will demonstrate it.

Click here to read the rest of this article.

Posted in Under the Hood | 1 comment

12 Jan 2011

Maze Generation: Recursive Division

Posted by Jamis on Wednesday, January 12

All of the maze algorithms I’ve covered so far (recursive backtracking, Eller’s, Kruskal’s, and Prim’s) were implemented as “passage carvers”: they started with a solid grid and hollowed the passages out of it. Some algorithms can be implemented a different way (and indeed, some must be implemented this way): as “wall adders”, where the process begins with a large empty space, and adds walls until a maze results.

The “recursive division” algorithm is one that must be implemented as a wall adder. This algorithm is particularly fascinating because of its fractal nature: you could theoretically continue the process indefinitely at progressively finer and finer levels of detail.

It works like this:

  1. Begin with an empty field.
  2. Bisect the field with a wall, either horizontally or vertically. Add a single passage through the wall.
  3. Repeat step #2 with the areas on either side of the wall.
  4. Continue, recursively, until the maze reaches the desired resolution.

You’ll find that at every step, you still have a valid maze. Each repetition of the algorithm simply increases the level of detail. Pretty cool!
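
Here’s one possible Python sketch of the wall-adding recursion (my own, not the article’s). Since this algorithm adds walls rather than carving passages, it records the walls between adjacent cells:

    import random

    def recursive_division(width, height):
        """Recursive division: start with an open field and add walls.

        Returns a set of frozensets of adjacent (row, col) cells with a wall
        between them; any adjacent pair not in the set is an open passage.
        """
        walls = set()

        def divide(r, c, h, w):
            if h <= 1 or w <= 1:
                return                       # a single corridor: nothing to divide
            # Prefer to cut across the longer dimension.
            if w < h:
                horizontal = True
            elif h < w:
                horizontal = False
            else:
                horizontal = random.random() < 0.5

            if horizontal:
                cut = random.randint(1, h - 1)   # wall below row r + cut - 1
                gap = random.randrange(w)        # one passage through the wall
                for col in range(w):
                    if col != gap:
                        walls.add(frozenset({(r + cut - 1, c + col), (r + cut, c + col)}))
                divide(r, c, cut, w)
                divide(r + cut, c, h - cut, w)
            else:
                cut = random.randint(1, w - 1)   # wall right of column c + cut - 1
                gap = random.randrange(h)
                for row in range(h):
                    if row != gap:
                        walls.add(frozenset({(r + row, c + cut - 1), (r + row, c + cut)}))
                divide(r, c, h, cut)
                divide(r, c + cut, h, w - cut)

        divide(0, 0, height, width)
        return walls

    if __name__ == "__main__":
        # An 8x8 grid has 2 * 8 * 7 = 112 interior adjacencies; a perfect maze
        # keeps 63 of them open, so 112 - 63 = 49 end up walled off.
        print(len(recursive_division(8, 8)), "interior walls")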

Let’s walk through an example to get an idea of how this works in practice.

Click here to read the rest of this article.

Posted in Under the Hood | 15 comments

10 Jan 2011

Maze Generation: Prim's Algorithm

Posted by Jamis on Monday, January 10

My last post was about using Kruskal’s algorithm to generate random mazes. This article is about using another minimal spanning tree algorithm to do the same: Prim’s algorithm.

If you recall, the random variant of Kruskal’s algorithm worked by randomly selecting edges from the graph, and adding them to the maze if they connected disjoint trees. Visually, this had the effect of growing the maze from many random points across the grid.

Prim’s approaches the problem from a different angle. Rather than working edgewise across the entire graph, it starts at one point, and grows outward from that point. The standard version of the algorithm works something like this:

  1. Choose an arbitrary vertex from G (the graph), and add it to some (initially empty) set V.
  2. Choose the edge with the smallest weight from G, that connects a vertex in V with another vertex not in V.
  3. Add that edge to the minimal spanning tree, and the edge’s other vertex to V.
  4. Repeat steps 2 and 3 until V includes every vertex in G.

And the result is a minimal spanning tree of G. Straightforward enough!

With one little change, it becomes a suitable method for generating mazes: just change step 2 so that instead of selecting the edge with the smallest weight, you select an edge at random, as long as it bridges the so-called “frontier” of the maze (the set of edges that move from within the maze, to a cell that is outside the maze).
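
A minimal Python sketch of that randomized variant might look like this (my code, not the article’s); the frontier is kept as a list of (inside, outside) cell pairs:

    import random

    def prims(width, height):
        """Randomized Prim's: grow the maze outward from a single cell by
        repeatedly carving a random frontier edge."""
        def neighbors(cell):
            r, c = cell
            return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < height and 0 <= nc < width]

        start = (random.randrange(height), random.randrange(width))
        in_maze = {start}
        frontier = [(start, n) for n in neighbors(start)]
        passages = set()

        while frontier:
            inside, outside = frontier.pop(random.randrange(len(frontier)))
            if outside in in_maze:
                continue                     # both ends are in the maze now; skip
            passages.add(frozenset({inside, outside}))
            in_maze.add(outside)
            frontier.extend((outside, n) for n in neighbors(outside) if n not in in_maze)
        return passages

    if __name__ == "__main__":
        print(len(prims(8, 8)), "passages")  # 63 for a perfect 8x8 maze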

As before, let’s walk through an example and see how it works in practice.

Click here to read the rest of this article.

Posted in Under the Hood | 3 comments

03 Jan 2011

Maze Generation: Kruskal's Algorithm

Posted by Jamis on Monday, January 3

For the third article in my series on maze algorithms, I’m going to take a look at Kruskal’s algorithm. (I’ve previously covered recursive backtracking and Eller’s algorithm.)

Kruskal’s algorithm is a method for producing a minimal spanning tree from a weighted graph. The algorithm I’ll cover here is actually a randomized version of Kruskal’s; the original works something like this:

  1. Throw all of the edges in the graph into a big burlap sack. (Or, you know, a set or something.)
  2. Pull out the edge with the lowest weight. If the edge connects two disjoint trees, join the trees. Otherwise, throw that edge away.
  3. Repeat until there are no more edges left.

The randomized algorithm just changes the second step, so that instead of pulling out the edge with the lowest weight, you remove an edge from the bag at random. Making that change, the algorithm now produces a fairly convincing maze.
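
Here’s a rough Python sketch of the randomized version (mine, not the article’s), using a small union-find structure to track which cells belong to which tree:

    import random

    def kruskals(width, height):
        """Randomized Kruskal's: shuffle every edge in the grid and keep the
        ones that connect two previously disjoint trees."""
        # Union-find over cells, to track which tree each cell belongs to.
        parent = {(r, c): (r, c) for r in range(height) for c in range(width)}

        def find(cell):
            while parent[cell] != cell:
                parent[cell] = parent[parent[cell]]   # path halving
                cell = parent[cell]
            return cell

        edges = []
        for r in range(height):
            for c in range(width):
                if r + 1 < height:
                    edges.append(((r, c), (r + 1, c)))
                if c + 1 < width:
                    edges.append(((r, c), (r, c + 1)))
        random.shuffle(edges)                # "pull edges out of the bag at random"

        passages = set()
        for a, b in edges:
            root_a, root_b = find(a), find(b)
            if root_a != root_b:             # the edge joins two disjoint trees
                parent[root_a] = root_b
                passages.add(frozenset({a, b}))
        return passages

    if __name__ == "__main__":
        print(len(kruskals(8, 8)), "passages")  # 63 for a perfect 8x8 maze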

Let’s walk through an example manually, to see how the process works in practice.

Click here to read the rest of this article.

Posted in Under the Hood | 15 comments

29 Dec 2010

Maze Generation: Eller's Algorithm

Posted by Jamis on Wednesday, December 29

Last time I talked about the recursive backtracker algorithm for maze generation. That’s probably always going to be my favorite algorithm for generating mazes, for a variety of reasons, but that’s not going to stop me from looking at others.

For one thing, there are some pretty crazy algorithms out there for generating mazes.

Eller’s algorithm is one of the craziest. It’s also one of the fastest. And it’s the only one I know that lets you generate mazes of infinite size. In linear time.

Yeah, it’s that crazy.

It does this by building the maze one row at a time, using sets to keep track of which columns are ultimately connected. But it never needs to look at more than a single row, and when it finishes, it always produces a perfect maze.

Like I did for the recursive backtracking algorithm, here’s the “mile-high” overview of Eller’s algorithm:

  1. Initialize the cells of the first row to each exist in their own set.
  2. Now, randomly join adjacent cells, but only if they are not in the same set. When joining adjacent cells, merge the cells of both sets into a single set, indicating that all cells in both sets are now connected (there is a path that connects any two cells in the set).
  3. For each set, randomly create vertical connections downward to the next row. Each remaining set must have at least one vertical connection. The cells in the next row thus connected must share the set of the cell above them.
  4. Flesh out the next row by putting any remaining cells into their own sets.
  5. Repeat until the last row is reached.
  6. For the last row, join all adjacent cells that do not share a set, omit the vertical connections, and you’re done!
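
To ground those steps, here’s one possible Python sketch (my own, and only one of many reasonable ways to make the random choices). It tracks the set id of each column in the current row:

    import random
    from itertools import count

    def ellers(width, height):
        """Eller's algorithm: build the maze one row at a time, tracking the
        set id of every column in the current row."""
        passages = set()
        set_ids = count()
        row_sets = [next(set_ids) for _ in range(width)]   # sets for row 0

        for row in range(height):
            last_row = row == height - 1

            # Randomly join adjacent cells that are in different sets (always
            # join them in the last row, to connect everything).
            for col in range(width - 1):
                if row_sets[col] != row_sets[col + 1] and (last_row or random.random() < 0.5):
                    passages.add(frozenset({(row, col), (row, col + 1)}))
                    old, new = row_sets[col + 1], row_sets[col]
                    row_sets = [new if s == old else s for s in row_sets]

            if last_row:
                break     # the last row gets no vertical connections

            # Every set carves at least one passage south; the cells carved
            # into carry the set down, the rest get brand-new sets.
            next_row = [None] * width
            cols_by_set = {}
            for col, s in enumerate(row_sets):
                cols_by_set.setdefault(s, []).append(col)
            for s, cols in cols_by_set.items():
                for col in random.sample(cols, random.randint(1, len(cols))):
                    passages.add(frozenset({(row, col), (row + 1, col)}))
                    next_row[col] = s
            row_sets = [s if s is not None else next(set_ids) for s in next_row]

        return passages

    if __name__ == "__main__":
        print(len(ellers(8, 8)), "passages")  # 63 for a perfect 8x8 maze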

If you’re at all like me, your head is probably spinning at this point. Let’s back up and work through an example manually, to help you see how the algorithm works in practice. Let’s begin with a simple 5-column row.

Click here to read the rest of this article.

Posted in Under the Hood | 33 comments

27 Dec 2010

Maze Generation: Recursive Backtracking

Posted by Jamis on Monday, December 27

I’ve said before that generating mazes is a great default project when experimenting with a new programming language. I’ve yet to find a better one (but I’d love to hear recommendations). However, before you can dive into generating a maze (especially in a syntax you are unfamiliar with), you had better have a solid grasp of how the process works.

With mazes, you can take your pick of a solid double-handful of algorithms: recursive backtracking, Prim’s, Kruskal’s, Eller’s, Aldous-Broder or Wilson’s algorithms, recursive division, hunt-and-kill, and more.

My favorite, and the one I implement by default, is recursive backtracking. It is fast, easy to understand, and straightforward to implement. You’ll need enough memory to store the entire maze, though, and it requires stack space proportional to the size of the maze, so for exceptionally large mazes it can be fairly inefficient. But for most mazes, it works a charm.

Here’s the mile-high view of recursive backtracking:

  1. Choose a starting point in the field.
  2. Randomly choose a wall at that point and carve a passage through to the adjacent cell, but only if the adjacent cell has not been visited yet. This becomes the new current cell.
  3. If all adjacent cells have been visited, back up to the last cell that has uncarved walls and repeat.
  4. The algorithm ends when the process has backed all the way up to the starting point.
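
As a rough sketch of those steps (my own code, not the article’s), here’s an iterative Python version that uses an explicit stack in place of actual recursion:

    import random

    def recursive_backtracker(width, height):
        """Depth-first (recursive backtracking) maze generation, written with
        an explicit stack instead of actual recursion."""
        def neighbors(cell):
            r, c = cell
            return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < height and 0 <= nc < width]

        start = (random.randrange(height), random.randrange(width))
        visited = {start}
        passages = set()
        stack = [start]

        while stack:
            cell = stack[-1]
            unvisited = [n for n in neighbors(cell) if n not in visited]
            if unvisited:
                nxt = random.choice(unvisited)   # carve into a fresh cell
                passages.add(frozenset({cell, nxt}))
                visited.add(nxt)
                stack.append(nxt)
            else:
                stack.pop()                      # dead end: back up
        return passages

    if __name__ == "__main__":
        print(len(recursive_backtracker(8, 8)), "passages")  # 63 for 8x8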

Click here to read the rest of this article.

Posted in Under the Hood | 19 comments