Cute Mage's Tower

AI: Admiral Bootes' Cosmic Definition Explanation

A fascinating idea
To take something oft ignored
And give it its own                     space.
This is part of a multi-part series analyzing the various AI rounds from the 2023 MIT Mystery Hunt and seeing what they can teach us about writing puzzle hunts in general.

A Whole ASCII Art Round????

teammate mentioned that the Creative Pictures Studios round from 2020 was one of their inspirations for creating this round, and the parallels are clearly there. Both are rounds that break the concept of what answers can be, play it straight without telling the hunters what’s going on, and have a metapuzzle that uses some unique features of the answers’ medium. The Creative Pictures Studios round was fairly well received, so it totally makes sense to look towards it when writing these kinds of “illegal” puzzles.

One thing that I really want to highlight about this round is how short it was - and that’s a good thing. ASCII art is a medium with far fewer restrictions on what can be an answer, which means that you need to be much more blatant about the art being an answer. Compare this to emoji - while there are hundreds of them and we’re not super familiar with all of them, there is still a limited set, which makes it easier to recognize whether something is an answer or just a mess of pixels. ASCII art doesn’t have that limit, and that means there are fewer interesting things you can do with it.

For instance, let’s take a look at the different extraction methods from ABCDE:

There’s not much interesting design space left there, and quite frankly the first and the last extractions are fairly similar. This is not to say that every puzzle has to have a different kind of extraction, but in a round that is fairly gimmicky, there’s not much gimmick left to mess around with. I feel like in a couple years we could see another emoji round in the Hunt. I’m not sure we will ever see another ASCII art round.

This is a place where the difficulty of the puzzles really hurt the round. It’s not that I think the individual puzzles in this round were too hard - it’s pretty clear that this round was targeted to have harder puzzles, and it could certainly afford to have the hardest puzzles of all the AI rounds because it had the fewest of them. The problem is that the overall difficulty of the Hunt meant that by the time we hit the AI rounds, we were already asking the question “is it worth it to sink a couple of hours into this puzzle only to bash our heads against it and get nowhere, or should we just buy it and save ourselves the trouble?” Because the ABCDE puzzles were harder, it was way easier to say “nuke it” on these puzzles. That makes it likely that one puzzle will be bought early, which spoils the cool aha of the round. This happened to us on The MIT Mystery Hunt ✅. We ended up buying the answer to the Taylor Swift dropquotes1, got the ridiculous answer that puzzle has, and then realized the whole gimmick for the round. While I don’t know every team’s experience, I can’t imagine that we were the only team to come across this2.

Despite that catch, I do like how the round clued the ASCII art answers. The answer submission box could be made bigger, which both made it easier to input ASCII art and clued that something weird was going on. Apparently Admiral Bootes themself had some dialogue that noted the weirdness when someone hit the Enter key, although I never encountered it.3 Perhaps the biggest thing that helped clue it was the scavenger hunt. I’ve given praise to the scavenger hunt in a post before, and its answer was absolutely hilarious. You were told that the answer is “IVE TOUCHED GRASS”, and then presented with a picture of “GRASS” in Braille. You had to notice that the number of dots in the Braille was the same as the number of letters in “IVE TOUCHED GRASS”, and then put one letter on each dot and enter it that way. If you knew the round’s gimmick, then this was easy. If you didn’t, then this was enough of a small puzzle to give you the aha as to what was going on.
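
If you want to sanity-check that counting trick, here’s a quick sketch, assuming the standard Grade 1 Braille dot counts for each letter:

```python
# Dots per Braille cell for the letters in GRASS (standard Grade 1 Braille).
BRAILLE_DOTS = {"G": 4, "R": 4, "A": 1, "S": 3}

dots = sum(BRAILLE_DOTS[c] for c in "GRASS")
letters = len("IVE TOUCHED GRASS".replace(" ", ""))
print(dots, letters)  # 15 15 - one answer letter per Braille dot
```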

Closeness

Imagine the following hypothetical metapuzzles:

  1. Each of the answers is a phrase that clues one specific element. The atomic symbol for that element is somewhere in the answer. Take the letters after each of the atomic symbols and order them by atomic number of the associated element to spell out the meta answer. (There’s a toy sketch of this extraction right after the list.)
  2. Each of the answers can be associated with an element thanks to the title of the puzzle. The atomic symbol for that element is somewhere in that puzzle’s answer. Take the letters after each of the atomic symbols and order them by atomic number of the associated element to spell out the meta answer.
  3. Each of the answers is a one or two word phrase whose initials are the atomic symbol of a chemical element. When you solve that puzzle, it unlocks a piece of the periodic table centered around that element. When you reassemble the pieces from the meta, the atomic symbols of the elements that are missing spell out the answer.
  4. Whenever a puzzle is solved, it unlocks a piece of the periodic table. When you reassemble the pieces from the meta, the atomic symbols of the elements that are missing spell out the answer.
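
To make the mechanics concrete, here’s a minimal sketch of metapuzzle #1’s extraction. The feeder answers below are fabricated placeholders (they don’t genuinely clue their elements); only the mechanical extraction matters:

```python
# Hypothetical feeders: (answer, atomic symbol it contains, atomic number).
FEEDERS = [
    ("SNOWY BRONZE", "SN", 50),  # tin
    ("FERRIS WHEEL", "FE", 26),  # iron
    ("NEBULA GLOW",  "NE", 10),  # neon
]

def extract(answer: str, symbol: str) -> str:
    """Return the letter immediately after the atomic symbol."""
    letters = answer.replace(" ", "")  # ignore spaces when extracting
    i = letters.index(symbol)
    return letters[i + len(symbol)]

# Order the extracted letters by atomic number to spell the meta answer.
meta = "".join(extract(ans, sym)
               for ans, sym, z in sorted(FEEDERS, key=lambda f: f[2]))
print(meta)  # -> "BRO"
```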

Are all of these puzzles metapuzzles?4 Technically, yeah. Are all of them good metapuzzles? It depends on the context - all of them have a context5 in which they would fit right in. But what they are demonstrating is a quality that I am currently calling “closeness.” Closeness deals with how directly the answers are related to the final meta answer. A metapuzzle that exhibits closeness has answers that feel like an integral part of the meta, while a metapuzzle that doesn’t has answers that feel like they merely gatekept the metapuzzle without being an integral part of it.

The metapuzzles above are written in order of decreasing closeness. The first puzzle uses both the semantic meaning and the orthographic aspect of the answers, and both of those use the concept of chemical elements heavily. The second puzzle uses just the orthographic aspect of the answers - the answers are still related to the way the puzzle is solved, but only tangentially. The third uses the answers only to associate each puzzle with its element, and the fourth doesn’t use the answers at all - the feeders contribute nothing to the meta beyond the fact that they were solved.

One way to look at closeness is to look at how constrained the metapuzzle is. The more constraints that are on the individual answers, the more ways they are potentially connected to the overall meta. However, constraint is not the only thing that affects closeness. One can imagine a metapuzzle that is very constrained, but whose constraints have no relationship to how the puzzle is solved.

Is closeness necessary for a good puzzle? Well, it depends on how we define a good metapuzzle6.

What is a Good Metapuzzle?

To answer this question, we first have to consider how we use metapuzzles in the Mystery Hunt.7

That last one is probably the one that is least talked about, but it is still really important. Every puzzle is a piece of art, filled with all sorts of meaning in various different ways. Anyone can put any old words together and call it a poem, but a poem whose words have been specifically chosen and arranged to supercharge them with meaning will stand out from the rest.

Imagine the following puzzle: You are given a word search. Hidden in the word search are a bunch of dog breeds. When you find all of them, the unused letters give you a URL. At this URL is another word search. This time, the word search contains the last names of every US President. The letters crossed by two or more presidents’ names spell out a cryptic clue whose answer is BEFUDDLE.
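
For concreteness, here’s a toy version of that “unused letters” extraction, with a fabricated 3×3 grid and word list standing in for the real thing:

```python
# Find the placed words (in all eight directions), then read the unused
# letters in reading order. Grid and words are made up for illustration.
GRID = ["CAT",
        "DOG",
        "URL"]
WORDS = ["CAT", "DOG"]

DIRS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
R, C = len(GRID), len(GRID[0])

used = set()
for word in WORDS:
    for r in range(R):
        for c in range(C):
            for dr, dc in DIRS:
                cells = [(r + i * dr, c + i * dc) for i in range(len(word))]
                if all(0 <= rr < R and 0 <= cc < C and GRID[rr][cc] == ch
                       for (rr, cc), ch in zip(cells, word)):
                    used.update(cells)

leftover = "".join(GRID[r][c] for r in range(R) for c in range(C)
                   if (r, c) not in used)
print(leftover)  # -> "URL"
```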

Is this a valid puzzle hunt puzzle? Absolutely. Everything mechanically works, every aha is reasonable for the solvers to get, and there is a final answer that players can submit. But the puzzle is missing its soul. There is so much going on in this puzzle, and none of it is related to anything else. A couple of changes could make this much better. See if you can hide the second word search on a dog-related site. Somehow make this related to the dogs of US Presidents. See if there is any answer other than BEFUDDLE that can make this work - especially since the method of extraction isn’t particularly tied to the answer word. None of these changes will make the puzzle better mechanically, but they will make the puzzle more satisfying for the solver.

How does this apply to metapuzzles? One major source of data for the metapuzzle is the answers to the feeder puzzles. These feeder puzzles may be close to what is happening in the rest of the puzzle, or they may be farther away. In the puzzle from the previous paragraph, I would not call the dog breeds “close” to what the puzzle is doing. They are only there to obscure the URL hidden in the word search. The hypothetical version at the end of the paragraph would have the dog breeds “closer” to the puzzle, but to really make them feel “close”, it would be best if they were somehow used in the second word search as well.8

Exploring Closeness

I think a great example of seeing closeness in action comes from the development of Introspection, the metapuzzle from the New You City round of the 2022 MIT Mystery Hunt9. The first version of the meta that saw full team playtesting10 looked slightly different from what you see now. Essentially, the first half of the meta was structured like this: Each puzzle had a task as an answer. When you solved a task, you got some number of letters from an “Introspection” section on the meta page. As you got more and more letters, you could slowly wheel-of-fortune the message, which contained a list of items and the words “AND NOW VIGENERE” at the end. Each of the items was a member of a canonically ordered list - these lists were hinted at by the tasks. If you took each item’s place in its list as an alphanumeric, in the order the items were given, you got a block of text that you could vigenere into the answer11.
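
Here’s a minimal sketch of that “index into a canonical list, then Vigenère” mechanic. The list, items, and key below are all fabricated, since the real puzzle’s content isn’t being spoiled here:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def vigenere_decrypt(text: str, key: str) -> str:
    """Standard Vigenere decryption over A-Z."""
    return "".join(
        ALPHABET[(ALPHABET.index(ch) - ALPHABET.index(key[i % len(key)])) % 26]
        for i, ch in enumerate(text)
    )

# A canonically ordered list (here, the Greek alphabet) and some items
# from it. Each item's 1-based position becomes a letter (1 = A, 2 = B, ...).
GREEK = ["ALPHA", "BETA", "GAMMA", "DELTA", "EPSILON", "ZETA", "ETA",
         "THETA", "IOTA", "KAPPA", "LAMBDA", "MU", "NU", "XI", "OMICRON",
         "PI", "RHO", "SIGMA", "TAU", "UPSILON", "PHI", "CHI", "PSI", "OMEGA"]
items = ["NU", "EPSILON", "SIGMA"]  # positions 13, 5, 18 -> cipher "MER"

cipher = "".join(ALPHABET[GREEK.index(item)] for item in items)
print(vigenere_decrypt(cipher, "KEY"))  # -> "CAT"
```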

This draft tested really well and got a lot of good feedback. However, one of the biggest pieces of feedback that I got was that the answers to the puzzles didn’t feel like they were connected to the metapuzzle at all. In my mind, the answers were there to help people solve the wheel of fortune faster as well as to clue the lists that needed to be used, but that’s not how other people saw it12. The change I made in response was to put the Introspection section in alphabetical order and to use the tasks as an ordering mechanism for the letters that the solvers got from the vigenere. This way the answers were mechanically used in getting the answer, not just gatekeeping it. This ordering mechanism made the feeder answers closer to the meta overall.

Closeness is not a scale whose gradations I can easily pin down. It’s about how close the puzzle answers feel to what you’re actually doing as part of the puzzle. Because different people feel differently about different puzzles, I can’t really define a precise numerical scale of closeness13, but I can come up with some relative comparisons that most people would agree with. A pure meta would be the “closest” you could get, and a metapuzzle where the puzzle answers just served as gatekeeping pieces of the meta and weren’t used at all would be the “furthest” you could get. In addition, I would say that Introspection is closer than Communicating with the Aliens14, the metapuzzle from Sci-Ficisco, and further than Reference Desk15, the metapuzzle from Reference Point. (Spoilers about those are in the footnotes.) Now, I don’t think any of them are “bad” metapuzzles, but the further a metapuzzle is, the harder the other parts have to work to make a satisfying conclusion.

That having been said, I think Space Modules is pretty far.

Modules… in… Spaaaaaaaaace

Let’s walk through the steps necessary to solve this metapuzzle.

  1. Use your lenses (the answers) to overlay on the different constellations. You know that a lens is in the correct place if every letter from the lens that overlaps a letter in the constellation matches it. (A rough sketch of this check appears after the list.)
  2. Each of the Python functions clues a phrase whose enumeration equals the function’s return values.
  3. Each of these phrases can be found in one of the completed constellations, but only when interpreted as a rebus. (For example, THREE RING CIRCUS has the word RING written three times in a circle.)
  4. Not counting that rebus, there will be one letter that isn’t contained in the two lenses. Across the constellations, these letters spell “FIND MODULES”.
  5. Letters that are in the spaces of the lens can be read off in reading order to give a new rebus and therefore give us a new phrase.
  6. Each of these new phrases contains the name of a Python class or method from a Python module. The modules begin with the letters A-J, giving an ordering.
  7. Each of the new phrases contains the same number of words as its corresponding function has arguments. Take the letter corresponding to the word that was the Python class or method; these spell out the phrase XKCD PYTHON.
  8. In the xkcd comic titled “Python”, one of the characters starts flying by opening up Python and typing “import antigravity”. The answer is “import antigravity”.
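
As a rough illustration of the step 1 check, here’s a sketch, assuming grids are lists of equal-length strings and the lens already sits within the constellation’s bounds at the given offset. This is my reconstruction, not the round’s actual implementation:

```python
def lens_fits(constellation, lens, top, left):
    """A lens placement works if every lens letter that lands on a
    constellation letter matches it exactly."""
    for r, row in enumerate(lens):
        for c, ch in enumerate(row):
            if ch == " ":
                continue  # blank lens cells can sit over anything
            target = constellation[top + r][left + c]
            if target != " " and target != ch:
                return False  # two different letters collide
    return True

constellation = ["R I N G",
                 "  N G  "]
lens          = ["R I",
                 "  N"]
print(lens_fits(constellation, lens, 0, 0))  # -> True
```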

First of all, whoa, that is a lot. Second of all, the answers aren’t really that close to what’s going on. Sure, you need the answers to work the shell, but there is another whole ISIS section after you are done with the answers. While I can’t place precisely where this would be on the closeness scale, I can definitely put it in between the hypothetical dog/president puzzle and Communicating with the Aliens16. There is some cohesion between the parts, which our dog/president puzzle didn’t have at all. However, while the way the answers are used is similar to Communicating with the Aliens, there are multiple steps necessary after applying the answers, whereas Communicating with the Aliens only has one.

In fact, when you look at the structure of the meta, it is trying to tie together two completely different themes. The first theme is all about rebuses - the idea that it’s not just the letters that are important, but also where the letters sit with respect to each other in 2-D space. The second theme is Python. There is a slight connection when it comes to monospaced characters, but that’s it. In fact, the meta deals with all of the rebus part, then completely abandons it to work on the Python theme.

Now, closeness doesn’t necessarily make a meta good or bad on its own, but if the answers are far from the meta, then the meta needs to be tied together in other ways to have a satisfying solving experience. Honestly, I’m not sure this qualifies. It was fun to solve, but at some point I stopped feeling like I was solving a metapuzzle and more like I was solving a regular puzzle. In fact, when The MIT Mystery Hunt ✅ was solving this puzzle, we didn’t have the answer to the 5D Diagramless. Our team captain asked me if it was worth it to buy the answer to that puzzle. My response to them was that we had all the information that we would’ve gotten from that answer, but we still hadn’t solved the metapuzzle. That’s a really weird position to be in for a metapuzzle.

Wrapping it Up

The fact that the feeder puzzles to the round didn’t feel close to the metapuzzle doesn’t mean that the puzzle wasn’t fun to solve. I definitely had fun17 solving it. However, I can’t in good conscience call this a satisfying solve. It felt weird at the end, like there was a joke that went on too long for how funny the punchline was. However, that doesn’t mean that the round itself wasn’t fun or interesting - it was very much those things. But thanks to the feeder answers being far away from the actual meta, this definitely dropped from being my favorite AI round. It’s not even close.

– Cute Mage


  1. Wait, why did we do that? Doesn’t this team have a bunch of Taylor Swift stans? Maybe we didn’t know it was Taylor Swift at the time? I never really understood how that puzzle worked. 

  2. I mean, I don’t want to assume other people’s experiences, but like, c’mon. (Also, those with good forethought will realize that this could’ve happened again to The MIT Mystery Hunt ✅. Yes my good reader, it did happen there too.) 

  3. We bought three of the five answers, so we didn’t type many answers in. I certainly didn’t “solve” anything in that round except for the scavenger hunt, and someone who was more involved with that put it in. 

  4. There’s even an argument about whether #4 is a metapuzzle in the first place. The problem is that nowadays the community uses “metapuzzle” to mean both “a puzzle that uses the answers to the other puzzles in the round” and “a finale puzzle that brings closure to a round or hunt.” Meta 4 is definitely not the first one but is the second one. This will come up more in the next section. 

  5. Not all of them fit within the context of the MIT Mystery Hunt, but that doesn’t mean that they won’t fit in any hunt. In particular, Meta 4 would probably never be seen in the MIT Mystery Hunt, but it is similar to how the metas for CiSRA puzzle hunts worked back when they ran. 

  6. Don’t give me that groaning. I am a math teacher. What were you expecting me to say? Of course I’m going to quibble about definitions. It’s second nature at this point! 

  7. One thing that becomes clear looking at this list is that these different people find different characteristics important. Also, while the Mystery Hunt uses all four of these, there are different puzzle hunts that may not use all of these. DASH 11 comes to mind - its metapuzzle didn’t use the answers at all, but instead combined the mechanics of the previous puzzles pairwise to produce new puzzles. 

  8. The more I edit this example, the more I realize that it is turning into Crow Facts 3000. It’s a wonderful puzzle that comes together beautifully. Although now that I’m thinking about it, we should probably archive it somehow in case Twitter blows up. I want people to still be able to read the funny Crow Facts. 

  9. Look, I kinda feel bad about using a lot of Bookspace examples given that I was one of the writers of the Bookspace hunt. However, if I’m trying to give other examples than the 2023 Hunt, the 2022 Hunt is the next most recent. Besides, I spent a year of my life thinking about the puzzles. Of course they’re going to be the first that come to my mind. 

  10. The way that Palindrome’s meta development worked was that we split all the people who wanted to write metapuzzles into three meta teams18, and then those teams worked on puzzle development amongst themselves. Once the meta team captain thought a puzzle was ready, it then saw testing outside the team. There were a couple of different versions of Introspection that were made in short succession in the team as things changed, so I generally consider the first full draft of the puzzle to be the one that went to testsolvers who weren’t on our meta team. 

  11. I’m not giving away that spoiler here since it’s not relevant to the rest of the story. 

  12. To be fair, I saw the tasks people did as the “real” answers to the round, and the phrases that provided those tasks as “clue phrases before the final answer”, but as the round developed more and more, it became clear that this was not the best way of approaching the round. 

  13. For Magic: The Gathering Commander players - this is why everyone thinks that their deck is a 7, and no one thinks that it’s a 4. 

  14. In this metapuzzle, the acrostic of the feeder answers tells you what to do (A WORD SEARCH), and then the answers themselves teach you how to use the word search, and the metapuzzle involves reading off the letters that aren’t used in the word search. 

  15. In this metapuzzle, each of the answers can be split into two words such that when a letter is inserted into the first word, they become synonyms. A third synonym of those words is in the title of another puzzle. This forms a chain to read the letters in. 

  16. Haha! Me referencing those puzzles in the past sections was clearly intentional19 foreshadowing for this point in the article! 

  17. And not just because I’m proud of the fact that The MIT Mystery Hunt ✅ was the first team and the fastest team to solve this meta, although that was a nice thing to realize afterwards. 

  18. Go Team Aardvark! 

  19. By intentional, I mean not.