A fascinating idea
To take something oft ignored
And give it its own space.
This is part of a multi-part series analyzing the various AI rounds from the 2023 MIT Mystery Hunt and seeing what they can teach us about writing puzzle hunts in general.
A Whole ASCII Art Round????
teammate mentioned that the Creative Pictures Studios round from 2020 was one of their inspirations for creating this round, and the parallels are clearly there. Both of them are rounds that break the concept of what answers can be, play it straight without telling the hunters what's going on, and have a metapuzzle that uses some unique features of the answers' medium. The Creative Pictures Studios round was fairly well received, so it totally makes sense to look towards it as a model for illegal puzzles.
One thing that I really want to highlight about this round is how short it was - and that's a good thing. ASCII art is a medium with far fewer restrictions on what can be an answer, which means that you need to be much more blatant about the art being an answer. Compare this to emojis - while there are hundreds of them and we're not super familiar with all of them, there is still a limited number, which makes it easier to recognize whether something is an answer or just a mess of pixels. Because ASCII art has no such boundary, everything has to be blatant, and that leaves fewer interesting things to do with the medium.
For instance, let’s take a look at the different extraction methods from ABCDE:
- Clue a specific poem - the answer is the whole poem.
- Give a logic puzzle that is filled with letters - the answer is the whole logic puzzle.
- Clue a grid that is somewhere else in the Hunt.
- Give a picture with a certain place blurred out. The answer is the words in that place in the picture.
- Clue a specific line from a movie that is written out by a computer.
There’s not much interesting design space left there, and quite frankly the first and the last extractions are fairly similar. This is not to say that every puzzle has to have a different kind of extraction, but in a round that is fairly gimmicky, there’s not much gimmick left to mess around with. I feel like in a couple years we could see another emoji round in the Hunt. I’m not sure we will ever see another ASCII art round.
This is a place where the difficulty of the puzzles really hurt the round. This is not because I think that the individual puzzles in this round were too hard - it's pretty clear that this round was targeted to have harder puzzles, and it could certainly afford to have the hardest puzzles of all of the AI rounds because it had the fewest of them. This is because the overall difficulty of the Hunt meant that by the time we hit the AI rounds, we were already asking the question "is it worth it to sink a couple of hours into this puzzle only to bash our heads against it and get nowhere, or should we just buy it and save ourselves the trouble?" Because the ABCDE puzzles were harder, it was way easier to say "nuke it" on these puzzles. This makes it likely that one puzzle will be bought early, which spoils the cool aha of the round. This happened to us on The MIT Mystery Hunt ✅. We ended up buying the answer to the Taylor Swift dropquotes, got the ridiculous answer that puzzle has, and then realized the whole gimmick of the round. While I don't know every team's experience, I can't imagine that we were the only team to come across this.
Despite that catch, I do like how the round clued the ASCII art answers. The answer submission box could be made bigger, which both made it easier to input ASCII art and clued that something weird was going on. Apparently Admiral Bootes themself had some dialogue that noted the weirdness when someone hit the Enter key, although I never encountered it. Perhaps the biggest thing that helped clue it was the scavenger hunt. I've given praise to the scavenger hunt in a post before, and the answer was absolutely hilarious. You were told that the answer is "IVE TOUCHED GRASS", and then presented with a picture of "GRASS" in Braille. You had to notice that the number of dots in the Braille was the same as the number of letters in "IVE TOUCHED GRASS", and then put one letter on each dot and enter it that way. If you knew the round's gimmick, then this was easy. If you didn't, then this was enough of a small puzzle to give you the aha as to what was going on.
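As a quick aside, the dot arithmetic really does work out. Here's a tiny Python sanity check (the dot counts below are standard Braille):

```python
# Dots per Braille cell for the letters of GRASS (standard Braille).
BRAILLE_DOTS = {
    "G": 4,  # dots 1, 2, 4, 5
    "R": 4,  # dots 1, 2, 3, 5
    "A": 1,  # dot 1
    "S": 3,  # dots 2, 3, 4
}

dots = sum(BRAILLE_DOTS[letter] for letter in "GRASS")
letters = len("IVETOUCHEDGRASS")
print(dots, letters)  # 15 15 - exactly one letter per dot
```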
Closeness
Imagine the following hypothetical metapuzzles:
- Each of the answers is a phrase that clues one specific element. The atomic symbol for that element is somewhere in the answer. Take the letters after each of the atomic symbols and order them by atomic number of the associated element to spell out the meta answer. (This one is sketched in code after the list.)
- Each of the answers can be associated with an element thanks to the title of the puzzle. The atomic symbol for that element is somewhere in that puzzle’s answer. Take the letters after each of the atomic symbols and order them by atomic number of the associated element to spell out the meta answer.
- Each of the answers is a one or two word phrase whose initials are the atomic symbol of a chemical element. When you solve that puzzle, it unlocks a piece of the periodic table centered around that element. When you reassemble the pieces for the meta, the atomic symbols of the elements that are missing spell out the answer.
- Whenever a puzzle is solved, it unlocks a piece of the periodic table. When you reassemble the pieces for the meta, the atomic symbols of the elements that are missing spell out the answer.
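To make the first mechanism concrete, here's a minimal Python sketch. The feeder answers are invented for illustration (and skip the "clues an element" layer entirely); only the extraction step is shown:

```python
# Hypothetical feeders: each contains an element's symbol, and we take
# the letter immediately after it, ordered by atomic number.
FEEDERS = [
    # (answer, symbol it contains, atomic number)
    ("FEMME FATALE", "FE", 26),  # iron
    ("HEAVY METAL",  "HE", 2),   # helium
    ("NEON LIGHTS",  "NE", 10),  # neon
    ("HOT POTATO",   "O",  8),   # oxygen
]

def letter_after(answer: str, symbol: str) -> str:
    """Return the letter right after the symbol's first appearance."""
    compact = answer.replace(" ", "")
    return compact[compact.index(symbol) + len(symbol)]

ordered = sorted(FEEDERS, key=lambda f: f[2])  # sort by atomic number
print("".join(letter_after(ans, sym) for ans, sym, _ in ordered))  # -> ATOM
```

The meta answer here (ATOM) is, of course, just as made up as the feeders.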
Are all of these puzzles metapuzzles? Technically, yeah. Are all of them good metapuzzles? It depends on the context - all of them have a context in which they would fit right in. But what they demonstrate is a quality that I am currently calling "closeness." Closeness deals with how directly the answers are related to the final meta answer. A metapuzzle that exhibits closeness has answers that feel like an integral part of the meta, while a metapuzzle that doesn't has answers that feel like they served as gatekeeping for the metapuzzle but not as an integral part of it.
The metapuzzles above are written in order of decreasing closeness. The first puzzle uses both the semantic meaning and the orthographic aspect of the answers, and both of those use the concept of chemical elements heavily. The second puzzle uses just the orthographic aspect of the answers - the answers are still related to the way the puzzle is solved, but only tangentially. The third uses the answers' initials only to decide which piece of the periodic table you unlock, and the fourth doesn't use the answers at all - any solve unlocks a piece.
One way to look at closeness is to look at how constrained the metapuzzle is. The more constraints that are on the individual answers, the more ways they are potentially connected to the overall meta. However, constraint is not the only thing to affect closeness. One can imagine a metapuzzle that is very constrained, but the constraints have no relationship to how the puzzle is solved.
Is closeness necessary for a good metapuzzle? Well, it depends on how we define one.
To answer this question, we first have to consider how we use metapuzzles in the Mystery Hunt.
- Metapuzzles use the answers to feeder puzzles as part of solving.
- Metapuzzles bring closure to a section of the hunt or to the whole hunt itself.
- Metapuzzles tie in with the plot that is going on, often representing progress in that plot.
- Metapuzzles themselves are a piece of art.
That last one is probably the one that is least talked about, but it is still really important. Every puzzle is a piece of art, filled with all sorts of meaning in various different ways. Anyone can put any old words together and call it a poem, but a poem where the words have been specifically chosen and arranged to supercharge them with meaning will stand out from the rest.
Imagine the following puzzle: You are given a word search. Hidden in the word search are a bunch of dog breeds. When you find all of them, the unused letters give you a URL. At this URL is another word search. This time, the word search contains the last names of every US President. The letters crossed by two or more president names spell out a cryptic clue whose answer is BEFUDDLE.
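For reference, the "unused letters" trick in the first half works like this. The grid and breeds below are invented and tiny; a real puzzle would be far larger:

```python
# Toy word search: cross off the found breeds, then read the leftover
# letters in reading order (they stand in for the hidden URL).
GRID = [
    "CHOW",
    "PSEL",
    "UEUA",
    "GRLB",
]
# Start cell and direction (row, col, drow, dcol) of each found breed.
FOUND = {
    "CHOW": (0, 0, 0, 1),  # across the top
    "PUG":  (1, 0, 1, 0),  # down the left edge
    "LAB":  (1, 3, 1, 0),  # down the right edge
}

used = set()
for word, (r, c, dr, dc) in FOUND.items():
    for i in range(len(word)):
        used.add((r + i * dr, c + i * dc))

leftover = "".join(
    GRID[r][c]
    for r in range(len(GRID))
    for c in range(len(GRID[r]))
    if (r, c) not in used
)
print(leftover)  # -> SEEURL
```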
Is this a valid puzzle hunt puzzle? Absolutely. Everything mechanically works, every aha is reasonable for the solvers to get, and there is a final answer that players can submit. But the puzzle is missing its soul. There is so much going on in this puzzle and none of it is related to anything else. A couple of changes could make this much better. See if you can hide the second word search on a dog related site. Somehow make this related to the dogs of US Presidents. See if there is any answer other than BEFUDDLE that can make this work - especially since the method of extraction isn't particularly tied to the answer word. None of these changes will make the puzzle better mechanically, but they will make the puzzle more satisfying for the solver.
How does this apply to metapuzzles? One major source of data for the metapuzzle is the answers to the feeder puzzles. These feeder puzzles may be close to what is happening in the rest of the puzzle, or they may be farther away. In the puzzle from the previous paragraph, I would not call the dog breeds "close" to what the puzzle is doing. They are only there to obscure the URL hidden in the word search. The hypothetical version at the end of the paragraph would have the dog breeds "closer" to the puzzle, but to really make the dog breeds feel "close," it would be best if they were somehow used in the second word search as well.
Exploring Closeness
I think a great example of seeing closeness in action comes from the development of Introspection, the metapuzzle from the New You City round of the 2022 MIT Mystery Hunt. The first version of the meta that saw full team playtesting looked slightly different from what you see now. Essentially, the first half of the meta was structured like this: Each puzzle had a task as an answer. When you solved a task, you got some number of letters from an "Introspection" section on the meta page. As you got more and more letters, you could slowly wheel-of-fortune the message, which contained a list of items and the words "AND NOW VIGENERE" at the end. Each of the items was a member of a canonically ordered list - these lists were hinted at by the tasks. If you took each item's place in that list as an alphanumeric, in the order the items were given, you got a block of text that you could vigenere into the answer.
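To illustrate that last step, here's a minimal Python sketch. The lists are real canonical orderings, but the items, key, and ciphertext are all made up; the actual puzzle used different lists and a much longer message:

```python
# Made-up mini version of "item's place in its list as an alphanumeric,
# then vigenere": positions become letters (1 = A), which form a key.
CANONICAL_LISTS = [
    ["MERCURY", "VENUS", "EARTH", "MARS",
     "JUPITER", "SATURN", "URANUS", "NEPTUNE"],          # planets
    ["RED", "ORANGE", "YELLOW", "GREEN",
     "BLUE", "INDIGO", "VIOLET"],                        # rainbow colors
    ["ARIES", "TAURUS", "GEMINI", "CANCER", "LEO", "VIRGO",
     "LIBRA", "SCORPIO", "SAGITTARIUS", "CAPRICORN",
     "AQUARIUS", "PISCES"],                              # zodiac signs
]

def alphanumeric(item: str) -> str:
    """The item's 1-indexed position in its canonical list, as a letter."""
    for lst in CANONICAL_LISTS:
        if item in lst:
            return chr(ord("A") + lst.index(item))
    raise ValueError(f"{item} is in no known list")

def vigenere_decrypt(ciphertext: str, key: str) -> str:
    return "".join(
        chr((ord(c) - ord(k)) % 26 + ord("A"))
        for c, k in zip(ciphertext, key)
    )

items = ["EARTH", "RED", "TAURUS"]             # in the order given
key = "".join(alphanumeric(i) for i in items)  # -> "CAB"
print(vigenere_decrypt("FOH", key))            # -> "DOG" (made-up answer)
```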
This draft tested really well and got a lot of good feedback. However, one of the biggest pieces of feedback that I got was that the answers to the puzzles didn't feel like they were connected to the metapuzzle at all. In my mind, the answers were there to help people solve the wheel of fortune faster as well as to clue the lists that needed to be used, but that's not how other people saw it. The change I made in response was to put the Introspection section in alphabetical order and to use the tasks as an ordering mechanism for the letters that the solvers got from the vigenere. This way the answers were mechanically used in getting the answer, and not just gatekeeping the answer. This ordering mechanism made the feeder answers closer to the meta overall.
Closeness is not a scale whose gradations I can easily define. It's about how close the puzzle answers feel to what you're actually doing as part of the puzzle. Because different people feel differently about different puzzles, I can't really define a precise numerical scale of closeness, but I can come up with some relative comparisons that most people would agree with. A pure meta would be the "closest" you could get, and a metapuzzle where the puzzle answers just served as gatekeeping pieces of the meta and weren't used at all would be the "furthest" you could get. In addition, I would say that Introspection is closer than Communicating with the Aliens, the metapuzzle from Sci-Ficisco, and further than Reference Desk, the metapuzzle from Reference Point. (Spoilers about those are in the footnotes.) Now, I don't think any of them are "bad" metapuzzles, but the further a metapuzzle is, the more the other parts have to work to make a satisfying conclusion.
That having been said, I think Space Modules is pretty far.
Modules… in… Spaaaaaaaaace
Let’s walk through the steps necessary to solve this metapuzzle.
- Overlay your lenses (the answers) on the different constellations. You know that a lens is in the correct place if every letter on the lens that overlaps a letter in the constellation matches it. (There's a sketch of this check in code after the list.)
- Each of the python functions clues a phrase with an enumeration equal to the return values.
- Each of these phrases can be found in one of the completed constellations, but only when interpreted as a rebus. (For example, THREE RING CIRCUS has the word RING written three times in a circle.)
- Not counting that rebus, each constellation will have one letter that isn't contained in its two lenses. These letters spell "FIND MODULES".
- Letters that are in the spaces of the lens can be read off in reading order to give a new rebus and therefore give us a new phrase.
- Each of these new phrases contains the name of a python class or method in a python module. The modules begin with the letters A-J, giving an ordering.
- Each of the new phrases contains the same number of words as its corresponding function has arguments. Take the letter corresponding to the word that was the python class or method to spell out the phrase XKCD PYTHON.
- In the xkcd comic titled “Python”, one of the characters starts flying by opening up python and typing “import antigravity”. The answer is “import antigravity”.
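Two of these steps are worth a closer look in code. First, the lens-placement check from step one reduces to a simple consistency test; the grids below are invented stand-ins for the real constellations:

```python
# A lens fits if every letter it covers agrees with the letter already
# in the constellation (dots are empty cells in both grids).
def lens_fits(constellation, lens, top, left):
    for r, row in enumerate(lens):
        for c, ch in enumerate(row):
            if ch == ".":                        # empty lens cell
                continue
            under = constellation[top + r][left + c]
            if under != "." and under != ch:
                return False                     # a covered letter disagrees
    return True

constellation = ["C.T.."]
lens = ["CAT"]
print(lens_fits(constellation, lens, 0, 0))  # True: C and T line up
print(lens_fits(constellation, lens, 0, 1))  # False: the A lands on T
```

Second, that final step isn't a joke: "import antigravity" is a real easter egg in CPython's standard library, and running it really does what the comic says:

```python
import antigravity  # opens xkcd #353, "Python", in your web browser
```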
First of all, whoa that is a lot. Second of all, the answers aren’t really that close to what’s going on. Sure, you need the answers to work the shell, but there is another whole ISIS section after you are done with the answers. While I can’t place where this would be precisely on the closeness scale, I can definitely put this in between the hypothetical dog/president puzzle and Communicating with the Aliens. There is some cohesion between the parts, which our dog/president puzzle didn’t have at all. However, while the way the answers are used is similar to Communicating with the Aliens, there are multiple steps necessary after applying the answers, whereas Communicating with the Aliens only has one.
In fact, when you look at the structure of the meta, the meta is trying to tie together two completely different themes. The first theme is all about rebuses - the idea that it's not just the letters that are important, but also where the letters are with respect to each other in 2D space. The second theme is python. There is a slight connection when it comes to monospace characters, but that's it. In fact, the meta deals with all of the rebus part, then completely abandons it to work on the python theme.
Now, closeness doesn’t necessarily make a meta good or bad on its own, but if the answers are far from the meta, then the meta needs to be tied together in other ways to have a satisfying solving experience. Honestly, I’m not sure this qualifies. It was fun to solve, but at some point I stopped feeling like I was solving a metapuzzle and more like I was solving a regular puzzle. In fact, when The MIT Mystery Hunt ✅ was solving this puzzle, we didn’t have the answer to the 5D Diagramless. Our team captain asked me if it was worth it to buy the answer to that puzzle. My response to them was that we had all the information that we would’ve gotten from that answer, but we still hadn’t solved the metapuzzle. That’s a really weird position to be in for a metapuzzle.
Wrapping it Up
The fact that the feeder puzzles to the round didn’t feel close to the metapuzzle doesn’t mean that the puzzle wasn’t fun to solve. I definitely had fun solving it. However, I can’t in good conscience call this a satisfying solve. It felt weird at the end, like there was a joke that went on too long for how funny the punchline was. However, that doesn’t mean that the round itself wasn’t fun or interesting - it was very much those things. But thanks to the feeder answers being far away from the actual meta, this definitely dropped from being my favorite AI round. It’s not even close.
– Cute Mage