Advent of Code 2020
My most valued readers! So glad you could drop by again! You are just in time to catch me singing the praises, almost as belated as they are deserved, of an annual coding event that can be counted among the yearly highlights of many a programmer. In addition, I also show and explain all my solutions for it.
In recent years, I have found myself increasingly looking forward to December. Not because of Christmas, not because of my birthday^{1}, not even because the best thing about each successive year seems to be it finally coming to a close, but because of Advent of Code.
In line with my more general resolution of being more open and public about everything I do^{2}, I decided I should note down my thoughts, document my experiences and publish my solutions this time around.
Keeping in line with my general tendencies, however, I am, of course, incredibly late to the party.^{3} Advent ended far more than four months ago and I only managed to complete this writeup now, not even in time to ruin the Easter bunny's day^{4}.
Nonetheless, despite their seasonal framing, the problems posed are timeless, highly educational, and results can be submitted even years after the original “end”. As such, I am hopeful my musings on this particular incarnation, though late, might still prove helpful, interesting and enjoyable to some of my dear readers.
One last word of warning before we get to it:
I am aware I do tend to say that Every. Single. Time., but this article really is structurally very different from all that came before it. Even if you enjoyed those, it should not come as a surprise if you don't enjoy this one^{5}.
I wrote a small introduction and a tiny conclusion tying it all together, but really, everything else was written rather independently. Therefore, this should maybe be viewed more like a collection of very small articles or commentaries, one for each day, instead of one grand, cohesive whole. You can freely jump around, look at only the explanations for a single day or even skip all code completely and just peruse my thoughts and opinions at the beginning and end.
What is this and why should I care?
I really don’t think I can put it better than the creator himself, so allow me to first point out that Eric Wastl has presented many times on the topic and watching at least one of those presentations is probably a more valuable investment of your time and attention.
If, however, you intend to stay with me for now and want the shortened, succinct version, also ripped directly from his talks: Advent of Code is what you get if you combine this^{6}:
with something like these:
Beginning on the first of December, each day leading up to Christmas you get another door to open and peer behind. Instead of discovering a piece of chocolate, as is usual in more common variations of an advent calendar, each door reveals a brand new programming puzzle to solve, all of them flimsily tied together with a whimsical and endearing cover story.
Whilst this description alone might, to the uninitiated, sound nice but not particularly exciting, consider this: it is the only thing in the last 5 to 10 years that managed to motivate me to consistently wake up and be available at a specific time of day, let alone a really early one. That should tell you there is something very special about it.
And indeed, there is much that differentiates it from other events and makes participating in Advent of Code a rewarding and unique experience:

It is not tied to any specific tool, language or environment. In fact, special care is taken to keep the problems language agnostic and not give any one, compiled or otherwise, a significant edge over others^{7}. As you have to submit a solution, not the code which computes it, anything goes and you are free to choose whichever path you fancy. You can preprocess input, combine multiple tools, solve some puzzles with pen and paper, crowdsource or use whatever other crazy ideas you might come up with. Anything short of hacking the site is welcome and kind of the point. It really teaches you to solve the specific problem in front of you, just as you should do in this strange “real world” I have been hearing so much of.

Its one-of-a-kind two-part structure not only increases accessibility but lends itself particularly well to use as a teaching tool. You see, you do not really get one puzzle each day, you get two, the second of which is unlocked after you have successfully solved the first one! Most of the time it is a more complicated variation or an extension of the first part. The effect of this structure, as I see it, is twofold: On the one hand, even complete beginners do get at least one task each day they can realistically hope to solve. On the other hand, what you have learned in part 1 can and must immediately be applied in part 2, consolidating new knowledge and fostering true understanding. You have to rethink and optimize your approach, consider asymptotic complexity and benefit from keeping your code clean and extensible.

The occasional interdependence of many tasks, as seen with the notorious intcode problems in 2019, serves as a wonderful incentive to not only keep code extensible from the start, but even regularly go back to maintain and clean it up. You never quite know if and when you might have to build on something written on a previous day and not considering that might come back to bite you. Again, reminiscent of and not entirely unlike this “real world” thing.
That list is not exhaustive and is deeply personal, which is why I will stop now. There is, however, one aspect so important it deserves its own section in this introduction: what really sets Advent of Code apart, more than anything, from all other coding competitions I know of is its amazing community.
Community
With more than 150,000 participants vying for one of the coveted spots on a global leaderboard consisting of a very meager 100 places, one could expect a certain degree of hostility.
What actually happens is, however, the very opposite. Seldom have I witnessed such incredible supportiveness in so competitive an environment. Everyone helps and encourages everyone else, providing tips and assistance when needed, comparing solutions, sharing thoughts and ideas and just in general being awesome to each other.
To name just one example, I invite you to scroll through this lovely reddit thread, started by the creator himself after, on the very first day of this most recent edition, the servers collapsed under the unpredictably huge load on puzzle unlock. Who got onto the leaderboard on that day was more of a function of who got to the servers first upon their restart than who solved the problem most quickly. As such, no leaderboard points were given at all. Instead of disgruntled voices protesting the decision or complaints about the crash, you’ll find only words of encouragement and understanding.
That incident was not just a fluke either. Solving the puzzles simultaneously with thousands of others around the world, all with different backgrounds, preferences and varying levels of experience and skill provides ample opportunity to learn from and inspire each other. There are even special rewards for exceptionally creative solutions, visualizations and other ideas. And memes. A lot of memes.
The community is, of course, not restricted to reddit exclusively. Many participants, all across the leaderboard, stream their solving live on platforms like twitch and youtube; searching github for “adventofcode” reveals more than 30,000 repositories; and various subcommunities maintain their own leaderboards.
Related blog posts also exist in abundance. Even if I most definitely won the race for most delayed wrapup post, my margin of victory is far less excessive than you might think. In no particular order of timing and preference, as I love them all, here is a small selection of wrapup posts. There are many more and I encourage you to search for them.
Regardless, I promised to keep this short. Enough preamble, let’s leave the gushing for another day and move on to finally discuss solutions.
Solutions
Setup and tooling
If you read my remarks above attentively, you will know that choosing which set of languages and tools to employ is a pivotal decision. Allow me to briefly explain my sophisticated thought process on that matter:
If all you have is a hammer, everything looks like a nail. If your favorite tool is a very efficient hammer, you can beat everything into submission until it wishes it were one.
Naturally, I exclusively used C++.
I am a lot of things, and one of them is ambitious in all the wrong ways. Last year, I made it onto the global leaderboard just once and I intended to improve on that.^{8} To better my chances of doing so, I did the bare minimum of preparation and composed a few simple scripts automating repetitive, time-consuming site interactions. Nothing fancy, just one for submission, one for fetching the input and copying and opening a boilerplate-ready base file for each day, and a very simplistic Makefile tying them together.
Disclaimer
All explanations that follow are also reproduced in a github repository, next to the code they pertain to. So if you are only interested in a solution for one specific day, it might be better to just head over there and browse to the corresponding folder.
They also, obviously, spoil the fun of solving the problems on your own. So, if you happen to know me personally and your name starts with H, J, K, M, R or S, or if you simply wish to, as opposed to being forced to, do the puzzles on your own first, skip what is written below and return to compare once you are ready ;) You may consider me sorely disappointed if you look at the solutions before your own attempts.
WHAT FOLLOWS CERTAINLY DOES NOT CONTAIN THE BEST CODE I HAVE EVER WRITTEN. I solved all of the puzzles at the time they appeared, which was 6 am where I live, some 8 to 12 hours ahead of my normal waking time xD
Day 1
Part 1
The first puzzle was simply finding two elements in a list of numbers which sum to another number (in our case the then-current year, 2020).
This can, of course, be solved trivially in quadratic time, as can be seen in 01_nested_loop.cpp:
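The actual code lives in the repository; as a rough, self-contained sketch of the same idea (the function name and product-returning signature are mine, chosen because the site ultimately asks you to submit the product of the pair), it amounts to:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Quadratic approach: try every pair and return the product of the
// first pair summing to `target` (-1 if none exists).
std::int64_t two_sum_product(const std::vector<int>& nums, int target) {
    for (std::size_t i = 0; i < nums.size(); ++i)
        for (std::size_t j = i + 1; j < nums.size(); ++j)
            if (nums[i] + nums[j] == target)
                return static_cast<std::int64_t>(nums[i]) * nums[j];
    return -1;
}
```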
If we want to get fancy, we can sort our input in O(n log n) time, iterate over it and find the potentially corresponding number in O(log n) time, yielding a total running time of O(n log n) again:
Yes, binary_search in the standard library returns a boolean. This is one of the rare occasions where that is actually exactly what we want. For all the others, there is lower_bound and upper_bound.
That is rather nice, but if we are willing to sacrifice just a tiny little bit of additional memory (O(m), with m being the searched-for sum) and assume all numbers are nonnegative, we could even go linear:
Without the assumption of nonnegativity, which held for my input but might not for everyone's, we can still do a variation of this, but would have to use significantly more memory and/or rely on some sort of hashset.
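Under those assumptions, a minimal sketch of the linear variant could look like this (again, names are mine, not necessarily the repository's):

```cpp
#include <cstdint>
#include <vector>

// Linear-time sketch under the stated assumptions: all numbers are
// nonnegative and no larger than `target`. A boolean table of size
// target+1 remembers every value seen so far; for each new number x we
// check whether its complement target-x was already seen.
std::int64_t two_sum_linear(const std::vector<int>& nums, int target) {
    std::vector<bool> seen(target + 1, false);
    for (int x : nums) {
        if (x < 0 || x > target) continue;  // outside the assumed range
        if (seen[target - x])
            return static_cast<std::int64_t>(x) * (target - x);
        seen[x] = true;
    }
    return -1;  // no pair found
}
```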
Part 2
For a tiny bit more difficulty (and, were the numbers bigger, a significant spike in runtime for the trivial solutions), the second part asks us to find three numbers that sum to 2020. Luckily for us, the list is reasonably short and even the now cubic nested for loops produce the correct result in a very short time (a very small fraction of a second on even my old machine):
Both of our slightly more sophisticated methods for part 1 can relatively easily be adapted to this new constraint. For the one applying sorting, simply add yet another for loop, yielding an O(n^2 log n) runtime. Nothing to write home about, but a step up from cubic…
Again, we can get rid of that tiny annoying log(n) factor, given the same assumptions and using the same technique as above. Adding another for loop results in the following code, for “only” quadratic time.
More than sufficient for the small input set, yet I really did not like stopping here.
Whilst it remains an open research problem whether or not the general 3SUM problem is solvable in subquadratic time, we do have some additional constraints. Our input consists of only nonnegative integers smaller than or equal to our target value. Wikipedia claims that for integers in the range -n to n there exists a solution in O(n log n) utilizing a Fast Fourier Transform on the input set represented as a bit vector, but unfortunately I failed to understand how exactly such a solution could be constructed.
So I tried to find a different, better solution, gave it a tiny bit more thought and my contemplations yielded the following code:
The runtime is bounded fairly simply: lower and higher start n values apart from each other and in each iteration either one is moved towards the other or a solution is found. As such, the while loop runs at most n times. The most expensive operation within is the binary search, running in log n time. The overall complexity is therefore O(n log n) for sorting plus O(n log n) for the loop, yielding a glorious O(n log n) in total!
That is all fine and dandy, it even worked for my input and the few others I could find and try it on at the time, but that alone is far from sufficient to prove it could work for any input. As such, I tried to sketch a proof and in so doing quickly had to come to a shattering realization: this is completely wrong and ridiculously stupid!
For instance, it fails with the following simple input:
1 9 1009 1010 2000
Nonetheless, I considered this failure educational and working out why it works for some inputs and how exactly it fails so spectacularly on many others can be quite instructive, which is why I still showcase it here.
Day 2
Part 1
The second day was algorithmically far easier (and imho less interesting) than the first one, as the only difficulty here was avoiding typos and parsing the input. You got a list of words and policies which described how to determine if a given word was valid:
1-3 p: word
Count the occurrences of the given letter (p) in the given word (word) and check that it is present between the lower (1) and upper (3) bound number of times. We then had to count the number of valid words, i.e. words that satisfied their own policy.
Really, this was mostly just a typing speed challenge, with standard algorithms doing the bulk of the work. I simply defined a struct describing one such rule/word pair:
Followed by overloading operator>> (lazily, without any error handling), so I could extract properly typed input from stdin:
That allowed me to conveniently combine istream_iterator and std::count_if to solve the actual problem:
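Pieced together, a hedged reconstruction of that day-2 pipeline (member names are my own) might look like:

```cpp
#include <algorithm>
#include <iterator>
#include <sstream>
#include <string>

// One policy/word pair, e.g. "1-3 a: abcde".
struct entry {
    int low, high;
    char letter;
    std::string word;
};

// Lazy extraction without error handling, mirroring the approach
// described above; `dash` and `colon` just swallow the separators.
std::istream& operator>>(std::istream& in, entry& e) {
    char dash, colon;
    return in >> e.low >> dash >> e.high >> e.letter >> colon >> e.word;
}

// Count entries whose letter occurs between low and high times.
long count_valid(std::istream& in) {
    return std::count_if(
        std::istream_iterator<entry>(in), std::istream_iterator<entry>(),
        [](const entry& e) {
            auto n = std::count(e.word.begin(), e.word.end(), e.letter);
            return e.low <= n && n <= e.high;
        });
}
```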
Part 2
Part 2 was even more trivial, as the interpretation of the policies was changed. Now a valid word has to have the given letter at exactly one of the given positions. Simply change the validity checking lambda and you get the following code:
The only potential issue, and obviously one I stumbled upon and which cost me a minute due to an incorrect answer, is that position, of course, starts at 1 here, not 0.
Day 3
Part 1
Yay, may the ascii art begin! This was the first of the classic 2d-grid-based puzzles this year! It was also the first problem where what little preparation I had done came in handy, having predefined point and vector types.
Given a map consisting of open spaces (represented as '.') and trees (represented as '#'), all we had to do was check how often one would hit a tree when going down a certain slope. Beginning in {0,0}, just keep adding the vector {3,1} until you hit the bottom and count the elements with value '#'. To ensure you always reach the bottom, the grid extends infinitely to the right, simply looping around in classic pacman fashion, which I implemented via a simple % on the x value:
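A minimal, self-contained sketch of that traversal (without my predefined point and vector types) might be:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Walk the grid from the top-left corner in steps of {right, down},
// wrapping horizontally with %, and count every tree ('#') we land on.
long count_trees(const std::vector<std::string>& grid, std::size_t right,
                 std::size_t down) {
    long trees = 0;
    std::size_t x = 0;
    for (std::size_t y = 0; y < grid.size(); y += down) {
        if (grid[y][x % grid[y].size()] == '#') ++trees;
        x += right;
    }
    return trees;
}
```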
Part 2
Part 2 was simply doing the same thing with multiple slopes and multiplying the results. Nothing fancy, just moving the counting to a lambda and a small range-for does the trick:
Day 4
Part 1
Given a list of key:value pairs, separated by empty lines, we were first asked to simply check for the presence of various fields. Quite similar to the first day, this was mostly an exercise in getting parsing somewhat right, which, frankly, annoyed me a little at the time. As on that day, I first simply defined a type for the data:
followed by an overloaded operator>>:
We then define what it means to be valid:
and let count_if do all the work:
The different maps were separated by an empty line in between them, which made it rather tedious (and likely a bad idea) to parse as I did, by overloading operator>> on std::istream. I also, as usual, committed another ridiculously stupid mistake that cost me a good 5 minutes:
Note how I read lines until either std::getline fails or returns an empty line. That is because there are actually two ways a collection of related key-value pairs is terminated: with an empty line OR with the end of input. The problem occurs in the latter case. If std::getline fails, it sets, appropriately, the std::ios::failbit on the underlying stream. If I do not clear it, however, this indicates to the istream_iterator that extraction failed and the last entry will not be accepted.
Part 2
Now, in addition to merely checking for their presence, we were tasked with checking each field's validity based on certain criteria. Again, not too difficult, just quite tedious and easy to get wrong, yielding somewhat unsightly code by necessity. Originally, I had a gigantic if-else block and repeated code, just to get it done as quickly as possible (faaar too slow for the leaderboard, of course), but looking at that code was the sort of torture I could not possibly subject my dear readers to, so I went kind of overboard and constructed a bunch of composable function objects that allowed me to define the criteria in an almost declarative style:
The business-logic ones are rather trivial and uninteresting. The ones used for composition can, however, prove educational:
all simply takes a bunch of callables and returns a new callable that utilizes C++17’s fold expressions to call each one of the given checkers in a chain connected via &&, i.e. returning true exactly if all of the given checkers would return true.
one_of does pretty much the same thing, just with ||, i.e. or, instead of and.
chain, as the other two, takes a collection of callables and returns another callable. This one, however, is constructed in a little more complicated fashion. First, we call a helper chain_impl. This is necessary to get recursion, which is otherwise not easily available in lambda expressions. The helper does one of two things:
- If there is only one callable left, it is returned.
- Otherwise, we construct a new callable that calls the first one and passes the result on to whatever the recursion will construct for all other functions.
Effectively, given functions a, b, c and d, this will construct an expression equivalent to d(c(b(a(input)))), i.e. it chains the calls, using the result of the first as input to the second and so on (similar to the use of pipes in Unix: a | b | c | d).
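As a sketch, assuming C++17, the three combinators could be written roughly like this (the repository's versions may differ in detail):

```cpp
#include <string>

// `all`: true exactly if every given checker returns true (&& fold).
template <typename... Fs>
auto all(Fs... fs) {
    return [=](const auto& x) { return (fs(x) && ...); };
}

// `one_of`: true if at least one checker returns true (|| fold).
template <typename... Fs>
auto one_of(Fs... fs) {
    return [=](const auto& x) { return (fs(x) || ...); };
}

// `chain`: feed the result of each callable into the next,
// i.e. chain(a, b, c, d)(x) computes d(c(b(a(x)))).
template <typename F>
auto chain_impl(F f) { return f; }

template <typename F, typename... Fs>
auto chain_impl(F f, Fs... fs) {
    return [=](auto x) { return chain_impl(fs...)(f(x)); };
}

template <typename... Fs>
auto chain(Fs... fs) { return chain_impl(fs...); }
```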
Day 5
Part 1
This one was a bit more interesting and certainly educational. We were given a lovely description of how a certain airline numbers their seats by repeatedly subdividing a range, throwing away either the lower or the upper half with each incoming letter. Whilst reminiscent of binary search, it is basically even simpler. The description mirrors exactly the classical definition of what a bit actually is: something that halves your prior uncertainty. As such, the strings proved to be nothing more than a fancy encoding for a 10-digit binary number, MSB first. With this realization, completing the task, that of finding the highest id, was trivial:
Note how we cannot use max_element, as that sadly requires a forward iterator.
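The decoding insight itself can be sketched as a standalone helper (separate from the streaming loop just described):

```cpp
#include <string>

// Treat the boarding pass as a binary number, MSB first: B and R select
// the upper half of the remaining range (bit 1), F and L the lower (bit 0).
int seat_id(const std::string& pass) {
    int id = 0;
    for (char c : pass)
        id = id * 2 + (c == 'B' || c == 'R');
    return id;
}
```

The puzzle's own example, "FBFBBFFRLR", decodes to id 357.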
Part 2
Given the same list, we were now tasked with finding the one and only missing element. The list does not necessarily start or end at the lowest and highest possible id, but our missing element was declared to be somewhere in between. This could, again, be solved in more than one way, with upsides and downsides in runtime, memory usage and readability. First, the more readable solution, at an additional O(n) memory (for storing all ids) and O(n log n) runtime (for sorting them):
Already pretty decent, but we can do better by knowing and abusing a tiny bit of bit magic, specifically the following lovely properties of the XOR operation:
A ^ A == 0
I.e. a number xor’ed with itself disappears. Further:
A^B == B^A
That is, the operation is commutative: the order of its operands does not matter. And finally:
A^0 == A
A number xor’ed with 0 does not change.
All of this can be combined to imply this:
(A^B^C^E...) ^ (A^B^C^D^E...)
will be equivalent to
(A^A) ^ (B^B) ^ (C^C) ^ (E^E) ... ^ D
meaning the only one left over that is not zeroed out will be our missing element! This yields the following beautifully efficient solution, which no one without knowledge of the trick has any realistic hope of grasping:
Knowing the trick, it does seem rather straightforward though, and it has only a constant memory footprint and linear runtime, which is just fabulous ;)
Accessibility can, however, be somewhat regained, whilst not only keeping but even improving ever so slightly on this runtime, by recognizing there is a simpler pair of operations with very similar properties: + and -.
A+0 = A
A-A = 0
A+B = B+A
Meaning, of course,
(A+B+C+D+E...) - (A+B+C+E...) = (A-A) + (B-B) + (C-C) + D + (E-E)... = D
Simply summing all elements and subtracting that sum from what it would be if none were missing yields exactly the one missing element! Knowing a bit of history and Gauss's famous formula for computing the sum of all integers up to a given bound ((n*n+n)/2), we can skip the second loop and directly compute the desired sum in closed form:
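A sketch of this closed-form variant, generalized to a range that does not start at zero (helper names are mine):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Gauss: sum of all integers from 1 to n, i.e. (n*n + n) / 2.
std::int64_t gauss(std::int64_t n) { return n * (n + 1) / 2; }

// The expected sum over the full range [min, max] minus the actual sum
// is exactly the single missing id.
std::int64_t find_missing(const std::vector<int>& ids) {
    std::int64_t lo = ids.front(), hi = ids.front(), sum = 0;
    for (int id : ids) {
        lo = std::min<std::int64_t>(lo, id);
        hi = std::max<std::int64_t>(hi, id);
        sum += id;
    }
    return (gauss(hi) - gauss(lo - 1)) - sum;
}
```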
Day 6
Part 1
Given a number of “yes-or-no questions”, each identified by a single character, we were provided with a list of “answers”, i.e. lines containing a character if the corresponding question was answered in the affirmative and not containing it otherwise.
Those answers were separated into groups by empty lines and our first task was to count how many distinct “questions” (i.e. characters) were present in each group. We then should return the sum of those counts.
Again, as seems to be a theme in those early days, the most difficult, or rather most time-consuming, part here is parsing question and input correctly, with the actual task being simple counting. Here is how I did it:
Nothing much to explain here. I simply read the input, line by line, and count the frequency of all characters in an array indexed by the character's value, just like one would do when implementing counting sort. After an empty line, I finalize the group by counting how many entries are not 0, meaning they appeared at least once, and reset the counts.
Using an array to count is, of course, a bit of a waste of space, but I deemed it acceptable as the resulting code is succinct, readable and efficient. It also turned out to be useful for part 2.
Part 2
The second part did not, in fact, significantly ramp up the difficulty. Instead of counting how often each answer was given in total, we should only count those answers in a group which were given by everyone in it. As I already counted the frequency, all that was left was determining the group size:
Followed by simply changing a “!=0” to a “==group_size”:
It might have broken if anyone had enthusiastically answered yes more than once on the same question, which, to my great relief, was not the case.
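Both parts of the day can be condensed into one compact sketch of the counting-array idea (a reconstruction, not the repository's exact code):

```cpp
#include <array>
#include <sstream>
#include <string>
#include <utility>

// Per group: tally each answer in an array indexed by character, then
// count entries that are nonzero (anyone answered) or equal to the
// group size (everyone answered). Returns {part 1, part 2}.
std::pair<int, int> count_answers(std::istream& in) {
    int anyone = 0, everyone = 0, group_size = 0;
    std::array<int, 26> freq{};
    auto finalize = [&] {
        if (group_size == 0) return;  // guard against double finalization
        for (int& f : freq) {
            anyone += f != 0;
            everyone += f == group_size;
            f = 0;
        }
        group_size = 0;
    };
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty()) { finalize(); continue; }
        ++group_size;
        for (char c : line) ++freq[c - 'a'];
    }
    finalize();  // the last group ends with EOF, not a blank line
    return {anyone, everyone};
}
```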
Day 7
Part 1
Finally a graph problem! Given a number of bags, each of which contains a number of different bags, first count all those that will, somewhere within their nested contents, contain a “shiny gold” one. Before tackling the more interesting part here, we unfortunately do have to get some annoying parsing out of the way, which, as so frequently, took the bulk of my required time.
Which bags contain how many of which other bags was given as a string of this form:
vibrant plum bags contain 5 faded blue bags, 6 dotted black bags.
Or sometimes:
faded blue bags contain no other bags.
It ought to be trivial, but I am embarrassed to say that this year's problems really drove home the point that my skills in the parsing area are severely lacking and I have some serious brushing up to do. Some higher proficiency with regular expressions, for instance, might have tremendously simplified the following abomination I came up with at the time:
Don’t look at it too closely. Great! Now that we have this, let’s return to the root of our problem. Or rather the reverse. Interpreting the “contains” relation as edges in a graph, we can have one node for each bag type and an edge to all other bags in which it might be contained:
For instance, this description:
green bags contain 2 red bags, 2 black bags.
red bags contain 1 blue bag.
blue bags contain 1 shiny gold bag, 3 black bags.
black bags contain no other bags.
shiny gold bags contain 7 gold bags.
would yield the following:
"red" < "blue" < "shiny gold" < "gold"
 ^
v 
"green"<"black"__
Thanks to the eldritch horror above, such a reversed graph can be constructed easily, representing it as a map from node name to list of children names:
Given that, all we need to do is walk each possible path away from our starting node (“shiny gold”) and count the total number of nodes we pass by (remembering which we have already seen and counted on the way):
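Given such a reversed map, the walk can be sketched like this (an iterative version; the repository's code may well recurse instead):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

using reversed_graph = std::map<std::string, std::vector<std::string>>;

// Follow every "is contained in" edge reachable from `start`,
// counting each bag only once via the `seen` set.
int count_containers(const reversed_graph& g, const std::string& start) {
    std::set<std::string> seen;
    std::vector<std::string> todo = {start};
    while (!todo.empty()) {
        std::string bag = todo.back();
        todo.pop_back();
        auto it = g.find(bag);
        if (it == g.end()) continue;
        for (const std::string& parent : it->second)
            if (seen.insert(parent).second) todo.push_back(parent);
    }
    return static_cast<int>(seen.size());
}
```

Running it on the small example above ("shiny gold" is contained in "blue", which is contained in "red", which is contained in "green") yields 3.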
Part 2
Wonderful! Now for part 2 we were tasked with doing basically the reverse. Instead of counting how many different kinds of bags can contain a shiny gold one, we are asked how many bags are contained within. This requires us to consider one more piece of information given in our data, the number telling us how often each kind of bag is contained within. Reversing the edges from part 1 and labeling them with a “cost” representing their quantity yields a graph like this one:
"red" 1> "blue" 1> "shiny gold" 7> "gold"
^ 
2 3
 
"green"2>"black"<__
Which we will, once again, simply walk recursively from our starting location, multiplying the results by the given quantities:
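That recursive walk, under a hypothetical graph representation mapping each bag to its (count, child) pairs, can be sketched as:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

using bag_graph =
    std::map<std::string, std::vector<std::pair<int, std::string>>>;

// Total bags inside `bag`: for every child, count the copies of the
// child itself plus everything each of those copies contains.
long bags_within(const bag_graph& g, const std::string& bag) {
    auto it = g.find(bag);
    if (it == g.end()) return 0;
    long total = 0;
    for (const auto& [count, child] : it->second)
        total += count * (1 + bags_within(g, child));
    return total;
}
```

On the small example above, a "shiny gold" bag holds 7 bags and a "green" bag holds 28 in total.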
Day 8
Part 1
The interesting emulation problems have finally started! I really loved all the intcode problems in 2019, so this was getting my hopes up!
We started out slowly though, only having to emulate a simple program which executes one of just three types of instructions:
- acc, adding to the accumulator
- jmp, a relative jump
- nop, my favorite thing, doing nothing
As given, the program contained an infinite loop and we were tasked with determining its state once we enter the loop, i.e. once we execute an instruction for the second time. Without further ado, let's jump right into the code.
First, we define structs for our state and instructions:
Parsing them is thankfully easy:
As is executing them, either adding to the accumulator or adding to the program counter or alternatively doing nothing at all:
To solve our problem, we first read all instructions into a vector, set up a pristine state with accumulator = pc = 0 and begin executing. After each instruction we increment the program counter to determine the next one, whilst keeping track, in a simple hashset, of all positions we have encountered so far. Upon reaching one that was previously seen, we declare the cycle detected and break our loop:
Lovely, let's move on to…
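Put together, a hedged sketch of that emulator loop (struct and function names are mine, not necessarily the repository's):

```cpp
#include <string>
#include <unordered_set>
#include <vector>

struct instruction {
    std::string op;  // "acc", "jmp" or "nop"
    int arg;
};

// Execute until the program counter revisits a position;
// return the accumulator at that moment.
int accumulator_before_loop(const std::vector<instruction>& program) {
    int acc = 0, pc = 0;
    std::unordered_set<int> seen;
    while (seen.insert(pc).second) {
        const instruction& ins = program[pc];
        if (ins.op == "acc") { acc += ins.arg; ++pc; }
        else if (ins.op == "jmp") { pc += ins.arg; }
        else { ++pc; }  // nop does nothing
    }
    return acc;
}
```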
Part 2
Now that we are able to emulate our loop, we are supposed to break it. There is one single nop or jmp instruction that, if flipped to the opposite, will cause our program to terminate by making the program counter reach the end of memory.
The most trivial and perfectly sufficient idea is to simply brute force it. Iterate over every instruction and if it is a jmp or nop, toggle it and try running our program from part 1. If a loop was detected, we try the next one. If not, we have found the instruction to change:
To keep it readable, I outsourced loop detection and gave it a nice, expressive return type:
Whilst this is perfectly sufficient in our case and terminates within a small fraction of a second even on my old machine, it is not exactly ideal. Worst case, its asymptotic complexity is quadratic. Consider this example:
nop 1
nop 1
nop 1
....
jmp program_size
It would run over each nop, run the complete program to the very end, detect the loop and try the next one. That is, each attempt before the last executes n-1 nop and 2 jmp instructions and we need to do that n-1 times. Quite terrifying. So let's do better:
Looks a bit more complex than our previous attempt, but the running time turns out to be linear. Allow me to first explain my reasoning:
Assuming we knew for each instruction if beginning execution from it will reach the end, we could execute our code once and simply change the first instruction we encounter that could not reach the end but could if toggled. Doing such a thing would run in linear time, as no loops are entered and each instruction is executed at most once. That is exactly what you see happening in the code above.
This, however, delegates the hard problem to figuring out which instructions can and cannot reach the end:
I am iterating over all instructions once to determine from where each instruction can be reached (compute_jump_sources). Then I iterate again and mark which instructions can reach the end. Turns out that is not quite so hard either. We have to realize the following simple facts:
- A non-jmp instruction can reach the end if it is at the end or if the one following it could reach the end
- A jmp instruction can reach the end if its target is the end or can itself reach the end
As such, we start with the only one we know about, the end, and work our way upwards. If it's not a jump, we simply mark it with the previous result. If it's a jump and we know about its destination, we continue with the destination's value. If it's a jump and we don't know about its destination, we can't proceed and break. Furthermore, if the current instruction can be reached by a jump, we recurse and work our way upward from its source, too.
Runtime analysis of this might seem like a mess, but our seen vector rushes in to save the day: there are at most n jump sources in total (as there are at most n jmp instructions), so recurse can be called at most n times in total and we have at most n loops. Each iteration of the loop either breaks or sets a value in the seen vector to true which was not set previously. As each loop can of course only break once, the total number of times the first can happen is n. As seen has exactly n elements and we only enter an iteration and set a value if it was not set, the total number of times the second can happen, over all loops, is also n.
Just lovely, isn’t it?
Day 9
Part 1
Surprisingly easy given the previous two assignments, our quest on this day was simply adding up numbers. We were given a list of them and should find the first one that cannot be represented as a sum of two of the 25 preceding ones. The more perceptive of my readers have probably realized by now that, in the grand tradition of instructional spiraling, this is but a slightly more complicated variation of what we did on day 1!
As such, a trivial, yet horrific in time complexity, solution is to brute force our way through the problem with 3 ugly nested loops:
Funnily enough, that is still fast enough and if we analyzed its complexity at university level, we would call it O(n), as 25 is a constant and so is 25^2. 375 * n is still O(n), so we are golden xD. Well, as you might have guessed from my somewhat snappy tone here, I don’t really like that much. We also keep all of our range in memory right now, which could be avoided.
We only need the last 25 numbers, so let's start with that:
Yeah, as there is none in the standard library^{9}, implementing a small, bad and severely broken ring buffer might be ever so slightly overkill, considering the input list is a mere 1000 numbers and the approach above runs in a fraction of a fraction of a fraction of a second, but who am I if not one to always entertain the notion of severely overengineering the most trivial of tasks?
Utilizing this data structure, our solution changes to look like this:
Our theoretical runtime can be reduced by using the exact same methods employed on day 1: either via sorting or via wasting a potentially huge amount of memory.
Let’s do sorting first. We can’t sort in place, as the original order dictates which elements will be replaced in succeeding steps, which is why we have to copy them into an additional buffer first:
We then walk this buffer once, binary-searching for the missing element:
Lovely, isn’t it? We reduced our complexity down to O(n) * O(25 log 25), which is, asymptotically speaking, completely irrelevant. If the constant were ever increased, however, it could prove useful to reduce it even further.
Last time, we had a convenient constraint on how big our numbers could get, which was completely thrown out for this task. Numbers here can, and apparently do, get arbitrarily big, so it's kind of impossible to allocate a big enough array in advance. Whilst we could determine this bound in linear time (by simply scanning the input once and saving max and min values), a simple bitset won't do, as our sliding window might contain more than one copy of the same value and we have to keep track of when all of those fall out of the window, i.e. we have to count. Considering my solution was >20 million, keeping one integer for each possible value in the input constitutes unfathomable abuse of memory even for the likes of me, so I bailed out and decided on using a hashmap instead, which will, at least on average, still provide me with O(1)-ish access time:
Part 2
Surprisingly, even easier than part 1 this time. Take the number we just obtained and find a contiguous set of size >= 2 in our data that sums to it. We can do this easily by keeping a running set, always adding the next number and, if we ever run over our target, erasing from the start until we are <= to it again:
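A sketch of that running-window idea (names are mine; `target` would be the invalid number found in part 1, and the puzzle wants the smallest plus the largest number of the found range):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Slide a window [lo, hi] over the numbers: grow it to the right,
// shrink it from the left whenever the sum overshoots the target.
std::int64_t weakness(const std::vector<std::int64_t>& numbers, std::int64_t target) {
    std::size_t lo = 0;
    std::int64_t sum = 0;
    for (std::size_t hi = 0; hi < numbers.size(); ++hi) {
        sum += numbers[hi];                    // always add the next number
        while (sum > target && lo < hi)
            sum -= numbers[lo++];              // erase from the start
        if (sum == target && hi - lo + 1 >= 2) {
            auto [mn, mx] = std::minmax_element(numbers.begin() + lo,
                                                numbers.begin() + hi + 1);
            return *mn + *mx;                  // the puzzle's "encryption weakness"
        }
    }
    return -1; // no such range exists
}
```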
Day 10
Part 1
Travel power adapters, oh my. The cover story is a cute one, but the actual problem is rather straightforward. Given a number of “joltage” adapter output values, connectable if they are between 1 and 3 apart and the second is higher than the first, figure out how often they are 1 apart and how often they are 3 apart if all were connected together. Since we can never connect a higher to a lower adapter, we simply sort and count the difference between every two adjacent elements:
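A minimal sketch of that counting pass (names are mine; the outlet counts as 0 and the device always sits 3 above the highest adapter, which is why `threes` starts at 1):

```cpp
#include <algorithm>
#include <vector>

// Sort the adapters, then count gaps of exactly 1 and exactly 3 between
// consecutive joltages; the answer is the product of both counts.
int gap_product(std::vector<int> adapters) {
    std::sort(adapters.begin(), adapters.end());
    int ones = 0, threes = 1; // the device adds one final gap of 3
    int previous = 0;         // the charging outlet
    for (int joltage : adapters) {
        if (joltage - previous == 1) ++ones;
        else if (joltage - previous == 3) ++threes;
        previous = joltage;
    }
    return ones * threes;
}
```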
Part 2
That was ever so slightly more difficult. My first hunch was a simple recursive solution, counting the options:
Turns out that is not ideal for a number of reasons, most obvious among them runtime and repeated work. If, for instance, one path skips the first adapter and takes the second and third, whilst another takes the first, skips the second and takes the third, the totals may differ, but what follows afterwards is exactly the same. Nonetheless, the code above will compute it twice. Every subpath can be reached in a multitude of ways and I recompute it every. single. time. Let's fix that:
I employed the time-honored tradition of slapping a cache onto my recursion to avoid duplicating work. This turns out to be more than fast enough for our problem and a result is available within a fraction of a second again. Nonetheless, we could have done even better, even if I failed to figure that one out myself in the heat of the moment. It was pointed out to me on the #include discord and is visible in this solution. Instead of going forward and counting what could follow, we can, true to the spirit of dynamic programming, go backward and count the paths we have already seen. Basically the same idea, but we can save the hashmap and know ahead of time how many entries we will need.
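The backward-counting variant might be sketched like this (my own reconstruction of the idea, not the linked solution):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// ways[i] counts the arrangements from adapter i up to the device; each
// adapter sums the counts of the up-to-three adapters reachable after it.
std::int64_t count_arrangements(std::vector<int> adapters) {
    std::sort(adapters.begin(), adapters.end());
    adapters.insert(adapters.begin(), 0);     // the outlet
    const int n = static_cast<int>(adapters.size());
    std::vector<std::int64_t> ways(n, 0);
    ways[n - 1] = 1;                          // the highest adapter connects to the device
    for (int i = n - 2; i >= 0; --i)
        for (int j = i + 1; j < n && adapters[j] - adapters[i] <= 3; ++j)
            ways[i] += ways[j];
    return ways[0];                           // paths from the outlet
}
```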
Wonderful, but checking the lovely reddit, you will figure out there is an even better, if far more mathy^{10} and slightly less intuitive way running in what is apparently linear time. Turns out there is a connection to the tribonacci sequence.
That was fun and I almost got near the global leaderboard for once: rank 508, my best so far this year.
Day 11
Part 1
Cellular automata! Quite fitting, to honor John Conway's legacy, in the sad year of his passing, with another one of those. The cover story here is about seats and people admirably practicing social distancing. If they have too many neighbors, they move away. Given an initial state and a simple set of rules, we had to simulate the seating constellations until they reach a stable, unchanging state. Rather straightforward, though edges, corners and the fact that all cells update simultaneously complicate matters slightly. Here is one update step:
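In sketch form, such a step might look like this ('.' floor, 'L' empty seat, '#' occupied; the thresholds are the puzzle's, the code reconstruction is my own):

```cpp
#include <string>
#include <vector>

// One simultaneous update: count occupied direct neighbors for every
// seat and apply the rules to a copy of the grid.
std::vector<std::string> step(const std::vector<std::string>& grid) {
    std::vector<std::string> next = grid; // copy, so all cells update "at once"
    const int rows = static_cast<int>(grid.size());
    const int cols = static_cast<int>(grid[0].size());
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x) {
            if (grid[y][x] == '.') continue; // floor never changes
            int occupied = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    const int ny = y + dy, nx = x + dx;
                    if (ny >= 0 && ny < rows && nx >= 0 && nx < cols
                        && grid[ny][nx] == '#')
                        ++occupied;
                }
            if (grid[y][x] == 'L' && occupied == 0) next[y][x] = '#';
            else if (grid[y][x] == '#' && occupied >= 4) next[y][x] = 'L';
        }
    return next;
}
```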
We create a copy of all cells, and iterate over all coordinates. For each of those we iterate over all valid neighbors and count how many of them are occupied, setting the new value in our copy according to the given rules. We can’t set it in our current grid, as all cells should act as if they updated simultaneously and modifying the grid step by step would change results for later processed cells which contain already updated ones as neighbors.
Part 2
The difficulty was increased ever so slightly by not only considering direct neighbors but everything within ones line of sight. Nothing a simple loop can’t solve, thankfully:
Day 12
Part 1
Given a number of instructions for moving a ship around on a simple 2-dimensional coordinate system (but with cardinal directions), we had to correctly determine which position we end up in. The minor difficulty arises from some directions not being absolute, but relative to the ship's current orientation, which we also have to keep track of. Let's not get ahead of ourselves and first declare the possible instructions and define how they are parsed:
I already did some minimal preprocessing here. For one, I chose to save the directions as 2d vectors for easier addition.
Furthermore, I noticed all turns happened in multiples of 90 degrees, so instead of saving the angles, I remembered how many 90 degree turns they correspond to. 90 degree turns are easy:
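For reference, the arithmetic of a single 90 degree turn of an integer 2D vector (a sketch; the sign convention depends on how your axes point):

```cpp
// With x pointing east and y pointing north, a clockwise quarter turn
// maps (x, y) to (y, -x) and a counter-clockwise one to (-y, x).
struct vec2 { int x, y; };

vec2 rotate_cw(vec2 v)  { return {v.y, -v.x}; }
vec2 rotate_ccw(vec2 v) { return {-v.y, v.x}; }
```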
Whilst it could maybe have been solved more elegantly with complex multiplication, this works perfectly fine for now, and I really like the clarity of this code, which is readable without much of a mathematical background (although it's always nice to have that nonetheless).
Given this scaffolding, all that is left is iterating over the instructions and modifying either position or direction accordingly:
Part 2
Part 2 on this day was nearly identical, only the “move” instructions were changed to not move the ship directly but to modify its movement vector instead, which was applied on the “move forward” command. The turn commands were also changed to turn the movement vector. Again, it could have been simplified with complex numbers, but it worked out fine with my trivialized approach, the only change being one case of instruction handling:
Day 13
Part 1
Given a starting time and a number of bus ids (which also represented their round trip times), we were tasked with finding the first bus to arrive at or after the starting time. For this one, all busses started out at 0, meaning we only had to determine the first multiple of each id >= the starting time and choose the minimum of those. To do so, we compute the ceil (i.e. rounded up) result of dividing the starting time by the id and subsequently multiply the id by that result:
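The arithmetic in sketch form (names are mine):

```cpp
#include <cstdint>

// The first multiple of `id` at or after `start` is id * ceil(start / id);
// (start + id - 1) / id is the usual integer ceil-division trick.
std::int64_t first_departure(std::int64_t start, std::int64_t id) {
    return id * ((start + id - 1) / id);
}
```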
Part 2
It really pays off to know a tiny bit of linear algebra from time to time ;) Without it, this one would be really, really hard.
Given not only the periods, but a desired offset for each id, we had to determine the first time at which each bus arrived exactly its offset later. In other words, we had to solve a series of congruences of the form
x ≡ a (mod id0)
x ≡ b (mod id1)
x ≡ c (mod id2)
…
That is a case for the famous Chinese remainder theorem!
As such, we first need a way to compute modular multiplicative inverses, which can be done with the extended euclidean algorithm:
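A compact sketch of that (tracking only the one Bézout coefficient we need; the result is normalized into [0, m), and a and m are assumed to be coprime):

```cpp
#include <cstdint>

// Extended Euclidean algorithm: find x with a*x + m*y = gcd(a, m) = 1;
// x mod m is then the modular multiplicative inverse of a.
std::int64_t mod_inverse(std::int64_t a, std::int64_t m) {
    std::int64_t old_r = a, r = m;
    std::int64_t old_x = 1, x = 0;
    while (r != 0) {
        const std::int64_t q = old_r / r;
        std::int64_t tmp = old_r - q * r;
        old_r = r; r = tmp;
        tmp = old_x - q * x;
        old_x = x; x = tmp;
    }
    return ((old_x % m) + m) % m; // normalize into [0, m)
}
```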
For each id, we compute the modular inverse of it with respect to the product of all others and multiply it with that product and the desired offset. Our result is the sum of doing that for each id, modulo the product of all ids. Unfortunately, the offsets are negative in our case and modulo with negative numbers is, ummm, let's say special in C++, so I used an expensive loop instead:
Day 14
Part 1
This time, we were supposed to trace modified writes to a special memory. Whenever a value is written, it is first modified based on a simple bitmask: bits are either set to 0 or 1 or, in case of an X, left as they were. For example, given the following input:
mask = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX1XXXX0X
mem[8] = 11
memory at address 8 should not simply receive the value 1011 (i.e. 11 in binary): the mask adds a 1 bit at the 7th position and changes the 1 at the second position to a zero, yielding 1001001 (which is 73 in decimal).
My implementation simply constructed two masks, one to & with (representing the X’s) and one to | with (representing everything else), for those operations:
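Constructing them could look roughly like this (a sketch; `build_masks` and the output parameters are my naming):

```cpp
#include <cstdint>
#include <string>

// Walk the mask string left to right: X positions become 1-bits in the
// AND mask (keep the original bit), '1' positions become 1-bits in the
// OR mask (force the bit), and '0' positions stay 0 in both (clear it).
void build_masks(const std::string& mask,
                 std::uint64_t& mask_and, std::uint64_t& mask_or) {
    mask_and = 0;
    mask_or = 0;
    for (char c : mask) {
        mask_and <<= 1;
        mask_or <<= 1;
        if (c == 'X') mask_and |= 1;
        else if (c == '1') mask_or |= 1;
    }
}
```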
Which were then applied to modify assignments:
memory[addr] = (value & mask_and) | mask_or;
As the address range was 36 bits and as such incredibly huge, whilst only a small number of cells could actually be used, I chose to represent it as a hashmap:
Part 2
Same input, same mask, but a very different interpretation. Now an X does not mean the given value, but any possible one, meaning we will have to iterate over all combinations. Furthermore, 0 means leaving the input untouched (the role previously occupied by X), and we now modify the address written to, not the value. As a first step, we create 3 masks (one to set to 1, one to leave untouched and one to try all combinations on):
Listing all options might seem complicated at first, but luckily for us, our languages and computers already have a beautiful system to iterate over all combinations of 0 and 1 for a given number of those strung together: counting! So we do just that and simply shift the relevant bits into their desired places afterwards:
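The counting trick might be sketched like this (names are mine; `base` is assumed to be the address with all fixed bits already applied and all floating bits cleared):

```cpp
#include <cstdint>
#include <vector>

// Enumerate all 2^k settings of the k floating bit positions by counting
// from 0 to 2^k - 1 and scattering the counter's bits into those positions.
std::vector<std::uint64_t> all_addresses(std::uint64_t base,
                                         const std::vector<int>& floating_bits) {
    std::vector<std::uint64_t> result;
    for (std::uint64_t i = 0; i < (1ull << floating_bits.size()); ++i) {
        std::uint64_t addr = base;
        for (std::size_t b = 0; b < floating_bits.size(); ++b)
            if ((i >> b) & 1)
                addr |= 1ull << floating_bits[b]; // shift bit b into place
        result.push_back(addr);
    }
    return result;
}
```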
To count bits and determine bit positions, I would have loved to use standard functions, but the <bit> header is only available from C++20 onwards and I was using only C++17 here. As such, I substituted my own inferior versions created for my chess engine, paulchen332.
Day 15
Part 1
Let’s play a number game. We start off by reading a bunch of given numbers and remembering the previous two times each was said. After they have been read once, the next number depends on the previous one: if it was spoken for the first time, we speak 0; otherwise, we speak the difference between the last two times it was spoken. That is all. Our task was to determine the 2020th number spoken. Rather cryptic instructions to read and it took me a few minutes to comprehend what was asked of us, but once that fog had cleared, the solution is rather trivial. First, define what we have to remember for each spoken number and a hashmap storing this piece of information:
And then simply simulate the process exactly as given:
Part 2
Nothing changed, we are just asked for a significantly higher number of turns. Seems difficult at first, but I took the lucky shot and tested if my previous solution worked fast enough. It turns out it completed within 8 seconds with a correct result, jumping in leaderboard position from 1700ish to 500ish (far from global again, of course). Nonetheless, I was certain we could do better here.
The first trick I could think of, as it appeared occasionally in problems of this kind, is detecting cycles, so I tried this and it turns out, at least for my numbers and my bound, there was no cycle ;_; So we had to get smarter than that… aaand after far too much time spent on my own attempts I began to google and figured out that:
- Given the right starting conditions, it is actually a named sequence with an entry in the OEIS.
- There is a numberphile video about it, which I highly recommend watching. And, of course:
- There is no known way to get at the entries faster than brute force. So anything asymptotically better than what I wrote for part 1 would be a minor mathematical breakthrough ;)
As such, all I could do was optimize what I was already doing. The first and most obvious choice is cutting down on unnecessary allocations, by preallocating enough elements into my hashmap:
This cut the runtime down to 5 seconds, which is quite a decent improvement for one line. Switching from a hashmap to a simple vector of 30000000 elements brought another similar improvement, getting the time down to just 2 seconds; switching the stored type from a wasteful std::size_t to int reduced it to 1.5 seconds. In the process, I also made my code look significantly more compact and beautiful than the horrid mess I showed you for part 1:
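That compact vector version might look roughly like this (a sketch with my own names, assuming all starting numbers are smaller than the number of turns):

```cpp
#include <vector>

// last_turn[n] stores the turn on which n was last spoken (0 = never).
// Any later number is a difference of two turns, so it stays below `turns`.
int memory_game(const std::vector<int>& starting, int turns) {
    std::vector<int> last_turn(turns, 0);
    int current = starting[0];
    for (int turn = 1; turn < turns; ++turn) {
        int next;
        if (turn < static_cast<int>(starting.size()))
            next = starting[turn];
        else {
            const int seen = last_turn[current];
            next = seen == 0 ? 0 : turn - seen;
        }
        last_turn[current] = turn; // record only after computing the successor
        current = next;
    }
    return current; // the number spoken on turn `turns`
}
```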
Hacking around a tiny bit more to get a more compact memory layout by stealing a bit from “last” instead of having a separate variable for “seen” got the time down to about 1 second, at which point I stopped bothering.
Day 16
Part 1
More data validation, yeahy! *sigh*
We have a bunch of tickets with a number of fields, all identified in a language we do not understand. Luckily for us, we also know which fields have to follow which rules, so we might have a shot at ordering them. But first, let’s make sure our data is valid by identifying fields that cannot be valid for any rule and count them.
Rules are given in the form
class: 1-3 or 5-7
which implies all entries of the “class” type have to be between 1 and 3 (inclusive) or between 5 and 7. It’s always exactly two non-overlapping ranges and the second always starts after the first. So here is how I represented and parsed the rulesets:
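Stripped down to the essentials, a rule might look like this (a sketch; parsing and the field name are omitted here):

```cpp
// A rule is two inclusive ranges; a value is valid if it falls into either.
struct rule {
    int lo1, hi1, lo2, hi2;
    bool valid(int value) const {
        return (value >= lo1 && value <= hi1)
            || (value >= lo2 && value <= hi2);
    }
};
```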
With the parsing out of the way, as not exactly infrequently, the real task is just a form of counting:
Part 2
Alright, now we get to the ever so slightly more interesting part: Identifying which column belongs to which field. The simplest way to do this is checking all options. The somewhat more clever one I could think of is to determine the first rule by finding one that can only be valid for one column. Afterwards, rinse and repeat:
First, we setup variables to keep track of which rules and fields we have already identified:
Determining if one and only one previously unassigned entry fits a given rule is done via this lovely lambda:
It checks all columns, skipping over the already assigned ones, and tests validity. If no valid option or more than one is found, we return nullopt; otherwise we return the index.
We then repeat it for all rules until everything becomes clear:
And that’s it for the day. It would, of course, fail spectacularly if more than one valid solution existed ;)
Whilst I am not generally too big a fan of the validation type puzzles, as they really mostly test if you are capable of correctly reading instructions and data, the second part today was amazing and a lot of fun!
Day 17
Part 1
We already had one Game of Life type puzzle this year, so seeing this one did not excite me too much at first. But in this tragic year it seemed needed and, reiterating the idea of spiraling, this one took the previous one up to 11. Or rather to 3 and eventually 4.
We start off with an infinite 3-dimensional “Conway Cube” with only a single slice initialized and are supposed to extrapolate its growth.
For this first task, I chose probably the worst possible representation of an expanding, very sparsely packed three-dimensional grid: a vector of vector of string.
Doing it like this made the update step incredibly unwieldy and tedious, as I had to take care to add two layers and resize every layer by 2, whilst keeping track of shifted indices and different bounds for the updated copy:
Certainly not the best code I have ever written, if you care for your eyes you should avert them.
Part 2
You thought 3 dimensions was fun? Let’s add another one and consider 4D hypercubes. My first surprisingly working solution simply extended the approach above logically. Instead of a vector of vector of string, I kept a vector of vector of vector of string and proceeded similarly to above, just one more nesting deeper. Quite the abomination, too horrendous to ever share with any living thing. I did, however, eventually see the error of my ways and recognized the problem description clearly screams for generalization of n dimensions, so generalize is what I did.
First of all, I needed arbitrarily dimensional vector and point structs
with appropriate operator overloads like this:
Computing vectors to all neighbors can be done with some simple recursion:
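That recursion can be sketched as follows (my reconstruction): build every offset vector with entries in {-1, 0, 1} and drop the all-zero one, which would be the cell itself.

```cpp
#include <vector>

// Recursively extend `current` one coordinate at a time until it has
// `dims` entries, collecting every combination that is not all zeros.
void collect(std::vector<int>& current, int dims,
             std::vector<std::vector<int>>& out) {
    if (static_cast<int>(current.size()) == dims) {
        for (int c : current)
            if (c != 0) { out.push_back(current); return; }
        return; // skip the all-zero offset
    }
    for (int d = -1; d <= 1; ++d) {
        current.push_back(d);
        collect(current, dims, out);
        current.pop_back();
    }
}

std::vector<std::vector<int>> neighbor_offsets(int dims) {
    std::vector<std::vector<int>> out;
    std::vector<int> current;
    collect(current, dims, out);
    return out; // 3^dims - 1 offsets
}
```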
Given all that, I needed a better, less wasteful representation of the grid/cube/hypercube/whatever, i.e. of infinite space in arbitrary dimensions. The one I chose was a simple set of active/occupied cells:
An unordered_set, i.e. a hashset, might have been more efficient than this tree-based one, but I really did not know how to define a good hash function for an arbitrarily large collection of integers, whilst an operator< is very natural.
The update function becomes much better than in part 1, even bearable to look at:
I first iterate over all active/set cells and add one to the neighbor count of each of their neighbors. I then iterate over those counts to create the new active cells according to the rules given. Almost beautiful ;)
Day 18
Part 1
Expression parsing and evaluation! Good thing I have written some very simple toy (and still simple, but not so toy) compilers before :). The first task is simply evaluating with no precedence at all, just considering parentheses and going left to right, which can easily be done in one pass.
First, we define how to parse a single operation:
Skip all whitespace and return operation::add or operation::multiply depending on the character seen (we only support addition and multiplication). Nothing fancy so far. Parsing an operand, however, already contains most of the logic:
If the operand is a simple number, we read and return it. Otherwise, we first recursively evaluate the subexpression describing it and then return its result. Evaluation itself can now be described in terms of the two functions above:
We first parse the left operand, followed by the operation and its right operand. Our result gets computed as the result of applying the operator to it and the right operand. While there are still more operations to process, we repeat the procedure with result as the new left operand.
Not the most pretty, not the most ugly, just simple enough. It gave the correct result on the first try and got me place 185 on the damn global leaderboard. So close, yet so far…
Part 2
Now we do have a precedence, just the opposite of what is commonly used. In hindsight, it might have been worth it to just swap the symbols, let some automatic parenthesizing program run over it, swap them again and use bc or something like it. Or copy the expressions to code and use custom types with overloaded operators. I believe many solutions worked exactly like that, taking advantage of already properly implemented precedence rules in languages and calculators. I, as I tend to do in life, went the long and tedious route:
It wasn’t really that long and tedious, though. All that had to be adjusted was the evaluate function, and the change was rather minimal and intuitive. Ignore subexpressions for a moment, as those are handled recursively by parse_operand anyway, and simply consider what is happening here:
1+2*3+4+5*6+7*9+8+7
That is interpreted as:
(1+2)*(3+4+5)*(6+7)*(9+8+7)
As you can see, we have a product of sums. What we can do, as such, is keep a running product, starting with 1 and a running sum, starting with the first operand. We then add to the sum until we encounter a multiply, at which point we multiply our running product by the sum and reset sum to zero.
With our example:
product = 1, sum = 1;
sum = 3
product = 1*3, sum = 0
sum = 3
sum = 7
sum = 12
product = 3*12, sum = 0
and so on. At the very end, we return the running product multiplied by the last sum. Here it is in code:
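A stripped-down sketch of that loop (my own; parentheses are left out here, as the real solution handles them recursively in parse_operand):

```cpp
#include <cstdint>
#include <string>

// "+ binds tighter than *": keep a running product and a running sum;
// a '*' folds the current sum into the product, the end closes both.
std::int64_t evaluate_sums_first(const std::string& expr) {
    std::int64_t product = 1, sum = 0, number = 0;
    for (char c : expr) {
        if (c >= '0' && c <= '9')
            number = number * 10 + (c - '0');
        else if (c == '+') {
            sum += number;
            number = 0;
        } else if (c == '*') {
            sum += number;
            number = 0;
            product *= sum; // a '*' closes the current sum
            sum = 0;
        } // anything else (spaces) is ignored
    }
    return product * (sum + number); // the final sum closes the product
}
```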
Day 19
Part 1
More parsing fun, but this time the difficulty is not so much in parsing the input as in writing a proper parser. We are given a bunch of simple replacement rules and are supposed to tell how many of the input strings match them. At first, the rules are non-recursive and provided in a simple form:
0: 1 2
1: “a”
2: 1 3 | 3 1
3: “b”
This is basically a very simple grammar, with numbers as nonterminals and characters as terminals. I chose to represent the production rules as a simple struct:
If type is terminal, the char matters, otherwise the vector of alternative sequences is relevant. Given that exclusivity, I would have done better to use a std::variant instead, but I guess I was feeling particularly lazy that day.
The parser itself is, as so frequently, a simple recursive function with this signature:
“str” is what remains of the input, “rule_id” identifies the first rule we are trying to match, whilst “follow” is a list of rules we have to match after that first one.
Matching differentiates based on the next rule type. If it is a terminal, i.e. a single character, we simply compare the next input character:
If it does not match, we failed. If it succeeded and we have no more rules to match, we succeed exactly if we consumed all of our input. Otherwise, we recursively try to match the next rule in our sequence.
If, on the contrary, we are dealing with a nonterminal, matching is a little more complicated, as we have to try all options:
For each of our possible sequences, we first create a copy of what has to follow so far and prepend that with the sequence we are currently testing. Then we proceed as we did in the terminal case above, by taking off the first of that sequence and recursing. If any one of the recursive calls succeeded, we propagate success. Otherwise, we propagate failure.
Part 2
For part 2, we were supposed to replace two rules, thereby introducing right recursion. We were warned, in not too weak terms, that we should only try to handle our special case and that a general solution would be significantly more difficult. I was stumped for several minutes trying to figure out what such a special-cased solution would look like, until I decided to try and see what my solution would do with the modified input. Turns out it seems to have been perfectly general already and spewed out the correct answer within a fraction of a second. Wasted minutes and leaderboard opportunities… Oh well, next year is another chance xD
Day 20
Part 1
This one was quite interesting. A puzzle that was literally… a puzzle ;). Given a bunch of “tiles”, i.e. small rectangles of either “.” or “#”, the task was to reassemble a complete image by placing, rotating and flipping the tiles such that all borders match up. Part 1, however, was a significantly easier first step to this grand endeavor: Determining which tiles are corner tiles. A tiny bit of thought reveals that all tiles appearing on the inside match at least 3 other tiles on at least 3 of their borders. As such, we first need to determine the 4 borders:
“grid” here is a vector of string, the unfortunately inefficient representation I chose at the time. For this part, it would have been entirely unnecessary to save more than the borders, and those could have been a simple bitset/integer each. Nonetheless, due to an irresistible bout of laziness and in anticipation of what was to come, for better or worse, I did as I did.
Given a way to get at the borders for any given tile, we need a way to check if any borders of two tiles could match, regardless of orientation:
We then can, in horrendously quadratic brute force fashion, determine all possible neighbors for all tiles:
The tiles we are looking for are those with exactly 2 possible neighbors, and we were supposed to return the product of their ids:
All of this could, of course, have failed miserably, as there is no real reason why a corner tile could not match any additional tiles on at least one side, but it seems like we were supposed to get lucky.
Part 2
Alright, now that we have the corners, solving the complete puzzle and reassembling our image should be trivial, right? Well, it should be…
I did, however, run into some troubles along the way; wrapping my head around it was the cause of some minor and major headaches and I spent far more time than I’d be willing to admit until I got all the details just right. My code is a bit of a mess, so before we get to it, allow me to explain in general terms what I eventually came up with:
Assume we start out with an empty grid
....
....
....
....
Begin by setting any arbitrary corner tile:
0...
....
....
....
As a corner has exactly 2 neighbors and we do not care if our assembled image is in the “correct” orientation, whatever that even means, we can randomly assign one of them to the right and the other to the bottom:
01..
2...
....
....
What now? Consider the position marked “X”:
01..
2X..
....
....
What do we know about the tile which will eventually end up there? It has 4 neighbors, two of which will be 1 and 2. It is also the only tile adjacent to both of them! Two placed neighbors are always enough to uniquely identify a tile’s position. As such, we can place all tiles with that property:
01..
23..
....
....
Aaaand, we are stuck again. Or are we? 1 and 2 have exactly one unplaced neighbor left and exactly one space to put it. As such, we can place those, leaving us with:
014.
23..
5...
....
Which provides us with two more positions clearly defined by two neighbors. Filling them gives us two more tiles on the edges. Rinse and repeat until the grid is filled.
Which brings us to the god awful implementation. We first do exactly what we did in part 1, just instead of outputting some value based on the corner ids, we select one of them as the starting point for whatever we will do next:
For simplicity’s sake, I once again inefficiently represented the grid as a map and used a hashmap to keep track of which tiles were already set:
Filling the grid is done as described above:
First, we set the three initial tiles and then, until we have set all of them, we fill in those with two neighbors, advance the ones on the borders and repeat.
To find where to place one with two neighbors, we find the position of those neighbors and take the max of their x and y coordinates:
The direction for the other case is predefined, so we only need to find the unset neighbor and add the direction vector:
Afterwards, we still have to correct flips and rotations, remove the border, compose a complete grid and search for a specified pattern  an ascii art sea monster  within that image. As that is not particularly interesting, quite tedious and even repetitive, I refer you to the complete code instead of elaborating on it any further. It is a mess and you have been warned ;)
Day 21
Part 1
Given a list of unknown ingredients and a list of allergens contained in them (each allergen appearing in exactly one ingredient, each ingredient containing at most one allergen and not all allergens listed every time), we are tasked with figuring out which ingredient contains which allergen. Well, first of all we are asked a simpler question: which ingredients do definitely not contain any allergen. It took me a while to understand what exactly was asked here, but it all comes down to set intersections and differences. Whenever an allergen is contained in two lists, it must be contained in the intersection of those lists, not in the difference. Thanks to the beauty of the C++ standard library, this simple insight leads to a very straightforward solution using simple calls to std::set_intersection and std::set_difference:
Part 2
Having solved this simple question, we now have to uniquely identify which ingredient contains which allergen. Given our previous solution and knowing it is uniquely solvable, this is rather trivial. First, identify one allergen that can only be in one ingredient. Remove it as a possibility from all other allergens. Rinse and repeat:
Day 22
Part 1
Space Cards! Almost missed them from last year, when they proved one of the most interesting challenges, due to their sheer number alone. This year, however, we are far more relaxed. Hardly any shuffling involved, just a simple game between two players.
Both are given a deck of numbered cards and repeatedly and simultaneously draw and reveal their top card. The owner of the higher valued one wins the lower one and puts both cards below their own deck. Once a deck is cleared, the other player is declared the winner. Rather straightforward so far:
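A sketch of that loop (deques assumed as the deck representation; names are mine). The final score multiplies each card by its position counted from the bottom of the winning deck:

```cpp
#include <deque>

// Play rounds until one deck is empty, then score the winning deck:
// bottom card times 1, next one times 2, and so on.
int play_combat(std::deque<int> a, std::deque<int> b) {
    while (!a.empty() && !b.empty()) {
        const int ca = a.front(), cb = b.front();
        a.pop_front();
        b.pop_front();
        if (ca > cb) { a.push_back(ca); a.push_back(cb); } // winner's card first
        else         { b.push_back(cb); b.push_back(ca); }
    }
    const std::deque<int>& winner = a.empty() ? b : a;
    int score = 0;
    int multiplier = static_cast<int>(winner.size());
    for (int card : winner)
        score += card * multiplier--;
    return score;
}
```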
If we only wanted to know who wins, it would be enough to check who owns the highest card in the beginning. The required result, however, was a score computed based on the order in the final deck and I couldn’t think of any better solution than a simple simulation.
Part 2
Part 2, as usual, is where things get interesting. Whenever both top numbers are no larger than the number of cards remaining in the respective decks, we recurse on a copy of the top that many cards to determine the winner. If a position repeats, player 0 wins by default.
That rule was a bit ambiguous and hard to grasp: what player 0 wins is that single subgame, not the current round and not the whole game. I lost about 10 minutes on misunderstanding that, as I thought it referred to the round only, and got an error I failed to understand. The given example input does not trigger that specific case and its erroneous behavior, yielding the correct result even with the misunderstanding. Well, the rest was fairly easy, with the difficulty coming mainly from more or less efficiently implementing the game rules, which I notably did not do.
I am certain it could be optimized significantly, as my solution takes about half a second on my laptop, which is huge. I am, however, lazy, so I declared it adequate.
The main game logic is pretty much equivalent to the one from part 1. I just cleaned it up a little bit and used a call to a more complicated function to determine who wins the next card instead of simply comparing the top ones:
I also kept a set of already seen deck combinations, to implement the “player 0 wins on repetition” rule. next_card_winner is equally trivial:
If there are enough cards in both decks, we recurse into a subgame and return the winner of that. Otherwise, the higher card wins.
Day 23
Well, that was an experience… My sleep deprivation at this point might have hit a new high, granting me somewhat less concentration and a much better excuse than usual, but it is just an excuse and I failed to see the obvious solution right in front of my eyes. But that’s something for later; first, let’s describe the problem:
Part 1
Given a bunch of numbered cups, arranged in a circle, our opponent  a very intelligent crab, apparently the pinnacle of evolution  rearranges them according to a number of simple rules. We have to determine where a specific cup ends up after a given number of moves. In each move, the crab picks up three cups clockwise after the “current” one and moves them right after the next lower numbered cup. The one after the newly inserted ones then becomes the next “current” cup.
Here is my hacky, godawful looking and performing code simulating one such move:
As you can see, I chose to represent the cups as integers in a giant array and copied and shifted them around, which was likely the worst, most expensive representation possible and which came back to bite me in part 2.
Part 2
Part 2 took the problem up to 11. Or rather up to 1000000 cups * 10000000 moves, causing this most trivial of brute force solutions to fail horrendously.
For the longest of times, I suspected there must be some sort of clever closed-form solution. Turns out it was just an issue with implementing it in a little less brain-dead way. A lot of cursing later, I realized that, of course, this is NOT a rotate; it’s simply a freaking linked list, and this is one of the very few cases where using one actually proves more efficient and beneficial than a simple vector. Especially as we can still find any given list element in O(1), as the elements are exactly the cup numbers themselves. So this simple solution finally gave me my desired star, using two hashmaps to represent the lists:
That is – of course – an utter waste; neither hashmaps nor a doubly linked list are required here. Instead, we can simply store the index of each element's successor, a sort of in-place forward list,
and implement a move by rewiring the successor indices of the current element, the destination and the last cup taken:
Day 24
Part 1
Flipping tiles in a hex grid. Fun – if only I had remembered how to get my stupid hex coordinates right. It took me a while, and along the way I discovered a beautiful resource going into great detail, which I highly recommend reading.
Especially the little graphic under "axial coordinates" listing the neighbors really saved my skin this time. Everything else was, of course, rather trivial.
Starting out with an infinitely extending hexagonal grid, all of it white, we were given a number of paths from a starting point and were supposed to flip the tiles at the destinations to black (or back to white if they were reached a second time).
As such, we need to represent binary colors and offer a way to flip them:
As well as the 6 possible hexagonal directions and a way to compute coordinates based on them:
As you can see, I chose axial coordinates. To understand why exactly that works, I would like to once again refer you to this wonderful explanation, which is far better than anything I could offer on that topic.
With that groundwork laid (and an overloaded operator>> for parsing directions), the input-based flipping is rather straightforward:
Part 2
And here we come to the reason for all those shenanigans. We had a seating-area cellular automaton, we had a 4-dimensional cellular automaton – why not have one on a hex grid?
The code is almost identical to the one used for arbitrarily dimensional hypercubes before, so in case anything is unclear, I suggest looking at my solution for Day 17. If anything remains unclear, feel free to leave a comment on my blog or simply send me an email ;)
Day 25
For the final task this year, which – as is traditional – came in just one part, we had to break what was basically a Diffie-Hellman key exchange with very small, brute-forceable numbers.
I was so glad I had that multiplicative inverse function from day 13 lying around that I immediately used it to determine the loop number:
Using that result, I could simply execute the algorithm as described to obtain the unknown key:
Turns out, however, that this is incredibly wasteful: instead of precomputing all those powers, I could simply perform the same computations with the known key whilst counting the iterations:
Conclusion
Closing thoughts on this edition
And that is it for this year. I tragically never made it onto the leaderboard, but that doesn’t matter.^{11} I had a tremendous amount of fun and learned a lot.
There were a bunch of problems that made me think for what seemed like eons – searching, despairing, believing that there must be some better solution available. Yet, just like real-life 2020, things really were as bad as they seemed and all we could do was grind through. Save our energy, work as efficiently as possible, with as little lasting damage as possible, but still: the only way out was putting in the work. Every single step. Slowly and tediously crawling to the finish line.
Sadly, 2020 also marked John Conway's passing. His absence was felt, and his legacy was honored with variations of his Game of Life featuring rather prominently.
This edition was – in my humble opinion – not the most difficult ever and quite harmless compared to 2019. Whilst I was hoping for a bit more of a challenge, that very approachability proved really useful for convincing beginners to try it and learn from it. It is a delicate balance to strike.
What now?
After the event, someone posted a famous picture on reddit, captioned "Morning after the end of AOC 2020", which so adequately expressed my feelings on the eve of the end that I have no choice but to reproduce it here:
After all this time of the puzzles being an integral part of my daily routine, it simply… ended.
This gaping hole in my chest had to be filled; I was craving something, just anything, to make this insufferable feeling of emptiness go away.
There are, of course, previous editions, and you can still try and solve all of them. Unfortunately, that was no longer an option for me:
Great, I know. For all those who really like the interpreter exercises – and I really loved 2019's intcode – there is one of Eric Wastl's older puzzles, the Synacor Challenge. Alas, that is the path I took last year.
Oh right, there is one more option. The final day was great, I like cryptography and there is the cryptopals challenge. Why did that come to mind? Ah yes, I remember: I did that three years ago.
Luckily for me, after lots of soul and google searching, I discovered codingame. Not only do they have a number of high-quality puzzles, they foster a vibrant community – with puzzle solving and competitions regularly being live-streamed and featured on the main site – and their clash of code format, quick 15-minute speed-coding competitions, proved mildly addictive. I might be spending an unhealthy amount of time on there. Please send help.
So, if you feel like I did, maybe one of the things mentioned above could be for you. Feel free to invite me to a clash sometime, I am always up for a challenge.
If you fall into really deep despair and desperately need to bridge that gap some other way, you might as well read a blog post about it that appeared way too late for anyone to care ;)
In my next post, which I will publish much, much sooner, I promise, I shall return to the topic of emulation. In the meantime, I am looking forward to being destroyed in the comments.

both of which I don’t particularly enjoy celebrating anyway ↩

As written about in my very first article on this blog ↩

It has likely long since ended, but that's ok. Who likes parties anyway? I am sure – even without me – it was fine ↩

One day I will have my revenge! ↩

And I get a most convenient excuse ;) ↩

one might argue, however, that this very care does give non-compiled languages a special benefit for speed solving, but that is just me being bitter xD ↩

which ultimately proved unsuccessful. ↩

Unfortunately, we did not get a ring in C++20 either. Can anyone tell me what happened to p0059? ↩

which is a good thing! ↩

Of course it does, my soul is crushed. If I have such a thing. Whatever that even means… ↩