Along with @maciejwolczyk, we've been training a neural network that learns how to play NetHack, an old roguelike game that looks like the one in the screenshot. Recently, something unexpected happened.
Their problem:
So apparently NetHack has a mechanic that slightly changes how the game plays whenever it's a full moon according to your system clock.
The model wasn't trained on a full moon. They had a system for setting up the environment to get replicable results, but it didn't include modifying the system time.
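To illustrate (a toy Python sketch, not NetHack's actual code; the constants and the moon-phase approximation are just a rough stand-in): even if you seed every RNG, a check like this still reads the system clock, so the "same" environment behaves differently on a full-moon day.

```python
import random
import time

SYNODIC_MONTH = 29.53059          # mean length of a lunar cycle, in days
REFERENCE_NEW_MOON = 947182440    # 2000-01-06 18:14 UTC, a known new moon (Unix seconds)

def is_full_moon(now=None):
    """Crude approximation: within ~1 day of the midpoint of the lunar cycle."""
    now = time.time() if now is None else now
    days_into_cycle = ((now - REFERENCE_NEW_MOON) / 86400.0) % SYNODIC_MONTH
    return abs(days_into_cycle - SYNODIC_MONTH / 2) < 1.0

def reset_env(seed):
    random.seed(seed)                   # the seed pins the RNG...
    luck = 1 if is_full_moon() else 0   # ...but not the wall clock
    return luck
```

The real game does an equivalent check against the local date, so pinning seeds alone isn't enough; if you need full reproducibility you also have to fake the clock for the whole process, e.g. with something like libfaketime.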
It reminds me of another system-time bug, which a friend of mine encountered. He was working on hardware, and he was getting a lot of units that worked fine at the factory, failed immediately at the client's location, and then worked again when they were returned to the factory. It turned out that when these machines were turned on, their embedded OS automatically queried some server to update the current time. The client's internet connection had such high latency that the server's response only came back after the machine was already in use. This produced a huge delta-t value that tripped the sanity checks and shut the machine down. The factory had a much lower-latency connection, so the race condition could never be replicated there.
As for the weirdest bug I ever encountered myself: a compiler generating bad machine code. I have often said that the worst part of programming is that the computer always does exactly what you tell it to, but that was the one and only time in twenty years that the computer actually didn’t.
That reminds me of The case of the 500-mile email
That was the first thing I thought of.
Their problem was not understanding the game ;)
The weirdest bug… is that there was no bug. We just didn’t know how the game worked.
Story of my entire programming life.
it was the WEIRDEST bug in our chess ai you guys
the pawn captured another pawn that was NEXT TO IT
like what’s going on there
Holy hell
New response just dropped
It makes sense for the pawn: “But it’s right here! Why shouldn’t I kill it?!”
Reminds me of this classic: https://unix.stackexchange.com/questions/405783/why-does-man-print-gimme-gimme-gimme-at-0030
oh damn that’s funny
Not my bug and not CS, but I think the most difficult bug(s) I’ve read about are those of the American Mark 14 torpedo in World War II. It was a combination of a constrained testing budget before the war, an extreme inability to meet supply (and thus to spare units for testing), difficulty observing the production units in operation (it’s a torpedo, and the target probably isn’t too amenable to you looking at the thing if it doesn’t work well), secrecy, cutting-edge technology, multiple modes of operation (including both a contact fuze and a proximity magnetic fuze), and several other problems, plus multiple bugs that tended to mask or affect each other, including specifically:

- A tendency to run deeper than set (and sometimes go too deep to hit or detect a ship).
- A tendency to bend a critical pin on impact if the torpedo hit a ship at something like a right angle, but not at an oblique angle; if the pin bent, the torpedo would not detonate.
- Testing that happened in the Atlantic, while most use was in the Pacific. It turns out that Earth’s magnetic field is not uniform, and it varies enough to throw off magnetic fuzes and cause premature explosions or non-explosions.

All of this led to the US fighting a heavily naval war, where the main weapon for sinking major ships was the torpedo, but where that torpedo wasn’t really very functional for something like 18 months of fighting.
Wikipedia has a somewhat longer version.
This long explanation is probably the best I’ve read.
Reminds me of a production bug we could not replicate for the life of us.
The condition could logically not be reached. Impossible.
Turns out that in production we had two threads per process, and one would monkey-patch a function in the shared process space with a locking mechanism that wasn't thread-safe.
That took several days to find.
This is a common problem when testing time-based software, and it's similar to why it's difficult to test database-driven software. You have to put a lot of effort into setting up a good test environment, and genuinely understand the software and its dependencies.
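One common way to make your own code testable is to inject the clock instead of reading it directly. A minimal sketch (the names here are made up for illustration), riffing on the man-page easter egg linked above:

```python
from datetime import datetime, timezone

def banner(now=None):
    """Accepts an injectable 'now' so tests don't depend on the real clock."""
    now = now or datetime.now(timezone.utc)
    if (now.hour, now.minute) == (0, 30):
        return "gimme gimme gimme"
    return ""

def test_easter_egg_fires_at_half_past_midnight():
    fixed = datetime(2024, 1, 1, 0, 30, tzinfo=timezone.utc)
    assert banner(now=fixed) == "gimme gimme gimme"

def test_no_easter_egg_otherwise():
    fixed = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
    assert banner(now=fixed) == ""
```

That only helps for code you control, of course; for a third-party binary that checks the system clock itself (like NetHack above), you have to fake the clock at a lower level.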
A “fun” one I ran into: all our tests passed on my desk but failed in the test farm.
After a month, we realized that having an HDMI cable plugged into the unit was corrupting the SD card, due to a memory overwrite in the graphics stack.
I had an issue where a client reported a crash on login. The exception and stack trace reported were very generic and lent no clues to the cause. I tried debugging but could not reproduce. I eventually figured out that the crash only happened for release (non-debug) builds that were obfuscated. I couldn’t find the troublesome code, so I figured out which release introduced the issue, then which commit, then went change by change until I was able to find the cause. It turned out to be a log message in a location that was completely unrelated to login. That exact log message was fine a few lines up. Other code worked fine in that location. For some unknown reason, having that log message in that specific location caused a crash in a completely different area of code.
Usually a sign of multiprocessing/multithreading going wrong, e.g. accessing the same resource without proper locks, like opening the same logfile in different processes and trying to write simultaneously. Those errors can be triggered just by reformatting the code (or obfuscating it, in this case), which changes the runtime behaviour slightly. They're hard to find, especially since they depend on the speed/workload of the machine running the code.
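For the shared-logfile case specifically, the usual fix is to funnel every writer through one lock (or one dedicated writer process). A rough sketch with Python's multiprocessing, with illustrative names:

```python
import multiprocessing as mp

def log_line(lock, path, msg):
    # Without the lock, concurrent writers can interleave or clobber
    # each other's buffered output in the shared file.
    with lock:
        with open(path, "a") as f:
            f.write(msg + "\n")

if __name__ == "__main__":
    lock = mp.Lock()
    workers = [
        mp.Process(target=log_line, args=(lock, "app.log", f"worker {i} done"))
        for i in range(4)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```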
I was expecting some sort of “AI discovers new bug in 30-year-old software”… cool, I’m excited.
Then they were talking about how the bug was persistent, and I got more intrigued: “is the bug some weird emergent behaviour corrupting state somewhere?”
Nope, just another example of a shit-in, shit-out data model.
Java was throwing a “no such method” error at runtime, but the code compiled fine. Granted, that method had recently been added to the class, but it was pretty simple, and again, you’d expect the compiler to catch something like that.
Turns out the code I had inherited from a not-great team had that class in two different places. Maven replaced the one I worked on with the untouched copy, and that's what went into the build.
Yeah, if you get this error and you're not doing anything out of the ordinary, chances are high that there are multiple versions of the same class on the classpath. A possible way to trigger it is extracting code into a separate module without changing the package; if you copy instead of move and then change something, you're going to have a bad time. It's also possible that the IDE complains but building and executing works anyway.
Fun times!
Warning: this is secretly a Nethack thread!
So, the model was playing on average 2,000 points worse because the player was luckier? The thing about werewolves and dogs is a factor, but it's statistically insignificant.
Nethack has a couple of other gotchas like this. They should be grateful they weren’t playing on Friday the 13th…
There was a speedrunner who hit a world record, or lost a world-record run, due to a random bit flip supposedly caused by space radiation. That's gotta be worse than just not knowing the deeper mechanics of a very old game.