Alright, let me tell you about this headache I ran into not too long ago, something I’ve been calling the ‘shiv deadlock’ mess in my notes. It all started pretty innocently. I was working on this Python tool, nothing too fancy, but I needed a way to package it up so folks could run it easily without messing around with installing dependencies themselves.

Someone mentioned using shiv. Sounded neat: bundle everything into a single executable file. Looked simple enough. So, I jumped in. Installed shiv, ran the command pointing to my requirements file and my main script. Boom! Got this .pyz file. Felt pretty smart, you know? Tested it on my machine, worked like a charm. Sent it over to a colleague.
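For the record, the build step was a one-liner along these lines. The output name and entry point are placeholders for my actual project, and the flags are from memory, so double-check them against shiv --help:

```bash
shiv -o myapp.pyz -e myapp.main:main -r requirements.txt .
```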
And that’s when things got weird. They ran it, and… nothing. It just hung. Completely frozen. No error messages, no crash, just stuck. My first thought? “User error, obviously.” But then I tried running it in a clean virtual machine, simulating a fresh environment. Same thing. It started up, maybe printed the first line of output, and then just sat there, deadlocked.
Digging into the mess
Okay, so it wasn’t user error. It was my packaged app. But why? The script ran perfectly fine when I just executed it directly with Python in a regular virtual environment. It only choked inside the shiv package.
This was confusing. I started poking around.
- Checked if it was grabbing some external resource and waiting forever? Nope, the deadlock happened even before it got to that part.
- Maybe a dependency issue? I started stripping down the requirements, rebuilding the shiv archive over and over. Took ages. Still deadlock.
- Was it something about how shiv runs things? I knew it extracted stuff into a temporary location (there's a little probe sketch right after this list). Maybe something in my code was fighting over a file lock, or perhaps a library I used initialized funny when run from inside a zip archive?
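For reference, here's the kind of throwaway probe I mean. SHIV_ROOT is the environment variable shiv honors for its extraction directory; the exact paths will vary by machine, and the comments reflect my understanding rather than gospel:

```python
# Throwaway probe: where is this code actually running from?
import os
import sys

print("argv[0]: ", sys.argv[0])  # the .pyz itself when packaged
print("__file__:", __file__)     # imported code resolves to the extracted tree
print("SHIV_ROOT:", os.environ.get("SHIV_ROOT", "<unset, shiv defaults to ~/.shiv>"))
```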
I spent a good chunk of time just trying different things. I tried running the code with extra verbose logging, but the logs just stopped dead when the freeze happened. It felt like hitting a wall. I even tried using py-spy to see what the process was doing, but understanding the stack trace from within the packed environment was tricky.
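For anyone else stuck here: the command that at least got me a stack was py-spy's dump mode, which attaches to an already-running process. The PID is a placeholder, of course:

```bash
# Attach to the frozen process and print each thread's stack trace.
py-spy dump --pid 12345
```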

Figuring it out (sort of)
Eventually, I zeroed in on a section of the code that used the multiprocessing library. It turned out one of the libraries I depended on was also doing some tricky stuff on import, related to finding its data files. When running normally, everything was fine. But when bundled with shiv, the combination of how shiv handled the zipped environment and how multiprocessing spawned new processes seemed to create a race condition or a lockup. Maybe the child process couldn't properly access the necessary bits from the zipped parent, or they were both trying to initialize the same resource extracted by shiv simultaneously.
It was subtle. Took me ages to isolate. The exact ‘why’ was still a bit fuzzy, honestly, but I knew the problematic interaction involved running from the zip and using multiprocessing alongside that specific library’s initialization.
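To give you the flavor, here's roughly the shape of the code that hung. The dependency name is hypothetical (some_data_lib stands in for the real library), so read this as a sketch of the pattern, not my actual app:

```python
import multiprocessing

import some_data_lib  # hypothetical: resolves its data files at import time


def crunch(item):
    # Under the "spawn" start method each child re-runs the imports above,
    # so several processes hit some_data_lib's import-time file lookup at
    # once, right inside shiv's freshly extracted directory.
    return some_data_lib.process(item)


if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")
    with multiprocessing.Pool(4) as pool:  # this is where the .pyz froze
        print(pool.map(crunch, range(8)))
```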
So, what did I do? In this case, the 'fix' was more of a workaround. I ended up refactoring the code to avoid using multiprocessing in that specific way during startup. I shifted things around so that the problematic library interaction didn't happen within concurrently spawned processes right at the beginning. It wasn't the cleanest solution, but it stopped the deadlock when running the shiv-packed version.
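In code terms, the refactor boiled down to something like the sketch below: keep all interaction with the fussy library sequential, in the parent, and hand the workers plain data. Again, some_data_lib and its prepare() call are hypothetical stand-ins:

```python
import multiprocessing


def crunch(prepared_item):
    # Workers only see plain data now; they never import or call the
    # fussy library themselves.
    return prepared_item * 2


if __name__ == "__main__":
    # Import moved under the main guard: with the "spawn" start method
    # children re-import this module, so this keeps the library's
    # import-time file juggling out of them entirely.
    import some_data_lib  # hypothetical stand-in for the real dependency

    prepared = [some_data_lib.prepare(i) for i in range(8)]  # hypothetical API
    with multiprocessing.Pool(4) as pool:
        print(pool.map(crunch, prepared))
```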
It was a good reminder, really. Packaging tools like shiv are super useful, but they aren't magic. They change the environment your code runs in, sometimes in ways that can trip you up. Always gotta test the packaged thing thoroughly, not just the raw code. Learned that the hard way!
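These days I also keep a dumb little smoke test around that runs the built artifact itself, with a timeout, so a hang like this fails a pipeline instead of a colleague's afternoon. The artifact name is a placeholder, naturally:

```python
import subprocess


def test_pyz_starts_and_exits():
    # Run the packaged .pyz, not the source tree. A deadlock surfaces
    # here as a TimeoutExpired failure instead of a silent hang.
    result = subprocess.run(
        ["python", "myapp.pyz", "--help"],  # placeholder artifact name
        capture_output=True,
        timeout=30,
    )
    assert result.returncode == 0
```

Cheap insurance, and it would have caught this whole saga before it ever left my machine.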