December 9, 2021. The Log4Shell vulnerability is published. CVSSv3 score: 10.0—the maximum. Every organization running Apache Log4j—which turned out to be almost every organization running Java in any capacity—had a critical remote code execution vulnerability in production.
The first question organizations needed to answer was: “Do we have Log4j in our containers?” Most couldn’t answer it quickly. That inability to answer, more than the vulnerability itself, is the supply chain story worth learning from.
The Inventory Problem
The incident exposed that most organizations didn’t have an accurate inventory of which software components were running in their containerized environments.
Log4j wasn’t a library that teams deliberately chose. It was a transitive dependency—included because another library depended on it, which depended on another library, which included Log4j. Nobody added it to a requirements file. It appeared in build outputs without appearing in any human-written dependency declaration.
Static SBOM generation tools found it when run on current images. The problem was that most organizations didn’t have SBOMs for their current images, didn’t have tooling to query component presence across their fleet quickly, and couldn’t determine within hours whether specific containers were affected.
Organizations with runtime component visibility answered the question fast. Organizations without it spent days.
Log4Shell didn’t break most organizations’ security. It revealed that they didn’t know what was running in their containers.
What Would a Better State Have Looked Like?
Runtime-verified SBOMs for every container
Software supply chain security programs that maintained current SBOMs—including transitive dependencies—could query their inventory immediately when Log4Shell was published. “Which containers include log4j-core?” becomes a database query, not a manual investigation of hundreds of deployment manifests and Dockerfiles.
The organizations that could answer this question in minutes had invested in continuous SBOM generation as part of their build pipeline. The investment paid off in exactly the scenario it was designed for.
Containers without Log4j because it was never needed
The Log4j library was present in many containers not because the application used it directly, but because a transitive dependency included it. Many applications never directly called Log4j functionality at all.
A scanning approach based on runtime profiling would have identified which containers actually loaded Log4j at runtime versus which carried it as dead code. For containers where Log4j was present only because of a dependency that never invoked it, removing Log4j would have been a safe and complete mitigation.
The organizations that had hardened their images to remove unused transitive dependencies weren't running Log4j in containers that didn't need it. For those containers, the incident simply didn't apply.
Practical Steps From the Log4Shell Lessons
Generate SBOMs automatically for every container build. Attach the SBOM to the image as an OCI artifact. When the next Log4Shell-class vulnerability is disclosed, your response starts with a query, not an investigation.
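As a minimal sketch of what "response starts with a query" means in practice, the snippet below parses a CycloneDX-style SBOM (the kind a pipeline generator such as Syft can emit with its CycloneDX JSON output) and lists its components. The SBOM fragment and component versions here are illustrative, not real build output.

```python
import json

# A minimal CycloneDX-style SBOM fragment, as a build-pipeline generator
# might attach to an image. The contents are illustrative.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.13.0",
     "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.0"}
  ]
}
"""

def components(sbom: dict) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component in the SBOM."""
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

sbom = json.loads(sbom_json)
for name, version in components(sbom):
    print(name, version)
```

Once every build attaches an SBOM like this, checking a single image for a vulnerable component is a lookup over structured data rather than a read-through of Dockerfiles.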
Build component search capability into your security tooling. “Which containers include component X?” needs to be a query your team can answer in minutes. Build or acquire tooling that maintains a searchable component index across your entire container fleet.
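A fleet-wide component index can be as simple as one row per (image, component) pair in a relational store. The sketch below uses an in-memory SQLite table with hypothetical image names; in practice the rows would be loaded from each build's SBOM.

```python
import sqlite3

# Hypothetical fleet-wide component index: one row per (image, component).
# In a real system these rows would be ingested from per-build SBOMs.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE components (image TEXT, component TEXT, version TEXT)"
)
conn.executemany(
    "INSERT INTO components VALUES (?, ?, ?)",
    [
        ("payments-api:3.2", "log4j-core", "2.14.1"),
        ("payments-api:3.2", "jackson-databind", "2.13.0"),
        ("frontend:1.9", "express", "4.17.1"),
        ("batch-worker:0.8", "log4j-core", "2.11.0"),
    ],
)

def affected_images(component: str) -> list[tuple[str, str]]:
    """Answer 'which containers include component X?' as a single query."""
    rows = conn.execute(
        "SELECT image, version FROM components WHERE component = ?",
        (component,),
    )
    return rows.fetchall()

print(affected_images("log4j-core"))
```

With the index in place, the Log4Shell question reduces to `affected_images("log4j-core")`, and the returned versions immediately show which images need patching versus which already run a fixed release.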
Verify coverage of transitive dependencies, not just direct dependencies. Standard SBOM tools scan the full package graph of the built image, so they typically capture transitive dependencies. Confirm that yours does: an SBOM that lists only explicitly declared dependencies would have missed Log4j entirely.
Test your incident response capability with simulated queries. Before the next Log4Shell, run an exercise: “A critical CVE has been published in OpenSSL. How long does it take to identify all affected containers in our fleet?” The answer tells you how prepared you actually are.
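The exercise above can be made concrete by timing the lookup end to end against a simulated fleet. The snippet below fakes an inventory of a thousand images (every third one carrying OpenSSL, purely for illustration) and measures how long the exposure query takes; substitute your real SBOM index to get a meaningful number.

```python
import time

# Simulated fleet inventory: image name -> set of component names.
# In a real drill this would be your actual SBOM-backed index.
fleet = {
    f"service-{i}:1.0": {"glibc", "zlib"} | ({"openssl"} if i % 3 == 0 else set())
    for i in range(1000)
}

start = time.perf_counter()
affected = [image for image, comps in fleet.items() if "openssl" in comps]
elapsed = time.perf_counter() - start

print(f"{len(affected)} affected images found in {elapsed:.4f}s")
```

If the equivalent query against your production inventory takes days of manual work instead of seconds of compute, the drill has told you exactly where to invest before the next disclosure.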
Remove unused transitive dependencies from production images. Runtime profiling identifies which transitive dependencies actually execute. Those that don’t execute are candidates for removal. Removing them before a critical CVE is published means those CVEs don’t affect you—no incident response required.
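The core of that identification step is a set difference: components the SBOM says are present minus components a profiling agent observed loading at runtime. The sketch below assumes both sets are already collected (the component names and the idea of a profiling agent feeding `runtime_loaded` are illustrative).

```python
# Components listed in the image's SBOM (the static view).
sbom_components = {"log4j-core", "jackson-databind", "guava", "netty"}

# Components observed actually loading at runtime, e.g. gathered by a
# hypothetical profiling agent watching class/library loads in staging.
runtime_loaded = {"jackson-databind", "netty"}

# Anything present in the image but never loaded is a removal candidate.
removal_candidates = sorted(sbom_components - runtime_loaded)
print(removal_candidates)  # → ['guava', 'log4j-core']
```

Candidates still need validation (a component unused in staging may be exercised by a rare code path in production), but the difference set gives hardening work a concrete, prioritized target list.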
Frequently Asked Questions
What did Log4Shell reveal about software supply chain security?
Log4Shell exposed that most organizations lacked an accurate inventory of which software components were running in their containerized environments. Because Log4j arrived as a transitive dependency—never explicitly declared in any human-written requirements file—organizations couldn’t quickly answer whether they were affected. The incident demonstrated that software supply chain security requires continuous, automated SBOM generation covering transitive dependencies, not just direct ones.
Why was Log4j so widespread in containerized environments?
Log4j was pervasive because it was a deeply embedded transitive dependency in the Java ecosystem. Applications included it not by choice but because a library they depended on pulled it in indirectly. This made it invisible to teams reviewing their own dependency declarations—it appeared in build outputs without appearing in any explicitly managed dependency list, which is exactly why so many organizations couldn’t quickly determine their exposure.
How can organizations prepare for the next Log4Shell-class vulnerability?
Organizations should build continuous SBOM generation into every container build pipeline so component presence becomes a queryable database rather than a manual investigation. Equally important is aggressive removal of unused transitive dependencies through runtime profiling—if a transitive dependency never executes at runtime, removing it before a critical CVE is published means that CVE simply doesn’t affect the environment, eliminating the need for incident response entirely.
How long did it take organizations to respond to Log4Shell?
Response times varied dramatically based on inventory capability. Organizations that maintained current, queryable SBOMs across their container fleet could determine their Log4j exposure in hours. Organizations without that infrastructure—the majority—spent days manually reviewing Dockerfiles, deployment manifests, and build outputs to determine whether they were affected. The inventory gap, not the remediation itself, was what made the response slow.
The Next Log4Shell
Security researchers have been finding and will continue to find critical vulnerabilities in widely used libraries. Log4Shell was not a unique event. The pattern will repeat: a critical CVE in a pervasive library, a window of active exploitation, pressure to respond quickly.
The organizations that respond well to the next Log4Shell-class incident are the ones building the infrastructure now: accurate component inventories, fast query capability across the fleet, runtime-validated SBOMs, and aggressive hardening that removes unused transitive dependencies.
The organizations that will struggle are the ones that treat SBOM generation as a compliance checkbox and don’t build the response infrastructure around it.
Log4Shell happened three years ago. The organizations that built inventory capability in response are materially better positioned for the next incident than those that moved on without changing their approach. The next disclosure is coming. The question is whether your inventory infrastructure is ready for it.