A few weeks ago, the Linux community was rocked by disturbing news: researchers at the University of Minnesota had developed (but, as it turned out, not fully executed) a method for introducing what they called "hypocrite commits" to the Linux kernel. The idea is to distribute hard-to-detect behaviors, meaningless in themselves, that can later be aligned by attackers to manifest vulnerabilities.

This was followed by a rapid (and, in some senses, equally disturbing) announcement that the university had been banned, at least temporarily, from contributing to kernel development, and then by a public apology from the researchers.

While exploit development and testing are often messy, running technically complex "red team" programs against the world's largest and most important open-source project is in a class of its own. It is hard to imagine researchers and organizations naive or detached enough not to understand the potentially wide blast radius of such behavior.

Equally, maintainers and project governance have a duty to enforce policy and avoid having their time wasted. Common sense suggests (and users demand) that they try to produce kernel releases free of exploits. But shooting the messenger seems to miss at least part of the point: this was research rather than pure malice, and it shed light on a kind of software (and organizational) vulnerability that calls for technical and systemic remediation.


I believe the "hypocrite commits" contretemps was, on every side, symptomatic of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long wrestled with problems of scale, complexity, and the growing criticality of free and open-source software (FOSS) to every kind of human undertaking. Let's look at that complex of problems:

  • The biggest open-source projects now present big targets.
  • Their complexity and pace have grown beyond the point where traditional "commons" approaches, or even more evolved governance models, can cope.
  • They are evolving to absorb one another. For example, it is becoming increasingly hard to say clearly whether "Linux" or "Kubernetes" should be treated as the "operating system" for distributed applications. For-profit organizations have taken note of this and have begun to reorganize around "full-stack" portfolios and narratives.
  • In so doing, some for-profit organizations have begun to distort traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, FOSS headcount commitments, and other metrics appear to be in decline.
  • OSS projects and ecosystems are adapting in a variety of ways, sometimes making it difficult for for-profit organizations to feel at home or to see benefit from participation.

Meanwhile, the threat landscape continues to evolve:

  • Attackers are bigger, smarter, faster, and more patient, leading to long games, supply-chain subversion, and so on.
  • Attacks are more financially, economically, and politically profitable than ever.
  • Users are more vulnerable, exposed to more vectors, than ever.
  • The increasing use of public clouds creates new layers of technical and organizational monoculture that may enable and justify attacks.
  • Complex commercial off-the-shelf (COTS) solutions, assembled partially or wholly from open-source software, create elaborate attack surfaces whose components (and interactions) are accessible and well understood by bad actors.
  • Software componentization enables new kinds of supply-chain attacks.
  • Meanwhile, all of this is happening as organizations seek to shed noncore expertise, shifting capital expenses to operating costs and leaning on cloud vendors and other entities to do the hard work of security.

The net result is that projects of the Linux kernel's scale and criticality are poorly prepared to contend with game-changing, hyperscale threat models. In the specific case we are examining here, the researchers were able to target candidate incursion sites with relatively little effort (using static analysis tools to assess units of code already flagged as needing contributor attention), propose "fixes" informally via email, and leverage many factors, including their own established reputations as reliable and frequent contributors, to bring exploit code close to being committed.
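To make that first step concrete, here is a deliberately naive sketch, in Python, of the kind of cheap textual pattern-matching that can surface candidate sites, such as error paths that touch a pointer after freeing it. The function name, the heuristic, and the C snippet are all invented for illustration; the researchers' actual approach, like real static analyzers, is far more sophisticated and tracks control and data flow rather than raw text.

```python
import re

def flag_use_after_free_candidates(c_source: str) -> list[int]:
    """Toy heuristic: flag lines that mention a pointer after a kfree(ptr)
    appeared earlier in the same snippet. Purely illustrative; real tools
    (Coccinelle, clang-analyzer, etc.) model control and data flow."""
    candidates = []
    freed = {}  # pointer name -> line number where it was freed
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        m = re.search(r"\bkfree\((\w+)\)", line)
        if m:
            freed[m.group(1)] = lineno
            continue
        for ptr in freed:
            if re.search(rf"\b{ptr}\b", line):
                candidates.append(lineno)
    return candidates

# Hypothetical kernel-style fragment with a use-after-free on an error path.
snippet = """
    err = setup(dev);
    if (err)
        kfree(dev);
    return register_device(dev);   /* dev still used after the free */
"""
print(flag_use_after_free_candidates(snippet))  # → [5]
```

Even a crude pass like this narrows thousands of files down to a short list of plausible incursion points, which is precisely the economy that favors the attacker.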

This was, effectively, a betrayal "from the inside" of a system of trust that has historically worked very well to produce robust and secure kernel releases. Abuse of trust changes the game, and the need for follow-up, buttressing mutual human trust with systematic mitigations, looms large.

But how do you contend with such threats? Formal verification is effectively impossible in most cases. Static analysis cannot reliably reveal a cleverly engineered incursion. The project must maintain its pace (there are, after all, known bugs to fix). And the threat is asymmetric: as the classic line goes, the blue team must defend against everything, while the red team needs to succeed only once.
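That asymmetry compounds arithmetically. Assuming, purely for illustration, that each malicious patch independently has some small probability p of slipping past review, the chance that at least one of n attempts succeeds is 1 - (1 - p)^n, which climbs toward certainty for a patient attacker even when reviewers catch almost everything:

```python
def prob_at_least_one_success(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds,
    each with per-attempt success probability p (illustrative model only)."""
    return 1 - (1 - p) ** n

# Even a 2% per-patch success rate compounds quickly over many attempts.
for n in (1, 10, 50, 200):
    print(n, round(prob_at_least_one_success(0.02, n), 3))
```

At p = 0.02, two hundred patient attempts already carry roughly a 98% chance of landing at least one exploit. The numbers are invented, but the shape of the curve is the attacker's real advantage.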

I see several opportunities for remediation:

  • Limit the spread of monocultures. Efforts like AlmaLinux and AWS's Open Distro for Elasticsearch are good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
  • Reevaluate project governance, organization, and funding with an eye toward reducing total dependence on the human factor, and toward incentivizing for-profit companies to contribute their expertise and other resources. Most for-profit companies would be happy to contribute to open source because it is open; and even where they are not, welcoming them may require a culture change among existing contributors in many communities.
  • Accelerate commoditization by simplifying the stack and verifying its components. Push appropriate responsibility for security up into the application layers.

Basically, what I am advocating here is that orchestrators like Kubernetes should need to matter less, and Linux should have less impact. Finally, we should move quickly toward the formal verification of things like unikernels.

Regardless, we need to make sure that both companies and individuals provide the resources open source needs to continue.