Monday, 9 March 2026

When the Government Controls AI: The Constitutional Crisis Nobody in Washington Wants to Debate

The U.S. government isn’t just buying AI tools. It’s building the infrastructure for a surveillance apparatus that would make the authors of the Fourth Amendment reach for their muskets.

OpenAI’s accelerating partnership with the Department of Defense has moved from theoretical debate to operational reality. According to The New Stack, the company that once pledged never to develop military applications has reversed course, working directly with defense agencies on applications that include cybersecurity operations and processing of sensitive government data. The pivot wasn’t subtle. OpenAI quietly updated its usage policies to remove prohibitions on military use, then began courting Pentagon contracts with the enthusiasm of a Beltway defense contractor.

This isn’t about national security in the abstract. It’s about what happens when the most powerful pattern-recognition and data-processing systems ever built are handed to agencies with a documented history of constitutional overreach.

Maya Sulkin, posting on X, raised pointed concerns about the trajectory of government AI adoption, highlighting how rapidly these technologies are being deployed without meaningful public debate or legislative guardrails. The concern resonates far beyond tech policy circles. When AI systems capable of analyzing billions of data points per second are deployed by intelligence and law enforcement agencies, the question isn’t whether they’ll be used for mass surveillance. The question is what’s stopping them.

The answer, right now, is almost nothing.

Consider the precedent. The NSA’s bulk metadata collection program, revealed by Edward Snowden in 2013, operated for years under secret legal interpretations that the FISA court rubber-stamped. That program was primitive by today’s standards — it collected phone records. Modern AI systems can correlate phone records with location data, facial recognition feeds, social media activity, financial transactions, and communication patterns simultaneously. The surveillance potential isn’t additive. It’s multiplicative.

And the legal framework hasn’t kept pace. Section 702 of the Foreign Intelligence Surveillance Act, reauthorized in 2024, still permits warrantless collection of communications data on foreign targets — but “incidental” collection of American citizens’ data continues at scale. Layer AI-powered analysis on top of that collection, and you don’t need to target Americans directly. The system can identify, profile, and track them as a byproduct of its normal operations.

Not Divided, an organization focused on protecting democratic institutions from technological overreach, has been documenting how AI deployment by government agencies threatens constitutional protections. Their research points to a fundamental asymmetry: the government’s capacity to collect and process data about citizens is growing exponentially, while citizens’ ability to understand, challenge, or even know about that collection remains static. No transparency. No accountability. No meaningful consent.

The Fourth Amendment’s protection against unreasonable searches was written for a world where searching someone’s papers required physically entering their home. The Supreme Court has updated that understanding — the 2018 Carpenter v. United States decision held that accessing historical cell-site location records constitutes a search requiring a warrant. But Carpenter addressed a single data type from a single source. It said nothing about AI systems that can fuse dozens of data streams into comprehensive behavioral profiles without any individual search ever being conducted.

That’s the gap. And the government is driving a fleet of trucks through it.

The defense establishment’s AI ambitions go far beyond battlefield applications

The Department of Homeland Security has deployed AI-powered systems at the border that use facial recognition, behavioral analysis, and social media monitoring. Immigration and Customs Enforcement has contracted with data brokers who aggregate location data from commercial apps — data that would require a warrant to collect directly but can be purchased on the open market. The FBI’s use of facial recognition technology has been criticized by the Government Accountability Office for lacking adequate privacy safeguards. These aren’t hypothetical risks. They’re current operations.

OpenAI’s entry into this space adds a new dimension. Large language models and multimodal AI systems don’t just process structured data — they can interpret unstructured text, analyze images, understand context, and generate inferences that would take human analysts months to produce. When The New Stack reported on the company’s defense partnerships, the framing centered on cybersecurity and administrative efficiency. But the same models that can summarize intelligence reports can also analyze intercepted communications at population scale. The same computer vision systems that can identify military equipment in satellite imagery can identify individuals in street-level surveillance footage.

The technology is dual-use by nature. The intentions of today’s operators don’t constrain the applications of tomorrow’s.

Some will argue that democratic oversight provides sufficient protection. It doesn’t. Congressional intelligence committees have repeatedly demonstrated they lack the technical expertise to evaluate AI capabilities and the political will to restrict intelligence agencies. The Church Committee reforms of the 1970s, which created the modern oversight framework after revelations of CIA and FBI domestic surveillance programs, were a response to abuses that had already occurred. We’re watching the conditions for similar abuses being assembled in real time, and the response from Congress has been a handful of hearings and zero binding legislation.

The European Union’s AI Act, whatever its flaws, at least attempts to categorize AI applications by risk level and impose restrictions on the most dangerous uses, including real-time biometric surveillance in public spaces. The United States has no equivalent federal framework. Executive orders on AI safety issued by the Biden administration were largely voluntary and have been rolled back. State-level efforts are fragmented and inconsistent.

So where does that leave American citizens?

Exposed. The combination of commercially available personal data, government surveillance authorities that predate the AI era, and AI systems capable of synthesizing both into detailed individual profiles creates conditions the Constitution’s framers could not have anticipated but clearly would have opposed. The right to be let alone — what Justice Brandeis called “the most comprehensive of rights and the right most valued by civilized men” — is being eroded not by a single dramatic act but by the steady accumulation of technical capabilities deployed without democratic authorization.

The tech industry bears responsibility here too. OpenAI’s shift from “we won’t work with the military” to active defense contracting happened without shareholder votes, public referenda, or legislative approval. It was a business decision. The company determined that government contracts were too lucrative and strategically important to forgo, and it adjusted its principles accordingly. Other AI companies — Palantir, Anduril, Scale AI — never pretended to have such reservations. But OpenAI’s reversal matters precisely because it demonstrates that voluntary ethical commitments in the AI industry are worth exactly as much as the paper they’re not printed on.

Groups like Not Divided are pushing for structural reforms: mandatory algorithmic impact assessments before government deployment, warrant requirements for AI-assisted surveillance, independent technical audits of government AI systems, and sunset clauses that force periodic reauthorization. These aren’t radical proposals. They’re the minimum conditions for democratic governance of powerful technologies.

But they face opposition from an intelligence community that views oversight as an obstacle, a defense industry that views AI as its next major revenue stream, and a political class that views “tough on security” as an electoral imperative. The incentives all point in one direction. More collection. More analysis. More power concentrated in agencies that operate largely in secret.

The constitutional question isn’t complicated. The government should not be able to construct detailed profiles of citizens’ movements, associations, communications, and beliefs without individualized suspicion and judicial authorization. AI makes it technically trivial to do exactly that. The law hasn’t caught up. And every month that passes without action makes the gap harder to close.

This isn’t a technology problem. It’s a democracy problem. And right now, democracy is losing.



from WebProNews https://ift.tt/Ub8AIvc

Sunday, 8 March 2026

OpenAI’s ‘Adult Mode’ Keeps Slipping — and the Reasons Say Everything About AI’s Hardest Problem

OpenAI can’t seem to ship its most controversial feature on time.

The company has delayed the launch of what it internally and publicly calls “adult mode” — a less restrictive version of ChatGPT intended for verified adults — pushing the release date further into the future. Originally expected in the spring, then reportedly aimed for mid-2025, the feature now appears unlikely to arrive before late summer at the earliest, according to reporting by Engadget, citing sources familiar with the matter. The repeated delays reveal something more than typical product scheduling headaches. They expose a fundamental tension at the heart of OpenAI’s ambitions: how to give paying adult users the unrestricted AI experience they want while keeping the company’s reputation, regulatory standing, and safety commitments intact.

The concept behind adult mode is straightforward enough. OpenAI wants to offer a tier of ChatGPT that responds to queries the current system refuses or heavily sanitizes — topics involving explicit content, graphic violence, politically sensitive material, and other areas where the model’s safety filters currently intervene. The idea is that adults who verify their age should be able to interact with AI the way they might with any other uncensored information source. Think of it as the R-rated version of a chatbot that currently defaults to PG-13.

But straightforward concepts don’t always translate into clean execution. And this one has proven especially messy.

According to multiple reports, the delays stem from internal disagreements about where exactly to draw the lines. Engadget noted that OpenAI has struggled with calibrating the feature so it loosens restrictions meaningfully without opening the floodgates to content that could generate legal liability or public backlash. That calibration problem is more art than science. Every threshold decision — what’s permissible in adult mode and what remains off-limits even for verified users — involves a judgment call that different teams within OpenAI apparently see differently.

Sam Altman himself has publicly acknowledged the demand for less filtered AI. In a blog post earlier this year, he described the company’s intention to let ChatGPT be more direct, opinionated, and willing to engage with mature subject matter. The framing was deliberate: OpenAI positioned the move as respecting user autonomy. Adults, the argument goes, don’t need an AI nanny.

That message resonated. Loudly. On X, users have been clamoring for months, with recurring threads demanding that OpenAI stop what many perceive as excessive censorship. The frustration is real and commercially significant — competitors like Grok, xAI’s chatbot integrated into the X platform, have explicitly marketed themselves as less filtered alternatives. Mistral’s models and various open-source projects have attracted users specifically because they don’t impose the same guardrails OpenAI does. Every month adult mode doesn’t ship, OpenAI risks losing engagement to rivals who’ve already made the bet that users want fewer restrictions.

So why not just launch it?

The answer involves at least three intertwined problems. First, age verification itself is a minefield. OpenAI would need a system robust enough to withstand regulatory scrutiny — particularly in the EU, where the AI Act is now being enforced in phases, and in the UK, where the Online Safety Act imposes strict requirements on platforms offering adult content. A simple checkbox won’t cut it. But aggressive ID verification raises privacy concerns that OpenAI, already under fire from data protection authorities in multiple jurisdictions, would rather avoid.

Second, there’s the question of what adult mode actually permits. Sexually explicit content is the most obvious category, but it’s far from the only one. Would adult mode allow detailed instructions for activities that are legal but dangerous? Would it engage with extremist political ideologies for the sake of intellectual debate? Would it generate graphic depictions of violence in creative writing contexts? Each of these categories carries different risks, and OpenAI’s teams reportedly haven’t reached consensus on a unified policy framework that covers them all.

Third — and perhaps most importantly — there’s the reputational calculus. OpenAI is simultaneously trying to close a massive funding round that would value the company at north of $300 billion, convert from a nonprofit structure to a for-profit corporation, and maintain partnerships with Apple, Microsoft, and other enterprise clients who have their own brand sensitivities. Launching a feature that immediately generates headlines about AI-produced pornography or violent content could complicate all of those efforts. The timing has to be right. Or at least not catastrophically wrong.

The competitive pressure, though, isn’t waiting for OpenAI to figure this out. Grok has leaned into its anything-goes persona, and while that’s generated its own controversies, it hasn’t meaningfully damaged xAI’s trajectory. Character.AI, despite facing lawsuits related to its chatbot interactions with minors, continues to attract massive user numbers. The market is sending a clear signal: people want AI that talks to them like an adult, and they’ll go wherever they can find it.

Open-source models have made this even more acute. Projects running on platforms like Hugging Face allow users to deploy completely uncensored language models locally, with zero content restrictions. These aren’t fringe tools anymore. They’re increasingly mainstream among developers and power users. OpenAI’s walled-garden approach looks more constraining by the month.

Inside OpenAI, the delay has reportedly caused friction between product teams eager to ship and safety researchers who want more time to test edge cases. This is a familiar dynamic at the company — the same tension that contributed to the dramatic boardroom coup attempt in late 2023. The safety-versus-speed debate never really resolved; it just went underground. Adult mode has brought it back to the surface.

There’s also a legal dimension that’s gotten less attention but matters enormously. Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content, has an explicit carve-out for content that’s “obscene” under federal law. If ChatGPT in adult mode generates material that a court deems obscene — a standard that’s notoriously subjective and varies by jurisdiction — OpenAI could face criminal liability, not just civil suits. The company’s lawyers are reportedly very aware of this risk and have been pushing for narrow, carefully defined permissions rather than a broad unlocking of capabilities.

International considerations add another layer of complexity. What’s legal and culturally acceptable in the United States may be prohibited in Germany, Saudi Arabia, or Singapore. OpenAI operates globally. An adult mode that’s available everywhere would need to account for wildly different legal regimes, or the company would need to geofence the feature — a technically feasible but operationally burdensome approach that fragments the user experience.
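To make “technically feasible but operationally burdensome” concrete, here is a minimal sketch of the kind of per-jurisdiction gating such a rollout would involve. The region codes, policy table, and function are invented for illustration; they do not describe any actual, non-public OpenAI implementation, and the table entries are placeholders rather than legal conclusions.

```python
# Illustrative sketch of per-jurisdiction feature gating. All region codes
# and policy values below are invented placeholders, not legal conclusions
# or a description of OpenAI's actual implementation.

REGION_POLICY = {
    "US": {"adult_mode": True,  "requires_id_verification": True},
    "DE": {"adult_mode": True,  "requires_id_verification": True},
    "SA": {"adult_mode": False, "requires_id_verification": False},
    "SG": {"adult_mode": False, "requires_id_verification": False},
}

def adult_mode_available(region_code: str, age_verified: bool) -> bool:
    """Default-deny gate: the feature is off unless the user's jurisdiction
    permits it and the user has cleared that region's verification bar."""
    policy = REGION_POLICY.get(region_code)
    if policy is None or not policy["adult_mode"]:
        return False  # unknown or prohibited region: deny
    return age_verified or not policy["requires_id_verification"]

print(adult_mode_available("US", age_verified=True))   # True
print(adult_mode_available("SA", age_verified=True))   # False: prohibited
print(adult_mode_available("FR", age_verified=True))   # False: unlisted, so denied
```

The burden is less the lookup than the table itself: someone has to keep it correct for every market as laws shift, and a wrong entry is a legal exposure rather than a cosmetic bug.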

None of this is insurmountable. But all of it together explains why a feature that sounds simple keeps getting pushed back.

The broader industry is watching closely. Google’s Gemini has its own set of content restrictions that have drawn user complaints, though Google has shown little appetite for an explicit “adult” tier. Anthropic, maker of Claude, has taken perhaps the most conservative approach of any major AI lab, and its leadership has been vocal about the risks of loosening safety filters. Meta, meanwhile, has open-sourced its Llama models, effectively outsourcing the content moderation question to whoever deploys them. Each company has made a different bet about where the market is heading.

OpenAI’s bet is that it can have it both ways — a safe, broadly appealing default product and a more permissive option for adults who opt in. That’s the theory. In practice, the existence of adult mode may make the default mode’s restrictions feel even more arbitrary, prompting questions about why certain content is deemed inappropriate for adults in the first place. It could also create a two-tier perception problem: the “real” ChatGPT that OpenAI wants you to use, and the uncensored version lurking behind an age gate.

For investors, the delay is a footnote in a much larger story about OpenAI’s path to profitability. The company reportedly burned through billions in 2024 and is expected to continue operating at a significant loss through at least 2026. Adult mode, if it drives higher engagement and retention among paying subscribers, could be a meaningful contributor to revenue growth. ChatGPT Plus and the newer Pro tier already command premium prices; an adult mode available exclusively to subscribers would add another reason to pay. Every month of delay is, in a sense, revenue left on the table.

And then there’s the elephant in the room that nobody at OpenAI wants to discuss publicly: the adult entertainment industry. Porn has historically been an early adopter and driver of new technology — from VHS to streaming video to VR. AI-generated adult content is already a massive and rapidly growing market, mostly served by smaller, less scrupulous operators. OpenAI entering this space, even indirectly, would legitimize it. That’s either a massive commercial opportunity or a reputational catastrophe, depending on who you ask within the company.

The most likely outcome, based on the pattern of delays and the signals from OpenAI’s leadership, is that adult mode will eventually launch in a heavily caveated form. Expect narrow permissions — perhaps more tolerance for profanity, violence in fiction, and candid discussion of drugs or sex, but probably not AI-generated explicit imagery or anything that could be classified as hate speech. In other words, a mode that’s less “adult” and more “slightly less cautious.” Whether that satisfies the users demanding it is another question entirely.

What’s clear is that this isn’t just a product delay. It’s a stress test for the entire philosophy of AI safety as practiced by the industry’s most prominent company. OpenAI built its brand on the promise of safe, beneficial AI. Now it’s trying to figure out how much of that promise it can relax without breaking it. The answer, apparently, requires more time than anyone originally expected.



from WebProNews https://ift.tt/O4pMPlq

The 300-Millisecond Cancer Treatment: How FLASH Radiotherapy Is Racing From Lab Bench to Hospital Floor

Imagine a radiation treatment so fast it’s over before you blink. Not minutes. Not seconds. Milliseconds. That’s the promise of FLASH radiotherapy — a technique that delivers ultra-high dose rates of radiation to tumors in a fraction of the time conventional treatments require, while appearing to spare healthy tissue from the devastating side effects that have plagued cancer patients for decades.

The concept sounds almost too good to be true. And for years, skeptics said exactly that.

But a growing body of preclinical evidence, the first human clinical trials, and a surge of engineering innovation are now converging to push FLASH from theoretical curiosity toward clinical reality. The question is no longer whether FLASH works in the lab. It’s whether the physics, the biology, and the economics can align to bring it to patients at scale.

A Century-Old Clue, Rediscovered

The origins of FLASH trace back further than most people realize. As IEEE Spectrum reported in its detailed technical examination of the field, the earliest hints of the FLASH effect appeared in the 1960s and even earlier, when researchers noticed that radiation delivered at extremely high dose rates seemed to produce less damage to normal tissue than the same dose delivered slowly. But the observation was largely shelved. The technology to deliver such dose rates clinically didn’t exist, and the radiobiology community had bigger problems to solve.

The modern resurgence began around 2014, when a team at the Lausanne University Hospital (CHUV) in Switzerland, led by Jean Bourhis and Marie-Catherine Vozenin, published striking results showing that ultra-high dose rate radiation — delivered at rates exceeding 40 grays per second, compared to the roughly 0.01 to 0.03 grays per second used in conventional radiotherapy — could sterilize tumors in mice while leaving surrounding normal tissue remarkably intact. The so-called “FLASH effect” was real, reproducible, and dramatic.
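For a sense of scale, consider the beam-on time each regime implies. Using the rates quoted above and a 10-gray dose (a round number chosen for the arithmetic, not a clinical prescription):

$$
t = \frac{D}{\dot{D}}, \qquad
t_{\text{conv}} = \frac{10~\text{Gy}}{0.02~\text{Gy/s}} = 500~\text{s} \approx 8~\text{min}, \qquad
t_{\text{FLASH}} = \frac{10~\text{Gy}}{40~\text{Gy/s}} = 0.25~\text{s}
$$

Minutes of delivery collapse into a fraction of a second, which is what makes both the biology and, as discussed below, the engineering so different.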

The results electrified the radiation oncology world. Here was a differential effect that, if it translated to humans, could fundamentally alter the therapeutic ratio — the balance between killing cancer and harming the patient that has defined radiation therapy since its inception.

Since then, the preclinical evidence has piled up. Studies in mice, mini-pigs, cats, and dogs have consistently shown that FLASH-rate irradiation produces less skin toxicity, less lung fibrosis, less neurocognitive damage, and less intestinal injury than conventional dose rate radiation, while maintaining equivalent tumor control. The biological mechanism remains incompletely understood, though leading hypotheses center on oxygen depletion: at ultra-high dose rates, radiation may transiently consume all available oxygen in normal tissue so quickly that the chemical reactions responsible for DNA damage in healthy cells can’t fully propagate. Tumors, which are often already hypoxic and have different metabolic profiles, don’t benefit from this protective effect.
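The oxygen-depletion hypothesis lends itself to a simple toy calculation. The sketch below integrates a deliberately crude model — oxygen consumed in proportion to dose rate, replenished at a fixed recovery rate — purely to show the qualitative asymmetry between the two regimes. Every parameter value is an illustrative placeholder, not a measured radiobiological constant.

```python
# Toy numerical illustration of the oxygen-depletion hypothesis. All
# parameter values are illustrative placeholders; the point is only the
# qualitative difference between dose-rate regimes.

def min_oxygen(dose_gy, dose_rate_gy_s, c0=50.0, g=4.0, k=1.0, dt=1e-3):
    """Integrate dC/dt = -g * dose_rate(t) + k * (c0 - C); return min C.

    c0: baseline tissue oxygen level (arbitrary units)
    g:  oxygen consumed per gray delivered (illustrative)
    k:  reoxygenation rate constant, 1/s (illustrative)
    """
    t_beam = dose_gy / dose_rate_gy_s      # beam-on time
    t_end = t_beam + 2.0                   # watch recovery after beam-off
    c, c_min, t = c0, c0, 0.0
    while t < t_end:
        rate = dose_rate_gy_s if t < t_beam else 0.0
        c += (-g * rate + k * (c0 - c)) * dt
        c_min = min(c_min, c)
        t += dt
    return c_min

# Same 10 Gy dose in both regimes. At 0.02 Gy/s, reoxygenation keeps pace
# and oxygen barely dips; at 40 Gy/s the dose lands before resupply can
# respond, producing a transient plunge toward hypoxia.
print(f"conventional: min O2 = {min_oxygen(10, 0.02):.1f}")  # ~49.9, near baseline
print(f"FLASH:        min O2 = {min_oxygen(10, 40.0):.1f}")  # far lower
```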

Not everyone is convinced the oxygen depletion hypothesis tells the whole story. Some researchers have pointed to differential immune responses, differences in DNA damage complexity, and other radiochemical phenomena. The mechanism matters — not just for intellectual satisfaction, but because understanding it will determine how to optimize FLASH delivery parameters for maximum clinical benefit.

Still, the clinical momentum is undeniable.

The first human patient treated with FLASH radiotherapy received her dose in 2018 at CHUV. A 75-year-old woman with a multiresistant CD30+ cutaneous T-cell lymphoma received a single 15-gray dose to a 3.5-centimeter skin tumor in less than 100 milliseconds using a modified clinical electron linear accelerator. The tumor responded completely. Five months later, there was no significant skin toxicity. The case, published in Radiotherapy and Oncology, was proof of principle in a single patient — not proof of efficacy. But it opened the door.
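The arithmetic of that first treatment is worth spelling out, because it shows how far beyond conventional practice the delivery was:

$$
\dot{D} \;\geq\; \frac{15~\text{Gy}}{0.1~\text{s}} \;=\; 150~\text{Gy/s}
$$

That mean dose rate is roughly 5,000 to 15,000 times the 0.01-to-0.03 grays-per-second conventional range cited earlier.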

Since then, the first formal clinical trial — the FAST-01 study conducted at Cincinnati Children’s Hospital — treated bone metastasis patients with proton FLASH therapy using Varian’s ProBeam system. Results published in 2022 showed the treatment was feasible and safe, with pain relief comparable to conventional palliative radiation. The trial wasn’t designed to demonstrate the FLASH effect’s tissue-sparing advantage; it was a feasibility and safety study. But it showed that proton FLASH could be delivered to human patients in a clinical setting.

More trials are underway or in planning. As IEEE Spectrum noted, researchers at several institutions are now designing studies to test FLASH in more challenging anatomical sites — brain, lung, abdomen — where the tissue-sparing effect could make the most dramatic clinical difference. The pace is accelerating.

The Engineering Problem Is Enormous

If the biology of FLASH is tantalizing, the engineering is formidable. Delivering therapeutic doses of radiation at rates hundreds or thousands of times faster than conventional treatment requires fundamentally rethinking the machines that produce the beams.

Conventional medical linear accelerators (linacs) were never designed for these dose rates. They operate in pulsed mode, delivering microsecond bursts of radiation, but their average dose rates are far too low for FLASH. Achieving FLASH-level dose rates with electrons is comparatively straightforward — electrons are easy to accelerate, and modified research linacs can reach the necessary intensities. But electrons penetrate only a few centimeters into tissue, limiting their clinical utility to superficial tumors.

For deep-seated cancers — which account for the vast majority of cases where radiation therapy is used — protons or X-rays (photons) are needed. And that’s where the engineering challenges multiply.

Proton FLASH is perhaps the nearest-term pathway for deep tumors. Cyclotron-based proton therapy systems can, in principle, deliver dose rates high enough for FLASH by removing the beam-limiting components that slow delivery in conventional treatments. Varian (now part of Siemens Healthineers) demonstrated this with its ProBeam system in the FAST-01 trial. But proton therapy systems cost $25 million to $200 million to build and operate. They occupy entire buildings. Fewer than 100 proton centers exist in the United States. Scaling proton FLASH to widespread use faces enormous capital and infrastructure barriers.

Photon FLASH — using high-energy X-rays, the workhorse of modern radiation therapy — is even harder. Generating X-rays at FLASH dose rates requires electron beams of extraordinary intensity striking a conversion target, and the physics of bremsstrahlung radiation production means most of the energy is lost as heat. Several groups are working on the problem. According to IEEE Spectrum, researchers at Stanford, SLAC National Accelerator Laboratory, and other institutions have explored using compact linear accelerators and even very high energy electrons (VHEE) in the 50-to-250 MeV range, which can penetrate deeply without a conversion target and could potentially deliver FLASH dose rates throughout the body.

VHEE is an intriguing approach. These electrons are energetic enough to pass through the body much like photons, avoiding the shallow penetration problem of conventional electron beams. And because no conversion target is needed, the beam intensity isn’t limited by target heating. But VHEE accelerators don’t exist in clinical form yet. Building them will require adapting technology from particle physics — compact, high-gradient accelerator structures — for medical use. Several startups and academic groups are pursuing this, but clinical VHEE systems are likely years away.

Then there’s the dosimetry problem. Measuring radiation dose accurately at FLASH rates is extraordinarily difficult. Conventional ionization chambers, the gold standard of radiation dosimetry, suffer from ion recombination effects at ultra-high dose rates that can cause them to underread by 20% or more. New detector technologies — diamond detectors, scintillators, alanine dosimeters, and specialized ionization chamber designs — are being developed and validated, but standardized, clinically accepted dosimetry protocols for FLASH don’t yet exist. Without accurate dosimetry, you can’t safely treat patients. Period.
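To see concretely what breaks, consider the standard two-voltage recombination correction used in conventional reference dosimetry — the pulsed-beam form given in AAPM’s TG-51 protocol. The sketch below implements that textbook formula with invented readings; the linear theory behind it assumes recombination is a small perturbation, an assumption that collapses at FLASH-level charge densities.

```python
# Minimal sketch of the standard two-voltage ion recombination correction
# (the pulsed-beam form from AAPM's TG-51 protocol). The readings below are
# invented, illustrative numbers, not measured data.

def p_ion_pulsed(v_high: float, v_low: float, m_high: float, m_low: float) -> float:
    """Two-voltage recombination correction for pulsed beams.

    v_high, v_low: chamber bias voltages (v_high is the normal operating bias)
    m_high, m_low: raw chamber readings at those voltages
    """
    ratio_v = v_high / v_low
    ratio_m = m_high / m_low
    return (1.0 - ratio_v) / (ratio_m - ratio_v)

# At conventional dose rates the correction is small (~0.5% here)...
print(p_ion_pulsed(300.0, 150.0, 1.000, 0.995))   # ~1.005
# ...but at FLASH-level charge per pulse the readings diverge so much that
# the formula returns values (here ~1.33, a 33% "correction") far outside
# the small-perturbation regime where the theory is valid.
print(p_ion_pulsed(300.0, 150.0, 1.000, 0.80))    # ~1.333: model has broken down
```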

Treatment planning presents its own challenges. Conventional radiation treatment planning systems assume continuous, low-dose-rate delivery and optimize dose distributions accordingly. FLASH delivery may require entirely new planning paradigms that account for the temporal structure of the beam, the spatial distribution of dose rate (not just dose), and the biological response to ultra-high dose rate irradiation. The interplay between dose, dose rate, fractionation, and the FLASH effect is still poorly characterized. Getting this wrong could mean losing the FLASH effect entirely — or worse, overdosing normal tissue.

And the regulatory pathway is uncharted territory. The FDA has granted breakthrough device designation to Varian’s FLASH-enabled ProBeam system, signaling recognition of the technology’s potential. But the evidentiary bar for widespread clinical approval will be high. Regulators will want to see not just safety and feasibility, but clear evidence of clinical benefit — improved outcomes or reduced toxicity — in well-designed randomized trials. Those trials will take years to complete.

The cost question looms over everything. Proton therapy, even in its conventional form, has struggled to demonstrate cost-effectiveness compared to advanced photon techniques like intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT). If FLASH requires proton infrastructure, its adoption will be limited to wealthy academic centers. If compact electron or VHEE systems can deliver FLASH at a fraction of the cost, the calculus changes dramatically. The technology that wins won’t just be the one that works best biologically. It’ll be the one that fits into existing clinical workflows and reimbursement structures.

Several companies are positioning themselves in this space. Varian/Siemens Healthineers has the proton FLASH lead. IntraOp Medical is developing electron FLASH systems for intraoperative use. PMB-Alcen in France has built a high-dose-rate electron accelerator specifically for FLASH research. And a handful of startups are pursuing VHEE and compact photon FLASH approaches, though most remain pre-revenue and pre-clinical.

What Happens Next

The next three to five years will be decisive for FLASH radiotherapy. Several pivotal questions must be answered.

First, does the FLASH effect hold up in human patients across multiple tumor types and anatomical sites? The preclinical evidence is strong, but animal models don’t always predict human outcomes. The ongoing and planned clinical trials — particularly those targeting deep-seated tumors where toxicity reduction would be most meaningful — will provide the critical data.

Second, can the mechanism be sufficiently understood to optimize delivery parameters? If oxygen depletion is the primary driver, then tissue oxygenation status, beam temporal structure, and total dose will all interact in complex ways. Clinicians will need reliable biomarkers or predictive models to know when the FLASH effect is being achieved in a given patient’s tissue. Without that, FLASH treatments will be designed partly by guesswork.

Third, can the engineering be democratized? Right now, FLASH-capable systems are bespoke research tools or modified clinical machines available at a handful of centers worldwide. For FLASH to impact cancer care broadly, the technology must become compact, affordable, and reliable enough for community hospitals — not just major academic centers. That’s a tall order, but it’s not unprecedented. IMRT followed a similar trajectory from research curiosity to standard of care over roughly two decades.

And fourth, can the field avoid overpromising? Radiation oncology has a history of enthusiasms — proton therapy, carbon ion therapy, neutron therapy — that were heralded as transformative but ultimately found narrower niches than initially predicted. FLASH’s advocates are aware of this history. Many are deliberately cautious in their public statements, emphasizing the preliminary nature of the evidence. But the hype cycle is powerful, and patient expectations can outpace the science.

So where does that leave us? FLASH radiotherapy represents a genuinely novel approach to an old problem — one grounded in real physics and increasingly supported by real biology. It’s not a sure thing. The engineering barriers are substantial, the clinical evidence is nascent, and the path to broad adoption is long and uncertain. But the potential payoff — radiation therapy that kills tumors without crippling patients — is significant enough to justify the enormous investment of talent and capital now flowing into the field.

The radiation will be fast. The road to the clinic won’t be.



from WebProNews https://ift.tt/89lMNhg

Saturday, 7 March 2026

California’s AB 1043 Forces a Surveillance Mandate on Every Developer — Including the Ones Who Can’t Comply

California’s new age verification law, AB 1043, doesn’t just target Big Tech. It conscripts every software developer — including solo open-source contributors writing code in their spare time — into a surveillance apparatus that many of them lack the technical and financial capacity to build. The law, signed by Governor Gavin Newsom, requires application developers to request an age signal from an “operating system provider or a covered application store” before allowing a user to proceed. That single sentence reveals a profound misunderstanding of how software actually works outside the walled gardens of Apple, Google, and Microsoft.

The problem is immediate and concrete. As Gardiner Bryant detailed in his analysis, the law places the financial and technical burden squarely on developers, not on platform operators. An indie developer shipping a text editor, a calculator, or a local-only note-taking app must now, under this statute, query an external service for an age signal every time the application is “downloaded and launched.” That means every single application — even those designed to function entirely offline — would need internet connectivity baked in solely to phone home for age verification data.

Think about what that means in practice.
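In code, the statute’s plain text appears to demand something like the following on every launch. This is a sketch under loud assumptions: the endpoint, the schema, and the field names are invented, because no standardized age-signal API exists — and on most platforms, no provider exists to implement one. That absence is the heart of the problem.

```python
# Sketch of the launch-time check AB 1043's plain text appears to compel.
# Everything about this interface is hypothetical: the endpoint, the JSON
# schema, and the field names are invented for illustration.

import json
import urllib.request

AGE_SIGNAL_ENDPOINT = "https://os-provider.example/v1/age-signal"  # hypothetical

def fetch_age_signal(timeout_s: float = 3.0) -> str | None:
    """Ask the (hypothetical) OS provider for the user's age bracket.

    Note that even a fully offline application must now perform this
    network round-trip when it is downloaded and launched."""
    try:
        with urllib.request.urlopen(AGE_SIGNAL_ENDPOINT, timeout=timeout_s) as resp:
            return json.load(resp).get("age_bracket")  # e.g. "under_13", "adult"
    except OSError:
        # No connectivity -- or no provider implements the API at all
        # (Ubuntu, FreeBSD, Arch, ...). The statute gives no guidance here.
        return None

if __name__ == "__main__":
    bracket = fetch_age_signal()
    if bracket is None:
        print("No age signal available. Block the user? Proceed? The law is silent.")
```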

A developer in Berlin who writes a free, open-source flashcard app for Linux now has a legal obligation under California law to integrate with an age-signal infrastructure that doesn’t exist on their platform. They must request this signal not from the operating system itself, but from the “operating system provider.” On Ubuntu, who is that? Canonical? The upstream Debian project? Linus Torvalds? The law doesn’t say. On FreeBSD, there is no single provider. On Arch Linux, the operating system is assembled by the user from components maintained by thousands of independent contributors. The statute assumes a corporate structure that simply does not exist across vast portions of the software world.

The Infrastructure That Doesn’t Exist

AB 1043’s language presumes that operating system providers will build and maintain age-signal services capable of handling queries from potentially hundreds of thousands of applications. Apple and Google could theoretically do this — they already gate access to their app stores and maintain user account systems with age data. Microsoft could probably follow suit for its Windows Store offerings. But the law doesn’t limit itself to those platforms.

Linux distributions number in the hundreds. The BSDs — FreeBSD, OpenBSD, NetBSD — are maintained by volunteer communities with annual budgets that wouldn’t cover a single engineer’s salary at Google. As WebProNews reported in its earlier coverage, these communities face a law that treats them as if they were commercial entities with compliance departments and API teams ready to spin up new services on legislative command.

They aren’t. And they can’t.

The practical result is a two-tier system. Developers targeting Apple and Google platforms might eventually have a corporate-provided age signal to query. Developers targeting everything else — Linux, BSD, Haiku, custom embedded systems, scientific computing platforms — face an impossible mandate. There is no age-signal API on Fedora. There is no user-age database maintained by the Void Linux project. The law requires developers to request something from a provider that has no mechanism, no obligation, and in many cases no organizational structure to provide it.

So what happens? Small developers stop distributing to California users. Or they ignore the law and hope they’re too small to attract enforcement. Or they quit. For free and open-source software communities, this isn’t a regulatory inconvenience. It’s an existential threat. A volunteer maintainer who ships a package manager plugin or a media player cannot absorb the cost of legal compliance with a statute that fundamentally misunderstands their operating environment.

Code Is Speech — And Compelled Code Is Compelled Speech

The constitutional problems here are severe. The First Amendment implications of forcing developers to write specific code — code that implements surveillance functionality — run directly into decades of established precedent. As the Electronic Frontier Foundation has documented, the landmark case Bernstein v. United States Department of Justice established in the 1990s that computer code is protected speech under the First Amendment. Daniel Bernstein, a mathematician, sued the federal government over export restrictions on encryption software. The Ninth Circuit Court of Appeals — the same circuit that covers California — ruled that code is expressive conduct entitled to constitutional protection.

AB 1043 doesn’t just regulate what developers can say. It compels them to say something specific. It mandates that every application must contain code to query an age-signal service. This is compelled speech, full stop. The government is ordering developers to embed surveillance infrastructure in their own creative works.

The Supreme Court has been consistently hostile to compelled speech mandates. In National Institute of Family and Life Advocates v. Becerra (2018), the Court struck down a California law requiring crisis pregnancy centers to provide certain notices, holding that the state cannot compel individuals to deliver messages they wouldn’t otherwise choose to communicate. The parallel to AB 1043 is direct: the state is compelling developers to build and deliver a specific technical message — an age verification query — regardless of whether the developer’s software has any connection to content that might harm minors.

And the overbreadth problem is glaring. The law applies to all applications, not just those serving content to children. A command-line tool for managing server configurations doesn’t serve content to anyone, let alone minors. But under AB 1043’s plain text, its developer must still implement the age-signal request. This kind of sweeping mandate — applied without regard to whether the regulated speech has any connection to the government’s stated interest — is precisely the type of law that fails strict scrutiny under First Amendment analysis.

Legal challenges are coming. NetChoice, the industry group that successfully challenged California’s previous Age-Appropriate Design Code Act, has already signaled opposition to AB 1043. The Supreme Court’s 2024 decisions in Moody v. NetChoice and NetChoice v. Paxton began defining the boundaries of state power over digital platforms, and the remanded litigation is still working through the lower courts. But those cases involve large commercial entities. AB 1043’s reach into individual developers’ workshops and open-source projects raises questions the courts haven’t fully addressed: Can a state force a solo programmer to rewrite their hobby project to include government-mandated surveillance hooks?

Who This Law Actually Serves

The cynical reading is that AB 1043 was written by and for the major platform companies. Apple already has age verification baked into its App Store. Google has similar infrastructure in the Play Store. Microsoft can build it into Windows relatively easily. For these companies, compliance is a minor engineering task. For everyone else, it’s a wall.

The law effectively entrenches the dominance of three companies by making it legally hazardous to develop software outside their platforms. If you distribute through the App Store, Apple handles the age signal. If you distribute a .deb package on your personal website, you’re on your own — and you’re liable.

This is regulatory capture dressed up as child safety.

Nobody in the open-source community opposes protecting children online. But AB 1043 doesn’t protect children. It creates an unfunded mandate for age-signal infrastructure that doesn’t exist on most platforms, compels developers to write surveillance code in violation of First Amendment principles, and threatens the viability of independent software development. The law’s authors either didn’t understand how software works outside of iOS and Android, or they didn’t care.

Neither explanation is acceptable. California has written a law that assumes the entire world of software development looks like the App Store. It doesn’t. And the developers who will pay the price are the ones least able to afford it — the independent creators, the open-source volunteers, the small shops building tools for communities that Big Tech ignores. They deserve better than legislation drafted in ignorance of the technology it purports to regulate.



from WebProNews https://ift.tt/83TyBZx

Friday, 6 March 2026

F1 Drivers Face Potential Nerve Damage From Car Vibrations, and the Sport Is Only Now Paying Attention

Formula 1 cars are engineering marvels that push human endurance to its limits. But there’s a problem nobody talked about for decades, and it’s finally getting the attention it deserves: the vibrations these machines produce may be causing lasting nerve damage to the drivers strapped inside them.

A report from Digital Trends highlights growing concern within the F1 paddock about whole-body vibration (WBV) exposure and its potential to cause hand-arm vibration syndrome (HAVS) and other neurological conditions. The issue isn’t new in industrial settings — construction workers and miners have dealt with vibration-related injuries for years — but motorsport has largely ignored it. Until now.

The conversation gained serious momentum after research presented by the FIA, Formula 1’s governing body, began quantifying just how much vibration drivers absorb during a race weekend. F1 cars generate forces that shake through the steering column, pedals, and seat at frequencies that can damage peripheral nerves over time. Drivers have long complained about tingling and numbness in their hands and feet after sessions, but these symptoms were typically dismissed as just part of the job. That framing is changing.

Here’s the core issue. Modern F1 cars, particularly since the introduction of ground-effect aerodynamics in 2022, have become significantly stiffer. The cars run extremely low to the ground to maximize downforce, and that means less suspension travel to absorb bumps and curbs. The result is a brutally harsh ride. Porpoising — the bouncing phenomenon that plagued teams in 2022 — drew widespread attention, but even after teams largely solved that specific problem, the baseline vibration levels remain punishingly high.

Lewis Hamilton and George Russell were among the most vocal drivers about the physical toll during the 2022 season, with Hamilton famously struggling to exit his car after the Azerbaijan Grand Prix due to back pain. But the vibration concern extends beyond acute discomfort. It’s the chronic, cumulative exposure across an entire career that researchers are now flagging as the real threat.

HAVS is a recognized occupational disease. It damages blood vessels, nerves, muscles, and joints in the hands and arms. Symptoms include blanching of fingers, loss of grip strength, and permanent numbness. In severe cases, the damage is irreversible. Workers in industries with high vibration exposure are subject to strict regulatory limits — the European Union’s Physical Agents Directive, for instance, sets daily exposure thresholds. F1 drivers? No such limits exist.

That gap is stark.

The FIA has started taking the issue more seriously. According to Digital Trends, research efforts are underway to measure and categorize vibration exposure across different circuits and car configurations. Some tracks are worse than others — street circuits like Singapore and Las Vegas, with their uneven surfaces, generate particularly harsh vibration profiles. And the problem compounds: drivers don’t just race on Sundays. They’re exposed during practice sessions, qualifying, and testing too.

So what can actually be done about it? The solutions aren’t straightforward. Softening the cars’ suspension would reduce vibrations but also compromise aerodynamic performance, which no team wants. Adding damping material to seats and gloves helps at the margins but doesn’t address the fundamental mechanical forces at play. Some engineers have suggested active suspension systems could dramatically reduce driver vibration exposure while maintaining performance, and the FIA has been exploring regulations that might permit such technology in future car designs.

There’s also a generational concern. Drivers now start karting as young children and progress through feeder series that expose them to significant vibrations long before they reach F1. A driver arriving on the grid at 18 may already have a decade of cumulative vibration exposure. The long-term health implications of that trajectory are essentially unknown because nobody has been tracking it.

The sport’s response so far has been measured. The FIA established a working group to study the problem, and teams have begun collecting vibration data more systematically. But there’s no regulatory framework yet, no exposure limits, and no mandatory mitigation measures. It’s still largely voluntary.

Compare that to how seriously F1 treats crash safety. After Ayrton Senna’s death in 1994, the sport overhauled its approach to impact protection, introducing the HANS device, the halo cockpit protection system, and increasingly sophisticated crash structures. Those changes saved lives — Romain Grosjean’s fireball crash in Bahrain in 2020 was survivable largely because of them. But chronic health risks don’t generate the same urgency as spectacular accidents. They’re slow, invisible, and easy to ignore until the damage is done.

Other motorsport series face similar questions. Endurance racing, rally, and even NASCAR subject drivers to sustained vibration, though the specific frequencies and intensities vary. F1’s problem is arguably the most acute because of how stiff and aerodynamically sensitive the cars are, but the broader motorsport community will be watching how the FIA handles this.

Drivers themselves are increasingly willing to speak up. The Grand Prix Drivers’ Association has raised physical welfare concerns repeatedly in recent years, and the vibration issue fits squarely within that conversation. But there’s an inherent tension: drivers don’t want to appear weak or unable to handle the demands of the sport, and teams don’t want regulations that might slow their cars down.

That tension will define how quickly — or slowly — meaningful change happens.

For industry professionals following this story, the key takeaway is simple. The science on whole-body vibration is well-established in occupational health. What’s new is its application to elite motorsport, where the assumption has always been that drivers accept extreme physical demands as part of competition. That assumption is being challenged, and the regulatory, engineering, and medical responses are still in their earliest stages. The data collection happening now will likely shape car design rules for the next generation of F1 regulations, expected around 2026.

And if the research confirms what many already suspect — that current vibration levels pose genuine long-term neurological risk — the sport will face a reckoning. Not the dramatic, crash-driven kind. The slow, uncomfortable kind where you have to admit the cars themselves are hurting the people inside them.



from WebProNews https://ift.tt/gQy3rwT

Thursday, 5 March 2026

Musk’s Orbiting Giant Courts the Carriers It Once Threatened

In the halls of the Fira Gran Via in Barcelona, amidst the annual gathering of the global telecommunications elite, a distinct tension has settled over the proceedings. For years, mobile network operators viewed SpaceX’s Starlink as an existential threat—a bypass mechanism that could render terrestrial infrastructure obsolete in rural areas. Now, however, the narrative has shifted. According to a report by The Information, SpaceX representatives have descended upon the Mobile World Congress (MWC) not as conquerors, but as suitors, pitching a partnership model to the very companies they once unsettled. This strategic pivot highlights a complex reality: while SpaceX possesses the orbital hardware, the legacy carriers hold the one asset Elon Musk cannot manufacture—spectrum rights.

The proposition on the table is the “Direct to Cell” service, a capability designed to beam LTE connectivity directly from low-Earth orbit satellites to unmodified smartphones. For the telecom industry, the offer presents a stark dilemma. Partnering with SpaceX promises to eliminate coverage dead zones without the capital expenditure of building remote towers. Yet, inviting the aerospace giant into their networks risks handing the keys to a competitor that has already begun selling high-speed broadband directly to consumers, bypassing the traditional internet service provider model entirely.

The Trojan Horse Anxiety

The skepticism among telecom executives is rooted in the speed of Starlink’s ascent. SpaceX has launched more than 5,000 satellites, and the constellation now accounts for a majority of all active satellites in orbit. For terrestrial carriers, the fear is that SpaceX’s initial offer—supplemental coverage for text and emergency services—is merely a beachhead. Once the technology matures to support voice and data, Starlink could pivot from partner to predator, offering a standalone global mobile service that cuts the carriers out of the revenue loop.

Despite these reservations, the fear of missing out is driving deals. T-Mobile US was the first to break ranks, announcing a major partnership with SpaceX to end mobile dead zones. This collaboration recently moved from theory to practice. As reported by The Verge, the two companies successfully transmitted the first text messages via Starlink satellites to standard cell phones in January, validating the technical feasibility of the project. This milestone places immense pressure on rival carriers to secure similar satellite capabilities or risk marketing disadvantages in coverage wars.

Regulatory Hurdles and Spectrum Wars

SpaceX’s diplomatic mission in Barcelona is necessitated by physics and policy as much as commerce. Unlike its broadband service, which uses dedicated frequencies licensed to SpaceX, the Direct to Cell service operates on the mid-band PCS G block spectrum owned by its partner, T-Mobile. To operate globally, SpaceX must negotiate access to the spectrum holdings of local carriers in every jurisdiction it wishes to serve. They cannot simply beam signals into France or Japan without the blessing of the license holders and local regulators.

The regulatory environment remains a minefield. The Federal Communications Commission (FCC) is currently weighing the frameworks for “Supplemental Coverage from Space” (SCS). While SpaceX pushes for rapid approval, competitors are raising alarms regarding signal interference. SpaceNews reports that Omnispace, a rival satellite operator, has filed complaints with the FCC alleging that SpaceX’s testing is causing harmful interference with their licensed operations. These technical disputes complicate the sales pitch, as prospective telecom partners must weigh the utility of satellite coverage against the risk of regulatory entanglements and service degradation.

The Diverging Paths of Competitors

SpaceX is not the only aerospace firm vying for the carriers’ favor. A distinct divide has formed in the US market. While T-Mobile aligned with Musk, AT&T and Verizon have thrown their weight behind AST SpaceMobile. The technical approaches differ significantly. SpaceX relies on a massive constellation of smaller satellites, whereas AST SpaceMobile utilizes massive, phased-array antennas in orbit to create stronger connections capable of supporting broadband speeds sooner.

The financial stakes for these alliances were underscored recently when AT&T and Verizon, along with Vodafone, committed new capital to AST. According to Reuters, this strategic investment serves as a hedge against SpaceX’s dominance, ensuring that the two largest American carriers have a viable counter-narrative to T-Mobile’s Starlink integration. For the industry insiders in Barcelona, the choice of satellite partner is rapidly becoming a defining strategic decision for the next decade of network architecture.

Financial Imperatives Driving the Pitch

For SpaceX, the push into the cellular market is driven by the voracious capital requirements of its broader ambitions, specifically the Starship program. While Starlink’s broadband division has achieved cash-flow positivity, the addressable market for $120-per-month residential broadband service has a ceiling. The global market for mobile connectivity, however, is measured in billions of users. Capturing even a slice of those subscriptions through revenue-sharing agreements with carriers would provide the kind of stable, recurring cash flow that eventual IPO investors will demand.

Moreover, the unit economics of the Direct to Cell satellites depend on scale. SpaceX must launch thousands of updated satellites to provide continuous coverage. Without carrier partners in Europe, Asia, and South America contributing spectrum and revenue, the constellation remains underutilized over vast swathes of the planet. As noted by Bloomberg, expanding into enterprise and wholesale telecom services is essential for Starlink to meet the revenue projections that justify its massive valuation.

The Future of Hybrid Networks

The discussions in Barcelona signal the beginning of a hybrid era where terrestrial and non-terrestrial networks merge. The strict separation between “satellite phones” and “consumer smartphones” is evaporating. For telecom operators, the decision to partner with SpaceX is a calculated risk: accept the aerospace giant’s help to fix coverage gaps today, while hoping contracts and regulation are sufficient to contain their ambition tomorrow. As the technology moves from testing to commercial availability later this year, the industry will soon discover if they have signed a treaty with an ally or opened the gates to their eventual replacement.



from WebProNews https://ift.tt/hmDNWxk

Shadows in the Code: Google Unearths Legacy iOS Exploit with Potential Ties to Washington

In the opaque corridors of cyber espionage, the line between state-sponsored operations and commercial surveillance has blurred into a gray market where sophisticated weaponry is bought, sold, and occasionally turned against unintended targets. A recent investigation by Google’s Threat Analysis Group (TAG) has illuminated a particularly troubling campaign targeting older Apple devices, specifically those running iOS 12. The discovery is not merely technical; it carries the heavy implication of Western involvement. According to reports analyzing the campaign, the exploit chain bears hallmarks suggesting it may have been developed by entities linked to the United States government, challenging the conventional narrative that sophisticated spyware is the exclusive domain of adversarial regimes.

The campaign in question utilized a "watering hole" attack strategy, a method where attackers compromise websites known to be visited by their targets to infect devices passively. While the focus of modern cybersecurity often remains fixed on the latest hardware, this operation specifically sought out the long tail of legacy users—individuals operating iPhones that have not, or cannot, be updated to the newest operating systems. As detailed in a report by MSN, Google’s researchers identified this threat acting in the wild, exploiting a vulnerability in WebKit, the browser engine that powers Safari and other iOS web browsers. The specificity of the targeting suggests an operational need to surveil individuals who, for reasons of economy or operational security, rely on older technology.

The Persistence of Legacy Vulnerabilities in Modern Espionage

The decision to target iOS 12 is a calculated move that exposes a critical weakness in the mobile ecosystem: the fragmentation of device support. While Apple is lauded for its long-term software support, millions of devices globally remain effectively frozen in time, unable to run modern security protocols. This creates a permanent attack surface for sophisticated actors. The exploit discovered by Google TAG functions as a reminder that "obsolete" does not mean "offline." Intelligence agencies and commercial surveillance vendors (CSVs) understand that high-value targets often maintain older devices as secondary phones, believing them to be less conspicuous, when in reality they are soft targets for n-day exploits—attacks on known vulnerabilities for which patches may exist but haven’t been applied.

This specific campaign highlights the technical prowess required to chain together exploits for older architectures. The attackers had to bypass security mitigations that, while dated, remain formidable on Apple devices. The sophistication of the code is what initially raised eyebrows among researchers: it did not resemble the typical "smash-and-grab" tactics of cybercriminal gangs hunting credit card data. Instead, it showed the patience and engineering depth characteristic of state-backed development or top-tier mercenary firms. The MSN report notes that the fingerprints on the exploit point toward a U.S. origin, a revelation that complicates the geopolitical stance of Western democracies, which frequently condemn the proliferation of commercial spyware by authoritarian states.

Tracing the Digital Fingerprints to Western Origins

The assertion that this tool may have originated from the U.S. government or its defense industrial base raises uncomfortable questions about the control of cyber munitions. Historically, the U.S. has maintained a stockpile of "zero-day" vulnerabilities for national security purposes. However, the migration of these tools into the wild—whether through leaks, reverse engineering by adversaries, or the commercial activities of contractors—creates a boomerang effect. If a tool developed for counter-terrorism finds its way into a broader surveillance campaign, the distinction between a lawful intercept tool and a weapon of oppression vanishes. Industry insiders have long warned that code developed in Fort Meade does not always stay there.

Furthermore, the broader context of this discovery aligns with a surge in activity from commercial surveillance vendors who often hire from the ranks of Western intelligence agencies. Companies operating in this space frequently market their wares as "lawful interception" tools intended for government clients to track criminals and terrorists. However, as noted in recent coverage by TechCrunch regarding Google’s ongoing battles with spyware vendors, these tools are routinely abused to target journalists, dissidents, and political rivals. The line between a contractor developing a tool for the U.S. government and that same contractor (or its employees) selling similar capabilities on the international market is often governed by complex, and sometimes porous, export controls.

The Mechanics of the Watering Hole Attack

Technically, the attack vector used in this campaign is classic yet devastatingly effective. By compromising a website frequented by the target demographic, the attackers removed the need for the victim to click a suspicious link in a text message; the technique is described as "zero-click" or "one-click," depending on whether any interaction beyond loading the page is required. Once the user visited the infected site, the WebKit vulnerability was triggered, allowing the attackers to execute arbitrary code within the browser. Chained with a privilege-escalation bug, that foothold can yield full control of the device, enabling the exfiltration of messages, photos, and location data. The focus on WebKit is significant: because it is the only browser engine Apple permits on iOS, a vulnerability there affects every browser on the device, from Safari to Chrome.
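The selectivity described above is typically implemented with mundane server-side logic. The sketch below is illustrative only; the user-agent pattern and both file names are hypothetical stand-ins, not code recovered from this campaign, but they show how a compromised site can serve an exploit solely to iOS 12 Safari visitors while every other reader sees an untouched page:

    import re

    def handle_request(user_agent: str) -> str:
        # Serve the exploit stage only to browsers identifying as iOS 12.x;
        # all other visitors get the clean page, keeping the compromise quiet.
        # Both file names are hypothetical placeholders.
        if re.search(r"iPhone OS 12(_\d+)+ like Mac OS X", user_agent):
            return "exploit_loader.html"
        return "original_page.html"

This gating is also why watering-hole campaigns are hard to catch: to a crawler, a patched device, or a desktop browser, the site behaves entirely normally.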

The investigation by Google TAG also sheds light on the cat-and-mouse game between Apple and these vendors. While Apple released a patch for this vulnerability (CVE-202X-XXXX) in a subsequent security update, the window of exposure for users on iOS 12 was substantial. The incident underscores that security is not a static state but a continuous process of patching holes that are often discovered by the adversary first. As reported by The Record, the commercial spyware industry is responsible for a significant share of known zero-day exploitation, driving a multi-billion-dollar market that incentivizes the hoarding of vulnerabilities rather than their disclosure.

The Gray Market of Commercial Surveillance

The ecosystem supporting these exploits is vast and lucrative. It is not merely a few rogue hackers but a structured industry with marketing departments, customer support, and legal teams. When Google identifies a threat potentially linked to the U.S. government, it often points to the complex web of contractors that service the intelligence community. These entities exist in a legal gray zone. They develop capabilities for state agencies, but the intellectual property—the methods of exploitation—can sometimes bleed into commercial products sold to allied nations. This proliferation increases the risk that Western-developed technology will be used against Western interests or values abroad.

This specific case involving iOS 12 serves as a case study in "technical debt" becoming a national security liability. Organizations and governments that fail to upgrade their mobile fleets are effectively inviting this caliber of espionage. The cost of replacing hardware is often cited as a barrier, yet the cost of a compromised device in a sensitive environment is incalculable. Security professionals must view legacy devices not just as old phones, but as active vulnerabilities within their network perimeter. The MSN article reinforces that the attackers are aware of this negligence and are actively capitalizing on it.
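For teams taking that advice seriously, the first step is simply knowing what is in the fleet. The following is a minimal audit sketch; the inventory entries, field names, and version floor are assumptions for illustration (in practice the list would come from an MDM export), not tied to any specific product:

    # Flag devices whose OS version falls below a chosen policy floor.
    MINIMUM_IOS = (16, 0)   # assumed policy floor; tune to your risk appetite

    inventory = [
        {"device": "exec-backup-phone", "os": "12.5.7"},   # sample entries
        {"device": "field-ipad",        "os": "17.2"},
    ]

    def parse_version(version: str) -> tuple:
        return tuple(int(part) for part in version.split("."))

    for item in inventory:
        if parse_version(item["os"]) < MINIMUM_IOS:
            print(f"FLAG: {item['device']} runs iOS {item['os']}, "
                  f"below the {'.'.join(map(str, MINIMUM_IOS))} floor")

The point is not the dozen lines of Python but the discipline behind them: a device that can no longer be patched should be treated as a finding, not as furniture.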

Apple’s Battle Against Infinite Patch Cycles

Apple’s response to these threats has been aggressive, introducing features like "Lockdown Mode" for high-risk users. However, Lockdown Mode is a feature of newer operating systems, leaving iOS 12 users without this shield. The company is in the difficult position of trying to secure an ecosystem that spans over a decade of hardware releases. Every patch released for an older version is a tacit admission that the device is still in use and under attack. The discovery of this U.S.-linked exploit forces a re-evaluation of how long a device should reasonably be supported and whether the continued operation of legacy hardware is compatible with modern security requirements.

The implications extend beyond the immediate victims. If U.S. government-developed exploits are being identified in the wild by Google, it suggests a potential loss of control over these digital assets. It mirrors the "EternalBlue" incident, where NSA-developed tools were leaked and subsequently used to fuel the WannaCry ransomware attacks. While the scale here is different—targeted espionage versus mass disruption—the principle remains: cyberweapons are difficult to contain. Once deployed, they can be analyzed, reverse-engineered, and repurposed by other actors, including those hostile to the nation that developed them.

The Geopolitical Boomerang of Cyber Weaponry

Ultimately, this revelation serves as a critical data point for industry insiders tracking the proliferation of cyber capabilities. It challenges the binary view of "attacker" and "defender." In the digital domain, the developers of security tools, the creators of exploits, and the targets are often entangled in a complex web of alliances and contracts. For the Chief Information Security Officer (CISO) or the security architect, the lesson is clear: trust no device, particularly not an old one, and recognize that the sophistication of the threat landscape now includes tools that may have been born in a laboratory funded by tax dollars.

As the dust settles on this specific campaign, the focus must shift to the broader trend. The targeting of legacy iOS versions is likely to continue as long as those devices remain in circulation. The involvement of Western-developed tools in these attacks adds a layer of political complexity that requires transparency and perhaps tighter regulation of the cyber-arms trade. Until then, the digital shadows will continue to hide threats that are both foreign and, uncomfortably, domestic.



from WebProNews https://ift.tt/ezPo0uv