Saturday, 21 March 2026

The Machines Are Browsing Now: How Bot Traffic Is About to Drown Out Humanity Online

Sometime around 2027, a threshold will be crossed that most people won’t even notice. The majority of traffic flowing across the internet — the requests, the page loads, the data calls — will no longer come from human beings. It will come from bots.

That’s the finding from a recent analysis by Search Engine Land, drawing on data from Imperva’s annual Bad Bot Report and broader industry trends. According to the report, automated traffic already accounts for roughly half of all internet activity. The trajectory suggests that by 2027, bots will definitively overtake humans as the primary consumers of web content. Not by a slim margin. By a widening gap that shows no sign of reversing.

This isn’t science fiction. It’s an infrastructure problem, a business problem, and an identity crisis for the open web — all rolled into one.

The numbers have been moving in this direction for years. Imperva’s 2024 Bad Bot Report found that bad bot traffic alone hit 32% of all internet traffic in 2023, the highest level the firm had recorded since it began tracking in 2013. Add in “good” bots — search engine crawlers, monitoring tools, AI training scrapers — and automated traffic easily eclipses what humans generate. The split was roughly 49.6% bot, 50.4% human in 2023. The gap has been narrowing year over year, and at current rates, the crossover point arrives within the next two years.

What’s driving this acceleration? Two forces, primarily. The first is the explosion of generative AI. Large language models from OpenAI, Google, Anthropic, Meta, and dozens of smaller players require enormous volumes of web data to train and retrain. Their crawlers are aggressive, persistent, and increasingly sophisticated. They don’t just visit a page once. They return repeatedly, scraping content at scale to feed models that are themselves generating more automated queries downstream. It’s a compounding loop.

The second force is older but no less potent: commercial bot activity, both legitimate and malicious. Price-scraping bots hammer e-commerce sites. Credential-stuffing bots probe login pages. Content-scraping bots replicate entire publications. Ad fraud bots generate fake impressions. Inventory-hoarding bots snap up concert tickets and limited-edition sneakers before any human finger can click “buy.” These operations have grown more sophisticated, more distributed, and harder to detect.

And the economics favor the bots. Running a botnet or a scraping operation is cheap. Defending against one is expensive.

For publishers, the implications are severe and immediate. Website analytics — the foundation of digital advertising — become unreliable when a significant portion of traffic is non-human. Advertisers paying on a cost-per-impression or cost-per-click basis have long worried about bot-inflated metrics. As automated traffic grows, the signal-to-noise ratio deteriorates further. The digital advertising industry already loses an estimated $84 billion annually to ad fraud, according to Juniper Research. That number is going up, not down.

Search engine optimization, the discipline that has governed web visibility for two decades, faces its own reckoning. Google’s search results are increasingly populated by AI-generated summaries that answer queries without sending users to the source website. Meanwhile, AI companies are crawling those same source websites to build the models that power those summaries. Publishers find themselves in a perverse position: their content trains the systems that reduce their traffic. Search Engine Land notes that this dynamic is already reshaping how SEO professionals think about content strategy, with some publishers experimenting with blocking AI crawlers entirely through robots.txt directives — a blunt instrument with uncertain consequences.
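For sites that do experiment with blocking, the mechanics are simple even if the consequences aren't. A minimal robots.txt sketch using the crawler tokens the major AI vendors have published (GPTBot for OpenAI, Google-Extended for Google's model training, CCBot for Common Crawl, ClaudeBot for Anthropic); the roster changes as new crawlers appear:

    # Advisory opt-outs for AI training crawlers; tokens as published
    # by each vendor as of this writing.
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

Nothing enforces these directives. Well-behaved crawlers honor them; scrapers ignore them. That asymmetry is a large part of the uncertainty publishers are weighing.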

Cloudflare, which sits in front of a massive share of the internet’s traffic, has been sounding alarms of its own. The company reported earlier this year that AI bot traffic to its customers has surged, with some sites seeing AI crawlers account for a disproportionate share of their bandwidth consumption. Cloudflare introduced tools in 2024 specifically designed to let website operators identify and block AI scrapers, a tacit acknowledgment that the existing bot-management toolset wasn’t keeping pace.

The problem extends beyond websites. APIs — the programmatic interfaces that power mobile apps, IoT devices, and cloud services — are even more heavily targeted by automated traffic. Imperva’s data shows that API-directed bot attacks grew 44% year over year. APIs are attractive targets because they’re designed for machine-to-machine communication in the first place, making it harder to distinguish legitimate automated requests from malicious ones.

So what happens when most of the internet’s activity is machine-generated? Several things, none of them comfortable for the incumbents.

First, the economics of web hosting change. Bandwidth costs money. Server capacity costs money. When bots consume more resources than humans, website operators are effectively subsidizing automated access to their content. Some are pushing back. The New York Times sued OpenAI. Reddit struck a licensing deal with Google. Stack Overflow gated its data behind paid API access. These are early skirmishes in what will become a prolonged fight over who pays for the content that trains AI systems.

Second, authentication and verification become paramount. Proving that a visitor is human — through CAPTCHAs, behavioral analysis, device fingerprinting, or cryptographic attestation — shifts from a minor friction point to a fundamental requirement. But every verification step adds latency and degrades user experience. The tension between security and usability will intensify.

Third, the very concept of “web traffic” as a meaningful metric starts to erode. If most visits to a webpage are automated, then traffic numbers tell you less about audience size and engagement and more about how attractive your data is to machines. Media companies, advertisers, and investors will need new frameworks for measuring digital value. Page views won’t cut it anymore. They arguably haven’t for a while.

There’s a deeper philosophical dimension here too. The internet was built for people. Its protocols, its design patterns, its business models — all assume a human on the other end of the connection. A web where machines are the primary users is a fundamentally different thing. It’s less a library and more a warehouse. Less a town square and more a data pipeline.

Some industry observers see opportunity in this shift. Bot management is a growing market, projected to reach $2.1 billion by 2028 according to MarketsandMarkets. Companies like Imperva (now part of Thales), Cloudflare, Akamai, and DataDome are investing heavily in detection and mitigation technologies. Machine learning models trained to identify bot behavior are themselves becoming more sophisticated — an arms race between automated attackers and automated defenders.

But the arms race metaphor only goes so far. Not all bots are adversaries. Search engines need to crawl the web to index it. Price comparison services need to aggregate data to function. Accessibility tools use automated processes to make content available to people with disabilities. The challenge isn’t eliminating bots. It’s distinguishing between the ones that add value and the ones that extract it.

That distinction is getting harder to make. AI crawlers from well-known companies operate in a gray zone — they’re not malicious in the traditional sense, but they consume resources and repurpose content without direct compensation to the creator. The legal and ethical frameworks for this kind of activity are still being built, mostly through litigation and ad hoc licensing agreements rather than coherent policy.

The regulatory picture is fragmented. The EU’s AI Act addresses some aspects of data provenance and transparency but doesn’t directly regulate web crawling. In the United States, the legal status of AI training on copyrighted web content remains unresolved, with multiple cases working through the courts. Japan has taken a permissive stance, while Australia is considering mandatory licensing schemes. No consensus exists.

Meanwhile, the bots keep coming. Faster, smarter, more numerous.

For businesses operating online — which at this point means virtually all businesses — the practical takeaways are straightforward if unglamorous. Invest in bot detection and traffic analysis. Audit your server logs. Understand what percentage of your traffic is human. Rethink metrics that assume human visitors. Review your robots.txt and terms of service. Consider whether your content is being used to train models without your consent, and whether you have recourse.
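A first pass at that log audit takes only a few lines of scripting. Here is a minimal Python sketch, assuming the common Nginx/Apache "combined" log format in which the user-agent is the final quoted field; it counts only self-declared bots, so the true automated share is almost certainly higher than whatever it reports:

    import re
    from collections import Counter

    # Substrings that identify self-declared bots. Stealthy bots spoof
    # browser user-agents, so treat the bot share below as a floor.
    BOT_MARKERS = ["bot", "crawl", "spider", "slurp", "curl",
                   "python-requests", "go-http-client", "headless"]

    # In the "combined" format, the user-agent is the last quoted field.
    UA_RE = re.compile(r'"([^"]*)"\s*$')

    def classify(user_agent: str) -> str:
        ua = user_agent.lower()
        return "bot" if any(marker in ua for marker in BOT_MARKERS) else "human"

    counts = Counter()
    with open("access.log", encoding="utf-8", errors="replace") as f:
        for line in f:
            match = UA_RE.search(line)
            if match:
                counts[classify(match.group(1))] += 1

    total = sum(counts.values()) or 1
    for label, n in counts.most_common():
        print(f"{label}: {n} requests ({100 * n / total:.1f}%)")

Even this crude split is often eye-opening, and pairing it with reverse-DNS checks on traffic claiming to be Googlebot or Bingbot catches the impersonators.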

For the technology industry broadly, the 2027 crossover point should serve as a forcing function. The infrastructure of the internet — its protocols, its economic models, its governance structures — was designed for a human-majority web. That era is ending. What replaces it will depend on decisions being made right now, in boardrooms and courtrooms and standards bodies, about who gets to access what, at what cost, and under what rules.

The machines aren’t coming. They’re already here. They’ve been here for years. The difference is that soon, they’ll outnumber us. And the web will have to reckon with what that means — for commerce, for content, for the basic question of who the internet is actually for.



from WebProNews https://ift.tt/pzNeL2X

Friday, 20 March 2026

The Wayland Wars: How Linux’s Grand Display Server Replacement Fractured a Community and Set Back Desktop Computing

In 2008, a Red Hat engineer named Kristian Høgsberg began writing code that would eventually ignite one of the longest and most bitter technical debates in open-source history. The project was called Wayland, and its mission sounded reasonable enough: replace the aging X Window System — the display server protocol that had powered Unix and Linux graphical interfaces since 1984 — with something modern, secure, and architecturally sound.

Sixteen years later, the transition remains incomplete, contentious, and, according to a growing chorus of developers and power users, actively harmful to the Linux desktop.

The Case Against X11 — And Why Wayland Was Supposed to Fix Everything

The X Window System, commonly referred to as X11 or simply X, is old by any standard. It predates the World Wide Web. It was designed for an era of networked terminals, where a thin client might display an application running on a remote mainframe. That network-transparent architecture — the ability to run a program on one machine and display it on another — was a defining feature, not a bug.

But X11 accumulated decades of technical debt. Its security model was essentially nonexistent: any application could snoop on keyboard input from any other application. Screen capture and input injection required no special permissions. The compositing model was bolted on after the fact through extensions like XComposite, and the result was a patchwork of protocols, hacks, and workarounds that compositor developers had to wrestle with constantly.

Wayland’s pitch was clean. Strip away the cruft. Build a protocol where the compositor is the display server. Give each application its own buffer, eliminate the ability for apps to spy on each other, and let the compositor handle all the rendering directly. No more decades-old extension baggage. No more security holes wide enough to drive a truck through.

On paper, it was compelling. In practice, it broke nearly everything.

Omar Rizwan, an independent developer and researcher, published a detailed technical critique on his blog, titled “Wayland set the Linux desktop back by 10 years,” that catalyzed renewed debate across the Linux community. His argument is not that Wayland’s goals were wrong, but that the project’s ideological rigidity — its refusal to provide certain capabilities that X11 offered — has resulted in a net loss for Linux desktop users and developers. As Rizwan put it, the Wayland project “Democrats-falling-in-line’d the Linux desktop into mass-adopting a platform with huge gaps.”

The gaps he identifies are not obscure edge cases. They are fundamental capabilities that professional users and developers rely on daily.

Global hotkeys. Gone, or at best inconsistently implemented through compositor-specific portals. Under X11, any application could register a keyboard shortcut that worked regardless of which window had focus. Wayland’s security model forbids this by design because it would require one application to intercept input destined for another. The result: tools like push-to-talk in Discord, clipboard managers, and automation utilities either don’t work or require fragile workarounds specific to each compositor.

Window positioning. Gone. Under X11, an application could request to be placed at specific screen coordinates. Wayland considers this a compositor concern, not an application concern. So test automation frameworks that need to position windows predictably — broken. Tiling window manager workflows that relied on applications self-positioning — broken. The Wayland developers’ response, per Rizwan’s account, has consistently been that applications shouldn’t need to do this. But they do.

Screen capture and recording. Severely restricted. The same security model that prevents keystroke snooping also prevents legitimate screen recording and sharing without going through an XDG Desktop Portal, which requires user interaction and compositor support. This broke OBS Studio workflows, remote desktop tools, and accessibility software that needed to read screen contents programmatically.

Color management, HDR support, and advanced display features were either missing or arrived years late. Network transparency — X11’s original killer feature — was simply abandoned. The Wayland developers’ position was that VNC or RDP-style solutions could fill the gap. For many sysadmins and remote users, they don’t.

Rizwan’s critique extends beyond missing features to the culture of the project itself. He describes a pattern where users and developers reporting broken workflows were told their use cases were invalid, or that they should file requests with the XDG Desktop Portal project, or that their compositor of choice should implement a proprietary extension. The effect was to shift blame away from the protocol and onto everyone else.

“The Wayland community will mass-close issues on their issue tracker and mass-close merge requests,” Rizwan observed. He compared the dynamic to a political movement that demands loyalty while delivering incomplete results.

The Fragmentation Problem

Perhaps the most damaging consequence of Wayland’s architecture is fragmentation. Under X11, there was one display server protocol. Applications written to the X protocol worked with any window manager or desktop environment. That’s no longer true.

Wayland is a protocol, but it’s a minimal one. Many capabilities that applications need — screen capture, global shortcuts, clipboard management, window positioning — are not part of the core protocol. They’re handled by extensions, and different compositors implement different extensions. GNOME’s Mutter compositor supports certain features. KDE’s KWin supports others. Sway, Hyprland, and the dozens of other Wayland compositors each make their own choices.

For application developers, this means targeting “Wayland” isn’t sufficient. You need to know which compositor your user is running. And you may need to implement multiple code paths. Or just give up and tell users to use X11 through XWayland, the compatibility layer that runs X11 applications inside a Wayland session.
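In practice, that targeting decision starts with environment sniffing. A minimal Python sketch using the conventional session variables; XDG_CURRENT_DESKTOP is about the closest portable hint to which compositor, and therefore which protocol extensions, an application will actually get:

    import os

    def display_backend() -> str:
        # XDG_SESSION_TYPE is set by most login managers ("wayland" or "x11"),
        # but it isn't guaranteed, so fall back to the display sockets.
        session = os.environ.get("XDG_SESSION_TYPE", "").lower()
        if session in ("wayland", "x11"):
            return session
        if os.environ.get("WAYLAND_DISPLAY"):
            return "wayland"
        if os.environ.get("DISPLAY"):
            return "x11"
        return "unknown"

    def desktop_hint() -> str:
        # e.g. "GNOME", "KDE", or "sway"; which extension set you can rely
        # on depends on this answer.
        return os.environ.get("XDG_CURRENT_DESKTOP", "unknown")

    print(f"backend={display_backend()} desktop={desktop_hint()}")

Some variant of this branching now lives in nearly every screen tool and toolkit, which is exactly the fragmentation cost the protocol's minimalism pushed onto applications.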

XWayland itself is a telling admission. Years into the Wayland transition, a huge number of applications — including many professional tools, games, and legacy software — still run through XWayland rather than natively. The compatibility layer works, mostly, but it reintroduces many of the performance and security characteristics that Wayland was supposed to eliminate.

The XDG Desktop Portal system was created to address some of these gaps by providing a standardized way for sandboxed applications to request capabilities like screen recording or file access through a user-mediated dialog. But portals are unevenly implemented across compositors, they add latency and complexity, and they fundamentally change the programming model. An automation tool that could previously capture a screen region in a single function call now needs to invoke a D-Bus service, handle an asynchronous user permission dialog, and deal with compositor-specific behavior differences.
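To make that concrete, here is roughly what the first leg of the portal round-trip looks like. A minimal sketch using dbus-python and GLib against the documented org.freedesktop.portal.ScreenCast interface; it assumes xdg-desktop-portal is running, and for brevity it subscribes to the Response signal after calling, whereas production code derives the request path from handle_token and subscribes first to avoid a race:

    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    portal = dbus.Interface(
        bus.get_object("org.freedesktop.portal.Desktop",
                       "/org/freedesktop/portal/desktop"),
        "org.freedesktop.portal.ScreenCast")
    loop = GLib.MainLoop()

    def on_response(code, results):
        # code 0 means the user approved the compositor's permission dialog.
        if code == 0:
            print("session handle:", results["session_handle"])
            # Still ahead before a single frame is captured: SelectSources()
            # and Start(), each its own Request/Response round-trip, then
            # pulling frames from the PipeWire stream the portal hands back.
        else:
            print("denied or cancelled, code", code)
        loop.quit()

    # Portal calls return immediately with a Request object path; the real
    # answer arrives later as a Response signal on that path.
    options = dbus.Dictionary({"handle_token": "req1",
                               "session_handle_token": "sess1"},
                              signature="sv")
    request_path = portal.CreateSession(options)
    bus.add_signal_receiver(on_response, signal_name="Response",
                            dbus_interface="org.freedesktop.portal.Request",
                            path=str(request_path))
    loop.run()

Under X11, by contrast, grabbing screen contents was a single synchronous XGetImage call: no dialog, no D-Bus, no compositor cooperation required.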

Rizwan’s blog post resonated because it articulated frustrations that had been simmering for years in developer communities. Discussions on Hacker News, Reddit’s r/linux, and X (formerly Twitter) have consistently featured complaints about Wayland regressions. But the major distributions kept pushing forward anyway.

Ubuntu switched its default session to Wayland in 2021 (after an earlier attempt in 2017 that was rolled back). Fedora made the switch even earlier. GNOME and KDE both declared Wayland their primary target. The X.Org server, meanwhile, has effectively entered maintenance mode — its last major release was in 2021, and active development has largely ceased.

This creates a ratchet effect. As X.Org development stalls, bugs go unfixed, drivers stop being optimized for it, and the argument for switching to Wayland becomes self-fulfilling. The old system is deteriorating — but it’s deteriorating in part because resources were redirected to its replacement before that replacement was ready.

Not everyone agrees with the critics. Wayland’s defenders — and there are many, including most of the core GNOME and KDE developers — argue that the transition pain is real but temporary, and that the security and architectural benefits are worth it. They point to genuine improvements: better handling of mixed-DPI displays, smoother frame pacing, elimination of screen tearing without hacks, and a security model that’s appropriate for a modern desktop where applications shouldn’t be able to keylog each other.

They also argue that many of the “missing features” are being addressed through ongoing protocol extensions. The ext-global-shortcuts protocol, for instance, is working its way through the standardization process. Color management and HDR protocols are in development. The situation is improving.

But “improving” after 16 years of development is itself part of the critique. X11 had these capabilities — imperfect, insecure, but functional — in the 1990s.

Recent discussions on social media and developer forums suggest the debate is far from settled. On X, developers have shared Rizwan’s post widely, with reactions splitting predictably along ideological lines. Those who prioritize security and architectural purity tend to support Wayland’s approach. Those who prioritize practical functionality and backward compatibility tend to side with the critics.

What Comes Next

The uncomfortable truth is that the Linux desktop is now committed to Wayland whether the transition is complete or not. X.Org is dying. The major desktop environments have moved on. Going back isn’t a realistic option.

But the path forward requires something the Wayland project has historically resisted: compromise. The protocol needs to accommodate use cases that its designers consider architecturally unclean. Global hotkeys, even if they create theoretical security concerns, are a practical necessity. Window positioning, even if it violates the compositor-owns-everything model, is required by real software. Screen capture without portal dialogs is needed by professionals who record, stream, and automate.

Some of this is happening. KDE’s approach has generally been more pragmatic than GNOME’s, implementing extensions and workarounds to maintain functionality that users expect. Hyprland and other independent compositors have taken similarly practical approaches. But the lack of a single standardized protocol for these capabilities means every compositor reinvents the wheel, and application developers pay the cost.

Rizwan’s post ends with a pointed observation. He doesn’t argue that X11 was good. He argues that Wayland’s developers confused “X11 is bad” with “everything X11 does is bad,” and in their zeal to build something architecturally pure, they threw away capabilities that took decades to develop without providing replacements.

The result is a Linux desktop that is, in measurable ways, less capable than it was five years ago. Not in every dimension — Wayland is genuinely better at some things. But in enough dimensions that matter to enough people that the frustration has reached a boiling point.

And so the Linux community finds itself in a familiar position: arguing about infrastructure instead of building on it. The display server wars have consumed enormous amounts of developer time, community goodwill, and user patience. Whether Wayland eventually delivers on its promise or becomes a cautionary tale about the costs of ideological purity in systems design, the price has already been steep.

The desktop Linux market share, stubbornly hovering around 4% globally, can’t afford many more decades-long transitions that leave users worse off in the interim. But that appears to be exactly what it’s getting.



from WebProNews https://ift.tt/wCR0LZS

Thursday, 19 March 2026

Tim Cook’s China Pilgrimage: Why Apple’s CEO Keeps Showing Up in Beijing When It Matters Most

Tim Cook landed in China this week for what Apple billed as the 40th anniversary celebration of its operations in the country. A concert. A photo op. A carefully choreographed display of corporate affection for the world’s second-largest economy. But behind the smiles and the stage lights, Cook’s visit carries weight that extends far beyond any anniversary milestone.

The trip, first reported by AppleInsider, marks yet another in a long string of personal appearances Cook has made in China — visits that have accelerated in frequency as geopolitical tensions between Washington and Beijing have intensified. Cook posted about the visit on Chinese social media platform Weibo, sharing images from the event and expressing gratitude for Apple’s four decades in the country.

Forty years. That’s how long Apple has maintained a presence in China, a relationship that predates the iPhone by more than two decades and one that has become arguably the most consequential supplier-market dependency in the global technology industry.

Apple doesn’t just sell products in China. It builds them there. The vast majority of iPhones, iPads, and MacBooks are assembled in Chinese factories operated by partners like Foxconn and Pegatron. China is simultaneously Apple’s most important manufacturing base and its third-largest market by revenue, generating roughly $67 billion in the company’s Greater China segment during fiscal 2024. That dual role — factory floor and showroom — creates a strategic vulnerability that no amount of supply chain diversification in India or Vietnam has yet meaningfully reduced.

Cook understands this better than anyone at Apple. He built his career on supply chain mastery, and he has cultivated personal relationships with Chinese officials and business leaders for years. His visits aren’t tourism. They’re diplomacy.

The timing of this particular trip is telling. The United States and China remain locked in a trade war that has seen tariffs escalate on both sides. The Biden administration maintained and in some cases expanded Trump-era tariffs on Chinese goods, and the current political environment in Washington shows little appetite for détente. Apple has so far managed to secure exemptions or workarounds for many of its products, but that protective barrier is never guaranteed. Every quarter brings fresh speculation about whether iPhones could be swept into broader tariff actions.

Meanwhile, Apple faces intensifying competitive pressure inside China itself. Huawei’s resurgence has been one of the biggest stories in the global smartphone market over the past 18 months. After years of being hobbled by U.S. sanctions that cut off its access to advanced chips, Huawei stunned the industry in August 2023 with the Mate 60 Pro, which featured a domestically produced 7-nanometer processor from SMIC. The phone sold briskly. Huawei followed up with additional models that have continued to eat into Apple’s share among Chinese consumers, particularly in the premium segment where Apple once faced little domestic competition.

The numbers reflect the shift. According to data from research firms including IDC and Counterpoint, Apple’s iPhone shipments in China declined in multiple quarters during 2024, while Huawei posted strong gains. Apple slipped out of the top five smartphone vendors in China for certain quarters — a position it hadn’t found itself in for years. Cook has acknowledged the competitive dynamics on earnings calls, though he’s typically framed them in optimistic terms, pointing to Apple’s installed base and customer loyalty.

But loyalty is a two-way street in China. And national sentiment plays a role that’s difficult to quantify from Cupertino. Chinese consumers have shown a growing preference for domestic brands, a trend accelerated by pride in Huawei’s ability to produce competitive hardware despite American sanctions. Apple’s brand still carries enormous prestige in China, but prestige alone doesn’t guarantee market share when a credible domestic alternative exists and when buying local carries patriotic overtones.

So Cook keeps showing up. In person. Repeatedly.

This visit follows trips in 2024 and 2023, each carefully staged to signal Apple’s ongoing commitment to China. He’s visited Apple Stores, met with developers, praised Chinese innovation, and posed for photos with local partners. The consistency of these appearances stands in contrast to the approach of other major American tech CEOs, many of whom have reduced their China engagement or avoided it altogether amid political pressures at home.

The anniversary concert itself — marking 40 years of Apple in China — serves as a useful framing device. It allows Cook to celebrate the relationship without making overtly political statements. It positions Apple as a long-term partner rather than a fair-weather friend. And it gives Chinese state media positive content to broadcast, which matters in a country where the government’s attitude toward foreign companies can shift the commercial weather overnight.

Apple’s investment in China extends well beyond assembly lines. The company operates multiple research and development centers in the country, employs thousands of Chinese workers directly, and supports millions more through its supply chain and App Store developer community. Apple has said that it supports more than five million jobs in China. That figure, whether precisely accurate or generously calculated, represents the kind of economic footprint that gives both Apple and the Chinese government reasons to maintain a functional relationship even when bilateral tensions flare.

There’s a pragmatic calculus at work. China needs Apple’s jobs and technology transfer. Apple needs China’s manufacturing capacity and consumer market. Neither side benefits from a rupture, which is why the relationship has proven remarkably durable despite tariffs, data privacy regulations, and occasional government-directed boycotts of American products.

Still, the risks are real and growing. China’s data localization requirements have forced Apple to store Chinese users’ iCloud data on servers operated by a state-owned company, Guizhou-Cloud Big Data. Privacy advocates have raised concerns about the arrangement, though Apple has maintained that it retains control of encryption keys. The Chinese government has also restricted iPhone use among government employees in certain agencies, a move widely interpreted as both a security measure and a signal of support for domestic alternatives.

Apple’s response to these pressures has been characteristically quiet and accommodating. The company has complied with Chinese regulations requiring the removal of certain apps from its App Store in the country, including VPN applications and, at various points, apps related to news and political content. These concessions have drawn criticism from human rights organizations and some U.S. lawmakers, but Apple has shown no indication of changing course. The commercial stakes are simply too high.

Cook’s personal brand in China remains strong. He’s one of the few American business leaders who can post on Weibo and generate genuine engagement. His visits receive favorable coverage in Chinese media, and his respectful tone toward Chinese culture and business practices has earned him goodwill that other executives lack. This soft power isn’t accidental — it’s the product of years of deliberate relationship-building that Cook has prioritized since becoming CEO in 2011.

The question hanging over all of this is whether personal diplomacy and anniversary concerts will be enough to sustain Apple’s position in China over the next decade. The structural forces working against the company are formidable. Huawei isn’t going away. Chinese semiconductor capabilities, while still trailing the leading edge, are advancing. Government policy increasingly favors domestic technology self-sufficiency. And the broader U.S.-China relationship shows few signs of warming.

Apple has hedged its bets by expanding manufacturing in India, where it now assembles a growing share of iPhones for both the local market and export. But India is years away from matching China’s manufacturing scale, supplier density, and workforce expertise. Vietnam plays a role too, primarily for accessories and some Mac production. These are meaningful steps, but they don’t eliminate Apple’s China dependency — they merely reduce it at the margins.

For now, Cook’s strategy appears to be one of persistent engagement. Show up. Celebrate the relationship. Invest visibly. Comply with local regulations. And hope that the commercial logic of mutual benefit continues to outweigh the centrifugal forces of geopolitical competition.

It’s a strategy without a clear endgame, which is perhaps the point. In the relationship between the world’s most valuable company and the world’s most populous country, there is no final resolution — only ongoing management. And Tim Cook, more than any other figure in American business, has made that management his personal mission.

The concert is over. The photos have been posted. Cook will fly back to Cupertino, where the next earnings call will bring another round of questions about China. He’ll answer them carefully, as he always does. And then, in a few months, he’ll probably be back in Beijing or Shanghai, doing it all over again.



from WebProNews https://ift.tt/fOI7p6t

Google’s Quiet Infrastructure Play: Why Wi-Fi Credential Sync in Android 16 Matters More Than You Think

Google is threading a needle that most users will never notice — and that’s precisely the point. Buried in the latest Android 16 update is a feature that lets Wi-Fi passwords sync automatically across devices signed into the same Google account. No QR codes. No retyping 20-character strings from the bottom of a router. Just walk in, connect, and move on.

But for enterprise IT departments, device manufacturers, and the broader mobile industry, this small change carries outsized implications about how Google envisions multi-device management, cloud-first networking, and the competitive war with Apple’s already-entrenched iCloud Keychain.

As TechRepublic reported, the Wi-Fi credential sync feature is part of a broader set of updates rolling out with Android 16, which Google has been positioning as a maturity release — one focused less on flashy new capabilities and more on tightening the connective tissue between devices. The feature works through Google’s cloud infrastructure, storing encrypted Wi-Fi credentials and distributing them to other Android devices logged into the same account. It’s the kind of plumbing work that rarely makes headlines but reshapes user expectations over time.

The timing is telling. Apple has offered Wi-Fi password sharing through iCloud Keychain for years, creating a frictionless experience that keeps users locked into its hardware family. When an iPhone user sets up a new iPad or MacBook, known Wi-Fi networks simply appear. It’s one of those invisible conveniences that makes switching to Android feel like a downgrade — not because Android is worse, but because it forces users to repeat mundane setup tasks Apple eliminated long ago.

Google clearly wants to close that gap. And fast.

The sync mechanism reportedly uses end-to-end encryption, meaning Google itself shouldn’t have access to plaintext Wi-Fi passwords stored in the cloud. This matters for enterprise deployments where WPA2-Enterprise or WPA3 credentials could, in theory, be exposed if cloud storage were compromised. Google hasn’t published a detailed white paper on the encryption architecture yet, but the company has historically used its Titan security infrastructure and on-device encryption keys for similar sensitive data synchronization, such as Chrome password sync.

For IT administrators managing fleets of Android devices through Google’s endpoint management or third-party MDM solutions, Wi-Fi credential sync introduces both convenience and complexity. On the convenience side, provisioning new devices for employees becomes faster. A worker who connects to the corporate guest network on their phone will find their tablet or Chromebook already authenticated. But complexity arises in environments where network access is tightly controlled. If an employee’s personal Android device syncs corporate Wi-Fi credentials, that’s a potential policy violation — or at minimum, an audit headache.

Google will likely need to build granular controls into Android Enterprise to let administrators disable credential sync for managed networks. Whether those controls ship with the initial Android 16 release or arrive later remains unclear.

The feature also carries implications for the growing category of Android-powered devices beyond phones. Think about it: Android runs on tablets, cars, TVs, smart displays, wearables, and an expanding range of IoT hardware. A world where Wi-Fi credentials flow automatically to every Google-authenticated device changes the setup experience for all of them. Your new Pixel Tablet connects to your home network the moment you sign in. Your Android Auto head unit picks up credentials from your phone. The friction disappears.

This is infrastructure-level thinking, not feature-level thinking. Google is building toward a world where the Google account itself becomes the master key to network access, device configuration, and cross-platform continuity. Wi-Fi sync is one brick in that wall.

Samsung, which dominates Android hardware sales globally, will be an interesting variable. Samsung has its own SmartThings platform and has historically layered proprietary features on top of stock Android. Whether Samsung embraces Google’s Wi-Fi sync or builds a parallel system through Samsung accounts could fragment the experience for users. Samsung’s One UI has occasionally duplicated Google services — Samsung Internet vs. Chrome, Samsung Notes vs. Google Keep, Samsung Pass vs. Google Password Manager — and Wi-Fi management could become another battleground.

There’s a security dimension worth examining in detail. Wi-Fi credential sync means that compromising a single Google account potentially grants an attacker access to every Wi-Fi network that account’s owner has ever joined. That’s a meaningful expansion of the blast radius from a single account compromise. Google’s existing protections — two-factor authentication, passkey support, suspicious login detection — become even more critical when the account holds network access credentials alongside email, documents, and payment information.

Short version: your Google account just got more valuable to attackers.

The broader Android 16 release includes other updates that, taken together, suggest Google is focused on reducing the setup and management burden across devices. Improvements to Nearby Share (now Quick Share), better cross-device clipboard handling, and tighter integration with Chromebooks all point in the same direction. Google wants the experience of owning multiple Android devices to feel coherent rather than fragmented.

This has been Apple’s advantage for over a decade. The tight integration between iPhone, iPad, Mac, Apple Watch, and AirPods created a gravitational pull that kept users buying Apple hardware. Google’s challenge is harder because Android is an open platform running on hardware from dozens of manufacturers with different update schedules, software layers, and business incentives. Wi-Fi credential sync works because it operates at the Google account level, bypassing manufacturer fragmentation entirely. It doesn’t matter if you have a Pixel, a Samsung Galaxy, or a OnePlus — if you’re signed into the same Google account, the credentials follow.

That’s a smart architectural choice. And it hints at Google’s broader strategy of making the account, not the device, the center of the user experience.

Enterprise adoption of Android has been climbing steadily, particularly in frontline worker deployments, logistics, healthcare, and retail. According to IDC’s most recent data, Android holds roughly 71% of the global smartphone market. In many organizations, especially outside North America, Android devices outnumber iPhones significantly. Features like Wi-Fi credential sync make Android more palatable for IT departments that have historically favored iOS for its consistency and manageability.

But Google will need to address the policy controls question directly. Managed devices in corporate environments often connect to segmented networks with specific access policies. If credential sync allows those network details to leak to unmanaged personal devices, security teams will push back. The Android Enterprise team has generally been responsive to these concerns — the work profile separation model, for instance, keeps corporate and personal data isolated on the same device. A similar approach for network credentials would be the logical extension.

There’s also the question of how this interacts with captive portals and certificate-based authentication. Many enterprise and institutional Wi-Fi networks don’t rely on simple passwords. Universities use eduroam. Corporations use 802.1X with certificate-based authentication. Hotels and airports use captive portals. Wi-Fi credential sync in its current form likely handles PSK (pre-shared key) networks only. Extending it to certificate-based networks would require syncing not just passwords but digital certificates, which introduces a whole different set of security and management challenges.
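The gap is visible in a wpa_supplicant-style configuration, one common way Linux systems express both kinds of network (the paths and identity below are illustrative). The PSK block is a single secret string that syncs trivially; the 802.1X block references a per-user identity and per-device certificates that cannot simply be copied to another device:

    # Home network: one shared secret, easy to sync across devices.
    network={
        ssid="HomeWiFi"
        psk="correct horse battery staple"
    }

    # Enterprise network: 802.1X / EAP-TLS, bound to certificates and keys
    # provisioned per device. Syncing this config without the credential
    # material it points to accomplishes nothing.
    network={
        ssid="CorpNet"
        key_mgmt=WPA-EAP
        eap=TLS
        identity="user@example.com"
        ca_cert="/etc/certs/ca.pem"
        client_cert="/etc/certs/user.pem"
        private_key="/etc/certs/user.key"
    }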

Google hasn’t said whether certificate sync is on the roadmap. It should be.

For consumers, the feature is straightforward quality-of-life improvement. The kind of thing you don’t think about until you set up a new device and realize you don’t remember the Wi-Fi password for your parents’ house, your office, or your favorite coffee shop. Apple users have taken this for granted. Android users are finally catching up.

The competitive dynamics extend beyond Apple, too. Microsoft has been building its own cross-device features through Phone Link and the broader Windows-Android integration. Amazon’s Fire tablets run a forked version of Android and maintain their own credential management through Amazon accounts. As credential sync becomes table stakes, every platform player will need an answer.

So where does this leave us? Google’s Wi-Fi credential sync in Android 16 isn’t a headline-grabbing feature. It won’t sell phones. It won’t trend on social media. But it’s the kind of infrastructural improvement that, compounded over time, makes the Google account indispensable. And that’s exactly what Google wants. Every feature that deepens the connection between a user and their Google account raises the switching cost to another platform. Wi-Fi sync alone won’t keep someone from moving to iPhone. But Wi-Fi sync plus password sync plus photo backup plus document access plus payment credentials plus messaging history — that’s a gravitational field that gets harder and harder to escape.

Google is playing the long game here. One synced Wi-Fi password at a time.



from WebProNews https://ift.tt/fnSq5Oi

Wednesday, 18 March 2026

ShinyHunters Is Back — And the Snowflake Breach Was Just the Beginning

The hacking collective known as ShinyHunters, already infamous for orchestrating one of the largest cloud data breaches in history through Snowflake’s customer environments last year, has resurfaced with claims of fresh high-profile victims. The group’s latest alleged exploits, reported by The Register, suggest an operation that hasn’t slowed down despite law enforcement pressure and at least one arrest within its ranks.

This time, ShinyHunters is claiming to have compromised data from multiple enterprise targets, posting samples on dark web forums as proof. The group’s tactics appear consistent with its established playbook: targeting cloud infrastructure, exploiting stolen credentials, and monetizing massive datasets. But the scale and audacity of the claims — coming after a period when many assumed the group had been disrupted — signal something more troubling for corporate security teams.

A pattern is emerging. And it’s one that should unsettle every CISO managing cloud-heavy infrastructure.

From Snowflake to Now: The Evolution of a Persistent Threat

ShinyHunters first grabbed global attention in 2020 with a string of breaches hitting companies like Microsoft’s GitHub repositories, Tokopedia, and Mashable. The group operated with a kind of brazen professionalism, listing stolen databases on underground markets with the polish of a SaaS vendor hawking subscription tiers. But 2024 marked their most consequential campaign.

The Snowflake incident, which came to light in mid-2024, wasn’t a breach of Snowflake’s own infrastructure per se. Instead, ShinyHunters and affiliated actors systematically targeted Snowflake customer accounts that lacked multi-factor authentication, using credentials harvested from infostealer malware infections on employee machines. The downstream impact was staggering. Ticketmaster, AT&T, Santander Bank, Advance Auto Parts, and LendingTree were among the confirmed victims, with hundreds of millions of records exposed across the campaign.

Mandiant, which investigated the Snowflake-related intrusions, attributed the activity to a threat cluster it tracked as UNC5537, noting significant overlap with ShinyHunters’ known infrastructure and methods. The firm found that roughly 165 Snowflake customer accounts had been potentially compromised. AT&T alone disclosed that call and text records of nearly all its wireless customers — around 110 million people — had been accessed.

One member of the operation, a Canadian national named Alexander Moucka (known online as “Judische” and “Waifu”), was arrested in late 2024. John Erin Binns, a U.S. citizen living in Turkey, had already been detained there. U.S. authorities unsealed indictments. The conventional wisdom was that the group had been meaningfully degraded.

Conventional wisdom, it turns out, was premature.

The latest claims from ShinyHunters, as detailed by The Register, indicate the group — or at least elements operating under its banner — remains active and capable. The new alleged victims span technology, retail, and financial services sectors. ShinyHunters has posted data samples on the relaunched BreachForums, the same marketplace the group has historically used to peddle stolen information. The samples, while not independently verified at the time of reporting, are consistent with the kind of structured enterprise data the group has trafficked in before: customer PII, internal credentials, API keys, and authentication tokens.

Security researchers who monitor dark web forums have noted that ShinyHunters’ operational tempo appears to have actually increased in early 2026, despite the arrests. This shouldn’t be entirely surprising. Cybercriminal collectives, particularly those organized in loose, decentralized cells, are notoriously resilient. Lose one node, and another picks up the work. The brand persists even when individuals don’t.

There’s also a financial incentive structure that makes retirement unlikely. The Snowflake-related extortion campaign reportedly generated millions of dollars in ransom payments from victims desperate to prevent public disclosure of stolen data. AT&T reportedly paid approximately $370,000 in Bitcoin to have its stolen data deleted — a transaction that, as Wired reported, came with no real guarantee the data was actually destroyed. When the economics are that favorable, the motivation to continue is obvious.

Why Cloud Credential Theft Remains the Most Dangerous Attack Vector

The broader lesson from ShinyHunters’ sustained campaign isn’t just about one group’s persistence. It’s about a systemic vulnerability in how enterprises manage cloud access.

The Snowflake breaches worked because of a devastatingly simple attack chain. Infostealers like Raccoon, Vidar, and RedLine — commodity malware available for as little as $200 per month — infected employee devices, often personal machines used for work. These stealers harvested saved credentials from browsers. Those credentials were then sold in bulk on dark web marketplaces. ShinyHunters and their associates bought them, tested them against Snowflake login portals, and found that a shocking number of accounts had no MFA enabled. No zero-days. No sophisticated exploits. Just stolen passwords and open doors.

Snowflake responded by making MFA mandatory for new accounts and rolling out enhanced authentication controls. But the incident exposed a deeper problem: the shared responsibility model for cloud security, where the provider secures the platform and the customer secures access, breaks down when customers fail to implement basic hygiene. And many still do.
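That hygiene is checkable. A minimal audit sketch in Python using the snowflake-connector-python library and Snowflake's ACCOUNT_USAGE.USERS view; the account, user, and role are placeholders, and EXT_AUTHN_DUO reflects the Duo-based MFA enrollment flag defenders queried during the 2024 campaign, so environments using newer MFA methods may need additional columns:

    import snowflake.connector  # pip install snowflake-connector-python

    # Placeholder credentials; prefer key-pair or SSO auth in practice.
    conn = snowflake.connector.connect(
        account="YOUR_ACCOUNT", user="SECURITY_AUDITOR",
        password="...", role="ACCOUNTADMIN")

    # Users who can log in with a password but have no MFA enrollment:
    # exactly the population the Snowflake campaign preyed on.
    query = """
        SELECT name, disabled, last_success_login
        FROM snowflake.account_usage.users
        WHERE deleted_on IS NULL
          AND has_password = TRUE
          AND ext_authn_duo = FALSE
        ORDER BY last_success_login DESC NULLS LAST
    """
    for name, disabled, last_login in conn.cursor().execute(query):
        print(f"{name:32} disabled={disabled} last_login={last_login}")
    conn.close()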

A February 2026 report from Specops Software found that infostealer malware remains one of the fastest-growing threat categories, with credential logs from corporate environments showing up on Telegram channels and dark web shops within hours of infection. The supply chain for stolen credentials is now industrialized. It operates at scale, with specialization at every layer: malware developers, initial access brokers, credential validators, and finally, groups like ShinyHunters that monetize the access.

This is the threat model that keeps security leaders awake. Not the nation-state APT deploying custom implants. The teenager with $200 and a Telegram account buying credentials that unlock terabytes of customer data sitting in a cloud warehouse with no second factor.

The new ShinyHunters claims also raise questions about whether the group has expanded beyond Snowflake-specific targeting. The Register’s reporting suggests some of the newly claimed victims may involve other cloud platforms and SaaS applications. If confirmed, this would represent a broadening of the group’s operational scope — moving from a single-platform credential stuffing campaign to a more diversified approach targeting multiple cloud services.

Enterprise security teams should be watching this closely. The indicators of compromise from the Snowflake campaign — specific infostealer families, credential marketplace listings, characteristic login patterns — have been well documented by Mandiant and CrowdStrike. But if ShinyHunters is shifting tactics, the detection signatures that worked in 2024 may not catch the 2026 variants.
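The 2024 indicators are still worth hunting, though. A companion sketch, with the same placeholder connection as above, against Snowflake's ACCOUNT_USAGE.LOGIN_HISTORY view; it surfaces the campaign's signature pattern of successful logins that presented only a password:

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="YOUR_ACCOUNT", user="SECURITY_AUDITOR",
        password="...", role="ACCOUNTADMIN")

    cur = conn.cursor()
    cur.execute("""
        SELECT event_timestamp, user_name, client_ip, reported_client_type
        FROM snowflake.account_usage.login_history
        WHERE is_success = 'YES'
          AND first_authentication_factor = 'PASSWORD'
          AND second_authentication_factor IS NULL
        ORDER BY event_timestamp DESC
        LIMIT 100
    """)
    for ts, user, ip, client in cur:
        print(ts, user, ip, client)
    conn.close()

Unfamiliar client types or IP ranges in that output are the thread to pull; Mandiant's public UNC5537 reporting pairs this login pattern with credentials sourced from infostealer logs.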

Several things are clear from the latest developments. First, the arrest of individual members hasn’t dismantled ShinyHunters as an operational entity. The group functions more like a brand or franchise than a traditional criminal organization. Second, the fundamental attack vector — credential theft via infostealers, followed by cloud account takeover — remains viable and lucrative. Third, enterprises that haven’t implemented MFA universally across all cloud services, including service accounts and legacy integrations, remain exposed.

And fourth, the stolen data from previous breaches continues to circulate. The information taken from AT&T, Ticketmaster, and other Snowflake victims didn’t disappear when arrests were made. It’s still out there, being resold, recombined, and used for secondary attacks like targeted phishing and identity fraud.

The cybersecurity industry has spent years emphasizing identity as the new perimeter. ShinyHunters is proof that this isn’t just a marketing slogan. It’s an operational reality that too many organizations still haven’t internalized. When a loose collective of young hackers can compromise 165 enterprise cloud accounts and steal records on hundreds of millions of people using nothing more sophisticated than purchased credentials and a lack of MFA, the problem isn’t exotic. It’s fundamental.

For now, the security community watches and waits for independent verification of ShinyHunters’ latest claims. If the data samples prove authentic, expect another wave of breach notifications, regulatory scrutiny, and difficult conversations in boardrooms about why, after everything that happened with Snowflake, the same basic failures keep producing the same catastrophic outcomes.

Some lessons, apparently, require more than one teaching.



from WebProNews https://ift.tt/8jclK51

Tuesday, 17 March 2026

Disney Built a Walking Olaf Robot in Four Months — And It’s Just the Beginning

Disney Imagineering has built a free-roaming, bipedal Olaf robot that walks, talks, and interacts with guests autonomously. No tracks. No tethers. No puppeteer behind a curtain. Just a snowman wandering around like he owns the place.

The project, first revealed at Disney’s recent showcase and covered extensively by TechRadar, represents one of the most ambitious deployments of character robotics ever attempted by the company. Disney Imagineering’s R&D team took roughly four months to go from concept to a functional walking prototype — a timeline that stunned even people inside the organization.

The Olaf robot doesn’t just shuffle forward on a flat surface. It walks with a naturalistic gait, maintains balance, and can operate in the unpredictable environment of a theme park where children run up to it, the ground isn’t perfectly level, and interactions are unscripted. That’s a massive engineering challenge. Boston Dynamics has spent years perfecting bipedal locomotion with Atlas, and even their robots occasionally eat dirt. Disney’s version has to do all of that while also staying in character.

And staying in character is the whole point.

Scott LaValley, a senior R&D Imagineer, described the vision plainly: Disney wants to populate its parks with autonomous characters that guests recognize and love. Not stationary animatronics bolted to a stage. Walking, breathing, reacting characters that roam freely. Think less Hall of Presidents, more Westworld — minus the existential dread.

The technical stack behind Olaf combines several disciplines. Bipedal robotics handles the locomotion. Computer vision and sensor arrays let the robot perceive its environment and avoid obstacles, including small children who will inevitably try to hug it. Natural language processing powers real-time conversations. And a behavioral AI layer ensures the robot acts like Olaf — warm, slightly clueless, obsessed with summer — rather than a generic chatbot on legs.

Four months of work. That’s the part that should get the robotics industry’s attention.

Disney hasn’t disclosed every technical detail, but the speed of development suggests the team built on top of significant prior research. Disney Research has published papers on bipedal locomotion, expressive robot movement, and human-robot interaction for years. The Olaf project appears to be where many of those threads converge into a single consumer-facing product. It’s the difference between publishing a paper and shipping a product — and Disney seems intent on shipping.

The business implications are significant. Theme parks anchor Disney’s Experiences division, its most profitable segment, which generated more than $34 billion in revenue in fiscal 2024 according to the company’s own earnings reports. Autonomous character robots could reduce labor costs for character meet-and-greets, extend operating hours, and create entirely new attraction formats. A character that can walk alongside you through a themed land isn’t just a photo op. It’s an experience that justifies premium ticket pricing.

But the challenges are real. Safety is the obvious one — a bipedal robot falling onto a child would be a PR and legal catastrophe. Disney’s engineers have reportedly built in extensive failsafes, including the ability for the robot to safely lower itself to the ground if it detects instability. There’s also the uncanny valley problem. Olaf, as a snowman, sidesteps this neatly — nobody expects photorealistic human movement from a character made of snow. It’s a smart choice for a first deployment.

Other companies are watching closely. Universal, which is opening Epic Universe in Orlando this year, has invested heavily in immersive experiences but hasn’t announced anything comparable in autonomous character robotics. Tesla’s Optimus humanoid robot grabs headlines but remains far from consumer deployment. Disney’s advantage is that it doesn’t need a general-purpose humanoid. It needs specific characters doing specific things in controlled environments. That’s a much more tractable problem.

So what comes next? Disney Imagineering has signaled that Olaf is a proof of concept, not a one-off stunt. The team envisions parks where multiple characters roam simultaneously, interacting with guests and each other. Imagine walking through Galaxy’s Edge and encountering a droid that actually follows you to your next ride. Or a Groot that waves at your kid from across the courtyard.

The technology isn’t limited to bipedal robots either. Disney has also shown progress on quadruped and other non-humanoid form factors, which could bring characters like Simba or Pascal to life in ways that costumes never could.

The broader signal here matters. Disney is treating robotics not as a novelty but as core infrastructure for the next generation of its parks. The four-month development cycle for Olaf suggests the company has built internal tooling and frameworks that can accelerate future character deployments. If that’s true, the gap between Disney and every other themed entertainment company just got wider.

One walking snowman doesn’t change an industry overnight. But it does show where the money and the engineering talent are headed. Disney isn’t just building robots. It’s building the future of how people interact with fictional characters in physical space. And it built the first version in four months.



from WebProNews https://ift.tt/dnLK14j

WhatsApp Is Building Guest Chats for People Without Accounts — Here’s What That Means

WhatsApp is developing a feature that would let people participate in chats without needing a WhatsApp account. That’s a significant departure from how the platform has operated for over 15 years.

The feature, spotted by 9to5Mac, is currently in development and hasn’t been officially announced by Meta. But the implications are substantial — both for WhatsApp’s 2+ billion existing users and for the broader messaging market.

What Guest Chats Actually Look Like

Based on the report, WhatsApp is working on a system where non-users can be invited into conversations through a link or invitation mechanism. Think of it like a guest pass. Someone without the app installed — or at least without a registered account — could join a chat thread, participate in the conversation, and presumably leave when they’re done.

The details are still emerging. We don’t yet know whether guest participants would have access to the full range of WhatsApp features — voice messages, file sharing, reactions — or a stripped-down text-only experience. We also don’t know whether end-to-end encryption, WhatsApp’s signature security feature, would extend to guest participants in the same way it covers registered users.

That encryption question matters. A lot.

WhatsApp has built its brand on privacy guarantees. If guest chats compromise that in any way, the backlash from privacy advocates will be swift. But if Meta has found a way to maintain encryption while opening the door to unregistered participants, that’s a genuine technical achievement worth examining once the feature ships.

Why This Matters for Businesses and Growth

The business angle here is obvious. WhatsApp Business has become a major revenue driver for Meta, particularly in markets like India, Brazil, and Indonesia where the app functions as essential commercial infrastructure. Businesses use it for customer support, order confirmations, appointment scheduling, and direct sales.

But there’s always been a friction point: the customer needs a WhatsApp account. That requirement filters out a segment of potential interactions — older users who haven’t set up the app, people using basic phones, or simply anyone who doesn’t want yet another messaging account. Guest chats could eliminate that barrier entirely.

Consider a small business in São Paulo that currently handles customer inquiries through WhatsApp. Right now, if a potential customer doesn’t have the app, that interaction doesn’t happen — or it moves to email, phone, or SMS, all of which are less integrated with WhatsApp Business’s tools. Guest access changes the math. Every potential customer becomes reachable through WhatsApp’s infrastructure, whether they’ve committed to the platform or not.

And for Meta’s advertising ambitions, more people flowing through WhatsApp — even temporarily — means more data signals, more engagement metrics, and more opportunities to convert guests into full users.

So this isn’t just a convenience feature. It’s a growth strategy.

The competitive implications are worth noting too. Telegram has long allowed a degree of openness through its public channels and groups, and iMessage’s tight integration with SMS means Apple users can message anyone regardless of platform. WhatsApp, by contrast, has been a walled garden. You’re either in or you’re out. Guest chats represent a crack in that wall — an intentional one.

There’s also the regulatory dimension. The EU’s Digital Markets Act has been pushing large messaging platforms toward interoperability. Meta has been working on making WhatsApp interoperable with other messaging services as required by the DMA. Guest chats could be a parallel move — not interoperability in the strict regulatory sense, but a philosophical shift toward openness that aligns with the direction regulators are pushing.

Or it could be entirely unrelated. Hard to say without official commentary from Meta.

From a product design standpoint, the implementation challenges are nontrivial. How do you verify a guest’s identity? How do you prevent spam and abuse from anonymous participants? WhatsApp has spent years fighting spam through phone number verification — guest access potentially undermines that entire framework.

Expect some form of rate limiting, link expiration, or host-controlled permissions. The most likely model mirrors how platforms like Slack handle guest accounts: limited access, time-bound participation, controlled by the person who issued the invitation. WhatsApp could implement something similar, giving existing users the power to invite guests into specific conversations while maintaining control over who stays and for how long.
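Nothing about the implementation has been announced, but a minimal sketch makes the Slack-style model concrete. Everything below is hypothetical; the names, fields, and defaults are invented purely to illustrate a host-issued, time-boxed, single-use invite token.

    import secrets
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone

    @dataclass
    class GuestInvite:
        """Hypothetical host-issued pass into a single conversation."""
        chat_id: str
        issued_by: str  # the registered user who created the invite
        token: str = field(default_factory=lambda: secrets.token_urlsafe(16))
        expires_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc) + timedelta(hours=24))
        max_uses: int = 1  # single-use by default, one obvious lever against spam
        uses: int = 0
        permissions: frozenset = frozenset({"send_text", "read_history"})

        def redeem(self) -> bool:
            """Admit a guest only while the invite is unexpired and unspent."""
            if datetime.now(timezone.utc) >= self.expires_at or self.uses >= self.max_uses:
                return False
            self.uses += 1
            return True

    invite = GuestInvite(chat_id="orders-4412", issued_by="host-user-id")
    print(invite.redeem())  # True: the first guest gets in
    print(invite.redeem())  # False: single-use, so a second redemption is refused

The real design decisions live in those defaults: how long a token lasts, what a guest may do, and whether the host can revoke access mid-conversation.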

The Bigger Picture

Meta has spent years trying to monetize WhatsApp more aggressively without alienating its massive user base. The company has tried and abandoned several approaches — remember the short-lived plan to put ads in WhatsApp Status? — and has settled on WhatsApp Business APIs and click-to-chat ads on Facebook and Instagram as its primary revenue mechanisms.

Guest chats fit neatly into that strategy. They lower the barrier to entry for commercial interactions, potentially increasing the volume of business conversations flowing through WhatsApp’s paid infrastructure. More conversations, more API calls, more revenue.

But there’s a user experience tension here. WhatsApp’s simplicity has been its greatest asset. It’s the messaging app your grandmother can use. Adding guest functionality introduces complexity — new permissions, new privacy settings, new potential for confusion. Meta will need to implement this carefully to avoid cluttering an interface that billions of people rely on daily.

The feature is still in development, and there’s no confirmed timeline for a public release. Features spotted in development don’t always ship — WhatsApp has shelved plenty of ideas over the years. But the strategic logic behind guest chats is strong enough that some version of this is likely to reach users eventually.

For businesses already invested in WhatsApp as a communication channel, this is worth watching closely. For competitors like Telegram, Signal, and even traditional SMS providers, it’s a signal that WhatsApp intends to expand its reach beyond its existing user base — not by convincing more people to sign up, but by making sign-up optional.

That’s a fundamentally different approach to growth. And if it works, expect other messaging platforms to follow.



from WebProNews https://ift.tt/CwoI2Vl

Monday, 16 March 2026

Mistral Launches LeanStral: Compressed AI Models That Run Faster and Cheaper Without Sacrificing Much Accuracy

Mistral AI just dropped something that should get the attention of every engineering team running inference at scale. The Paris-based AI company has introduced LeanStral, a new family of compressed models designed to deliver near-original accuracy at significantly lower computational cost. The pitch is simple: same intelligence, smaller footprint, faster responses, lower bills.

LeanStral applies structured pruning and quantization techniques to Mistral’s existing model lineup, producing lighter variants that retain the vast majority of their parent models’ capabilities. The initial release includes compressed versions of Mistral Large and Mistral Small, with Mistral claiming the leaner models achieve two to three times faster inference while maintaining over 95% of the original models’ benchmark performance. That’s a compelling tradeoff for production environments where latency and cost matter as much as raw capability.

The timing isn’t accidental.

Enterprise AI adoption has hit a wall that has little to do with model quality. It’s about economics. Running large language models in production is expensive — GPU costs, energy consumption, and infrastructure complexity all compound quickly. Companies like Meta, Google, and OpenAI have been racing to make their models more efficient, but Mistral is making compression a first-class product rather than an afterthought. And they’re doing it with models that are already popular among developers who prefer open-weight alternatives to closed APIs.

So how does it actually work? LeanStral uses a combination of techniques. Structured pruning removes entire neurons, attention heads, or layers that contribute least to model performance, rather than zeroing out individual weights. This produces models that are genuinely smaller in architecture, not just sparse. On top of that, Mistral applies quantization — reducing the precision of numerical representations from, say, 16-bit floating point to 8-bit or even 4-bit integers. The combination yields models that need less memory, less compute, and less time per token generated.
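Mistral hasn’t published its pipeline’s internals, but the quantization half of that recipe is easy to illustrate generically. The sketch below shows textbook symmetric 8-bit quantization plus a toy structured-pruning criterion applied to a mock weight matrix; it illustrates the techniques in general, not Mistral’s implementation.

    import numpy as np

    def quantize_int8(weights: np.ndarray):
        """Symmetric per-tensor quantization: map floats onto int8 in [-127, 127]."""
        scale = np.abs(weights).max() / 127.0  # one scale factor for the whole tensor
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover an approximation of the original weights at inference time."""
        return q.astype(np.float32) * scale

    def prune_rows(weights: np.ndarray, keep: int) -> np.ndarray:
        """Toy structured pruning: keep the `keep` rows with the largest L2 norm,
        so the matrix genuinely shrinks rather than merely becoming sparse."""
        idx = np.argsort(np.linalg.norm(weights, axis=1))[-keep:]
        return weights[np.sort(idx)]

    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.02, size=(1024, 1024)).astype(np.float32)  # mock layer weights
    pruned = prune_rows(w, keep=768)   # drop a quarter of the rows outright
    q, scale = quantize_int8(pruned)   # then store what's left in 8 bits
    err = np.abs(pruned - dequantize(q, scale)).mean()
    print(f"storage: {q.nbytes / w.nbytes:.0%} of the fp32 original; mean abs error {err:.1e}")

Production pipelines choose what to prune using calibration data and sensitivity analysis rather than raw weight norms, which is exactly where first-party knowledge of the architecture should pay off.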

The results Mistral is reporting look strong. On standard benchmarks like MMLU, HumanEval, and GSM8K, the LeanStral variants reportedly score within a few percentage points of their full-size counterparts. The compressed version of Mistral Large, for instance, is said to fit comfortably on hardware configurations that would struggle with the original. That opens deployment possibilities on smaller GPU setups and edge devices — exactly where many enterprises want to run inference but can’t justify the infrastructure.

This matters for a specific reason. The AI industry is splitting into two distinct phases. Phase one was about building the biggest, most capable models possible. Phase two is about making those models practical to deploy everywhere. LeanStral is squarely a phase-two product.

Mistral isn’t alone in pursuing compression. NVIDIA has invested heavily in TensorRT-LLM optimizations. Hugging Face has championed quantized model formats like GPTQ and AWQ through its community. Startups like Neural Magic have built entire businesses around sparse inference. But Mistral’s approach is different in one key respect: the compression is done by the same team that trained the original models. That means the pruning and quantization decisions are informed by deep knowledge of the architecture’s internals, not applied as a generic post-hoc optimization. The result, at least in theory, should be higher-quality compressed models than what third parties can produce independently.

For developers already using Mistral’s API, LeanStral models will be available through the same endpoints with lower per-token pricing. For self-hosted deployments, the compressed weights will be downloadable. Mistral is positioning this as a way to serve more users with the same hardware budget — or the same users with a smaller one.

There’s a broader strategic angle here too. Mistral has been aggressively positioning itself as Europe’s answer to OpenAI and Anthropic, raising over €1 billion in funding and securing partnerships with major cloud providers including Microsoft Azure and Google Cloud. But competing on model size alone is a losing game when your rivals have tens of billions in compute budgets. Competing on efficiency is smarter. If Mistral can offer models that are 80% as capable as GPT-4 at 30% of the cost, that’s a value proposition many CTOs will take seriously.

Not everything is rosy. Compression always involves tradeoffs. A few percentage points of benchmark degradation might not matter for chatbots or summarization tasks, but it could be significant for code generation, mathematical reasoning, or domain-specific applications where precision is non-negotiable. Mistral acknowledges this implicitly by publishing detailed benchmark comparisons, but real-world performance on proprietary datasets will be the true test. Enterprises will need to run their own evaluations.

The open-weight angle deserves attention. Unlike OpenAI’s closed models, Mistral’s compressed variants can be inspected, fine-tuned, and deployed on-premise. That’s a major selling point for regulated industries — finance, healthcare, defense — where data sovereignty requirements make API-only access a non-starter. A smaller, faster model that runs locally on modest hardware is exactly what these sectors have been asking for.

And the competitive pressure is real. Meta’s Llama 3.1 models already come in multiple sizes. Google’s Gemma models target the efficiency-conscious developer. Apple recently released OpenELM with a focus on on-device inference. Every major player is converging on the same insight: the next wave of AI deployment won’t be won by whoever has the biggest model. It’ll be won by whoever makes capable models easiest and cheapest to run.

Mistral’s bet with LeanStral is that systematic, first-party compression is the fastest path to that goal. Early benchmarks support the thesis. But benchmarks aren’t production, and production is where compression artifacts — subtle degradations in output quality, unexpected failure modes on edge cases — tend to surface. The AI community will stress-test these models quickly.

One thing is clear. The era of “bigger is always better” in AI is giving way to something more nuanced. LeanStral is Mistral’s clearest signal yet that it’s building for the companies that need to ship AI products today, not just demo them. Faster inference, lower costs, same API. That’s the pitch. Whether it holds up under real workloads will determine if this becomes a template the rest of the industry follows.



from WebProNews https://ift.tt/xKvCNSm

Sunday, 15 March 2026

Biological Data Centers: Startups Are Building Computers Powered by Human Brain Cells

A new class of data centers doesn’t run on silicon. It runs on human neurons.

Several startups are now developing computing systems built around organoids — lab-grown clusters of human brain cells — arguing that biological processors could dramatically reduce the energy consumption that’s crippling the AI industry’s expansion. The concept sounds like science fiction. It isn’t. And the money flowing into it suggests serious people are taking it seriously.

Futurism reported that companies including Cortical Labs, FinalSpark, and Brainchip are pursuing biocomputing architectures that use living neurons as processing units. The logic is straightforward: the human brain operates on roughly 20 watts of power — about what it takes to run a dim light bulb — while performing cognitive tasks that the most advanced AI systems require megawatts to approximate. That efficiency gap represents an enormous opportunity.
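For a rough sense of scale, a one-line back-of-envelope comparison; the cluster wattage below is an assumed order of magnitude for a large AI training site, not a figure from the report.

    # 20 W for the brain is from the article; 10 MW for an AI training cluster
    # is an assumed order of magnitude, used only to size the gap.
    brain_watts = 20
    cluster_watts = 10_000_000
    print(f"power gap: roughly {cluster_watts // brain_watts:,}x")  # ~500,000x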

FinalSpark, a Swiss startup, has already built what it calls the Neuroplatform, a system that keeps human brain organoids alive and uses them to perform basic computational tasks. The organoids, each containing tens of thousands of neurons, are maintained in microfluidic environments that supply nutrients and remove waste. Electrodes interface with the living tissue to send and receive signals. It’s crude compared to a modern GPU cluster. But the power consumption is almost negligible.

The timing isn’t accidental.

AI’s energy problem has become impossible to ignore. The International Energy Agency projected that data center electricity consumption could double by 2026, driven largely by AI workloads. Goldman Sachs estimated that a single ChatGPT query uses roughly ten times the electricity of a Google search. Tech giants are restarting nuclear plants, signing unprecedented power purchase agreements, and still struggling to secure enough energy for planned facilities. Against that backdrop, a technology that could process information at a fraction of the energy cost commands attention — even if it’s years from practical deployment.

Cortical Labs, based in Melbourne, demonstrated in 2022 that a dish of human neurons could learn to play Pong. The research, published in the journal Neuron, showed that biological neural networks could adapt their behavior in response to electrical feedback — essentially learning from their environment without being explicitly programmed. The company has since raised funding to scale this approach toward more complex tasks.

Not everyone is convinced the technology can bridge the gap between laboratory curiosity and industrial application. Growing and maintaining living tissue at scale introduces problems that semiconductor manufacturers never face. Organoids die. They’re sensitive to temperature, contamination, and nutrient supply. And the interface between biological tissue and electronic systems remains primitive — reading and writing signals to neurons with anything approaching the precision of digital circuits is an unsolved engineering challenge.

There are also questions no one has fully answered about what these organoids experience. Ethicists have raised concerns about whether brain organoids could develop some form of consciousness or sensation as they grow more complex. A 2024 report from the National Academies of Sciences, Engineering, and Medicine recommended establishing oversight frameworks for organoid research, acknowledging that current ethical guidelines haven’t kept pace with the science. So the industry may face regulatory friction before it faces technical limits.

Still, the trajectory is clear. FinalSpark claims its biological processors are already up to a million times more energy-efficient than traditional silicon chips for certain operations. That figure deserves scrutiny — lab benchmarks rarely survive contact with real-world conditions — but even if the actual advantage is orders of magnitude smaller, the implications for sustainable computing would be significant.

And the applications being discussed go beyond just efficiency. Proponents argue that biological neural networks could excel at pattern recognition, sensory processing, and adaptive learning in ways that digital architectures struggle with despite massive parameter counts. The brain doesn’t just process information efficiently. It processes it differently — using analog signals, massively parallel connections, and mechanisms we still don’t fully understand.

Investment is accelerating. Cortical Labs secured $10 million in funding in 2023. FinalSpark has opened remote access to its Neuroplatform for researchers worldwide. Other players are entering the space, though most remain in stealth. The U.S. Department of Defense has also expressed interest in biocomputing for edge applications where power constraints are severe.

The practical timeline? Long. We’re talking about a technology that can barely play a video game from 1972. Scaling from thousands of neurons to the billions required for meaningful computation presents challenges that no one has a clear roadmap for solving. But the same was true of digital computing in the 1940s, when ENIAC filled a room and could do less than what a modern calculator handles.

What matters now is that the fundamental proof of concept exists. Living neurons can compute. They can learn. They can do it on almost no power. The engineering problems are enormous, the ethical questions are real, and the commercial viability is unproven. But the AI industry’s insatiable appetite for energy has created a problem urgent enough to make biological computing look less like a fringe bet and more like a necessary frontier.



from WebProNews https://ift.tt/3D9cftX

FCC Chair Brendan Carr’s License Threats Over Iran Coverage Signal a New Era of Government Pressure on Broadcast Media

Federal Communications Commission Chairman Brendan Carr has escalated his campaign against broadcast networks, this time threatening the licenses of stations that aired coverage he deemed insufficiently supportive of the Trump administration’s handling of Iran-related developments. The move represents the latest and perhaps most aggressive step yet in a pattern of regulatory intimidation that has alarmed First Amendment advocates, media executives, and constitutional scholars across the political spectrum.

Carr, who was appointed by President Donald Trump, has increasingly wielded the FCC’s licensing authority as a cudgel against media organizations whose editorial choices conflict with the administration’s preferred narratives. According to Business Insider, the FCC chairman specifically targeted broadcast outlets over their coverage of U.S. policy toward Iran, suggesting that certain reporting could jeopardize their ability to operate on the public airwaves.

A Pattern of Regulatory Pressure That Goes Beyond Precedent

The threat against broadcasters over Iran coverage did not emerge in a vacuum. Since assuming the chairmanship, Carr has repeatedly signaled that the FCC would take a more interventionist approach to broadcast content than any of his predecessors in modern memory. Under longstanding FCC practice, license renewals have been treated as largely routine proceedings, with revocations reserved for the most egregious technical or legal violations — not editorial disagreements with the sitting administration.

Yet Carr has turned the license renewal process into something far more politically charged. As Business Insider reported, his public statements have drawn explicit connections between specific news coverage and the potential loss of broadcast licenses, a linkage that previous FCC chairs — both Republican and Democratic — studiously avoided. The implication is clear: networks that produce coverage the administration finds objectionable do so at their own regulatory peril.

The Legal Framework: What the FCC Can and Cannot Do

The FCC’s authority over broadcast licensees is rooted in the Communications Act of 1934, which grants the commission the power to issue, renew, and revoke licenses for use of the public airwaves. The standard for renewal is whether a station has served the “public interest, convenience, and necessity.” Historically, this standard has been interpreted broadly, and outright license denials have been exceedingly rare.

The First Amendment complicates any attempt by the FCC to punish broadcasters for their editorial content. While broadcast media have traditionally received somewhat less First Amendment protection than print media — a distinction rooted in the Supreme Court’s 1969 Red Lion Broadcasting Co. v. FCC decision — the government is still prohibited from engaging in content-based regulation of the press. Legal scholars have argued that threatening license revocation over specific news stories crosses a constitutional line that even the reduced protections afforded to broadcasters do not permit.

Industry Reaction: Fear, Defiance, and Self-Censorship

Inside the broadcast industry, the reaction to Carr’s threats has been a mixture of public defiance and private anxiety. Network executives, speaking on background, have described an atmosphere of uncertainty that has begun to affect editorial decision-making. Some newsroom leaders have acknowledged that the specter of license challenges has prompted more cautious coverage of topics the administration has flagged as sensitive — a chilling effect that media advocates say is precisely the point of Carr’s public statements.

Major media trade organizations have pushed back forcefully. The National Association of Broadcasters has reiterated its position that the FCC should not use its licensing authority to influence news coverage. Press freedom organizations, including the Reporters Committee for Freedom of the Press, have warned that Carr’s approach represents a fundamental threat to the independence of American journalism. “When the government starts telling broadcasters what they can and cannot report, we have crossed into territory that the founders of this country explicitly sought to prevent,” one press freedom advocate told reporters.

The Iran Coverage That Sparked the Latest Confrontation

The specific Iran-related coverage that drew Carr’s ire involved reporting on the Trump administration’s diplomatic and military posture toward Tehran. Several broadcast networks aired segments that included critical analysis of the administration’s strategy, featured commentary from former officials who questioned the approach, and reported on potential consequences of escalation. Carr characterized some of this coverage as misleading and suggested it could constitute a failure to serve the public interest — the legal standard that governs broadcast license renewals.

Critics of Carr’s position have noted that the coverage in question fell well within the bounds of standard journalistic practice. Reporting that includes critical perspectives on government policy is not only permissible but is widely regarded as a core function of a free press. The notion that presenting viewpoints at odds with the administration’s position could endanger a broadcaster’s license has been described by legal experts as both unprecedented and constitutionally suspect.

How This Fits Into the Broader Administration Strategy

Carr’s actions at the FCC are part of a wider pattern of the Trump administration using regulatory and legal mechanisms to pressure media organizations. The administration has pursued or threatened legal action against several major news outlets, and Trump himself has repeatedly called for investigations into networks whose coverage he dislikes. The FCC’s licensing power gives the administration a uniquely potent tool in this effort, because broadcast stations — unlike cable networks, newspapers, or digital media — require government permission to operate.

This dynamic has created a two-tier system in American media, where broadcast outlets face a form of government oversight that their competitors in cable, print, and digital do not. Some analysts have argued that this disparity is increasingly anachronistic in an era when most Americans consume news through platforms that fall outside the FCC’s jurisdiction. Nevertheless, broadcast television remains a significant source of news for millions of Americans, and the threat of license revocation carries enormous financial and operational consequences for station owners.

Constitutional Scholars Sound the Alarm

The legal community’s response to Carr’s threats has been notably bipartisan. Conservative and liberal constitutional scholars alike have expressed concern that using licensing authority to influence editorial content represents a dangerous expansion of government power over the press. Floyd Abrams, one of the nation’s foremost First Amendment attorneys, has previously warned that such tactics, if left unchecked, could fundamentally alter the relationship between the government and the media in the United States.

Some legal experts have suggested that affected broadcasters could challenge any adverse licensing action in federal court, where they would likely argue that the FCC’s actions constitute unconstitutional content-based regulation of speech. Such a case could potentially reach the Supreme Court and force a reexamination of the legal framework governing broadcast regulation — a framework that many scholars believe is overdue for modernization.

What Comes Next for Broadcasters and the FCC

For now, the broadcast industry finds itself in an uncomfortable position. No licenses have actually been revoked, and it remains unclear whether Carr’s threats will translate into formal regulatory action. But the mere possibility has introduced a new variable into the calculations of every broadcast newsroom in the country. Editors and producers must now weigh not only the journalistic merits of a story but also the potential regulatory consequences of airing it — a consideration that would have been unthinkable just a few years ago.

The situation also raises questions about the future of the FCC itself. If the commission’s licensing authority can be wielded as a political weapon, the implications extend far beyond any single administration or any single set of news stories. The precedent being set — or at least being tested — by Carr’s approach could reshape the relationship between the federal government and broadcast media for years to come. Whether the courts, Congress, or the industry itself will push back with sufficient force to prevent that outcome remains an open question, and one that carries significant consequences for the future of press freedom in America.

As the standoff between the FCC and the broadcast industry continues, one thing is clear: the traditional boundaries that separated government regulation from editorial independence are under more strain than at any point in recent memory. The outcome of this confrontation will say a great deal about the durability of the constitutional protections that have long defined the American media system.



from WebProNews https://ift.tt/joiLIYt

Saturday, 14 March 2026

Adobe’s $4.95 Million Settlement Over Hidden Cancellation Fees Exposes the Dark Economics of Subscription Traps

Adobe will pay $4.95 million to settle a federal lawsuit that accused the software giant of burying early termination fees deep within its subscription sign-up process — charges that hit consumers with hundreds of dollars when they tried to cancel plans they didn’t fully understand they’d committed to. The settlement, announced in late June 2025, resolves a case brought by the U.S. Department of Justice on behalf of the Federal Trade Commission, and it marks one of the most significant enforcement actions yet against a major tech company over so-called “dark patterns” in subscription design.

The case dates back to June 2024, when the DOJ filed a complaint alleging that Adobe used deceptive practices to enroll customers in its most lucrative subscription plan — an annual commitment billed monthly — without clearly disclosing the early termination fee (ETF) that kicked in if users tried to cancel before 12 months elapsed. That fee could reach 50% of the remaining subscription cost, sometimes totaling hundreds of dollars. According to the government’s filing, Adobe hid the ETF disclosure behind optional text boxes and hyperlinks during the sign-up flow, ensuring that most consumers never saw it, as reported by CNET.
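For concreteness, here is the fee arithmetic on illustrative numbers. The 50%-of-remaining-balance formula is as described in the government’s filing; the monthly price below is hypothetical.

    def early_termination_fee(monthly_price: float, months_elapsed: int,
                              term_months: int = 12, rate: float = 0.50) -> float:
        """ETF as alleged: 50% of the balance left on the annual commitment."""
        remaining = max(term_months - months_elapsed, 0)
        return rate * remaining * monthly_price

    # A hypothetical $59.99/month plan cancelled three months into the year:
    print(f"${early_termination_fee(59.99, months_elapsed=3):.2f}")  # $269.96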

The numbers tell a damning story. Adobe’s Creative Cloud subscriptions — which include Photoshop, Illustrator, Premiere Pro, and dozens of other professional tools — generate billions in annual recurring revenue. The annual plan billed monthly was the default option presented to new subscribers, and the FTC alleged that Adobe steered customers toward it precisely because it locked them in. Customers who later discovered the ETF and complained were often routed through a convoluted cancellation process designed to retain them. Some reported being transferred between multiple agents, offered temporary discounts, or simply having their cancellation requests ignored.

Under the settlement’s terms, Adobe must pay the $4.95 million penalty and fundamentally restructure how it presents subscription terms to new customers. The company is now required to clearly and conspicuously disclose the existence and amount of early termination fees before a consumer completes enrollment. No more burying the terms in collapsed text or behind hyperlinks. Adobe must also simplify its cancellation process, making it possible for subscribers to cancel through the same digital channels they used to sign up — without being subjected to excessive retention tactics.

Adobe, for its part, did not admit wrongdoing. A company spokesperson told CNET that Adobe had already made changes to its sign-up and cancellation flows before the settlement, and that the company is “committed to ensuring a transparent experience” for its subscribers. The statement was carefully worded, stopping well short of acknowledging the practices described in the complaint.

That’s a familiar posture for companies caught in the FTC’s crosshairs. And it raises a question that industry watchers have been asking with increasing urgency: how many other subscription-based software companies are running essentially the same playbook?

The answer, according to consumer advocates, is a lot of them.

The FTC has been escalating its crackdown on subscription traps for several years. In 2021, the agency issued an enforcement policy statement warning companies that failing to provide simple cancellation mechanisms could violate federal law. Then in 2023, the FTC proposed its “click-to-cancel” rule, which would require that canceling a subscription be as easy as signing up for one. That rule was finalized in late 2024, though legal challenges from industry groups have slowed its full implementation.

The Adobe case fits squarely within this broader regulatory push. But it also stands out because of the company’s market position. Adobe isn’t some fly-by-night subscription box or obscure streaming service. It’s a $200 billion company whose products are standard-issue tools for photographers, designers, filmmakers, marketers, and virtually anyone who works in creative fields. Its shift from perpetual software licenses to a subscription-only model, completed in 2013, was widely studied in business schools as a masterclass in recurring revenue strategy. What the FTC complaint revealed is that the strategy’s success depended, at least in part, on making it very hard to leave.

The early termination fee itself wasn’t illegal. Plenty of companies charge them. The issue was disclosure — or the lack of it. The government alleged that during Adobe’s online enrollment process, the annual-plan-billed-monthly option was presented as the default, with the monthly price prominently displayed. The fact that the customer was committing to a 12-month contract, and that canceling early would trigger a fee equal to half the remaining balance, was disclosed only in fine print that required affirmative clicks to reveal. Most people don’t click. Adobe knew that.

Internal communications cited in the complaint suggested that Adobe employees were aware of widespread customer frustration over the ETF. Customer service logs reportedly showed thousands of complaints from subscribers who felt blindsided by the charges. Some customers reported being charged the fee even after they believed they had successfully canceled. The complaint painted a picture of a company that had optimized its sign-up funnel for conversion while systematically under-investing in transparency.

The $4.95 million penalty is, by any reasonable measure, modest relative to Adobe’s financial scale. The company reported $21.5 billion in revenue for fiscal year 2024, with the overwhelming majority coming from its Digital Media segment, which includes Creative Cloud. A fine of less than $5 million barely registers on a balance sheet that size. Critics have pointed out that the penalty likely represents a fraction of the revenue Adobe earned from the very practices the FTC challenged.

But the injunctive relief — the mandated changes to Adobe’s business practices — may prove more consequential. Requiring clear, upfront disclosure of ETFs and easy cancellation pathways could reduce subscriber lock-in and increase churn rates. For a company whose valuation is built substantially on predictable recurring revenue, even a modest uptick in cancellations could have outsized effects on investor sentiment. Adobe’s stock barely moved on the settlement news, suggesting Wall Street views the changes as manageable. Whether that assessment holds will depend on how the new disclosure requirements affect renewal rates over the next several quarters.

The settlement also includes a provision requiring Adobe to obtain express informed consent before charging the ETF. That means the company can’t simply point to terms of service that a customer technically agreed to but never read. The consent must be affirmative, specific, and separate from the general enrollment process. This is a higher bar than most subscription companies currently meet, and it could become a template for future FTC enforcement actions against other firms.

So where does this leave Adobe’s competitors? Companies like Microsoft, Autodesk, and Figma all operate subscription models with varying degrees of lock-in. Microsoft 365, for instance, offers both monthly and annual plans, but its cancellation and refund policies are generally considered more straightforward than what Adobe was offering. Autodesk has faced its own share of customer complaints about subscription pricing and cancellation difficulties, though it hasn’t attracted the same level of regulatory scrutiny. Figma, which Adobe attempted to acquire in a $20 billion deal that was abandoned in 2023 after antitrust opposition, operates on a more flexible subscription model that doesn’t impose early termination fees on most plans.

The broader software industry is watching this case closely. The subscription model has become the dominant business structure for software companies of all sizes, from enterprise giants to solo-developer SaaS tools. Recurring revenue is prized by investors because it’s predictable and compounds over time. But the Adobe settlement is a reminder that the strategies companies use to maintain that predictability — long-term commitments, auto-renewals, difficult cancellation processes, and opaque fee structures — carry regulatory risk. And that risk is growing.

Consumer advocacy groups have praised the settlement while noting its limitations. The National Consumer Law Center said the case sends a clear message that subscription companies cannot hide material terms from consumers, but argued that the financial penalty should have been larger to serve as a meaningful deterrent. The Electronic Frontier Foundation, which has been vocal about dark patterns in software design, called the settlement “a step in the right direction” but urged the FTC to pursue more aggressive remedies in future cases.

For Adobe’s millions of individual subscribers, the practical impact is already visible. The company’s current sign-up flow now displays the annual commitment and associated ETF more prominently than it did a year ago. The cancellation process has been streamlined, with fewer retention screens and a clearer path to completing a cancellation online. These changes were implemented before the settlement was finalized, likely as a strategic move to demonstrate good faith and limit the scope of any court-ordered remedies.

Still, the underlying tension hasn’t been resolved. Adobe’s most popular Creative Cloud plans remain annual commitments billed monthly, and the ETF still exists — it’s just disclosed more clearly now. Customers who want true month-to-month flexibility must choose a different plan that costs significantly more per month. The economics of the subscription are designed to push users toward the annual commitment. That’s not illegal. But it does mean the company’s incentives remain fundamentally misaligned with consumers who value flexibility.

And this is the core issue the FTC is trying to address, not just with Adobe but across the subscription economy. The agency’s position is that consumers should understand what they’re agreeing to before they agree to it, and they should be able to exit as easily as they entered. Simple principles. But implementing them in an industry that has spent two decades engineering every friction point to maximize retention is proving to be a protracted fight.

The Adobe settlement won’t be the last word on subscription transparency. It may not even be the most important one. But for an industry that has grown accustomed to treating customer inertia as a feature rather than a bug, it’s a $4.95 million reminder that regulators are paying attention — and that the cost of opacity is going up.



from WebProNews https://ift.tt/ZesCzuJ

Artemis II and the Calculus of Acceptable Risk: Why Sending Humans Back to the Moon Is a Bet NASA Can’t Afford to Lose

Four astronauts are preparing to fly around the Moon later this year. It will be the first time human beings have traveled beyond low Earth orbit since December 1972, when Apollo 17’s crew splashed down in the Pacific and the curtain fell on an era. More than half a century of silence followed. Now NASA is attempting something that sounds deceptively simple: send a crew on a loop around the Moon and bring them home. No landing. No surface operations. Just a flyby.

Don’t let the simplicity fool you.

Artemis II, currently targeting a launch later this year pending the outcome of its readiness reviews, represents one of the most consequential test flights in NASA’s history. The mission will be the first crewed flight of the Space Launch System rocket and the Orion spacecraft together — a combination that flew once before, uncrewed, during Artemis I in late 2022. That mission revealed problems. Heat shield erosion behaved in ways engineers didn’t predict. Bolts holding the heat shield’s outer layer shed unexpectedly. And the life support system, which had no humans aboard to stress-test it, remains largely unproven in the deep-space environment where Artemis II will operate.

As Ars Technica reported in a detailed examination of the mission’s risk profile, the fundamental question hanging over Artemis II isn’t whether NASA can pull it off — it’s how much risk the agency and its astronauts are actually accepting, and whether that level of risk is being communicated honestly.

The heat shield issue alone would give any engineer pause. During Artemis I’s return, Orion’s Avcoat heat shield — a material with Apollo-era heritage — experienced what NASA has described as unexpected charring patterns and loss of material. Chunks of the ablative coating came off in ways that thermal models hadn’t predicted. NASA spent more than a year investigating, ultimately attributing the anomaly to gases trapped within the heat shield material that expanded and caused pieces to liberate during the intense heating of reentry. The agency says it now understands the phenomenon and has determined that Artemis II can fly safely with the existing heat shield design, though it has committed to a redesigned heat shield for Artemis III and beyond.

That’s a significant caveat. If the heat shield design needs to be changed for future missions, the implicit admission is that the current design is not optimal. NASA’s position is that the material loss observed on Artemis I, while unexpected, did not compromise the structural integrity of the heat shield and that adequate margins remain for a crewed reentry. Engineers have run additional thermal analyses and ground tests. They believe the risk is manageable.

Belief and certainty are different things.

The crew — NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with Canadian Space Agency astronaut Jeremy Hansen — will be flying a spacecraft that has carried humans exactly zero times. Every system that interacts with a crew will be operating in its true environment for the first time. The environmental control and life support system. The crew displays and interfaces. The manual flight control capability, which the astronauts are expected to demonstrate during a segment of the mission. The waste management system. All of it untested with actual human occupants in the thermal and radiation conditions of cislunar space.

This is not unprecedented in the history of spaceflight. Apollo’s first crewed mission beyond Earth orbit was Apollo 8 in December 1968, which sent three astronauts around the Moon on a spacecraft that had only flown with crew once before, in low Earth orbit. The Saturn V rocket that carried them had launched just twice — once successfully, once with significant problems including engine failures and structural oscillations. NASA made the call to go anyway, driven by Cold War urgency and intelligence suggesting the Soviet Union might attempt a circumlunar flight first.

The risk tolerance was different then. As Ars Technica noted, some estimates placed the probability of crew loss on Apollo missions in the range of 5% per flight. NASA administrator Jim Webb reportedly believed there was a one-in-four chance of losing a crew during the program. The astronauts themselves understood this. They flew anyway.

Today’s NASA operates under a fundamentally different set of expectations. The agency’s own probabilistic risk assessments for the Space Shuttle, conducted after the Columbia disaster, put loss-of-crew probabilities at roughly 1 in 90 by the end of the program, with retrospective analyses suggesting the earliest flights were far riskier, closer to 1 in 12. For the Commercial Crew Program — SpaceX’s Crew Dragon and Boeing’s Starliner — NASA set a requirement of no worse than a 1 in 270 chance of loss of crew. The agency has not publicly released a comparable number for Artemis II.
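One reason a per-flight figure is so sensitive is that it compounds across a program. A minimal sketch, under the standard simplifying assumption that flights are independent with identical risk:

    def prob_at_least_one_loss(per_flight_loss: float, flights: int) -> float:
        """P(at least one loss of crew in `flights` missions), assuming each
        flight is independent and carries the same loss probability."""
        return 1 - (1 - per_flight_loss) ** flights

    for label, p in [("shuttle-era 1-in-90", 1 / 90),
                     ("Commercial Crew 1-in-270", 1 / 270)]:
        print(f"{label}: {prob_at_least_one_loss(p, flights=20):.1%} over 20 flights")
    # shuttle-era 1-in-90: 20.0% over 20 flights
    # Commercial Crew 1-in-270: 7.2% over 20 flights

Even at the Commercial Crew threshold, a twenty-flight program carries a meaningful chance of losing a crew, which helps explain why agencies weigh these numbers so carefully and hesitate to publish them.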

That silence is telling.

NASA officials have said publicly that they believe Artemis II’s risk level is acceptable and consistent with the agency’s standards for human spaceflight. But the specifics remain closely held. Part of the reason is institutional: publishing a precise probability invites public debate about whether that number is “safe enough,” a conversation NASA would rather have internally. Part of it is technical: the models used to generate these numbers carry their own uncertainties, and a single figure can be misleading without extensive context.

But there’s also a political dimension. Artemis is the centerpiece of NASA’s human exploration strategy. Billions of dollars have been spent. Contracts with Boeing, Lockheed Martin, Northrop Grumman, and others are deeply embedded in the industrial base. Congressional delegations in Alabama, Louisiana, Florida, Texas, and other states have strong interests in the program’s continuation. Acknowledging elevated risk — even risk that falls within historically accepted bounds — creates ammunition for critics who argue the program is too expensive, too slow, or too dangerous.

And Artemis has no shortage of critics. The SLS rocket, a government-designed and government-built vehicle derived from Space Shuttle components, costs roughly $2.5 billion per launch by most independent estimates, though NASA has resisted confirming a precise per-flight figure. It is expendable — each rocket is used once and destroyed. SpaceX’s Starship, by contrast, is designed to be fully reusable and, if it achieves its cost targets, could launch for a fraction of SLS’s price. Starship is, in fact, a critical part of the Artemis architecture: a modified version called the Human Landing System is supposed to carry astronauts from lunar orbit to the surface on Artemis III.

So NASA finds itself in the awkward position of relying on two very different vehicles built by two very different philosophies — one a government cost-plus megaproject, the other a commercial venture iterating through rapid prototyping and occasional spectacular failures — to accomplish a single goal. The tension between these approaches is real and ongoing.

None of this changes the immediate question facing the Artemis II crew and the engineers supporting them. The mission profile itself is relatively conservative by Apollo standards. Orion will launch atop SLS from Kennedy Space Center’s Pad 39B, enter a high Earth orbit, receive a trans-lunar injection burn from the SLS upper stage, coast to the Moon, perform a free-return trajectory that swings behind the lunar far side, and return to Earth for a splashdown in the Pacific. Total mission duration is approximately 10 days. No orbital insertion at the Moon. No docking with another spacecraft. No landing attempt.

The free-return trajectory is a deliberate risk-reduction choice. If the Orion spacecraft’s service module engine fails after the trans-lunar injection burn, the laws of orbital mechanics will bring the capsule back to Earth without any additional propulsive maneuver. Apollo 13 used a variant of this principle to survive its catastrophic oxygen tank explosion in 1970. It’s a built-in safety net, and it’s one of the reasons NASA chose this mission profile for the first crewed flight.

But the free-return trajectory doesn’t protect against every failure mode. A breach of the crew cabin’s pressure vessel would be fatal regardless of trajectory. A failure of the heat shield during reentry — the scenario that has drawn the most scrutiny — would be catastrophic. A loss of electrical power or life support could turn a 10-day mission into a survival scenario with very limited margins. And the radiation environment between Earth and the Moon, while generally manageable for a short-duration mission, poses a risk during solar particle events that could deliver dangerous doses to the crew if a major solar flare occurs during the transit.

Orion does carry a small radiation shelter area where crew members can huddle during a solar event, using equipment and supplies as additional shielding. NASA has studied this scenario extensively. The protection is adequate for most events but not for the most extreme solar particle storms, which are rare but not impossible. The mission is timed to avoid the predicted peak of Solar Cycle 25, though solar activity forecasting remains an imprecise science.

There’s another factor that receives less public attention: the abort options during launch and ascent. Orion carries a tower-mounted Launch Abort System in the Apollo tradition, a cluster of solid rocket motors atop the capsule that can pull it free of a failing booster during roughly the first three minutes of flight. After the tower is jettisoned, Orion relies on its own service module engine and its ability to separate from SLS to execute abort scenarios at later points in the ascent. These abort modes have been analyzed extensively but never tested in an actual emergency. The Launch Abort System has flown twice: a pad abort test at White Sands, New Mexico, in 2010, and the Ascent Abort-2 test in 2019, which pulled an uncrewed boilerplate capsule away from its booster at the point of maximum aerodynamic stress. Both worked. But an abort from a rocket that is actually failing, with a crew aboard and a full recovery sequence to fly, has never been demonstrated.

This is standard for new crewed vehicles. SpaceX’s Crew Dragon conducted an in-flight abort test in January 2020, deliberately triggering separation from a Falcon 9 at the point of maximum aerodynamic pressure. It succeeded. Boeing’s Starliner has not conducted an in-flight abort test, though its pad abort test in 2019 experienced a partial parachute deployment failure. NASA accepted the risk of flying Starliner without a dedicated in-flight abort demonstration.

Risk acceptance is, ultimately, a human decision made under uncertainty. Engineers can model failure scenarios, calculate probabilities, test components, and run simulations. But spaceflight — particularly on new vehicles — always carries unknowns that models can’t fully capture. The “unknown unknowns,” as former Defense Secretary Donald Rumsfeld once put it in a different context, are what keep flight directors awake at night.

The astronauts themselves appear to accept this reality with the equanimity characteristic of their profession. Reid Wiseman, the mission commander, is a Navy test pilot and veteran of a long-duration stay on the International Space Station. Victor Glover flew to the ISS aboard SpaceX’s Crew Dragon on its first operational mission. Christina Koch holds the record for the longest single spaceflight by a woman. Jeremy Hansen, while a spaceflight rookie, is a former CF-18 fighter pilot. These are people who have spent their careers evaluating and accepting calculated risk.

But their willingness to fly does not absolve NASA of the responsibility to be transparent about what that risk actually is. As Ars Technica’s analysis emphasized, the agency’s reluctance to discuss specific risk numbers for Artemis II stands in contrast to the relative openness it has shown about risk assessments for other programs. After Columbia, NASA published detailed probabilistic risk assessments for remaining shuttle flights. The Commercial Crew Program’s safety requirements, including the 1-in-270 loss-of-crew threshold, are public. For Artemis, the numbers are harder to find.

One reason may be that the numbers aren’t flattering. A new rocket, a spacecraft with one uncrewed test flight, a heat shield that behaved unexpectedly, life support systems untested in their operational environment, and abort modes that have never been exercised in real conditions — all of these factors push the probability of loss of crew higher than what NASA has accepted for routine ISS crew rotation flights. How much higher is the question NASA doesn’t seem eager to answer publicly.

It is worth placing this in historical context. Every first crewed flight of a new American spacecraft has carried elevated risk. Alan Shepard’s Mercury-Redstone 3 flight in 1961. Gus Grissom and John Young’s Gemini 3 in 1965. The first crewed Apollo flight, Apollo 7, in 1968 — which came after the Apollo 1 fire killed three astronauts during a ground test. The first Space Shuttle mission, STS-1, in 1981, which launched with a crew aboard a vehicle that had never flown to space at all. Doug Hurley and Bob Behnken’s Demo-2 mission on Crew Dragon in 2020. In each case, the crew and the agency accepted risk that was higher than what subsequent missions would carry, because someone has to go first.

Artemis II is that flight for the Artemis program. And the stakes extend beyond the four people in the capsule. A successful mission validates the SLS-Orion architecture, builds confidence for the far more complex Artemis III lunar landing mission, and sustains political and public support for a program that has already consumed decades and tens of billions of dollars. A failure — particularly a fatal one — would be devastating. Not just for the families of the crew, but for NASA as an institution and for the broader cause of human space exploration. The political fallout from a crew loss on Artemis II would almost certainly ground the program for years, if not permanently.

NASA knows this. The agency’s leadership, from the administrator on down, has repeatedly stated that they will not fly until they are ready and that safety is the top priority. These are the right words. The question is whether the institutional pressures — schedule, budget, political expectations, contractor relationships — create subtle incentives to declare readiness before every concern has been fully resolved.

The history of spaceflight suggests this is not a theoretical concern. The Rogers Commission found that NASA managers overrode engineering objections to launch Challenger in cold weather. The Columbia Accident Investigation Board found that organizational culture and schedule pressure contributed to the decision to fly with known foam-shedding risks. In both cases, the agency’s own internal processes failed to prevent catastrophe.

NASA has implemented significant safety reforms since Columbia, including the creation of an independent safety oversight structure and a stronger role for the chief safety officer. The Aerospace Safety Advisory Panel, an independent body that reports to Congress and the NASA administrator, has been closely monitoring Artemis development and has raised concerns about schedule pressure and workforce fatigue at various points. Whether these safeguards are sufficient to prevent the kind of normalization of risk that contributed to past disasters is something that can only be judged in retrospect.

For now, the Artemis II crew continues to train. Engineers continue to analyze data, close out action items, and prepare the hardware at Kennedy Space Center. The SLS rocket and Orion spacecraft are being stacked and tested. Review boards will convene. Flight readiness reviews will be conducted. And at some point, if all the boxes are checked and all the concerns are addressed to the satisfaction of the people responsible for the decision, four human beings will strap into a capsule atop the most powerful rocket in operation, light the engines, and head for the Moon.

It will be dangerous. How dangerous, exactly, is something NASA would prefer to discuss in qualitative rather than quantitative terms. The astronauts will trust the engineers. The engineers will trust their analysis. And the rest of us will watch, knowing that for all the technology and all the testing and all the reviews, spaceflight remains an inherently hazardous undertaking — one where the margin between triumph and tragedy can be measured in millimeters of heat shield ablator or milliseconds of reaction time.

Fifty-four years is a long time to be away from the Moon. Getting back was never going to be easy. And it was never going to be safe.



from WebProNews https://ift.tt/RKxHQjS