Wednesday, 25 March 2026

AI Security Claims Top Spot in Enterprise Budgets, Dethroning Cloud After Years of Dominance

Enterprise security leaders have reset their priorities. LLM and generative AI protection now leads planned budget growth, edging out cloud security for the first time. This shift comes from a survey of 517 executives, 80% in C-suite or director roles at Fortune 500 and Global 2000 firms, as detailed in Enterprise Technology Research’s 2026 State of Security report. Fifty-nine percent plan to boost spending here in 2026, up from 50% last year. Cloud, long the king, slips to second.

The change reflects AI’s double edge. Tools spread fast inside companies. Attacks weaponize them faster. Prompt injections. Data leaks from shadow AI. Agentic systems with email and budgets create fresh targets, as noted in recent X discussions from Stanford Blockchain Accelerator events. Security can’t lag.

And budgets follow suit. Fifty-four percent of organizations already invest in AI security tools or plan to buy them within six months. That’s a tipping point. Identity management climbs too, with calls for consolidated vendors. Vendor sprawl eases: only 35% expect more suppliers this year, down from 40% in 2025, per the ETR data. Teams want outcomes, not more logos.

But why now? Enterprises deploy AI at scale. Employees paste sensitive data into public LLMs. Over 70% of employees use gen AI at work; 80% of firms lack visibility into that use, according to Knox News coverage of InnerActiv’s platform launch. “Enterprises don’t need to block AI – they need to enable it safely,” says Ray Shealy, an InnerActiv executive. Endpoint telemetry becomes key, watching prompts and behavior where cloud views fall short.
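What might that prompt-level watching look like in practice? The toy Python sketch below shows one naive version: scan an outbound prompt for strings that resemble sensitive data before it leaves the endpoint for a public LLM. It is an illustration only; the patterns, names, and block-or-flag policy are assumptions and are not drawn from InnerActiv or any other vendor's product.

```python
import re

# Toy endpoint-side prompt check: before a prompt leaves the device for a
# public LLM, scan it for strings that look like sensitive data. The
# patterns and the flagging policy below are illustrative assumptions,
# not any vendor's actual detection logic.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarize this customer record: SSN 123-45-6789, card 4111 1111 1111 1111"
hits = check_prompt(prompt)
print("flag for review:" if hits else "allow:", hits)  # e.g. ['ssn', 'card']
```

A real deployment would go far beyond regexes, but the sketch shows where the check has to live: on the endpoint, before the data crosses the network boundary.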

Threats mount. Google Cloud’s Cybersecurity Forecast 2026 predicts an AI arms race: attackers speed up with gen AI; defenders counter via agentic SOCs. Shadow agents loom. Geopolitics and finance fuel it all. Forescout’s report, out yesterday, flags network infrastructure as the new prime target, with OT and ICS risks rising, as covered by Industrial Cyber.

Identity ties in tight. Agentic AI demands new governance. Microsoft urges AI-powered access management in its 2026 priorities blog: adaptive protection, agent management, Zero Trust fabrics. Attackers rewrite agents on the fly. Boards demand vigilance.

Gartner echoes this. Its 2026 CIO Priority report frames cybersecurity as a growth lever amid AI, cloud, and quantum threats. Vendor risk. Technical debt. Data protection. All enterprise-wide. CRN’s experts predict AI-driven vulnerabilities exploding and autonomous defense agents rising, per its top predictions.

So IT managers face choices. Prioritize AI governance. Integrate it into existing security stacks. Endpoint visibility. Prompt guards. Model scanning. Agent controls. But consolidation matters. Pick vendors that span identity, AI, and cloud. ETR shows the pullback from sprawl.

X chatter reinforces urgency. One post flags 86.8% of organizations hiking AI security budgets and 93% expanding SecOps, yet architecture lags, per @agentfolioHQ. Another from @LsaRecruit: AI security dethrones cloud; hiring must catch up. Zenity lands in Citizens Bank’s Cyber 66 report for agent security, tweeted by @zenitysec.

The risks extend beyond budgets. Shadow AI expands attack surfaces. The World Economic Forum notes that 87% of those surveyed see AI vulnerabilities as the fastest-growing threat, beating ransomware, per its analysis. Enterprises fly blind on employee AI use.

The playbook? Start with visibility. Map AI flows. Enforce governance at endpoints. Consolidate tools. Train on agent risks. Budgets swell; the 59% of organizations planning increases signals commitment. But execution decides winners. Laggards face breaches in AI pipelines, models, data.

Cloud dominated for years. Migration pains. Multi-cloud messes. Now AI leaps ahead. It’s the new frontier. Proactive defenses win. Reactive ones cost.



from WebProNews https://ift.tt/NrVd51Q

Tuesday, 24 March 2026

Protecting Your Business: Understanding Liability When Your Driver Causes an Accident

Getting a phone call that one of your fleet vehicles has been involved in a crash is a nightmare for any business owner. Your immediate concern is naturally for the safety of everyone involved. But once the dust settles and the police clear the scene, a harsh reality sets in: your company might be facing a massive financial blow.

When a company car is involved in a wreck, the injured party rarely settles for the driver’s individual insurance limits. It is almost guaranteed that the other motorist will hire a personal injury attorney to investigate the crash and seek compensation directly from your business. Knowing exactly when your company is legally liable—and, just as importantly, when it isn’t—is crucial for managing your operations and protecting your bottom line.

The Cost of Doing Business: Vicarious Liability

In the eyes of the law, employers are generally held responsible for the actions of their staff while they are on the clock. The legal doctrine is referred to as respondeat superior.

If your employee runs a red light while out on a delivery route and hits another vehicle, your business is usually on the hook for the vehicle damage and medical bills. The legal system operates on the premise that since your business profits from having that employee out on the road, your business must also bear the risks associated with their driving.

The “Scope of Employment” Defense

Despite being generally responsible for your employees, you are not automatically responsible for every single thing they do behind the wheel, even if they are driving a car with your logo on the door. “Scope of employment” is a key consideration in determining whether your company is liable for the accident.

For instance, if your sales rep drives forty miles away from their assigned territory to visit a friend and causes a wreck, you can make a strong case that they were acting entirely outside their job duties. Your company would likely not be held responsible for the accident.

Conversely, if that same rep just pulled into a nearby drive-thru to grab a quick lunch between client meetings, courts usually view that as a minor, acceptable detour. In that scenario, liability likely stays with your business. Routine commuting to and from the office is also generally excluded from company liability, unless the employee is driving a specialized fleet vehicle or is required to be strictly on-call.

The Independent Contractor Distinction

Managing liability gets a bit more complicated depending on how you classify your workforce. Generally speaking, a business is not vicariously liable for the negligent actions of an independent contractor. This is the entire foundation of the gig economy; delivery and rideshare apps avoid direct liability for most crashes by classifying their drivers as independent contractors rather than W-2 employees.

However, do not assume this classification is a magic shield for every industry. If you operate a logistics or trucking business, federal regulations severely limit your ability to push liability onto your drivers through contract wording. The Federal Motor Carrier Safety Administration (FMCSA) maintains strict rules ensuring that the motor carrier operating the truck holds ultimate responsibility for public safety, regardless of the exact employment label attached to the driver.

Direct Negligence and Fleet Oversight

Sometimes, your business isn’t just paying for the driver’s mistake—you are paying for your own poor management decisions. If an accident stems from corporate negligence, your company faces direct liability.

  • Negligent Hiring: If you hire an applicant with a history of severe driving infractions, DUIs, or suspended licenses, and they subsequently cause a wreck in a company vehicle, your business is directly at fault. You failed to conduct a basic background check.
  • Negligent Maintenance: Skipping routine brake inspections or ignoring tire replacements to save a few dollars out of the quarterly budget will backfire tremendously. If a poorly maintained company van cannot stop in time and causes a collision, your business is directly liable for failing to maintain a safe fleet.

Securing Your Operations

Dealing with a commercial vehicle crash requires a highly proactive approach. Hoping for the best is not a viable risk management strategy. Implement strict background checks before handing over any keys, require regular defensive driving training for your staff, and keep meticulous, up-to-date logs of all vehicle maintenance. By running a tight ship and clearly understanding your legal vulnerabilities, you can protect your enterprise from devastating claims and keep your business moving forward.



from WebProNews https://ift.tt/l3W6Ckn

Monday, 23 March 2026

Your Morning Espresso Might Be Keeping You Alive: Inside the Largest Genetic Study Ever Conducted on Coffee and Longevity

For decades, coffee has occupied an uneasy position in the public health conversation — celebrated one year, vilified the next, and perpetually caught between headlines that can’t seem to agree on whether your daily cup is medicine or poison. A massive new study, drawing on genetic data from more than 500,000 people, has landed squarely on one side of the debate. And the answer might make your morning ritual feel a little more virtuous.

Researchers at the Karolinska Institute in Sweden, working with collaborators across multiple institutions, have published findings in Nature Medicine that represent the most comprehensive genetic analysis ever undertaken on coffee consumption and mortality. Their conclusion: drinking coffee appears to causally reduce the risk of death from all causes — not merely correlate with it, but actively contribute to longer life. The distinction matters enormously, and it’s one that has eluded coffee researchers for years.

The study, reported by The Register, employed a technique called Mendelian randomization, which uses naturally occurring genetic variants as a kind of built-in randomized trial. The logic works like this: certain gene variants make people more likely to consume coffee. Because these variants are assigned randomly at conception, they aren’t entangled with the confounding lifestyle factors — exercise habits, diet, socioeconomic status — that have muddied observational coffee studies for generations. If people who are genetically predisposed to drink more coffee also tend to live longer, you can draw a much straighter line between the beverage and the outcome.

That’s exactly what the researchers found.

The team analyzed data from the UK Biobank, a long-running biomedical database of roughly 500,000 British participants aged 40 to 69, and cross-referenced it with genetic information from genome-wide association studies. They identified 15 genetic variants strongly associated with coffee consumption and used those as instrumental variables to estimate the causal effect of coffee on mortality. The results showed a statistically significant reduction in all-cause mortality among those genetically inclined to drink more coffee. The effect held after accounting for smoking status, alcohol intake, body mass index, and physical activity levels.
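For readers unfamiliar with the method, the sketch below shows how an inverse-variance-weighted Mendelian randomization estimate is assembled from per-variant summary statistics. The numbers are synthetic, generated under an assumed causal effect, and stand in for the study's actual UK Biobank and GWAS data, which are not reproduced here.

```python
import numpy as np

# Synthetic per-variant summary statistics (illustrative only, not study data):
# beta_x[i] = effect of variant i on coffee consumption (cups/day)
# beta_y[i] = effect of variant i on the outcome (log mortality hazard)
# se_y[i]   = standard error of beta_y[i]
rng = np.random.default_rng(1)
assumed_causal_effect = -0.05                 # assumption used to simulate data
beta_x = rng.uniform(0.02, 0.10, 15)          # 15 variants, as in the study
se_y = rng.uniform(0.002, 0.006, 15)
beta_y = assumed_causal_effect * beta_x + rng.normal(0.0, se_y)

# Wald ratio: the causal effect implied by each variant on its own
wald = beta_y / beta_x

# Inverse-variance-weighted (IVW) estimate: precision-weighted average of
# the per-variant ratios, the workhorse estimator in two-sample MR
weights = (beta_x / se_y) ** 2
ivw = np.sum(weights * wald) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))

print(f"IVW causal estimate: {ivw:.4f} (SE {ivw_se:.4f})")
```

The key design choice is that only summary statistics are needed: each variant contributes one ratio, and the genetics do the randomizing that a trial would otherwise have to.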

Previous observational studies had hinted at this relationship for years. Large-scale epidemiological research, including work published in the Annals of Internal Medicine and the New England Journal of Medicine, had repeatedly found associations between moderate coffee consumption — roughly three to five cups per day — and lower rates of death from cardiovascular disease, neurological conditions, and even certain cancers. But association is cheap. The scientific community has long demanded stronger evidence of causation before making any definitive claims. This study gets closer to that threshold than anything before it.

Dr. Erik Ingelsson, a professor of medicine at the Karolinska Institute and one of the study’s senior authors, described the findings as “the strongest evidence to date that coffee consumption is not merely a marker of a healthy lifestyle but an independent contributor to longevity.” He was careful to note that the study doesn’t establish a specific dose-response curve — it can’t tell you whether four cups is better than three, or whether there’s a ceiling beyond which benefits plateau or reverse. But the directional signal is clear.

So what’s actually in coffee that might extend life? The honest answer is that nobody knows for certain, though the candidate list is long and growing. Coffee contains more than a thousand bioactive compounds, many of which have demonstrated anti-inflammatory, antioxidant, or neuroprotective properties in laboratory settings. Chlorogenic acids, trigonelline, melanoidins, and diterpenes like cafestol and kahweol have all drawn attention from biochemists. Caffeine itself — the compound most people associate with coffee — may play a role, but it’s almost certainly not the whole story.

One of the more intriguing mechanisms involves coffee’s apparent effect on chronic low-grade inflammation, a condition sometimes called “inflammaging” that is increasingly recognized as a driver of age-related disease. Research published in Nature Medicine in earlier years demonstrated that caffeine consumption was associated with lower levels of inflammatory markers in the blood, and that this reduction corresponded with improved cardiovascular outcomes. The new Karolinska study doesn’t directly measure inflammatory biomarkers, but its findings are consistent with this hypothesis.

There’s also the liver. Coffee has been one of the most consistently protective dietary factors identified in hepatology research. A 2017 meta-analysis in BMJ Open found that drinking three to four cups of coffee per day was associated with an 18% lower risk of developing any type of liver cancer. Separate research has shown protective effects against cirrhosis, non-alcoholic fatty liver disease, and fibrosis. Given the liver’s central role in metabolism, detoxification, and immune regulation, a beverage that supports hepatic function could plausibly influence mortality through multiple downstream pathways.

Not everyone is cheering.

Critics of Mendelian randomization studies point out that the technique, while powerful, carries its own assumptions and limitations. The most significant is the so-called “pleiotropy” problem: the genetic variants used as instruments might influence mortality through pathways unrelated to coffee consumption. If a gene variant that makes someone drink more coffee also independently affects metabolism or appetite in ways that reduce mortality, the causal inference breaks down. The Karolinska researchers attempted to address this through sensitivity analyses, including MR-Egger regression and weighted median estimation, both of which are designed to detect and correct for pleiotropic effects. The results held across all methods, which strengthens the case but doesn’t eliminate the concern entirely.
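A minimal sketch of the MR-Egger check mentioned above, again on synthetic numbers rather than the study's data: regress the variant-outcome effects on the variant-exposure effects, weighting by outcome precision but allowing a non-zero intercept. The slope is the pleiotropy-adjusted causal estimate; an intercept far from zero is the warning sign of directional pleiotropy.

```python
import numpy as np

# Synthetic summary statistics, as before (illustrative only)
rng = np.random.default_rng(2)
beta_x = rng.uniform(0.02, 0.10, 15)             # variant -> coffee consumption
se_y = rng.uniform(0.002, 0.006, 15)
beta_y = -0.05 * beta_x + rng.normal(0.0, se_y)  # variant -> mortality

# MR-Egger: weighted least squares of beta_y on beta_x WITH an intercept.
# Slope = causal estimate adjusted for directional pleiotropy;
# an intercept significantly different from zero signals pleiotropy.
w = 1.0 / se_y**2
X = np.column_stack([np.ones_like(beta_x), beta_x])
W = np.diag(w)
intercept, slope = np.linalg.solve(X.T @ W @ X, X.T @ W @ beta_y)

print(f"MR-Egger intercept (pleiotropy test): {intercept:+.4f}")
print(f"MR-Egger slope (causal estimate):     {slope:+.4f}")
```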

There’s a second, more practical limitation. The UK Biobank population skews white, British, and middle-aged. Whether these findings generalize to other ethnic groups, younger populations, or people in different dietary contexts remains an open question. Coffee metabolism varies significantly across populations — people of East Asian descent, for instance, are more likely to carry slow-metabolizer variants of the CYP1A2 gene, which affects how quickly the body processes caffeine. For slow metabolizers, high coffee intake has been linked in some studies to increased cardiovascular risk, a finding that sits uncomfortably alongside the Karolinska results.

The genetics of coffee consumption are themselves fascinating and still being mapped. A 2022 study published in Molecular Psychiatry identified more than 40 genomic loci associated with coffee intake, many of which overlap with genes involved in neurological function, reward pathways, and metabolic regulation. This genetic architecture suggests that coffee drinking behavior is deeply embedded in human biology — not merely a cultural habit but a phenotype shaped by natural selection and physiological need.

And the commercial implications are not lost on the industry.

Coffee is a $500 billion global market. The specialty coffee segment has grown at roughly 12% annually over the past five years, driven by consumer interest in quality, sustainability, and increasingly, health claims. Research linking coffee to longevity provides a powerful tailwind for an industry that has already moved aggressively into functional beverage territory, with brands marketing everything from mushroom-infused blends to collagen-spiked cold brews. A causal link to reduced mortality — if the finding is replicated and widely accepted — would be the most potent marketing asset the coffee industry has ever received.

But the health halo comes with caveats that matter to consumers and regulators alike. Most of the beneficial compounds in coffee are sensitive to preparation method. Espresso and French press coffee, for example, contain higher levels of diterpenes than filtered drip coffee, and those diterpenes can raise LDL cholesterol levels. The Karolinska study doesn’t differentiate between preparation methods, which means the mortality reduction it identifies is an average effect across all types of coffee consumption represented in the UK Biobank. It’s entirely possible that some preparation methods deliver more benefit than others — or that some deliver net harm to specific subpopulations.

Decaf also complicates the picture. Several earlier observational studies found that decaffeinated coffee conferred mortality benefits similar to regular coffee, suggesting that caffeine isn’t the primary active ingredient. If that’s the case, then Mendelian randomization studies based on genetic variants for caffeine metabolism may be capturing something slightly different from what they intend. The Karolinska team acknowledged this limitation and called for future research that can disentangle the effects of caffeine from those of other coffee compounds.

Meanwhile, the broader field of nutritional epidemiology continues to grapple with its own credibility crisis. High-profile failures — including the now-discredited claims about red wine and resveratrol, and the decades-long mischaracterization of dietary fat — have made both scientists and the public wary of sweeping dietary pronouncements. Mendelian randomization was supposed to be the corrective, a way to extract causal signals from observational noise. It’s better than what came before. But it’s not a randomized controlled trial, and in nutrition science, the distance between “better evidence” and “definitive proof” remains significant.

Dr. Walter Willett, a professor of epidemiology and nutrition at Harvard’s T.H. Chan School of Public Health who was not involved in the Karolinska study, has long argued that the totality of evidence on coffee points toward benefit. In comments to media over the years, he has described coffee as “one of the most studied dietary factors in the world” and has noted that the consistency of findings across different study designs and populations is itself a form of evidence. The new genetic data, he has suggested, adds another layer of confidence to what was already a strong observational foundation.

For the average coffee drinker, the practical takeaway is reassuringly simple. If you drink coffee and tolerate it well, there’s no reason to stop and considerable reason to continue. Three to five cups a day appears to be the sweet spot identified by most research, though individual tolerance varies. If you experience anxiety, insomnia, or gastrointestinal distress, those symptoms matter more than any population-level mortality statistic. And if you don’t drink coffee, this study alone probably isn’t sufficient reason to start — though it wouldn’t be unreasonable to consider it.

What makes this moment different from previous waves of coffee optimism is the methodological rigor behind the claim. Mendelian randomization isn’t perfect, but it represents a genuine advance over the observational studies that dominated the field for decades. The sample size — half a million people — provides statistical power that smaller studies couldn’t achieve. And the consistency of the effect across multiple analytical approaches suggests that the finding isn’t an artifact of a single statistical choice.

The Karolinska team has indicated that follow-up research is already underway, including studies that will attempt to identify which specific compounds in coffee are responsible for the mortality reduction, and whether the effect is mediated primarily through cardiovascular, metabolic, or neurological pathways. They’re also planning analyses in more diverse populations, which will be essential for determining whether these findings apply globally or primarily to European-descent populations.

Coffee, it turns out, may be doing more than waking you up. It may be keeping you around longer. The evidence isn’t airtight — in science, it rarely is. But it’s the best evidence we’ve had. And for the billions of people who reach for a cup every morning, that’s worth knowing.



from WebProNews https://ift.tt/0CtW6x9

Apple’s Quiet iPad Refresh: Why an A18-Powered Entry-Level Tablet in Early 2026 Matters More Than You Think

Apple is preparing to update its most affordable iPad with the A18 chip, a move that would bring Apple Intelligence to the company’s cheapest tablet for the first time. The refresh is on track for early 2026, according to supply chain reporting, and it carries implications that extend well beyond a simple spec bump.

The timeline was first reported by AppleInsider, citing analyst Jeff Pu of Haitong International Securities. Pu’s note to investors confirmed that the entry-level iPad — currently in its 11th generation with an A16 chip — will receive the A18 processor, the same silicon that debuted in the iPhone 16 lineup last fall. Mass production is expected in the first half of 2026, placing it squarely in Apple’s spring product cycle.

This isn’t a rumor that materialized overnight. Bloomberg’s Mark Gurman had previously indicated that Apple was working on an A18-equipped iPad as part of a broader push to make Apple Intelligence available across the full product line. The logic is straightforward: Apple Intelligence, the on-device AI system Apple introduced at WWDC 2024, requires a minimum of the A17 Pro or M1 chip to function. The current entry-level iPad, stuck on A16, can’t run it. That’s a problem when your marketing strategy increasingly revolves around AI features.

So the A18 iPad isn’t just an upgrade. It’s a prerequisite.

Apple’s entry-level iPad has long served a specific role in the lineup. It’s the volume play — the model that schools buy in bulk, that parents hand to children, that first-time tablet buyers reach for when the price of an iPad Pro or even an iPad Air seems excessive. The current 11th-generation model starts at $349, making it the least expensive way into Apple’s tablet line. And yet it’s now the only iPad that can’t run Apple’s flagship software features.

That gap matters commercially. Apple doesn’t break out iPad sales by model in its earnings reports, but analysts have long estimated that the base iPad accounts for a significant share of total unit volume. Leaving that installed base without access to Apple Intelligence creates a fragmented experience — the kind of inconsistency Apple has historically worked hard to avoid. Customers walking into an Apple Store and comparing models would find that the cheapest option lacks the AI writing tools, image generation, and Siri enhancements that Apple has been promoting aggressively since late 2024.

The A18 chip solves this cleanly. Built on TSMC’s second-generation 3-nanometer process, the A18 includes a 16-core Neural Engine capable of 35 trillion operations per second. It has the raw processing headroom to handle on-device machine learning tasks without offloading them to the cloud — a core tenet of Apple’s privacy-first approach to AI. Dropping it into the entry-level iPad doesn’t just check a compatibility box. It fundamentally changes what that device can do.

There’s a manufacturing angle here too. By early 2026, the A18 will be a mature chip. Apple will have been producing it at volume for over a year by then, which typically means improved yields and lower per-unit costs from TSMC. Apple has a well-established pattern of cascading its newest silicon down through the product line as production economics improve. The iPhone SE, the base iPad, the entry-level Apple TV — these products have all historically received chips that are one or two generations behind the flagship iPhone, arriving at a point when the silicon is cheapest to produce.

Jeff Pu’s research note also touched on other Apple products in the pipeline. He expects updated iPad Air models with M4 chips, with an OLED display upgrade for that line arriving at some later point, though the timeline for those changes extends further out. The entry-level iPad refresh appears to be the nearest-term iPad launch on Apple’s schedule.

Pricing remains an open question. Apple could hold the line at $349, absorbing the component cost increase as it has done with previous generational updates. Or it could nudge the price upward modestly, banking on the Apple Intelligence feature set to justify the premium. History suggests Apple will try to maintain the current price point — the entry-level iPad’s positioning as an affordable gateway device is too strategically valuable to compromise with a significant price hike.

But don’t expect dramatic design changes. Reports from multiple supply chain analysts suggest the physical form factor will remain largely unchanged. Same screen size. Same USB-C port, the move away from Lightning having already been made. The update is about what’s inside, not what’s outside.

The timing also aligns with Apple’s education sales cycle. Spring launches for the entry-level iPad have historically coincided with school purchasing decisions in the United States and other major markets. An A18-powered iPad available by March or April 2026 would land just as school districts finalize technology budgets for the following academic year. Apple Intelligence features — particularly writing assistance and research tools — have obvious applications in educational contexts, giving Apple’s sales teams a compelling new pitch to IT administrators.

There’s a competitive dimension worth considering. Samsung, Lenovo, and a growing number of Chinese manufacturers have been pushing AI features into their tablet lineups at various price points. Google’s Android ecosystem has made on-device AI capabilities available across a broader range of hardware price tiers than Apple currently offers. By confining Apple Intelligence to its premium devices through most of 2025, Apple has left an opening for competitors to claim the “AI tablet for everyone” positioning. The A18 iPad would close that gap.

Apple’s services strategy adds another layer. Every iPad sold is a potential subscriber to Apple One, iCloud+, Apple Music, Apple TV+, Apple Arcade, and the growing list of services that now generate more than $100 billion in annual revenue for the company. An iPad that can run Apple Intelligence is a more compelling device, which means higher engagement, which means higher services attach rates. The math works in Apple’s favor even if the hardware margins on the entry-level iPad are thinner than on its premium siblings.

And then there’s the developer story. App developers building for Apple Intelligence need users who can actually run their apps. As long as the cheapest iPad lacks the required hardware, developers face a fragmented target market. Some of their users can access AI features. Some can’t. That complicates development decisions and limits the addressable audience for AI-powered apps. Bringing the entire iPad line up to Apple Intelligence spec removes that friction.

None of this is happening in isolation. Apple is in the middle of a broader hardware refresh cycle that’s systematically bringing Apple Intelligence compatibility to every product category. The iPhone SE 4, slated for early 2025, was set to bring the A18 to Apple’s cheapest phone. Updated MacBooks have already moved to M4 chips. The iPad is simply the next domino.

The cadence is deliberate. Apple announced Apple Intelligence in June 2024. It shipped the first features in iOS 18.1 that October. By the end of 2026, if current plans hold, every device Apple sells — phone, tablet, laptop, desktop — will be capable of running the full Apple Intelligence feature set. That’s a transition of roughly two and a half years from announcement to universal availability. Fast by Apple’s standards.

For investors, the A18 iPad refresh is a relatively minor event in isolation. It won’t move Apple’s stock price on its own. But it’s a signal of execution — evidence that Apple’s plan to democratize its AI capabilities across the entire hardware line is proceeding on schedule. And in a market where AI credibility matters to Wall Street, that kind of systematic follow-through counts.

The entry-level iPad has never been Apple’s most exciting product. It’s not supposed to be. It’s supposed to be the one that sells in the largest numbers, reaches the widest audience, and pulls the most people into Apple’s orbit. With the A18 chip inside, it’ll finally be able to do all of that while also running the software Apple is betting its future on.

Early 2026. No fireworks. Just a very deliberate closing of a very deliberate gap.



from WebProNews https://ift.tt/Zew2OTb

The Quiet Genius Who Proved Information Is Physical: Charles Bennett’s Turing Award and the Birth of Quantum Information Science

For more than four decades, Charles H. Bennett worked inside IBM Research, pursuing ideas so far ahead of their time that most of his peers didn’t know what to make of them. He proposed that computation could, in theory, consume no energy. He co-invented quantum teleportation. He helped establish that information isn’t merely abstract — it’s as physical as a rock, a river, or a photon spinning through fiber optic cable.

Now, at 81, Bennett has received computing’s highest honor. The Association for Computing Machinery announced that Bennett is the recipient of the 2025 A.M. Turing Award, often called the Nobel Prize of computing, carrying a $1 million prize funded by Google. It’s a recognition that has been, by many accounts, long overdue.

As IBM reported, Bennett is the sixth IBM researcher to receive the Turing Award, joining a lineage that includes John Backus, the inventor of Fortran, and Benoit Mandelbrot, who pioneered fractal geometry. But Bennett’s contributions are arguably the most conceptually radical of the group. His work didn’t just improve existing technology. It redefined what technology could mean.

The story begins in the early 1970s, when Bennett, a young researcher at IBM’s Thomas J. Watson Research Center in Yorktown Heights, New York, started thinking about a deceptively simple question: Does computation require energy? The prevailing wisdom, rooted in a 1961 principle articulated by IBM physicist Rolf Landauer, held that erasing a bit of information necessarily generates heat — a minimum amount of entropy dictated by the second law of thermodynamics. This is known as Landauer’s principle, and it established a hard thermodynamic floor beneath computation.

Bennett’s insight was both elegant and counterintuitive. He demonstrated that computation itself doesn’t need to be irreversible. In a landmark 1973 paper, he showed that any computation can, in principle, be performed in a logically reversible manner — meaning no information needs to be erased, and therefore no minimum energy need be dissipated. The energy cost Landauer identified was real, but it was tied specifically to the act of erasing information, not to computing per se. This distinction sounds arcane. It isn’t. It strikes at the very foundation of what a computer is and what it can become.
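Two small illustrations of the distinction, using only textbook constants: first, the Landauer cost of erasing a single bit at room temperature; second, a logically reversible gate (CNOT), which erases nothing because its input can always be recovered from its output. This is a sketch of the general idea, not of the constructions in Bennett's 1973 paper.

```python
import math

# Landauer's principle: erasing one bit at temperature T dissipates at
# least k_B * T * ln(2) of energy as heat.
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
landauer_bound = k_B * T * math.log(2)
print(f"Minimum cost to erase one bit at 300 K: {landauer_bound:.3e} J")  # ~2.87e-21 J

# Bennett's point: a logically reversible operation erases no information.
# CNOT maps (a, b) -> (a, a XOR b) and is its own inverse, so the input is
# always recoverable from the output and the erasure cost never applies.
def cnot(a: int, b: int) -> tuple[int, int]:
    return a, a ^ b

for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)   # applying it twice restores the input
print("CNOT is reversible: every output determines its input")
```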

“Charlie saw, earlier and more clearly than anyone else, that information is physical,” said Dario Gil, IBM’s Senior Vice President and Director of Research, in a statement carried by IBM Think. “His work established the field of quantum information science.”

That field barely existed when Bennett started. The very phrase “quantum information” would have drawn blank stares at most computer science departments in the 1980s. But Bennett, working with collaborators across physics and mathematics, steadily constructed the theoretical architecture for an entirely new way of processing and transmitting information.

In 1984, Bennett and Gilles Brassard of the Université de Montréal invented quantum key distribution — a protocol, now known as BB84, that uses the quantum properties of photons to create encryption keys that are provably secure against any eavesdropper, regardless of computational power. This wasn’t a better lock. It was a fundamentally different kind of lock, one whose security is guaranteed not by mathematical difficulty but by the laws of physics themselves. BB84 remains the most widely implemented quantum cryptography protocol in the world, and its intellectual descendants are now embedded in commercial systems sold by companies from Switzerland to China.
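The core of BB84 is short enough to sketch. The Python below simulates the happy path on an ideal, noiseless channel with no eavesdropper: random bits and bases on Alice's side, random measurement bases on Bob's, then "sifting" over the classical channel to keep only the positions where the bases matched. Error estimation and privacy amplification, which the real protocol requires, are omitted, and the channel model is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 32  # number of photons Alice sends

# Alice picks a random bit and a random basis (0 = rectilinear, 1 = diagonal)
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob measures each photon in his own randomly chosen basis
bob_bases = rng.integers(0, 2, n)

# Ideal, noiseless channel with no eavesdropper: Bob recovers Alice's bit
# whenever the bases match, and gets a uniformly random bit when they don't
bob_bits = np.where(bob_bases == alice_bases,
                    alice_bits,
                    rng.integers(0, 2, n))

# Sifting: over an authenticated classical channel they compare BASES
# (never the bits) and keep only the positions where the bases agreed
keep = alice_bases == bob_bases
key_alice, key_bob = alice_bits[keep], bob_bits[keep]

print("sifted key length:", int(keep.sum()))             # ~n/2 on average
print("keys agree:", bool(np.array_equal(key_alice, key_bob)))
```

The security argument lives in what the sketch leaves out: an eavesdropper who measures in the wrong basis disturbs the photons, and Alice and Bob detect that disturbance by sacrificing a sample of sifted bits to estimate the error rate.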

Then came teleportation.

In 1993, Bennett, Brassard, and four other physicists published a paper proposing that the complete quantum state of a particle could be transmitted from one location to another — not by physically moving the particle, but by exploiting quantum entanglement and classical communication. They called it quantum teleportation. The name invited misunderstanding. This wasn’t Star Trek. No matter was being beamed anywhere. But what was being transmitted — the full quantum information describing a particle’s state — was being faithfully reconstructed at a distant location while being destroyed at the origin, in perfect compliance with the no-cloning theorem of quantum mechanics.
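The protocol itself is compact enough to simulate with a plain state vector. The NumPy sketch below follows the standard textbook circuit rather than the paper's original notation: Alice and Bob share a Bell pair, Alice performs a Bell-basis measurement on her two qubits and sends two classical bits, and Bob applies the corresponding Pauli corrections, after which his qubit carries the original state.

```python
import numpy as np

# Single-qubit gates and projectors
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

CNOT_01 = kron3(P0, I, I) + kron3(P1, X, I)   # control q0, target q1
CNOT_12 = kron3(I, P0, I) + kron3(I, P1, X)   # control q1, target q2

# Random single-qubit state |psi> = a|0> + b|1> to teleport (held on qubit 0)
rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = np.array([a, b]) / np.sqrt(abs(a)**2 + abs(b)**2)

# Start: qubit 0 holds |psi>; qubits 1 (Alice) and 2 (Bob) start in |00>
state = np.kron(psi, np.array([1, 0, 0, 0], dtype=complex))

# Share a Bell pair between qubits 1 and 2: H on q1, then CNOT(1 -> 2)
state = CNOT_12 @ (kron3(I, H, I) @ state)

# Alice's Bell measurement basis change: CNOT(0 -> 1), then H on q0
state = kron3(H, I, I) @ (CNOT_01 @ state)

# Measure qubits 0 and 1, sampling one outcome according to the Born rule
probs = np.abs(state.reshape(2, 2, 2))**2
joint = probs.sum(axis=2).flatten()           # distribution over (m0, m1)
outcome = rng.choice(4, p=joint / joint.sum())
m0, m1 = divmod(outcome, 2)

# Bob's qubit, conditioned on the measurement result, renormalized
bob = state.reshape(2, 2, 2)[m0, m1, :]
bob = bob / np.linalg.norm(bob)

# Bob applies the classically communicated corrections: X^m1 then Z^m0
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

# Up to a global phase, Bob now holds the original |psi>
print(f"fidelity |<psi|bob>|^2 = {abs(np.vdot(psi, bob))**2:.6f}")   # ~1.0
```

Note what the two classical bits are doing: without them, Bob's qubit is in a maximally mixed state and carries no usable information, which is why teleportation cannot send signals faster than light.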

The 1993 teleportation paper, as IBM noted, has been cited thousands of times and is considered one of the foundational results in quantum information theory. Experimental demonstrations followed within a few years. Today, quantum teleportation is not merely a theoretical curiosity — it’s a core operation in many proposed architectures for quantum networks and distributed quantum computing.

Bennett’s contributions don’t stop there. He co-developed the concept of quantum computational complexity, co-discovered entanglement distillation (a method for extracting high-quality entanglement from noisy quantum channels), and helped formalize the resource theory of entanglement. He also contributed to the theory of quantum error correction, which is now the central engineering challenge facing every company trying to build a practical quantum computer — from IBM itself to Google, Microsoft, and a constellation of startups.

What makes Bennett unusual, even among Turing laureates, is the breadth of his influence across disciplines. He is claimed by physicists, computer scientists, information theorists, and cryptographers alike. His work sits at a crossroads that didn’t exist before he helped build it. And he did much of it with a style that colleagues describe as gentle, curious, and almost playfully rigorous.

“He has an extraordinary ability to find the deep question hiding inside a practical problem,” Brassard told reporters. The two have collaborated for over 40 years — one of the most productive partnerships in the history of the field.

The Turing Award arrives at a moment when quantum computing is receiving unprecedented investment and scrutiny. IBM itself has been aggressive in the space, unveiling increasingly powerful quantum processors and laying out a roadmap toward fault-tolerant quantum computation. Google claimed “quantum supremacy” in 2019 with its Sycamore processor. Startups like PsiQuantum, IonQ, and Quantinuum are racing to demonstrate commercial viability. Governments around the world — the United States, China, the European Union, and others — have committed billions to quantum research.

But the theoretical scaffolding on which all of this rests was erected, in large part, by Bennett and a small group of collaborators who started working when quantum computing was considered, at best, a speculative thought experiment. The hardware engineers building today’s superconducting qubits and trapped-ion systems are, whether they realize it or not, building on foundations Bennett laid decades ago.

It’s also worth placing Bennett’s recognition in the context of recent Turing Awards. Last year’s prize went to Andrew Barto and Richard Sutton for reinforcement learning. The year before, it was Avi Wigderson for contributions to computational complexity theory. The selection of Bennett signals the ACM’s recognition that quantum information science has matured from a niche curiosity into a central pillar of computer science — one with implications for cryptography, materials science, drug discovery, optimization, and machine learning.

Bennett himself has been characteristically modest about the honor. In his career at IBM, which has spanned more than 50 years, he has never held a management position or led a large research group. He is, in the truest sense, a scientist — someone who followed questions wherever they led, even when the answers seemed to have no practical application for decades.

Some of those answers are just now becoming practical. Quantum key distribution systems are being deployed in metropolitan fiber networks. Quantum teleportation is being tested over satellite links. Quantum error correction codes derived from the theoretical framework Bennett helped create are being implemented on real hardware. The distance between Bennett’s blackboard and the engineering lab has been shrinking, year by year, for four decades.

And yet the most profound aspect of Bennett’s legacy may be philosophical rather than technical. Before his work, information was widely regarded as something ethereal — patterns, abstractions, software as opposed to hardware. Bennett, building on Landauer’s insight, helped establish that information is always instantiated in a physical system, subject to physical laws. Quantum information takes this further: the laws governing information at the quantum scale are stranger, richer, and more powerful than anything classical physics anticipated.

This idea — that information is physical, and that the physics of information determines the limits of computation, communication, and cryptography — is now so embedded in modern science that it’s easy to forget someone had to fight for it. Bennett did. Quietly, persistently, brilliantly.

The $1 million prize will be formally presented at the ACM’s annual awards banquet later this year. Bennett, who still holds the title of IBM Fellow Emeritus, is expected to attend. For a man who helped invent the theoretical tools that may define the next century of computing, it’s a fitting, if belated, coronation.



from WebProNews https://ift.tt/ZWYfADt

Faraday Future Escapes the SEC’s Grip — But Its Road Ahead Is Still Full of Potholes

The Securities and Exchange Commission has quietly closed its four-year investigation into Faraday Future Intelligent Electric Inc., the beleaguered electric vehicle startup that has been teetering between ambition and oblivion since its founding more than a decade ago. The company disclosed the development in a regulatory filing on March 21, 2026, confirming that the SEC’s enforcement division had concluded its probe without recommending any further action against the firm. No charges. No penalties. Just a long exhale from a company that has had precious few reasons to celebrate in recent years.

The investigation, which began in 2022, centered on allegations that Faraday Future had misled investors about the number of pre-orders and reservations for its flagship FF 91 luxury electric SUV. At the time, the company was a freshly minted public entity, having completed a merger with a special purpose acquisition company in July 2021 — a deal that valued it at roughly $3.4 billion. That valuation now reads like a relic from another era. As TechCrunch reported, the SEC’s decision to drop the case removes one of the most persistent legal clouds hanging over the company, though it hardly signals a return to health.

To understand how much weight this investigation carried, you have to rewind to the frenzied SPAC boom of 2020 and 2021, when dozens of EV startups raced to public markets through blank-check mergers. Faraday Future was among the most high-profile of these, partly because of the outsized personality of its founder, Jia Yueting, the Chinese tech mogul who fled to the United States in 2017 amid the financial collapse of his LeEco conglomerate. Jia filed for personal bankruptcy in 2019 and subsequently stepped back from a formal executive role at Faraday Future, though his influence over the company has remained a persistent source of scrutiny and controversy.

The SEC probe was not the company’s first brush with regulatory trouble related to the SPAC deal. In 2022, the agency had already concluded a separate investigation that resulted in a $9.5 million settlement after finding that Faraday Future had made misleading claims in connection with the merger. That earlier action focused on the company’s representations about binding vehicle reservations, which turned out to be far less firm than investors had been led to believe. The fresh investigation that just concluded appears to have been a continuation or expansion of that earlier scrutiny, probing whether additional securities violations had occurred.

Faraday Future’s stock barely moved on the news. That tells you something.

Shares of the company, which trades on the Nasdaq under the ticker FFIE, have been hovering in penny-stock territory for much of the past year. The stock closed at $3.07 on March 21, a figure that reflects the aftermath of multiple reverse stock splits designed to keep the company in compliance with Nasdaq’s minimum listing requirements. In split-adjusted terms, the destruction of shareholder value has been staggering. Investors who bought in at the SPAC merger price have lost nearly all of their money.

And yet, Faraday Future persists. The company began delivering the FF 91 2.0 — its ultra-luxury electric vehicle priced north of $300,000 — in limited quantities starting in late 2023. But production volumes have been minuscule. The company has struggled with chronic cash shortages, supplier disputes, and an inability to scale manufacturing at its Hanford, California plant. In its most recent quarterly filing, Faraday Future reported delivering just a handful of vehicles, a figure that would be embarrassing for a boutique automaker, let alone one that once aspired to compete with Tesla and Lucid Motors.

The company’s financial position remains precarious. Faraday Future has repeatedly warned in SEC filings that there is “substantial doubt” about its ability to continue as a going concern — the accounting profession’s polite way of saying the lights could go out. It has survived largely through a series of small capital raises, convertible note offerings, and what can only be described as financial improvisation. Each round of fundraising has diluted existing shareholders further, creating a vicious cycle that has made the stock increasingly unattractive to institutional investors.

So what does the SEC’s decision to close its investigation actually mean for the company’s prospects? In practical terms, it eliminates the risk of another costly settlement or, worse, enforcement action that could have included restrictions on the company’s ability to raise capital. That’s not nothing. For a company that lives and dies by its access to external funding, the removal of a hanging regulatory threat provides at least a marginal improvement in its negotiating position with potential investors and lenders.

But the structural challenges facing Faraday Future go far beyond regulatory risk. The EV market itself has shifted dramatically since the company first went public. The era of easy money and sky-high valuations for pre-revenue EV startups is over. Fisker, another SPAC-era EV company, filed for bankruptcy in 2024. Lordstown Motors met a similar fate. Canoo has been delisted. The winnowing has been brutal, and the survivors — companies like Rivian and Lucid — have tens of billions of dollars in backing from strategic investors like Amazon and Saudi Arabia’s Public Investment Fund. Faraday Future has no comparable patron.

The company has pinned its hopes on a broader product strategy. Beyond the FF 91, Faraday Future has discussed plans for a more affordable vehicle, the FF 81, and even a mass-market model. These ambitions, however, require capital the company simply doesn’t have. Building a new vehicle program from scratch costs billions. Faraday Future’s total cash on hand, as of its last disclosure, was measured in the tens of millions — an amount that wouldn’t cover a single quarter of operating expenses at most automakers.

Jia Yueting’s role continues to loom over everything. While he formally holds the title of Chief Product and User Ecosystem Officer — a characteristically grandiose Silicon Valley designation — his actual influence on corporate governance and strategy has been a recurring concern for analysts and investors. The 2022 SEC settlement specifically cited the company’s failure to disclose certain connections between Jia and entities involved in the SPAC transaction. Trust, once broken with regulators and investors, is extraordinarily difficult to rebuild.

There’s also the question of whether the FF 91 itself is a viable product. At a price point above $300,000, the vehicle competes not with mainstream EVs but with the likes of Rolls-Royce and Bentley — brands with decades of heritage and loyal customer bases. The FF 91 offers impressive specifications on paper: a 1,050-horsepower drivetrain, advanced autonomous driving features, and a rear-seat experience designed to rival a first-class airline cabin. Reviews from the handful of automotive journalists who have driven the car have been mixed, praising the powertrain but noting inconsistencies in build quality and software polish. For a vehicle at this price, inconsistency is a dealbreaker.

The broader EV industry context doesn’t help either. Tesla has been aggressively cutting prices across its lineup, compressing margins for every other manufacturer. Chinese automakers like BYD and NIO are expanding globally with vehicles that offer compelling technology at far lower price points. Legacy automakers — GM, Ford, Volkswagen, Hyundai — have poured billions into their own electric platforms. The market Faraday Future hoped to enter is now saturated with well-funded competitors, many of whom have the manufacturing scale and brand recognition that a startup simply cannot replicate overnight.

Still, the SEC’s decision not to pursue further action is a genuine, if modest, positive development for a company that has been accumulating negatives for years. It allows management to tell a cleaner story to potential investors — or, perhaps more realistically, to potential acquirers or strategic partners. There has been intermittent speculation that Faraday Future’s most valuable asset may ultimately be its intellectual property portfolio and its California manufacturing facility, rather than its ability to operate as an independent automaker.

The EV SPAC era produced a generation of companies built more on PowerPoint decks and celebrity endorsements than on engineering rigor and financial discipline. Many are now gone. Faraday Future, against considerable odds, is still standing — barely. The SEC investigation’s conclusion removes one existential threat. But the company’s fundamental challenge remains unchanged: it needs to prove it can build cars people want, in volumes that matter, at a cost structure that doesn’t consume cash faster than it can be raised.

That’s a tall order for any startup. For one with Faraday Future’s history, it borders on the Herculean.

As TechCrunch noted, the SEC’s letter to Faraday Future included the standard caveat that the closure of an investigation “must in no way be construed” as a finding that no violations occurred. The agency reserves the right to reopen matters at any time. It’s the regulatory equivalent of “we’re watching you.” Whether Faraday Future can use this reprieve to chart a viable path forward — or whether it simply delays an inevitable reckoning — will depend on decisions made in the coming months about fundraising, production targets, and whether the company’s leadership can finally match its ambitions with execution.



from WebProNews https://ift.tt/5Hf0IhY

Sunday, 22 March 2026

Google Summer of Code 2026: Twenty-Two Years In, the World’s Largest Open-Source Mentorship Program Still Hasn’t Run Out of Steam

Google is once again opening applications for its Summer of Code program, now in its twenty-second consecutive year — a remarkable stretch for any corporate-sponsored initiative, let alone one that pays newcomers to write open-source software. The 2026 edition, announced on the Google Open Source Blog, carries the tagline “Open Source, Open Doors” and invites contributors of all experience levels to apply for funded mentorship slots with established open-source organizations worldwide.

The program’s longevity alone tells a story. Since 2005, Google Summer of Code (GSoC) has funded more than 20,000 participants from over 115 countries, channeling millions of dollars into stipends for contributors who might otherwise never have found an on-ramp into open-source development. That’s not philanthropy dressed up as marketing. It’s a pipeline — one that feeds talent into the broader open-source supply chain that Google, and virtually every major technology company, depends on.

This year’s structure maintains the flexible format Google adopted in recent cycles. Contributors can choose between medium-sized projects (~175 hours) and large projects (~350 hours), a change from the program’s original all-or-nothing, full-summer commitment. The shift, first introduced in 2022, was designed to accommodate participants who hold jobs, attend school year-round, or live in regions where a North American summer schedule doesn’t apply. It worked. Application numbers from the Global South surged after the change, according to Google’s own program data.

But the 2026 cycle arrives at a moment when open-source software faces a tangle of pressures that didn’t exist when GSoC launched two decades ago.

The most obvious: artificial intelligence. Large language models are now capable of generating functional code, raising questions about what “mentorship” means when a contributor can prompt an AI to produce a working patch in seconds. Google’s announcement doesn’t directly address this tension, though the company has increasingly integrated AI tooling into its own development workflows. The implicit bet is that GSoC’s value was never just about the code — it was about teaching people how to participate in distributed, consensus-driven software communities. That’s a skill no model can replicate yet.

There’s also the sustainability question. Open-source maintainers are burned out. A 2024 report from the Linux Foundation found that nearly half of all maintainers are unpaid volunteers, and a significant percentage reported considering stepping away from their projects entirely. GSoC addresses this obliquely: by pairing new contributors with mentoring organizations, it theoretically distributes some of the workload. In practice, mentoring itself is work — often uncompensated beyond the satisfaction of growing the next generation. Several past mentoring organizations have publicly noted the strain of participating in GSoC while simultaneously maintaining their own codebases.

So why do they keep signing up?

The answer is straightforward. GSoC remains one of the few programs that reliably converts casual users into committed contributors. Organizations like the Apache Software Foundation, the Python Software Foundation, and CNCF have participated repeatedly, citing long-term retention of GSoC alumni as active maintainers and community members. For projects struggling to attract new blood — which is most of them — that conversion rate is hard to replicate through any other mechanism.

Google’s 2026 announcement emphasizes inclusivity, noting that the program is open to anyone 18 or older, not just students. This broader eligibility, introduced in 2022 when the program dropped “Students” from its informal branding, was a significant expansion. Career changers, self-taught developers, and professionals from adjacent fields like data science and design are now explicitly welcomed. The program page lists technical writing and UI/UX projects alongside traditional coding tasks.

The timeline is tight. Mentoring organizations have already submitted their applications, and the list of accepted organizations will be published in the coming weeks. Contributor applications open shortly after, with a deadline that typically falls in early April. Google pays stipends directly to accepted contributors, with amounts adjusted by region using a purchasing-power-parity model — a practice that has drawn both praise for its equity considerations and criticism for paying different amounts for identical work.

Financially, the stipends range from roughly $1,500 for a medium project in lower-cost regions to $6,600 for a large project in higher-cost ones. Not life-changing money for a software engineer in San Francisco. Potentially transformative for a university student in Nairobi or Dhaka. This asymmetry is, in many ways, the program’s most powerful feature. It puts real resources in the hands of people for whom a few thousand dollars can fund months of focused technical development.

And the competition for spots is fierce. In recent years, acceptance rates have hovered around 10-15%, making GSoC more selective than many graduate programs. Contributors must submit detailed project proposals, demonstrate familiarity with their chosen organization’s codebase, and often complete preliminary contributions or “micro-tasks” just to be considered. The process itself is educational — a crash course in technical writing, project scoping, and community engagement.

Google’s motivations aren’t purely altruistic, of course. The company’s infrastructure runs on open-source software, from the Linux kernel to Kubernetes to TensorFlow. Every competent new contributor who enters the open-source world is, in some indirect but real sense, subsidizing Google’s own operations. The program also generates goodwill among developers, a constituency Google courts aggressively through initiatives like Google Developer Groups, the Chrome developer ecosystem, and Android’s open-source underpinnings.

Critics have occasionally questioned whether GSoC creates dependency — whether organizations become reliant on a steady stream of Google-funded labor rather than building their own sustainable contributor pipelines. It’s a fair concern. But the alternative, for many smaller projects, isn’t self-sufficiency. It’s obscurity and eventual abandonment.

The 2026 program also arrives amid heightened government interest in open-source security. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has been vocal about the risks posed by undermaintained open-source projects, particularly after high-profile vulnerabilities like Log4Shell exposed how thin the maintenance layer can be for critical software. Programs that bring new contributors into these projects aren’t just nice to have. They’re part of the security infrastructure now.

For prospective applicants, the advice from past participants is consistent: start early. Don’t wait for the official application window to begin engaging with your target organization. Join their mailing lists, IRC channels, or Discord servers. Read their contribution guidelines. Fix a small bug. Show up before you’re asked to.

Twenty-two years is a long time for any program to sustain momentum. Google Summer of Code has outlasted Google+, Google Reader, and dozens of other initiatives the company launched with far more fanfare. Its persistence suggests something durable about the model — or at least about Google’s recognition that the open-source commons requires ongoing investment, even when the returns are diffuse and difficult to measure on a quarterly earnings call.

The 2026 cycle is underway, with contributor applications opening once the accepted organizations are announced. Details are available on the Google Open Source Blog and the official GSoC website. For thousands of aspiring developers around the world, this remains one of the most accessible entry points into professional-grade open-source contribution. The door is open. Walking through it is the hard part.



from WebProNews https://ift.tt/yWOV90i

Tesla’s Electric Semi Has Already Won Over the Hardest Crowd in Transportation: The Drivers Themselves

Long-haul truckers are not easily impressed. They spend thousands of hours a year behind the wheel of machines they know intimately, and they tend to be skeptical of anything that threatens to upend the way freight moves across America. So when drivers who’ve actually logged miles in Tesla’s Semi start describing the experience in terms usually reserved for sports cars, the trucking industry pays attention.

The Tesla Semi, first unveiled in prototype form back in 2017, has spent years in a kind of industrial purgatory — promised but perpetually delayed, shown off but never mass-produced. That’s changing. Tesla has confirmed plans to begin volume production at its new facility in Nevada, with CEO Elon Musk targeting meaningful output starting in 2026. And the early reviews from professional drivers who’ve operated the truck in its limited deployment are striking in their enthusiasm.

As reported by Slashdot, truckers who’ve had seat time in the Semi have praised its acceleration, handling, and overall driving dynamics. The electric drivetrain delivers instant torque — a characteristic that matters enormously when you’re pulling 80,000 pounds up a highway on-ramp. Drivers have noted that the center-seated driving position, unconventional for a Class 8 truck, actually improves visibility and reduces fatigue on long hauls. The absence of a diesel engine’s vibration and noise is another factor drivers cite repeatedly.

This isn’t marketing spin from Tesla’s communications department. It’s coming from the people who haul freight for a living.

PepsiCo has been the Semi’s most prominent early customer, operating a fleet of the trucks out of its Frito-Lay facility in Modesto, California, since late 2022. The beverage and snack giant initially took delivery of a handful of units and has been using them on routes in Northern California. PepsiCo’s real-world deployment has provided the most substantive operational data available on the Semi’s capabilities. The company has reported that the trucks are meeting or exceeding range expectations on certain routes, with some loads traveling over 400 miles on a single charge — a figure that, if consistently reproducible, would cover a significant portion of regional freight operations in the United States.

Range anxiety remains the central objection from fleet operators considering electrification. And it’s a legitimate concern. The American Trucking Associations estimates that the average long-haul trip covers roughly 500 miles per day. Tesla has claimed the Semi will offer up to 500 miles of range, but real-world performance depends heavily on load weight, terrain, weather, and driving behavior. The gap between laboratory specs and highway reality has killed plenty of promising technologies in trucking before.

But here’s what makes the driver enthusiasm notable: it suggests that Tesla may have solved, or at least substantially mitigated, the ergonomic and operational problems that plague many first-generation commercial EVs. Truckers don’t care about press releases. They care about whether the air conditioning works at hour nine, whether the regenerative braking feels predictable on a mountain downgrade, and whether the seat doesn’t destroy their lower back after 600 miles. The early feedback indicates Tesla got these details right.

The production timeline is the real question mark. Tesla’s new manufacturing facility near Reno, Nevada, is being purpose-built for Semi production, but the company has a well-documented history of missing its own deadlines. The Semi was originally supposed to enter production in 2019. Then 2020. Then 2021. The limited 2022 deliveries to PepsiCo were more proof-of-concept than commercial launch. Musk has now pointed to 2026 as the year volume production begins in earnest, with a target of producing 50,000 units annually once the factory reaches full capacity.

Fifty thousand units a year would be significant. The U.S. Class 8 truck market typically sees around 250,000 to 300,000 new registrations annually. If Tesla captured even a fraction of that, it would represent one of the most consequential shifts in freight transportation in decades.

The competition isn’t standing still. Daimler Truck’s Freightliner eCascadia is already in production and operating with multiple fleet customers. Volvo Trucks has been delivering its VNR Electric in North America. Nikola, despite its well-publicized corporate scandals, has shipped hydrogen fuel cell and battery-electric trucks to customers. And Chinese manufacturers like BYD are aggressively expanding their commercial vehicle footprint globally.

What separates Tesla’s approach is vertical integration. The company manufactures its own battery cells, designs its own power electronics, and controls its charging infrastructure through the Tesla Megacharger network, which is specifically designed for the Semi. Each Megacharger is designed to deliver up to 1 megawatt of power, theoretically adding 400 miles of range in about 30 minutes. That’s close to the time a driver would spend at a truck stop for a federally mandated rest break — a coincidence that is almost certainly not a coincidence.
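A quick back-of-envelope check shows how those figures fit together. The efficiency numbers below are assumptions for illustration (Tesla has claimed only that the Semi uses less than 2 kWh per mile), and the session is modeled as a flat 1 megawatt for 30 minutes, which real charge curves won't match exactly:

# Energy delivered by a 30-minute session at a sustained 1 MW, converted to range
# at several assumed efficiencies. Illustrative only; not Tesla-published figures.
power_kw = 1000.0
session_hours = 0.5
kwh_added = power_kw * session_hours            # 500 kWh

for kwh_per_mile in (2.0, 1.7, 1.25):
    miles = kwh_added / kwh_per_mile
    print(f"{kwh_per_mile} kWh/mile -> ~{miles:.0f} miles added")
# 2.0 -> 250, 1.7 -> ~294, 1.25 -> 400: reaching 400 miles in half an hour implies
# either efficiency well below 2 kWh/mile or sustained power above 1 MW.

Either way, the session length is the operationally important part: it fits inside the break drivers already have to take.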

The charging infrastructure challenge is enormous and often underappreciated. A single diesel truck stop can refuel dozens of trucks simultaneously. Replicating that throughput with electricity requires massive grid connections, substantial capital investment, and coordination with utilities that aren’t accustomed to thinking about energy demand in these terms. Tesla’s advantage is that it has experience building out a proprietary charging network for its passenger vehicles and can apply those operational lessons to the commercial side. Whether it can scale fast enough is another matter entirely.

Cost economics will ultimately determine adoption rates. Diesel prices fluctuate, but the fuel typically represents about 30% to 40% of a trucking company’s operating costs. Electricity is cheaper per mile in most markets, and electric drivetrains have far fewer moving parts than diesel engines, which means lower maintenance expenses over the life of the vehicle. The Semi’s total cost of ownership could be substantially lower than a comparable diesel truck — but only if the purchase price comes down and the charging infrastructure is available where fleets need it.
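The per-mile arithmetic behind that claim is simple, even though every input is contestable. The numbers below are placeholder assumptions (a round diesel price, a typical loaded Class 8 fuel economy, a commercial electricity rate), not figures from Tesla or any fleet:

# Back-of-envelope energy cost per mile, diesel vs. electric. All inputs assumed.
diesel_price_per_gal = 4.00        # USD per gallon
diesel_miles_per_gal = 6.5         # typical loaded Class 8 fuel economy
kwh_per_mile = 1.7                 # assumed Semi efficiency
electricity_per_kwh = 0.14         # USD, commercial rate; Megacharger pricing undisclosed

diesel_cost = diesel_price_per_gal / diesel_miles_per_gal
electric_cost = kwh_per_mile * electricity_per_kwh
print(f"diesel:   ${diesel_cost:.2f} per mile")    # ~$0.62
print(f"electric: ${electric_cost:.2f} per mile")  # ~$0.24

Maintenance savings come on top of that spread, while a higher purchase price and charging buildout eat into it; the sketch frames only the energy piece.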

Tesla hasn’t publicly disclosed a final price for the production Semi. Early estimates pegged the 300-mile range version at around $150,000 and the 500-mile version at $180,000, but those figures date back to the 2017 unveiling and almost certainly don’t reflect current costs. A new diesel Class 8 truck typically sells for $130,000 to $180,000 depending on configuration. Federal tax credits and state-level incentives can offset some of the premium for electric trucks, but the math has to work without subsidies for mass adoption to take hold.

The driver shortage that has plagued the trucking industry for years could also work in Tesla’s favor. If electric trucks are genuinely more comfortable and less fatiguing to drive — as early testers suggest — they could help attract younger workers to an industry that has struggled with recruitment. The average age of a long-haul trucker in the United States is 57. An industry that can’t replace its retiring workforce has a powerful incentive to adopt any technology that makes the job more appealing.

There’s also the regulatory dimension. California’s Advanced Clean Trucks rule requires manufacturers to sell an increasing percentage of zero-emission trucks beginning with the 2024 model year, and the companion Advanced Clean Fleets rule set a target of 100% zero-emission sales by 2036 for certain vehicle categories. Several other states have adopted or are considering similar mandates. These regulations create a guaranteed market for electric trucks regardless of whether individual fleet operators are enthusiastic about the transition. For Tesla, regulatory tailwinds represent a structural advantage that compounds over time.

Not everyone is convinced. Some industry veterans argue that battery-electric technology is fundamentally unsuited for the longest-haul routes and that hydrogen fuel cells will ultimately prove more practical for cross-country freight. The weight of battery packs reduces payload capacity — a critical factor when freight revenue is calculated by the pound. And the electrical grid in many parts of the country simply isn’t ready to support large-scale truck charging without significant upgrades.

These are real constraints. But they’re engineering and infrastructure problems, not physics problems. And the trucking industry has a long history of adapting to new powertrain technologies — from steam to gasoline to diesel — once the economics become undeniable.

What the early driver feedback reveals is something that spreadsheets and engineering specifications can’t fully capture: the Tesla Semi is, by most accounts, a genuinely good truck to drive. That matters more than analysts might think. In an industry where driver retention is a chronic problem and operator satisfaction directly affects safety and productivity, building a truck that people actually want to spend twelve hours in is no small achievement.

The next eighteen months will determine whether Tesla can translate driver enthusiasm and prototype success into industrial-scale production. The company has the factory, the technology, and a growing order book that reportedly includes commitments from Walmart, UPS, and several other major shippers in addition to PepsiCo. What it needs now is execution — the unsexy, grinding work of manufacturing thousands of identical vehicles to consistent quality standards, week after week, month after month.

That has always been the hardest part. And for Tesla, which has stumbled through multiple “production hell” episodes with its passenger vehicles, the challenge of building heavy trucks at scale shouldn’t be underestimated. But if the Semi performs in volume production the way it has in limited deployment, Tesla will have accomplished something remarkable: convincing the most pragmatic, tradition-bound segment of the transportation industry that the future runs on electrons, not diesel.

The truckers, it seems, are already on board.



from WebProNews https://ift.tt/I6v2WC0

Jeff Bezos Wants to Put Data Centers in Orbit — and Blue Origin Just Showed It’s Not Science Fiction

Blue Origin has spent two decades being mocked as a vanity project. Too slow. Too secretive. Always behind SpaceX. But a closely held internal initiative called Project Sunrise, now partially revealed, suggests Jeff Bezos’s space company has been quietly building something that could reshape the economics of cloud computing itself — by moving server farms off Earth and into orbit.

The concept sounds absurd until you examine the math.

GeekWire reported that Project Sunrise is a multi-year effort within Blue Origin to design, prototype, and eventually deploy orbital data center modules that would ride the company’s New Glenn heavy-lift rocket into low Earth orbit. The modules would process workloads for Amazon Web Services — the cloud division that generates the bulk of Amazon’s operating profit — while exploiting the natural vacuum and cold of space for cooling, one of the most expensive line items in terrestrial data center operations.

Cooling alone accounts for roughly 40% of a large data center’s energy consumption. In space, radiative cooling panels can reject heat directly into the near-absolute-zero background without compressors, chillers, or water. That single advantage, if engineered correctly, could cut the per-rack operating cost of high-density AI compute by a margin wide enough to justify the launch expense.
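The scale of that trade is easy to sketch with the Stefan-Boltzmann law, which sets how much heat a panel can radiate at a given temperature. The inputs below are illustrative assumptions, not figures from Project Sunrise: a radiator running near room temperature with an emissivity of 0.9, ignoring sunlight and Earthshine loading:

# Idealized radiator sizing: surface area needed to reject waste heat purely by
# radiation to deep space. Assumed inputs; not Blue Origin figures.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_kw, panel_temp_k=300.0, emissivity=0.9):
    """One-sided radiating area needed to reject heat_kw, ignoring the cold-sink term."""
    flux_w_per_m2 = emissivity * SIGMA * panel_temp_k**4    # ~413 W/m^2 at 300 K
    return heat_kw * 1000.0 / flux_w_per_m2

print(round(radiator_area_m2(150)))   # ~363 m^2 to shed 150 kW

Double-sided panels roughly halve that area, and solar loading pushes it back up. The point is that the radiators are major structures in their own right, part of the engineering bill the per-rack savings have to cover.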

And the launch expense is dropping fast.

SpaceX’s Starship promises to bring per-kilogram costs to orbit below $100 — potentially far below. Blue Origin’s New Glenn, which completed its first orbital flight earlier this year, targets a similar cost curve with its reusable first stage. At those prices, the capital expenditure of lofting server racks becomes comparable to the cost of building a new hyperscale facility in Virginia or Oregon, where land, power, and water are increasingly contested resources.

Project Sunrise didn’t materialize in a vacuum — no pun intended. The strain on terrestrial data center infrastructure has become acute. According to the International Energy Agency, data centers consumed roughly 460 terawatt-hours of electricity globally in 2024, a figure projected to more than double by 2030 as generative AI workloads explode. In Northern Virginia’s “Data Center Alley,” Dominion Energy has warned that new facilities face multi-year waits for grid connections. Local governments in multiple states have imposed moratoriums on new construction, citing water usage and noise complaints from industrial cooling systems.

The political headwinds are real. Communities that once welcomed the tax revenue from server farms are pushing back. Environmental groups have targeted the water consumption of evaporative cooling towers — some large facilities consume millions of gallons per day, rivaling small cities. Moving even a fraction of that compute off-planet would relieve pressure on terrestrial grids and aquifers alike.

But there’s a harder question: latency.

Light travels fast, but a data center in low Earth orbit still sits roughly 550 kilometers overhead at the altitudes most often discussed for this kind of infrastructure. A round trip to an orbital data center and back adds roughly 4 to 8 milliseconds of latency, depending on altitude and ground station placement. For real-time applications like high-frequency trading or multiplayer gaming, that’s disqualifying. For large-scale AI model training, batch processing, scientific simulation, and data analytics — workloads that are latency-tolerant but compute-hungry — it’s a non-issue. These are precisely the workloads consuming the most power and cooling capacity on the ground today.
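That latency figure is easy to sanity-check from the geometry alone. The sketch below uses nothing but the speed of light plus an assumed one-millisecond processing allowance; real paths add ground-network hops and queuing on top:

# Round-trip time to a low-Earth-orbit node at light speed, plus an assumed
# fixed processing budget. The slant ranges and the 1 ms figure are assumptions.
C_KM_PER_MS = 299_792.458 / 1000.0   # speed of light, ~300 km per millisecond

def round_trip_ms(slant_range_km, processing_ms=1.0):
    return 2 * slant_range_km / C_KM_PER_MS + processing_ms

print(round_trip_ms(550))    # ~4.7 ms, satellite directly overhead
print(round_trip_ms(1200))   # ~9.0 ms, satellite low on the horizon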

Blue Origin’s internal documents, portions of which GeekWire reviewed, reportedly describe a phased approach. Phase one involves deploying small test modules aboard New Glenn’s upper stage to validate thermal management, radiation hardening, and autonomous operations in orbit. Phase two scales to full rack-density modules with inter-satellite laser links for high-bandwidth connectivity. Phase three — the ambitious endgame — envisions constellations of orbital data centers serving as overflow capacity for AWS during peak demand periods.

The integration with AWS is the strategic linchpin. No other space company has a captive hyperscale cloud customer. SpaceX has Starlink, which is a connectivity play, not a compute play. Bezos owns the rocket company outright and remains executive chairman and the largest individual shareholder of the cloud company’s parent. The vertical integration mirrors what he did with Amazon’s logistics network — building the trucks, the warehouses, and the delivery routes, then opening them to third parties once the economics worked.

Some industry veterans are skeptical. “You’re talking about putting sensitive electronics in one of the harshest radiation environments imaginable, with no ability to send a technician when something breaks,” said one former AWS infrastructure executive who spoke on condition of anonymity. “The failure modes are completely different from anything we deal with on the ground.” Radiation-induced bit flips, micrometeorite impacts, thermal cycling as modules pass in and out of Earth’s shadow every 90 minutes — these are engineering problems with solutions, but expensive ones.

Others see the logic clearly. Satellite hardware has operated reliably in orbit for decades. Modern rad-hardened processors, while slower than their commercial counterparts, have narrowed the performance gap considerably. And the new generation of AI accelerators from Nvidia, AMD, and custom silicon shops are being designed with error-correcting architectures that could tolerate the orbital radiation environment with minimal performance penalty.

The power question is solvable too. Solar panels in orbit receive unfiltered sunlight with no atmospheric losses and, at the right orbital inclination, near-continuous illumination. A single orbital data center module with deployable solar arrays could generate more consistent power than a ground facility dependent on grid reliability and backup diesel generators. Blue Origin’s Project Sunrise documents reportedly specify a modular solar array design capable of delivering 150 kilowatts per module — enough to power several high-density AI training racks.

There’s precedent for this kind of thinking, even if no one has executed at scale. Microsoft conducted Project Natick, submerging a sealed data center pod on the seafloor off Scotland’s Orkney Islands. The experiment ran for two years and demonstrated that the sealed, cooled environment produced a failure rate one-eighth that of a conventional land-based data center. The lesson wasn’t that underwater data centers were the future — it was that removing human access and environmental variability dramatically improved reliability. Orbital modules could replicate that finding.

The financial implications for Amazon are significant. AWS generated $107 billion in revenue in 2024 and is on pace to exceed $130 billion in 2025, according to Amazon’s earnings reports. But capital expenditure on data center construction has surged past $75 billion annually, and the company has signaled that figure will keep climbing. If orbital data centers could handle even 5% of AWS’s total compute workload, the savings on land acquisition, utility contracts, water rights, and cooling infrastructure could amount to billions annually at steady state.

Wall Street hasn’t priced this in. Analyst models for Amazon still treat data center capex as a purely terrestrial line item. The disclosure of Project Sunrise, if confirmed at scale, would force a fundamental reassessment of AWS’s long-term cost structure and competitive moat. Google and Microsoft, the other two hyperscale cloud giants, have no comparable space launch capability. Google has invested in satellite imaging through various ventures, and Microsoft has its Azure Orbital ground station service, but neither company can put hardware into orbit on its own rockets.

That asymmetry matters.

Blue Origin has also been building out its in-space manufacturing capabilities. The company’s Orbital Reef commercial space station program, developed in partnership with Sierra Space and Boeing, is designed for permanent human presence in low Earth orbit. An orbital data center doesn’t require human presence — but having a crewed platform nearby for occasional servicing missions could extend hardware lifespans and enable upgrades that pure robotic operations cannot.

The timing of the Project Sunrise disclosure is notable. It comes as Blue Origin prepares for its second New Glenn launch and as the company accelerates hiring for its Advanced Programs division. Job postings reviewed on LinkedIn and Blue Origin’s careers page in recent weeks reference “orbital infrastructure,” “space-based computing architectures,” and “thermal management for sustained orbital operations” — language consistent with the reported scope of Project Sunrise.

Meanwhile, the broader space industry is converging on the idea that orbit isn’t just for communications and Earth observation anymore. Axiom Space is building commercial modules for the International Space Station. Vast Space is developing its Haven-1 commercial station. And several startups, including Lumen Orbit and OrbitsEdge, have explicitly pitched orbital data centers as their core business model, though none have the launch capacity or cloud customer base that Blue Origin and AWS provide.

GeekWire noted that Lumen Orbit, a Y Combinator-backed startup, has been developing small orbital computing payloads, but the company’s total planned capacity would amount to a rounding error compared to what Blue Origin could deploy on a single New Glenn flight. The difference in scale is orders of magnitude.

There are regulatory hurdles. The Federal Communications Commission governs satellite communications, and orbital data centers would need spectrum allocation for their ground links. The Federal Aviation Administration licenses launches. Space debris mitigation requirements from both the FCC and international bodies would apply. And export control regulations — particularly ITAR restrictions on space hardware — could complicate the use of orbital compute resources by international AWS customers. None of these are insurmountable, but each adds time and cost.

The environmental argument cuts both ways. Rocket launches produce carbon emissions and deposit particulate matter in the upper atmosphere. A single New Glenn launch burns hundreds of metric tons of liquefied natural gas (methane), along with several times that mass of liquid oxygen. Scale that to dozens or hundreds of launches per year for data center deployment and replenishment, and the atmospheric impact becomes a legitimate concern. Blue Origin would need to demonstrate that the total lifecycle carbon footprint of orbital compute — including launch emissions — is lower than the equivalent terrestrial alternative. Given the coal and natural gas still powering many electrical grids, that case may be easier to make than it first appears, but it requires rigorous accounting.

So where does this leave the competitive picture? If Project Sunrise delivers on even a fraction of its reported ambitions, it gives AWS a structural cost advantage that neither Google Cloud nor Microsoft Azure can replicate without their own launch vehicles — something neither company is building. SpaceX could theoretically partner with one of them, but Elon Musk’s complicated relationship with the rest of Big Tech makes such a partnership fraught. And SpaceX’s own computing ambitions appear focused on Starlink’s edge computing capabilities, not hyperscale cloud workloads.

The irony is thick. For years, Bezos was criticized for pouring billions into Blue Origin with no clear commercial rationale beyond space tourism and government contracts. The company burned through an estimated $15 billion before generating meaningful revenue. But if orbital data centers become viable, Blue Origin transforms from a billionaire’s hobby into the most strategically important infrastructure company in the world — the entity that builds and operates the physical layer beneath the fastest-growing segment of the global economy.

That’s not a guarantee. It’s a bet. But Jeff Bezos has made bets like this before. He built AWS when Wall Street analysts questioned why a bookseller needed server farms. He built a logistics network that rivals FedEx and UPS when critics called it wasteful. In both cases, the initial skepticism gave way to recognition that Bezos was building infrastructure for a future that hadn’t arrived yet.

Project Sunrise fits that pattern exactly. The future it’s building for — one where Earth’s surface can’t support the energy and cooling demands of exponentially growing AI workloads — is arriving faster than most people expected. And Blue Origin, the company everyone counted out, may have been preparing for it all along.



from WebProNews https://ift.tt/wd2FmH0

The Universe Doesn’t Care About Our Spreadsheets: Why the Search for Extraterrestrial Intelligence Is Stuck in a Statistical Fog

For more than six decades, humanity has been listening for a signal from the cosmos. Radio telescopes have scanned the skies. Optical surveys have searched for laser pulses. Governments and private foundations have spent hundreds of millions of dollars on the effort. And still — nothing. Not a whisper, not a ping, not a single confirmed detection of intelligence beyond Earth.

The silence is maddening. It’s also, according to a growing number of scientists and statisticians, deeply misunderstood.

A new book by Canadian data scientist and author Kelly Chicken, The Pale Blue Data Point, argues that the way we’ve been framing the search for extraterrestrial intelligence — commonly known as SETI — is riddled with statistical fallacies, anthropocentric assumptions, and a kind of cosmic wishful thinking that would embarrass a first-year graduate student in any other field. As reviewed in Literary Review of Canada, the book doesn’t merely question whether aliens exist. It questions whether we even know how to ask the question properly.

That distinction matters more than it might seem.

The Drake Equation, formulated by astronomer Frank Drake in 1961, has long served as the intellectual scaffolding for SETI. It attempts to estimate the number of communicative civilizations in the Milky Way by multiplying together a series of factors: the rate of star formation, the fraction of stars with planets, the fraction of those planets that develop life, the fraction of life that becomes intelligent, and so on. The equation looks scientific. It has variables and multiplication signs. But as Chicken and others have pointed out, most of its terms are essentially guesses — some of them spanning ranges of ten orders of magnitude or more.

“You can get any answer you want from the Drake Equation,” is a complaint that has echoed through astronomy departments for years. Chicken’s contribution, according to the Literary Review of Canada, is to apply formal Bayesian reasoning to the problem and show just how little our priors constrain the outcome. When you honestly account for our uncertainty about each parameter, the posterior distribution for the number of civilizations in the galaxy is staggeringly wide — consistent with zero civilizations and consistent with millions. The equation, in other words, tells us almost nothing.
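The shape of that argument is easy to reproduce with a toy simulation. To be clear, this is not the book’s model; it is an illustration of how order-of-magnitude uncertainty in each factor compounds, and every prior range below is an assumption chosen for the example:

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def loguniform(lo, hi):
    """Sample uniformly in log10 space between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), N)

R_star = loguniform(1, 10)      # star formation rate per year
f_p    = loguniform(0.1, 1)     # fraction of stars with planets
n_e    = loguniform(0.01, 5)    # habitable worlds per planetary system
f_l    = loguniform(1e-6, 1)    # fraction on which life emerges
f_i    = loguniform(1e-6, 1)    # fraction that evolve intelligence
f_c    = loguniform(1e-3, 1)    # fraction that become detectable
L      = loguniform(1e2, 1e8)   # detectable lifetime in years

civs = R_star * f_p * n_e * f_l * f_i * f_c * L
print(np.percentile(civs, [5, 50, 95]))              # spans many orders of magnitude
print(f"fraction of draws with N < 1: {(civs < 1).mean():.2f}")

Run something like this and the resulting spread comfortably contains both an empty galaxy and a crowded one, which is roughly the point.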

This isn’t a new critique in the strictest sense. Philosophers and statisticians have raised versions of it before. But the book arrives at a moment when SETI is experiencing something of a renaissance in funding and public attention, and when the temptation to overinterpret thin data has arguably never been greater.

Consider the recent excitement around anomalous signals and unidentified aerial phenomena. NASA’s 2023 independent study on UAPs acknowledged that no evidence linked any sighting to extraterrestrial origin, yet the mere establishment of the study generated headlines suggesting the agency was “taking aliens seriously.” The Pentagon’s All-domain Anomaly Resolution Office has similarly struggled to tamp down public expectations even as it catalogues hundreds of unexplained cases, most of which are likely mundane. The cultural appetite for contact is enormous. The evidentiary basis remains vanishingly thin.

And then there’s the exoplanet revolution — a word I use advisedly, since the discovery of thousands of worlds orbiting other stars genuinely has transformed our understanding of the galaxy. NASA’s Kepler and TESS missions have confirmed that planets are common, that rocky worlds in habitable zones exist in abundance, and that the conditions for life as we understand it are not unique to our solar system. But as Chicken apparently argues, the leap from “conditions permitting life” to “life exists” to “intelligent life exists” to “intelligent life that communicates in ways we can detect” involves a chain of inferences, each link weaker than the last.

The sample size problem is brutal. We have exactly one example of life arising from chemistry: Earth. One example of intelligence: Homo sapiens. One example of a technological civilization capable of broadcasting into space: us. Drawing statistical conclusions from a sample of one is, to put it gently, fraught.

Chicken’s approach, as described by the Literary Review of Canada, is to treat this honestly. Rather than plugging optimistic values into Drake’s framework and declaring that the galaxy should be teeming with civilizations — then puzzling over Fermi’s famous paradox about why we haven’t heard from them — she suggests we confront the possibility that the emergence of intelligence is so improbable that we may be alone. Not definitely alone. But plausibly alone. The data, such as it is, doesn’t rule it out.

This is an uncomfortable position for many in the SETI community, and the discomfort is revealing.

Much of modern SETI advocacy rests on a rhetorical structure that treats the existence of extraterrestrial intelligence as a near-certainty, then frames the search as merely a matter of building better instruments and listening longer. The Breakthrough Listen initiative, funded by Yuri Milner with $100 million, describes itself as the most comprehensive search for alien communications ever undertaken. Its scientists use the Green Bank Telescope in West Virginia, the Parkes Observatory in Australia, and the MeerKAT array in South Africa to survey millions of stars across a wide range of frequencies. The technical capabilities are genuinely impressive. But technical capability and the probability of success are different things, and conflating them is precisely the kind of error Chicken’s book seems designed to expose.

So where does this leave us?

One response is to say that the search is still worth conducting even if the odds are long, because the payoff of a confirmed detection would be so immense that even a tiny probability justifies the investment. This is a defensible position. It’s essentially a Pascal’s Wager for the scientific age, and it has the advantage of being transparent about the uncertainty involved. But it requires honesty about that uncertainty — an honesty that is sometimes lacking in popular science communication, where the discovery of a potentially habitable exoplanet is routinely framed as a step toward finding alien life, as though the intervening steps were mere formalities.

Another response, increasingly popular among astrobiologists, is to shift the focus from intelligence to life itself — specifically, to biosignatures in the atmospheres of exoplanets that might indicate biological processes. The James Webb Space Telescope has already begun characterizing the atmospheres of some nearby worlds, and future missions like the proposed Habitable Worlds Observatory aim to do so with far greater precision. Detecting methane and oxygen in the atmosphere of a rocky planet in the habitable zone wouldn’t prove life exists there, but it would be suggestive. And unlike the search for radio signals from intelligent civilizations, the search for biosignatures doesn’t require the target to be actively trying to communicate.

This is a more modest goal. Also a more achievable one.

But even here, the statistical challenges are formidable. As a 2024 paper in Nature Astronomy noted, distinguishing biological from abiotic sources of atmospheric gases is extraordinarily difficult with current technology. False positives are a genuine concern. And the temptation to announce a detection before the evidence is airtight — driven by funding pressures, media attention, and the sheer human desire to not be alone in the universe — is real.

Chicken’s book, from what the review describes, doesn’t argue that SETI should be abandoned. It argues that SETI should be honest. Honest about the assumptions baked into its models. Honest about the width of the error bars. Honest about the difference between “we haven’t found anything yet” and “we’ve barely begun to look” — a distinction that SETI advocates often invoke but that itself requires careful statistical treatment. How much of the relevant search space have we actually covered? The answer depends enormously on what you assume about the nature of the signal you’re looking for, and those assumptions are themselves uncertain.

There’s a deeper philosophical issue at play, too. The Copernican principle — the idea that Earth and humanity occupy no special position in the cosmos — has been one of the most productive assumptions in the history of science. It led us from geocentrism to heliocentrism to the recognition that our sun is one of hundreds of billions of stars in one of hundreds of billions of galaxies. But when applied to the question of intelligence, the Copernican principle can become a kind of dogma. If we’re not special, the reasoning goes, then intelligence must have arisen elsewhere. But this conflates the mediocrity of our physical location with the mediocrity of the biological and cognitive processes that produced us. Those are separate claims, and the second one doesn’t follow from the first.

The philosopher Nick Bostrom has explored a related idea, the “Great Filter,” a concept originally articulated by the economist Robin Hanson: the hypothesis that somewhere between dead matter and spacefaring civilizations, there’s a step so improbable that almost no lineage makes it through. If the filter is behind us (say, the origin of life itself, or the evolution of eukaryotic cells), then we’re rare but safe. If it’s ahead of us (say, civilizations tend to destroy themselves before becoming interstellar), then the silence of the cosmos is a warning. Chicken’s statistical framework doesn’t resolve this question, but it does clarify just how open it remains.

The timing of the book is interesting for another reason. Public discourse around extraterrestrial intelligence has become increasingly entangled with UFO culture, conspiracy theories, and congressional hearings featuring whistleblower testimony of dubious provenance. The 2023 testimony of David Grusch, a former intelligence official who claimed the U.S. government possessed non-human craft, generated enormous media coverage but no verifiable evidence. The conflation of serious scientific inquiry with unsubstantiated claims about government cover-ups has made it harder, not easier, to have a rigorous conversation about the probability of extraterrestrial intelligence.

Chicken’s data-driven approach is, in that context, a welcome corrective. Not because it settles the debate, but because it insists on intellectual discipline in a field that has sometimes been too willing to let enthusiasm substitute for evidence.

The universe may well be full of intelligent beings. Or it may be empty of them, save for one species on one small planet orbiting an unremarkable star in the outer arm of a spiral galaxy. The honest answer is that we don’t know. And the honest corollary is that our uncertainty is not a temporary condition to be resolved by the next telescope or the next survey. It is, for now, the fundamental reality of the question — a reality that no equation, however elegant, can wish away.

What Chicken seems to be saying, ultimately, is that the cosmos doesn’t owe us company. And our spreadsheets, however sophisticated, can’t manufacture it.



from WebProNews https://ift.tt/hz6UO8D

Saturday, 21 March 2026

The Machines Are Browsing Now: How Bot Traffic Is About to Drown Out Humanity Online

Sometime around 2027, a threshold will be crossed that most people won’t even notice. The majority of traffic flowing across the internet — the requests, the page loads, the data calls — will no longer come from human beings. It will come from bots.

That’s the finding from a recent analysis by Search Engine Land, drawing on data from Imperva’s annual Bad Bot Report and broader industry trends. According to the report, automated traffic already accounts for roughly half of all internet activity. The trajectory suggests that by 2027, bots will definitively overtake humans as the primary consumers of web content. Not by a slim margin. By a widening gap that shows no sign of reversing.

This isn’t science fiction. It’s an infrastructure problem, a business problem, and an identity crisis for the open web — all rolled into one.

The numbers have been moving in this direction for years. Imperva’s 2024 Bad Bot Report found that bad bot traffic alone hit 32% of all internet traffic in 2023, the highest level the firm had recorded since it began tracking in 2013. Add in “good” bots — search engine crawlers, monitoring tools, AI training scrapers — and automated traffic easily eclipses what humans generate. The split was roughly 49.6% bot, 50.4% human in 2023. The gap has been narrowing year over year, and at current rates, the crossover point arrives within the next two years.

What’s driving this acceleration? Two forces, primarily. The first is the explosion of generative AI. Large language models from OpenAI, Google, Anthropic, Meta, and dozens of smaller players require enormous volumes of web data to train and retrain. Their crawlers are aggressive, persistent, and increasingly sophisticated. They don’t just visit a page once. They return repeatedly, scraping content at scale to feed models that are themselves generating more automated queries downstream. It’s a compounding loop.

The second force is older but no less potent: commercial bot activity, both legitimate and malicious. Price-scraping bots hammer e-commerce sites. Credential-stuffing bots probe login pages. Content-scraping bots replicate entire publications. Ad fraud bots generate fake impressions. Inventory-hoarding bots snap up concert tickets and limited-edition sneakers before any human finger can click “buy.” These operations have grown more sophisticated, more distributed, and harder to detect.

And the economics favor the bots. Running a botnet or a scraping operation is cheap. Defending against one is expensive.

For publishers, the implications are severe and immediate. Website analytics — the foundation of digital advertising — become unreliable when a significant portion of traffic is non-human. Advertisers paying on a cost-per-impression or cost-per-click basis have long worried about bot-inflated metrics. As automated traffic grows, the signal-to-noise ratio deteriorates further. The digital advertising industry already loses an estimated $84 billion annually to ad fraud, according to Juniper Research. That number is going up, not down.

Search engine optimization, the discipline that has governed web visibility for two decades, faces its own reckoning. Google’s search results are increasingly populated by AI-generated summaries that answer queries without sending users to the source website. Meanwhile, AI companies are crawling those same source websites to build the models that power those summaries. Publishers find themselves in a perverse position: their content trains the systems that reduce their traffic. Search Engine Land notes that this dynamic is already reshaping how SEO professionals think about content strategy, with some publishers experimenting with blocking AI crawlers entirely through robots.txt directives — a blunt instrument with uncertain consequences.
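In practice, that blunt instrument is just a text file at the site root. The excerpt below uses crawler tokens the major AI operators have publicly documented; the list changes over time, and compliance is voluntary, which is exactly why publishers call it blunt:

# robots.txt: ask named AI crawlers to stay out, leave everything else as-is.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Disallow: /

# Other crawlers, including ordinary search indexers, remain subject to the default rules.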

Cloudflare, which sits in front of a massive share of the internet’s traffic, has been sounding alarms of its own. The company reported earlier this year that AI bot traffic to its customers has surged, with some sites seeing AI crawlers account for a disproportionate share of their bandwidth consumption. Cloudflare introduced tools in 2024 specifically designed to let website operators identify and block AI scrapers, a tacit acknowledgment that the existing bot-management toolset wasn’t keeping pace.

The problem extends beyond websites. APIs — the programmatic interfaces that power mobile apps, IoT devices, and cloud services — are even more heavily targeted by automated traffic. Imperva’s data shows that API-directed bot attacks grew 44% year over year. APIs are attractive targets because they’re designed for machine-to-machine communication in the first place, making it harder to distinguish legitimate automated requests from malicious ones.

So what happens when most of the internet’s activity is machine-generated? Several things, none of them comfortable for the incumbents.

First, the economics of web hosting change. Bandwidth costs money. Server capacity costs money. When bots consume more resources than humans, website operators are effectively subsidizing automated access to their content. Some are pushing back. The New York Times sued OpenAI. Reddit struck a licensing deal with Google. Stack Overflow gated its data behind paid API access. These are early skirmishes in what will become a prolonged fight over who pays for the content that trains AI systems.

Second, authentication and verification become paramount. Proving that a visitor is human — through CAPTCHAs, behavioral analysis, device fingerprinting, or cryptographic attestation — shifts from a minor friction point to a fundamental requirement. But every verification step adds latency and degrades user experience. The tension between security and usability will intensify.

Third, the very concept of “web traffic” as a meaningful metric starts to erode. If most visits to a webpage are automated, then traffic numbers tell you less about audience size and engagement and more about how attractive your data is to machines. Media companies, advertisers, and investors will need new frameworks for measuring digital value. Page views won’t cut it anymore. They arguably haven’t for a while.

There’s a deeper philosophical dimension here too. The internet was built for people. Its protocols, its design patterns, its business models — all assume a human on the other end of the connection. A web where machines are the primary users is a fundamentally different thing. It’s less a library and more a warehouse. Less a town square and more a data pipeline.

Some industry observers see opportunity in this shift. Bot management is a growing market, projected to reach $2.1 billion by 2028 according to MarketsandMarkets. Companies like Imperva (now part of Thales), Cloudflare, Akamai, and DataDome are investing heavily in detection and mitigation technologies. Machine learning models trained to identify bot behavior are themselves becoming more sophisticated — an arms race between automated attackers and automated defenders.

But the arms race metaphor only goes so far. Not all bots are adversaries. Search engines need to crawl the web to index it. Price comparison services need to aggregate data to function. Accessibility tools use automated processes to make content available to people with disabilities. The challenge isn’t eliminating bots. It’s distinguishing between the ones that add value and the ones that extract it.

That distinction is getting harder to make. AI crawlers from well-known companies operate in a gray zone — they’re not malicious in the traditional sense, but they consume resources and repurpose content without direct compensation to the creator. The legal and ethical frameworks for this kind of activity are still being built, mostly through litigation and ad hoc licensing agreements rather than coherent policy.

The regulatory picture is fragmented. The EU’s AI Act addresses some aspects of data provenance and transparency but doesn’t directly regulate web crawling. In the United States, the legal status of AI training on copyrighted web content remains unresolved, with multiple cases working through the courts. Japan has taken a permissive stance, while Australia is considering mandatory licensing schemes. No consensus exists.

Meanwhile, the bots keep coming. Faster, smarter, more numerous.

For businesses operating online — which at this point means virtually all businesses — the practical takeaways are straightforward if unglamorous. Invest in bot detection and traffic analysis. Audit your server logs. Understand what percentage of your traffic is human. Rethink metrics that assume human visitors. Review your robots.txt and terms of service. Consider whether your content is being used to train models without your consent, and whether you have recourse.
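None of that requires buying a product to get a first read. A crude pass over existing access logs, like the sketch below, gives a floor on bot share; the log path and signature list are assumptions for illustration, and sophisticated bots spoof browser user-agents, so the true figure is higher than whatever this reports:

import re
from collections import Counter

# Minimal user-agent heuristic: generic markers plus a few well-known crawler tokens.
BOT_UA = re.compile(r"bot|spider|crawl|scrapy", re.I)

def bot_share(log_path):
    """Fraction of requests whose user-agent matches the list (combined log format)."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            parts = line.rsplit('"', 2)          # user-agent is the last quoted field
            ua = parts[-2] if len(parts) == 3 else ""
            counts["bot" if BOT_UA.search(ua) else "other"] += 1
    return counts["bot"] / (sum(counts.values()) or 1)

print(bot_share("/var/log/nginx/access.log"))    # hypothetical path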

For the technology industry broadly, the 2027 crossover point should serve as a forcing function. The infrastructure of the internet — its protocols, its economic models, its governance structures — was designed for a human-majority web. That era is ending. What replaces it will depend on decisions being made right now, in boardrooms and courtrooms and standards bodies, about who gets to access what, at what cost, and under what rules.

The machines aren’t coming. They’re already here. They’ve been here for years. The difference is that soon, they’ll outnumber us. And the web will have to reckon with what that means — for commerce, for content, for the basic question of who the internet is actually for.



from WebProNews https://ift.tt/pzNeL2X