Wednesday, 13 May 2026

Microsoft Fires Back: Why Windows 11’s CPU Boost Isn’t Cheating

Scott Hanselman didn’t hold back. The Microsoft vice president took to X last week to confront critics head-on. His target? A new Windows 11 feature that briefly maxes out CPU clocks to make menus snap open and apps launch faster.

Call it the Low Latency Profile. It ramps processor frequency for one to three seconds during interactive tasks. Start menu. Context menus. App launches. The result feels immediate. Tests show up to 70% faster Start menu responses and 40% quicker launches for built-in apps like Edge and Outlook. (Windows Central, May 7, 2026)

But not everyone cheered. Online voices labeled it a band-aid. A lazy shortcut. Proof that Windows had grown too bloated to run efficiently without brute force. Hanselman pushed back. Hard.

“Apple does this and y’all love it.” He followed with a sharper point. “All modern operating systems do this, including macOS and Linux. It’s not ‘cheating’; this is how modern systems make apps feel fast: they temporarily boost the CPU speed and prioritize interactive tasks to reduce latency.” (Pureinfotech, May 11, 2026)

The exchange revealed more than one executive’s frustration. It exposed a deeper tension in how users judge operating systems today. Speed. Responsiveness. That instant feel when you click. Benchmarks matter less than perception. And Windows 11 has struggled with that perception for years.

Modern interfaces carry weight. The Start menu no longer simply unhides a static list. It pulls cloud recommendations, web results, live tiles. File Explorer handles thumbnails, previews, network shares. Background services multiply. Web technologies replace lean native code. Each addition exacts a cost in latency. Milliseconds add up.

So Microsoft turned to a proven tactic. Predict high-priority user actions. Boost frequency and scheduler priority. Complete the task quickly. Drop back to idle. Smartphones do it constantly. Tap the screen. Cores wake. Clocks spike. Frame renders. Power falls away milliseconds later. Users never notice the dance. They just feel the device responds.

macOS takes the same approach. Aggressive clock boosts on clicks and animations. Quality of Service classes help the scheduler anticipate needs. Linux kernels rely on cpufreq governors such as schedutil to wake performance cores the moment UI interaction begins. The techniques differ in detail. The goal stays identical. Reduce perceived lag.
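On Linux, the active frequency policy is visible through the kernel’s cpufreq sysfs interface; on a kernel using schedutil, that is the value reported. A minimal sketch, assuming a Linux system (the path is simply absent elsewhere, which the code reports rather than failing):

```python
from pathlib import Path

def current_governor(cpu: int = 0) -> str:
    """Return the active cpufreq governor for a CPU, or 'unavailable'.

    Reads the standard Linux cpufreq sysfs path; on non-Linux systems
    or kernels without cpufreq support, the path does not exist.
    """
    p = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor")
    return p.read_text().strip() if p.exists() else "unavailable"

print(f"cpu0 governor: {current_governor()}")
```

On a machine governed by schedutil this prints `cpu0 governor: schedutil`; other common values include `performance` and `powersave`.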

Hanselman drove that message home. He pointed critics to macOS’s powermetrics tool. Check it yourself, he suggested. Watch the bursts. He also corrected misconceptions about Linux. “Linux achieves its responsiveness through the same methods, using the kernel scheduler, CPU frequency governors, and modern CPU boost technologies like schedutil.” The negativity, he added, sometimes came from “computer science enthusiasts without experience in computer science making assumptions based on their intuition.”

Yet the criticism landed because it touched a nerve. Windows 11 launched with hardware requirements that frustrated many. Early builds felt heavier than Windows 10 in daily use. Later updates introduced AI features that some saw as distractions from core reliability. Trust eroded. So even a sensible engineering choice met skepticism. Why does my PC need to redline the CPU just to open the Start menu?

The answer sits in the evolution of software. Older Windows versions did less. Windows 95’s Start menu displayed a pre-rendered panel. No scaling gymnastics. No search indexing in the background. No synchronization with online accounts. That simplicity delivered raw speed on modest hardware. Today’s expectations demand more. Users want rich previews, personalized suggestions, seamless integration across devices. Delivering that without lag requires clever resource management.

This Low Latency Profile forms one piece of a larger initiative. Microsoft calls it Windows K2 internally. The effort combines short CPU bursts with deeper code optimization. Teams strip legacy components. They migrate more shell elements to WinUI 3 for lighter rendering. Scheduler tweaks improve how the OS handles processor power states and C-state transitions. The company has already begun shipping some of these changes to Insiders and retail users. (Windows Latest, May 11, 2026)

Early tests impress. On budget hardware and virtual machines, the feature turns sluggish experiences into snappy ones. ARM-based systems like those with Snapdragon X Elite benefit especially. Their rapid power-state transitions pair perfectly with brief boosts. Battery and thermal impact stays low because bursts last seconds, not minutes.

But Hanselman stressed balance. “There are actual things wrong and smart people are working to fix them.” The boost doesn’t replace optimization. It complements it. Microsoft pursues both. Legacy code cleanup continues. File Explorer gains attention. The Run dialog moves to native frameworks. Performance work stretches across multiple fronts.

The episode highlights how Microsoft communicates engineering decisions in 2026. Executives engage directly on social platforms. They explain trade-offs in plain language. Transparency carries risk. Critics seize on admissions that the OS needs help. Yet silence would fuel conspiracy theories about hidden tricks.

Users ultimately vote with their experience. If the Start menu opens instantly, if apps feel immediate, if the system stays cool and efficient, complaints fade. The Low Latency Profile aims for exactly that outcome. It doesn’t promise higher benchmark scores in sustained workloads. It targets the moments that shape daily satisfaction. Click. Respond. Done.

Whether the feature ships widely this year remains unclear. Testing continues in Insider builds. Adjustments to duration and triggers could still occur. What won’t change is the underlying principle. Modern operating systems manage power and performance dynamically. They always have. The difference now lies in how aggressively and intelligently they do so.

Microsoft has joined the conversation openly. Hanselman’s defense may not sway every skeptic. It does clarify the playing field. Apple does it. Linux does it. Smartphones perfected it. Windows 11 is catching up in visibility and effectiveness. The real test arrives when millions of users encounter the smoother experience. Then the debate shifts from theory to results.

And results, in the end, determine whether Windows wins back the fans it seeks.



from WebProNews https://ift.tt/UgXlJA1

Amazon Halts High-Speed E-Bike Sales in California as Deadly Crashes Mount

Amazon has drawn a firm line in California. The retail giant will no longer sell electric bikes capable of exceeding the state’s strict speed limits for legal e-bikes. The decision follows months of pressure from Attorney General Rob Bonta and local prosecutors alarmed by a surge in fatal collisions involving young riders.

California draws clear distinctions. Class 1 e-bikes offer pedal assistance up to 20 mph. Class 2 models add throttle but cap at the same speed. Class 3 bikes, which require riders to be at least 16, reach 28 mph with pedal assist. Anything faster or lacking proper pedals crosses into moped or motorcycle territory. That shift demands a license, registration, insurance and often higher age minimums.

The change isn’t abstract. KCRA 3 Investigates flagged multiple Amazon listings advertising speeds over 40 mph. Some models pushed even higher. After the station shared examples with the company, Amazon moved. It now requires third-party sellers to meet state laws, its own policies and speed classifications. Non-compliant products have been pulled. Others face review.

“We are seeing a surge of safety incidents on our sidewalks, parks, and streets,” Bonta said in an April consumer alert titled “Too Fast, Too Furious.” He warned parents and riders directly. “If your or your teen’s electric two-wheeled vehicle goes too fast, it might be a motorcycle or a moped — not an e-bike.”

Orange County District Attorney Todd Spitzer welcomed the step. He noted more than 100 deaths nationwide tied to e-bike and e-motorcycle crashes. Two recent tragedies hit close. Thirteen-year-old Benson Nguyen of Santa Ana died after crashing an e-motorcycle traveling around 35 mph in Garden Grove. In Lake Forest, an 81-year-old veteran named Ed Ashman was struck and killed by a 14-year-old on a similar machine.

Prosecutors have filed charges against parents in related cases. One Yorba Linda father allegedly modified his son’s vehicle to exceed 60 mph. The boy had already gone through impound and safety training. Another parent in Aliso Viejo faces involuntary manslaughter charges after her son crashed fatally despite prior warnings. These incidents underscore a pattern. Young riders treat powerful machines like toys. The results prove otherwise.

Amazon’s announcement landed Friday. It came weeks after Bonta’s alert and direct outreach from investigators. The company told the Orange County Register it demands every product on its platform follow applicable regulations. Compliance checks continue. Yet as of early this week, some borderline models lingered in carts. One YVY bike rated between 30 and 38 mph remained available for California delivery, according to a Gizmodo check.

The episode exposes cracks in the marketplace. Third-party sellers flood platforms with imported machines that blur the line between bicycle and motor vehicle. These so-called hooligan bikes are often heavy, lack adequate brakes for their speed and attract underage users who skip helmets, training or licenses. One industry observer called the Amazon move progress. Such bikes, the person said, simply should not be on public roads when operated by 14-year-olds unfamiliar with traffic rules.

But the crackdown raises questions too. Compliant e-bike makers have complained for years that rogue models damage the category’s reputation and endanger everyone. Safety advocates point to rising clashes. E-bike riders mix with pedestrians on paths, frustrate transit users and spark debates in cities trying to cut car use. Hikers and cyclists have tangled with faster machines on trails.

State law requires permanent labels on e-bikes. Those stickers must list the class, motor wattage and top assisted speed. Many imported products ignore the rule. Sellers market them as e-bikes anyway. Buyers in California who click purchase on a 40-mph model could unknowingly acquire something that demands motorcycle endorsement.

And enforcement lags. Local police struggle to distinguish compliant bikes from illegal ones on sight. Bonta’s office partnered with district attorneys across the Bay Area and beyond to issue the alert. The goal was education first. Amazon’s response suggests the message registered.

Other retailers have taken notice. Walmart already blocks non-compliant models for California addresses. Smaller direct-to-consumer brands may feel less immediate pressure, yet the signal is clear. Major platforms won’t risk liability or regulatory heat.

The broader market keeps growing. E-bikes promised affordable, green mobility. Many models deliver exactly that. They help commuters skip traffic, let older adults stay active and reduce short car trips. Yet the fastest segment undercuts those gains. Speed sells. So does minimal regulation. Until crashes mount.

California isn’t alone. New Jersey enacted tough rules effective this July. Riders of machines over 20 mph need a driver’s license, registration and insurance. The law drew fire from cycling groups and environmental organizations worried about climate targets. Similar tensions bubble in other states.

Amazon’s pivot won’t eliminate dangerous machines. Determined buyers can still order from overseas sites or local shops that skirt rules. Private property use remains legal for non-street machines. But removing easy one-click access from the nation’s largest online marketplace changes the equation. It forces conversation about what counts as a bicycle in an era of 5,000-watt motors.

Prosecutors and regulators insist the law has been settled for years. The three-class system dates back well before the current boom. Manufacturers and sellers simply ignored it when convenient. Bonta’s alert and the KCRA probe applied pressure where it counts. At the point of sale.

Shoppers face new realities. Those seeking legitimate Class 3 transport can still find options on Amazon. Models capped at 28 mph with proper labeling should remain. Thrill seekers chasing 40 mph or more must look elsewhere. And they should understand the legal consequences. A traffic stop on a misclassified machine can bring fines, impoundment and insurance complications.

The episode also highlights platform responsibility. Amazon hosts millions of third-party listings. Policing every speed claim proved difficult until spotlighted by journalists and attorneys general. Now the company investigates similar products and coordinates with law enforcement. That shift may ripple beyond California.

Industry watchers expect tighter scrutiny nationwide. Major retailers dislike headlines about deadly crashes tied to their sites. Insurance carriers grow wary. Cities debate trail access and speed limits on shared paths. The humble e-bike has become a policy battleground.

Amazon’s decision won’t end the debate. But it marks a turning point. Speed without accountability carries costs. California officials decided those costs had grown too high. Retailers are following suit. Riders, parents and sellers now navigate the consequences. Some faster than others.



from WebProNews https://ift.tt/PJNSwhu

Tuesday, 12 May 2026

Debian Draws A Line: Reproducible Builds Become Mandatory For Its Next Release

Debian’s release team delivered a quiet bombshell this weekend. Halfway through the development cycle for the next major version, code-named Forky, officials declared that the distribution must ship only reproducible packages. The change took effect immediately. Migration tools now block any new package that fails to build identically bit for bit. Packages already in testing that slip backward face the same barrier.

The announcement came directly from Paul Gevers, writing on behalf of the release team. “Aided by the efforts of the Reproducible Builds project, we’ve decided it’s time to say that Debian must ship reproducible packages,” he stated in the “bits from the release team” posted to the debian-devel-announce mailing list on May 10, 2026. The message described the shift as “a small step in code, but a giant leap in commitment.”

This matters. For years the project has chased reproducibility without forcing it. Progress came steadily. Independent verifiers could rebuild many packages and match the official binaries exactly. Yet gaps remained. Timestamps crept in. Build paths differed. Random seeds introduced variation. The result? No one could say with absolute certainty that the binary downloaded from Debian’s servers came from the published source without trusting the build infrastructure.

That trust model no longer suffices. Supply-chain attacks have sharpened focus across the industry. The 2024 xz-utils incident, in which a sophisticated backdoor nearly slipped into major distributions, served as a wake-up call. Reproducible builds offer a practical defense. Anyone can rebuild the package. Compare the output. Match the hash. Confirm no alterations occurred between source and binary. Simple in theory. Demanding in practice.
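The verification loop described above reduces to comparing cryptographic digests of independently built artifacts. A toy sketch, where the byte strings are placeholders standing in for real package contents:

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """SHA-256 hex digest of a build artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the bytes of two independent builds of the same source.
# Reproducible means: identical bytes, therefore identical digests.
official_build = b"\x7fELF" + b"same deterministic payload"
independent_rebuild = b"\x7fELF" + b"same deterministic payload"

if artifact_digest(official_build) == artifact_digest(independent_rebuild):
    print("digests match: build is reproducible")
else:
    print("digests differ: investigate the delta")
```

The comparison only proves anything when the rebuild happens on infrastructure the verifier controls; that independence is the whole point.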

Debian has come far. Phoronix reported on the policy shift within hours of the mailing list post. Michael Larabel noted that Debian 14.0, expected around 2027, will mark the first major release under this mandate. Earlier coverage from the same outlet showed the archive reaching 94 percent reproducibility for Debian 9 on x86_64 back in 2017. Rates have climbed since. The project’s testing infrastructure at tests.reproducible-builds.org tracks progress across architectures and suites.

Monthly reports from the Reproducible Builds project document the grind. In April 2026 the team reviewed dozens of packages, updated infrastructure, and refined tools. Vagrant Cascadian handled non-maintainer uploads to fix specific issues. Chris Lamb continued refining diffoscope, the sophisticated diff utility that pinpoints why two builds diverge. These efforts accumulate. They turn reproducibility from aspiration into requirement.

But challenges persist. Some packages embed timestamps by design. Others rely on compilers that produce varying output based on hardware or optimization flags. File ordering in archives can differ. Build environments must match exactly, down to the precise versions of every dependency. The policy accepts no excuses for new uploads. Maintainers must adapt or see their packages rejected from testing.
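Timestamps are the classic offender, and the reproducible-builds community’s standard remedy is the SOURCE_DATE_EPOCH environment variable: a build that honors it uses that pinned time instead of the wall clock, so repeated runs emit identical bytes. A toy illustration (the banner format and tool name are invented):

```python
import os
import time

def build_banner() -> str:
    """Produce a version banner with an embedded build timestamp.

    Honoring SOURCE_DATE_EPOCH (the reproducible-builds.org convention)
    pins the embedded time; without it, the wall clock leaks into the
    artifact and every build differs.
    """
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(epoch))
    return f"mytool 1.0 (built {stamp} UTC)"

os.environ["SOURCE_DATE_EPOCH"] = "1715299200"
first, second = build_banner(), build_banner()
assert first == second  # deterministic once the epoch is pinned
print(first)
```

Debian’s toolchain applies the same idea at scale: dpkg and many build systems already consume SOURCE_DATE_EPOCH automatically.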

Reactions poured in quickly. On Hacker News, users debated the practicality. One commenter acknowledged the protection against compromised build servers yet questioned how often such attacks occur in practice. Others pointed to distributions that already achieve high or full reproducibility. NixOS, Guix, and Tails stand out. NetBSD reached the milestone years earlier. Debian’s size and package count make the task bigger. Its influence makes success matter more.

The timing aligns with broader movement. The Reproducible Builds project publishes regular updates. Its April 2026 report highlighted infrastructure upgrades for the forky release and the addition of new test nodes. Holger Levsen upgraded systems and dropped older architectures from testing. These changes prepare the ground. They signal that the project views full reproducibility as attainable.

Security experts have long argued for this. A 2021 paper titled “Reproducible Builds: Increasing the Integrity of Software Supply Chains” laid out the case. Authors described how the technique creates a verifiable path from source to binary. They drew on Debian’s own experience. The paper, available on arXiv, influenced policy discussions at multiple organizations. Governments and enterprises now reference similar principles when specifying procurement requirements.

Debian’s decision will ripple outward. Ubuntu, Linux Mint, and numerous derivatives pull packages from Debian. Higher reproducibility there strengthens the entire family. Downstream builders gain confidence. Users running critical infrastructure can verify their systems more easily. Auditors gain a concrete check.

Not every package will comply overnight. The release team built in testing for binary non-maintainer uploads, or binNMUs. These automated rebuilds help when architecture-specific tweaks are needed. The team also added LoongArch 64-bit, known as loong64, to the archive two weeks before the reproducibility announcement. That addition triggered widespread rebuilds and lengthened the continuous integration queue. Patience, the message noted, remains necessary.

Uploaders now carry explicit responsibility. If a package blocks due to test regressions in reverse dependencies, the original maintainer must file release-critical bugs. The system no longer tolerates drift. This raises the bar. It also rewards those who invested early in reproducible tooling.

Tools have matured. Strip-nondeterminism removes timestamps and other variable elements after the build completes. diffoscope dissects differences with remarkable precision. rebuilderd runs independent rebuilds at scale and reports discrepancies. Debian integrates all three. The project even operates reproduce.debian.net to let anyone verify packages against official builds.
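Both tools ship in Debian as packages of the same name. A small availability check, with their typical invocations left as comments because the artifact paths are hypothetical:

```python
import shutil

# Typical invocations (paths are hypothetical, not executed here):
#   strip-nondeterminism build/hello_1.0_amd64.deb   # normalize timestamps etc.
#   diffoscope official.deb rebuilt.deb              # explain any byte differences
for tool in ("strip-nondeterminism", "diffoscope"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'not installed (Debian package of the same name)'}")
```

diffoscope is the diagnostic half of the loop: when digests differ, it recurses into archives and object files to show exactly which bytes diverged and why.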

Still, full compliance across every architecture and every package will test the community’s resolve. Armhf support was dropped from some tests after years of maintenance by Vagrant Cascadian’s collection of hardware. Newer ports like loong64 bring their own quirks. Each requires validation.

The announcement carries weight precisely because it comes from the release team. Not a working group. Not a side project. The people who decide what enters the stable release have drawn the line. Packages that cannot be reproduced will not migrate. Debian 14 aims to ship with this guarantee.

Observers see momentum. Recent X posts celebrated the move. One noted that NetBSD achieved the goal in 2017 while Debian followed in 2026. Another highlighted the audit value: no binary should be trusted if it cannot be bitwise reproduced. Discussions on Linux forums emphasized the link to supply-chain integrity.

Yet the work continues. The Reproducible Builds project issued its latest monthly summary just weeks ago. It tracks patches, infrastructure, and community efforts across distributions. Debian remains central. Its scale provides both the hardest test and the greatest reward.

So the policy lands as both culmination and beginning. Years of incremental fixes, tool development, and advocacy reached critical mass. The release team converted that progress into enforcement. Maintainers will feel the pressure. Users will gain assurance. The broader software supply chain stands to benefit as practices spread.

Debian has bet that the cost of adaptation is lower than the risk of inaction. Early evidence suggests the community agrees. The real test will come as Forky approaches release. If the archive reaches and holds 100 percent reproducibility under the new rules, the distribution will have set a standard for others to follow.



from WebProNews https://ift.tt/tNVanoI

Monday, 11 May 2026

How AI Bots Outpaced Bun’s Creator and Why Anthropic Bought the Whole Project

Jarred Sumner once spent three weeks hand-porting a Go transpiler to Zig. Line by line. No AI. The result became the seed for Bun, the JavaScript runtime that now powers some of the hottest AI coding tools on the market.

Today that same project has a GitHub bot called robobun with more contributions than Sumner himself. The milestone, flagged by developer Simon Willison on May 6, 2026, arrived during a “Code w/ Code” conversation between Sumner and Bryan Cherny. Fenado AI captured the moment: “Watching @jarredsumner and @bcherny at Code w/ Code talking about robobun, the Bun project’s GitHub bot that’s now made more contributions to Bun than Jarred has.”

Short. Stark. And a signal of how fast the ground is shifting.

Five months earlier, Anthropic had acquired Bun outright. The deal, announced December 2, 2025, tied the fast JavaScript toolkit directly to Claude Code, the AI coding product that hit $1 billion in annualized revenue just six months after public launch. Sumner’s blog post laid out the logic without fanfare. Bun Blog quoted him: “In late 2024, AI coding tools went from ‘cool demo’ to ‘actually useful.’ And a ton of them are built with Bun.”

Claude Code ships as a single-file Bun executable to millions. That single technical choice — fast startup, native addons, easy distribution — made Bun the quiet backbone for several AI-first developer tools. FactoryAI and OpenCode joined the list. When those tools succeed, Bun must not break. Anthropic now has every reason to keep it excellent.

But the story runs deeper than infrastructure. Sumner got obsessed with Claude Code. He took four-hour walks around San Francisco with engineers from the team. They talked about where coding heads next. He repeated the walks with competitors. He chose Anthropic. “This feels approximately a few months ahead of where things are going. Certainly not years,” he wrote in the acquisition post.

The numbers tell part of the tale. Bun’s monthly downloads climbed 25% in October 2025, crossing 7.2 million. The project carried more than four years of runway yet generated zero revenue. Traditional paths — cloud hosting, paid tiers — felt mismatched when AI agents threatened to write, test and deploy most new code. Sumner saw the runtime and tooling around that code mattering more than ever. Speed. Predictability. Scale. Bun had chased those traits from the start.

The Hand-Port That Started It All

Sumner’s original frustration was simple. A browser-based voxel game. A large Next.js codebase. Forty-five-second iteration cycles. He attacked the bottleneck by rewriting esbuild’s transpiler from Go into Zig. Three weeks of focused effort produced something that worked, roughly. Early benchmarks showed it transpiling JSX three times faster than esbuild, 94 times faster than swc, 197 times faster than Babel.

That exercise taught lessons that still shape Bun. Write all the code first. Avoid incremental fixes until the full picture appears. Favor breadth-first exploration over depth-first rabbit holes. Sumner repeated those principles in recent X threads while discussing an experimental Rust port of parts of Bun. The original Zig implementation remains largely intact, though Claude-generated code sometimes arrived with excess comments that later required cleanup.

By July 2022, Bun v0.1 combined bundler, transpiler, runtime, test runner and package manager into one binary. It hit 20,000 GitHub stars in a week. Production use grew. Windows support arrived in v1.1 after relentless user demand. Built-in clients for PostgreSQL, Redis and MySQL followed. Companies such as X and Midjourney adopted it. Tailwind’s standalone CLI compiles with Bun.

Yet the real acceleration came when AI coding tools discovered Bun’s single-file executables. Developers could bundle entire JavaScript projects into binaries that run anywhere, even on machines without Bun or Node installed. Startup stayed quick. Native modules worked. Distribution simplified. The traits that solved Sumner’s original 45-second pain now solved distribution pain for AI-powered CLIs.
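Bun’s documented single-file compile step is `bun build <entry> --compile --outfile <name>`. A guarded sketch of that workflow; the entry point and output name are placeholders, and the build only runs where bun is actually installed:

```python
import pathlib
import shutil
import subprocess
import tempfile

def compile_standalone() -> str:
    """Compile a trivial script into a standalone binary via Bun's
    documented `bun build --compile` flow, then run the binary.

    Returns the binary's output, or a note if bun is absent.
    """
    if shutil.which("bun") is None:
        return "bun not installed (sketch only)"
    with tempfile.TemporaryDirectory() as d:
        entry = pathlib.Path(d) / "cli.ts"
        entry.write_text('console.log("hello from a standalone binary");\n')
        out = pathlib.Path(d) / "mycli"
        subprocess.run(
            ["bun", "build", str(entry), "--compile", "--outfile", str(out)],
            check=True,
        )
        # The resulting binary runs without Bun or Node on the target machine.
        return subprocess.run(
            [str(out)], capture_output=True, text=True
        ).stdout.strip()

print(compile_standalone())
```

The appeal for AI CLI vendors is exactly what the article describes: one artifact, fast startup, no runtime prerequisite on the user’s machine.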

Anthropic’s Chief Product Officer Mike Krieger put it plainly in the acquisition announcement. Anthropic reported: “Bun represents exactly the kind of technical excellence we want to bring into Anthropic. Jarred and his team rethought the entire JavaScript toolchain from first principles while remaining focused on real use cases.” Claude Code’s rapid growth demanded matching infrastructure. Bun supplied it.

Post-acquisition, Bun stays open source and MIT licensed. The same team continues the work. Development remains public on GitHub. Node.js compatibility stays a priority. The roadmap now aligns more closely with Claude Code and the Claude Agent SDK, yet retains independence similar to browser engines and their JavaScript runtimes.

Robobun’s lead in contribution count adds another layer. The bot handles force pushes, labeling, bug fixes and test writing. It responds to review comments. In one setup, it tests fixes against earlier Bun versions before merging. Sumner has praised the productivity gains even while acknowledging the shift in metrics. Traditional contribution graphs once measured human effort. They now capture a mix of human direction and machine execution.

Other tools race forward. Cursor released an SDK for building agents using its own runtime and models, though early feedback noted missing Python support and beta-stage limitations, as covered by The New Stack on May 8, 2026. Windsurf positioned itself as an AI-native IDE with agent command centers and verification workflows. Chrome DevTools integrated Gemini for styling, performance and network debugging. The field fragments, yet Bun’s position inside Anthropic gives it unusual leverage in the agent-heavy future.

Sumner’s early tweets captured the ambition. One from 2021 highlighted JavaScriptCore’s four-times-faster startup compared with V8 in his tests. Another announced Bun as “an incredibly fast all-in-one JavaScript runtime.” Those claims proved durable. The acquisition simply reframes the bet: instead of chasing venture-scale monetization alone, Bun now sits at the center of one of the most aggressive AI coding efforts in the industry.

Questions remain. How will contribution credit evolve when bots outpace founders? What does code ownership mean when agents generate the majority of new lines? Will runtime performance still dominate when humans review less of the output? Sumner has wagered that fast, predictable tooling becomes even more valuable in that world.

He is hiring. The team ships updates at a quick clip. Bun v1.3.13 arrived with parallel test improvements, lower memory usage for installs and better source map handling. Each release tightens the loop between human intent and machine output. The original frustration — 45 seconds to check if a change worked — feels quaint. Today the constraint is how quickly an agent can propose, validate and deploy across thousands of lines.

Sumner once coded in a cramped Oakland apartment, tweeting progress between commits. Now he walks San Francisco streets with AI product teams and watches bots merge more PRs than he does. The project he started to solve his own iteration pain has become infrastructure for tools that multiply developer output by orders of magnitude. And Anthropic paid to own the stack underneath it all.

The numbers keep moving. Downloads rise. Revenue at Claude Code compounds. Robobun’s commit count grows. Bun itself ships faster than before. The question is no longer whether AI will change software engineering. It already has. The question is who controls the runtime that agents rely on when most code never passes through human hands first. For now, that runtime is Bun. And its creator no longer holds the top spot on its own contribution graph.



from WebProNews https://ift.tt/sWRteXV

Sunday, 10 May 2026

OpenAI Launches Specialized Cybersecurity AI Model for Threat Detection

OpenAI has introduced a specialized AI model designed specifically for cybersecurity professionals, arriving just weeks after Anthropic launched its own security-focused system called Mythos. The new offering, detailed in a recent announcement from the company, aims to provide security teams with advanced capabilities for threat detection, incident response, and vulnerability assessment through more targeted language processing and reasoning abilities.

This development highlights the growing competition among leading AI developers to create tools that address the specific demands of information security work. According to a TechRadar report, OpenAI positioned its model as a direct response to the needs expressed by security operations centers and threat intelligence units that often struggle with the volume and complexity of modern cyber threats.

The model builds upon OpenAI’s existing GPT architecture but incorporates training data and fine-tuning processes that emphasize security contexts. Engineers at the company exposed the system to thousands of real-world security reports, malware analysis documents, network logs, and incident response playbooks. This specialized training allows the model to better understand technical terminology, recognize patterns associated with common attack vectors, and generate recommendations that align with established security frameworks such as NIST, MITRE ATT&CK, and ISO 27001.

Security teams frequently face challenges when using general-purpose AI models for sensitive work. Standard large language models sometimes hallucinate technical details, misinterpret security logs, or suggest actions that could inadvertently weaken defenses. OpenAI claims its new model demonstrates measurable improvements in accuracy for tasks ranging from analyzing phishing emails to mapping attack chains across enterprise networks. Early testing with select partners showed the system correctly identifying sophisticated social engineering attempts that generic models often missed.

One notable feature involves the model’s ability to process and correlate information from multiple security tools simultaneously. Rather than examining isolated alerts from endpoint detection systems, SIEM platforms, and cloud security monitors, the AI can synthesize findings across these sources to present a coherent picture of potential intrusions. This capability addresses a persistent pain point for analysts who spend considerable time manually connecting disparate data points during investigations.
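The correlation idea reduces to grouping alerts on a shared indicator and flagging groups that more than one tool has seen. A toy sketch with invented alert data (the source names and IPs are illustrative, not any vendor's schema):

```python
from collections import defaultdict

# Invented alerts from three notional tools: endpoint detection (edr),
# a SIEM, and a cloud security monitor. 203.0.113.x / 198.51.100.x are
# documentation-reserved IP ranges.
alerts = [
    {"source": "edr",   "indicator": "203.0.113.7",  "detail": "suspicious process"},
    {"source": "siem",  "indicator": "203.0.113.7",  "detail": "failed admin logins"},
    {"source": "cloud", "indicator": "198.51.100.2", "detail": "unusual API calls"},
    {"source": "edr",   "indicator": "203.0.113.7",  "detail": "persistence mechanism"},
]

def correlate(alerts):
    """Group alerts by indicator; keep groups seen by 2+ distinct tools."""
    groups = defaultdict(list)
    for a in alerts:
        groups[a["indicator"]].append(a)
    return {
        ind: grp for ind, grp in groups.items()
        if len({a["source"] for a in grp}) > 1
    }

for indicator, grp in correlate(alerts).items():
    print(indicator, "->", sorted({a["source"] for a in grp}))
```

The single-source cloud alert drops out, while the indicator seen by both EDR and SIEM surfaces as one correlated finding; the model described in the article layers language understanding on top of this kind of cross-source grouping.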

The timing of this release creates an interesting dynamic in the artificial intelligence sector. Anthropic debuted its Mythos model barely a month earlier, signaling that both organizations recognize the substantial market potential in serving cybersecurity customers. While specific technical comparisons remain limited due to the proprietary nature of both systems, industry observers suggest the offerings may differ in their approaches to safety constraints and specialized knowledge bases. Anthropic has historically emphasized constitutional AI principles that prioritize careful reasoning and reduced harmful outputs, which could influence how Mythos handles sensitive security scenarios.

OpenAI’s approach appears to focus on practical integration with existing security workflows. The company has developed application programming interfaces that allow the model to connect directly with popular platforms like Splunk, Elastic, and Microsoft Sentinel. Security analysts can query the system using natural language while it maintains awareness of the specific environment’s architecture, policies, and historical incidents. This contextual awareness represents a significant advancement over previous AI assistants that required extensive prompt engineering to produce relevant results.

Privacy and data protection formed central considerations during development. The model operates with strict controls that prevent training on customer data without explicit permission. Organizations can deploy the system in private instances that keep all security information within their own infrastructure, addressing concerns about sharing sensitive threat data with external providers. This attention to confidentiality proves essential when dealing with zero-day vulnerabilities or advanced persistent threats where information disclosure could compromise ongoing investigations.

Performance metrics shared in the announcement indicate the specialized model outperforms general versions of GPT-4 on security-specific benchmarks by substantial margins. In tests involving malware classification, the system achieved higher precision and recall rates when distinguishing between legitimate software and malicious code. Similarly, when asked to assess network traffic patterns, the model demonstrated better recognition of command-and-control communications associated with various threat actors.
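Precision and recall, the two metrics cited for malware classification, are worth pinning down concretely. The counts below are invented; the formulas are the standard definitions:

```python
def precision_recall(tp, fp, fn):
    """Precision: share of samples flagged as malicious that really were.
    Recall: share of truly malicious samples that got flagged."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical classification run: 90 correct detections,
# 10 false alarms, 30 missed samples.
p, r = precision_recall(tp=90, fp=10, fn=30)
# Precision 0.90 (few false alarms), recall 0.75 (a quarter missed).
```

The two pull against each other: a model that flags everything gets perfect recall and terrible precision. "Higher precision and recall rates" means improving both at once, which is the hard part.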

Experts suggest this specialization trend reflects broader maturation in artificial intelligence applications. Rather than expecting one model to handle every possible task with equal competence, developers now create variants optimized for particular professional domains. Healthcare, legal, financial services, and now cybersecurity each present unique terminology, regulatory requirements, and risk profiles that benefit from tailored approaches.

The new model includes features specifically designed for red team operations and penetration testing. Security professionals can use it to brainstorm attack scenarios, identify potential weaknesses in proposed architectures, or generate realistic phishing content for training purposes. However, OpenAI implemented guardrails to prevent the system from assisting with actual malicious activities, maintaining ethical boundaries while supporting defensive work.

Integration with automated response systems marks another area of focus. The model can not only identify threats but also suggest specific remediation steps based on an organization’s particular tools and policies. For example, when detecting ransomware indicators, it might recommend isolating affected systems, initiating backup restoration procedures, and updating firewall rules according to predefined playbooks. This guidance helps reduce response times during critical incidents when every minute counts.
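The ransomware example maps naturally onto a lookup from threat type to predefined steps. This is a toy sketch of the playbook idea, not OpenAI's mechanism; the playbook contents are invented:

```python
# Hypothetical playbooks: ordered remediation steps per threat type,
# mirroring the ransomware example in the text.
PLAYBOOKS = {
    "ransomware": [
        "isolate affected hosts from the network",
        "initiate backup restoration procedures",
        "update firewall rules to block known C2 addresses",
    ],
    "phishing": [
        "quarantine the message across all mailboxes",
        "reset credentials for recipients who clicked",
    ],
}

def suggest_response(threat_type):
    """Return the predefined steps for a threat, or a safe default."""
    return PLAYBOOKS.get(threat_type, ["escalate to a human analyst"])

steps = suggest_response("ransomware")
```

The interesting part in a real deployment is everything around the lookup: classifying the incident correctly, filling in environment-specific details like which firewall to update, and keeping a human in the loop before destructive steps run.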

Industry analysts predict strong adoption among managed security service providers who handle multiple client environments. These organizations face pressure to deliver consistent, high-quality analysis despite varying skill levels among their staff. An AI system that can augment junior analysts while providing sophisticated insights for senior team members could significantly improve overall service quality and operational efficiency.

Challenges remain in measuring the true effectiveness of such tools in real-world conditions. While benchmark results look promising, actual security incidents involve numerous variables including organizational culture, existing tool configurations, and the unpredictable nature of human adversaries. Security leaders will likely proceed with careful pilot programs before committing to widespread deployment.

The competitive pressure between OpenAI and Anthropic may drive further innovation in this space. Both companies possess substantial resources and access to talented researchers who understand both artificial intelligence and information security. Their parallel development efforts could accelerate improvements in areas such as explainability, where security teams require clear reasoning behind AI-generated recommendations rather than black-box outputs.

Educational institutions and training programs have already expressed interest in incorporating these specialized models into their curricula. Teaching the next generation of cybersecurity professionals how to effectively collaborate with AI systems will become an essential component of preparation for modern security roles. Understanding both the capabilities and limitations of these tools represents a critical skill set for future practitioners.

OpenAI emphasized that this release forms part of a larger strategy to address enterprise needs across multiple sectors. The company continues investing in research that adapts foundation models for specific professional requirements while maintaining focus on safety and reliability. For cybersecurity teams specifically, the model arrives at a time when threats grow increasingly sophisticated and the shortage of qualified personnel continues to grow.

Organizations considering adoption should evaluate several factors beyond the technical specifications. Implementation costs, integration complexity, staff training requirements, and alignment with existing governance frameworks all require careful assessment. The most successful deployments will likely combine the AI capabilities with strong human oversight and established security processes rather than treating the technology as a standalone solution.

As more details emerge about both OpenAI’s offering and Anthropic’s Mythos system, security professionals will gain better understanding of which approach best fits their particular operational models. Some teams may prefer one system’s handling of certain attack types while finding the other more effective for compliance-related tasks. This diversity in specialized AI tools ultimately benefits the entire field by providing options that match different organizational needs and preferences.

The introduction of these purpose-built models signals a shift toward more practical applications of artificial intelligence in high-stakes environments. Rather than pursuing general intelligence, developers now focus on creating reliable partners for specific complex tasks. For cybersecurity teams overwhelmed by alert volumes and sophisticated adversaries, such targeted assistance could meaningfully improve their ability to protect critical systems and data.

Future updates will likely expand the models’ knowledge bases as new threats emerge and defensive techniques evolve. Both OpenAI and Anthropic have indicated plans for continuous improvement based on feedback from early adopters in the security community. This iterative approach acknowledges that effective cybersecurity AI must adapt quickly to changing circumstances in ways that static tools cannot match.

The broader implications extend beyond individual security operations. As these systems become more capable, they may influence how organizations structure their security teams, allocate resources, and approach risk management. The technology could help address the persistent talent gap by amplifying the effectiveness of available personnel while potentially creating new roles focused on AI system management and oversight.

Security leaders who embrace these tools thoughtfully while maintaining appropriate skepticism about their limitations will likely gain advantages over those who either reject the technology outright or implement it without proper controls. The most effective strategies will combine artificial intelligence with human expertise, using each to compensate for the other’s weaknesses in the ongoing effort to stay ahead of determined adversaries. This balanced approach offers the best path toward improved security outcomes in an increasingly challenging digital environment.



from WebProNews https://ift.tt/ow8hdmH

Saturday, 9 May 2026

Google Chrome Drops On-Device AI Privacy Pledge Amid Silent Model Downloads

Google Chrome quietly altered text in its settings this week. The change removes a direct assurance that its on-device AI model keeps user data away from company servers. Privacy researcher Alexander Hanff spotted the edit days after he exposed how the browser downloads a 4GB Gemini Nano model without asking first.

The discovery has stirred fresh doubts about how Google handles local AI. Users expect on-device processing to mean exactly that. No cloud. No telemetry. Yet the company scrubbed the phrase that made that promise explicit.

Hanff runs the site That Privacy Guy. In his May 8 post he laid out the before and after. Earlier Chrome versions told users under the System section that the browser “can use AI models that run directly on your device without sending your data to Google servers.” The sentence vanished. New wording simply notes the models run on device. Turn the toggle off and features might stop working. Nothing more.

The edit lands at a bad moment. Hanff’s earlier report detailed how Chrome drops the Gemini Nano weights.bin file into a folder called OptGuideOnDeviceModel. It happens automatically on capable hardware. No consent dialog. No settings checkbox labeled “download this 4GB AI model.” Delete the files and Chrome fetches them again.

His analysis drew on macOS filesystem logs. On a fresh profile the directory appeared at 16:38. The weights unpacked minutes later. Smaller models for text safety and prompt routing arrived too. Chrome checks device specs in its Local State file. Performance class 6. Plenty of VRAM. Then it pulls from Google’s edge servers.

At billion-user scale the numbers add up. Each 4GB download burns roughly 0.24 kilowatt-hours. Multiply across hundreds of millions of machines and the electricity and carbon footprint grow large. Hanff calculated potential emissions in the tens of thousands of tonnes of CO2 equivalent. He called the pattern troubling. Similar behavior showed up in software from Anthropic.
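That arithmetic is easy to check. The inputs below are assumptions, since the article gives only the per-download figure: roughly 0.06 kWh of end-to-end energy per gigabyte transferred (a widely used but contested estimate), 300 million affected machines as an illustrative count, and a global-average grid intensity of about 0.4 kg CO2 per kWh:

```python
# Back-of-envelope version of the article's figures, under stated assumptions.
KWH_PER_GB = 0.06          # assumed network-energy intensity (contested)
DOWNLOAD_GB = 4            # size of the Gemini Nano payload
MACHINES = 300_000_000     # illustrative number of affected devices
KG_CO2_PER_KWH = 0.4       # rough global grid average

kwh_per_download = KWH_PER_GB * DOWNLOAD_GB        # ~0.24 kWh, as reported
total_kwh = kwh_per_download * MACHINES            # ~72 million kWh
tonnes_co2 = total_kwh * KG_CO2_PER_KWH / 1000     # ~28,800 t CO2e
```

The result lands in the tens of thousands of tonnes, matching the order of magnitude Hanff cited. The per-gigabyte energy figure is the most disputed input; halve or double it and the conclusion shifts accordingly.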

Google pushed back. A spokesperson told multiple outlets the model has been available since 2024. It powers scam detection and developer APIs. Data stays off the cloud. The company added a toggle in February. Turn off “On-device GenAI Enabled” or the later “On-device AI” setting and the model deletes. No more downloads or updates.

That statement appeared in coverage from Gizmodo and Engadget. Both ran the company’s words in full. The model uninstalls automatically on low-storage devices. Features like real-time warnings against fake sites run locally.

But the removed text raises questions. Hanff listed three possibilities. The original claim was never accurate. An architecture shift now sends some data back. Or the company simply wants legal breathing room. None sit well. He argued the assurance counted as a binding representation. Users kept the toggle on because they believed it.

Legal experts may take interest. Hanff pointed to the EU’s ePrivacy Directive. Article 5(3) requires consent before storing information on a user’s device. The Gemini Nano download looks like it fails that test. GDPR articles on transparency and data protection by design enter the picture too. Similar rules apply under UK and California law.

Chrome’s market share makes the stakes high. The browser sits on more than 60 percent of desktops and mobiles worldwide. Default settings reach hundreds of millions. Many users never open the System page. They never see the toggle at all.

Earlier coverage from Forbes in January noted the incoming control. The pre-release toggle deleted models tied to scam detection. Google described it as processing locally. No personal data sent to the cloud. The piece captured the tension. Users gained an off switch yet the software had already placed the files without clear notice.

Developers gained new tools. Chrome 148 integrates Gemini Nano through a JavaScript API. Websites can call summarization or rewriting functions that run locally. The promise was speed and privacy. Yet the silent install undercut the message.

Security researchers flagged another issue. The added model expands the browser’s attack surface. Local inference means new code paths. Potential for exploits that never leave the machine. Mozilla has pushed back on similar web AI standards. The open web risks fragmentation when one vendor ships large binaries by default.

Google insists the feature helps users. On-device scam detection spots phishing in real time. Text suggestions stay private. The company points to automatic cleanup on resource-constrained devices. Still the pattern feels familiar. Roll out first. Answer questions later.

Hanff demanded clarity. Confirm whether any data ever left the device. Restore the exact wording if the claim holds. Move to explicit opt-in. He posted the questions publicly and tagged Chrome security leader Parisa Tabriz. No detailed reply has surfaced yet.

The episode highlights a larger shift. Browsers once shipped lean. Now they bundle large language models measured in gigabytes. Storage is cheap for some. Bandwidth costs add up for others. Metered connections in developing markets feel the hit first.

Users can act. Open Chrome settings. Head to System. Flip the on-device AI control off. The model should remove itself. Flags at chrome://flags let power users dig deeper. Enterprise policies offer stronger blocks. But most people won’t bother.

And that is the point. Default behavior shapes what millions experience. When the default includes a multi-gigabyte download and then drops the privacy language that justified it, trust erodes. Google built its brand on search and speed. Privacy rhetoric helped too. The latest moves test how far that rhetoric stretches.

Recent reporting from Tom’s Hardware tied the download to possible EU law violations and large energy waste. The piece echoed Hanff’s calculations. At scale the electricity burned for initial distribution alone rivals small-country consumption.

Tech observers watch closely. Browser engines compete. Chrome’s decisions set expectations for everyone. If local AI becomes table stakes then consent, transparency and resource costs must improve. Otherwise users grow numb. They accept the bloat. They ignore the toggles. And the quiet accumulation of models on their drives continues.

The text change itself is small. One sentence gone. Yet it signals discomfort. Google no longer wants to make that particular promise in plain view. The reason matters. Users deserve to know it. So far the company has offered general statements but not the specifics Hanff requested.

Chrome will keep evolving. New versions arrive every few weeks. AI features will expand. The question is whether the company learns from this episode. Clear communication. Honest defaults. Or the cycle of silent rollout, public surprise and partial walk-back repeats.

For now the 4GB model sits on countless machines. The privacy sentence is gone. And the conversation about what on-device really means has only grown louder.



from WebProNews https://ift.tt/AlxO09P

Friday, 8 May 2026

Samsung’s Foldables Face a Chip Divide: Snapdragon Dominance for Z Fold 8, Uncertainty Looms for Flip 8

Samsung prepares its 2026 foldable lineup amid fresh questions about processor choices. Recent leaks point to a continued split strategy. The Galaxy Z Fold 8 and an all-new wider variant will rely on Qualcomm’s latest Snapdragon 8 Elite Gen 5. The Galaxy Z Flip 8? Its path remains less clear.

Industry observers have watched Samsung wrestle with this decision for years. Once, all foldables carried Snapdragon chips exclusively. That changed with the Z Flip 7. Now signs suggest the book-style models will stick with Qualcomm while the clamshell tests Samsung’s own silicon again. But nothing is final until the summer Unpacked event.

Tipster Erencan Yılmaz uncovered the details in Samsung source code. Both the standard Z Fold 8, model SM-F976, and the rumored Z Fold 8 Wide, model SM-F971, list the Snapdragon 8 Elite Gen 5 “for Galaxy.” The custom tune promises higher clocks and deeper software integration than standard versions. Android Authority first reported the code findings on May 7, 2026.

The Z Fold 8 Wide represents Samsung’s direct answer to potential competition from Apple. Its taller aspect ratio when closed aims for a more passport-like feel. Pairing it with the top-tier Snapdragon avoids risks that often plague first-generation devices. Performance consistency matters here. So does thermal headroom for the larger unfolded screen.

Qualcomm’s Snapdragon 8 Elite Gen 5 builds on the current Elite platform. It delivers marked gains in CPU speed and power efficiency. Samsung’s “for Galaxy” editions typically extract even more through factory overclocks and specialized camera pipelines. Expect 12GB or 16GB of RAM alongside the chip. Storage options could reach 1TB.

Yet the real story sits with the Z Flip 8. Code references show flexibility. It could ship with Snapdragon or Exynos. Older reports favor the Exynos 2600, Samsung’s 2nm flagship processor that debuted in some Galaxy S26 models. Android Headlines noted this uncertainty just yesterday, highlighting how the smaller foldable might test Samsung’s growing confidence in its in-house designs.

Samsung’s processor balancing act carries real consequences for battery life, heat, and regional availability.

Flip-style devices face different demands than book-style ones. Their compact form leaves less room for cooling. Battery capacity stays tighter. A processor that sips power while delivering snappy performance wins favor. The Exynos 2600 reportedly shines in efficiency metrics. But past generations struggled with sustained loads and modem performance compared with Snapdragon equivalents.

Regional splits have defined Samsung’s flagship approach for over a decade. Snapdragon often lands in the US, China, and select premium markets. Exynos appears elsewhere. The Z Flip 7 broke foldable tradition by adopting Exynos more broadly. Some buyers noticed differences. Others did not. The debate continues in enthusiast circles.

Recent supply chain chatter adds another layer. Samsung has explored a custom 2nm version of the Snapdragon 8 Elite Gen 5 produced in its own foundries. Qualcomm would need to approve the arrangement. If it happens, the Z Flip 8 could gain a specialized variant tuned for clamshell use cases. But as of this week that possibility remains speculative. No new articles today confirmed movement on the custom chip front.

Power consumption figures matter most for foldables. Users expect all-day endurance despite thin bodies and dual screens. The Snapdragon 8 Elite family already sets high bars for efficiency. Samsung’s version of the Exynos 2600 must match or exceed it. Early S26 feedback suggests the 2nm Exynos has closed much of the gap. Real-world foldable tests will decide if it crosses the line.

Camera processing offers another differentiator. Snapdragon variants often receive optimized imaging pipelines from Qualcomm. Samsung’s own chips lean on its long experience with image signal processors. The Z Fold 8 reportedly eyes a 200MP main sensor. That workload demands strong silicon. A mismatched chip could throttle burst shooting or video features.

And then there is the wider market picture. Chinese rivals pack ever-stronger foldables with Dimensity or custom chips. Apple rumors swirl around a 2026 or 2027 foldable iPhone. Samsung wants no weak links. Uniform Snapdragon deployment across the Z Fold 8 trio would simplify development and marketing. But cost, supply, and strategic foundry goals pull in other directions.

The Z Fold 8 Wide could prove the most interesting test case. Its unique proportions require software tuning. Hardware must support larger unfolded real estate without draining the battery faster. Snapdragon 8 Elite Gen 5 gives engineers a known quantity. They can focus on hinge improvements, crease reduction, and new One UI 9 features instead of silicon unknowns.

Launch timing looks set for July. Samsung has shifted its summer event to London this year, according to multiple supply chain sources. Three foldables could debut together: the standard Z Fold 8, the Z Fold 8 Wide, and the Z Flip 8. Pricing remains unknown but expectations run high for the wider model.

Buyers care about more than just the processor name. They want reliable performance, long battery life, smooth displays with minimal creasing, and cameras that deliver. The chip decision influences all of it. A Snapdragon-only approach for premium book-style foldables would signal confidence in Qualcomm’s platform. An Exynos-powered Flip 8 would show Samsung believes its own silicon has matured enough for high-volume consumer devices.

Either outcome carries risks. Past Exynos models drew criticism for throttling and modem issues in certain regions. Snapdragon versions sometimes cost more to produce. Samsung must weigh these factors against its goal of reducing reliance on external suppliers while maintaining product excellence.

So the leaks leave us with a partial map. Clear Snapdragon territory for the Z Fold 8 family. Foggy ground for the Z Flip 8. Expect more details to surface in the coming weeks as firmware and certification filings appear. By July the picture should sharpen. Until then speculation fills the gap.

One fact stands firm. Samsung’s foldable business now drives significant revenue. Processor choices will shape its competitive edge for the next generation. The company cannot afford missteps. The market watches closely. So do its rivals.



from WebProNews https://ift.tt/YrmVsAp