Saturday, 16 May 2026

Canadian Privacy Clash: VPN Pioneer Windscribe and Signal Draw a Line Against Surveillance Bill

Windscribe pays a lot in taxes to Ottawa. The Toronto-based VPN provider built its business on a strict no-logs policy that has withstood court tests. Now that policy stands in direct conflict with proposed federal legislation. So the company says it will move its headquarters if the law passes unchanged.

Signal reached the same conclusion first. The encrypted messaging service, which prides itself on end-to-end encryption that even its own engineers cannot break, warned it would exit the Canadian market rather than weaken its core protections. Windscribe quickly echoed that stance. The two companies, though different in scale and focus, have drawn a sharp boundary.

At issue is Bill C-22, introduced in March 2026 and now under committee review. The legislation, formally titled the Lawful Access Act, would require telecoms, internet firms and other electronic service providers to retain user metadata for up to a year. It would also compel companies to make technical changes enabling police and intelligence agencies to access data more readily. The Globe and Mail first reported Signal’s position after Vice President of Strategy and Global Affairs Udbhav Tiwari spoke plainly.

“We would rather pull out of the country than be compelled to compromise on the privacy promises we have made to our users,” Tiwari said. He added that the bill could force the introduction of vulnerabilities. “Bill C-22 could potentially allow hackers to exploit these very vulnerabilities engineered into electronic systems, with private messaging services serving as an ideal target for foreign adversaries.”

End-to-end encryption, he noted, cannot coexist with exceptional access. Any route built for authorities is a route attackers can find. Provisions that deliberately engineer weaknesses into systems like Signal represent a grave threat to privacy everywhere.

Windscribe’s reaction came via X. The company stated it would not lag far behind. “In its current state, VPNs would almost certainly require us to log identifying user data,” the post read. Then came the sharper language. “Signal isn’t headquartered in Canada so they can just shut off Canadian servers, but our HQ is. We pay an ungodly amount of taxes to this corrupt government, and in return they want to destroy the entire essence of our service to basically spy on its own citizens. Not happening. We’ll move HQ and take our taxes elsewhere.”

The message landed with force. And it wasn’t alone.

Public Safety Minister Gary Anandasangaree has pushed back. He described the bill as encryption-neutral during a Commons committee hearing. A spokesperson later told reporters the government is not requiring companies to install surveillance capabilities. Assertions to the contrary are false, the spokesperson said. Yet Apple, Meta and the Canadian Chamber of Commerce have issued similar warnings. So have two chairs of U.S. congressional committees.

The Electronic Frontier Foundation called the measure a repackaged version of last year’s failed Bill C-2. That earlier proposal collapsed under privacy backlash. Bill C-22 keeps the core elements with only modest adjustments. It demands metadata retention for a full year. Metadata can reveal who communicates with whom, approximate locations and timing patterns even when message content stays hidden. The bill also grants the Minister of Public Safety authority to order companies to build access mechanisms. These orders come with a condition. They must not create a systemic vulnerability. The definitions of both systemic vulnerability and encryption remain vague enough to invite broad interpretation.
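To make concrete what a year of retained metadata exposes, here is a purely illustrative record of the kind such a mandate could cover (all fields and identifiers are hypothetical, not drawn from the bill's text):

```python
# Hypothetical illustration only: a single retained metadata record.
# No message content appears, yet the record still reveals who talked
# to whom, when, and roughly where from.
retained_record = {
    "sender_id": "user-4821",          # hypothetical identifiers
    "recipient_id": "user-9377",
    "timestamp_utc": "2026-05-16T14:02:11Z",
    "source_ip": "203.0.113.45",       # documentation-range IP (RFC 5737)
    "approx_location": "Toronto, ON",  # derivable from the source IP
    "message_content": None,           # content stays end-to-end encrypted
}

# A year of such records per user supports exactly the pattern analysis
# critics describe: contact graphs, timing, movement.
print(sorted(k for k, v in retained_record.items() if v is not None))
```

Multiply one record by every message a user sends over twelve months and the contact graph, daily rhythm and rough movements fall out, no decryption required.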

“Surveillance of encrypted communications is fundamentally a systemic vulnerability,” the EFF wrote in its analysis. “When you build these systems, hackers will come.” The organization highlighted risks of expanded information sharing with foreign governments, including the United States. EFF detailed how the legislation could conscript private companies into extended government surveillance roles with insufficient safeguards.

Meta’s head of public policy in Canada, Rachel Curran, testified before the committee. She warned the bill could require companies to break, weaken or circumvent encryption or zero-knowledge architectures. It might even force installation of government spyware directly on systems. Apple has taken a comparable position. The Canadian Chamber of Commerce raised concerns about weakened encryption and deterred investment.

Two U.S. House committee chairs sent a letter to Canadian officials in early May. They expressed worry that the bill would expand surveillance powers in ways that create cross-border risks to American security and data privacy. The letter highlighted potential compulsion of American companies to build backdoors. Such changes could introduce vulnerabilities exploitable by hackers, adversaries and cybercriminals. Paubox covered the letter and its implications for cybersecurity norms.

Windscribe brings a distinct perspective. Founded in Toronto in 2016, the company maintains a lean operation focused on practical privacy tools. Its no-logs policy faced a real test in 2025 when Greek authorities sought user data. Courts found nothing to hand over. The company had logged nothing. That outcome reinforced its public claims. Relocating headquarters would allow it to preserve that architecture outside Canadian legal reach. Shutting down local operations entirely remains an option, but moving the HQ offers a cleaner separation.

Observers note this isn’t the first time Canada has tried such measures. Successive governments have returned to lawful access ideas over more than a decade. Each attempt met resistance. Previous versions stalled. Bill C-22 follows a familiar pattern yet arrives amid heightened global tension over encryption. The United Kingdom’s demands on Apple for access to encrypted iCloud data led the company to withdraw a security feature rather than comply. Signal itself once warned it would exit Sweden over comparable proposals. That threat contributed to long delays in the Swedish bill.

So the threats carry weight. Companies aren’t bluffing when they say compliance would destroy their product. For Signal, any mandated access mechanism would mean ceasing to offer the service users chose. For a VPN like Windscribe, mandatory logging of identifying data would erase the anonymity that defines its value. Users seeking to protect their traffic from surveillance or censorship would lose a trusted Canadian option.

Parliamentary committee hearings continue. Amendments remain possible. Yet the government’s responses so far suggest little appetite for major changes. Officials repeatedly insist critics misunderstand the bill. They point to its aim of updating outdated laws to combat modern crime and national security threats. Digital networks have evolved. Law enforcement tools have not kept pace, ministers argue.

But the pushback grows louder. Cybersecurity experts, human rights groups and now multiple technology providers line up against the current text. Michael Geist, a leading technology law professor, compared the government’s handling to its approach on the Online News Act. Dismissal of expert concerns, he wrote, follows a troubling playbook. His detailed critique, published on his site and Substack just days ago, traces how warnings from Signal, Apple, Meta, U.S. lawmakers and cybersecurity advisors have all been waved aside.

Canadians could face a practical outcome. Secure services might simply become unavailable. Or available only in weakened form. VPN users might turn to providers based elsewhere. Messaging apps that refuse to comply could disappear from Canadian app stores or servers. The bill’s broad language on electronic service providers leaves room for regulators to include many categories later. Messaging platforms, operating systems and apps could fall under future definitions.

Windscribe’s CEO Yegor Sak has spoken before about the company’s commitment. In past statements he made clear that if Canadian jurisdiction prevents upholding the privacy policy, the company will not remain based in Canada. The recent X posts align with that long-held view.

The situation carries irony. Canada positions itself as a defender of democratic values and digital rights on the world stage. Yet this legislation risks isolating the country in technology policy. Allies to the south already voice alarm over potential spillover effects. European debates on chat control and scanning proposals face similar criticism. The pattern repeats. Governments want visibility into encrypted channels. Providers say visibility cannot come without breaking the encryption that makes those channels safe.

TechRadar first connected the Windscribe response directly to Signal’s statement in coverage published today. The article noted the company’s Greek court validation and the logistical differences between the two firms. TechRadar reported that committee hearings began May 7 and the bill remains in review.

Whether lawmakers will heed the warnings or proceed with only cosmetic tweaks will shape the outcome. History suggests confrontation. Companies have shown they will follow through. Signal delayed features or limited availability in other jurisdictions rather than compromise. Apple litigated in the UK. Windscribe, with its headquarters on the line, now signals the same resolve.

The stakes extend beyond one country. Each backdoor created anywhere weakens security everywhere. Each metadata store becomes a target. The companies say they see no path to compliance that preserves their promises. So they prepare to leave. The Canadian government insists they are mistaken. The coming weeks in committee may decide which side holds.

But one fact already stands clear. Privacy-focused services will not quietly accept mandates that undermine their foundations. Windscribe and Signal have made that plain.



from WebProNews https://ift.tt/fkjzoml

Friday, 15 May 2026

Southeast Asia’s AI Surge Collides With a Power Grid That Can’t Keep Up

Singapore once led the charge. Now its data center pause reveals the tension. Malaysia races ahead in Johor. Thailand approves billions in projects. Yet the numbers tell a harder story. Power demand from data centers in the region is set to more than quadruple, from 2.6 gigawatts to 10.7 gigawatts, between 2025 and 2035.

Wood Mackenzie laid out that forecast in December 2025. The jump would lift data centers from 1% of peak demand today to 3-4% by 2035. That growth would account for 7% to 10% of all new electricity consumption across Southeast Asia over the decade, roughly the same as Singapore’s entire current power use.
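As a quick sanity check, the forecast works out to slightly more than a fourfold jump (a back-of-envelope sketch using only the figures quoted above):

```python
# Back-of-envelope check of the Wood Mackenzie figures cited above.
demand_2025_gw = 2.6
demand_2035_gw = 10.7

growth_multiple = demand_2035_gw / demand_2025_gw
added_gw = demand_2035_gw - demand_2025_gw

print(f"growth multiple: {growth_multiple:.1f}x")  # slightly over 4x
print(f"new data-center load: {added_gw:.1f} GW")  # 8.1 GW of new demand
```

That 8.1 gigawatts of new load is what has to be generated, transmitted and cooled somewhere, which is the crux of the rest of this story.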

But the TechRadar analysis from earlier this year already flagged the risk. Energy constraints remain underestimated while governments chase AI investment and hyperscalers hunt cheap land and lower costs. Joe Ong, ASEAN vice president and general manager at Hitachi Vantara, put it plainly in that TechRadar article. “The artificial intelligence (AI) boom is often framed as a race for compute power, talent and investment. But beneath the surface, a different constraint is emerging; one that is far less visible and harder to scale. Energy.”

Short. Direct. And increasingly accurate.

The International Energy Agency sees Southeast Asia driving a quarter of global energy demand growth by 2035. Data centers already number more than 2,000 across Indonesia, Malaysia, Singapore, Thailand, Vietnam and the Philippines, according to Ember. A standard AI facility can draw as much electricity as 100,000 households. Cooling demands soar in the tropical heat. Grids strain. Water use draws scrutiny.

Malaysia plans to add as much as eight gigawatts of gas-fired power by 2030 to meet data center needs. Its utility, Tenaga Nasional Berhad, has fielded applications for 11,000 megawatts of data center supply. That figure equals nearly 40% of the country’s current total generation capacity. Projections show data center electricity demand there could hit 5,000 megawatts beyond 2035.

The Grid Reality Check

Yet supply timelines lag. Grid congestion grows. Intermittency of renewables clashes with the always-on requirement of AI training runs that can demand hundreds of megawatts without pause. In Indonesia, coal still generates nearly 70% of electricity. Power consumption by data centers there could quadruple by 2030, per Ember’s analysis in its recent ASEAN report.

Singapore learned early. It imposed a moratorium on new data centers years ago. Growth resumed under tighter rules that stress efficiency, low-carbon power and tighter alignment with energy planning. Land remains scarce. The island imports most of its energy. Its data centers already tripled power demand in recent years.

Malaysia and Thailand now position themselves as alternatives. Thailand’s Board of Investment approved $21 billion in data center projects in 2025 alone. Ninety percent concentrate in the Eastern Economic Corridor. Capacity in Bangkok could surge more than 10 times between 2026 and 2030. Jakarta follows with a projected 4.4 times increase.

But the Associated Press reported in March 2026 that several nations now revisit nuclear plans shelved years ago. Malaysia revived its program specifically for data centers. Indonesia, Vietnam and others eye small modular reactors. The reason is simple. Tech giants demand uptime measured in nines. Solar and wind alone cannot deliver that reliability at the densities AI requires.

“The Iran war has caused the price of oil to increase, raising concern on the reliability of traditional energies,” one data center executive told Fortune in late March. The piece highlighted how conflict in the Middle East adds pressure on fossil fuel supplies already stretched by AI growth.

And the heat makes everything worse. Tropical humidity forces more energy into cooling. Traditional air conditioning systems lose efficiency. Some operators explore liquid cooling or waste heat reuse. Others simply pay higher tariffs. On-grid electricity costs for data centers in the region could quadruple to $10.2 billion annually by 2035, according to Wood Mackenzie.

Local resistance builds in places. Communities question water consumption when reservoirs run low. Regulators in Johor rejected nearly 30% of recent data center applications over efficiency and grid concerns. Vietnam already saw power shortages during peak seasons even before the latest AI wave.

Nuclear Returns to the Table

The nuclear discussion marks a policy pivot. Southeast Asia never operated a commercial nuclear plant. Now five countries pursue programs tied directly to digital infrastructure goals. Power purchase agreements from Microsoft, Amazon and others provide the revenue certainty developers need. The shift reframes energy policy as industrial policy.

Global data center electricity use surged again in 2025 despite some deployment bottlenecks, the IEA noted in recent updates. AI already represents a fast-rising share of workloads. One forecast sees it driving 50% of data center capacity by 2030, up from 25% today.

Operators respond with varied strategies. Some hyperscalers source half their Malaysian power from solar and plan to expand that model. Others push for grid modernization, better interconnectivity across ASEAN and accelerated storage deployment. Yet structural gaps persist. Transmission infrastructure often cannot deliver new generation to the exact sites where data centers cluster.

Recent announcements underscore the momentum. Gorilla Technology revealed plans for a 200-megawatt AI data center campus in Thailand in early May 2026. Chinese firms such as ByteDance and Alibaba shift more AI workloads to Malaysia, drawn by available power and Nvidia hardware access. The regional data center market could exceed $30 billion by 2030.

Still, vacancy rates across Asia tightened last year even as 1.5 gigawatts of capacity came online. Demand outruns supply. Southeast Asian hubs show the fastest projected growth rates through the end of the decade.

The pattern mirrors what the United States and Europe faced earlier, only compressed. In Southeast Asia, baseline grids start weaker in many markets. Urbanization and industrial demand already pull hard. AI adds a new, concentrated load that behaves differently from traditional factories or homes.

Success will hinge on more than raw megawatts. Integration matters. Energy planners must coordinate with data center developers months or years in advance. Efficiency gains from better chips and optimized software help but cannot offset the sheer scale of projected growth. Data quality and governance also shape outcomes. More compute without clean inputs simply amplifies errors at higher cost.

So governments face choices. Accelerate fossil capacity and accept higher emissions. Bet on renewables and storage while managing intermittency risks. Or embrace nuclear for firm, low-carbon baseload. Many appear prepared to pursue all three in parallel.

The underestimated part, as the original TechRadar piece argued, lies in the visibility. Compute announcements make headlines. Power contracts rarely do. Yet the latter determines which ambitions survive contact with physical limits. Those limits now press hard across the region.

Ember projects that between 2% and 30% of national electricity demand could flow to data centers by 2030 in major Southeast Asian markets. The upper end applies to places like Malaysia. A third of ASEAN data centers could run on solar and wind under optimistic scenarios. The gap between hope and delivery remains wide.

Operators who solve the power equation first will capture market share. Those who treat energy as an afterthought risk delays, cost overruns and regulatory blocks. The AI race in Southeast Asia has quietly become an energy race. The winners will measure success not just in racks deployed but in electrons reliably delivered.



from WebProNews https://ift.tt/e0Khwb9

Thursday, 14 May 2026

Google and SpaceX Eye Orbital AI Compute as Earth Hits Power Limits

Google has held discussions with SpaceX about launching test hardware for data centers that would orbit the Earth. The talks surfaced this week. They signal how two of technology’s most ambitious companies see the next front in artificial intelligence infrastructure moving off the planet.

The conversations center on Google’s Project Suncatcher. Announced last November, the initiative envisions clusters of solar-powered satellites loaded with the company’s Tensor Processing Units. These chips would tap uninterrupted sunlight in space. They would sidestep the massive electricity and cooling demands that now strain terrestrial grids. Google’s official blog post describes an interconnected network designed for massively scaled machine learning. Early research includes satellite constellation design, control systems, communication methods and radiation testing on TPUs.

But. The project needs rides to orbit. And SpaceX possesses the most frequent and capable launch system operating today. According to The Wall Street Journal, Google is in talks with SpaceX for a rocket-launch deal. The search company has also approached other providers including Planet. A person familiar with the matter confirmed the discussions to the Journal. The potential partnership would place the companies in an unusual spot. They would collaborate on launches while preparing to compete in the emerging market for orbital data centers.

Elon Musk has pushed this idea hard. After SpaceX acquired xAI in February, he declared that advances in AI depend on large data centers requiring immense power and cooling. “Global electricity demand for AI simply cannot be met with terrestrial solutions,” Musk said. “In the long term space-based AI is obviously the only way to scale.” The Mashable report on the talks quotes him directly. Last week Anthropic agreed to use the full output of xAI’s Colossus supercluster in Memphis. The deal included interest in future orbital work. SpaceX’s acquisition of xAI ties these threads together.

SpaceX itself has filed with the FCC to deploy up to a million satellites as part of an orbital data center constellation. The company highlighted the concept in materials tied to its planned IPO. Valued potentially at $1.75 trillion, the listing could come as soon as this year. A deal with Google, which already owns about 6 percent of SpaceX, would strengthen the pitch to investors. TechCrunch noted the timing. It reported that current orbital concepts remain far more expensive than ground-based facilities once launch and satellite construction costs enter the equation.

Yet the pressure on Earth continues to mount. Data centers already consume huge shares of local power in Virginia, Texas and elsewhere. Hyperscalers face resistance from communities worried about electricity prices, water use for cooling and environmental impact. In space solar arrays could generate power continuously without night or weather. The vacuum provides free radiative cooling. No land permits. No neighborhood hearings. Proponents argue these advantages will eventually outweigh the staggering expense of reaching orbit.

Google’s own moves reflect growing conviction. CEO Sundar Pichai told an audience in New Delhi in February that he never expected to spend time figuring out how to put data centers into space. The company plans to launch two prototype satellites in partnership with Planet by early 2027. Those spacecraft will test hardware durability in the radiation environment and gather data on orbital operations. A preprint paper accompanying the announcement outlined initial findings on TPU resilience.

Other players watch closely. Nvidia has posted jobs for orbital data-center system architects. Jeff Bezos through Blue Origin and Sam Altman of OpenAI have expressed interest in space-based compute though Altman once called the economics ridiculous. A New York Times article from January captured the shift in thinking. Leaders who once dismissed the concept now view it as perhaps the only long-term answer to AI’s appetite for energy.

Challenges stack up. Latency poses one immediate obstacle. Signals traveling to and from geostationary or low-Earth orbit introduce delays that could slow interactive AI applications. Radiation can degrade electronics over time, though Google has begun testing its chips. Maintenance becomes nearly impossible once satellites launch. Any failure requires replacement from the ground. And the upfront capital costs remain savage. Starship aims to slash launch prices but even optimistic projections show orbital facilities costing multiples of equivalent terrestrial ones today.
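The latency obstacle is easy to quantify from first principles. A minimal sketch of the straight-line, speed-of-light delay to typical orbital altitudes (a lower bound only; real links add routing, processing and slant-range overhead):

```python
# Rough one-way signal delay to orbit, straight-line at light speed.
# Treat these as lower bounds: real links add routing and processing time.
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def one_way_latency_ms(altitude_km: float) -> float:
    """Straight-down propagation delay from a given orbital altitude."""
    return altitude_km * 1_000 / C_M_PER_S * 1_000

leo_ms = one_way_latency_ms(550)      # a typical low-Earth-orbit altitude
geo_ms = one_way_latency_ms(35_786)   # geostationary altitude

print(f"LEO one-way: {leo_ms:.2f} ms")  # roughly 1.8 ms
print(f"GEO one-way: {geo_ms:.1f} ms")  # roughly 119 ms
```

The asymmetry explains why low-Earth constellations dominate the orbital-compute conversation: a few milliseconds each way is tolerable for many workloads, while a geostationary round trip approaches a quarter of a second before any processing happens.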

SpaceX has acknowledged the risks. In its S-1 filing, the company warned that orbital AI compute involves unproven technologies operating in a harsh environment. “These initiatives may not achieve commercial viability,” the document stated, according to Reuters. Musk nevertheless calls the direction obvious. His vision extends beyond Earth orbit to lunar and Martian industrialization.

The Google-SpaceX talks come at a moment of convergence. Google needs launch capacity and expertise in satellite fleets. SpaceX needs credible customers and use cases to justify its enormous constellation plans. Starlink already provides high-bandwidth connectivity that could link orbital compute back to Earth. The combination could create a closed loop. Satellites powered by the sun. Cooled by space. Connected by laser or radio links. Trained models returned via Starlink.

Analysts question the timeline. Prototypes in 2027 will deliver proof of concept at best. Commercial scale could lie a decade away. Still the conversation has moved from science fiction to boardroom strategy. Bloomberg reported the talks on May 12 citing the Journal’s sources. Its coverage noted Google’s prior comments about exploring multiple launch partners.

So the race accelerates. Hyperscalers race for more compute. Launch providers race to drop costs. Chip designers race to harden hardware against radiation. The prize is access to effectively unlimited clean power for the next generation of AI models. Whether that power floats 500 kilometers above the planet or remains bound to sprawling facilities in the American heartland will shape technology for decades.

Google and SpaceX have not confirmed a final agreement. The discussions could still collapse over price, technical details or strategic concerns. But the fact they are happening at all reveals how seriously both organizations treat the constraints now facing AI development on Earth. Power. Cooling. Land. Regulation. In orbit many of those problems simply vanish. The new ones that replace them will test engineering ingenuity for years to come.

And if the prototypes work? If Starship delivers payloads cheaply enough? The night sky could one day hold not just stars but glowing clusters of silicon thinking at scales impossible on the ground. The talks between Google and SpaceX mark an early step toward that possibility.



from WebProNews https://ift.tt/g3Z405T

Wednesday, 13 May 2026

Microsoft Fires Back: Why Windows 11’s CPU Boost Isn’t Cheating

Scott Hanselman didn’t hold back. The Microsoft vice president took to X last week to confront critics head-on. Their target? A new Windows 11 feature that briefly maxes out CPU clocks to make menus snap open and apps launch faster.

Call it the Low Latency Profile. It ramps processor frequency for one to three seconds during interactive tasks. Start menu. Context menus. App launches. The result feels immediate. Tests show up to 70% faster Start menu responses and 40% quicker launches for built-in apps like Edge and Outlook. (Windows Central, May 7, 2026)

But not everyone cheered. Online voices labeled it a band-aid. A lazy shortcut. Proof that Windows had grown too bloated to run efficiently without brute force. Hanselman pushed back. Hard.

“Apple does this and y’all love it.” He followed with a sharper point. “All modern operating systems do this, including macOS and Linux. It’s not ‘cheating’; this is how modern systems make apps feel fast: they temporarily boost the CPU speed and prioritize interactive tasks to reduce latency.” (Pureinfotech, May 11, 2026)

The exchange revealed more than one executive’s frustration. It exposed a deeper tension in how users judge operating systems today. Speed. Responsiveness. That instant feel when you click. Benchmarks matter less than perception. And Windows 11 has struggled with that perception for years.

Modern interfaces carry weight. The Start menu no longer simply unhides a static list. It pulls cloud recommendations, web results, live tiles. File Explorer handles thumbnails, previews, network shares. Background services multiply. Web technologies replace lean native code. Each addition extracts a cost in latency. Milliseconds add up.

So Microsoft turned to a proven tactic. Predict high-priority user actions. Boost frequency and scheduler priority. Complete the task quickly. Drop back to idle. Smartphones do it constantly. Tap the screen. Cores wake. Clocks spike. Frame renders. Power falls away milliseconds later. Users never notice the dance. They just feel the device responds.
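The boost-then-idle cycle described above can be reduced to a toy model (an illustration of the general technique, not Microsoft's implementation; every name and number here is hypothetical apart from the one-to-three-second burst the article cites):

```python
# Toy model of an interactive latency boost: on an interactive event,
# raise the target clock for a short window, then fall back to baseline.
# All names and values are hypothetical, for illustration only.
BASE_MHZ = 1_200
BOOST_MHZ = 4_800
BOOST_WINDOW_S = 2.0  # within the one-to-three-second bursts described

class LatencyProfile:
    def __init__(self) -> None:
        self._boost_until = 0.0

    def on_interactive_event(self, now: float) -> None:
        # e.g. Start menu opened, context menu requested, app launched
        self._boost_until = now + BOOST_WINDOW_S

    def target_mhz(self, now: float) -> int:
        # Boosted inside the window, baseline once it expires.
        return BOOST_MHZ if now < self._boost_until else BASE_MHZ

profile = LatencyProfile()
t0 = 100.0
print(profile.target_mhz(t0))        # idle baseline: 1200
profile.on_interactive_event(t0)     # user clicks the Start menu
print(profile.target_mhz(t0 + 0.5))  # inside the window: 4800
print(profile.target_mhz(t0 + 3.0))  # window expired: back to 1200
```

The key property the sketch captures is that the boost is event-driven and self-expiring, which is why battery and thermal cost stays low: the chip redlines for seconds, not minutes.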

macOS takes the same approach. Aggressive clock boosts on clicks and animations. Quality of Service classes help the scheduler anticipate needs. Linux kernels rely on frequency governors and schedutil to wake performance cores the moment UI interaction begins. The techniques differ in detail. The goal stays identical. Reduce perceived lag.

Hanselman drove that message home. He pointed critics to macOS’s powermetrics tool. Check it yourself, he suggested. Watch the bursts. He also corrected misconceptions about Linux. “Linux achieves its responsiveness through the same methods, using the kernel scheduler, CPU frequency governors, and modern CPU boost technologies like schedutil.” The negativity, he added, sometimes came from “computer science enthusiasts without experience in computer science making assumptions based on their intuition.”

Yet the criticism landed because it touched a nerve. Windows 11 launched with hardware requirements that frustrated many. Early builds felt heavier than Windows 10 in daily use. Later updates introduced AI features that some saw as distractions from core reliability. Trust eroded. So even a sensible engineering choice met skepticism. Why does my PC need to redline the CPU just to open the Start menu?

The answer sits in the evolution of software. Older Windows versions did less. Windows 95’s Start menu displayed a pre-rendered panel. No scaling gymnastics. No search indexing in the background. No synchronization with online accounts. That simplicity delivered raw speed on modest hardware. Today’s expectations demand more. Users want rich previews, personalized suggestions, seamless integration across devices. Delivering that without lag requires clever resource management.

This Low Latency Profile forms one piece of a larger initiative. Microsoft calls it Windows K2 internally. The effort combines short CPU bursts with deeper code optimization. Teams strip legacy components. They migrate more shell elements to WinUI 3 for lighter rendering. Scheduler tweaks improve how the OS handles processor power states and C-state transitions. The company has already begun shipping some of these changes to Insiders and retail users. (Windows Latest, May 11, 2026)

Early tests impress. On budget hardware and virtual machines, the difference turns sluggish experiences snappy. ARM-based systems like those with Snapdragon X Elite benefit especially. Their rapid power-state transitions pair perfectly with brief boosts. Battery and thermal impact stays low because bursts last seconds, not minutes.

But Hanselman stressed balance. “There are actual things wrong and smart people are working to fix them.” The boost doesn’t replace optimization. It complements it. Microsoft pursues both. Legacy code cleanup continues. File Explorer gains attention. The Run dialog moves to native frameworks. Performance work stretches across multiple fronts.

The episode highlights how Microsoft communicates engineering decisions in 2026. Executives engage directly on social platforms. They explain trade-offs in plain language. Transparency carries risk. Critics seize on admissions that the OS needs help. Yet silence would fuel conspiracy theories about hidden tricks.

Users ultimately vote with their experience. If the Start menu opens instantly, if apps feel immediate, if the system stays cool and efficient, complaints fade. The Low Latency Profile aims for exactly that outcome. It doesn’t promise higher benchmark scores in sustained workloads. It targets the moments that shape daily satisfaction. Click. Respond. Done.

Whether the feature ships widely this year remains unclear. Testing continues in Insider builds. Adjustments to duration and triggers could still occur. What won’t change is the underlying principle. Modern operating systems manage power and performance dynamically. They always have. The difference now lies in how aggressively and intelligently they do so.

Microsoft has joined the conversation openly. Hanselman’s defense may not sway every skeptic. It does clarify the playing field. Apple does it. Linux does it. Smartphones perfected it. Windows 11 is catching up in visibility and effectiveness. The real test arrives when millions of users encounter the smoother experience. Then the debate shifts from theory to results.

And results, in the end, determine whether Windows wins back the fans it seeks.



from WebProNews https://ift.tt/UgXlJA1

Amazon Halts High-Speed E-Bike Sales in California as Deadly Crashes Mount

Amazon has drawn a firm line in California. The retail giant will no longer sell electric bikes capable of exceeding the state’s strict speed limits for legal e-bikes. The decision follows months of pressure from Attorney General Rob Bonta and local prosecutors alarmed by a surge in fatal collisions involving young riders.

California draws clear distinctions. Class 1 e-bikes offer pedal assistance up to 20 mph. Class 2 models add throttle but cap at the same speed. Class 3 bikes, which require riders to be at least 16, reach 28 mph with pedal assist. Anything faster or lacking proper pedals crosses into moped or motorcycle territory. That shift demands a license, registration, insurance and often higher age minimums.

The change isn’t abstract. KCRA 3 Investigates flagged multiple Amazon listings advertising speeds over 40 mph. Some models pushed even higher. After the station shared examples with the company, Amazon moved. It now requires third-party sellers to meet state laws, its own policies and speed classifications. Non-compliant products have been pulled. Others face review.

“We are seeing a surge of safety incidents on our sidewalks, parks, and streets,” Bonta said in an April consumer alert titled “Too Fast, Too Furious.” He warned parents and riders directly. “If your or your teen’s electric two-wheeled vehicle goes too fast, it might be a motorcycle or a moped — not an e-bike.”

Orange County District Attorney Todd Spitzer welcomed the step. He noted more than 100 deaths nationwide tied to e-bike and e-motorcycle crashes. Two recent tragedies hit close. Thirteen-year-old Benson Nguyen of Santa Ana died after crashing an e-motorcycle traveling around 35 mph in Garden Grove. In Lake Forest, an 81-year-old veteran named Ed Ashman was struck and killed by a 14-year-old on a similar machine.

Prosecutors have filed charges against parents in related cases. One Yorba Linda father allegedly modified his son’s vehicle to exceed 60 mph. The boy had already gone through impound and safety training. Another parent in Aliso Viejo faces involuntary manslaughter charges after her son crashed fatally despite prior warnings. These incidents underscore a pattern: young riders treat powerful machines like toys, and the consequences show they are anything but.

Amazon’s announcement landed Friday. It came weeks after Bonta’s alert and direct outreach from investigators. The company told the Orange County Register it demands every product on its platform follow applicable regulations. Compliance checks continue. Yet as of early this week, some borderline models lingered on the site. One YVY bike rated between 30 and 38 mph remained available for California delivery, according to a Gizmodo check.

The episode exposes cracks in the marketplace. Third-party sellers flood platforms with imported machines that blur lines between bicycle and motor vehicle. These so-called hooligan bikes are often heavy, lack adequate brakes for their speed and attract underage users who skip helmets, training or licenses. One industry observer called the Amazon move progress. Such bikes, the person said, simply should not be on public roads when operated by 14-year-olds unfamiliar with traffic rules.

But the crackdown raises questions too. Compliant e-bike makers have complained for years that rogue models damage the category’s reputation and endanger everyone. Safety advocates point to rising clashes. E-bike riders mix with pedestrians on paths, frustrate transit users and spark debates in cities trying to cut car use. Hikers and cyclists have tangled with faster machines on trails.

State law requires permanent labels on e-bikes. Those stickers must list the class, motor wattage and top assisted speed. Many imported products ignore the rule. Sellers market them as e-bikes anyway. Buyers in California who click purchase on a 40-mph model could unknowingly acquire something that demands motorcycle endorsement.

And enforcement lags. Local police struggle to distinguish compliant bikes from illegal ones on sight. Bonta’s office partnered with district attorneys across the Bay Area and beyond to issue the alert. The goal was education first. Amazon’s response suggests the message registered.

Other retailers have taken notice. Walmart already blocks non-compliant models for California addresses. Smaller direct-to-consumer brands may feel less immediate pressure, yet the signal is clear. Major platforms won’t risk liability or regulatory heat.

The broader market keeps growing. E-bikes promised affordable, green mobility. Many models deliver exactly that. They help commuters skip traffic, let older adults stay active and reduce short car trips. Yet the fastest segment undercuts those gains. Speed sells. So does minimal regulation. Until crashes mount.

California isn’t alone. New Jersey enacted tough rules effective this July. Riders of machines over 20 mph need a driver’s license, registration and insurance. The law drew fire from cycling groups and environmental organizations worried about climate targets. Similar tensions bubble in other states.

Amazon’s pivot won’t eliminate dangerous machines. Determined buyers can still order from overseas sites or local shops that skirt rules. Private property use remains legal for non-street machines. But removing easy one-click access from the nation’s largest online marketplace changes the equation. It forces a conversation about what counts as a bicycle in an era of 5,000-watt motors.

Prosecutors and regulators insist the law has been settled for years. The three-class system dates back well before the current boom. Manufacturers and sellers simply ignored it when convenient. Bonta’s alert and the KCRA probe applied pressure where it counts. At the point of sale.

Shoppers face new realities. Those seeking legitimate Class 3 transport can still find options on Amazon. Models capped at 28 mph with proper labeling should remain. Thrill seekers chasing 40 mph or more must look elsewhere. And they should understand the legal consequences. A traffic stop on a misclassified machine can bring fines, impoundment and insurance complications.

The episode also highlights platform responsibility. Amazon hosts millions of third-party listings. Policing every speed claim proved difficult until spotlighted by journalists and attorneys general. Now the company investigates similar products and coordinates with law enforcement. That shift may ripple beyond California.

Industry watchers expect tighter scrutiny nationwide. Major retailers dislike headlines about deadly crashes tied to their sites. Insurance carriers grow wary. Cities debate trail access and speed limits on shared paths. The humble e-bike has become a policy battleground.

Amazon’s decision won’t end the debate. But it marks a turning point. Speed without accountability carries costs. California officials decided those costs had grown too high. Retailers are following suit. Riders, parents and sellers now navigate the consequences. Some faster than others.



from WebProNews https://ift.tt/PJNSwhu

Tuesday, 12 May 2026

Debian Draws A Line: Reproducible Builds Become Mandatory For Its Next Release

Debian’s release team delivered a quiet bombshell this weekend. Halfway through the development cycle for the next major version, code-named Forky, officials declared that the distribution must ship only reproducible packages. The change took effect immediately. Migration tools now block any new package that fails to build identically bit for bit. Packages already in testing that slip backward face the same barrier.

The announcement came directly from Paul Gevers, writing on behalf of the release team. “Aided by the efforts of the Reproducible Builds project, we’ve decided it’s time to say that Debian must ship reproducible packages,” he stated in a “Bits from the Release Team” message posted to the debian-devel-announce mailing list on May 10, 2026. The message described the shift as “a small step in code, but a giant leap in commitment.”

This matters. For years the project has chased reproducibility without forcing it. Progress came steadily. Independent verifiers could rebuild many packages and match the official binaries exactly. Yet gaps remained. Timestamps crept in. Build paths differed. Random seeds introduced variation. The result? No one could say with absolute certainty that the binary downloaded from Debian’s servers came from the published source without trusting the build infrastructure.

That trust model no longer suffices. Supply-chain attacks have sharpened focus across the industry. The 2024 xz-utils incident, in which a sophisticated backdoor nearly slipped into major distributions, served as a wake-up call. Reproducible builds offer a practical defense. Anyone can rebuild the package. Compare the output. Match the hash. Confirm no alterations occurred between source and binary. Simple in theory. Demanding in practice.
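The verification step the article describes is mechanical enough to sketch: rebuild, hash, compare. Here is a minimal illustration in Python. The file paths are hypothetical, and Debian’s real tooling works from `.buildinfo` checksum records; this only shows the bit-for-bit criterion itself.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_reproducible(official_pkg: str, rebuilt_pkg: str) -> bool:
    """A build is reproducible if an independent rebuild of the same
    source, in the same declared environment, is bit-for-bit identical
    to the published binary."""
    return sha256_of(official_pkg) == sha256_of(rebuilt_pkg)
```

Anyone can run this comparison: no trust in the build servers is required, only the ability to rebuild and hash.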

Debian has come far. Phoronix reported on the policy shift within hours of the mailing list post. Michael Larabel noted that Debian 14.0, expected around 2027, will mark the first major release under this mandate. Earlier coverage from the same outlet showed the archive reaching 94 percent reproducibility for Debian 9 on x86_64 back in 2017. Rates have climbed since. The project’s testing infrastructure at tests.reproducible-builds.org tracks progress across architectures and suites.

Monthly reports from the Reproducible Builds project document the grind. In April 2026 the team reviewed dozens of packages, updated infrastructure, and refined tools. Vagrant Cascadian handled non-maintainer uploads to fix specific issues. Chris Lamb continued refining diffoscope, the sophisticated diff utility that pinpoints why two builds diverge. These efforts accumulate. They turn reproducibility from aspiration into requirement.

But challenges persist. Some packages embed timestamps by design. Others rely on compilers that produce varying output based on hardware or optimization flags. File ordering in archives can differ. Build environments must match exactly, down to the precise versions of every dependency. The policy accepts no excuses for new uploads. Maintainers must adapt or see their packages rejected from testing.
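The standard remedy for embedded timestamps is the SOURCE_DATE_EPOCH convention published by the Reproducible Builds project: when the environment variable is set, build tools embed that instant instead of reading the clock, so every rebuild produces the same bytes. A hedged sketch of how a build script might honor it:

```python
import os
import time

def build_timestamp() -> int:
    """Return the timestamp to embed in build artifacts.

    Honors the SOURCE_DATE_EPOCH convention: when the variable is set,
    every rebuild embeds the same instant, so output no longer varies
    with wall-clock time. Falls back to the current time otherwise.
    """
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        return int(epoch)
    return int(time.time())  # non-reproducible fallback
```

A rebuilder that exports the same SOURCE_DATE_EPOCH recorded at the original build then gets identical timestamps for free.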

Reactions poured in quickly. On Hacker News, users debated the practicality. One commenter acknowledged the protection against compromised build servers yet questioned how often such attacks occur in practice. Others pointed to distributions that already achieve high or full reproducibility. NixOS, Guix, and Tails stand out. NetBSD reached the milestone years earlier. Debian’s size and package count make the task bigger. Its influence makes success matter more.

The timing aligns with broader movement. The Reproducible Builds project publishes regular updates. Its April 2026 report highlighted infrastructure upgrades for the forky release and the addition of new test nodes. Holger Levsen upgraded systems and dropped older architectures from testing. These changes prepare the ground. They signal that the project views full reproducibility as attainable.

Security experts have long argued for this. A 2021 paper titled “Reproducible Builds: Increasing the Integrity of Software Supply Chains” laid out the case. Authors described how the technique creates a verifiable path from source to binary. They drew on Debian’s own experience. The paper, available on arXiv, influenced policy discussions at multiple organizations. Governments and enterprises now reference similar principles when specifying procurement requirements.

Debian’s decision will ripple outward. Ubuntu, Linux Mint, and numerous derivatives pull packages from Debian. Higher reproducibility there strengthens the entire family. Downstream builders gain confidence. Users running critical infrastructure can verify their systems more easily. Auditors gain a concrete check.

Not every package will comply overnight. The release team built in testing for binary non-maintainer uploads, or binNMUs. These automated rebuilds help when architecture-specific tweaks are needed. The team also added LoongArch 64-bit, known as loong64, to the archive two weeks before the reproducibility announcement. That addition triggered widespread rebuilds and lengthened the continuous integration queue. Patience, the message noted, remains necessary.

Uploaders now carry explicit responsibility. If a package blocks due to test regressions in reverse dependencies, the original maintainer must file release-critical bugs. The system no longer tolerates drift. This raises the bar. It also rewards those who invested early in reproducible tooling.

Tools have matured. Strip-nondeterminism removes timestamps and other variable elements after the build completes. diffoscope dissects differences with remarkable precision. rebuilderd runs independent rebuilds at scale and reports discrepancies. Debian integrates all three. The project even operates reproduce.debian.net to let anyone verify packages against official builds.
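strip-nondeterminism itself is a Perl tool, but its core idea — normalize variable metadata after the build completes — is easy to sketch. A hypothetical Python analogue for zip archives, pinning every member to a fixed timestamp and a deterministic order (not the real tool’s implementation):

```python
import zipfile

FIXED_DATE = (1980, 1, 1, 0, 0, 0)  # earliest timestamp the zip format supports

def normalize_zip(src: str, dst: str) -> None:
    """Rewrite a zip archive so member timestamps and ordering no longer vary.

    A toy analogue of what strip-nondeterminism does for archive formats:
    after normalization, identical inputs yield bit-for-bit identical
    archives regardless of when the build ran.
    """
    with zipfile.ZipFile(src) as zin, \
         zipfile.ZipFile(dst, "w") as zout:
        # Sort members so archive ordering is deterministic too.
        for info in sorted(zin.infolist(), key=lambda i: i.filename):
            data = zin.read(info.filename)
            fixed = zipfile.ZipInfo(info.filename, date_time=FIXED_DATE)
            fixed.compress_type = zipfile.ZIP_DEFLATED
            zout.writestr(fixed, data)
```

Two builds of the same content made at different times normalize to identical archives, which is exactly the property the migration check enforces.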

Still, full compliance across every architecture and every package will test the community’s resolve. The armhf tests were dropped after years of running on a collection of hardware maintained by Vagrant Cascadian. Newer ports like loong64 bring their own quirks. Each requires validation.

The announcement carries weight precisely because it comes from the release team. Not a working group. Not a side project. The people who decide what enters the stable release have drawn the line. Packages that cannot be reproduced will not migrate. Debian 14 aims to ship with this guarantee.

Observers see momentum. Recent X posts celebrated the move. One noted that NetBSD achieved the goal in 2017 while Debian followed in 2026. Another highlighted the audit value: no binary should be trusted if it cannot be bitwise reproduced. Discussions on Linux forums emphasized the link to supply-chain integrity.

Yet the work continues. The Reproducible Builds project issued its latest monthly summary just weeks ago. It tracks patches, infrastructure, and community efforts across distributions. Debian remains central. Its scale provides both the hardest test and the greatest reward.

So the policy lands as both culmination and beginning. Years of incremental fixes, tool development, and advocacy reached critical mass. The release team converted that progress into enforcement. Maintainers will feel the pressure. Users will gain assurance. The broader software supply chain stands to benefit as practices spread.

Debian has bet that the cost of adaptation is lower than the risk of inaction. Early evidence suggests the community agrees. The real test will come as Forky approaches release. If the archive reaches and holds 100 percent reproducibility under the new rules, the distribution will have set a standard for others to follow.



from WebProNews https://ift.tt/tNVanoI

Monday, 11 May 2026

How AI Bots Outpaced Bun’s Creator and Why Anthropic Bought the Whole Project

Jarred Sumner once spent three weeks hand-porting a Go transpiler to Zig. Line by line. No AI. The result became the seed for Bun, the JavaScript runtime that now powers some of the hottest AI coding tools on the market.

Today that same project has a GitHub bot called robobun with more contributions than Sumner himself. The milestone, flagged by developer Simon Willison on May 6, 2026, arrived during a “Code w/ Code” conversation between Sumner and Bryan Cherny. Fenado AI captured the moment: “Watching @jarredsumner and @bcherny at Code w/ Code talking about robobun, the Bun project’s GitHub bot that’s now made more contributions to Bun than Jarred has.”

Short. Stark. And a signal of how fast the ground is shifting.

Five months earlier, Anthropic had acquired Bun outright. The deal, announced December 2, 2025, tied the fast JavaScript toolkit directly to Claude Code, the AI coding product that hit $1 billion in annualized revenue just six months after public launch. Sumner’s blog post laid out the logic without fanfare. Bun Blog quoted him: “In late 2024, AI coding tools went from ‘cool demo’ to ‘actually useful.’ And a ton of them are built with Bun.”

Claude Code ships as a single-file Bun executable to millions. That single technical choice — fast startup, native addons, easy distribution — made Bun the quiet backbone for several AI-first developer tools. FactoryAI and OpenCode joined the list. When those tools succeed, Bun must not break. Anthropic now has every reason to keep it excellent.

But the story runs deeper than infrastructure. Sumner got obsessed with Claude Code. He took four-hour walks around San Francisco with engineers from the team. They talked about where coding heads next. He repeated the walks with competitors. He chose Anthropic. “This feels approximately a few months ahead of where things are going. Certainly not years,” he wrote in the acquisition post.

The numbers tell part of the tale. Bun’s monthly downloads climbed 25% in October 2025, crossing 7.2 million. The project carried more than four years of runway yet generated zero revenue. Traditional paths — cloud hosting, paid tiers — felt mismatched when AI agents threatened to write, test and deploy most new code. Sumner saw the runtime and tooling around that code mattering more than ever. Speed. Predictability. Scale. Bun had chased those traits from the start.

The Hand-Port That Started It All

Sumner’s original frustration was simple. A browser-based voxel game. A large Next.js codebase. Forty-five-second iteration cycles. He attacked the bottleneck by rewriting esbuild’s transpiler from Go into Zig. Three weeks of focused effort produced something that worked, roughly. Early benchmarks showed it transpiling JSX three times faster than esbuild, 94 times faster than swc, 197 times faster than Babel.

That exercise taught lessons that still shape Bun. Write all the code first. Avoid incremental fixes until the full picture appears. Favor breadth-first exploration over depth-first rabbit holes. Sumner repeated those principles in recent X threads while discussing an experimental Rust port of parts of Bun. The original Zig implementation remains largely intact, though Claude-generated code sometimes arrived with excess comments that later required cleanup.

By July 2022, Bun v0.1 combined bundler, transpiler, runtime, test runner and package manager into one binary. It hit 20,000 GitHub stars in a week. Production use grew. Windows support arrived in v1.1 after relentless user demand. Built-in clients for PostgreSQL, Redis and MySQL followed. Companies such as X and Midjourney adopted it. Tailwind’s standalone CLI compiles with Bun.

Yet the real acceleration came when AI coding tools discovered Bun’s single-file executables. Developers could bundle entire JavaScript projects into binaries that run anywhere, even on machines without Bun or Node installed. Startup stayed quick. Native modules worked. Distribution simplified. The traits that solved Sumner’s original 45-second pain now solved distribution pain for AI-powered CLIs.

Anthropic’s Chief Product Officer Mike Krieger put it plainly in the acquisition announcement. Anthropic reported: “Bun represents exactly the kind of technical excellence we want to bring into Anthropic. Jarred and his team rethought the entire JavaScript toolchain from first principles while remaining focused on real use cases.” Claude Code’s rapid growth demanded matching infrastructure. Bun supplied it.

Post-acquisition, Bun stays open source and MIT licensed. The same team continues the work. Development remains public on GitHub. Node.js compatibility stays a priority. The roadmap now aligns more closely with Claude Code and the Claude Agent SDK, yet retains independence similar to browser engines and their JavaScript runtimes.

Robobun’s lead in contribution count adds another layer. The bot handles force pushes, labeling, bug fixes and test writing. It responds to review comments. In one setup, it tests fixes against earlier Bun versions before merging. Sumner has praised the productivity gains even while acknowledging the shift in metrics. Traditional contribution graphs once measured human effort. They now capture a mix of human direction and machine execution.

Other tools race forward. Cursor released an SDK for building agents using its own runtime and models, though early feedback noted missing Python support and beta-stage limitations, as covered by The New Stack on May 8, 2026. Windsurf positioned itself as an AI-native IDE with agent command centers and verification workflows. Chrome DevTools integrated Gemini for styling, performance and network debugging. The field fragments, yet Bun’s position inside Anthropic gives it unusual leverage in the agent-heavy future.

Sumner’s early tweets captured the ambition. One from 2021 highlighted JavaScriptCore’s four-times-faster startup compared with V8 in his tests. Another announced Bun as “an incredibly fast all-in-one JavaScript runtime.” Those claims proved durable. The acquisition simply reframes the bet: instead of chasing venture-scale monetization alone, Bun now sits at the center of one of the most aggressive AI coding efforts in the industry.

Questions remain. How will contribution credit evolve when bots outpace founders? What does code ownership mean when agents generate the majority of new lines? Will runtime performance still dominate when humans review less of the output? Sumner has wagered that fast, predictable tooling becomes even more valuable in that world.

He is hiring. The team ships updates at a quick clip. Bun v1.3.13 arrived with parallel test improvements, lower memory usage for installs and better source map handling. Each release tightens the loop between human intent and machine output. The original frustration — 45 seconds to check if a change worked — feels quaint. Today the constraint is how quickly an agent can propose, validate and deploy across thousands of lines.

Sumner once coded in a cramped Oakland apartment, tweeting progress between commits. Now he walks San Francisco streets with AI product teams and watches bots merge more PRs than he does. The project he started to solve his own iteration pain has become infrastructure for tools that multiply developer output by orders of magnitude. And Anthropic paid to own the stack underneath it all.

The numbers keep moving. Downloads rise. Revenue at Claude Code compounds. Robobun’s commit count grows. Bun itself ships faster than before. The question is no longer whether AI will change software engineering. It already has. The question is who controls the runtime that agents rely on when most code never passes through human hands first. For now, that runtime is Bun. And its creator no longer holds the top spot on its own contribution graph.



from WebProNews https://ift.tt/sWRteXV