Sunday, 15 March 2026

Biological Data Centers: Startups Are Building Computers Powered by Human Brain Cells

A new class of data centers doesn’t run on silicon. It runs on human neurons.

Several startups are now developing computing systems built around organoids — lab-grown clusters of human brain cells — arguing that biological processors could dramatically reduce the energy consumption that’s crippling the AI industry’s expansion. The concept sounds like science fiction. It isn’t. And the money flowing into it suggests serious people are taking it seriously.

Futurism reported that startups including Cortical Labs and FinalSpark are pursuing biocomputing architectures that use living neurons as processing units. (BrainChip, often named alongside them, builds brain-inspired neuromorphic silicon rather than living tissue.) The logic is straightforward: the human brain operates on roughly 20 watts of power — about what it takes to run a dim light bulb — while performing cognitive tasks that the most advanced AI systems require megawatts to approximate. That efficiency gap represents an enormous opportunity.

FinalSpark, a Swiss startup, has already built what it calls the Neuroplatform, a system that keeps human brain organoids alive and uses them to perform basic computational tasks. The organoids, each containing tens of thousands of neurons, are maintained in microfluidic environments that supply nutrients and remove waste. Electrodes interface with the living tissue to send and receive signals. It’s crude compared to a modern GPU cluster. But the power consumption is almost negligible.

The timing isn’t accidental.

AI’s energy problem has become impossible to ignore. The International Energy Agency projected that data center electricity consumption could double by 2026, driven largely by AI workloads. Goldman Sachs estimated that a single ChatGPT query uses roughly ten times the electricity of a Google search. Tech giants are restarting nuclear plants, signing unprecedented power purchase agreements, and still struggling to secure enough energy for planned facilities. Against that backdrop, a technology that could process information at a fraction of the energy cost commands attention — even if it’s years from practical deployment.

Cortical Labs, based in Melbourne, demonstrated in 2022 that a dish of human neurons could learn to play Pong. The research, published in the journal Neuron, showed that biological neural networks could adapt their behavior in response to electrical feedback — essentially learning from their environment without being explicitly programmed. The company has since raised funding to scale this approach toward more complex tasks.
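
The feedback mechanism in that experiment is worth pausing on, because it isn't reinforcement learning in the usual sense. When the dish's paddle intercepted the ball, the culture received predictable stimulation; when it missed, it received unpredictable noise, and the neurons appeared to reorganize so as to make their sensory world more predictable. The sketch below is a deliberately crude illustration of that closed loop, with a mock object standing in for the electrode array; none of the names reflect Cortical Labs' actual software.

```python
import random

# A hypothetical, heavily simplified sketch of the closed-loop protocol
# described in the DishBrain Pong work (Kagan et al., Neuron, 2022).
# Class and method names are illustrative stand-ins, not Cortical Labs'
# interface; MockCulture fakes the electrode I/O so the loop runs.

class MockCulture:
    """Stand-in for a neuron culture on a multi-electrode array."""

    def __init__(self):
        self.bias = 0.0  # crude proxy for whatever the tissue 'learns'

    def stimulate(self, site):
        """Sensory phase: stimulate the electrode encoding ball position."""
        self.last_site = site

    def read_activity(self, region):
        """Motor phase: fake a firing-rate read from an 'up' or 'down' region."""
        noise = random.gauss(0.0, 0.1)
        return (self.bias if region == "up" else -self.bias) + noise

    def feedback(self, predictable):
        """Feedback phase: predictable stimulation after a hit, noise after a miss.

        In the published experiment the culture reorganizes its activity in
        response; this mock just nudges a number in the same direction.
        """
        self.bias += 0.01 if predictable else -0.005

def trial(culture, ball_y, n_sites=8):
    culture.stimulate(site=min(int(ball_y * n_sites), n_sites - 1))
    paddle_y = 0.5 + culture.read_activity("up") - culture.read_activity("down")
    hit = abs(paddle_y - ball_y) < 0.15
    culture.feedback(predictable=hit)
    return hit

culture = MockCulture()
hits = sum(trial(culture, random.random()) for _ in range(1000))
print(f"hit rate over 1,000 trials: {hits / 1000:.1%}")
```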

Not everyone is convinced the technology can bridge the gap between laboratory curiosity and industrial application. Growing and maintaining living tissue at scale introduces problems that semiconductor manufacturers never face. Organoids die. They’re sensitive to temperature, contamination, and nutrient supply. And the interface between biological tissue and electronic systems remains primitive — reading and writing signals to neurons with anything approaching the precision of digital circuits is an unsolved engineering challenge.

There are also questions no one has fully answered about what these organoids experience. Ethicists have raised concerns about whether brain organoids could develop some form of consciousness or sensation as they grow more complex. A 2024 report from the National Academies of Sciences, Engineering, and Medicine recommended establishing oversight frameworks for organoid research, acknowledging that current ethical guidelines haven’t kept pace with the science. So the industry may face regulatory friction before it faces technical limits.

Still, the trajectory is clear. FinalSpark claims its biological processors are already up to a million times more energy-efficient than traditional silicon chips for certain operations. That figure deserves scrutiny — lab benchmarks rarely survive contact with real-world conditions — but even if the actual advantage is orders of magnitude smaller, the implications for sustainable computing would be significant.
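
That scrutiny can start with arithmetic. A back-of-envelope comparison, using rough, commonly cited estimates rather than measured values, shows how much the answer depends on what counts as an "operation":

```python
# Back-of-envelope check on biological vs. silicon energy efficiency.
# Every figure below is a rough, commonly cited estimate (an assumption,
# not a measurement), and the result swings by orders of magnitude with
# the definition of an "operation".

BRAIN_WATTS = 20.0
BRAIN_EVENTS_PER_S = 1e15  # synaptic events/s; estimates run ~1e14 to ~1e17

GPU_WATTS = 700.0          # order of a modern data-center accelerator
GPU_FLOPS = 1e15           # order of dense half-precision throughput

brain_j_per_event = BRAIN_WATTS / BRAIN_EVENTS_PER_S  # ~2e-14 J
gpu_j_per_flop = GPU_WATTS / GPU_FLOPS                # ~7e-13 J

print(f"brain: {brain_j_per_event:.0e} J per synaptic event")
print(f"GPU:   {gpu_j_per_flop:.0e} J per FLOP")
print(f"ratio: ~{gpu_j_per_flop / brain_j_per_event:.0f}x in the brain's favor")
# ~35x at these numbers; pushing brain throughput to 1e17 events/s gives
# ~3,500x. Reaching "a million times" requires equating one synaptic event
# with many FLOPs, which is exactly the definitional wiggle room that
# makes such claims hard to verify.
```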

And the applications being discussed go beyond just efficiency. Proponents argue that biological neural networks could excel at pattern recognition, sensory processing, and adaptive learning in ways that digital architectures struggle with despite massive parameter counts. The brain doesn’t just process information efficiently. It processes it differently — using analog signals, massively parallel connections, and mechanisms we still don’t fully understand.

Investment is accelerating. Cortical Labs secured $10 million in funding in 2023. FinalSpark has opened remote access to its Neuroplatform for researchers worldwide. Other players are entering the space, though most remain in stealth. The U.S. Department of Defense has also expressed interest in biocomputing for edge applications where power constraints are severe.

The practical timeline? Long. We’re talking about a technology that can barely play a video game from 1972. Scaling from thousands of neurons to the billions required for meaningful computation presents challenges that no one has a clear roadmap for solving. But the same was true of digital computing in the 1940s, when ENIAC filled a room and could do less than a modern pocket calculator.

What matters now is that the fundamental proof of concept exists. Living neurons can compute. They can learn. They can do it on almost no power. The engineering problems are enormous, the ethical questions are real, and the commercial viability is unproven. But the AI industry’s insatiable appetite for energy has created a problem urgent enough to make biological computing look less like a fringe bet and more like a necessary frontier.



from WebProNews https://ift.tt/3D9cftX

FCC Chair Brendan Carr’s License Threats Over Iran Coverage Signal a New Era of Government Pressure on Broadcast Media

Federal Communications Commission Chairman Brendan Carr has escalated his campaign against broadcast networks, this time threatening the licenses of stations that aired coverage he deemed insufficiently supportive of the Trump administration’s handling of Iran-related developments. The move represents the latest and perhaps most aggressive step yet in a pattern of regulatory intimidation that has alarmed First Amendment advocates, media executives, and constitutional scholars across the political spectrum.

Carr, who was appointed by President Donald Trump, has increasingly wielded the FCC’s licensing authority as a cudgel against media organizations whose editorial choices conflict with the administration’s preferred narratives. According to Business Insider, the FCC chairman specifically targeted broadcast outlets over their coverage of U.S. policy toward Iran, suggesting that certain reporting could jeopardize their ability to operate on the public airwaves.

A Pattern of Regulatory Pressure That Goes Beyond Precedent

The threat against broadcasters over Iran coverage did not emerge in a vacuum. Since assuming the chairmanship, Carr has repeatedly signaled that the FCC would take a more interventionist approach to broadcast content than any of his predecessors in modern memory. Under longstanding FCC practice, license renewals have been treated as largely routine proceedings, with revocations reserved for the most egregious technical or legal violations — not editorial disagreements with the sitting administration.

Yet Carr has turned the license renewal process into something far more politically charged. As Business Insider reported, his public statements have drawn explicit connections between specific news coverage and the potential loss of broadcast licenses, a linkage that previous FCC chairs — both Republican and Democratic — studiously avoided. The implication is clear: networks that produce coverage the administration finds objectionable do so at their own regulatory peril.

The Legal Framework: What the FCC Can and Cannot Do

The FCC’s authority over broadcast licensees is rooted in the Communications Act of 1934, which grants the commission the power to issue, renew, and revoke licenses for use of the public airwaves. The standard for renewal is whether a station has served the “public interest, convenience, and necessity.” Historically, this standard has been interpreted broadly, and outright license denials have been exceedingly rare.

The First Amendment complicates any attempt by the FCC to punish broadcasters for their editorial content. While broadcast media have traditionally received somewhat less First Amendment protection than print media — a distinction rooted in the Supreme Court’s 1969 Red Lion Broadcasting Co. v. FCC decision — the government is still prohibited from engaging in content-based regulation of the press. Legal scholars have argued that threatening license revocation over specific news stories crosses a constitutional line that even the reduced protections afforded to broadcasters do not permit.

Industry Reaction: Fear, Defiance, and Self-Censorship

Inside the broadcast industry, the reaction to Carr’s threats has been a mixture of public defiance and private anxiety. Network executives, speaking on background, have described an atmosphere of uncertainty that has begun to affect editorial decision-making. Some newsroom leaders have acknowledged that the specter of license challenges has prompted more cautious coverage of topics the administration has flagged as sensitive — a chilling effect that media advocates say is precisely the point of Carr’s public statements.

Major media trade organizations have pushed back forcefully. The National Association of Broadcasters has reiterated its position that the FCC should not use its licensing authority to influence news coverage. Press freedom organizations, including the Reporters Committee for Freedom of the Press, have warned that Carr’s approach represents a fundamental threat to the independence of American journalism. “When the government starts telling broadcasters what they can and cannot report, we have crossed into territory that the founders of this country explicitly sought to prevent,” one press freedom advocate told reporters.

The Iran Coverage That Sparked the Latest Confrontation

The specific Iran-related coverage that drew Carr’s ire involved reporting on the Trump administration’s diplomatic and military posture toward Tehran. Several broadcast networks aired segments that included critical analysis of the administration’s strategy, featured commentary from former officials who questioned the approach, and reported on potential consequences of escalation. Carr characterized some of this coverage as misleading and suggested it could constitute a failure to serve the public interest — the legal standard that governs broadcast license renewals.

Critics of Carr’s position have noted that the coverage in question fell well within the bounds of standard journalistic practice. Reporting that includes critical perspectives on government policy is not only permissible but is widely regarded as a core function of a free press. The notion that presenting viewpoints at odds with the administration’s position could endanger a broadcaster’s license has been described by legal experts as both unprecedented and constitutionally suspect.

How This Fits Into the Broader Administration Strategy

Carr’s actions at the FCC are part of a wider pattern of the Trump administration using regulatory and legal mechanisms to pressure media organizations. The administration has pursued or threatened legal action against several major news outlets, and Trump himself has repeatedly called for investigations into networks whose coverage he dislikes. The FCC’s licensing power gives the administration a uniquely potent tool in this effort, because broadcast stations — unlike cable networks, newspapers, or digital media — require government permission to operate.

This dynamic has created a two-tier system in American media, where broadcast outlets face a form of government oversight that their competitors in cable, print, and digital do not. Some analysts have argued that this disparity is increasingly anachronistic in an era when most Americans consume news through platforms that fall outside the FCC’s jurisdiction. Nevertheless, broadcast television remains a significant source of news for millions of Americans, and the threat of license revocation carries enormous financial and operational consequences for station owners.

Constitutional Scholars Sound the Alarm

The legal community’s response to Carr’s threats has been notably bipartisan. Conservative and liberal constitutional scholars alike have expressed concern that using licensing authority to influence editorial content represents a dangerous expansion of government power over the press. Floyd Abrams, one of the nation’s foremost First Amendment attorneys, has previously warned that such tactics, if left unchecked, could fundamentally alter the relationship between the government and the media in the United States.

Some legal experts have suggested that affected broadcasters could challenge any adverse licensing action in federal court, where they would likely argue that the FCC’s actions constitute unconstitutional content-based regulation of speech. Such a case could potentially reach the Supreme Court and force a reexamination of the legal framework governing broadcast regulation — a framework that many scholars believe is overdue for modernization.

What Comes Next for Broadcasters and the FCC

For now, the broadcast industry finds itself in an uncomfortable position. No licenses have actually been revoked, and it remains unclear whether Carr’s threats will translate into formal regulatory action. But the mere possibility has introduced a new variable into the calculations of every broadcast newsroom in the country. Editors and producers must now weigh not only the journalistic merits of a story but also the potential regulatory consequences of airing it — a consideration that would have been unthinkable just a few years ago.

The situation also raises questions about the future of the FCC itself. If the commission’s licensing authority can be wielded as a political weapon, the implications extend far beyond any single administration or any single set of news stories. The precedent being set — or at least being tested — by Carr’s approach could reshape the relationship between the federal government and broadcast media for years to come. Whether the courts, Congress, or the industry itself will push back with sufficient force to prevent that outcome remains an open question, and one that carries significant consequences for the future of press freedom in America.

As the standoff between the FCC and the broadcast industry continues, one thing is clear: the traditional boundaries that separated government regulation from editorial independence are under more strain than at any point in recent memory. The outcome of this confrontation will say a great deal about the durability of the constitutional protections that have long defined the American media system.



from WebProNews https://ift.tt/joiLIYt

Saturday, 14 March 2026

Adobe’s $4.95 Million Settlement Over Hidden Cancellation Fees Exposes the Dark Economics of Subscription Traps

Adobe will pay $4.95 million to settle a federal lawsuit that accused the software giant of burying early termination fees deep within its subscription sign-up process — charges that hit consumers with hundreds of dollars when they tried to cancel plans they didn’t fully understand they’d committed to. The settlement, announced in late June 2025, resolves a case brought by the U.S. Department of Justice on behalf of the Federal Trade Commission, and it marks one of the most significant enforcement actions yet against a major tech company over so-called “dark patterns” in subscription design.

The case dates back to June 2024, when the DOJ filed a complaint alleging that Adobe used deceptive practices to enroll customers in its most lucrative subscription plan — an annual commitment billed monthly — without clearly disclosing the early termination fee (ETF) that kicked in if users tried to cancel before 12 months had elapsed. That fee could reach 50% of the remaining subscription cost, sometimes totaling hundreds of dollars. According to the government’s filing, Adobe hid the ETF disclosure behind optional text boxes and hyperlinks during the sign-up flow, ensuring that most consumers never saw it, as reported by CNET.
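
The arithmetic makes clear why the surprise stung. Under a 50%-of-remaining-balance formula, a quick sketch (with a hypothetical round-number price, not Adobe's actual rate) shows how fast the fee climbs into the hundreds:

```python
# Illustrative early-termination-fee arithmetic under the 50%-of-remaining-
# balance formula described in the complaint. The monthly price here is a
# made-up round number, not Adobe's actual rate.

def early_termination_fee(monthly_price, months_elapsed, term_months=12, rate=0.5):
    """Fee = rate x months remaining on the annual term x monthly price."""
    return rate * (term_months - months_elapsed) * monthly_price

monthly = 60.00  # hypothetical annual-plan-billed-monthly price
for month in (2, 5, 9):
    print(f"cancel after month {month}: fee = ${early_termination_fee(monthly, month):,.2f}")
# cancel after month 2: fee = $300.00
# cancel after month 5: fee = $210.00
# cancel after month 9: fee = $90.00
```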

The numbers tell a damning story. Adobe’s Creative Cloud subscriptions — which include Photoshop, Illustrator, Premiere Pro, and dozens of other professional tools — generate billions in annual recurring revenue. The annual plan billed monthly was the default option presented to new subscribers, and the FTC alleged that Adobe steered customers toward it precisely because it locked them in. Customers who later discovered the ETF and complained were often routed through a convoluted cancellation process designed to retain them. Some reported being transferred between multiple agents, offered temporary discounts, or simply having their cancellation requests ignored.

Under the settlement’s terms, Adobe must pay the $4.95 million penalty and fundamentally restructure how it presents subscription terms to new customers. The company is now required to clearly and conspicuously disclose the existence and amount of early termination fees before a consumer completes enrollment. No more burying the terms in collapsed text or behind hyperlinks. Adobe must also simplify its cancellation process, making it possible for subscribers to cancel through the same digital channels they used to sign up — without being subjected to excessive retention tactics.

Adobe, for its part, did not admit wrongdoing. A company spokesperson told CNET that Adobe had already made changes to its sign-up and cancellation flows before the settlement, and that the company is “committed to ensuring a transparent experience” for its subscribers. The statement was carefully worded, stopping well short of acknowledging the practices described in the complaint.

That’s a familiar posture for companies caught in the FTC’s crosshairs. And it raises a question that industry watchers have been asking with increasing urgency: how many other subscription-based software companies are running essentially the same playbook?

The answer, according to consumer advocates, is a lot of them.

The FTC has been escalating its crackdown on subscription traps for several years. In 2021, the agency issued an enforcement policy statement warning companies that failing to provide simple cancellation mechanisms could violate federal law. Then in 2023, the FTC proposed its “click-to-cancel” rule, which would require that canceling a subscription be as easy as signing up for one. That rule was finalized in late 2024, though legal challenges from industry groups have slowed its full implementation.

The Adobe case fits squarely within this broader regulatory push. But it also stands out because of the company’s market position. Adobe isn’t some fly-by-night subscription box or obscure streaming service. It’s a $200 billion company whose products are standard-issue tools for photographers, designers, filmmakers, marketers, and virtually anyone who works in creative fields. Its shift from perpetual software licenses to a subscription-only model, completed in 2013, was widely studied in business schools as a masterclass in recurring revenue strategy. What the FTC complaint revealed is that the strategy’s success depended, at least in part, on making it very hard to leave.

The early termination fee itself wasn’t illegal. Plenty of companies charge them. The issue was disclosure — or the lack of it. The government alleged that during Adobe’s online enrollment process, the annual-plan-billed-monthly option was presented as the default, with the monthly price prominently displayed. The fact that the customer was committing to a 12-month contract, and that canceling early would trigger a fee equal to half the remaining balance, was disclosed only in fine print that required affirmative clicks to reveal. Most people don’t click. Adobe knew that.

Internal communications cited in the complaint suggested that Adobe employees were aware of widespread customer frustration over the ETF. Customer service logs reportedly showed thousands of complaints from subscribers who felt blindsided by the charges. Some customers reported being charged the fee even after they believed they had successfully canceled. The complaint painted a picture of a company that had optimized its sign-up funnel for conversion while systematically under-investing in transparency.

The $4.95 million penalty is, by any reasonable measure, modest relative to Adobe’s financial scale. The company reported $21.5 billion in revenue for fiscal year 2024, with the overwhelming majority coming from its Digital Media segment, which includes Creative Cloud. A fine of less than $5 million barely registers on a balance sheet that size. Critics have pointed out that the penalty likely represents a fraction of the revenue Adobe earned from the very practices the FTC challenged.

But the injunctive relief — the mandated changes to Adobe’s business practices — may prove more consequential. Requiring clear, upfront disclosure of ETFs and easy cancellation pathways could reduce subscriber lock-in and increase churn rates. For a company whose valuation is built substantially on predictable recurring revenue, even a modest uptick in cancellations could have outsized effects on investor sentiment. Adobe’s stock barely moved on the settlement news, suggesting Wall Street views the changes as manageable. Whether that assessment holds will depend on how the new disclosure requirements affect renewal rates over the next several quarters.
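
The reason churn looms so large is compounding. A quick illustrative calculation, with hypothetical churn rates rather than Adobe's actual figures:

```python
# How small changes in monthly churn compound over a year for a cohort of
# subscribers. The rates are illustrative, not Adobe's actual figures.

def retained_after(months, monthly_churn):
    """Fraction of a cohort still subscribed after the given number of months."""
    return (1 - monthly_churn) ** months

for churn in (0.02, 0.03, 0.04):
    print(f"{churn:.0%} monthly churn -> {retained_after(12, churn):.1%} of the cohort left after a year")
# 2% -> 78.5%, 3% -> 69.4%, 4% -> 61.3%: each extra point of monthly churn
# erases roughly eight to nine points of annual retention.
```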

The settlement also includes a provision requiring Adobe to obtain express informed consent before charging the ETF. That means the company can’t simply point to terms of service that a customer technically agreed to but never read. The consent must be affirmative, specific, and separate from the general enrollment process. This is a higher bar than most subscription companies currently meet, and it could become a template for future FTC enforcement actions against other firms.

So where does this leave Adobe’s competitors? Companies like Microsoft, Autodesk, and Figma all operate subscription models with varying degrees of lock-in. Microsoft 365, for instance, offers both monthly and annual plans, but its cancellation and refund policies are generally considered more straightforward than what Adobe was offering. Autodesk has faced its own share of customer complaints about subscription pricing and cancellation difficulties, though it hasn’t attracted the same level of regulatory scrutiny. Figma, which Adobe attempted to acquire in a $20 billion deal that was abandoned in 2023 after antitrust opposition, operates on a more flexible subscription model that doesn’t impose early termination fees on most plans.

The broader software industry is watching this case closely. The subscription model has become the dominant business structure for software companies of all sizes, from enterprise giants to solo-developer SaaS tools. Recurring revenue is prized by investors because it’s predictable and compounds over time. But the Adobe settlement is a reminder that the strategies companies use to maintain that predictability — long-term commitments, auto-renewals, difficult cancellation processes, and opaque fee structures — carry regulatory risk. And that risk is growing.

Consumer advocacy groups have praised the settlement while noting its limitations. The National Consumer Law Center said the case sends a clear message that subscription companies cannot hide material terms from consumers, but argued that the financial penalty should have been larger to serve as a meaningful deterrent. The Electronic Frontier Foundation, which has been vocal about dark patterns in software design, called the settlement “a step in the right direction” but urged the FTC to pursue more aggressive remedies in future cases.

For Adobe’s millions of individual subscribers, the practical impact is already visible. The company’s current sign-up flow now displays the annual commitment and associated ETF more prominently than it did a year ago. The cancellation process has been streamlined, with fewer retention screens and a clearer path to completing a cancellation online. These changes were implemented before the settlement was finalized, likely as a strategic move to demonstrate good faith and limit the scope of any court-ordered remedies.

Still, the underlying tension hasn’t been resolved. Adobe’s most popular Creative Cloud plans remain annual commitments billed monthly, and the ETF still exists — it’s just disclosed more clearly now. Customers who want true month-to-month flexibility must choose a different plan that costs significantly more per month. The economics of the subscription are designed to push users toward the annual commitment. That’s not illegal. But it does mean the company’s incentives remain fundamentally misaligned with consumers who value flexibility.

And this is the core issue the FTC is trying to address, not just with Adobe but across the subscription economy. The agency’s position is that consumers should understand what they’re agreeing to before they agree to it, and they should be able to exit as easily as they entered. Simple principles. But implementing them in an industry that has spent two decades engineering every friction point to maximize retention is proving to be a protracted fight.

The Adobe settlement won’t be the last word on subscription transparency. It may not even be the most important one. But for an industry that has grown accustomed to treating customer inertia as a feature rather than a bug, it’s a $4.95 million reminder that regulators are paying attention — and that the cost of opacity is going up.



from WebProNews https://ift.tt/ZesCzuJ

Artemis II and the Calculus of Acceptable Risk: Why Sending Humans Back to the Moon Is a Bet NASA Can’t Afford to Lose

Four astronauts are preparing to fly around the Moon later this year. It will be the first time human beings have traveled beyond low Earth orbit since December 1972, when Apollo 17’s crew splashed down in the Pacific and the curtain fell on an era. More than half a century of silence followed. Now NASA is attempting something that sounds deceptively simple: send a crew on a loop around the Moon and bring them home. No landing. No surface operations. Just a flyby.

Don’t let the simplicity fool you.

Artemis II, currently targeting a launch no earlier than late 2025 or early 2026 depending on readiness reviews, represents one of the most consequential test flights in NASA’s history. The mission will be the first crewed flight of the Space Launch System rocket and the Orion spacecraft together — a combination that flew once before, uncrewed, during Artemis I in late 2022. That mission revealed problems. The heat shield’s ablative material eroded in ways engineers didn’t predict. Separation bolts in the shield eroded more than models said they should. And the life support system, which had no humans aboard to stress-test it, remains largely unproven in the deep-space environment where Artemis II will operate.

As Ars Technica reported in a detailed examination of the mission’s risk profile, the fundamental question hanging over Artemis II isn’t whether NASA can pull it off — it’s how much risk the agency and its astronauts are actually accepting, and whether that level of risk is being communicated honestly.

The heat shield issue alone would give any engineer pause. During Artemis I’s return, Orion’s Avcoat heat shield — a material with Apollo-era heritage — experienced what NASA has described as unexpected charring patterns and loss of material. Chunks of the ablative coating came off in ways that thermal models hadn’t predicted. NASA spent more than a year investigating, ultimately attributing the anomaly to gases trapped within the heat shield material that expanded and caused pieces to liberate during the intense heating of reentry. The agency says it now understands the phenomenon and has determined that Artemis II can fly safely with the existing heat shield design, though it has committed to a redesigned heat shield for Artemis III and beyond.

That’s a significant caveat. If the heat shield design needs to be changed for future missions, the implicit admission is that the current design is not optimal. NASA’s position is that the material loss observed on Artemis I, while unexpected, did not compromise the structural integrity of the heat shield and that adequate margins remain for a crewed reentry. Engineers have run additional thermal analyses and ground tests. They believe the risk is manageable.

Belief and certainty are different things.

The crew — NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with Canadian Space Agency astronaut Jeremy Hansen — will be flying a spacecraft that has carried humans exactly zero times. Every system that interacts with a crew will be operating in its true environment for the first time. The environmental control and life support system. The crew displays and interfaces. The manual flight control capability, which the astronauts are expected to demonstrate during a segment of the mission. The waste management system. All of it untested with actual human occupants in the thermal and radiation conditions of cislunar space.

This is not unprecedented in the history of spaceflight. Apollo’s first crewed mission beyond Earth orbit was Apollo 8 in December 1968, which sent three astronauts around the Moon on a spacecraft that had only flown with crew once before, in low Earth orbit. The Saturn V rocket that carried them had launched just twice — once successfully, once with significant problems including engine failures and structural oscillations. NASA made the call to go anyway, driven by Cold War urgency and intelligence suggesting the Soviet Union might attempt a circumlunar flight first.

The risk tolerance was different then. As Ars Technica noted, some estimates placed the probability of crew loss on Apollo missions in the range of 5% per flight. NASA administrator Jim Webb reportedly believed there was a one-in-four chance of losing a crew during the program. The astronauts themselves understood this. They flew anyway.

Today’s NASA operates under a fundamentally different set of expectations. The agency’s own retrospective probabilistic risk assessment of the Space Shuttle, conducted after the Columbia disaster, put the loss-of-crew odds for the earliest shuttle flights at roughly 1 in 10, improving to about 1 in 90 only by the program’s final missions. For the Commercial Crew Program — SpaceX’s Crew Dragon and Boeing’s Starliner — NASA set a requirement of no worse than a 1 in 270 chance of loss of crew. The agency has not publicly released a comparable number for Artemis II.
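
Per-flight numbers like these matter because they compound across a program. Assuming each flight is an independent event, the chance of at least one crew loss in n flights is 1 - (1 - p)^n, and the difference between the standards becomes vivid quickly:

```python
# Cumulative loss-of-crew probability across a program, under the standard
# simplifying assumption that flights are independent events.

def prob_of_a_loss(per_flight_risk, n_flights):
    return 1 - (1 - per_flight_risk) ** n_flights

standards = {
    "Apollo-era estimate (~1 in 20)": 1 / 20,
    "Earliest shuttle flights, retrospective (~1 in 10)": 1 / 10,
    "Late shuttle flights (~1 in 90)": 1 / 90,
    "Commercial Crew requirement (1 in 270)": 1 / 270,
}
for label, risk in standards.items():
    print(f"{label}: {prob_of_a_loss(risk, 10):.1%} chance of losing a crew in 10 flights")
# ~1 in 10 per flight -> ~65% over ten flights; 1 in 270 -> ~3.6%. The
# per-flight numbers sound abstract; compounded, they are anything but.
```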

That silence is telling.

NASA officials have said publicly that they believe Artemis II’s risk level is acceptable and consistent with the agency’s standards for human spaceflight. But the specifics remain closely held. Part of the reason is institutional: publishing a precise probability invites public debate about whether that number is “safe enough,” a conversation NASA would rather have internally. Part of it is technical: the models used to generate these numbers carry their own uncertainties, and a single figure can be misleading without extensive context.

But there’s also a political dimension. Artemis is the centerpiece of NASA’s human exploration strategy. Billions of dollars have been spent. Contracts with Boeing, Lockheed Martin, Northrop Grumman, and others are deeply embedded in the industrial base. Congressional delegations in Alabama, Louisiana, Florida, Texas, and other states have strong interests in the program’s continuation. Acknowledging elevated risk — even risk that falls within historically accepted bounds — creates ammunition for critics who argue the program is too expensive, too slow, or too dangerous.

And Artemis has no shortage of critics. The SLS rocket, a government-designed and government-built vehicle derived from Space Shuttle components, costs roughly $2.5 billion per launch by most independent estimates, though NASA has resisted confirming a precise per-flight figure. It is expendable — each rocket is used once and destroyed. SpaceX’s Starship, by contrast, is designed to be fully reusable and, if it achieves its cost targets, could launch for a fraction of SLS’s price. Starship is, in fact, a critical part of the Artemis architecture: a modified version called the Human Landing System is supposed to carry astronauts from lunar orbit to the surface on Artemis III.

So NASA finds itself in the awkward position of relying on two very different vehicles built by two very different philosophies — one a government cost-plus megaproject, the other a commercial venture iterating through rapid prototyping and occasional spectacular failures — to accomplish a single goal. The tension between these approaches is real and ongoing.

None of this changes the immediate question facing the Artemis II crew and the engineers supporting them. The mission profile itself is relatively conservative by Apollo standards. Orion will launch atop SLS from Kennedy Space Center’s Pad 39B, spend about a day checking out its systems in a high Earth orbit raised by the SLS upper stage, perform a trans-lunar injection burn with its own service module engine, coast to the Moon, fly a free-return trajectory that swings behind the lunar far side, and return to Earth for a splashdown in the Pacific. Total mission duration is approximately 10 days. No orbital insertion at the Moon. No docking with another spacecraft. No landing attempt.

The free-return trajectory is a deliberate risk-reduction choice. If the Orion spacecraft’s service module engine fails after the trans-lunar injection burn, the laws of orbital mechanics will bring the capsule back to Earth without any additional propulsive maneuver. Apollo 13 used a variant of this principle to survive its catastrophic oxygen tank explosion in 1970. It’s a built-in safety net, and it’s one of the reasons NASA chose this mission profile for the first crewed flight.
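
The physics of that safety net can be sanity-checked with freshman orbital mechanics. Because the post-TLI orbit remains gravitationally bound to Earth, the capsule must coast back to perigee even if its engine never fires again. The toy calculation below uses a simplified two-body model with approximate numbers; the real trajectory design also exploits lunar gravity, which this ignores entirely:

```python
import math

# A toy two-body check on the free-return "safety net": after trans-lunar
# injection, Orion's orbit is still gravitationally bound to Earth, so with
# no further burns it coasts out past the Moon and falls back to perigee.
# All numbers are approximate and illustrative.

MU_EARTH = 3.986e14      # m^3/s^2, Earth's gravitational parameter
R_PERIGEE = 6.578e6      # m, roughly a 200 km altitude perigee
V_POST_TLI = 10_920.0    # m/s, rough speed at perigee after the TLI burn

v_escape = math.sqrt(2 * MU_EARTH / R_PERIGEE)      # ~11,010 m/s
energy = V_POST_TLI**2 / 2 - MU_EARTH / R_PERIGEE   # specific orbital energy
a = -MU_EARTH / (2 * energy)                        # semi-major axis (energy < 0)
apogee_km = (2 * a - R_PERIGEE) / 1000
period_days = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 86_400

print(f"TLI speed {V_POST_TLI/1000:.2f} km/s vs escape {v_escape/1000:.2f} km/s")
print(f"specific energy: {energy:.2e} J/kg (negative => still bound to Earth)")
print(f"apogee ~{apogee_km:,.0f} km, free coast out and back ~{period_days:.1f} days")
# Just below escape speed, the ellipse reaches roughly lunar distance and
# closes back on itself in about 10 days -- the shape of the Artemis II
# mission falls out of the energy equation.
```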

But the free-return trajectory doesn’t protect against every failure mode. A breach of the crew cabin’s pressure vessel would be fatal regardless of trajectory. A failure of the heat shield during reentry — the scenario that has drawn the most scrutiny — would be catastrophic. A loss of electrical power or life support could turn a 10-day mission into a survival scenario with very limited margins. And the radiation environment between Earth and the Moon, while generally manageable for a short-duration mission, poses a risk during solar particle events that could deliver dangerous doses to the crew if a major solar flare occurs during the transit.

Orion does carry a small radiation shelter area where crew members can huddle during a solar event, using equipment and supplies as additional shielding. NASA has studied this scenario extensively. The protection is adequate for most events but not for the most extreme solar particle storms, which are rare but not impossible. The mission is timed to avoid the predicted peak of Solar Cycle 25, though solar activity forecasting remains an imprecise science.

There’s another factor that receives less public attention: the abort options during launch and ascent. Like Apollo before it, Orion carries a Launch Abort System — a set of solid rocket motors mounted on a tower atop the capsule that can pull it free of a failing booster during the first couple of minutes of flight. The tower is jettisoned early in ascent; after that, Orion relies on its own service module engine and the separation capability from SLS to execute abort scenarios at various points during ascent. These abort modes have been analyzed extensively but never tested in an actual emergency. The Launch Abort System has flown twice: a pad abort test in 2010 at White Sands, New Mexico, and the Ascent Abort-2 test in 2019, which fired the system at maximum dynamic pressure atop a booster launched from Cape Canaveral. Both worked. But an abort from a rocket that is actively failing while traveling at high speed through the atmosphere has never been demonstrated.

This is standard for new crewed vehicles. SpaceX’s Crew Dragon conducted an in-flight abort test in January 2020, deliberately triggering separation from a Falcon 9 at the point of maximum aerodynamic pressure. It succeeded. Boeing’s Starliner has not conducted an in-flight abort test, though its pad abort test in 2019 experienced a partial parachute deployment failure. NASA accepted the risk of flying Starliner without a dedicated in-flight abort demonstration.

Risk acceptance is, ultimately, a human decision made under uncertainty. Engineers can model failure scenarios, calculate probabilities, test components, and run simulations. But spaceflight — particularly on new vehicles — always carries unknowns that models can’t fully capture. The “unknown unknowns,” as former Defense Secretary Donald Rumsfeld once put it in a different context, are what keep flight directors awake at night.

The astronauts themselves appear to accept this reality with the equanimity characteristic of their profession. Reid Wiseman, the mission commander, is a Navy test pilot and veteran of a long-duration stay on the International Space Station. Victor Glover flew to the ISS aboard SpaceX’s Crew Dragon on its first operational mission. Christina Koch holds the record for the longest single spaceflight by a woman. Jeremy Hansen, while a spaceflight rookie, is a former CF-18 fighter pilot. These are people who have spent their careers evaluating and accepting calculated risk.

But their willingness to fly does not absolve NASA of the responsibility to be transparent about what that risk actually is. As Ars Technica’s analysis emphasized, the agency’s reluctance to discuss specific risk numbers for Artemis II stands in contrast to the relative openness it has shown about risk assessments for other programs. After Columbia, NASA published detailed probabilistic risk assessments for remaining shuttle flights. The Commercial Crew Program’s safety requirements, including the 1-in-270 loss-of-crew threshold, are public. For Artemis, the numbers are harder to find.

One reason may be that the numbers aren’t flattering. A new rocket, a spacecraft with one uncrewed test flight, a heat shield that behaved unexpectedly, life support systems untested in their operational environment, and abort modes that have never been exercised in real conditions — all of these factors push the probability of loss of crew higher than what NASA has accepted for routine ISS crew rotation flights. How much higher is the question NASA doesn’t seem eager to answer publicly.

It is worth placing this in historical context. Every first crewed flight of a new American spacecraft has carried elevated risk. John Glenn’s Mercury-Atlas 6 mission in 1962. Gus Grissom and John Young’s Gemini 3 in 1965. The first crewed Apollo flight, Apollo 7, in 1968 — which came after the Apollo 1 fire killed three astronauts during a ground test. The first Space Shuttle mission, STS-1, in 1981, which launched with a crew aboard a vehicle that had never flown to space at all. Doug Hurley and Bob Behnken’s Demo-2 mission on Crew Dragon in 2020. In each case, the crew and the agency accepted risk that was higher than what subsequent missions would carry, because someone has to go first.

Artemis II is that flight for the Artemis program. And the stakes extend beyond the four people in the capsule. A successful mission validates the SLS-Orion architecture, builds confidence for the far more complex Artemis III lunar landing mission, and sustains political and public support for a program that has already consumed decades and tens of billions of dollars. A failure — particularly a fatal one — would be devastating. Not just for the families of the crew, but for NASA as an institution and for the broader cause of human space exploration. The political fallout from a crew loss on Artemis II would almost certainly ground the program for years, if not permanently.

NASA knows this. The agency’s leadership, from Administrator Bill Nelson on down, has repeatedly stated that they will not fly until they are ready and that safety is the top priority. These are the right words. The question is whether the institutional pressures — schedule, budget, political expectations, contractor relationships — create subtle incentives to declare readiness before every concern has been fully resolved.

The history of spaceflight suggests this is not a theoretical concern. The Rogers Commission found that NASA managers overrode engineering objections to launch Challenger in cold weather. The Columbia Accident Investigation Board found that organizational culture and schedule pressure contributed to the decision to fly with known foam-shedding risks. In both cases, the agency’s own internal processes failed to prevent catastrophe.

NASA has implemented significant safety reforms since Columbia, including the creation of an independent safety oversight structure and a stronger role for the chief safety officer. The Aerospace Safety Advisory Panel, an independent body that reports to Congress and the NASA administrator, has been closely monitoring Artemis development and has raised concerns about schedule pressure and workforce fatigue at various points. Whether these safeguards are sufficient to prevent the kind of normalization of risk that contributed to past disasters is something that can only be judged in retrospect.

For now, the Artemis II crew continues to train. Engineers continue to analyze data, close out action items, and prepare the hardware at Kennedy Space Center. The SLS rocket and Orion spacecraft are being stacked and tested. Review boards will convene. Flight readiness reviews will be conducted. And at some point, if all the boxes are checked and all the concerns are addressed to the satisfaction of the people responsible for the decision, four human beings will strap into a capsule atop the most powerful rocket in operation, light the engines, and head for the Moon.

It will be dangerous. How dangerous, exactly, is something NASA would prefer to discuss in qualitative rather than quantitative terms. The astronauts will trust the engineers. The engineers will trust their analysis. And the rest of us will watch, knowing that for all the technology and all the testing and all the reviews, spaceflight remains an inherently hazardous undertaking — one where the margin between triumph and tragedy can be measured in millimeters of heat shield ablator or milliseconds of reaction time.

Fifty-four years is a long time to be away from the Moon. Getting back was never going to be easy. And it was never going to be safe.



from WebProNews https://ift.tt/RKxHQjS

Friday, 13 March 2026

Amazon’s Prime Day Is Moving to October — and the Entire Retail Calendar May Never Be the Same

Amazon is preparing to uproot its signature shopping event from its familiar July perch and transplant it into October, a move that would place the e-commerce giant’s most powerful promotional weapon directly in the path of the holiday shopping season. The shift, if executed as reported, wouldn’t just alter Amazon’s own calendar. It would send shockwaves through the entire retail industry, forcing competitors to recalibrate their strategies during the most consequential selling period of the year.

According to Digital Trends, Amazon has informed some sellers that Prime Day 2025 will take place in October rather than July. The report, which cites communications sent to third-party merchants on the platform, suggests Amazon is making this calendar change amid broader economic turbulence — specifically, the ongoing uncertainty surrounding U.S. tariff policy and its cascading effects on consumer goods pricing.

That’s a big deal. Prime Day has become one of the largest online shopping events on the planet, routinely generating billions of dollars in sales over its 48-hour window. Moving it from the relative calm of midsummer into the opening weeks of the holiday retail sprint represents a fundamental rethinking of how Amazon deploys its most potent demand-generation tool.

From Summer Spectacle to Holiday Accelerant

Prime Day launched in 2015 as a 24-hour event celebrating Amazon’s 20th anniversary. It was deliberately positioned in July — a period historically devoid of major retail events — to create a shopping moment where none existed. The strategy worked brilliantly. What started as a single day of deals ballooned into a multi-day extravaganza that competitors felt compelled to match. Target created “Deal Days.” Walmart rolled out competing sales. Best Buy jumped in. July became, improbably, one of the biggest retail moments of the year.

But Amazon has experimented with the timing before. In 2020, the COVID-19 pandemic forced Prime Day into October, where it landed on October 13-14. The company saw strong results. In 2022, Amazon introduced a second Prime-branded October event, initially called the “Prime Early Access Sale” and rebranded as “Prime Big Deal Days” the following year, essentially giving itself two bites at the apple. That fall event has continued annually since.

Now, rather than running two separate events, Amazon appears ready to consolidate its firepower into a single October push. The timing is strategic on multiple levels. An October Prime Day would land just weeks before Black Friday and Cyber Monday, effectively extending the holiday shopping season by a full month. For consumers already anxious about rising prices due to tariffs on Chinese imports, an early opportunity to lock in deals could prove irresistible.

And for Amazon, the math is straightforward. Holiday quarter sales dwarf every other period. Placing Prime Day at the front end of that cycle could pull forward demand, capture early holiday budgets, and establish Amazon as the default starting point for gift shopping. It’s a land grab for consumer attention at the most valuable time of year.

The tariff angle can’t be overstated. With the Trump administration’s trade policies creating genuine uncertainty about the cost of electronics, apparel, toys, and household goods — categories that dominate Prime Day — Amazon may be calculating that October offers a window where current inventory can still be sold at attractive prices before tariff-driven cost increases fully hit shelves. Sellers who spoke to Digital Trends indicated that the shift was communicated in the context of these macroeconomic pressures.

There’s also a defensive dimension. Temu and Shein, the Chinese-origin discount platforms, have been aggressively courting American consumers with rock-bottom pricing. Both companies have seen their cost advantages threatened by new tariff structures targeting low-value imports from China, including the elimination of the de minimis exemption that allowed packages under $800 to enter the U.S. duty-free. Amazon may see an October Prime Day as a chance to reassert dominance over deal-seeking shoppers before those competitors can adjust their own holiday strategies.

What This Means for Sellers, Competitors, and the Consumer

For Amazon’s third-party sellers — who now account for more than 60% of units sold on the platform — the shift introduces both opportunity and logistical complexity. Sellers typically spend months preparing inventory, negotiating deals, and planning advertising budgets around Prime Day. Moving the event by three months compresses timelines for some and extends them for others. Sellers with holiday-oriented products may welcome the change. Those who relied on a July sales spike to fund second-half inventory purchases could find themselves squeezed.

Advertising costs on Amazon’s platform, already elevated during Prime Day, could reach new heights in October. Amazon’s advertising business generated over $14 billion in Q4 2024 alone. An October Prime Day would supercharge that figure by layering Prime Day ad spending on top of already-intense holiday advertising demand. For smaller sellers with limited budgets, competing for visibility could become significantly more expensive.

Competitors face an uncomfortable choice. When Prime Day sat in July, rival retailers could counter-program with their own summer sales while still preserving their holiday playbooks. An October Prime Day forces them to either match Amazon’s timing — pulling their own holiday promotions earlier — or risk losing early-season shoppers entirely. Target, Walmart, and Best Buy will almost certainly respond with competing events, further compressing the promotional calendar and potentially training consumers to expect deep discounts even earlier than Black Friday.

This acceleration of holiday discounting has a downside. Margin pressure. Retailers already operate on thin margins during the holiday quarter, and starting the promotional arms race in October rather than late November could erode profitability across the industry. But few retailers can afford to sit out the fight.

For consumers, the picture is more straightforward. More deals, earlier. An October Prime Day means shoppers can spread holiday spending over a longer period, potentially avoiding the frantic compressed spending of late November and December. In a year when tariff-driven price increases are a genuine concern, locking in prices early could represent real savings.

But there’s a psychological element too. Amazon has spent a decade training consumers to associate Prime Day with summer. Changing that association isn’t trivial. The July event carried a distinct identity — a manufactured shopping holiday that felt separate from the traditional retail calendar. Folding it into the holiday season risks making Prime Day feel less like a unique event and more like just another pre-Black Friday sale. Whether that matters to Amazon’s bottom line is debatable. The company has never been sentimental about branding when money is on the table.

Amazon hasn’t officially confirmed the October date publicly as of this writing. The company tends to announce Prime Day details relatively close to the event itself, and seller communications don’t always reflect final decisions. But the pattern of evidence — the 2020 precedent, the success of October’s Big Deal Days, the tariff pressures, the competitive dynamics with Chinese discount platforms — all point in the same direction.

Wall Street will be watching closely. Amazon’s stock has been sensitive to any signals about consumer demand strength, particularly as recession fears and trade war anxieties weigh on sentiment. A strong October Prime Day could provide a powerful data point about the health of the American consumer heading into the holidays. A weak one could amplify concerns.

The Bigger Picture: Retail’s Calendar Is Up for Grabs

What Amazon is doing here extends beyond a single event. It reflects a broader truth about modern retail: the traditional calendar — built around back-to-school in August, Black Friday in November, and post-Christmas clearance in January — is increasingly irrelevant. Amazon, more than any other company, has the power to create shopping moments at will. Prime Day proved that in July. Now the company is betting it can do even more damage in October.

The implications ripple outward. If October becomes the new starting gun for holiday shopping, brands will need to have holiday inventory in warehouses by September. Marketing campaigns will need to launch earlier. Supply chains will need to accelerate. The entire rhythm of retail planning shifts forward.

So does Amazon risk cannibalizing its own Black Friday and Cyber Monday sales? Possibly. But the company has shown repeatedly that expanding the total number of shopping occasions tends to grow the overall pie rather than simply redistribute existing demand. Prime Day didn’t reduce holiday spending when it launched in July. It created net new spending. Amazon is betting the same logic holds when the event moves closer to the holidays.

There’s one more factor worth considering. Amazon’s physical retail ambitions — Whole Foods, Amazon Fresh, Amazon Go — have underperformed expectations. The company’s strength remains overwhelmingly online. An October Prime Day reinforces that advantage at precisely the moment when brick-and-mortar retailers are gearing up for their strongest period. It’s a reminder of where Amazon’s real power lies.

The July Prime Day isn’t dead yet. Amazon could still run a smaller summer event or introduce a different promotional vehicle for the middle of the year. But the center of gravity is shifting. And in retail, where Amazon goes, everyone else eventually follows — whether they want to or not.



from WebProNews https://ift.tt/CPvR98y

When the Algorithm Gets It Wrong: An Innocent Grandmother Jailed for Months Exposes the Terrifying Fragility of AI-Powered Surveillance

Sandra Barker spent 39 days in a North Dakota jail. She was 56 years old, a grandmother, and she had done nothing wrong.

The case against her began not with a detective’s hunch or a witness tip but with an algorithm — an artificial intelligence system deployed by the state to detect Medicaid fraud. The AI flagged Barker’s billing records as suspicious. Based largely on that automated output, North Dakota’s Bureau of Criminal Investigation arrested her, and prosecutors charged her with multiple felonies. She faced years in prison. And the machine was wrong.

The story, first reported in detail by the Grand Forks Herald, is more than one woman’s nightmare. It’s a warning shot about what happens when governments lean on artificial intelligence to police their citizens — and what happens when no one checks the machine’s work before lives are destroyed.

Barker worked as a personal care assistant in North Dakota, providing in-home services to people on Medicaid. The state’s Department of Human Services had contracted with Conduent, a technology services company, to process Medicaid claims and, critically, to use AI-driven analytics to identify potential fraud. The system flagged Barker. Investigators took the flag and ran with it, building a criminal case that alleged she had billed for services she never provided.

Except she had provided them.

According to the Herald’s reporting, the AI system’s analysis contained significant errors. The algorithm apparently failed to account for legitimate billing patterns and misinterpreted data in ways that made lawful claims look fraudulent. Barker was arrested in 2023, booked into the Ward County Jail, and held for 39 days before she could post bond. She lost income. She lost time with her grandchildren. She lost her sense of safety in a country that promises its citizens are innocent until proven guilty.

The charges were eventually dropped. But “eventually” is doing enormous work in that sentence. For months, Barker lived under the weight of felony accusations, her reputation in tatters, her freedom conditional on a court’s calendar. All because a computer said she was a criminal.

The Machinery of Automated Suspicion

North Dakota’s case isn’t an isolated incident. It sits at the intersection of two accelerating trends: the expansion of government surveillance infrastructure and the increasing reliance on AI to interpret the data that infrastructure collects. Across the United States, federal and state agencies are deploying algorithmic tools to monitor benefits programs, tax filings, immigration status, and criminal behavior. The appeal is obvious. These systems can process millions of records in hours, flagging anomalies that would take human auditors months to find. They promise efficiency. They promise savings. What they don’t promise — and can’t guarantee — is accuracy.

The problems are well-documented. In Michigan, an automated fraud-detection system for unemployment insurance falsely accused more than 40,000 people of fraud between 2013 and 2015, according to reporting by the Detroit Free Press. The state’s own review later found a 93% error rate. People had wages garnished. Some lost their homes. In the Netherlands, a tax authority scandal involving an algorithmic fraud-detection system that disproportionately targeted minority families brought down the entire Dutch government in 2021.

And yet the tools keep proliferating. The IRS has invested in AI to detect tax fraud. States use predictive algorithms to flag child welfare cases. Police departments across the country employ facial recognition, predictive policing software, and automated license plate readers that log the movements of millions of vehicles daily. The surveillance net grows wider and finer simultaneously, and AI is the engine pulling it taut.

The fundamental problem isn’t that these systems exist. It’s that they’re treated as authoritative when they are, by design, probabilistic. An AI fraud-detection model doesn’t determine guilt. It calculates likelihood. It produces a score, a flag, a recommendation. But somewhere between the algorithm’s output and a prosecutor’s charging decision, that probability hardens into certainty. The flag becomes the case. The score becomes the evidence. And the human beings who are supposed to exercise judgment — investigators, prosecutors, judges — defer to the machine.
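To see how thin the machine’s actual claim is, here is a minimal sketch, in Python, of what a fraud-detection model emits. Every feature, weight, and threshold below is invented for illustration; none of it is Conduent’s system or any real vendor’s:

    # Minimal sketch of what a fraud-detection model actually outputs: a
    # probability, not a verdict. Every feature, weight, and threshold here
    # is invented for illustration.
    import math

    def fraud_score(features):
        # Hypothetical learned weights; a real model would have thousands.
        weights = {"claims_per_day": 0.8, "overlapping_hours": 1.5, "round_amounts": 0.4}
        bias = -3.0
        z = bias + sum(w * features.get(name, 0.0) for name, w in weights.items())
        return 1.0 / (1.0 + math.exp(-z))  # squash into a 0-to-1 probability

    claim = {"claims_per_day": 2.0, "overlapping_hours": 2.0, "round_amounts": 1.0}
    p = fraud_score(claim)  # z = 2.0 here, so p is about 0.88

    THRESHOLD = 0.7  # chosen by the system's operator, not by the math
    if p >= THRESHOLD:
        print(f"FLAG for human review (score {p:.2f})")  # a lead, not a finding
    else:
        print(f"no flag (score {p:.2f})")

Nothing in that output is a finding of fact. The threshold is a policy choice, and the probability is an estimate built from patterns in past data. A flag is where an investigation should begin, not where it should end.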

That’s what appears to have happened to Sandra Barker. The Bureau of Criminal Investigation received the AI’s output and, according to the Herald’s account, conducted an investigation that leaned heavily on the algorithmic findings without sufficiently verifying them against ground truth. Nobody knocked on enough doors. Nobody cross-referenced enough timesheets. The machine said fraud, so fraud it was.

This pattern has a name in academic circles: automation bias. It’s the well-documented tendency of human decision-makers to favor suggestions from automated systems, even when contradictory evidence is available. Studies in aviation, medicine, and criminal justice have repeatedly shown that when a computer says one thing and a human’s own assessment says another, the human tends to go with the computer. In low-stakes environments, this might mean a slightly less optimal route on your GPS. In criminal justice, it means an innocent woman in a jail cell.

The implications extend far beyond individual cases. Mass surveillance systems powered by AI create what scholars at the AI Now Institute have called an “asymmetry of power” — the state knows everything about you, and you know nothing about the system judging you. When Sandra Barker was arrested, she had no way to examine the algorithm that flagged her. She couldn’t challenge its assumptions, audit its training data, or question its methodology. She was confronting an accuser she couldn’t see, built by engineers she’d never meet, operating under logic no one in the courtroom fully understood.

This opacity is a feature, not a bug, for the companies that build these systems. Conduent, the firm involved in North Dakota’s Medicaid processing, has faced scrutiny in multiple states. In 2023, the Associated Press reported on widespread problems with Conduent’s Medicaid eligibility systems in several states, including Texas and Indiana, where technical failures led to eligible people being wrongly denied coverage. The company has maintained that its systems work as designed and that errors are the responsibility of the state agencies that deploy them. That defense — it’s not our fault how they use it — is a recurring theme in the AI accountability vacuum.

No one owns the mistake. The algorithm’s developer says the tool is only advisory. The government agency says it relied on the developer’s technology. The prosecutor says she relied on the investigators. The investigators say they relied on the data. And the person whose life gets wrecked has no one to hold accountable and no clear path to make herself whole.

Barker’s attorney has indicated she may pursue legal action against the state, according to the Grand Forks Herald. But even successful lawsuits don’t fix the underlying architecture. The systems remain in place. The data keeps flowing. The algorithms keep flagging.

So where are the guardrails? In theory, the justice system itself is supposed to be the check. Prosecutors have ethical obligations to verify evidence before filing charges. Judges are supposed to scrutinize the basis for arrests and detention. Defense attorneys are supposed to challenge the state’s case. But these human safeguards are under enormous strain. Public defenders carry crushing caseloads. Prosecutors face political pressure to show results. And very few lawyers or judges have the technical literacy to meaningfully interrogate an AI system’s output.

Some states are beginning to act. Colorado passed a law in 2024 requiring impact assessments for high-risk AI systems, including those used in government decision-making. The European Union’s AI Act, which began taking effect in stages this year, classifies law enforcement and benefits-administration AI as high-risk and imposes transparency and accuracy requirements. But in most of the United States, there is no legal framework specifically governing how AI-generated evidence or AI-driven investigations must be validated before they can be used to deprive someone of liberty.

That gap is staggering when you consider the scale. The federal government processes roughly 100 million Medicaid claims per month. The IRS handles more than 150 million individual tax returns annually. The Department of Homeland Security monitors billions of border-crossing and immigration records. Each of these systems increasingly relies on automated analysis. Each generates flags that can trigger investigations, audits, denials, and arrests. The denominator is enormous: 100 million claims a month is well over a billion claims a year, so even a small error rate of, say, 1% would mean millions of wrongly flagged claims, and hundreds of thousands of people wrongly targeted, every year.
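The arithmetic is worth making explicit. The sketch below runs the numbers under assumed, deliberately generous inputs; the fraud rate, detection rate, and false-positive rate are illustrative guesses, not audited figures:

    # Back-of-the-envelope math on automated flags at scale. Every input is
    # an assumption chosen for illustration, not an audited figure.
    claims_per_year = 100_000_000 * 12   # ~100M claims/month, per the article
    fraud_rate = 0.01                    # assume 1% of claims are actually fraudulent
    true_positive_rate = 0.90            # assume the model catches 90% of real fraud
    false_positive_rate = 0.01           # assume it wrongly flags 1% of honest claims

    fraudulent = claims_per_year * fraud_rate
    honest = claims_per_year - fraudulent

    true_flags = fraudulent * true_positive_rate    # real fraud, correctly flagged
    false_flags = honest * false_positive_rate      # honest claims, wrongly flagged

    precision = true_flags / (true_flags + false_flags)
    print(f"false flags per year:      {false_flags:,.0f}")   # ~11.9 million
    print(f"flags that are real fraud: {precision:.0%}")      # ~48%

Under those assumptions, roughly half of all flags point at honest claims, nearly twelve million of them a year.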

And error rates in these systems are rarely as low as 1%.

The National Institute of Standards and Technology has repeatedly found significant accuracy disparities in facial recognition systems, with error rates for Black and Native American faces running 10 to 100 times higher than for white faces. Predictive policing algorithms trained on historically biased arrest data tend to direct police disproportionately to minority neighborhoods, creating feedback loops that reinforce the very disparities they reflect. Fraud-detection models trained on incomplete or skewed datasets — like the one that apparently ensnared Barker — can systematically misidentify legitimate behavior as criminal.

The people most likely to be harmed are those least equipped to fight back. They’re low-income workers like Barker, who depend on government programs and can’t afford private attorneys. They’re immigrants whose visa status depends on opaque algorithmic risk scores. They’re residents of over-policed neighborhoods where every data point feeds a system primed to see threat.

Mass surveillance has always carried this risk. What AI adds is speed, scale, and a veneer of objectivity that makes the results harder to question. A human investigator who targets someone unfairly can be cross-examined, challenged, held accountable for bias. An algorithm that produces the same biased outcome is treated as math. Neutral. Scientific. Trustworthy.

It isn’t.

Sandra Barker is home now. The charges are gone. But the 39 days don’t come back. The months of anxiety don’t come back. The mugshot that appeared in local media doesn’t disappear from the internet. And the AI system that put her through it? As far as public reporting indicates, it’s still running.

That should trouble everyone — not just the people it’s already gotten wrong, but the millions of Americans whose daily lives are increasingly mediated, monitored, and judged by systems they can’t see, can’t question, and can’t appeal. The promise of AI in government was better decisions, faster. The reality, in at least one grandmother’s case in North Dakota, was the opposite. A worse decision, made faster, with consequences that no software update can undo.

The question now isn’t whether AI will continue to be used in surveillance and law enforcement. It will. The question is whether the institutions deploying these tools will build in the skepticism, the verification, and the accountability that the technology itself cannot provide. Based on the evidence so far, the answer isn’t encouraging.



from WebProNews https://ift.tt/K8r31bF

Thursday, 12 March 2026

AI Chatbots Are Homogenizing Human Thought — and the Research to Prove It Is Alarming

Here’s the thing about asking a chatbot for advice: you’re probably getting the same answer as everyone else. And that sameness is starting to reshape how people think.

A new study covered by CNET reveals that people who use AI chatbots to help form opinions on social and political topics end up converging on remarkably similar viewpoints. Not slightly similar. Strikingly so. The research, published in the journal Science in 2025, found that individuals who consulted AI for guidance on moral and political dilemmas showed a measurable reduction in opinion diversity compared to control groups who deliberated on their own or discussed with other humans.

Think about what that means at scale. Millions of people are now turning to ChatGPT, Gemini, Claude, and other large language models for everything from relationship advice to policy opinions. If those tools consistently nudge users toward a narrow band of responses — even subtly — the downstream effects on democratic discourse, cultural diversity, and independent reasoning could be enormous.

The researchers behind the study conducted experiments where participants were asked to consider contentious topics. Some worked through the questions alone. Others discussed with fellow humans. A third group interacted with an AI chatbot. The results were stark: the AI group’s opinions clustered tightly together, while the human-only groups maintained a wider spread of perspectives. The chatbot didn’t just inform. It flattened.

Why does this happen? Large language models are trained on massive datasets and optimized to produce responses that are helpful, harmless, and — critically — agreeable. They’re designed to avoid controversy. That design choice has a side effect: the models tend to land on moderate, consensus-friendly positions that sound reasonable but lack the rough edges of genuine human disagreement. When millions of people receive that same smoothed-over perspective, individual thought patterns start to converge.

This isn’t a hypothetical risk. It’s measurable right now.

And the problem compounds. As Science has reported, AI-generated text is increasingly feeding back into the training data for future models, creating a feedback loop where homogenized outputs train the next generation of homogenized outputs. Researchers call this “model collapse” — a gradual narrowing of the information space that becomes self-reinforcing over time.
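The dynamic is simple enough to reproduce in miniature. The toy simulation below is not how LLMs are trained; it fits a one-dimensional Gaussian to its own outputs, over and over, as a stand-in for the feedback loop the researchers describe:

    # Toy demonstration of "model collapse": repeatedly fit a simple model
    # (here, a one-dimensional Gaussian) to samples drawn from the previous
    # generation's fit. Nothing here resembles a real LLM; the point is the loop.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=50)  # the original "human" data

    for gen in range(1, 101):
        mu, sigma = data.mean(), data.std()      # "train" on the current data
        data = rng.normal(mu, sigma, size=50)    # next generation sees only outputs
        if gen % 20 == 0:
            print(f"generation {gen:3d}: fitted std = {sigma:.3f}")

    # The fitted spread wanders but tends downward run after run: variance lost
    # to sampling error becomes the next generation's ground truth, and the
    # loop has no mechanism to get it back.

Diversity lost in one generation is simply gone; nothing in the loop regenerates it.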

The implications for industry professionals are direct. If you’re building products that integrate AI-generated recommendations — whether in media, education, healthcare, or policy — you’re potentially building a conformity engine. Not intentionally. But structurally. The architecture of these systems rewards convergence, and users, often without realizing it, absorb that convergence as their own thinking.

Some researchers argue the effect mirrors what social media algorithms already do: filter and flatten. But there’s a key difference. Social media creates echo chambers where like-minded people reinforce each other’s existing beliefs. AI chatbots do something stranger. They pull people with different starting positions toward the same middle ground. Echo chambers polarize. Chatbots homogenize. Both are problems, but they’re different problems requiring different solutions.
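The difference is easy to sketch as two update rules applied to the same population of opinions. This is a toy model, not the study’s methodology: under one rule, agents drift toward the average of like-minded neighbors; under the other, everyone drifts toward a single model-preferred answer:

    # Toy opinion dynamics contrasting the two failure modes. Opinions sit on
    # a -1..+1 scale; both update rules are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    start = rng.uniform(-1, 1, size=1000)

    def echo_chamber_step(x, rate=0.2):
        # Each agent moves toward the mean of like-minded agents (same sign),
        # so the two camps solidify and never mix.
        targets = np.where(x < 0, x[x < 0].mean(), x[x >= 0].mean())
        return x + rate * (targets - x)

    def chatbot_step(x, rate=0.2, model_answer=0.1):
        # Every agent moves toward the same model-preferred answer.
        return x + rate * (model_answer - x)

    a, b = start.copy(), start.copy()
    for _ in range(25):
        a, b = echo_chamber_step(a), chatbot_step(b)

    print(f"echo chamber: camps near {a[a < 0].mean():+.2f} and {a[a >= 0].mean():+.2f}")
    print(f"chatbot:      everyone near {b.mean():+.2f} (spread {b.std():.3f})")

One rule hardens division into two entrenched camps; the other erases the spread entirely. Same starting opinions, different failure modes.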

So what can be done?

The study’s authors suggest that AI systems could be designed to present multiple perspectives rather than settling on a single authoritative-sounding answer. Some companies are already experimenting with this. Anthropic, the maker of Claude, has discussed building models that acknowledge uncertainty and present competing viewpoints. OpenAI has explored similar ideas in its research on democratic inputs to AI. But these remain early-stage efforts, and the default behavior of most commercial chatbots still trends toward confident, singular answers.

There’s also a user-side fix, though it’s harder to implement: teaching people to treat AI outputs as one input among many rather than as definitive answers. Digital literacy campaigns have been discussed for years. They haven’t kept pace with adoption.

For product teams and engineers, the takeaway is concrete. Default designs matter. If your AI integration surfaces one answer, you’re shaping opinion whether you mean to or not. If it surfaces three competing answers with context, you’re preserving cognitive diversity. That’s a design choice, not a technical limitation.
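A minimal sketch of that choice, assuming a generic complete() helper standing in for whatever model client a team actually uses; the function and prompts are illustrative, not any vendor’s API:

    # Sketch of the design choice: one authoritative answer versus several
    # labeled perspectives. complete() is a placeholder, not a real client.

    def complete(prompt):
        # Stand-in for an actual LLM API call; returns a canned echo here.
        return f"[model response to: {prompt[:50]}...]"

    def single_answer(question):
        # The default pattern: the model picks one position and states it.
        return complete(f"Answer directly and concisely: {question}")

    def multi_perspective(question, n=3):
        # The diversity-preserving pattern: demand distinct framings and
        # forbid the model from crowning a winner.
        return complete(
            f"Present {n} distinct, well-reasoned perspectives on: {question} "
            "For each, give its core values, strongest argument, and main "
            "trade-off. Do not recommend one position over the others."
        )

    question = "Should cities ban cars from their downtown cores?"
    print(single_answer(question))
    print(multi_perspective(question))

The second pattern is no harder to build; it simply treats disagreement as signal rather than noise.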

I grew up in the Midwest, where people argued about everything at the dinner table — politics, religion, whether a hot dog is a sandwich. Those arguments were messy and unresolved and vital. They’re how you learn that reasonable people can look at the same facts and reach different conclusions. A system that quietly erases that messiness isn’t making us smarter. It’s making us the same.

The research is clear. The question now is whether the companies building these tools will treat opinion homogenization as a bug worth fixing — or a feature they can live with.



from WebProNews https://ift.tt/m4J9Vyc