Wednesday, 11 March 2026

How Apple Trained AI to Recognize Unseen Hand Gestures Using Wearable Sensors

Apple has steadily advanced the way humans interact with technology, moving beyond physical buttons and touchscreens toward natural body movements. Recent developments in artificial intelligence and wearable hardware have allowed the company to train machine learning models capable of recognizing hand gestures that the system has never previously encountered. This capability stems from advanced sensor fusion, combining data from accelerometers, gyroscopes, and optical sensors to interpret the complex kinematics of the human hand.

Traditionally, electronic devices required explicit programming for every specific input. If a manufacturer wanted a smartwatch to recognize a wrist flick, engineers had to collect thousands of examples of that exact motion and train a model specifically for it. Apple’s recent artificial intelligence research shifts this approach by teaching the neural network the fundamental mechanics of hand movements, enabling the software to identify novel gestures based on their underlying physical properties.

The Hardware Foundation Behind Movement Tracking

The foundation for this technology already exists within consumer hardware like the Apple Watch Series 9 and Apple Watch Ultra 2. These devices incorporate the S9 System in Package (SiP), which features a specialized four-core Neural Engine designed specifically for on-device machine learning tasks. This processor handles complex algorithms locally, interpreting continuous streams of data from the watch’s internal components without relying on a cloud connection.

To capture the minute details of a hand gesture, Apple employs a technique called sensor fusion. The device continuously monitors the accelerometer and gyroscope to track sudden changes in velocity and orientation. Simultaneously, the optical heart sensor, primarily designed to measure pulse, detects subtle changes in blood flow that occur when specific muscles and tendons contract in the wrist and fingers.

Overcoming the Limitations of Traditional Machine Learning

Training an artificial intelligence model to understand unseen inputs involves a concept known in computer science as zero-shot learning or generalization. In standard supervised learning, an algorithm learns to identify a specific gesture, such as a double tap, by analyzing massive datasets of that exact movement. However, if a user performs a slightly different motion—like a triple tap or a finger rub—a standard model fails to recognize the intent because it lacks direct training data for that specific action.

Apple’s researchers have tackled this limitation by training their models on the broader biomechanics of the human hand. Instead of mapping a specific sensor output to a single command, the neural network learns a multidimensional representation of wrist and finger articulation. When a user performs an unfamiliar gesture, the AI evaluates the sensor data against this biomechanical model, estimating the physical position of the fingers even if it has never been explicitly programmed to recognize that exact sequence of movements.

Interpreting Sensor Data as Kinematic Models

The process of converting raw hardware signals into a recognized gesture requires sophisticated mathematics. When a user moves their hand, the accelerometer records linear acceleration across three axes, while the gyroscope measures rotational velocity. These sensors generate a distinct wave pattern for every physical action. Apple’s machine learning models analyze the frequency and amplitude of these waves to reconstruct the motion in a virtual space.

By mapping these wave patterns to known anatomical constraints, the AI can infer what the hand is doing. For instance, the human wrist has a limited range of motion, and certain finger movements naturally cause specific tendons to shift. The neural network uses these biological rules to filter out noise—like the user simply walking or typing—and isolate deliberate, communicative gestures, assigning mathematical probabilities to various hand poses.

The Role of Blood Flow and Muscle Contraction

One of the most fascinating aspects of Apple’s gesture recognition involves the optical heart sensor. While accelerometers and gyroscopes are excellent at tracking gross motor movements, they struggle with micro-gestures where the wrist remains entirely still, but the fingers move. To solve this, Apple uses photoplethysmography (PPG), the same technology used to measure heart rate, to observe the physical expansion and contraction of blood vessels.

When a user pinches their index finger and thumb together, the muscles in the forearm contract. This contraction briefly alters the volume of blood flowing through the wrist. The optical sensor detects this microscopic fluctuation. By feeding this PPG data into the Neural Engine alongside the motion data, the AI gains a comprehensive understanding of muscle engagement, allowing it to detect tiny finger movements that produce almost no external wrist motion.

Integration with Spatial Computing Devices

While wearable sensors provide excellent data regarding muscle contraction and wrist orientation, they represent only one half of Apple’s broader human-computer interaction strategy. The Apple Vision Pro approaches gesture recognition from an entirely different angle, relying on high-resolution external cameras and infrared illuminators to visually track the user’s hands in three-dimensional space.

The convergence of these two technologies presents massive potential for future application. A smartwatch can detect the tactile force and exact timing of a finger pinch through muscle and blood flow changes, while a spatial computing headset can track the exact spatial coordinates of the hand. Training artificial intelligence to interpret both visual data and wearable sensor data simultaneously allows the system to recognize highly complex, previously unseen gestures with near-perfect accuracy, even if the user’s hand is partially obscured from the headset’s cameras.

Prioritizing User Privacy and On-Device Processing

Processing continuous biometric and movement data raises significant privacy considerations. Recording every hand movement a person makes could theoretically expose sensitive information, such as the cadence of their typing or the specific keys they press. Apple addresses this concern by strictly limiting where and how the gesture recognition algorithms operate.

All sensor data analysis occurs locally on the device’s Neural Engine. The trained AI model is downloaded to the smartwatch or headset, and the raw accelerometer, gyroscope, and optical sensor data are evaluated in real-time. Once the system identifies a gesture, it translates that movement into a system command—like answering a call or pausing a song—and immediately discards the raw sensor feed. No continuous movement data is transmitted to remote servers for processing.

Expanding Accessibility Through Adaptive Technology

The ability of an AI to recognize previously unseen gestures has profound implications for device accessibility. Users with motor impairments or physical disabilities often cannot perform standard gestures exactly as a manufacturer intended. A person with limited hand mobility might execute a pinch motion that looks fundamentally different on a sensor level than the data the model was originally trained on.

By moving away from rigid, hard-coded gesture recognition and toward a generalized understanding of hand kinematics, Apple’s devices can adapt to individual users. An AI capable of inferring intent from unseen variations in movement can learn a user’s unique physical capabilities. This adaptive approach ensures that assistive technologies, like Apple’s AssistiveTouch for the Apple Watch, become more responsive and personalized over time.

The Future of Natural Human-Computer Interaction

As machine learning models become more sophisticated, the requirement for users to learn specific commands will diminish. Instead of memorizing a list of exact taps, swipes, and pinches to control their devices, users will be able to interact with technology using intuitive, natural body language. The hardware will bear the burden of interpretation, using trained neural networks to understand the user’s intent based on context and physical motion.

Apple’s ongoing research into training AI on wearable sensor data points toward a future where technology fades into the background. By combining advanced processors, precise internal sensors, and generalized machine learning models, the company is building a foundation for hardware that responds to human movement as naturally as another person would, fundamentally changing how we control the digital world around us.



from WebProNews https://ift.tt/6wWL732

Chinese Startup’s AI Voices Beat Tech Giants in Trust and Realism Study

Artificial intelligence has made significant strides in generating human-like speech, yet the quality of these synthetic voices plays a pivotal role in how much users believe and accept them. A recent evaluation highlighted this point when participants assessed voices from various providers, revealing that a Chinese startup’s offerings scored higher in both trustworthiness and lifelikeness compared to those from major players like Microsoft, Google, and Amazon. This finding, detailed in an article from TechRadar, underscores a broader challenge in the field: subpar AI voices can erode confidence, while superior ones foster greater acceptance.

To understand this development, consider the evolution of text-to-speech technology. Early systems produced robotic, monotonous outputs that felt distant and unnatural. Over time, advancements in machine learning, particularly deep neural networks, have enabled more fluid and expressive voices. These improvements draw from vast datasets of human recordings, allowing algorithms to mimic intonation, rhythm, and emotional nuances. However, not all implementations achieve the same level of sophistication. The study mentioned in the TechRadar piece involved listeners rating voices on scales of realism and trust, with the Chinese company, identified as Speechify in some contexts but more accurately as a rising entity in the AI audio space, outperforming established giants. Participants found its voices more convincing, which suggests that finer details in voice synthesis—such as subtle prosody variations or reduced artifacts—can make a substantial difference.

Trust in AI-generated speech matters for several reasons. In applications like virtual assistants, audiobooks, or customer service bots, users need to feel that the voice is reliable and authentic. If a voice sounds off, it can lead to skepticism, reducing engagement. For instance, in educational tools, a trustworthy voice might encourage learners to absorb information more effectively, while a dubious one could distract or disengage them. The TechRadar report points out that poor AI voices often trigger an uncanny valley effect, where something almost human but not quite right provokes discomfort. This psychological response has roots in evolutionary biology, where humans are wired to detect anomalies in communication for survival purposes. When AI voices fall short, they amplify this unease, making users question the underlying technology or even the content being delivered.

The evaluation process in the study was straightforward yet revealing. Researchers gathered a diverse group of listeners and presented them with audio samples from different providers. Each sample involved neutral statements to minimize bias from content. Listeners then scored the voices on how realistic they sounded and how much trust they inspired. Surprisingly, the Chinese startup’s voices topped the charts, even surpassing those from tech behemoths with massive resources. Microsoft, for example, has invested heavily in its Azure Cognitive Services, which include neural text-to-speech capabilities trained on extensive multilingual datasets. Google’s WaveNet technology, integrated into products like Google Assistant, uses waveform generation to produce highly natural speech. Amazon’s Polly service employs similar methods, offering a range of voices for applications in Alexa and beyond. Despite these efforts, the startup’s approach apparently resonated more with evaluators.

What sets this Chinese company apart? While specifics aren’t fully disclosed in the TechRadar article, industry insights suggest it may employ advanced generative models that prioritize emotional expressiveness and contextual adaptation. Many startups in this space focus on niche improvements, such as better handling of accents or dialects, which can enhance perceived authenticity. In contrast, larger corporations often scale their technologies broadly, sometimes at the expense of fine-tuned quality in specific scenarios. This dynamic echoes patterns seen in other tech sectors, where nimble innovators challenge incumbents by addressing overlooked user needs. The higher ratings for trust could stem from the startup’s voices avoiding common pitfalls like unnatural pauses or metallic tones, which plague some mainstream options.

This outcome has implications for the broader adoption of AI voices. As synthetic speech integrates into everyday tools—from navigation apps to telehealth services—the ability to inspire confidence becomes essential. Businesses relying on AI for customer interactions risk losing credibility if their voices fall flat. Consider the rise of voice commerce, where users might dictate purchases or queries; a trustworthy voice could boost conversion rates, while a suspicious one might lead to abandoned transactions. Similarly, in media production, realistic AI voices enable faster content creation, such as dubbing films or generating podcasts, but only if audiences accept them as genuine.

Looking at the competitive landscape, it’s clear that voice quality is a battleground. Microsoft has been refining its offerings through partnerships and updates, aiming for more inclusive voices that represent diverse demographics. Google’s efforts include research into prosody modeling, ensuring voices convey appropriate emotions. Amazon continues to expand Polly’s capabilities with custom voice options. Yet, the TechRadar findings indicate that these giants might need to reassess their strategies. Perhaps incorporating user feedback loops more aggressively or investing in perceptual studies could help them catch up. The Chinese startup’s success might also reflect cultural nuances in voice perception; listeners from different backgrounds may prioritize certain auditory cues, suggesting that global providers should tailor their models accordingly.

Beyond trust and realism, ethical considerations come into play. As AI voices become indistinguishable from human ones, concerns about misinformation arise. Deepfake audio could be used to impersonate individuals, spreading false narratives. The higher realism of the startup’s voices amplifies this risk, prompting calls for safeguards like watermarking or detection tools. Regulators are beginning to address these issues, with proposals for labeling AI-generated content. In the TechRadar piece, the emphasis on trust ties directly to these worries; if users can’t discern synthetic from real, they might grow wary of all digital audio, hindering positive applications.

From a technical standpoint, achieving superior voice synthesis involves complex processes. Models like Tacotron or FastSpeech convert text into spectrograms, which are then transformed into audio waveforms via vocoders. Enhancements in these areas, such as attention mechanisms that align text with speech patterns more accurately, contribute to better outputs. The Chinese startup likely excels in optimizing these elements, possibly through proprietary datasets or novel training techniques. Comparative analyses show that while big tech companies have access to enormous computational power, startups can innovate by focusing on quality over quantity. For example, training on high-fidelity recordings from professional voice actors can yield more polished results than relying solely on crowdsourced data.

User perceptions of AI voices also vary by context. In casual settings, like smart home devices, a slightly imperfect voice might be forgiven, but in professional environments, such as legal or medical consultations, precision is non-negotiable. The study’s results align with broader surveys indicating that emotional congruence—where the voice matches the message’s tone—strongly influences trust. If an AI voice delivers bad news with inappropriate cheerfulness, it undermines credibility. Developers must therefore integrate sentiment analysis to modulate voice output dynamically.

The future of AI voice technology looks promising, with ongoing research pushing boundaries. Innovations in multilingual support and accent adaptation could make voices more accessible worldwide. Collaborations between startups and established firms might accelerate progress, combining agility with scale. As the TechRadar article illustrates, excellence in this area isn’t solely about resources; it’s about understanding human auditory preferences deeply.

In educational contexts, high-quality AI voices could transform learning experiences. Imagine interactive textbooks where narratives come alive with convincing intonation, aiding comprehension for students with reading difficulties. In accessibility tools, realistic voices empower those with visual impairments by providing seamless audio interfaces. The superior ratings for the Chinese startup’s voices suggest potential for such applications, where trust directly impacts usability.

Challenges remain, however. Scalability is one; producing top-tier voices for every language and dialect requires immense data and effort. Bias in training data can lead to voices that favor certain demographics, perpetuating inequalities. Addressing these requires diverse datasets and ethical guidelines. Moreover, as AI voices improve, the line between helpful assistance and deceptive manipulation blurs, necessitating robust verification methods.

The TechRadar evaluation serves as a wake-up call for the industry. It demonstrates that users prioritize quality that feels human, not just functional. For Microsoft, Google, and Amazon, this means refining their technologies to match or exceed emerging competitors. For the Chinese startup, it’s an opportunity to expand influence, perhaps through partnerships or global outreach.

Ultimately, the pursuit of trustworthy AI voices drives innovation that benefits society. By focusing on realism that builds confidence, developers can create tools that enhance communication without sowing doubt. This balance will define the next phase of synthetic speech, ensuring it serves as a reliable extension of human interaction rather than a source of suspicion. As more studies like this emerge, they will guide refinements, leading to voices that not only sound right but also feel right to listeners everywhere.



from WebProNews https://ift.tt/mQpG8q6

Tuesday, 10 March 2026

Uber Rolls Out Women Driver Preference Feature Nationwide: What It Means for Riders and the Gig Economy

Uber just expanded its Women Rider Preference feature to all 50 U.S. states. The feature, which lets women and nonbinary riders request a woman driver, is now available nationwide after a phased rollout that began in select markets. It’s a significant move — and one that’s already drawing both praise and pointed criticism.

The concept is straightforward. Women and nonbinary riders can toggle a preference in the Uber app to be matched with women drivers. Women drivers, in turn, can opt to primarily receive ride requests from women and nonbinary passengers. Uber first piloted the feature in cities like Chicago, Phoenix, and several others before this national expansion, according to Mashable.

Safety is the driving force here. Uber has long faced scrutiny over rider safety incidents, particularly those involving women passengers. The company’s own U.S. Safety Report, released in 2022, documented thousands of reports of sexual assault and other safety incidents on its platform between 2019 and 2020. That history looms large over this feature’s rollout.

And the numbers back up the demand. According to Uber, early data from pilot markets showed strong adoption. The company reported that the feature helped increase the number of women drivers on its platform in those areas — a persistent challenge for rideshare companies. Women make up a relatively small share of Uber’s driver base, estimated at roughly 30%, and the company has acknowledged that safety concerns are a primary reason many women don’t sign up to drive.

So this isn’t just a rider-facing feature. It’s a recruitment tool.

By giving women drivers more control over who they pick up, Uber is betting it can attract and retain more women behind the wheel. That matters for the bottom line. More drivers means shorter wait times, better coverage, and a healthier marketplace overall. Uber has explicitly framed the feature as part of a broader effort to close the gender gap among its drivers, per its newsroom announcements.

But the rollout hasn’t been without friction. Critics have raised questions about whether gender-based matching could run afoul of anti-discrimination laws. Some legal scholars have pointed out that federal and state civil rights statutes generally prohibit businesses from discriminating based on sex in public accommodations. Uber’s counter-argument: this is a preference, not a mandate. Male riders can’t be denied service — they simply may wait longer or get matched with a different driver. The distinction matters legally, though it hasn’t been fully tested in court.

There’s also the question of how this interacts with Uber’s algorithmic matching. Toggling the preference doesn’t guarantee a woman driver. It expresses a preference that the system tries to honor. In areas with fewer women drivers, wait times could increase substantially for riders who activate it. Uber has been upfront about this trade-off.

Worth watching: how competitors respond. Lyft has its own set of safety features but hasn’t rolled out an equivalent gender-preference matching system at this scale. Smaller players and women-focused rideshare startups like See Jane Go have operated in this niche for years, but none have the reach or driver density of Uber. This move effectively absorbs a selling point that once differentiated those smaller services.

The timing is deliberate. Uber is making this push as the gig economy faces renewed regulatory pressure and as public discourse around women’s safety in shared transportation intensifies. Cities from New York to Los Angeles have debated additional safety mandates for rideshare platforms. By acting proactively, Uber positions itself ahead of potential regulation — a familiar playbook for the company.

Driver reactions have been mixed. Some women drivers on forums and social media have welcomed the added control. Others worry it could inadvertently reduce their overall ride volume if the feature segments the market in unexpected ways. A few have noted on X that the preference system could create perverse incentives — like surge pricing dynamics shifting if women drivers cluster around preference-enabled rides during peak hours.

Real-world impact will take time to measure. Uber says it plans to share more data on adoption rates and driver recruitment effects in the coming months. For now, the company is leaning heavily into the narrative that this is about empowerment and choice.

For industry professionals, the takeaway is clear. Gender-preference matching is now a mainstream feature in American ridesharing, not an experiment. How it performs at national scale — legally, operationally, and culturally — will likely shape product decisions across the entire on-demand transportation sector for years to come.



from WebProNews https://ift.tt/wCZILKE

AI Assistants Are Rewriting the Rules of Cybersecurity — and Defenders Are Scrambling to Keep Up

The security goalposts haven’t just moved. They’ve been launched into orbit.

A detailed analysis from Krebs on Security lays out how AI-powered assistants — the kind now embedded in enterprise workflows, consumer devices, and developer toolchains — are fundamentally reshaping the threat surface that security teams must defend. The implications are significant, and the industry’s response so far has been uneven at best.

Here’s the core problem: AI assistants don’t just process data. They act on it. They compose emails, write code, query databases, summarize confidential documents, and increasingly make decisions with minimal human oversight. Every one of those capabilities represents a potential attack vector that didn’t exist three years ago. Brian Krebs argues that the speed at which these tools have been deployed has far outpaced the development of security frameworks designed to contain them.

That gap is where the trouble lives.

The most pressing concern is prompt injection — a class of attack where adversaries craft inputs designed to manipulate an AI assistant into performing unauthorized actions. Security researchers have been sounding alarms about this for over two years now, but the problem has only grown more acute as AI assistants gain deeper access to enterprise systems. According to Krebs, attackers are now chaining prompt injection techniques with social engineering to trick AI assistants into exfiltrating sensitive data, modifying records, or bypassing access controls entirely. And because these assistants often operate with the permissions of the user who deployed them, a single compromised interaction can cascade across an organization’s internal infrastructure.

It’s not theoretical. Krebs cites multiple incidents in which AI assistants were manipulated into leaking proprietary information through carefully constructed prompts embedded in seemingly benign documents — PDFs, emails, even calendar invites. The attack surface is vast because AI assistants are designed to be helpful, which means they’re inherently inclined to follow instructions. Distinguishing between legitimate user intent and adversarial manipulation remains an unsolved problem.

So what are vendors doing about it? Not enough, apparently.

Krebs reports that major AI providers have implemented guardrails — content filters, system-level instructions that attempt to override malicious prompts, and sandboxing techniques that limit what assistants can access. But researchers have repeatedly demonstrated that these defenses are brittle. Wired and Ars Technica have both covered how red teams at academic institutions and independent security firms have bypassed these protections with alarming consistency. The fundamental architecture of large language models makes them susceptible to adversarial inputs in ways that traditional software isn’t, and bolting security measures onto the outside hasn’t proven sufficient.

There’s a second dimension to this that’s equally concerning: data governance. AI assistants are voracious consumers of context. They ingest meeting transcripts, Slack messages, email threads, code repositories, and internal wikis to generate useful responses. But that means they can also surface information that specific users shouldn’t have access to, effectively flattening organizational access controls. Krebs highlights cases where AI assistants exposed salary data, M&A deliberations, and unreleased product details to employees who had no business seeing them — not because of a hack, but because the assistant was doing exactly what it was designed to do.

The permissions model is broken. Or more precisely, it was never built for this.

Traditional access control assumes that a user queries a specific system and receives information gated by their role. AI assistants collapse that model by sitting on top of multiple systems simultaneously, aggregating and synthesizing data across boundaries that were previously enforced by the simple friction of having to log into separate tools. Removing that friction was the whole point. But it also removed the implicit security that friction provided.

Enterprise security teams are now being forced to rethink identity and access management from the ground up. Some organizations have started treating AI assistants as distinct identities within their security architectures — entities that need their own permission sets, audit trails, and behavioral monitoring. It’s a sensible approach, but implementation is complex and the tooling is still immature.

And then there’s the supply chain angle. AI assistants increasingly rely on third-party plugins, APIs, and extensions to perform tasks. Each integration point is a potential vulnerability. Krebs notes that attackers have begun targeting these connectors specifically, compromising a plugin to feed poisoned data into an AI assistant’s context window. The assistant then acts on that corrupted information as though it were legitimate. Classic supply chain attack logic, adapted for the AI era.

The regulatory picture isn’t helping. The EU AI Act addresses some of these concerns in broad strokes, but enforcement mechanisms remain vague and implementation timelines are long. In the U.S., there’s still no comprehensive federal framework governing AI security in enterprise settings. Companies are largely self-regulating, which means the quality of defenses varies wildly depending on organizational maturity and budget.

What should security leaders take away from this? First, audit every AI assistant deployment in your organization — including shadow deployments that individual teams may have spun up without IT approval. Second, assume that prompt injection is a when-not-if scenario and build detection and response capabilities accordingly. Third, revisit your data classification and access control policies with the understanding that AI assistants will try to bridge every silo you’ve built.

The technology is genuinely useful. That’s what makes this hard. Nobody wants to go back to manually summarizing 200-page compliance documents or writing boilerplate code from scratch. But the security implications of giving an AI assistant broad access to corporate systems are profound, and the industry hasn’t yet developed the tools or frameworks to manage them adequately.

Krebs puts it bluntly: the security community is playing catch-up, and the gap is widening. Until AI providers and enterprise security teams find a way to close it, every organization running these tools is accepting a level of risk that most haven’t fully quantified.

That’s the uncomfortable truth. And it’s not going away.



from WebProNews https://ift.tt/xgWAkD8

Monday, 9 March 2026

When the Government Controls AI: The Constitutional Crisis Nobody in Washington Wants to Debate

The U.S. government isn’t just buying AI tools. It’s building the infrastructure for a surveillance apparatus that would make the authors of the Fourth Amendment reach for their muskets.

OpenAI’s accelerating partnership with the Department of Defense has moved from theoretical debate to operational reality. According to The New Stack, the company that once pledged never to develop military applications has reversed course, working directly with defense agencies on applications that include cybersecurity operations and processing of sensitive government data. The pivot wasn’t subtle. OpenAI quietly updated its usage policies to remove prohibitions on military use, then began courting Pentagon contracts with the enthusiasm of a Beltway defense contractor.

This isn’t about national security in the abstract. It’s about what happens when the most powerful pattern-recognition and data-processing systems ever built are handed to agencies with a documented history of constitutional overreach.

Maya Sulkin, posting on X, raised pointed concerns about the trajectory of government AI adoption, highlighting how rapidly these technologies are being deployed without meaningful public debate or legislative guardrails. The concern resonates far beyond tech policy circles. When AI systems capable of analyzing billions of data points per second are deployed by intelligence and law enforcement agencies, the question isn’t whether they’ll be used for mass surveillance. The question is what’s stopping them.

The answer, right now, is almost nothing.

Consider the precedent. The NSA’s bulk metadata collection program, revealed by Edward Snowden in 2013, operated for years under secret legal interpretations that the FISA court rubber-stamped. That program was primitive by today’s standards — it collected phone records. Modern AI systems can correlate phone records with location data, facial recognition feeds, social media activity, financial transactions, and communication patterns simultaneously. The surveillance potential isn’t additive. It’s multiplicative.

And the legal framework hasn’t kept pace. Section 702 of the Foreign Intelligence Surveillance Act, reauthorized in 2024, still permits warrantless collection of communications data on foreign targets — but “incidental” collection of American citizens’ data continues at scale. Layer AI-powered analysis on top of that collection, and you don’t need to target Americans directly. The system can identify, profile, and track them as a byproduct of its normal operations.

Not Divided, an organization focused on protecting democratic institutions from technological overreach, has been documenting how AI deployment by government agencies threatens constitutional protections. Their research points to a fundamental asymmetry: the government’s capacity to collect and process data about citizens is growing exponentially, while citizens’ ability to understand, challenge, or even know about that collection remains static. No transparency. No accountability. No meaningful consent.

The Fourth Amendment’s protection against unreasonable searches was written for a world where searching someone’s papers required physically entering their home. The Supreme Court has updated that understanding — the 2018 Carpenter v. United States decision held that accessing historical cell-site location records constitutes a search requiring a warrant. But Carpenter addressed a single data type from a single source. It said nothing about AI systems that can fuse dozens of data streams into comprehensive behavioral profiles without any individual search ever being conducted.

That’s the gap. And the government is driving a fleet of trucks through it.

The defense establishment’s AI ambitions go far beyond battlefield applications

The Department of Homeland Security has deployed AI-powered systems at the border that use facial recognition, behavioral analysis, and social media monitoring. Immigration and Customs Enforcement has contracted with data brokers who aggregate location data from commercial apps — data that would require a warrant to collect directly but can be purchased on the open market. The FBI’s use of facial recognition technology has been criticized by the Government Accountability Office for lacking adequate privacy safeguards. These aren’t hypothetical risks. They’re current operations.

OpenAI’s entry into this space adds a new dimension. Large language models and multimodal AI systems don’t just process structured data — they can interpret unstructured text, analyze images, understand context, and generate inferences that would take human analysts months to produce. When The New Stack reported on the company’s defense partnerships, the framing centered on cybersecurity and administrative efficiency. But the same models that can summarize intelligence reports can also analyze intercepted communications at population scale. The same computer vision systems that can identify military equipment in satellite imagery can identify individuals in street-level surveillance footage.

The technology is dual-use by nature. The intentions of today’s operators don’t constrain the applications of tomorrow’s.

Some will argue that democratic oversight provides sufficient protection. It doesn’t. Congressional intelligence committees have repeatedly demonstrated they lack the technical expertise to evaluate AI capabilities and the political will to restrict intelligence agencies. The Church Committee reforms of the 1970s, which created the modern oversight framework after revelations of CIA and FBI domestic surveillance programs, were a response to abuses that had already occurred. We’re watching the conditions for similar abuses being assembled in real time, and the response from Congress has been a handful of hearings and zero binding legislation.

The European Union’s AI Act, whatever its flaws, at least attempts to categorize AI applications by risk level and impose restrictions on the most dangerous uses, including real-time biometric surveillance in public spaces. The United States has no equivalent federal framework. Executive orders on AI safety issued by the Biden administration were largely voluntary and have been rolled back. State-level efforts are fragmented and inconsistent.

So where does that leave American citizens?

Exposed. The combination of commercially available personal data, government surveillance authorities that predate the AI era, and AI systems capable of synthesizing both into detailed individual profiles creates conditions the Constitution’s framers could not have anticipated but clearly would have opposed. The right to be left alone — what Justice Brandeis called “the most comprehensive of rights, and the right most valued by a free people” — is being eroded not by a single dramatic act but by the steady accumulation of technical capabilities deployed without democratic authorization.

The tech industry bears responsibility here too. OpenAI’s shift from “we won’t work with the military” to active defense contracting happened without shareholder votes, public referenda, or legislative approval. It was a business decision. The company determined that government contracts were too lucrative and strategically important to forgo, and it adjusted its principles accordingly. Other AI companies — Palantir, Anduril, Scale AI — never pretended to have such reservations. But OpenAI’s reversal matters precisely because it demonstrates that voluntary ethical commitments in the AI industry are worth exactly as much as the paper they’re not printed on.

Groups like Not Divided are pushing for structural reforms: mandatory algorithmic impact assessments before government deployment, warrant requirements for AI-assisted surveillance, independent technical audits of government AI systems, and sunset clauses that force periodic reauthorization. These aren’t radical proposals. They’re the minimum conditions for democratic governance of powerful technologies.

But they face opposition from an intelligence community that views oversight as an obstacle, a defense industry that views AI as its next major revenue stream, and a political class that views “tough on security” as an electoral imperative. The incentives all point in one direction. More collection. More analysis. More power concentrated in agencies that operate largely in secret.

The constitutional question isn’t complicated. The government should not be able to construct detailed profiles of citizens’ movements, associations, communications, and beliefs without individualized suspicion and judicial authorization. AI makes it technically trivial to do exactly that. The law hasn’t caught up. And every month that passes without action makes the gap harder to close.

This isn’t a technology problem. It’s a democracy problem. And right now, democracy is losing.



from WebProNews https://ift.tt/Ub8AIvc

Sunday, 8 March 2026

OpenAI’s ‘Adult Mode’ Keeps Slipping — and the Reasons Say Everything About AI’s Hardest Problem

OpenAI can’t seem to ship its most controversial feature on time.

The company has delayed the launch of what it internally and publicly calls “adult mode” — a less restrictive version of ChatGPT intended for verified adults — pushing the release date further into the future. Originally expected in the spring, then reportedly aimed for mid-2025, the feature now appears unlikely to arrive before late summer at the earliest, according to reporting by Engadget, citing sources familiar with the matter. The repeated delays reveal something more than typical product scheduling headaches. They expose a fundamental tension at the heart of OpenAI’s ambitions: how to give paying adult users the unrestricted AI experience they want while keeping the company’s reputation, regulatory standing, and safety commitments intact.

The concept behind adult mode is straightforward enough. OpenAI wants to offer a tier of ChatGPT that responds to queries the current system refuses or heavily sanitizes — topics involving explicit content, graphic violence, politically sensitive material, and other areas where the model’s safety filters currently intervene. The idea is that adults who verify their age should be able to interact with AI the way they might with any other uncensored information source. Think of it as the R-rated version of a chatbot that currently defaults to PG-13.

But straightforward concepts don’t always translate into clean execution. And this one has proven especially messy.

According to multiple reports, the delays stem from internal disagreements about where exactly to draw the lines. Engadget noted that OpenAI has struggled with calibrating the feature so it loosens restrictions meaningfully without opening the floodgates to content that could generate legal liability or public backlash. That calibration problem is more art than science. Every threshold decision — what’s permissible in adult mode and what remains off-limits even for verified users — involves a judgment call that different teams within OpenAI apparently see differently.

Sam Altman himself has publicly acknowledged the demand for less filtered AI. In a blog post earlier this year, he described the company’s intention to let ChatGPT be more direct, opinionated, and willing to engage with mature subject matter. The framing was deliberate: OpenAI positioned the move as respecting user autonomy. Adults, the argument goes, don’t need an AI nanny.

That message resonated. Loudly. On X, users have been clamoring for months, with recurring threads demanding that OpenAI stop what many perceive as excessive censorship. The frustration is real and commercially significant — competitors like Grok, xAI’s chatbot integrated into the X platform, have explicitly marketed themselves as less filtered alternatives. Mistral’s models and various open-source projects have attracted users specifically because they don’t impose the same guardrails OpenAI does. Every month adult mode doesn’t ship, OpenAI risks losing engagement to rivals who’ve already made the bet that users want fewer restrictions.

So why not just launch it?

The answer involves at least three intertwined problems. First, age verification itself is a minefield. OpenAI would need a system robust enough to withstand regulatory scrutiny — particularly in the EU, where the AI Act is now being enforced in phases, and in the UK, where the Online Safety Act imposes strict requirements on platforms offering adult content. A simple checkbox won’t cut it. But aggressive ID verification raises privacy concerns that OpenAI, already under fire from data protection authorities in multiple jurisdictions, would rather avoid.

Second, there’s the question of what adult mode actually permits. Sexually explicit content is the most obvious category, but it’s far from the only one. Would adult mode allow detailed instructions for activities that are legal but dangerous? Would it engage with extremist political ideologies for the sake of intellectual debate? Would it generate graphic depictions of violence in creative writing contexts? Each of these categories carries different risks, and OpenAI’s teams reportedly haven’t reached consensus on a unified policy framework that covers them all.

Third — and perhaps most importantly — there’s the reputational calculus. OpenAI is simultaneously trying to close a massive funding round that would value the company at north of $300 billion, convert from a nonprofit structure to a for-profit corporation, and maintain partnerships with Apple, Microsoft, and other enterprise clients who have their own brand sensitivities. Launching a feature that immediately generates headlines about AI-produced pornography or violent content could complicate all of those efforts. The timing has to be right. Or at least not catastrophically wrong.

The competitive pressure, though, isn’t waiting for OpenAI to figure this out. Grok has leaned into its anything-goes persona, and while that’s generated its own controversies, it hasn’t meaningfully damaged xAI’s trajectory. Character.AI, despite facing lawsuits related to its chatbot interactions with minors, continues to attract massive user numbers. The market is sending a clear signal: people want AI that talks to them like an adult, and they’ll go wherever they can find it.

Open-source models have made this even more acute. Projects running on platforms like Hugging Face allow users to deploy completely uncensored language models locally, with zero content restrictions. These aren’t fringe tools anymore. They’re increasingly mainstream among developers and power users. OpenAI’s walled-garden approach looks more constraining by the month.

Inside OpenAI, the delay has reportedly caused friction between product teams eager to ship and safety researchers who want more time to test edge cases. This is a familiar dynamic at the company — the same tension that contributed to the dramatic boardroom coup attempt in late 2023. The safety-versus-speed debate never really resolved; it just went underground. Adult mode has brought it back to the surface.

There’s also a legal dimension that’s gotten less attention but matters enormously. Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content, has an explicit carve-out for content that’s “obscene” under federal law. If ChatGPT in adult mode generates material that a court deems obscene — a standard that’s notoriously subjective and varies by jurisdiction — OpenAI could face criminal liability, not just civil suits. The company’s lawyers are reportedly very aware of this risk and have been pushing for narrow, carefully defined permissions rather than a broad unlocking of capabilities.

International considerations add another layer of complexity. What’s legal and culturally acceptable in the United States may be prohibited in Germany, Saudi Arabia, or Singapore. OpenAI operates globally. An adult mode that’s available everywhere would need to account for wildly different legal regimes, or the company would need to geofence the feature — a technically feasible but operationally burdensome approach that fragments the user experience.

None of this is insurmountable. But all of it together explains why a feature that sounds simple keeps getting pushed back.

The broader industry is watching closely. Google’s Gemini has its own set of content restrictions that have drawn user complaints, though Google has shown little appetite for an explicit “adult” tier. Anthropic, maker of Claude, has taken perhaps the most conservative approach of any major AI lab, and its leadership has been vocal about the risks of loosening safety filters. Meta, meanwhile, has open-sourced its Llama models, effectively outsourcing the content moderation question to whoever deploys them. Each company has made a different bet about where the market is heading.

OpenAI’s bet is that it can have it both ways — a safe, broadly appealing default product and a more permissive option for adults who opt in. That’s the theory. In practice, the existence of adult mode may make the default mode’s restrictions feel even more arbitrary, prompting questions about why certain content is deemed inappropriate for adults in the first place. It could also create a two-tier perception problem: the “real” ChatGPT that OpenAI wants you to use, and the uncensored version lurking behind an age gate.

For investors, the delay is a footnote in a much larger story about OpenAI’s path to profitability. The company reportedly burned through billions in 2024 and is expected to continue operating at a significant loss through at least 2026. Adult mode, if it drives higher engagement and retention among paying subscribers, could be a meaningful contributor to revenue growth. ChatGPT Plus and the newer Pro tier already command premium prices; an adult mode available exclusively to subscribers would add another reason to pay. Every month of delay is, in a sense, revenue left on the table.

And then there’s the elephant in the room that nobody at OpenAI wants to discuss publicly: the adult entertainment industry. Porn has historically been an early adopter and driver of new technology — from VHS to streaming video to VR. AI-generated adult content is already a massive and rapidly growing market, mostly served by smaller, less scrupulous operators. OpenAI entering this space, even indirectly, would legitimize it. That’s either a massive commercial opportunity or a reputational catastrophe, depending on who you ask within the company.

The most likely outcome, based on the pattern of delays and the signals from OpenAI’s leadership, is that adult mode will eventually launch in a heavily caveated form. Expect narrow permissions — perhaps more tolerance for profanity, violence in fiction, and candid discussion of drugs or sex, but probably not AI-generated explicit imagery or anything that could be classified as hate speech. In other words, a mode that’s less “adult” and more “slightly less cautious.” Whether that satisfies the users demanding it is another question entirely.

What’s clear is that this isn’t just a product delay. It’s a stress test for the entire philosophy of AI safety as practiced by the industry’s most prominent company. OpenAI built its brand on the promise of safe, beneficial AI. Now it’s trying to figure out how much of that promise it can relax without breaking it. The answer, apparently, requires more time than anyone originally expected.



from WebProNews https://ift.tt/O4pMPlq

The 300-Millisecond Cancer Treatment: How FLASH Radiotherapy Is Racing From Lab Bench to Hospital Floor

Imagine a radiation treatment so fast it’s over before you blink. Not minutes. Not seconds. Milliseconds. That’s the promise of FLASH radiotherapy — a technique that delivers ultra-high dose rates of radiation to tumors in a fraction of the time conventional treatments require, while appearing to spare healthy tissue from the devastating side effects that have plagued cancer patients for decades.

The concept sounds almost too good to be true. And for years, skeptics said exactly that.

But a growing body of preclinical evidence, the first human clinical trials, and a surge of engineering innovation are now converging to push FLASH from theoretical curiosity toward clinical reality. The question is no longer whether FLASH works in the lab. It’s whether the physics, the biology, and the economics can align to bring it to patients at scale.

A Century-Old Clue, Rediscovered

The origins of FLASH trace back further than most people realize. As IEEE Spectrum reported in its detailed technical examination of the field, the earliest hints of the FLASH effect appeared in the 1960s and even earlier, when researchers noticed that radiation delivered at extremely high dose rates seemed to produce less damage to normal tissue than the same dose delivered slowly. But the observation was largely shelved. The technology to deliver such dose rates clinically didn’t exist, and the radiobiology community had bigger problems to solve.

The modern resurgence began around 2014, when a team at the Lausanne University Hospital (CHUV) in Switzerland, led by Jean Bourhis and Marie-Catherine Vozenin, published striking results showing that ultra-high dose rate radiation — delivered at rates exceeding 40 grays per second, compared to the roughly 0.01 to 0.03 grays per second used in conventional radiotherapy — could sterilize tumors in mice while leaving surrounding normal tissue remarkably intact. The so-called “FLASH effect” was real, reproducible, and dramatic.

The results electrified the radiation oncology world. Here was a differential effect that, if it translated to humans, could fundamentally alter the therapeutic ratio — the balance between killing cancer and harming the patient that has defined radiation therapy since its inception.

Since then, the preclinical evidence has piled up. Studies in mice, mini-pigs, cats, and dogs have consistently shown that FLASH-rate irradiation produces less skin toxicity, less lung fibrosis, less neurocognitive damage, and less intestinal injury than conventional dose rate radiation, while maintaining equivalent tumor control. The biological mechanism remains incompletely understood, though leading hypotheses center on oxygen depletion: at ultra-high dose rates, radiation may transiently consume all available oxygen in normal tissue so quickly that the chemical reactions responsible for DNA damage in healthy cells can’t fully propagate. Tumors, which are often already hypoxic and have different metabolic profiles, don’t benefit from this protective effect.

Not everyone is convinced the oxygen depletion hypothesis tells the whole story. Some researchers have pointed to differential immune responses, differences in DNA damage complexity, and other radiochemical phenomena. The mechanism matters — not just for intellectual satisfaction, but because understanding it will determine how to optimize FLASH delivery parameters for maximum clinical benefit.

Still, the clinical momentum is undeniable.

The first human patient treated with FLASH radiotherapy received her dose in 2018 at CHUV. A 75-year-old woman with a multiresistant CD30+ T-cell cutaneous lymphoma on her skin received a single 15-gray dose to a 3.5-centimeter tumor in less than 100 milliseconds using a modified clinical electron linear accelerator. The tumor responded completely. Five months later, there was no significant skin toxicity. The case, published in Radiotherapy and Oncology, was proof of principle in a single patient — not proof of efficacy. But it opened the door.

Since then, the first formal clinical trial — the FAST-01 study conducted at Cincinnati Children’s Hospital — treated bone metastasis patients with proton FLASH therapy using Varian’s ProBeam system. Results published in 2022 showed the treatment was feasible and safe, with pain relief comparable to conventional palliative radiation. The trial wasn’t designed to demonstrate the FLASH effect’s tissue-sparing advantage; it was a feasibility and safety study. But it showed that proton FLASH could be delivered to human patients in a clinical setting.

More trials are underway or in planning. As IEEE Spectrum noted, researchers at several institutions are now designing studies to test FLASH in more challenging anatomical sites — brain, lung, abdomen — where the tissue-sparing effect could make the most dramatic clinical difference. The pace is accelerating.

The Engineering Problem Is Enormous

If the biology of FLASH is tantalizing, the engineering is formidable. Delivering therapeutic doses of radiation at rates hundreds or thousands of times faster than conventional treatment requires fundamentally rethinking the machines that produce the beams.

Conventional medical linear accelerators (linacs) were never designed for these dose rates. They operate in pulsed mode, delivering microsecond bursts of radiation, but their average dose rates are far too low for FLASH. Achieving FLASH-level dose rates with electrons is comparatively straightforward — electrons are easy to accelerate, and modified research linacs can reach the necessary intensities. But electrons penetrate only a few centimeters into tissue, limiting their clinical utility to superficial tumors.

For deep-seated cancers — which account for the vast majority of cases where radiation therapy is used — protons or X-rays (photons) are needed. And that’s where the engineering challenges multiply.

Proton FLASH is perhaps the nearest-term pathway for deep tumors. Cyclotron-based proton therapy systems can, in principle, deliver dose rates high enough for FLASH by removing the beam-limiting components that slow delivery in conventional treatments. Varian (now part of Siemens Healthineers) demonstrated this with its ProBeam system in the FAST-01 trial. But proton therapy systems cost $25 million to $200 million to build and operate. They occupy entire buildings. Fewer than 100 proton centers exist in the United States. Scaling proton FLASH to widespread use faces enormous capital and infrastructure barriers.

Photon FLASH — using high-energy X-rays, the workhorse of modern radiation therapy — is even harder. Generating X-rays at FLASH dose rates requires electron beams of extraordinary intensity striking a conversion target, and the physics of bremsstrahlung radiation production means most of the energy is lost as heat. Several groups are working on the problem. According to IEEE Spectrum, researchers at Stanford, SLAC National Accelerator Laboratory, and other institutions have explored using compact linear accelerators and even very high energy electrons (VHEE) in the 50-to-250 MeV range, which can penetrate deeply without a conversion target and could potentially deliver FLASH dose rates throughout the body.

VHEE is an intriguing approach. These electrons are energetic enough to pass through the body much like photons, avoiding the shallow penetration problem of conventional electron beams. And because no conversion target is needed, the beam intensity isn’t limited by target heating. But VHEE accelerators don’t exist in clinical form yet. Building them will require adapting technology from particle physics — compact, high-gradient accelerator structures — for medical use. Several startups and academic groups are pursuing this, but clinical VHEE systems are likely years away.

Then there’s the dosimetry problem. Measuring radiation dose accurately at FLASH rates is extraordinarily difficult. Conventional ionization chambers, the gold standard of radiation dosimetry, suffer from ion recombination effects at ultra-high dose rates that can cause them to underread by 20% or more. New detector technologies — diamond detectors, scintillators, alanine dosimeters, and specialized ionization chamber designs — are being developed and validated, but standardized, clinically accepted dosimetry protocols for FLASH don’t yet exist. Without accurate dosimetry, you can’t safely treat patients. Period.

Treatment planning presents its own challenges. Conventional radiation treatment planning systems assume continuous, low-dose-rate delivery and optimize dose distributions accordingly. FLASH delivery may require entirely new planning paradigms that account for the temporal structure of the beam, the spatial distribution of dose rate (not just dose), and the biological response to ultra-high dose rate irradiation. The interplay between dose, dose rate, fractionation, and the FLASH effect is still poorly characterized. Getting this wrong could mean losing the FLASH effect entirely — or worse, overdosing normal tissue.

And the regulatory pathway is uncharted territory. The FDA has granted breakthrough device designation to Varian’s FLASH-enabled ProBeam system, signaling recognition of the technology’s potential. But the evidentiary bar for widespread clinical approval will be high. Regulators will want to see not just safety and feasibility, but clear evidence of clinical benefit — improved outcomes or reduced toxicity — in well-designed randomized trials. Those trials will take years to complete.

The cost question looms over everything. Proton therapy, even in its conventional form, has struggled to demonstrate cost-effectiveness compared to advanced photon techniques like intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT). If FLASH requires proton infrastructure, its adoption will be limited to wealthy academic centers. If compact electron or VHEE systems can deliver FLASH at a fraction of the cost, the calculus changes dramatically. The technology that wins won’t just be the one that works best biologically. It’ll be the one that fits into existing clinical workflows and reimbursement structures.

Several companies are positioning themselves in this space. Varian/Siemens Healthineers has the proton FLASH lead. IntraOp Medical is developing electron FLASH systems for intraoperative use. PMB-Alcen in France has built a high-dose-rate electron accelerator specifically for FLASH research. And a handful of startups are pursuing VHEE and compact photon FLASH approaches, though most remain pre-revenue and pre-clinical.

What Happens Next

The next three to five years will be decisive for FLASH radiotherapy. Several pivotal questions must be answered.

First, does the FLASH effect hold up in human patients across multiple tumor types and anatomical sites? The preclinical evidence is strong, but animal models don’t always predict human outcomes. The ongoing and planned clinical trials — particularly those targeting deep-seated tumors where toxicity reduction would be most meaningful — will provide the critical data.

Second, can the mechanism be sufficiently understood to optimize delivery parameters? If oxygen depletion is the primary driver, then tissue oxygenation status, beam temporal structure, and total dose will all interact in complex ways. Clinicians will need reliable biomarkers or predictive models to know when the FLASH effect is being achieved in a given patient’s tissue. Without that, FLASH treatments will be designed partly by guesswork.

Third, can the engineering be democratized? Right now, FLASH-capable systems are bespoke research tools or modified clinical machines available at a handful of centers worldwide. For FLASH to impact cancer care broadly, the technology must become compact, affordable, and reliable enough for community hospitals — not just major academic centers. That’s a tall order, but it’s not unprecedented. IMRT followed a similar trajectory from research curiosity to standard of care over roughly two decades.

And fourth, can the field avoid overpromising? Radiation oncology has a history of enthusiasms — proton therapy, carbon ion therapy, neutron therapy — that were heralded as transformative but ultimately found narrower niches than initially predicted. FLASH’s advocates are aware of this history. Many are deliberately cautious in their public statements, emphasizing the preliminary nature of the evidence. But the hype cycle is powerful, and patient expectations can outpace the science.

So where does that leave us? FLASH radiotherapy represents a genuinely novel approach to an old problem — one grounded in real physics and increasingly supported by real biology. It’s not a sure thing. The engineering barriers are substantial, the clinical evidence is nascent, and the path to broad adoption is long and uncertain. But the potential payoff — radiation therapy that kills tumors without crippling patients — is significant enough to justify the enormous investment of talent and capital now flowing into the field.

The radiation will be fast. The road to the clinic won’t be.



from WebProNews https://ift.tt/89lMNhg