Thursday, 12 March 2026

AI Chatbots Are Homogenizing Human Thought — and the Research to Prove It Is Alarming

Here’s the thing about asking a chatbot for advice: you’re probably getting the same answer as everyone else. And that sameness is starting to reshape how people think.

A new study covered by CNET reveals that people who use AI chatbots to help form opinions on social and political topics end up converging on remarkably similar viewpoints. Not slightly similar. Strikingly so. The research, published in the journal Science in 2025, found that individuals who consulted AI for guidance on moral and political dilemmas showed a measurable reduction in opinion diversity compared to control groups who deliberated on their own or discussed with other humans.

Think about what that means at scale. Millions of people are now turning to ChatGPT, Gemini, Claude, and other large language models for everything from relationship advice to policy opinions. If those tools consistently nudge users toward a narrow band of responses — even subtly — the downstream effects on democratic discourse, cultural diversity, and independent reasoning could be enormous.

The researchers behind the study conducted experiments where participants were asked to consider contentious topics. Some worked through the questions alone. Others discussed with fellow humans. A third group interacted with an AI chatbot. The results were stark: the AI group’s opinions clustered tightly together, while the human-only groups maintained a wider spread of perspectives. The chatbot didn’t just inform. It flattened.

Why does this happen? Large language models are trained on massive datasets and optimized to produce responses that are helpful, harmless, and — critically — agreeable. They’re designed to avoid controversy. That design choice has a side effect: the models tend to land on moderate, consensus-friendly positions that sound reasonable but lack the rough edges of genuine human disagreement. When millions of people receive that same smoothed-over perspective, individual thought patterns start to converge.

This isn’t a hypothetical risk. It’s measurable right now.

And the problem compounds. As Science has reported, AI-generated text is increasingly feeding back into the training data for future models, creating a feedback loop where homogenized outputs train the next generation of homogenized outputs. Researchers call this “model collapse” — a gradual narrowing of the information space that becomes self-reinforcing over time.
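The narrowing dynamic is easy to demonstrate with a toy simulation (a sketch only; real model collapse involves far richer distributions than a single number). Each "generation" here fits a normal distribution to the previous generation's outputs after smoothing away the tails, then samples from that fit:

```python
import random
import statistics

def next_generation(samples, n=1000):
    """Simulate training a 'model' on the previous generation's outputs.

    We fit a normal distribution but only to the central 80% of samples,
    mimicking a model that smooths away outlier viewpoints, then sample
    a fresh generation from that narrowed fit.
    """
    mu = statistics.mean(samples)
    trimmed = sorted(samples)[int(0.1 * len(samples)):int(0.9 * len(samples))]
    sigma = statistics.stdev(trimmed)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
gen = [random.gauss(0, 1) for _ in range(1000)]  # diverse starting opinions
spreads = []
for _ in range(10):
    spreads.append(statistics.stdev(gen))
    gen = next_generation(gen)
# Each generation's spread shrinks: the information space narrows
# without anyone deciding it should.
```

The trimming step is the whole story: any fitting process that underweights the tails loses a little diversity per round, and the loss compounds.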

The implications for industry professionals are direct. If you’re building products that integrate AI-generated recommendations — whether in media, education, healthcare, or policy — you’re potentially building a conformity engine. Not intentionally. But structurally. The architecture of these systems rewards convergence, and users, often without realizing it, absorb that convergence as their own thinking.

Some researchers argue the effect mirrors what social media algorithms already do: filter and flatten. But there’s a key difference. Social media creates echo chambers where like-minded people reinforce each other’s existing beliefs. AI chatbots do something stranger. They pull people with different starting positions toward the same middle ground. Echo chambers polarize. Chatbots homogenize. Both are problems, but they’re different problems requiring different solutions.

So what can be done?

The study’s authors suggest that AI systems could be designed to present multiple perspectives rather than settling on a single authoritative-sounding answer. Some companies are already experimenting with this. Anthropic, the maker of Claude, has discussed building models that acknowledge uncertainty and present competing viewpoints. OpenAI has explored similar ideas in its research on democratic inputs to AI. But these remain early-stage efforts, and the default behavior of most commercial chatbots still trends toward confident, singular answers.

There’s also a user-side fix, though it’s harder to implement: teaching people to treat AI outputs as one input among many rather than as definitive answers. Digital literacy campaigns have been discussed for years. They haven’t kept pace with adoption.

For product teams and engineers, the takeaway is concrete. Default designs matter. If your AI integration surfaces one answer, you’re shaping opinion whether you mean to or not. If it surfaces three competing answers with context, you’re preserving cognitive diversity. That’s a design choice, not a technical limitation.
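As a sketch of that design choice, here is one hypothetical way an integration could request and parse several perspectives instead of a single answer. The prompt wording and the `PERSPECTIVE` labels are invented for illustration, not any vendor's recommended template:

```python
import re

def multi_perspective_prompt(question, n=3):
    """Build a prompt that asks for competing viewpoints rather than
    one authoritative answer. Wording is illustrative only."""
    return (
        f"Question: {question}\n\n"
        f"Present {n} distinct, well-reasoned perspectives on this question, "
        "including at least one that challenges the mainstream view. "
        "Label each one 'PERSPECTIVE k:' and do not state which is correct."
    )

def parse_perspectives(response_text):
    """Split a model response back into its labeled perspectives."""
    parts = re.split(r"PERSPECTIVE \d+:", response_text)
    return [p.strip() for p in parts if p.strip()]
```

The point is structural: if the surface the user sees holds three answers, the UI itself communicates that reasonable disagreement exists.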

I grew up in the Midwest, where people argued about everything at the dinner table — politics, religion, whether a hot dog is a sandwich. Those arguments were messy and unresolved and vital. They’re how you learn that reasonable people can look at the same facts and reach different conclusions. A system that quietly erases that messiness isn’t making us smarter. It’s making us the same.

The research is clear. The question now is whether the companies building these tools will treat opinion homogenization as a bug worth fixing — or a feature they can live with.



from WebProNews https://ift.tt/m4J9Vyc

AI Agents Are Here — And Their Security Risks Are Outpacing the Governance Meant to Contain Them

AI agents are no longer theoretical. They’re booking meetings, writing code, managing workflows, and making decisions with minimal human oversight. And that’s exactly what makes them dangerous.

A recent TechRepublic report lays out what security professionals have been warning about for months: autonomous AI agents introduce a class of risk that most organizations aren’t prepared to handle. These aren’t chatbots answering customer questions. They’re software entities that can take actions — real ones, with real consequences — across enterprise systems. The gap between what these agents can do and what companies have built to govern them is widening fast.

The core problem is simple to state and hard to solve. AI agents operate with a degree of autonomy that traditional security models weren’t designed for. When an agent can access databases, send emails, execute transactions, and interact with external APIs on its own, the attack surface doesn’t just grow. It multiplies.

Consider prompt injection. It’s already a well-documented vulnerability in large language models, but with agents, the stakes escalate dramatically. A prompt injection attack against a standalone chatbot might produce a misleading answer. The same attack against an agent with access to financial systems could trigger unauthorized transactions. OWASP has flagged this as a top concern in its Top 10 for LLM Applications, noting that agents with tool access create compound risk vectors that are qualitatively different from anything enterprises have dealt with before.
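One partial mitigation is to gate tool execution rather than trust the model's proposed actions directly. The sketch below, with invented tool names and risk tiers, refuses unknown tools outright and requires either a fully trusted context or explicit human approval for high-risk ones:

```python
# Illustrative gate between an LLM agent's proposed tool call and its
# execution. Tool names and the risk tiers are hypothetical.
HIGH_RISK_TOOLS = {"send_payment", "delete_records", "send_email"}
ALLOWED_TOOLS = {"search_docs", "get_calendar", "send_email", "send_payment"}

def authorize_tool_call(tool_name, context_is_trusted, human_approved=False):
    """Return True only if the proposed call may proceed.

    - Unknown tools are always refused.
    - High-risk tools require either a trusted context (no external or
      untrusted text in the prompt) or explicit human approval.
    """
    if tool_name not in ALLOWED_TOOLS:
        return False
    if tool_name in HIGH_RISK_TOOLS and not context_is_trusted:
        return human_approved
    return True
```

The key design decision is that the gate keys off *where the instruction came from*, not what the model says it intends, because an injected prompt controls the latter.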

Then there’s the identity problem. Who is the agent acting as? Traditional access control assumes a human user with credentials. Agents blur this entirely. They may inherit permissions from the user who deployed them, or they may operate under service accounts with overly broad access. In many current implementations, there’s no granular way to audit what an agent did, why it did it, or whether it was operating within intended boundaries. That’s not a minor oversight. It’s a structural flaw.

Microsoft, Google, and OpenAI are all racing to ship agentic capabilities. Microsoft’s Copilot Studio now lets enterprises build custom agents that can take actions across Microsoft 365 and other connected services. Google’s Agentspace, announced in late 2024, aims to let agents operate across an organization’s full data environment. OpenAI has been steadily expanding its Assistants API and recently introduced more sophisticated function-calling features that give agents greater autonomy. The commercial incentives are obvious. But security frameworks haven’t kept pace with the product roadmaps.

Gartner has projected that by 2028, at least 15% of day-to-day work decisions will be made autonomously by agentic AI — up from essentially zero in 2024. That’s an enormous shift in a short window. And it’s happening while most organizations lack even basic policies for agent deployment.

What does governance actually look like here? The TechRepublic piece highlights several emerging best practices. Least-privilege access is one — agents should have the minimum permissions necessary for their specific task, and those permissions should be scoped tightly. Session-based authorization is another, where an agent’s access expires after a defined period or task completion rather than persisting indefinitely. Logging and observability matter enormously; if you can’t reconstruct what an agent did and why, you can’t secure it.
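Those three practices — least privilege, session expiry, and logging — can be sketched together in a few lines. The class below is illustrative; the field names and the 900-second TTL are invented. It grants an agent an explicit allowlist of actions, expires the whole grant, and logs every authorization decision:

```python
import time

class AgentSession:
    """Session-scoped, least-privilege grant for an agent (sketch only).

    Permissions are an explicit allowlist and the entire grant expires
    after ttl_seconds, instead of persisting indefinitely.
    """

    def __init__(self, agent_id, permissions, ttl_seconds):
        self.agent_id = agent_id
        self.permissions = frozenset(permissions)
        self.expires_at = time.time() + ttl_seconds
        self.audit_log = []

    def check(self, action):
        allowed = time.time() < self.expires_at and action in self.permissions
        # Log every decision so the agent's behavior can be reconstructed.
        self.audit_log.append((time.time(), action, allowed))
        return allowed

# Hypothetical usage: a reporting agent that may only read the CRM,
# and only for the next 15 minutes.
session = AgentSession("report-bot", {"crm:read"}, ttl_seconds=900)
```

Note that denials are logged too; reconstructing what an agent *tried* to do is often more revealing than what it succeeded at.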

But here’s the tension. The whole point of AI agents is speed and autonomy. Every guardrail you add introduces friction. Every approval step you require slows the workflow. Organizations that lock agents down too aggressively won’t see the productivity gains they’re chasing. Organizations that don’t lock them down enough are setting themselves up for breaches that could be extraordinarily difficult to detect, let alone remediate.

Shadow AI makes this worse. Employees are already deploying agents using free or low-cost tools without IT approval. A marketing manager connecting an AI agent to the company CRM through a third-party integration. A developer giving an agent access to production databases to speed up debugging. These aren’t hypothetical scenarios. They’re happening now, in companies of every size.

The regulatory picture is fragmented. The EU AI Act addresses some aspects of autonomous systems, but its framework wasn’t specifically designed for the kind of multi-step, tool-using agents now entering production. In the U.S., there’s no comprehensive federal AI legislation, and the patchwork of state-level proposals doesn’t specifically address agentic risk. NIST’s AI Risk Management Framework provides useful principles but lacks the specificity that security teams need for implementation.

Some vendors are trying to fill the gap. Startups focused on AI security — companies like Prompt Security, Lakera, and Protect AI — are building tools specifically designed to monitor and constrain agent behavior. But the market is young, standards are thin, and interoperability between different agent platforms remains limited.

So what should security leaders do right now? First, inventory. Know which agents are operating in your environment, what they have access to, and who deployed them. Second, establish kill switches — the ability to immediately revoke an agent’s access and halt its operations if something goes wrong. Third, treat agent deployment like you’d treat onboarding a new employee with system access: with formal review, defined permissions, and ongoing monitoring.
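A minimal version of that inventory-plus-kill-switch discipline might look like the following sketch; the registry API is invented for illustration:

```python
class AgentRegistry:
    """Minimal agent inventory with a kill switch (illustrative only).

    Every deployed agent is registered with its owner and scopes, and
    can be halted individually or all at once.
    """

    def __init__(self):
        self._agents = {}  # agent_id -> {"owner": ..., "scopes": ..., "active": ...}

    def register(self, agent_id, owner, scopes):
        self._agents[agent_id] = {"owner": owner, "scopes": set(scopes), "active": True}

    def is_active(self, agent_id):
        entry = self._agents.get(agent_id)
        return bool(entry and entry["active"])

    def kill(self, agent_id):
        """Revoke one agent immediately."""
        if agent_id in self._agents:
            self._agents[agent_id]["active"] = False

    def kill_all(self):
        """Emergency stop: halt every registered agent."""
        for entry in self._agents.values():
            entry["active"] = False

    def inventory(self):
        """Answer the first question: what is running, and whose is it?"""
        return {aid: e["owner"] for aid, e in self._agents.items()}
```

In a real deployment, every tool-execution path would consult `is_active` before acting, which is what makes the kill switch effective rather than decorative.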

None of this is optional. The technology is moving. The threats are real. And the window for getting governance right before something goes seriously wrong is narrower than most executives realize.



from WebProNews https://ift.tt/62yQctM

Wednesday, 11 March 2026

How Apple Trained AI to Recognize Unseen Hand Gestures Using Wearable Sensors

Apple has steadily advanced the way humans interact with technology, moving beyond physical buttons and touchscreens toward natural body movements. Recent developments in artificial intelligence and wearable hardware have allowed the company to train machine learning models capable of recognizing hand gestures that the system has never previously encountered. This capability stems from advanced sensor fusion, combining data from accelerometers, gyroscopes, and optical sensors to interpret the complex kinematics of the human hand.

Traditionally, electronic devices required explicit programming for every specific input. If a manufacturer wanted a smartwatch to recognize a wrist flick, engineers had to collect thousands of examples of that exact motion and train a model specifically for it. Apple’s recent artificial intelligence research shifts this approach by teaching the neural network the fundamental mechanics of hand movements, enabling the software to identify novel gestures based on their underlying physical properties.

The Hardware Foundation Behind Movement Tracking

The foundation for this technology already exists within consumer hardware like the Apple Watch Series 9 and Apple Watch Ultra 2. These devices incorporate the S9 System in Package (SiP), which features a specialized four-core Neural Engine designed specifically for on-device machine learning tasks. This processor handles complex algorithms locally, interpreting continuous streams of data from the watch’s internal components without relying on a cloud connection.

To capture the minute details of a hand gesture, Apple employs a technique called sensor fusion. The device continuously monitors the accelerometer and gyroscope to track sudden changes in velocity and orientation. Simultaneously, the optical heart sensor, primarily designed to measure pulse, detects subtle changes in blood flow that occur when specific muscles and tendons contract in the wrist and fingers.

Overcoming the Limitations of Traditional Machine Learning

Training an artificial intelligence model to understand unseen inputs involves a concept known in computer science as zero-shot learning or generalization. In standard supervised learning, an algorithm learns to identify a specific gesture, such as a double tap, by analyzing massive datasets of that exact movement. However, if a user performs a slightly different motion—like a triple tap or a finger rub—a standard model fails to recognize the intent because it lacks direct training data for that specific action.

Apple’s researchers have tackled this limitation by training their models on the broader biomechanics of the human hand. Instead of mapping a specific sensor output to a single command, the neural network learns a multidimensional representation of wrist and finger articulation. When a user performs an unfamiliar gesture, the AI evaluates the sensor data against this biomechanical model, estimating the physical position of the fingers even if it has never been explicitly programmed to recognize that exact sequence of movements.
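The generalization idea can be illustrated with a toy nearest-neighbor match in an embedding space. The encoder that maps sensor windows to embeddings is omitted here, and the three-dimensional reference vectors and 0.8 threshold are invented stand-ins for what a trained biomechanical model would produce:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings from a trained encoder: each gesture is a
# point in a shared hand-kinematics space, not a hard class label.
KNOWN_GESTURES = {
    "pinch":       [0.9, 0.1, 0.0],
    "double_tap":  [0.1, 0.9, 0.1],
    "wrist_flick": [0.0, 0.1, 0.9],
}

def recognize(embedding, threshold=0.8):
    """Return the closest known gesture, or None for a genuinely new one.

    Because matching happens in a continuous space, a variant the model
    never saw (e.g. a softer pinch) still lands near 'pinch'.
    """
    best, best_sim = None, -1.0
    for name, ref in KNOWN_GESTURES.items():
        sim = cosine(embedding, ref)
        if sim > best_sim:
            best, best_sim = name, sim
    return best if best_sim >= threshold else None
```

The threshold is what separates "an unfamiliar variant of a known gesture" from "something the system should not act on," which is the practical core of zero-shot recognition.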

Interpreting Sensor Data as Kinematic Models

The process of converting raw hardware signals into a recognized gesture requires sophisticated mathematics. When a user moves their hand, the accelerometer records linear acceleration across three axes, while the gyroscope measures rotational velocity. These sensors generate a distinct wave pattern for every physical action. Apple’s machine learning models analyze the frequency and amplitude of these waves to reconstruct the motion in a virtual space.

By mapping these wave patterns to known anatomical constraints, the AI can infer what the hand is doing. For instance, the human wrist has a limited range of motion, and certain finger movements naturally cause specific tendons to shift. The neural network uses these biological rules to filter out noise—like the user simply walking or typing—and isolate deliberate, communicative gestures, assigning mathematical probabilities to various hand poses.

The Role of Blood Flow and Muscle Contraction

One of the most fascinating aspects of Apple’s gesture recognition involves the optical heart sensor. While accelerometers and gyroscopes are excellent at tracking gross motor movements, they struggle with micro-gestures where the wrist remains entirely still but the fingers move. To solve this, Apple uses photoplethysmography (PPG), the same technology used to measure heart rate, to observe the physical expansion and contraction of blood vessels.

When a user pinches their index finger and thumb together, the muscles in the forearm contract. This contraction briefly alters the volume of blood flowing through the wrist. The optical sensor detects this microscopic fluctuation. By feeding this PPG data into the Neural Engine alongside the motion data, the AI gains a comprehensive understanding of muscle engagement, allowing it to detect tiny finger movements that produce almost no external wrist motion.
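A toy version of that motion-plus-PPG fusion: require low motion energy *and* a detectable PPG fluctuation before declaring a pinch. The thresholds, window format, and rule itself are invented for illustration; Apple's actual models are neural networks, not hand-set rules:

```python
def detect_pinch(motion_window, ppg_window,
                 motion_thresh=0.05, ppg_dip_thresh=0.02):
    """Toy fusion of motion and PPG evidence for a micro-gesture.

    A pinch produces almost no wrist motion but a brief dip in the PPG
    signal as forearm muscles contract, so we require LOW motion energy
    AND a detectable PPG swing. All thresholds are invented.
    """
    # Motion energy: mean absolute acceleration over the window.
    motion_energy = sum(abs(x) for x in motion_window) / len(motion_window)
    # PPG fluctuation: peak-to-trough amplitude over the window.
    ppg_swing = max(ppg_window) - min(ppg_window)
    return motion_energy < motion_thresh and ppg_swing > ppg_dip_thresh
```

The conjunction is what filters out walking or typing: those produce large motion energy, so PPG fluctuations alone never fire the gesture.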

Integration with Spatial Computing Devices

While wearable sensors provide excellent data regarding muscle contraction and wrist orientation, they represent only one half of Apple’s broader human-computer interaction strategy. The Apple Vision Pro approaches gesture recognition from an entirely different angle, relying on high-resolution external cameras and infrared illuminators to visually track the user’s hands in three-dimensional space.

The convergence of these two technologies presents massive potential for future applications. A smartwatch can detect the tactile force and exact timing of a finger pinch through muscle and blood flow changes, while a spatial computing headset can track the exact spatial coordinates of the hand. Training artificial intelligence to interpret both visual data and wearable sensor data simultaneously allows the system to recognize highly complex, previously unseen gestures with high accuracy, even if the user’s hand is partially obscured from the headset’s cameras.

Prioritizing User Privacy and On-Device Processing

Processing continuous biometric and movement data raises significant privacy considerations. Recording every hand movement a person makes could theoretically expose sensitive information, such as the cadence of their typing or the specific keys they press. Apple addresses this concern by strictly limiting where and how the gesture recognition algorithms operate.

All sensor data analysis occurs locally on the device’s Neural Engine. The trained AI model is downloaded to the smartwatch or headset, and the raw accelerometer, gyroscope, and optical sensor data are evaluated in real-time. Once the system identifies a gesture, it translates that movement into a system command—like answering a call or pausing a song—and immediately discards the raw sensor feed. No continuous movement data is transmitted to remote servers for processing.

Expanding Accessibility Through Adaptive Technology

The ability of an AI to recognize previously unseen gestures has profound implications for device accessibility. Users with motor impairments or physical disabilities often cannot perform standard gestures exactly as a manufacturer intended. A person with limited hand mobility might execute a pinch motion that looks fundamentally different on a sensor level than the data the model was originally trained on.

By moving away from rigid, hard-coded gesture recognition and toward a generalized understanding of hand kinematics, Apple’s devices can adapt to individual users. An AI capable of inferring intent from unseen variations in movement can learn a user’s unique physical capabilities. This adaptive approach ensures that assistive technologies, like Apple’s AssistiveTouch for the Apple Watch, become more responsive and personalized over time.

The Future of Natural Human-Computer Interaction

As machine learning models become more sophisticated, the requirement for users to learn specific commands will diminish. Instead of memorizing a list of exact taps, swipes, and pinches to control their devices, users will be able to interact with technology using intuitive, natural body language. The hardware will bear the burden of interpretation, using trained neural networks to understand the user’s intent based on context and physical motion.

Apple’s ongoing research into training AI on wearable sensor data points toward a future where technology fades into the background. By combining advanced processors, precise internal sensors, and generalized machine learning models, the company is building a foundation for hardware that responds to human movement as naturally as another person would, fundamentally changing how we control the digital world around us.



from WebProNews https://ift.tt/6wWL732

Chinese Startup’s AI Voices Beat Tech Giants in Trust and Realism Study

Artificial intelligence has made significant strides in generating human-like speech, yet the quality of these synthetic voices plays a pivotal role in how much users believe and accept them. A recent evaluation highlighted this point when participants assessed voices from various providers, revealing that a Chinese startup’s offerings scored higher in both trustworthiness and lifelikeness compared to those from major players like Microsoft, Google, and Amazon. This finding, detailed in an article from TechRadar, underscores a broader challenge in the field: subpar AI voices can erode confidence, while superior ones foster greater acceptance.

To understand this development, consider the evolution of text-to-speech technology. Early systems produced robotic, monotonous outputs that felt distant and unnatural. Over time, advancements in machine learning, particularly deep neural networks, have enabled more fluid and expressive voices. These improvements draw from vast datasets of human recordings, allowing algorithms to mimic intonation, rhythm, and emotional nuances. However, not all implementations achieve the same level of sophistication. In the study mentioned in the TechRadar piece, listeners rated voices on scales of realism and trust, and the Chinese company (identified in some coverage as Speechify, though the attribution is uncertain; it is best described simply as a rising player in the AI audio space) outperformed the established giants. Participants found its voices more convincing, which suggests that finer details in voice synthesis—such as subtle prosody variations or reduced artifacts—can make a substantial difference.

Trust in AI-generated speech matters for several reasons. In applications like virtual assistants, audiobooks, or customer service bots, users need to feel that the voice is reliable and authentic. If a voice sounds off, it can lead to skepticism, reducing engagement. For instance, in educational tools, a trustworthy voice might encourage learners to absorb information more effectively, while a dubious one could distract or disengage them. The TechRadar report points out that poor AI voices often trigger an uncanny valley effect, where something almost human but not quite right provokes discomfort. This psychological response has roots in evolutionary biology, where humans are wired to detect anomalies in communication for survival purposes. When AI voices fall short, they amplify this unease, making users question the underlying technology or even the content being delivered.

The evaluation process in the study was straightforward yet revealing. Researchers gathered a diverse group of listeners and presented them with audio samples from different providers. Each sample involved neutral statements to minimize bias from content. Listeners then scored the voices on how realistic they sounded and how much trust they inspired. Surprisingly, the Chinese startup’s voices topped the charts, even surpassing those from tech behemoths with massive resources. Microsoft, for example, has invested heavily in its Azure Cognitive Services, which include neural text-to-speech capabilities trained on extensive multilingual datasets. Google’s WaveNet technology, integrated into products like Google Assistant, uses waveform generation to produce highly natural speech. Amazon’s Polly service employs similar methods, offering a range of voices for applications in Alexa and beyond. Despite these efforts, the startup’s approach apparently resonated more with evaluators.

What sets this Chinese company apart? While specifics aren’t fully disclosed in the TechRadar article, industry insights suggest it may employ advanced generative models that prioritize emotional expressiveness and contextual adaptation. Many startups in this space focus on niche improvements, such as better handling of accents or dialects, which can enhance perceived authenticity. In contrast, larger corporations often scale their technologies broadly, sometimes at the expense of fine-tuned quality in specific scenarios. This dynamic echoes patterns seen in other tech sectors, where nimble innovators challenge incumbents by addressing overlooked user needs. The higher ratings for trust could stem from the startup’s voices avoiding common pitfalls like unnatural pauses or metallic tones, which plague some mainstream options.

This outcome has implications for the broader adoption of AI voices. As synthetic speech integrates into everyday tools—from navigation apps to telehealth services—the ability to inspire confidence becomes essential. Businesses relying on AI for customer interactions risk losing credibility if their voices fall flat. Consider the rise of voice commerce, where users might dictate purchases or queries; a trustworthy voice could boost conversion rates, while a suspicious one might lead to abandoned transactions. Similarly, in media production, realistic AI voices enable faster content creation, such as dubbing films or generating podcasts, but only if audiences accept them as genuine.

Looking at the competitive landscape, it’s clear that voice quality is a battleground. Microsoft has been refining its offerings through partnerships and updates, aiming for more inclusive voices that represent diverse demographics. Google’s efforts include research into prosody modeling, ensuring voices convey appropriate emotions. Amazon continues to expand Polly’s capabilities with custom voice options. Yet, the TechRadar findings indicate that these giants might need to reassess their strategies. Perhaps incorporating user feedback loops more aggressively or investing in perceptual studies could help them catch up. The Chinese startup’s success might also reflect cultural nuances in voice perception; listeners from different backgrounds may prioritize certain auditory cues, suggesting that global providers should tailor their models accordingly.

Beyond trust and realism, ethical considerations come into play. As AI voices become indistinguishable from human ones, concerns about misinformation arise. Deepfake audio could be used to impersonate individuals, spreading false narratives. The higher realism of the startup’s voices amplifies this risk, prompting calls for safeguards like watermarking or detection tools. Regulators are beginning to address these issues, with proposals for labeling AI-generated content. In the TechRadar piece, the emphasis on trust ties directly to these worries; if users can’t discern synthetic from real, they might grow wary of all digital audio, hindering positive applications.

From a technical standpoint, achieving superior voice synthesis involves complex processes. Models like Tacotron or FastSpeech convert text into spectrograms, which are then transformed into audio waveforms via vocoders. Enhancements in these areas, such as attention mechanisms that align text with speech patterns more accurately, contribute to better outputs. The Chinese startup likely excels in optimizing these elements, possibly through proprietary datasets or novel training techniques. Comparative analyses show that while big tech companies have access to enormous computational power, startups can innovate by focusing on quality over quantity. For example, training on high-fidelity recordings from professional voice actors can yield more polished results than relying solely on crowdsourced data.
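The two-stage split described above, an acoustic model producing spectrogram frames and a vocoder turning them into audio, can be sketched with trivial stand-ins. Nothing below resembles Tacotron's or FastSpeech's internals; both stages are neural networks in real systems, and the transforms here are toys that only preserve the pipeline's shape:

```python
def acoustic_model(text, n_mels=4, frames_per_char=2):
    """Toy stage 1: map text to a mel-spectrogram-shaped structure,
    a list of frames with n_mels values each. A real acoustic model
    learns this mapping; we derive a pattern from character codes."""
    frames = []
    for ch in text:
        for _ in range(frames_per_char):
            frames.append([(ord(ch) % (m + 2)) / 10.0 for m in range(n_mels)])
    return frames

def vocoder(mel_frames, samples_per_frame=80):
    """Toy stage 2: expand each spectrogram frame into audio samples.
    A real vocoder synthesizes a waveform; we just upsample the
    frame mean to show the shape of the interface."""
    audio = []
    for frame in mel_frames:
        level = sum(frame) / len(frame)
        audio.extend([level] * samples_per_frame)
    return audio

mel = acoustic_model("hi")   # 4 frames for 2 characters
wave = vocoder(mel)          # 80 samples per frame
```

The interface between the stages is the point: quality improvements like better attention alignment live inside stage 1, while artifact reduction lives largely in stage 2, so the two can be improved independently.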

User perceptions of AI voices also vary by context. In casual settings, like smart home devices, a slightly imperfect voice might be forgiven, but in professional environments, such as legal or medical consultations, precision is non-negotiable. The study’s results align with broader surveys indicating that emotional congruence—where the voice matches the message’s tone—strongly influences trust. If an AI voice delivers bad news with inappropriate cheerfulness, it undermines credibility. Developers must therefore integrate sentiment analysis to modulate voice output dynamically.

The future of AI voice technology looks promising, with ongoing research pushing boundaries. Innovations in multilingual support and accent adaptation could make voices more accessible worldwide. Collaborations between startups and established firms might accelerate progress, combining agility with scale. As the TechRadar article illustrates, excellence in this area isn’t solely about resources; it’s about understanding human auditory preferences deeply.

In educational contexts, high-quality AI voices could transform learning experiences. Imagine interactive textbooks where narratives come alive with convincing intonation, aiding comprehension for students with reading difficulties. In accessibility tools, realistic voices empower those with visual impairments by providing seamless audio interfaces. The superior ratings for the Chinese startup’s voices suggest potential for such applications, where trust directly impacts usability.

Challenges remain, however. Scalability is one; producing top-tier voices for every language and dialect requires immense data and effort. Bias in training data can lead to voices that favor certain demographics, perpetuating inequalities. Addressing these requires diverse datasets and ethical guidelines. Moreover, as AI voices improve, the line between helpful assistance and deceptive manipulation blurs, necessitating robust verification methods.

The TechRadar evaluation serves as a wake-up call for the industry. It demonstrates that users prioritize quality that feels human, not just functional. For Microsoft, Google, and Amazon, this means refining their technologies to match or exceed emerging competitors. For the Chinese startup, it’s an opportunity to expand influence, perhaps through partnerships or global outreach.

Ultimately, the pursuit of trustworthy AI voices drives innovation that benefits society. By focusing on realism that builds confidence, developers can create tools that enhance communication without sowing doubt. This balance will define the next phase of synthetic speech, ensuring it serves as a reliable extension of human interaction rather than a source of suspicion. As more studies like this emerge, they will guide refinements, leading to voices that not only sound right but also feel right to listeners everywhere.



from WebProNews https://ift.tt/mQpG8q6

Tuesday, 10 March 2026

Uber Rolls Out Women Driver Preference Feature Nationwide: What It Means for Riders and the Gig Economy

Uber just expanded its Women Rider Preference feature to all 50 U.S. states. The feature, which lets women and nonbinary riders request a woman driver, is now available nationwide after a phased rollout that began in select markets. It’s a significant move — and one that’s already drawing both praise and pointed criticism.

The concept is straightforward. Women and nonbinary riders can toggle a preference in the Uber app to be matched with women drivers. Women drivers, in turn, can opt to primarily receive ride requests from women and nonbinary passengers. Uber first piloted the feature in select cities, including Chicago and Phoenix, before this national expansion, according to Mashable.

Safety is the driving force here. Uber has long faced scrutiny over rider safety incidents, particularly those involving women passengers. The company’s own U.S. Safety Report, released in 2022, documented thousands of reports of sexual assault and other safety incidents on its platform between 2019 and 2020. That history looms large over this feature’s rollout.

And the numbers back up the demand. According to Uber, early data from pilot markets showed strong adoption. The company reported that the feature helped increase the number of women drivers on its platform in those areas — a persistent challenge for rideshare companies. Women make up a relatively small share of Uber’s driver base, estimated at roughly 30%, and the company has acknowledged that safety concerns are a primary reason many women don’t sign up to drive.

So this isn’t just a rider-facing feature. It’s a recruitment tool.

By giving women drivers more control over who they pick up, Uber is betting it can attract and retain more women behind the wheel. That matters for the bottom line. More drivers means shorter wait times, better coverage, and a healthier marketplace overall. Uber has explicitly framed the feature as part of a broader effort to close the gender gap among its drivers, per its newsroom announcements.

But the rollout hasn’t been without friction. Critics have raised questions about whether gender-based matching could run afoul of anti-discrimination laws. Some legal scholars have pointed out that federal and state civil rights statutes generally prohibit businesses from discriminating based on sex in public accommodations. Uber’s counter-argument: this is a preference, not a mandate. Male riders can’t be denied service — they simply may wait longer or get matched with a different driver. The distinction matters legally, though it hasn’t been fully tested in court.

There’s also the question of how this interacts with Uber’s algorithmic matching. Toggling the preference doesn’t guarantee a woman driver. It expresses a preference that the system tries to honor. In areas with fewer women drivers, wait times could increase substantially for riders who activate it. Uber has been upfront about this trade-off.
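To make that trade-off concrete, here is a minimal Python sketch of soft preference matching. The names, data model, and wait-time threshold are all illustrative assumptions, not Uber's actual algorithm: honor the preference when a woman driver is reasonably close, and fall back to the nearest driver when honoring it would add too much wait.

```python
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    is_woman: bool
    eta_minutes: float

def match(drivers, rider_prefers_woman: bool, max_extra_wait: float = 10.0):
    """Pick a driver, treating the gender preference as soft.

    If no woman driver is within max_extra_wait minutes of the
    overall-closest driver, fall back to the closest driver of any
    gender, mirroring the "preference, not a mandate" framing.
    """
    if not drivers:
        return None
    closest = min(drivers, key=lambda d: d.eta_minutes)
    if not rider_prefers_woman:
        return closest
    women = [d for d in drivers if d.is_woman]
    if women:
        best_woman = min(women, key=lambda d: d.eta_minutes)
        if best_woman.eta_minutes - closest.eta_minutes <= max_extra_wait:
            return best_woman
    return closest
```

Note how the sketch makes the trade-off visible: in a market with few women drivers, the fallback branch fires more often, which is exactly the longer-wait scenario Uber has acknowledged.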

Worth watching: how competitors respond. Lyft has its own set of safety features but hasn’t rolled out an equivalent gender-preference matching system at this scale. Smaller players and women-focused rideshare startups like See Jane Go have operated in this niche for years, but none have the reach or driver density of Uber. This move effectively absorbs a selling point that once differentiated those smaller services.

The timing is deliberate. Uber is making this push as the gig economy faces renewed regulatory pressure and as public discourse around women’s safety in shared transportation intensifies. Cities from New York to Los Angeles have debated additional safety mandates for rideshare platforms. By acting proactively, Uber positions itself ahead of potential regulation — a familiar playbook for the company.

Driver reactions have been mixed. Some women drivers on forums and social media have welcomed the added control. Others worry it could inadvertently reduce their overall ride volume if the feature segments the market in unexpected ways. A few have noted on X that the preference system could create perverse incentives — like surge pricing dynamics shifting if women drivers cluster around preference-enabled rides during peak hours.

Real-world impact will take time to measure. Uber says it plans to share more data on adoption rates and driver recruitment effects in the coming months. For now, the company is leaning heavily into the narrative that this is about empowerment and choice.

For industry professionals, the takeaway is clear. Gender-preference matching is now a mainstream feature in American ridesharing, not an experiment. How it performs at national scale — legally, operationally, and culturally — will likely shape product decisions across the entire on-demand transportation sector for years to come.



from WebProNews https://ift.tt/wCZILKE

AI Assistants Are Rewriting the Rules of Cybersecurity — and Defenders Are Scrambling to Keep Up

The security goalposts haven’t just moved. They’ve been launched into orbit.

A detailed analysis from Krebs on Security lays out how AI-powered assistants — the kind now embedded in enterprise workflows, consumer devices, and developer toolchains — are fundamentally reshaping the threat surface that security teams must defend. The implications are significant, and the industry’s response so far has been uneven at best.

Here’s the core problem: AI assistants don’t just process data. They act on it. They compose emails, write code, query databases, summarize confidential documents, and increasingly make decisions with minimal human oversight. Every one of those capabilities represents a potential attack vector that didn’t exist three years ago. Brian Krebs argues that the speed at which these tools have been deployed has far outpaced the development of security frameworks designed to contain them.

That gap is where the trouble lives.

The most pressing concern is prompt injection — a class of attack where adversaries craft inputs designed to manipulate an AI assistant into performing unauthorized actions. Security researchers have been sounding alarms about this for over two years now, but the problem has only grown more acute as AI assistants gain deeper access to enterprise systems. According to Krebs, attackers are now chaining prompt injection techniques with social engineering to trick AI assistants into exfiltrating sensitive data, modifying records, or bypassing access controls entirely. And because these assistants often operate with the permissions of the user who deployed them, a single compromised interaction can cascade across an organization’s internal infrastructure.

It’s not theoretical. Krebs cites multiple incidents in which AI assistants were manipulated into leaking proprietary information through carefully constructed prompts embedded in seemingly benign documents — PDFs, emails, even calendar invites. The attack surface is vast because AI assistants are designed to be helpful, which means they’re inherently inclined to follow instructions. Distinguishing between legitimate user intent and adversarial manipulation remains an unsolved problem.
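A toy screening pass illustrates why this is so hard to solve: the "attack" is just text, indistinguishable in kind from the content the assistant is supposed to read. The patterns below are illustrative assumptions, and no regex list comes close to solving the problem the researchers describe; this only sketches the shape of one defensive layer.

```python
import re

# Heuristic patterns that often signal instructions injected into
# documents an assistant will ingest. Purely illustrative: real
# attacks are routinely paraphrased past lists like this one.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"send .* to http",
]

def flag_untrusted_text(text: str) -> list:
    """Return the patterns that matched, so a caller can quarantine
    the document or strip the offending span before ingestion."""
    hits = []
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            hits.append(pattern)
    return hits
```

The limitation is the point: a filter like this catches only the clumsiest injections, which is why the article calls intent-versus-manipulation an unsolved problem.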

So what are vendors doing about it? Not enough, apparently.

Krebs reports that major AI providers have implemented guardrails — content filters, system-level instructions that attempt to override malicious prompts, and sandboxing techniques that limit what assistants can access. But researchers have repeatedly demonstrated that these defenses are brittle. Wired and Ars Technica have both covered how red teams at academic institutions and independent security firms have bypassed these protections with alarming consistency. The fundamental architecture of large language models makes them susceptible to adversarial inputs in ways that traditional software isn’t, and bolting security measures onto the outside hasn’t proven sufficient.

There’s a second dimension to this that’s equally concerning: data governance. AI assistants are voracious consumers of context. They ingest meeting transcripts, Slack messages, email threads, code repositories, and internal wikis to generate useful responses. But that means they can also surface information that specific users shouldn’t have access to, effectively flattening organizational access controls. Krebs highlights cases where AI assistants exposed salary data, M&A deliberations, and unreleased product details to employees who had no business seeing them — not because of a hack, but because the assistant was doing exactly what it was designed to do.

The permissions model is broken. Or more precisely, it was never built for this.

Traditional access control assumes that a user queries a specific system and receives information gated by their role. AI assistants collapse that model by sitting on top of multiple systems simultaneously, aggregating and synthesizing data across boundaries that were previously enforced by the simple friction of having to log into separate tools. Removing that friction was the whole point. But it also removed the implicit security that friction provided.

Enterprise security teams are now being forced to rethink identity and access management from the ground up. Some organizations have started treating AI assistants as distinct identities within their security architectures — entities that need their own permission sets, audit trails, and behavioral monitoring. It’s a sensible approach, but implementation is complex and the tooling is still immature.
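A sketch of what that identity-centric approach can look like, with illustrative names rather than any real IAM product's API: the assistant is its own principal, and its effective access is the intersection of its own grants and the calling user's, never the union.

```python
from dataclasses import dataclass, field

@dataclass
class AssistantIdentity:
    """Model the assistant as a distinct principal with an explicit
    grant set and an audit trail, instead of letting it inherit every
    permission of whoever invoked it. Names here are hypothetical."""
    name: str
    grants: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def effective(self, user_grants: set) -> set:
        # Least privilege: the assistant may act only where BOTH the
        # calling user and the assistant identity are authorized.
        return self.grants & user_grants

    def access(self, user: str, user_grants: set, resource: str) -> bool:
        allowed = resource in self.effective(user_grants)
        self.audit_log.append((user, resource, allowed))
        return allowed
```

The intersection rule is what repairs the collapsed model described above: even if the assistant sits on top of HR data, a user without HR access cannot pull salary figures through it, and every attempt is logged either way.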

And then there’s the supply chain angle. AI assistants increasingly rely on third-party plugins, APIs, and extensions to perform tasks. Each integration point is a potential vulnerability. Krebs notes that attackers have begun targeting these connectors specifically, compromising a plugin to feed poisoned data into an AI assistant’s context window. The assistant then acts on that corrupted information as though it were legitimate. Classic supply chain attack logic, adapted for the AI era.

The regulatory picture isn’t helping. The EU AI Act addresses some of these concerns in broad strokes, but enforcement mechanisms remain vague and implementation timelines are long. In the U.S., there’s still no comprehensive federal framework governing AI security in enterprise settings. Companies are largely self-regulating, which means the quality of defenses varies wildly depending on organizational maturity and budget.

What should security leaders take away from this? First, audit every AI assistant deployment in your organization — including shadow deployments that individual teams may have spun up without IT approval. Second, assume that prompt injection is a when-not-if scenario and build detection and response capabilities accordingly. Third, revisit your data classification and access control policies with the understanding that AI assistants will try to bridge every silo you’ve built.

The technology is genuinely useful. That’s what makes this hard. Nobody wants to go back to manually summarizing 200-page compliance documents or writing boilerplate code from scratch. But the security implications of giving an AI assistant broad access to corporate systems are profound, and the industry hasn’t yet developed the tools or frameworks to manage them adequately.

Krebs puts it bluntly: the security community is playing catch-up, and the gap is widening. Until AI providers and enterprise security teams find a way to close it, every organization running these tools is accepting a level of risk that most haven’t fully quantified.

That’s the uncomfortable truth. And it’s not going away.



from WebProNews https://ift.tt/xgWAkD8

Monday, 9 March 2026

When the Government Controls AI: The Constitutional Crisis Nobody in Washington Wants to Debate

The U.S. government isn’t just buying AI tools. It’s building the infrastructure for a surveillance apparatus that would make the authors of the Fourth Amendment reach for their muskets.

OpenAI’s accelerating partnership with the Department of Defense has moved from theoretical debate to operational reality. According to The New Stack, the company that once pledged never to develop military applications has reversed course, working directly with defense agencies on applications that include cybersecurity operations and processing of sensitive government data. The pivot wasn’t subtle. OpenAI quietly updated its usage policies to remove prohibitions on military use, then began courting Pentagon contracts with the enthusiasm of a Beltway defense contractor.

This isn’t about national security in the abstract. It’s about what happens when the most powerful pattern-recognition and data-processing systems ever built are handed to agencies with a documented history of constitutional overreach.

Maya Sulkin, posting on X, raised pointed concerns about the trajectory of government AI adoption, highlighting how rapidly these technologies are being deployed without meaningful public debate or legislative guardrails. The concern resonates far beyond tech policy circles. When AI systems capable of analyzing billions of data points per second are deployed by intelligence and law enforcement agencies, the question isn’t whether they’ll be used for mass surveillance. The question is what’s stopping them.

The answer, right now, is almost nothing.

Consider the precedent. The NSA’s bulk metadata collection program, revealed by Edward Snowden in 2013, operated for years under secret legal interpretations that the FISA court rubber-stamped. That program was primitive by today’s standards — it collected phone records. Modern AI systems can correlate phone records with location data, facial recognition feeds, social media activity, financial transactions, and communication patterns simultaneously. The surveillance potential isn’t additive. It’s multiplicative.

And the legal framework hasn’t kept pace. Section 702 of the Foreign Intelligence Surveillance Act, reauthorized in 2024, still permits warrantless collection of communications data on foreign targets — but “incidental” collection of American citizens’ data continues at scale. Layer AI-powered analysis on top of that collection, and you don’t need to target Americans directly. The system can identify, profile, and track them as a byproduct of its normal operations.

Not Divided, an organization focused on protecting democratic institutions from technological overreach, has been documenting how AI deployment by government agencies threatens constitutional protections. Their research points to a fundamental asymmetry: the government’s capacity to collect and process data about citizens is growing exponentially, while citizens’ ability to understand, challenge, or even know about that collection remains static. No transparency. No accountability. No meaningful consent.

The Fourth Amendment’s protection against unreasonable searches was written for a world where searching someone’s papers required physically entering their home. The Supreme Court has updated that understanding — the 2018 Carpenter v. United States decision held that accessing historical cell-site location records constitutes a search requiring a warrant. But Carpenter addressed a single data type from a single source. It said nothing about AI systems that can fuse dozens of data streams into comprehensive behavioral profiles without any individual search ever being conducted.

That’s the gap. And the government is driving a fleet of trucks through it.

The defense establishment’s AI ambitions go far beyond battlefield applications

The Department of Homeland Security has deployed AI-powered systems at the border that use facial recognition, behavioral analysis, and social media monitoring. Immigration and Customs Enforcement has contracted with data brokers who aggregate location data from commercial apps — data that would require a warrant to collect directly but can be purchased on the open market. The FBI’s use of facial recognition technology has been criticized by the Government Accountability Office for lacking adequate privacy safeguards. These aren’t hypothetical risks. They’re current operations.

OpenAI’s entry into this space adds a new dimension. Large language models and multimodal AI systems don’t just process structured data — they can interpret unstructured text, analyze images, understand context, and generate inferences that would take human analysts months to produce. When The New Stack reported on the company’s defense partnerships, the framing centered on cybersecurity and administrative efficiency. But the same models that can summarize intelligence reports can also analyze intercepted communications at population scale. The same computer vision systems that can identify military equipment in satellite imagery can identify individuals in street-level surveillance footage.

The technology is dual-use by nature. The intentions of today’s operators don’t constrain the applications of tomorrow’s.

Some will argue that democratic oversight provides sufficient protection. It doesn’t. Congressional intelligence committees have repeatedly demonstrated they lack the technical expertise to evaluate AI capabilities and the political will to restrict intelligence agencies. The Church Committee reforms of the 1970s, which created the modern oversight framework after revelations of CIA and FBI domestic surveillance programs, were a response to abuses that had already occurred. We’re watching the conditions for similar abuses being assembled in real time, and the response from Congress has been a handful of hearings and zero binding legislation.

The European Union’s AI Act, whatever its flaws, at least attempts to categorize AI applications by risk level and impose restrictions on the most dangerous uses, including real-time biometric surveillance in public spaces. The United States has no equivalent federal framework. Executive orders on AI safety issued by the Biden administration were largely voluntary and have been rolled back. State-level efforts are fragmented and inconsistent.

So where does that leave American citizens?

Exposed. The combination of commercially available personal data, government surveillance authorities that predate the AI era, and AI systems capable of synthesizing both into detailed individual profiles creates conditions the Constitution’s framers could not have anticipated but clearly would have opposed. The right to be left alone — what Justice Brandeis called “the most comprehensive of rights, and the right most valued by a free people” — is being eroded not by a single dramatic act but by the steady accumulation of technical capabilities deployed without democratic authorization.

The tech industry bears responsibility here too. OpenAI’s shift from “we won’t work with the military” to active defense contracting happened without shareholder votes, public referenda, or legislative approval. It was a business decision. The company determined that government contracts were too lucrative and strategically important to forgo, and it adjusted its principles accordingly. Other AI companies — Palantir, Anduril, Scale AI — never pretended to have such reservations. But OpenAI’s reversal matters precisely because it demonstrates that voluntary ethical commitments in the AI industry are worth exactly as much as the paper they’re not printed on.

Groups like Not Divided are pushing for structural reforms: mandatory algorithmic impact assessments before government deployment, warrant requirements for AI-assisted surveillance, independent technical audits of government AI systems, and sunset clauses that force periodic reauthorization. These aren’t radical proposals. They’re the minimum conditions for democratic governance of powerful technologies.

But they face opposition from an intelligence community that views oversight as an obstacle, a defense industry that views AI as its next major revenue stream, and a political class that views “tough on security” as an electoral imperative. The incentives all point in one direction. More collection. More analysis. More power concentrated in agencies that operate largely in secret.

The constitutional question isn’t complicated. The government should not be able to construct detailed profiles of citizens’ movements, associations, communications, and beliefs without individualized suspicion and judicial authorization. AI makes it technically trivial to do exactly that. The law hasn’t caught up. And every month that passes without action makes the gap harder to close.

This isn’t a technology problem. It’s a democracy problem. And right now, democracy is losing.



from WebProNews https://ift.tt/Ub8AIvc