Friday, 20 February 2026

The Productivity Parasite: The Hidden Cost of Childhood Illnesses on the Workforce

We often track corporate productivity killers in broad strokes. We analyze the impact of supply chain disruptions, the cost of software downtime, and the billions lost to flu season. HR departments have robust protocols for maternity leave and long-term disability. But there is a silent, micro-level friction that bleeds efficiency from companies every single day, and it rarely shows up in a quarterly report.

It happens at 10:00 AM on a Tuesday. A key project manager gets a call from the school nurse. It isn’t a fever, and it isn’t a broken arm. It’s head lice. In an instant, that employee is gone. They aren’t just leaving to pick up a child; they are entering a multi-day vortex of laundry, combing, anxiety, and sleepless nights. For the modern business, these minor childhood ailments are a major operational leak. They cause unscheduled absenteeism that disrupts workflows and forces teams to scramble.

In the past, this might have been viewed strictly as a family issue. But in an era where workforce optimization is the goal, savvy professionals are realizing that the fastest way to solve the problem isn’t a drugstore shampoo—it’s professional speed. Just as we outsource IT or payroll, parents are finding that a one-hour visit to a professional lice clinic is the difference between missing an afternoon and missing an entire week of work.

Here is why the minor bugs of childhood are actually a significant drag on business performance, and why efficiency demands a modern solution.

1. The Math of the Three-Day Window

When a child is sent home with lice, the parent doesn’t just lose that afternoon. The traditional at-home treatment protocol creates a cascade of lost time:

  • Day 1 (Discovery): The employee leaves work abruptly. They spend the rest of the day researching what to do and buying over-the-counter products.
  • Day 2 (The Labor): Treatments often require hours of combing. Bedding must be washed. The mental load is entirely focused on the household, not the quarterly review.
  • Day 3 (The Failure): This is the hidden killer. Most over-the-counter treatments are less effective than they used to be due to genetic resistance (more on that later). So, the parent sends the kid back to school, only to get called again two days later because the infestation wasn’t cleared.

This cycle turns a minor nuisance into a recurring absence. For an employer, having a key staff member distracted or absent intermittently for two weeks is often worse than having them out for three straight days with the flu. It breaks the rhythm of collaboration.

2. Presenteeism and the Distracted Desk

Even if the employee physically shows up to work the next day, are they actually there? “Presenteeism” is the phenomenon of employees being on the clock but functioning at partial capacity due to illness or stress.

There is a unique stigma and anxiety attached to lice. A parent sitting in a board meeting isn’t thinking about the KPI slides; they are thinking, “Did I get all the nits? Is my head itching? Did I give this to my coworkers?” This mental fog destroys productivity. The employee is texting their spouse, Googling remedies on their second monitor, and operating in a state of high-stress distraction. The physical body is in the chair, but the creative and strategic mind is at home battling bugs.

3. The Economic Impact of Super Lice

From a business perspective, using outdated tools is a waste of capital. The same logic applies to healthcare. For decades, the standard solution was a chemical shampoo from the local pharmacy. However, insects evolve. Today, the majority of lice in the United States are resistant to the active ingredients (pyrethroids) in those box kits. These are colloquially known as super lice.

When an employee relies on these outdated methods, they are essentially trying to fix a server crash with a reboot that doesn’t work. They use the product, think the problem is solved, return to work, and then get hit with a recurrence a week later. This extends the crisis mode indefinitely. The shift toward professional treatment—using heated air technology that dehydrates the bugs and eggs—is effectively a technology upgrade. It solves the problem in one session, guaranteeing that the employee returns to full productivity immediately. It transforms a chronic issue into a singular event.

4. The Ripple Effect on Teams

In a highly collaborative office, one person’s absence is rarely an isolated event. If the Director of Marketing has to leave suddenly because their twins were sent home from school, the creative review gets pushed. The graphic designers are left waiting for approval. The ad buy is delayed.

This ripple effect breeds frustration among the staff left to pick up the slack: colleagues covering for the absent parent add that load to their own burnout. By the time the parent returns, the team dynamic is frayed, and everyone is playing catch-up.

5. Why Outsourcing the Cure is a Business Strategy

High-performing executives rarely mow their own lawns or do their own taxes. They understand the concept of the highest and best use of time. They pay experts to handle maintenance tasks so they can focus on high-value work.

Healthcare for minor ailments should be viewed through the same lens. Spending 20 hours over a weekend manually combing hair is a poor use of a professional’s time. It leads to exhaustion and resentment. Opting for a professional service is an efficiency decision. It costs money, yes, but it buys back time. It buys back sanity.

From an HR perspective, creating a culture where employees feel supported in making these fast-fix decisions—rather than feeling pressured to “tough it out” with cheaper, slower methods—pays dividends. When an employee knows they can solve a family crisis in an hour and be back online the next morning, their loyalty increases, and the business continuity remains intact.

Control the Chaos

We cannot prevent the random chaos of childhood. Kids will get sick, they will break things, and they will bring home unwanted guests from the classroom. However, we can control the response.

In the business world, we value speed, accuracy, and reliability. We should apply those same standards to how we manage the “home front” challenges that spill over into the workday. Treating a lice outbreak not as a shameful secret, but as a logistical problem to be solved with professional technology, is the smartest move a working parent can make. It keeps the “parasite” from feeding on the company’s bottom line.



from WebProNews https://ift.tt/n2CR9tp

Perplexity AI Bets Its Future on Advertising — and Google Should Be Watching Closely

For years, the implicit bargain of internet search has been straightforward: users type queries, receive answers, and tolerate advertisements woven between the results. Google built a $300 billion annual advertising empire on this arrangement. Now, Perplexity AI — the venture-backed search startup valued at over $9 billion — is attempting to rewrite that contract, inserting ads into AI-generated answers while promising something Google never quite managed: transparency about where the money comes from and how it shapes what users see.

The company launched its advertising program in late 2024 with a handful of brand partners, and has since expanded the effort significantly. According to Wired, Perplexity now displays “sponsored follow-up questions” alongside its AI-generated responses, a format that lets advertisers suggest the next thing a user might want to ask. It is a subtle but significant departure from the traditional search ad model, where blue links and banner placements dominate. Instead of interrupting the user’s flow, Perplexity is attempting to embed commercial interests directly into the conversational thread of inquiry.

A New Kind of Search Ad — Or an Old One in Disguise?

Perplexity’s ad format works like this: when a user asks a question, the AI engine synthesizes an answer from multiple web sources, complete with citations. Below or beside that answer, a sponsored question appears — labeled as such — that, when clicked, leads to another AI-generated response shaped by the advertiser’s messaging. The company has described this as a way to keep ads “relevant” without degrading the quality of the core answer. Dmitry Shevelenko, Perplexity’s chief business officer, has said the company is committed to never letting advertising influence the actual answers the AI produces.

That promise is central to Perplexity’s pitch, but skeptics abound. As Wired reported, the concern among industry observers is that once advertising revenue becomes a primary business model, the pressure to satisfy sponsors will inevitably shape editorial and algorithmic decisions — even if that influence is indirect. The history of digital media is littered with companies that started with noble intentions about separating commercial and editorial interests, only to blur the lines as growth demands intensified. Google itself began with a famous internal memo arguing that advertising-funded search engines would be “inherently biased,” a warning its founders eventually set aside as the company scaled.

The Economics Behind Perplexity’s Advertising Push

The financial logic driving Perplexity toward advertising is not hard to understand. Running large language models at scale is enormously expensive. Each query processed by an AI engine costs significantly more than a traditional search query, which largely involves matching keywords to pre-indexed web pages. Perplexity has a subscription tier — Perplexity Pro, priced at $20 per month — but subscription revenue alone is unlikely to cover the computational costs of serving millions of users. Advertising offers a path to unit economics that actually work.

The company has reportedly brought on major advertisers including brands in the technology, finance, and consumer products sectors. While Perplexity has not disclosed specific revenue figures, the startup has been aggressive in courting ad buyers, positioning itself as an alternative to Google that offers higher engagement rates. The argument is that users who interact with AI-generated answers are more attentive and intentional than users scrolling through a page of ten blue links, making each ad impression more valuable. Early data shared by the company with prospective advertisers reportedly supports this claim, though independent verification remains limited.

Google’s Response and the Broader Competitive Picture

Google has not been sitting idle. The search giant has been integrating its own AI-generated summaries — called AI Overviews — into the top of search results pages, a move that has itself drawn criticism from publishers who worry about traffic loss. Google has also begun experimenting with ads within these AI Overviews, testing formats that place sponsored content inside the AI-generated answer box. The parallels with Perplexity’s approach are striking, and suggest that regardless of which company leads the way, the future of search advertising will involve commercial messages embedded in AI-synthesized responses rather than displayed alongside organic links.

But the competitive dynamics are asymmetric. Google processes roughly 8.5 billion searches per day and controls approximately 90% of the global search market. Perplexity, by contrast, handles a tiny fraction of that volume — estimates suggest tens of millions of queries per month, a rounding error by Google’s standards. What Perplexity lacks in scale, however, it compensates for with agility and a user base that skews heavily toward early adopters, professionals, and researchers — demographics that advertisers prize. The company’s pitch to Madison Avenue is essentially that its users are higher-quality leads, even if there are far fewer of them.

Publisher Tensions and the Question of Attribution

Perplexity’s relationship with publishers has been contentious from the start. The company’s AI engine synthesizes answers by pulling information from across the web, raising questions about whether it is giving adequate credit — and traffic — to the original sources. Several major publishers, including The New York Times and Forbes, have raised objections, with some accusing Perplexity of effectively scraping their content to generate answers that keep users on Perplexity’s platform rather than sending them to the original articles.

In response, Perplexity introduced a revenue-sharing program for publishers, offering a cut of advertising revenue generated from queries that cite their content. As Wired noted, the details of this arrangement remain opaque, and many publishers have expressed skepticism about whether the payments will be meaningful. The fundamental tension is structural: Perplexity’s value proposition to users is that they don’t have to click through to source websites to get their answers. Every successful Perplexity query is, in some sense, a visit that a publisher’s website did not receive. Revenue sharing may soften the blow, but it does not resolve the underlying conflict.

What Advertisers Are Actually Buying

For advertisers, the appeal of Perplexity’s format lies in context and intent. Traditional search ads work because users have expressed a specific need by typing a query. Perplexity takes this a step further: because the AI generates a detailed, conversational answer, the system has a richer understanding of what the user is actually looking for. A sponsored follow-up question can be tailored not just to the keywords in the original query but to the full context of the conversation. This represents a genuinely different kind of targeting — one that is less about matching keywords and more about understanding meaning.

The risk for advertisers, however, is brand safety. When an AI generates answers in real time, there is always the possibility that a sponsored question will appear alongside content that is inaccurate, controversial, or otherwise problematic. Perplexity has implemented content moderation systems to mitigate this, but the challenge is inherent to the format. Unlike a traditional search results page, where ads appear in clearly delineated spaces, Perplexity’s ads are woven into the conversational flow, making any association with problematic content feel more intimate and potentially more damaging to the brand.

The Regulatory Shadow Over AI-Powered Advertising

Regulators in both the United States and the European Union have begun paying closer attention to how AI systems present information — and how commercial interests might distort that presentation. The Federal Trade Commission has signaled interest in ensuring that AI-generated recommendations and answers are clearly distinguished from advertising, and the EU’s AI Act includes provisions that could affect how companies like Perplexity disclose the role of advertising in shaping AI outputs. Perplexity’s decision to clearly label its sponsored questions as advertising may give it a head start in regulatory compliance, but the rules are still being written, and the company’s model could face new constraints as governments catch up with the technology.

There is also the question of consumer trust. Perplexity has built its early reputation on providing direct, well-sourced answers without the clutter that characterizes modern Google search results. Introducing advertising — no matter how tastefully — risks eroding that trust. The company appears aware of this danger; its executives have repeatedly emphasized that ads will never influence the core answers, and that the sponsored follow-up questions will always be transparently labeled. Whether users believe those assurances over time will depend on whether the company’s actions match its rhetoric.

A Test Case for the Future of AI Monetization

Perplexity’s advertising experiment matters beyond the company itself. It is, in effect, a test case for how the entire class of AI-powered information tools — from ChatGPT to Claude to Gemini — might eventually make money. OpenAI has so far relied primarily on subscriptions and API licensing, but the pressure to find additional revenue streams is mounting as costs escalate and competition intensifies. If Perplexity demonstrates that advertising can coexist with high-quality AI answers without alienating users, it will provide a template that others will almost certainly follow.

The stakes are high precisely because the model is untested at scale. Google’s advertising machine was refined over two decades, through countless iterations and billions of data points. Perplexity is trying to build something comparable in a fraction of the time, with a fundamentally different technology stack and user experience. The outcome will tell us a great deal about whether AI-powered search can sustain itself as a business — or whether, like so many ambitious startups before it, Perplexity will find that the gap between a compelling product and a viable business is wider than it appears.



from WebProNews https://ift.tt/7SeykVF

Thursday, 19 February 2026

Anthropic’s Claude Code Faces a Legal Tightrope: What Enterprises Need to Know About AI-Generated Code Compliance

When Anthropic quietly published a detailed legal and compliance guide for its Claude Code product, it sent a clear signal to the enterprise software market: the era of casual AI-assisted coding is over, and the compliance questions are only getting harder. The document, hosted on Anthropic’s official Claude Code documentation site, lays out a surprisingly candid framework for how organizations should think about intellectual property, licensing, data privacy, and regulatory risk when deploying AI agents that write and execute code autonomously.

For industry insiders who have watched the generative AI space mature from novelty to necessity, the publication of this compliance framework marks a turning point. It acknowledges what many corporate legal departments have been whispering for months: that AI-generated code introduces a distinct category of legal exposure that existing software governance frameworks were never designed to handle.

The IP Ownership Question That Won’t Go Away

At the heart of Anthropic’s compliance documentation is a frank treatment of intellectual property ownership — the single most contested legal question in generative AI today. The guide makes clear that code generated by Claude Code is produced by an AI system trained on vast datasets, and that organizations should consult their own legal counsel regarding ownership rights over AI-generated outputs. This is not a trivial disclaimer. It reflects the unsettled state of copyright law as it applies to machine-generated works across multiple jurisdictions.

In the United States, the Copyright Office has repeatedly signaled that works produced entirely by AI without meaningful human authorship may not qualify for copyright protection. A series of rulings in 2023 and 2024 reinforced this position, creating a gray zone for enterprises that rely on AI-generated code as part of their proprietary software stack. Anthropic’s documentation implicitly acknowledges this uncertainty by urging users to maintain human oversight and review of all generated code — a practice that could strengthen claims of human authorship in the event of a dispute.

Licensing Contamination: The Hidden Risk in Every AI Code Suggestion

Perhaps the most technically significant section of the compliance guide deals with open-source licensing risks. Claude Code, like all large language models trained on publicly available code repositories, has been exposed to code governed by a wide range of open-source licenses — from permissive licenses like MIT and Apache 2.0 to copyleft licenses like GPL and AGPL. The concern is straightforward: if an AI model reproduces or closely paraphrases code that is subject to a copyleft license, the organization using that output could inadvertently trigger license obligations that require disclosure of proprietary source code.

Anthropic’s guidance recommends that enterprises implement code scanning and license detection tools as part of their development pipeline when using Claude Code. This recommendation aligns with practices already standard at large technology firms but represents a new compliance burden for smaller organizations and startups that may be adopting AI coding tools without the infrastructure to detect licensing contamination. The documentation specifically advises users to review generated code for potential matches with known open-source projects before incorporating it into production systems.
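A pre-merge license screen of the kind described above can be sketched in a few lines. Everything in this example is illustrative rather than drawn from Anthropic's documentation: the marker strings, the `scan_generated_code` and `gate` helpers, and the choice of which license families count as copyleft are all assumptions. Production pipelines would use dedicated license-detection tooling rather than naive string matching.

```python
# Illustrative sketch of a pre-merge license screen for AI-generated code.
# Marker strings and license groupings are simplified assumptions, not a
# complete or authoritative detection method.
LICENSE_MARKERS = {
    "GPL": ["GNU General Public License", "GPL-3.0", "GPL-2.0"],
    "AGPL": ["GNU Affero", "AGPL-3.0"],
    "Apache-2.0": ["Apache License, Version 2.0"],
    "MIT": ["Permission is hereby granted, free of charge"],
}

# License families whose terms could require disclosure of proprietary source.
COPYLEFT = {"GPL", "AGPL"}


def scan_generated_code(text: str) -> list[str]:
    """Return the license families whose marker strings appear in the text."""
    return [
        family
        for family, markers in LICENSE_MARKERS.items()
        if any(marker in text for marker in markers)
    ]


def gate(text: str) -> bool:
    """Allow the merge only if no copyleft marker was detected."""
    return not (set(scan_generated_code(text)) & COPYLEFT)
```

In a real pipeline, a check like `gate()` would run as a CI step on every AI-generated diff, with flagged snippets routed to legal review rather than silently blocked.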

Data Privacy and the Confidentiality of Your Codebase

The compliance guide also addresses a concern that has become a dealbreaker for many enterprise procurement teams: what happens to the proprietary code and data that Claude Code accesses during operation. Anthropic states that Claude Code operates with access to the user’s local development environment, meaning it can read files, execute commands, and interact with codebases directly. For organizations working with regulated data — financial records, health information, defense-related intellectual property — this access model raises immediate questions about data handling, retention, and potential exposure.

Anthropic’s documentation outlines that, under its standard terms, inputs provided to Claude Code in certain configurations may be used to improve the model unless users opt out or operate under an enterprise agreement with different data-use provisions. This distinction between consumer-tier and enterprise-tier data handling is critical. Organizations subject to regulations like GDPR, HIPAA, or ITAR need to understand precisely which data flows to Anthropic’s servers and which remains local. The compliance guide encourages enterprises to work with Anthropic’s sales team to establish data processing agreements that meet their specific regulatory requirements.

Autonomous Agents and the Accountability Gap

One of the more forward-looking sections of the compliance documentation addresses the use of Claude Code as an autonomous agent — a mode in which the AI can execute multi-step coding tasks with minimal human intervention. This capability, while powerful, introduces what legal scholars have begun calling the “accountability gap”: when an AI agent introduces a security vulnerability, violates a compliance rule, or produces code that infringes on a third party’s rights, the question of who bears responsibility becomes genuinely complex.

Anthropic’s guidance on this point is measured but clear. The company positions Claude Code as a tool, not a decision-maker, and places the burden of oversight squarely on the human operators and organizations deploying it. The documentation recommends establishing clear approval workflows, limiting the scope of autonomous operations, and maintaining audit logs of all actions taken by the AI agent. These recommendations echo the emerging consensus among AI governance professionals that human-in-the-loop controls are not optional — they are a legal and operational necessity.
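The audit-logging recommendation can be illustrated with a minimal wrapper. The `audited` decorator, the record fields, and the actor name below are hypothetical, not part of Claude Code's API; they simply sketch the kind of per-action trail the guidance describes, where every operation an agent performs leaves a structured, reviewable record.

```python
import json
import time
from typing import Any, Callable

def audited(log: list, actor: str = "claude-code") -> Callable:
    """Decorator sketch: append a structured audit record for each agent action.

    The record schema (timestamp, actor, action name, arguments, status) is an
    illustrative assumption, not a documented Claude Code format.
    """
    def wrap(fn: Callable) -> Callable:
        def inner(*args: Any, **kwargs: Any) -> Any:
            record = {
                "ts": time.time(),
                "actor": actor,
                "action": fn.__name__,
                "args": repr(args),
            }
            try:
                result = fn(*args, **kwargs)
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = f"error: {exc}"
                raise
            finally:
                # Append even on failure, so the trail records what was attempted.
                log.append(json.dumps(record))
        return inner
    return wrap
```

Wrapping each tool an agent can invoke (file writes, shell commands, API calls) in a layer like this yields exactly the audit trail compliance teams would review after an incident.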

Export Controls and Sanctions: An Underappreciated Dimension

A less discussed but significant portion of the compliance framework addresses export controls and international sanctions. AI-generated code, particularly code that implements encryption algorithms, advanced computational methods, or dual-use technologies, may be subject to export control regulations under the U.S. Export Administration Regulations (EAR) or the International Traffic in Arms Regulations (ITAR). Anthropic’s documentation flags this as an area requiring careful attention, particularly for organizations with international operations or customers in sanctioned jurisdictions.

This is not a theoretical concern. In recent months, the U.S. government has tightened restrictions on the export of advanced AI technologies and related components. Organizations using Claude Code to develop software that will be deployed internationally need to ensure that their export compliance programs account for the AI-generated components of their products. The compliance guide does not provide a comprehensive export control analysis — that would be impossible given the diversity of use cases — but it does flag the issue prominently and recommends consultation with trade compliance counsel.

The Broader Industry Context: A Race to Set Standards

Anthropic’s publication of this compliance framework does not exist in a vacuum. Across the industry, AI coding tool providers are grappling with the same set of legal and regulatory questions. GitHub Copilot, powered by OpenAI’s models, has faced its own legal challenges, including a class-action lawsuit alleging that the tool reproduces copyrighted code without proper attribution. Microsoft and GitHub have responded by introducing features like code reference filters and license detection, but the underlying legal questions remain unresolved.

Google’s Gemini Code Assist and Amazon’s CodeWhisperer have similarly published their own terms of service and compliance guidelines, each attempting to strike a balance between usability and legal protection. What distinguishes Anthropic’s approach is the relative specificity and transparency of its compliance documentation. Rather than burying legal disclaimers in dense terms of service, the company has created a standalone resource that directly addresses the concerns of enterprise legal and compliance teams. This approach may reflect Anthropic’s broader positioning as a safety-focused AI company, but it also serves a practical commercial purpose: reducing friction in enterprise sales cycles where legal review is often the longest pole in the tent.

What Enterprise Buyers Should Be Asking Right Now

For organizations evaluating Claude Code or any AI coding assistant, the Anthropic compliance guide provides a useful checklist of questions that should be part of every procurement review. First, what are the data retention and usage policies, and do they align with the organization’s regulatory obligations? Second, what controls exist to prevent the reproduction of copyleft-licensed code in proprietary projects? Third, how does the tool handle sensitive or classified information, and what contractual protections are available? Fourth, what audit and logging capabilities does the tool provide to support compliance monitoring?

These are not questions that can be answered by a marketing deck or a product demo. They require detailed legal and technical analysis, and they need to be revisited as both the technology and the regulatory environment continue to evolve. Anthropic’s compliance documentation, available at code.claude.com, is a starting point — but only a starting point. The companies that get this right will be those that treat AI code generation not as a simple productivity tool but as a new category of technology with its own distinct risk profile, requiring its own distinct governance framework.

The legal infrastructure around AI-generated code is being built in real time, and the organizations that engage with these questions now — rather than after an incident forces their hand — will be far better positioned to capture the productivity benefits of AI coding tools without exposing themselves to unacceptable legal risk. Anthropic, to its credit, has made the first move toward transparency. The question is whether the rest of the industry will follow, and whether regulators will accept self-governance or demand something more prescriptive.



from WebProNews https://ift.tt/zURFDQN

The AI Productivity Paradox: Why Billions in Artificial Intelligence Spending Isn’t Showing Up in the Bottom Line

American corporations have poured hundreds of billions of dollars into artificial intelligence over the past three years, yet a growing body of evidence suggests that the promised productivity bonanza remains stubbornly elusive. A new working paper from the National Bureau of Economic Research and a mounting chorus of CEO frustrations are raising a familiar specter from economic history: the so-called productivity paradox, first articulated by Nobel laureate Robert Solow in 1987 when he quipped that computers could be seen “everywhere but in the productivity statistics.”

Nearly four decades later, the same conundrum appears to be playing out with generative AI. Despite breathless forecasts from consulting firms and technology vendors projecting trillions in economic value, the hard numbers tell a more complicated story—one that should give pause to boards, investors, and policymakers betting heavily on AI as the engine of the next great productivity surge.

A New Paper Puts Numbers to the Disconnect

A working paper published by the National Bureau of Economic Research (NBER) offers one of the most rigorous examinations to date of how AI adoption is translating—or failing to translate—into measurable productivity gains at the firm level. The researchers analyzed data across a broad cross-section of industries and firm sizes, tracking both the intensity of AI investment and subsequent changes in output per worker, revenue efficiency, and total factor productivity.

The findings are sobering. While firms that adopted AI tools reported improvements in certain narrow task-level metrics—such as the speed of generating first drafts of documents or the volume of customer service inquiries handled per hour—these micro-level gains have not aggregated into statistically significant improvements in firm-wide productivity. The paper identifies several structural reasons for this gap, including the substantial overhead costs of implementation, the reallocation of worker time toward AI supervision and error correction, and what the authors describe as “productivity displacement”—the tendency for efficiency gains in one area to be offset by new inefficiencies elsewhere in the organization.

CEOs Voice Growing Frustration Over Returns

The academic findings echo a sentiment that has been building in corporate boardrooms. As Fortune reported in February, a growing number of chief executives are privately expressing disappointment with the returns on their AI investments. The publication cited a survey of Fortune 500 CEOs in which a majority acknowledged that their companies had yet to see meaningful productivity improvements from generative AI deployments, even as spending on the technology continued to accelerate.

One CEO quoted in the Fortune piece described the situation as “a lot of demos and not a lot of P&L impact.” Another noted that while individual employees were enthusiastic about tools like ChatGPT and Microsoft Copilot, the organization as a whole had not figured out how to convert that enthusiasm into measurable business outcomes. The frustration is compounded by the fact that AI spending is not discretionary for many firms—competitive pressure and investor expectations have made it feel mandatory, regardless of near-term returns.

The Solow Paradox Returns, With a Twist

The parallels to the information technology boom of the 1980s and 1990s are impossible to ignore. When Solow made his famous observation in 1987, businesses were spending heavily on personal computers, networking equipment, and enterprise software, yet aggregate productivity growth in the United States was actually slowing. It took nearly a decade—and a wholesale reorganization of business processes around digital technology—before the productivity gains finally materialized in the late 1990s.

Economists who study this earlier episode point out that the lag was not accidental. As Erik Brynjolfsson of Stanford, who has written extensively on the topic, has argued, general-purpose technologies require complementary investments in organizational redesign, worker training, and process reengineering before their full benefits can be captured. The NBER paper makes a similar argument about AI, noting that firms which simply layered AI tools on top of existing workflows saw the smallest productivity effects, while the handful that undertook more fundamental restructuring showed more promising—though still modest—results.

The Hidden Costs That Rarely Make the Pitch Deck

One of the most striking findings in the NBER research concerns the hidden costs of AI adoption that are frequently omitted from vendor projections and internal business cases. The paper documents significant expenditures on what it terms “AI maintenance labor”—the human effort required to review AI outputs for accuracy, correct hallucinations and errors, manage prompt engineering, and handle the edge cases that automated systems cannot resolve.

In customer-facing applications, for example, the researchers found that while AI chatbots could handle a larger volume of initial inquiries, the rate of escalation to human agents actually increased in several cases, as customers grew frustrated with incorrect or irrelevant automated responses. In some firms, the net savings in cost per resolved inquiry were negligible; in a few, that cost actually rose. Similarly, in knowledge work settings such as legal research and financial analysis, the time saved by AI-generated first drafts was partially consumed by the additional review and fact-checking those drafts required. Senior professionals reported spending less time writing but more time editing: a reallocation of effort rather than a reduction.

Capital Markets Are Starting to Ask Harder Questions

The productivity paradox is not merely an academic curiosity; it has real implications for the investment thesis underpinning the AI boom. Technology companies have committed more than $300 billion in capital expenditures on AI infrastructure in 2025 and 2026, according to estimates from multiple Wall Street analysts. Cloud providers, chipmakers, and enterprise software firms have all justified elevated valuations on the assumption that AI will drive a step-change in corporate efficiency and, by extension, willingness to pay for AI-powered services.

If the productivity gains remain diffuse and difficult to measure, the willingness of enterprises to sustain—let alone increase—their AI spending could come under pressure. Already, some analysts have begun drawing comparisons to the fiber-optic buildout of the late 1990s, when massive infrastructure investment preceded a painful period of overcapacity and write-downs. The comparison is imperfect—AI capabilities are advancing far more rapidly than bandwidth demand did in that era—but the underlying concern about the gap between investment and realized value is structurally similar.

Where the Gains Are Actually Appearing

It would be misleading to suggest that AI is producing no value whatsoever. The NBER paper identifies several areas where productivity improvements are both real and measurable. Software development stands out as one domain where AI coding assistants have demonstrably increased output per developer, particularly for routine tasks such as writing boilerplate code, debugging, and generating test cases. Customer support operations at very large scale—think millions of interactions per month—have also shown genuine cost reductions, though primarily in tier-one triage rather than complex problem resolution.

The Fortune report similarly noted that CEOs in the pharmaceutical and materials science sectors were more optimistic about AI’s near-term impact, citing applications in drug discovery, molecular simulation, and supply chain optimization where the technology is being applied to well-defined problems with clear metrics. The common thread among the success stories is specificity: AI appears to deliver the strongest returns when applied to narrow, well-structured tasks with abundant training data and low tolerance for ambiguity, rather than as a general-purpose productivity enhancer across the enterprise.

The Organizational Challenge May Be the Binding Constraint

Perhaps the most important insight from both the NBER research and the CEO surveys is that the binding constraint on AI productivity is not technological but organizational. The technology itself is advancing at a remarkable pace—large language models are becoming more capable, inference costs are falling, and new modalities are expanding the range of tasks AI can address. But organizations are struggling to redesign their workflows, incentive structures, and management practices to take full advantage of these capabilities.

This is a pattern that has repeated with every major general-purpose technology, from electrification to the personal computer. The firms that eventually captured the largest productivity gains from electricity in the early 20th century were not those that simply replaced steam engines with electric motors in the same factory layout. They were the ones that redesigned their factories from the ground up to take advantage of the flexibility that distributed electric power provided. The analogy to AI is direct: bolting a chatbot onto an existing customer service operation is the equivalent of swapping a steam engine for an electric motor without changing the floor plan.

What Comes Next for the AI Investment Cycle

History suggests that the productivity paradox is not permanent. The question for investors, executives, and workers is how long the lag will persist and how painful the intervening period will be. The optimistic view, articulated by some of the researchers behind the NBER paper, is that the current period of disappointing returns is a necessary phase of experimentation and learning, and that the organizational adaptations required to unlock AI’s full potential are already underway at leading firms.

The more cautious view is that the gap between AI hype and AI reality could trigger a correction in spending and valuations before the productivity gains arrive. If CEOs continue to report underwhelming returns, boards may begin to question the pace of investment, particularly in an environment of elevated interest rates and tightening capital budgets. The technology will almost certainly prove transformative over a longer time horizon—but as Solow’s paradox reminds us, the gap between “almost certainly” and “right now” can be wide enough to swallow billions of dollars in shareholder value.

For now, the data suggest that the AI productivity revolution is real in theory and elusive in practice. The firms most likely to bridge that gap will be those willing to undertake the difficult, unglamorous work of organizational redesign—rethinking not just which tasks AI can perform, but how entire business processes, team structures, and performance metrics need to change to accommodate a fundamentally different kind of tool. That work is harder to sell in a keynote presentation than a flashy demo, but it may ultimately be what separates the winners from the also-rans in the AI era.



from WebProNews https://ift.tt/GsqpSjT

Wednesday, 18 February 2026

Lenovo Accused of Secretly Funneling User Data to China: Inside the Class-Action Privacy Lawsuit That Could Reshape Tech Manufacturing Trust

A sweeping class-action lawsuit filed in a U.S. federal court accuses Lenovo Group Ltd., the world’s largest personal computer manufacturer, of covertly transferring vast quantities of American consumer data to servers in China — a charge that, if substantiated, could send tremors through the global technology supply chain and reignite fierce debate over the security implications of Chinese-manufactured hardware in American homes and offices.

The complaint, filed in the Northern District of California, alleges that Lenovo embedded software in its consumer devices that systematically harvested user data — including browsing activity, device identifiers, and other sensitive personal information — and transmitted that data in bulk to servers located in the People’s Republic of China. The lawsuit seeks class-action status on behalf of potentially millions of Lenovo device owners across the United States, as reported by Slashdot.

A Familiar Ghost: Lenovo’s Troubled History With Pre-Installed Software

For industry veterans, the allegations carry an unmistakable echo. In 2015, Lenovo was caught distributing laptops pre-loaded with Superfish, a visual search adware application that installed its own root certificate authority on users’ machines. The Superfish debacle didn’t merely inject unwanted advertisements into web browsers — it fundamentally compromised the HTTPS encryption that protects online banking, medical records, and virtually every other sensitive digital transaction. Security researchers at the time described it as one of the most reckless pre-installation decisions ever made by a major PC manufacturer. Lenovo eventually settled with the Federal Trade Commission in 2017, agreeing to obtain affirmative consent before installing adware and to undergo third-party security audits for 20 years.

The new lawsuit suggests that Lenovo may not have fully internalized the lessons of that episode. According to the complaint, the data collection practices at issue go beyond adware and into the realm of systematic surveillance-style data harvesting. Plaintiffs’ attorneys argue that Lenovo’s software collected data without meaningful user consent and routed it to infrastructure in China, where it could potentially be accessed by state authorities under the country’s expansive national security and intelligence laws — including the 2017 National Intelligence Law, which compels Chinese organizations and citizens to support and cooperate with state intelligence work.

What the Lawsuit Specifically Alleges

The legal filing details several categories of data that Lenovo’s pre-installed software allegedly collected and transmitted. These include hardware and software configuration data, application usage patterns, web browsing histories, unique device identifiers, and geolocation information. Plaintiffs contend that this data was transmitted to servers controlled by or accessible to entities in China, creating a pipeline of American consumer information flowing directly into a jurisdiction with minimal privacy protections for foreign nationals.

The attorneys driving the case are framing it not merely as a consumer privacy violation but as a national security concern. The complaint draws explicit parallels to the ongoing U.S. government scrutiny of Chinese technology companies, including the prolonged campaign against Huawei Technologies and the legislative efforts to force a divestiture of TikTok from its Chinese parent company, ByteDance. The argument is straightforward: if the U.S. government considers Chinese-controlled social media apps a security risk, then Chinese-manufactured computers that secretly exfiltrate user data represent an even more direct threat.

The Broader Regulatory and Geopolitical Context

The lawsuit arrives at a moment of heightened tension between Washington and Beijing over technology, data sovereignty, and espionage. The U.S. government has in recent years taken increasingly aggressive steps to limit Chinese access to American data and technology. Executive orders have restricted transactions with Chinese-linked technology firms. The Commerce Department has expanded export controls on advanced semiconductors. And Congress has moved to ban or force the sale of TikTok, citing concerns that the app’s data could be weaponized by Beijing.

Lenovo occupies a particularly sensitive position in this environment. The company, headquartered in Beijing and incorporated in Hong Kong, is the largest PC vendor in the world by unit shipments, commanding roughly 23% of the global market according to recent figures from IDC. Its ThinkPad line, originally developed by IBM, remains a staple in corporate IT departments and government agencies worldwide. The U.S. Department of Defense and other federal agencies have at various points used Lenovo hardware, though security concerns have periodically led to restrictions. In 2019, the U.S. Army reportedly removed Lenovo devices from certain sensitive environments, and the company has faced recurring questions from lawmakers about its ties to the Chinese government, particularly through its largest shareholder, Legend Holdings, which has links to the Chinese Academy of Sciences.

Legal Theories and the Path to Class Certification

The plaintiffs are pursuing claims under several legal theories, including violations of state consumer protection statutes, the federal Wiretap Act, the Computer Fraud and Abuse Act, and California’s Invasion of Privacy Act. The breadth of the legal claims reflects a strategy designed to survive the inevitable motion to dismiss and to establish standing for a nationwide class. Attorneys involved in the case are reportedly seeking damages that could reach into the hundreds of millions of dollars if the class is certified and the case proceeds to trial or settlement.

Class certification will be a critical battleground. Lenovo’s defense team is expected to argue that the putative class is too diverse — encompassing users of different devices, operating systems, and software configurations — to be treated as a single group. They may also challenge whether plaintiffs can demonstrate concrete injury, a threshold that the U.S. Supreme Court raised in its 2021 decision in TransUnion LLC v. Ramirez, which held that plaintiffs in data-related class actions must show a concrete harm, not merely a statutory violation. The plaintiffs will need to demonstrate that the alleged data transfers caused or created an imminent risk of real-world harm — a showing that courts have found easier to make when sensitive personal data is involved.

Lenovo’s Likely Defense and Industry Implications

Lenovo has not yet filed a detailed response to the complaint, but the company has historically maintained that its data collection practices are transparent, consensual, and compliant with applicable laws. In past controversies, Lenovo has pointed to its privacy policies and end-user license agreements as evidence that users were informed about data collection. The company has also emphasized that it operates as a global, publicly traded corporation subject to the laws of every jurisdiction in which it does business, including the European Union’s General Data Protection Regulation and U.S. state privacy laws such as the California Consumer Privacy Act.

However, privacy advocates have long argued that burying data collection disclosures in lengthy terms-of-service agreements that virtually no consumer reads does not constitute meaningful consent. The Federal Trade Commission has signaled in recent enforcement actions that it takes a dim view of so-called “dark patterns” and consent mechanisms that obscure the true scope of data collection. If the court agrees that Lenovo’s disclosures were inadequate, the case could establish an important precedent for how pre-installed software on consumer hardware is regulated.

What This Means for the PC Industry and Supply Chain Security

The ramifications extend well beyond Lenovo. The global PC industry relies heavily on manufacturing concentrated in China and other parts of East Asia. If a U.S. court finds that a Chinese-headquartered manufacturer engaged in unauthorized bulk data transfers to China, it could accelerate efforts to diversify technology supply chains away from Chinese manufacturing — a process that is already underway but has been slow and costly. Companies like Dell Technologies, HP Inc., and Apple have all faced questions about their own supply chain dependencies on China, though none have faced allegations as pointed as those in the Lenovo complaint.

For enterprise IT departments and government procurement officers, the lawsuit underscores the importance of rigorous vetting of hardware and pre-installed software. The practice of “bloatware” — pre-installing third-party software on consumer devices, often for advertising revenue — has been a persistent irritant for consumers and a recurring security risk. Microsoft has attempted to address the issue with its Signature Edition PCs, which ship without third-party software, and Google has imposed restrictions on pre-installed apps for Android devices. But the problem persists, and the Lenovo case may provide the impetus for more aggressive regulatory action.

The Stakes for American Consumers and Data Sovereignty

At its core, the lawsuit raises a question that American policymakers and consumers will increasingly have to confront: Can hardware manufactured by companies headquartered in adversarial nations be trusted with the most intimate details of daily digital life? The answer has profound implications not only for the technology industry but for the broader relationship between the United States and China.

The case is in its early stages, and it may be months or years before it reaches a resolution. But the mere filing of the complaint — and the public attention it is generating — serves as a powerful reminder that the intersection of technology, privacy, and geopolitics remains one of the most consequential and unresolved issues of the digital age. For Lenovo, a company that has spent two decades building its reputation as a trustworthy global brand, the stakes could not be higher. For American consumers, the case is a sobering prompt to ask what, exactly, their devices are doing when they aren’t looking.



from WebProNews https://ift.tt/odVaYx2

Tuesday, 17 February 2026

The $100 Startup Dream Is Dead: Why Launching a Side Hustle in America Now Costs More Than Ever

For years, the American entrepreneurial mythology has rested on a seductive premise: that anyone with grit, a laptop, and a modest sum of cash could launch a business from their kitchen table and build it into something meaningful. The side hustle — that celebrated engine of upward mobility — was supposed to be capitalism’s great equalizer. But a growing body of evidence suggests the economics of starting small have shifted dramatically, and the barriers to entry are climbing faster than most aspiring founders realize.

A recent deep-dive report from Business Insider lays bare the rising costs associated with launching even the most modest of enterprises in 2025, painting a picture that is far less romantic than the bootstrapping narratives that dominate social media and entrepreneurship podcasts. The piece argues that the side hustle economy, once heralded as a democratizing force, is increasingly becoming a privilege of those who already have capital to spare.

The Hidden Price Tags Behind Every ‘Low-Cost’ Business Idea

The notion that you can start a business for next to nothing has been a staple of entrepreneurial content for over a decade. Platforms like Shopify, Etsy, and Amazon FBA were marketed as near-zero-cost launchpads. But as Business Insider reports, the actual costs have ballooned considerably. Between rising platform fees, the increasing necessity of paid digital advertising to gain any visibility, software subscriptions for everything from accounting to email marketing, and the regulatory costs of business registration and compliance, the true startup cost for a side hustle now regularly runs into the thousands of dollars — often before a single dollar of revenue is generated.

Consider the freelancer who wants to offer graphic design services. Beyond the obvious need for a computer and design software — Adobe Creative Cloud alone runs roughly $660 per year — there are costs for a professional website, portfolio hosting, invoicing software, self-employment taxes, and health insurance that a traditional employer would otherwise subsidize. For someone selling physical products, the math gets even more punishing: inventory costs, shipping supplies, warehouse or storage fees, product photography, and the ever-increasing cost of customer acquisition through platforms like Meta and Google, where ad prices have surged year over year.

Platform Economics: The Toll Booth Model of Modern Entrepreneurship

One of the most significant shifts in the side hustle economy over the past five years has been the evolution of digital platforms from enablers to gatekeepers. Etsy, once the darling of handmade-goods entrepreneurs, has steadily increased its transaction fees and now charges sellers a mandatory advertising fee on certain sales. Amazon’s FBA program, while offering logistical convenience, takes a substantial cut that can consume 30% to 40% of a product’s sale price when all fees are tallied. Shopify’s basic plan starts at $39 per month, but most serious sellers quickly find themselves paying for premium themes, apps, and third-party integrations that push monthly costs well above $200.

This toll-booth model means that platforms capture an ever-larger share of the value created by small entrepreneurs. The result is a dynamic where the platforms themselves are the primary beneficiaries of the side hustle boom, while individual sellers face razor-thin margins. As the Business Insider piece highlights, this creates a paradox at the heart of modern capitalism: the tools that were supposed to lower barriers to entry have, in many cases, become the barriers themselves.
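To see how these layered fees compound, here is a back-of-the-envelope sketch for a single unit sold through a marketplace. All figures are hypothetical assumptions chosen to sit roughly inside the 30% to 40% all-in range cited above, not any platform's actual fee schedule:

```python
# Hypothetical per-unit economics for a marketplace seller.
# Every number below is an illustrative assumption, not a real fee schedule.

SALE_PRICE = 35.00        # retail price of the product
REFERRAL_RATE = 0.15      # marketplace referral fee, as a share of price
FULFILLMENT_FEE = 5.50    # flat pick/pack/ship fee per unit
AD_COST_PER_SALE = 4.00   # average ad spend needed to generate one sale
COST_OF_GOODS = 10.00     # what the seller paid for the item

referral_fee = SALE_PRICE * REFERRAL_RATE
total_platform_costs = referral_fee + FULFILLMENT_FEE + AD_COST_PER_SALE
net_profit = SALE_PRICE - total_platform_costs - COST_OF_GOODS
fee_share = total_platform_costs / SALE_PRICE

print(f"Platform + ad costs: ${total_platform_costs:.2f} ({fee_share:.0%} of price)")
print(f"Net profit per unit: ${net_profit:.2f}")
```

With these assumed numbers, fees and ads consume about 42% of the sale price, leaving $10.25 of profit on a $35 item before taxes, returns, storage, or the seller's own time.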

The Inflation Factor: When Everything Costs More, So Does Starting Up

The broader macroeconomic environment has compounded these challenges. Cumulative inflation since 2020 has driven up the cost of nearly every input a small business owner needs. Commercial rents, even for modest co-working spaces, have climbed in most metropolitan areas. The cost of raw materials for product-based businesses has increased substantially. And the labor market, while cooling somewhat from its pandemic-era tightness, still makes it expensive to hire even part-time help.

According to data from the U.S. Bureau of Labor Statistics, the Consumer Price Index has risen more than 20% since January 2020. For aspiring entrepreneurs, this means that the $5,000 that might have been sufficient seed capital five years ago now buys considerably less. Meanwhile, wages for many workers — the very people most likely to pursue side hustles as a supplemental income strategy — have not kept pace with inflation in real terms, creating a squeeze from both directions: the cost to start is higher, and the disposable income available to fund that start is lower.
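The purchasing-power squeeze described above is a one-line deflation calculation. The article's figure is "more than 20%" cumulative CPI growth since January 2020; the sketch below treats it as exactly 20% for round numbers:

```python
# Illustrative purchasing-power math. The >20% cumulative CPI rise since
# January 2020 is the article's figure; 20% exactly is assumed here for
# a round-number sketch.

cumulative_inflation = 0.20          # assumed cumulative CPI rise
nominal_seed_capital = 5_000.00      # dollars available today

# What today's $5,000 buys, expressed in January-2020 dollars:
real_value = nominal_seed_capital / (1 + cumulative_inflation)

# How much is needed today to match $5,000 of 2020 buying power:
required_today = nominal_seed_capital * (1 + cumulative_inflation)

print(f"$5,000 today buys what ~${real_value:,.2f} bought in 2020")
print(f"Matching 2020's $5,000 takes ~${required_today:,.2f} today")
```

In other words, under this assumption today's $5,000 stretches only as far as roughly $4,167 did in early 2020, while matching 2020's $5,000 of buying power now takes about $6,000.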

The Social Media Illusion and Survivorship Bias

Adding to the challenge is the distorted picture of entrepreneurial success that pervades social media. TikTok and Instagram are awash with creators showcasing their supposedly effortless side hustle income — the print-on-demand store generating $10,000 a month, the dropshipping operation funding a luxury lifestyle. What these narratives almost universally omit are the failure rates, the months of unprofitable grinding, and the significant upfront investments that preceded any success.

Research from the U.S. Small Business Administration has consistently shown that approximately 20% of new businesses fail within their first year, and roughly half fail within five years. For side hustles specifically — which typically operate with less capital, less strategic planning, and less dedicated time than full-time ventures — the attrition rates are likely even higher, though comprehensive data is harder to come by. The survivorship bias inherent in social media entrepreneurship content creates unrealistic expectations and can lead aspiring founders to underestimate both the financial and emotional costs of starting a business.

Regulatory and Tax Burdens That Catch New Entrepreneurs Off Guard

Beyond the visible costs of tools, platforms, and materials, new entrepreneurs frequently encounter a thicket of regulatory and tax obligations they hadn’t anticipated. Self-employment tax in the United States adds an additional 15.3% burden on net earnings, covering both the employer and employee portions of Social Security and Medicare taxes. Many states and municipalities require business licenses, permits, or registrations that carry their own fees. And for businesses that sell physical products across state lines, the post-South Dakota v. Wayfair sales tax compliance requirements have created a complex web of obligations that often necessitate paid software or professional accounting help.
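The 15.3% self-employment tax figure above is the headline combined rate; the actual computation is slightly more forgiving, since the rate applies to 92.35% of net earnings and half the tax is deductible from income tax. A simplified comparison of the headline estimate against the 92.35%-base rule, using a hypothetical $10,000 of side-hustle profit:

```python
# Simplified sketch of the self-employment tax bite. Deliberately
# incomplete: it ignores the income-tax deduction for half of SE tax,
# the Social Security wage cap, and state-level taxes.

SE_TAX_RATE = 0.153   # combined Social Security + Medicare rate

def se_tax_headline(net_earnings):
    """Naive 15.3% applied to all net earnings, as the headline figure suggests."""
    return net_earnings * SE_TAX_RATE

def se_tax_adjusted_base(net_earnings):
    """Closer to the actual rule: the rate applies to 92.35% of net earnings."""
    return net_earnings * 0.9235 * SE_TAX_RATE

side_hustle_profit = 10_000.00  # hypothetical annual net profit
print(f"Headline estimate:   ${se_tax_headline(side_hustle_profit):,.2f}")
print(f"With 92.35% base:    ${se_tax_adjusted_base(side_hustle_profit):,.2f}")
```

Either way, a side hustler clearing $10,000 owes roughly $1,400 to $1,500 in self-employment tax alone, before any federal or state income tax.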

Health insurance represents another major hidden cost. Entrepreneurs who leave traditional employment — or who never had employer-sponsored coverage — must navigate the individual insurance market, where premiums for a single adult averaged $477 per month in 2024 according to KFF (formerly the Kaiser Family Foundation). For a side hustler earning modest revenue, this single expense can consume a substantial portion of their business income, undermining the financial rationale for the venture entirely.

Who Can Actually Afford to Be an Entrepreneur?

The cumulative effect of these rising costs is a troubling stratification of entrepreneurial opportunity. As Business Insider argues, the side hustle economy increasingly favors those who already possess financial cushions — savings, spousal income, family wealth, or access to credit. For workers living paycheck to paycheck, the risk-reward calculus of investing several thousand dollars into an uncertain venture with no guaranteed return is simply untenable.

This has implications that extend beyond individual financial outcomes. If entrepreneurship becomes primarily accessible to those with existing capital, it risks reinforcing rather than disrupting existing wealth inequalities. The Kauffman Foundation, which tracks entrepreneurial activity in the United States, has noted that while new business formation surged during and after the pandemic, many of those new businesses were concentrated among higher-income demographics and in industries with relatively high capital requirements, such as e-commerce and professional services.

What Would Actually Make Side Hustles Accessible Again

Policy discussions around small business support have largely focused on loan programs and tax incentives, but critics argue these measures don’t address the structural issues driving up startup costs. Platform fee regulation, simplified tax compliance for micro-businesses, expanded access to affordable health insurance decoupled from employment, and public investment in digital infrastructure and training could all help lower the effective cost of entry.

Some states have begun experimenting with micro-enterprise grant programs that provide small amounts of non-repayable capital — typically $1,000 to $10,000 — to aspiring entrepreneurs who meet certain income thresholds. These programs, while modest in scale, represent a recognition that the traditional advice to “just start” rings hollow when the cost of starting has outpaced the financial capacity of the people most in need of supplemental income.

The American side hustle isn’t dead, but it is increasingly expensive, complex, and stratified. For the millions of workers who look to entrepreneurship as a path to financial independence, the gap between aspiration and reality is widening — and closing it will require more than motivational Instagram posts and $29.99 online courses promising passive income. It will require an honest reckoning with the economics of starting small in an era where almost nothing is small anymore.



from WebProNews https://ift.tt/QnpgP2z

Chinese Robots Perform in Front of 1 Billion People

Elon Musk’s Optimus will have to outperform these highly dexterous Chinese robots once it launches. What they can do is truly amazing.

The difference with Optimus is its brain, which enables it to learn and respond to humans. The Chinese robots are pre-programmed, but still inspiring to watch. As one person replied on X, “The progress coming out of China in robotics is a serious reminder of why the United States needs to stay focused and invested in frontier technology. Elon Musk through Tesla and his other ventures continues to be one of the most important forces driving American competitiveness in this space.”



from WebProNews https://ift.tt/Ao1p8Ic