Friday, 17 January 2025

Microsoft: AI Will Never Be 100% Secure

Anyone holding out hope that AI can be made inherently secure is in for a disappointment, with Microsoft’s research team saying it is an impossible task.

AI has been both a blessing and a curse for cybersecurity. While it can be a useful tool for analyzing code and finding vulnerabilities, bad actors are already working overtime to use AI in increasingly sophisticated attacks. Beyond direct cyberattacks, AI also poses everyday risks to data security and trade secrets, thanks to how AI consumes and indexes data.

To better understand the burgeoning field, a group of Microsoft researchers tackled the question of AI security, publishing their findings in a new paper. The research was done using Microsoft’s own AI models.

In recent years, AI red teaming has emerged as a practice for probing the safety and security of generative AI systems. Due to the nascency of the field, there are many open questions about how red teaming operations should be conducted.

The team focused on eight specific areas.

  1. Understand what the system can do and where it is applied
  2. You don’t have to compute gradients to break an AI system
  3. AI red teaming is not safety benchmarking
  4. Automation can help cover more of the risk landscape
  5. The human element of AI red teaming is crucial
  6. Responsible AI harms are pervasive but difficult to measure
  7. LLMs amplify existing security risks and introduce new ones
  8. The work of securing AI systems will never be complete

AI Will Never Be 100% Secure

When discussing the eighth point, the researchers highlighted the issues involved in securing AI systems.

Engineering and scientific breakthroughs are much needed and will certainly help mitigate the risks of powerful AI systems. However, the idea that it is possible to guarantee or “solve” AI safety through technical advances alone is unrealistic and overlooks the roles that can be played by economics, break-fix cycles, and regulation.

Ultimately, the researchers conclude that the key to AI security is raising the cost of attacks, citing three specific methods.

Economics of cybersecurity. A well-known epigram in cybersecurity is that “no system is completely foolproof” [2]. Even if a system is engineered to be as secure as possible, it will always be subject to the fallibility of humans and vulnerable to sufficiently well-resourced adversaries. Therefore, the goal of operational cybersecurity is to increase the cost required to successfully attack a system (ideally, well beyond the value that would be gained by the attacker) [2, 26]. Fundamental limitations of AI models give rise to similar cost-benefit tradeoffs in the context of AI alignment. For example, it has been demonstrated theoretically [50] and experimentally [9] that for any output which has a non-zero probability of being generated by an LLM, there exists a sufficiently long prompt that will elicit this response. Techniques like reinforcement learning from human feedback (RLHF) therefore make it more difficult, but by no means impossible, to jailbreak models. Currently, the cost of jailbreaking most models is low, which explains why real-world adversaries usually do not use expensive attacks to achieve their objectives.

Break-fix cycles. In the absence of safety and security guarantees, we need methods to develop AI systems that are as difficult to break as possible. One way to do this is using break-fix cycles, which perform multiple rounds of red teaming and mitigation until the system is robust to a wide range of attacks. We applied this approach to safety-align Microsoft’s Phi-3 language models and covered a wide variety of harms and scenarios [11]. Given that mitigations may also inadvertently introduce new risks, purple teaming methods that continually apply both offensive and defensive strategies [3] may be more effective at raising the cost of attacks than a single round of red teaming.
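The break-fix loop the researchers describe can be sketched as a simple iteration: probe the system with a suite of attacks, mitigate whatever succeeds, and repeat until the suite no longer breaks anything. The sketch below is a toy illustration of that process; the `attack_succeeds` and `mitigate` callbacks are hypothetical stand-ins, not Microsoft's actual red-teaming tooling.

```python
# Toy sketch of a break-fix cycle: alternate red teaming ("break") and
# mitigation ("fix") until the system resists every attack in the suite.
# `attack_succeeds` and `mitigate` are hypothetical caller-supplied
# callbacks; this is not Microsoft's actual tooling.
def break_fix(system, attacks, attack_succeeds, mitigate, max_rounds=5):
    for round_no in range(1, max_rounds + 1):
        # "Break": run the whole attack suite against the current system.
        failures = [a for a in attacks if attack_succeeds(system, a)]
        if not failures:
            return system, round_no - 1  # robust to the full suite
        # "Fix": patch the system against whatever got through.
        system = mitigate(system, failures)
    return system, max_rounds
```

A purple-team variant would also generate new attacks inside the same loop, rather than testing against a fixed suite, which is why the paper suggests it may raise attack costs more than a single red-teaming round.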

Policy and regulation. Finally, regulation can also raise the cost of an attack in multiple ways. For example, it can require organizations to adhere to stringent security practices, creating better defenses across the industry. Laws can also deter attackers by establishing clear consequences for engaging in illegal activities. Regulating the development and usage of AI is complicated, and governments around the world are deliberating on how to control these powerful technologies without stifling innovation. Even if it were possible to guarantee the adherence of an AI system to some agreed upon set of rules, those rules will inevitably change over time in response to shifting priorities.

The work of building safe and secure AI systems will never be complete. But by raising the cost of attacks, we believe that the prompt injections of today will eventually become the buffer overflows of the early 2000s – though not eliminated entirely, now largely mitigated through defense-in-depth measures and secure-first design.

Additional Findings

The study cites a number of issues involved in securing AI systems, not the least of which is the common scenario of integrating AI with legacy systems. Unfortunately, trying to marry the two often results in serious security issues.

The integration of generative AI models into a variety of applications has introduced novel attack vectors and shifted the security risk landscape. However, many discussions around GenAI security overlook existing vulnerabilities. As elaborated in Lesson 2, attacks that target end-to-end systems, rather than just underlying models, often work best in practice. We therefore encourage AI red teams to consider both existing (typically system-level) and novel (typically model-level) risks.

Existing security risks. Application security risks often stem from improper security engineering practices including outdated dependencies, improper error handling, lack of input/output sanitization, credentials in source, insecure packet encryption, etc. These vulnerabilities can have major consequences. For example, Weiss et al., [49] discovered a token-length side channel in GPT-4 and Microsoft Copilot that enabled an adversary to accurately reconstruct encrypted LLM responses and infer private user interactions. Notably, this attack did not exploit any weakness in the underlying AI model and could only be mitigated by more secure methods of data transmission. In case study #5, we provide an example of a well-known security vulnerability (SSRF) identified by one of our operations.
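The token-length side channel worked because encrypted, streamed responses still leaked the size of each chunk. A hedged sketch of the kind of transport-level fix the paper points toward: length-prefix each chunk and pad it to a multiple of a fixed block size before encryption, so ciphertext sizes no longer map one-to-one onto token lengths. The `BLOCK` size and framing scheme here are assumptions for illustration, not the actual GPT-4/Copilot mitigation.

```python
# Hedged sketch of a padding defense against a token-length side
# channel: frame each streamed chunk with a 2-byte length prefix, then
# pad to a multiple of BLOCK bytes before encryption. BLOCK and the
# framing are illustrative assumptions, not the deployed fix.
BLOCK = 32

def pad_chunk(chunk: str) -> bytes:
    data = chunk.encode("utf-8")
    framed = len(data).to_bytes(2, "big") + data  # length prefix + payload
    pad_len = (-len(framed)) % BLOCK              # pad up to block boundary
    return framed + b"\x00" * pad_len

def unpad_chunk(padded: bytes) -> str:
    size = int.from_bytes(padded[:2], "big")      # recover the real length
    return padded[2 : 2 + size].decode("utf-8")
```

With padding, a short token and a longer one produce identically sized ciphertexts, which is exactly the information the attack depended on.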

Model-level weaknesses. Of course, AI models also introduce new security vulnerabilities and have expanded the attack surface. For example, AI systems that use retrieval augmented generation (RAG) architectures are often susceptible to cross-prompt injection attacks (XPIA), which hide malicious instructions in documents, exploiting the fact that LLMs are trained to follow user instructions and struggle to distinguish among multiple inputs [13]. We have leveraged this attack in a variety of operations to alter model behavior and exfiltrate private data. Better defenses will likely rely on both system-level mitigations (e.g., input sanitization) and model-level improvements (e.g., instruction hierarchies [43]).
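A system-level mitigation of the kind mentioned above might strip common injection phrases from retrieved documents and wrap them in explicit delimiters, so the model has a signal that the span is data rather than instructions. This is a minimal sketch; the regex patterns and tag names are assumptions for illustration, not the defenses used in the paper's operations, and pattern matching alone cannot catch every injection.

```python
import re

# Illustrative input-sanitization step for RAG pipelines facing
# cross-prompt injection (XPIA): redact obvious instruction-like
# phrases, then delimit the document as untrusted data. The patterns
# and tags are hypothetical, and this only raises the attack cost --
# it does not eliminate the risk.
SUSPICIOUS = re.compile(
    r"ignore (all |any )?(previous|prior) instructions|you are now|system prompt",
    re.IGNORECASE,
)

def sanitize_document(doc: str) -> str:
    cleaned = SUSPICIOUS.sub("[removed]", doc)
    # Delimiters mark the span as data, not instructions to follow.
    return f"<untrusted_document>\n{cleaned}\n</untrusted_document>"
```

Model-level defenses such as instruction hierarchies would complement, not replace, this kind of filtering.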

While techniques like these are helpful, it is important to remember that they can only mitigate, and not eliminate, security risk. Due to fundamental limitations of language models [50], one must assume that if an LLM is supplied with untrusted input, it will produce arbitrary output. When that input includes private information, one must also assume that the model will output private information. In the next lesson, we discuss how these limitations inform our thinking around how to develop AI systems that are as safe and secure as possible.
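The "assume arbitrary output" principle has a practical consequence: downstream code should validate model output before acting on it, much like any other untrusted input. A minimal sketch, assuming a hypothetical JSON action format and allow-list (neither comes from the paper):

```python
import json

# Since a model fed untrusted input must be assumed capable of
# producing arbitrary output, validate that output before acting on it.
# The action names and JSON shape here are hypothetical examples.
ALLOWED_ACTIONS = {"search", "summarize"}

def parse_model_action(raw: str) -> dict:
    try:
        action = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("model output is not valid JSON")
    if not isinstance(action, dict) or action.get("name") not in ALLOWED_ACTIONS:
        raise ValueError("disallowed or missing action name")
    if not isinstance(action.get("query"), str):
        raise ValueError("missing or non-string 'query' field")
    # Return only the validated fields, dropping anything unexpected.
    return {"name": action["name"], "query": action["query"]}
```

Rejecting anything outside the allow-list limits what a successful prompt injection can make the surrounding system do, even though it cannot stop the model from being manipulated.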

Conclusion

The study’s conclusion is a fascinating look into the challenges involved in securing AI systems, and sets realistic expectations that organizations must account for.

Ultimately, AI is proving to be much like any other computer system: securing it will be a never-ending battle of one-upmanship between security professionals and bad actors.



from WebProNews https://ift.tt/syk5UXM

U.S. Supreme Court Upholds TikTok Ban

The United States Supreme Court has dealt a blow to TikTok, upholding a previous ruling banning the app over privacy and national security concerns.

The U.S. Court of Appeals for the District of Columbia had previously denied an appeal by TikTok challenging the ban passed by lawmakers. The ban is scheduled to go into effect January 19. The Chinese company brought its case to the Supreme Court, but SCOTUS has upheld the ban, shooting down the company’s First Amendment argument.

“There is no doubt that, for more than 170 million Americans, TikTok offers a distinctive and expansive outlet for expression, means of engagement, and source of community,” the justices wrote in an unsigned opinion, via Deadline. “But Congress has determined that divestiture is necessary to address its well-supported national security concerns regarding TikTok’s data collection practices and relationship with a foreign adversary. For the foregoing reasons, we conclude that the challenged provisions do not violate petitioners’ First Amendment rights.”

The legislation, the Protecting Americans from Foreign Adversary Controlled Applications Act, was passed with bipartisan support and signed into law by President Biden in 2024. It capped a multi-year effort by both the Trump and Biden administrations to ban the Chinese social media platform.

Lawmakers, government officials, and intelligence agencies have grown increasingly concerned about TikTok, both from a privacy and national security standpoint. ByteDance, TikTok’s parent company, used the platform to surveil Forbes journalists. The company has faced multiple lawsuits for violating the privacy of children, and the company is accused of having an uncomfortably close relationship with Beijing. Lawmakers have also accused the platform of fostering election misinformation.

The Donald Trump Factor

Despite being the one who first pushed for a ban on TikTok, Trump has seemingly done an about-face and wants to work out a deal to keep the app operational in the U.S.

In posts on his Truth Social network, Trump said he had spoken with Chairman Xi Jinping about a number of topics, including TikTok.

I just spoke to Chairman Xi Jinping of China. The call was a very good one for both China and the U.S.A. It is my expectation that we will solve many problems together, and starting immediately. We discussed balancing Trade, Fentanyl, TikTok, and many other subjects. President Xi and I will do everything possible to make the World more peaceful and safe!

While Trump has been more vocal in recent weeks about his desire to strike a deal to save TikTok, he was measured in his response to the Supreme Court’s decision, saying he would need time to review the options.

The Supreme Court decision was expected, and everyone must respect it. My decision on TikTok will be made in the not too distant future, but I must have time to review the situation. Stay tuned!



from WebProNews https://ift.tt/pgqwe5y

FTC Orders GoDaddy to Improve Its Security

GoDaddy, a domain registrar and one of the world’s largest hosting companies, has been ordered to improve its security by the Federal Trade Commission.

In its complaint, the FTC cites GoDaddy’s marketing of “itself as a secure choice for customers to host their websites,” as well as “its commitment to data security and careful threat monitoring practices.” Unfortunately, according to the complaint, GoDaddy failed to live up to its own hype.

In fact, GoDaddy’s data security program was unreasonable for a company of its size and complexity. Despite its representations, GoDaddy was blind to vulnerabilities and threats in its hosting environment. Since 2018, GoDaddy has violated Section 5 of the FTC Act by failing to implement standard security tools and practices to protect the environment where it hosts customers’ websites and data, and to monitor it for security threats. In particular, GoDaddy failed to: (a) inventory and manage assets; (b) manage software updates; (c) assess risks to its website hosting services; (d) use multi-factor authentication; (e) log security-related events; (f) monitor for security threats, including by failing to use software that could actively detect threats from its many logs, and failing to use file integrity monitoring; (g) segment its network; and (h) secure connections to services that provide access to consumer data. These failures made GoDaddy’s representations about security false or misleading.

According to the FTC, these security inadequacies led to multiple data breaches.

As a result of GoDaddy’s data security failures, it experienced several major compromises of its hosting service between 2019 and December 2022, in which threat actors repeatedly gained access to its customers’ websites and data, causing harm to its customers and putting them and visitors to their websites at risk of further harm. GoDaddy’s customers and other consumers could not avoid this harm, and it is not outweighed by benefits to consumers or competition. Even after these compromises of its environment, GoDaddy continues to struggle to gain visibility into its hosting environment and adequately monitor it for threats.

The FTC also calls out GoDaddy for misrepresenting its compliance with the EU-U.S. Privacy Shield framework that regulates the transfer of personal data between the EU and the U.S.

The Department of Commerce (“Commerce”) and the European Commission negotiated the EU-U.S. Privacy Shield framework to provide a mechanism for companies to transfer personal data from the European Union to the United States in a manner consistent with the requirements of European Union law on data protection. The Swiss-U.S. Privacy Shield framework is identical to the EU-U.S. Privacy Shield framework.

To join the EU-U.S. and/or Swiss-U.S. Privacy Shield framework, a company must certify to the United States Department of Commerce that it complies with the Privacy Shield Principles. Participating companies must annually re-certify their compliance. The Privacy Shield frameworks expressly provide that, while decisions by organizations to “enter the Privacy Shield are entirely voluntary, effective compliance is compulsory: organizations that self-certify to the Department and publicly declare their commitment to adhere to the Principles must comply fully with the Principles.”

In particular, companies claiming to adhere to the regulation must meet certain criteria.

Companies under the jurisdiction of the FTC are eligible to join the EU-U.S. and/or Swiss-U.S. Privacy Shield framework. Both frameworks warn companies that claim to have self-certified to the Privacy Shield Principles that failure to comply or otherwise to “fully implement” the Privacy Shield Principles “is enforceable under Section 5 of the Federal Trade Commission Act.”

The Privacy Shield Principles include the following: SECURITY [Principle 4]: (a) Organizations creating, maintaining, using or disseminating personal information must take reasonable and appropriate measures to protect it from loss, misuse and unauthorized access, disclosure, alteration and destruction, taking into due account the risks involved in the processing and the nature of the personal data.

Despite GoDaddy’s obligation to provide reasonable security, the FTC goes on to highlight multiple areas in which the company failed to do so. The company is accused of failing to adequately inventory and manage its computer assets, failing to apply security patches, failing to address the risks involved in its Shared Hosting packages, failing to properly log security-related events, and failing to adequately engage in security monitoring.

GoDaddy is also accused of not implementing multi-factor authentication, relying on username/password authentication for SSH access instead of more secure authentication methods, and failing to properly segment and isolate its Shared Hosting environment.

As a result of these lapses, GoDaddy has suffered multiple breaches over the years. In addition to the damage caused by the theft of sensitive information, the FTC says GoDaddy customers, as well as others, have suffered as a result of the company’s poor security practices.

GoDaddy’s Shared Hosting customers have also spent time and effort protecting themselves from the consequences of GoDaddy’s practices, including time spent resetting account credentials, restoring compromised websites and certificates, addressing their own customers’ concerns, and other remediation in light of the security incidents described above.

GoDaddy’s Shared Hosting customers are not able to avoid the consequences of GoDaddy’s security failures. Shared Hosting customers do not know detailed information about GoDaddy’s security controls, including which security controls or tools GoDaddy does not use in its Shared Hosting environment. In addition, as described in Paragraphs 12-19, GoDaddy has represented that it provided reasonable security for the Shared Hosting environment, and that it meticulously monitored the environment for security threats.

Consumers who have interacted with GoDaddy’s customers’ websites have also not been able to avoid the consequences of GoDaddy’s security failures. In most cases, consumers who visit GoDaddy’s customers’ sites are unaware that they are interacting with a site or service hosted by GoDaddy.

The harm that GoDaddy’s security failures have caused or are likely to cause is not offset by countervailing benefits to consumers or competition. GoDaddy could have remediated its failures using well-known and low-cost technologies and techniques.

The FTC’s complaint should be a wake-up call to GoDaddy, and will hopefully lead the company to make significant changes to its security and privacy model.



from WebProNews https://ift.tt/31da5Zl

Thursday, 16 January 2025

Nvidia Slams Biden Administration’s ‘Misguided AI Diffusion’ Rule

Nvidia is taking the Biden Administration to task, saying its “AI Diffusion” rule is misguided and undermines America’s ability to compete.

The AI Diffusion rule expands the administration’s restrictions on the export of chips used to power AI models.

With this interim final rule, the Commerce Department’s Bureau of Industry and Security (BIS) revises the Export Administration Regulations’ (EAR) controls on advanced computing integrated circuits (ICs) and adds a new control on artificial intelligence (AI) model weights for certain advanced closed-weight dual-use AI models. In conjunction with the expansion of these controls, which BIS has determined are necessary to protect U.S. national security and foreign policy interests, BIS is adding new license exceptions and updating the Data Center Validated End User authorization to facilitate the export, reexport, and transfer (in-country) of advanced computing (ICs) to end users in destinations that do not raise national security or foreign policy concerns. Together, these changes will cultivate secure ecosystems for the responsible diffusion and use of AI and advanced computing ICs.

Nvidia says the rule is a stark departure from the policies put forward by the first Trump administration.

The first Trump Administration laid the foundation for America’s current strength and success in AI, fostering an environment where U.S. industry could compete and win on merit without compromising national security. As a result, mainstream AI has become an integral part of every new application, driving economic growth, promoting U.S. interests and ensuring American leadership in cutting-edge technology.

Nvidia says the Biden Administration is pushing through regulation, at the last minute, that will cripple America’s AI innovation.

In its last days in office, the Biden Administration seeks to undermine America’s leadership with a 200+ page regulatory morass, drafted in secret and without proper legislative review. This sweeping overreach would impose bureaucratic control over how America’s leading semiconductors, computers, systems and even software are designed and marketed globally. And by attempting to rig market outcomes and stifle competition — the lifeblood of innovation — the Biden Administration’s new rule threatens to squander America’s hard-won technological advantage.

While cloaked in the guise of an “anti-China” measure, these rules would do nothing to enhance U.S. security. The new rules would control technology worldwide, including technology that is already widely available in mainstream gaming PCs and consumer hardware. Rather than mitigate any threat, the new Biden rules would only weaken America’s global competitiveness, undermining the innovation that has kept the U.S. ahead.

Although the rule is not enforceable for 120 days, it is already undercutting U.S. interests. As the first Trump Administration demonstrated, America wins through innovation, competition and by sharing our technologies with the world — not by retreating behind a wall of government overreach. We look forward to a return to policies that strengthen American leadership, bolster our economy and preserve our competitive edge in AI and beyond.

Nvidia Joins a Growing List of Companies Appealing to Trump

On the eve of President-elect Donald Trump’s second term, companies are bending over backward to get into his good graces. This is especially true within the tech industry given Trump’s vocal criticism of Big Tech.

Meta announced it is ending its fact-checking program in favor of community moderation, and major banks bailed on the Net Zero Banking Alliance. Both moves were seen as efforts to appease Trump and his allies, and avoid a confrontation with the incoming administration.

While Nvidia’s commentary is certainly an appeal to the incoming Trump administration to loosen the Biden administration’s export rules, it also appears to be a thinly veiled attempt to get and stay on Trump’s good side.



from WebProNews https://ift.tt/LvFeh2V

Small Businesses Turn to Financial Tools to Boost Marketing Strategies

Image by Artem Podrez from Pexels

Running a small business these days isn’t for the faint of heart. With so many challenges popping up, businesses are getting creative, including using financial tools to fuel their marketing efforts. From artificial intelligence (AI)-driven analytics to partnerships with influencers, financial solutions are helping small businesses stretch their marketing dollars and make a bigger impact.

Why are small businesses turning to financial solutions?

Let’s face it—running a business takes money, and it’s not just about keeping the lights on. Marketing is essential to growth, but it’s expensive. Financial tools give small businesses the flexibility to fund key marketing initiatives without compromising their day-to-day operations.

Financial solutions like online loans and cash flow management apps help stabilize finances, allowing businesses a bit of wiggle room. For example, a debt consolidation loan to combine high-interest debts can free up funds for impactful marketing initiatives, such as influencer partnerships or AI-driven tools.

How is AI changing the marketing game for small businesses?

AI is no longer exclusive to tech giants; small businesses have been quick to leverage its potential. From personalized customer interactions to data-driven insights, AI tools empower them to enhance their marketing strategies and drive growth.

Real-time data insights

AI-powered tools can analyze customer behavior in real time, offering insights that help businesses create personalized campaigns. Instead of guessing what customers want, companies can use data to predict preferences and fine-tune their strategies. That means less waste and more meaningful engagement.

Automation for efficiency

When it comes to saving time, AI plays a pivotal role by automating routine tasks. These include sending personalized follow-up emails, optimizing and managing advertising budgets in real time, and scheduling social media content across multiple platforms.

By streamlining these processes, AI reduces the burden of manual work, enabling business owners and their teams to concentrate on higher-value activities. In fact, McKinsey reports that businesses using AI-driven marketing tools cut costs by about 30 percent—money they can reinvest into scaling their operations.

Why is influencer marketing a big deal for small businesses?

Influencer marketing isn’t just for the big brands anymore. Small businesses are jumping in, especially with micro-influencers with loyal, niche followings. Research indicates that influencers with smaller followings often offer better marketing ROI. Since they also tend to charge less, they are an excellent choice for small businesses aiming to maximize their marketing budgets.

Micro-influencers drive engagement for cheap

Micro-influencers, typically individuals with 10,000 to 100,000 followers, constitute a significant portion of the influencer landscape. They account for approximately 47.3 percent of all content creators, making them the largest and most influential group in the content creation space.

Moreover, while these influencers may not have millions of followers, their audiences are loyal and highly engaged. Plus, they’re more budget-friendly and authentic, which makes them perfect for small businesses looking to build trust and boost sales.

Data-backed partnerships

Modern financial and budgeting technologies empower small businesses to make informed decisions. Thanks to these financial tools, small businesses can track the return on their influencer campaigns. Business owners can see what’s working, whether engagement rates or new customer growth. This data-driven approach ensures that they get the most out of every partnership.

How do financial solutions fuel innovation?

Securing capital can mean the difference between staying stuck and making bold moves. With the right funding, you can invest in growth opportunities that push your business forward instead of treading water.

Flexible loan options

Modern lenders offer flexible loans that small businesses can tailor to their needs. For example, a line of credit can help fund a new marketing campaign during a key sales season without affecting cash flow.

Alternative funding models

If traditional loans aren’t your thing, options like crowdfunding and revenue-based financing are worth considering. These models offer upfront cash in exchange for a portion of future revenue, giving businesses the funds they need without long-term debt commitments.

What can small businesses do to handle economic ups and downs?

Even with all these tools, economic shifts can throw a wrench in the works. That’s why it’s crucial to stay flexible and plan ahead, so you can pivot quickly and keep your strategy on track.

Build resilience with smart planning

Building resilience starts with smart planning, and one of the best tools for that is budgeting software that forecasts cash flow. This type of software gives small businesses a clearer picture of their financial future by highlighting potential dips or surges ahead of time.

For example, suppose a forecast shows a slower sales season approaching. In that case, they can shift focus to more cost-effective marketing tactics. On the flip side, they can put money into bigger campaigns or partnerships without fear because they’re anticipating strong growth.

Forge strong financial partnerships

Partnering with financial institutions that truly understand the unique challenges small businesses face can be a real game-changer. It’s not just about getting money—it’s about having a partner invested in your long-term success.

Many reputable lenders have actually begun offering valuable consulting services. These can guide you in making smarter financial decisions, from managing cash flow to planning for growth. They can also help you avoid common pitfalls that often trip up small businesses, such as overextending credit or underestimating market shifts.

What’s in the works for small business marketing?

The future is looking bright for small businesses willing to embrace financial tools. With easier access to funding and advanced marketing technology, they’re leveling the playing field with bigger competitors.

Of course, having access to capital isn’t enough—it’s about knowing where to invest and how to make it count. By using financial solutions wisely, small businesses can expand their reach, connect with customers in new ways, and thrive—even in tough times.

Small businesses that make smart financial and marketing moves now will lead the pack later. With the right tools, they can grow stronger, build loyal customer bases, and set themselves up for long-term success.



from WebProNews https://ift.tt/LvCoaZJ

Texas AG Sues Allstate ‘for Unlawfully Collecting’ User Data to Drive Up Rates

Texas Attorney General Ken Paxton is suing Allstate, accusing the company of “unlawfully collecting, using, and selling” driving data on more than 45 million Americans.

Allstate is one of the biggest automotive insurance companies in the US. Unfortunately, according to a press release, Allstate paid app developers to include tracking software in popular apps, such as Life360, so the company could track users’ driving habits and raise rates accordingly.

Allstate, through its subsidiary data analytics company Arity, would pay app developers to incorporate its software to track consumers’ driving data. Allstate collected trillions of miles worth of location data from over 45 million consumers nationwide and used the data to create the “world’s largest driving behavior database.” When a consumer requested a quote or renewed their coverage, Allstate and other insurers would use that consumer’s data to justify increasing their car insurance premium.

“Our investigation revealed that Allstate and Arity paid mobile apps millions of dollars to install Allstate’s tracking software,” said Attorney General Paxton. “The personal data of millions of Americans was sold to insurance companies without their knowledge or consent in violation of the law. Texans deserve better and we will hold all these companies accountable.”

Allstate’s Data Collection

The lawsuit goes on to accuse Allstate of building the “world’s largest driving behavior database.” The lawsuit also provides important details on exactly how Allstate achieved this.

Defendants, a series of companies owned by insurance giant, Defendant The Allstate Corporation, conspired to secretly collect and sell “trillions of miles” of consumers’ “driving behavior” data from mobile devices, in-car devices, and vehicles. Defendants used the illicitly obtained data to build the “world’s largest driving behavior database,” housing the driving behavior of over 45 million Americans. Defendants created the database for two main purposes: (1) to support Allstate Defendants’ car insurance business and (2) profit from selling the driving behavior data to third parties, including other car insurance carriers (“Insurers”). Millions of Americans, including Texans, were never informed about, nor consented to, Defendants’ continuous collection and sale of their data.

Defendants covertly collected much of their “trillions of miles” of data by maintaining active connections with consumers’ mobile devices and harvesting the data directly from their phone. Defendants developed and integrated software into third-party apps so that when a consumer downloaded the third-party app onto their phone, they also unwittingly downloaded Defendants’ software. Once Defendants’ software was downloaded onto a consumer’s device, Defendants could monitor the consumer’s location and movement in real-time.

Through the software integrated into the third-party apps, Defendants directly pulled a litany of valuable data directly from consumers’ mobile phones. The data included a phone’s geolocation data, accelerometer data, magnetometer data, and gyroscopic data, which monitors details such as the phone’s altitude, longitude, latitude, bearing, GPS time, speed, and accuracy.

How Allstate Monetized the Data

The only thing more disturbing than the quantity of data Allstate collected was how they collected it and what they did with the data.

To encourage developers to adopt Defendants’ software, Defendants paid app developers millions of dollars to integrate Defendants’ software into their apps. Defendants further incentivized developer participation by creating generous bonus incentives for increasing the size of their dataset. According to Defendants, the apps integrated with their software currently allow them to “capture[] [data] every 15 seconds or less” from “40 [million] active mobile connections.”

Once collected, Defendants found several ways to monetize the ill-gotten data, including by selling access to Defendants’ driving behavior database to other Insurers and using the data for Allstate Defendants’ own insurance underwriting. If a consumer requested a car insurance quote or had to renew their coverage, Insurers would access that consumer’s driving behavior in Defendants’ database. Insurers then used that consumer’s data to justify increasing their car insurance premiums, denying them coverage, or dropping them from coverage.

Defendants marketed and sold the data obtained through third-party apps as “driving” data reflecting consumers’ driving habits, despite the data being collected from and about the location of a person’s phone. More recently, however, Defendants have begun purchasing data about vehicles’ operation directly from car manufacturers. Defendants ostensibly did this to better account for their inability to distinguish whether a person was actually driving based on the location and movements of their phone. The manufacturers that Defendants purchased data from included Toyota, Lexus, Mazda, Chrysler, Dodge, Fiat, Jeep, Maserati, and Ram. Allstate Defendants have used this data for their own insurance underwriting purposes.

Worst of all, customers had no idea they were being tracked, and no way of opting out.

Consumers did not consent to, nor were aware of Defendants’ collection and sale of immeasurable amounts of their sensitive data. Pursuant to their agreements with app developers, Defendants had varying levels of control over the privacy disclosures and consent language that app developers presented and obtained from consumers. However, Defendants never informed consumers about their extensive data collection, nor did Defendants obtain consumers’ consent to engage in such data collection. Finally, Defendants never informed consumers about the myriad of ways Defendants would analyze, use, and monetize their sensitive data.

Disturbing Allegations Underscore a Larger Issue

The allegations against Allstate are disturbing on multiple levels. The fact that a company collected an incredible amount of sensitive data from millions of customers without their knowledge is unconscionable. The fact that Allstate paid app developers to collect data from their apps and then sold the data to third parties makes it even worse.

Unfortunately, despicable as its actions may be, Allstate serves as an example of a growing trend in multiple industries: collecting and monetizing data from paying customers.

When a company provides a service for free, it is understandable for that company to profit from its customers’ data. That’s the trade-off for using a free service.

On the other hand, when a company charges its customers for the service it provides, those customers have every reason to demand the company respect their privacy, not hoover up their data and sell it to third parties. After all, the company is already charging the customer—it is being paid in full for the service it provides. As a result, the customer’s data should be off-limits unless the customer gives their explicit permission.

If the allegations prove true, Allstate’s behavior is a despicable breach of trust, one for which it will hopefully pay dearly at the hands of AG Paxton.



from WebProNews https://ift.tt/KP1enyj

Wednesday, 15 January 2025

Amazon Ending ‘Try Before You Buy’ Program

Amazon has announced it is ending its ‘Try Before You Buy’ program that allowed shoppers to try clothes, shoes, and accessories before buying them.

Try Before You Buy was first introduced in 2017, under the name Prime Wardrobe, and was an Amazon Prime-exclusive feature. As pointed out by CNBC, the service was similar to Rent the Runway, Stitch Fix, and similar services. Amazon has posted a notice at the top of the Try Before You Buy webpage informing users that the program is ending.

Prime Try Before You Buy will end on 01/31/2025. Shop Amazon Fashion to find our full selection of fashion items.

In a statement to CNBC, Amazon said AI-powered features have largely made Try Before You Buy redundant.

“Given the combination of Try Before You Buy only scaling to a limited number of items and customers increasingly using our new AI-powered features like virtual try-on, personalized size recommendations, review highlights, and improved size charts to make sure they find the right fit, we’re phasing out the Try Before You Buy option, effective January 31, 2025,” a spokesperson told CNBC.

Try Before You Buy’s demise is not unexpected, especially given CEO Andy Jassy’s efforts to streamline the company and reduce costs. Since taking over as CEO, Jassy has made clear that unpopular and unprofitable services will not be allowed to continue being a drain on the company.

As Amazon’s statement makes clear, there were issues scaling the service to a large number of customers. Combined with the capabilities AI now provides, Try Before You Buy’s fate was sealed.



from WebProNews https://ift.tt/Benz1L5