A nice story.
from Schneier on Security https://ift.tt/P8qDfpW
Did you know you can customize Google to filter out garbage? Take these steps for better search results, including adding my work at Lifehacker as a preferred source.
The nearly universal adoption of smartphones in the late 2000s changed more than how we waste time while waiting in lines. With nearly everyone carrying a high-quality camera and microphone in their pocket—and the ability to instantly broadcast anything to a potential audience of millions—our collective concept of privacy has been permanently altered. If you’re not a little concerned with how what you do in public would play on YouTube, you’re not paying attention.
As smart glasses equipped with cameras and mics edge closer to mainstream adoption, we’re facing another, subtler shift. Unlike smartphones, where it’s obvious when someone is recording, smart glasses can capture video or audio nearly invisibly—raising fresh legal, ethical, and moral concerns. Here's what you should be aware of, whether you’re currently rocking smart glasses or plan to in the future.
What the general public thinks of as “privacy” may have shifted, but the law may not have kept pace. “Current laws do not provide the protection that most people would probably expect that they should,” says David B. Hoppe, an international transactional lawyer who specializes in emerging legal issues in media and technology.
Some statutes have been written to account for new technology—prohibitions on revenge porn, for instance—but the overarching legal framework concerning privacy was developed for a pre-smartphone, pre-smart glasses world. So let's dig into it.
State and federal laws have criminalized some kinds of recordings in public, like shooting videos up people's skirts, but in general, the First Amendment provides broad protection of people's right to take photos and videos of whatever they can see. "In general, our presumption is that capturing photos, videos, or other data from public spaces is unrestricted," says Eric Goldman, a professor at Santa Clara University School of Law and Co-Director of the High Tech Law Institute.
That presumption applies to smart glasses, so if you're in a public space, you can usually record what you'd like. “As a general matter, the video function could be used in a public setting,” Hoppe says.
How you use a recording matters, though. “An issue that could arise is whether or not there's a commercial aspect to its use,” Hoppe says. “In many states there could be an obligation to have cleared the publicity rights from any individuals who are identifiable in the video.”
The meaning of "commercial,” though, can be tricky. Something like filming an advertisement would likely be considered commercial speech and have less legal protection, in terms of privacy, than something like making an art movie for your film class. Somewhere in the middle is earning money from a social media video. Monetizing doesn't automatically remove legal free speech protection, but it could shift content toward commercial speech, and local filming laws could apply to what you shoot as well. It's complicated, so if you have any doubts, talk to a lawyer.
Courts have largely held that a patron in a private business that is open to the public, like a store or a restaurant, can expect more privacy than they have while on a public sidewalk, but less than they’d have if they were somewhere really private, like their home. "It gets into expectations of privacy," explains Goldman. "A restaurant could be anywhere from family-seating, where that expectation would be unreasonable, to a private booth that has 50 feet in any direction from any other seat, which might be a more reasonable expectation of privacy."
While a person can generally legally capture images in a business that’s open to the public, it’s within the owners’ rights to prohibit filming. "Normally businesses can set rules for how their customers engage with each other," Goldman says. "The recourse would be banning you from their premises."
So if you turn on your Ray-Ban Metas in the gym, you probably won’t be arrested, but the gym could have a “no photography” policy, which it could enforce by banning you from the premises and calling the cops if you won't leave. Of course, recording in private areas of any business, like the locker room of said gym, is illegal everywhere in the U.S.
Recording sound from a pair of smart glasses could expose you to legal risks that shooting video may not. While images taken in public of anything in plain view are generally legal, audio is a different story. Just like a conversation in a restaurant, the key factor is the "reasonable expectation of privacy." Two people having a quiet conversation on a park bench likely expect a level of privacy that a guy shouting on a street corner does not.
Courts have largely agreed that recording conversations in public is protected by the Constitution, as long as everyone in the conversation knows they are being recorded and agrees to it. The opposite situation—a third party recording a private conversation without the participants’ knowledge—would often be considered “eavesdropping,” which is a crime in many places.
It gets tricky when only one party consents to a recording. "In general, there are some states that have required that any recording of a conversation between two parties requires the consent of both parties," Goldman says. "So if the glasses are being used in those conversations, without consent from the other party, that would be a violation in those states."
Here’s a breakdown of one-party consent states and all-party consent states. If you have any doubts about the legality of a recording, consult with a lawyer, or just don't hit record.
Maybe you bought a pair of smart glasses to record your life, but make no mistake: you are the one being recorded. When you click "agree" on that terms of service screen, you could be allowing a big data company to collect your GPS data, biometric data (like eye movements and health information), contact lists, messages, political views, what you see, what you say, who you talk to, and more. And it's legal because you agreed to it. Usually.
"Some [data collected by your smart glasses] is controlled by contract," Goldman says. "So Meta would disclose its privacy policies in some disclosure to the consumer, and then those might be the rules that apply. There are some places where there may be limits on the ability of Meta to access that data," Goldman says.
Bottom line: you have some protections over your personal data that aren't necessarily signed away with a click. A patchwork of federal laws provides specific protections: HIPAA protects the privacy of your medical records, FCRA protects your credit reports, and other federal laws protect financial information and children's privacy. But more meaningful consumer privacy protection comes from California state law. In the last 10 years, Cali has enacted relatively robust privacy protection laws that give Californians the right to know what personal data companies collect, the right to delete that data, and the right to opt out of their data being sold.
"But I live in Ohio," you might be saying. First, sorry about that. Secondly, we have your back anyway! Big tech companies have largely adopted California's privacy laws as their baseline for data collection. So while the amount of data being collected from your glasses isn't ideal, at least you can claw some of it back.
Check out this video of a recent concert from O.G. trip hop band Massive Attack:
The band is turning facial recognition technology on its audience, displaying audience members along with what seems to be their professions. The technology to instantly identify a stranger and scrape publicly available databases about that person already exists in smart glasses, and using it is, in theory, perfectly legal, even if the person being filmed doesn't know you’re doing it. Again, though, how you use the information you collect might not be legal.
According to Hoppe, the laws in place just weren’t written with smart glasses in mind. “The basic standard, that comes from common law times, was that if you’re in a public place, you don’t have a reasonable expectation of privacy, but at that point in time—and up until the last two decades—being in a public place meant you could be observed, but that you would simply be a memory in a human mind somewhere. It wouldn't be recorded in video format that could immediately be published to the entire world.”
Right now, privacy laws in the U.S. are largely reactive and evolve after new technology has reshaped how we live. But what might it look like if we got ahead of the curve (or at least tried harder to catch up)? Like everything, it's complicated.
Hoppe imagines one extreme: a “privacy maximalist” set of laws, where no one could be recorded without their consent, even in public. "That would make sense, right? But the challenge you then have is things like security cameras and other stationary devices that are simply recording everything. Is that really a privacy threat?" Hoppe says. "And if so, isn't it outweighed by the beneficial effects to society as a whole, in terms of crime prevention and protection of property and so forth?"
And there's that whole "freedom" thing. "The idea that there is a public sphere where we are free to capture and record and share our views about what we see, is an essential part of free speech," Goldman says. "And if privacy laws were to overly restrict that, it would take away our ability not only to express ourselves and react to the world that we see, but it would have significant power implications on the ability of people to control conversations in a way that would ultimately take power away from us as people...We cannot let the concerns about people's desire to control what people know about them override the ability of people to have organic, healthy, pro-social conversations."
If you’re living your life in a halfway ethical manner (and you’re not providing cultural commentary in concert form like Massive Attack) you probably aren’t keen to privately dox everyone on the bus, and social norms are probably more important to you than potential legal penalties. Maybe you won’t be hauled away in cuffs for recording people eating dinner on the outdoor patio of a restaurant, but you will be met with scorn from just about every diner—especially if you’re sticking a phone in their face. Smart glasses, being less obvious than iPhones, change the equation somewhat. The etiquette around their use is evolving, leaving us all in a gray area where what’s legal and what’s socially acceptable don’t always line up.
Even if they’re not encoded in law, we’ve (mostly) collectively agreed upon some norms when it comes to cell phones—don’t film others in the gym, don’t stick your phone in a stranger’s face, etc.—and we’re getting there with smart glasses, but until we arrive, it’s going to be a bit tricky.
Smart glasses make recording less obtrusive and more natural-feeling, but they also make it easier to cross lines without realizing it. So it’s best to err on the side of courtesy: respect people in public, respect private spaces, and be thoughtful about what you record wherever you are. Taking pictures of your meal and friends is cool; taking pictures of strangers is not. Getting it wrong probably won’t land you in jail, but being known as “that creep with the damn Meta glasses” might ultimately be a worse fate.
It's been a tumultuous year and a half for TikTok in the U.S. In April of 2024, President Biden signed a law forcing the app's parent company, ByteDance, to sell its majority stake to an American company, or face a ban in the U.S. ByteDance never did, and so, in January, the app went dark.
It was mostly performative, however. Then-President-elect Trump had already assured TikTok that his incoming administration would not enforce the ban, as did the outgoing President Biden. As such, once Trump was sworn in, he signed an executive order kicking the TikTok ban down the road. Trump continued to delay enforcing the ban, which, while legally dubious, allowed the app to continue operating as usual.
It seems, however, this wild ride is coming to a close. On Thursday, Trump signed an executive order that sets the stage for a U.S.-majority stake in TikTok. Nothing is set in stone, but American companies like Oracle, as well as individuals like Larry Ellison (Oracle co-founder) and Rupert Murdoch could be among the newest owners of the app. Curiously, a non-American company, the Abu Dhabi-based MGX investment fund, would also be involved. This joint venture would control a majority of the new American TikTok, while ByteDance would control less than 20%.
Trump says Chinese President Xi Jinping has okayed the deal, though no Chinese representatives were present at the order's signing. Again, nothing is for certain at this point, but we can take a look at the early details to get a sense for how a new "America-approved" TikTok would operate in the U.S.
First things first: the app itself. It's highly possible you'll need to download a new app entirely in order to keep using TikTok. This has been a focus of speculation for a couple of months now, but as the Washington Post reports, TikTok engineers have been working on a U.S. version of the app. The new app will likely appear identical to the TikTok experience you already know, and, in fact, might be accessible via a link within the current app. The Post makes the point that the harder it is for users to access the new TikTok app, the higher the chance they leave the platform entirely for alternatives like Instagram and YouTube, so TikTok engineers will no doubt be working on ways to make the transition as seamless as possible.
Then, there's the famous algorithm. This is what makes TikTok so addictive; the app's algorithm is so good, it learns what you like and shows you content to keep you scrolling for hours. Without the algorithm, TikTok very well could lose its addictive nature, and, along with it, its users. The Post reports that, at least at this time, the algorithm is staying put, and will be leased out from ByteDance by the new American TikTok venture. Plus, you should still be able to see international content going forward, not just videos posted by Americans. TikTok should, in theory, be as entertaining (and addicting) as ever.
But that's not the end of the story. According to Trump's executive order, the algorithm will be "retrained and monitored" by "trusted security partners" of the U.S. That does not necessarily inspire confidence in a neutral algorithm for Americans, especially as Trump says he would make it "100% MAGA" if he could.
Finally, there's the question of user data. This was a major concern of the U.S. government, and part of why both the Trump and Biden administrations went after the app. It wasn't without reason either, as we learned ByteDance did in fact store American user data and had used it to obtain the IP addresses of American journalists. According to the executive order, all user data from the U.S. version of TikTok will need to be stored in a cloud environment operated by an American company.
We'll need to see how both ByteDance and the Chinese government address the executive order and potential divestiture going forward, as things could change. As of now, however, it seems both nothing will change, and everything will change. You'll still be able to endlessly scroll through your feed as you do now, but you may need to use a new, yet identical version of the app to do so. You may see the same content you do now, or you might start to see some new content, suspiciously aligned with the values of the current administration. And your data will still be controlled by a faceless third-party, only now it'll be by your own country, rather than a foreign nation.
Earlier this month, we saw a new running world record—more specifically, running backwards. In heels. Christian Roberto López Rodríguez claimed the fastest 100m backwards in high heels with an impressive time of 16.55 seconds.
I may not be setting that sort of record, but I do see running backwards crop up time and time again as a trendy idea for the average runner. Sometimes called "reverse running," it's exactly what it sounds like: runners literally turning around and jogging backwards. But does running backwards genuinely help improve forward running performance, or is it simply another fitness fad destined to fade?
From a physiological perspective, backwards running fundamentally alters how your body moves and which muscles bear the workload. Physiotherapist Alex Lee explains the dramatic shift that occurs when you reverse direction: "Your quadriceps do the majority of the job of slowing your body down. Your hamstrings work differently too because they aren't pushing you forwards. This variation alleviates stress from the knee joint, specifically the ACL." He further explains how running backwards also causes your ankles to "move with greater dorsiflexion," which trains balance and body awareness, known as proprioception.
As any runner can attest, going easy on the knees is a major draw. Lee notes additional advantages when he trains athletes, explaining how he incorporates running backwards to "shield their knees, develop leg strength, and enhance coordination."
While the biomechanical benefits are real, running coach Will Baldwin offers a more measured approach to backwards running's place in training programs. "I think the biggest benefit from running backwards is it helps you engage some of your posterior chain and muscles that aren't typically recruited in forward running, like your glutes, some hamstring, and it helps with your pushback a little bit," Baldwin explains.
However, he's quick to temper expectations about performance gains: "I don't think it makes runners faster. It's probably a good supplemental tool. I don't even think it's a necessity in training, but it definitely can wake some muscles up and can be a fun, different type of coordination skill to work on that's still similar to running."
Baldwin's perspective highlights a crucial consideration in training philosophy—the principle of specificity. "The law of specificity applies here. If we want to get better at a skill, we need to practice that skill in the specific way we want to compete. We've got to be careful with how much time we waste, especially for busy people. That could be time better spent doing core work or some specific strength training."
For most recreational runners juggling work, family, and training, Baldwin suggests backwards running falls into the "nice to have" rather than "must have" category: "You'd really have to be someone with a lot of extra time to experiment and play around with things like this."
Running backwards is awkward at best, and genuinely risky at worst. You could fall, and it's easy to twist an ankle or pull something because you can't see where you're landing. If you'd like to experiment with backwards running, start conservatively. Baldwin suggests beginning with walking: "If someone wanted to try it, I'd start with walking backwards. Especially uphill, on a treadmill or outside, it can really engage, work, and stretch certain muscles. It can be a fun skill to play with, but I'd definitely start with walking before running backwards."
This gradual approach allows your body to adapt to the different movement patterns while minimizing injury risk. Treadmills provide an ideal controlled environment for initial backwards movement practice, eliminating the hazard of unseen obstacles.
The bigger concerns, according to Baldwin, relate to training efficiency: "Wasted time and lack of specificity are probably the bigger ones, but again, not major." For time-strapped runners, the opportunity cost of backwards running sessions might outweigh the supplemental benefits.
Backwards running offers legitimate benefits—improved proprioception, reduced knee stress, enhanced muscle activation patterns, and increased coordination.
However, for the average recreational runner seeking to improve their forward running times, backwards running is more of a fun bonus activity, rather than a true game-changer. The time investment might be better allocated to proven training methods like tempo runs, interval training, strength work, or simply building up more forward-running mileage.
Backwards running sits comfortably in the category of "helpful but not essential" training methods. It's not the revolutionary breakthrough some social media posts might suggest, but it's also not entirely without merit. Like many fitness trends, the truth lies somewhere between.
So, for runners with specific needs—such as rehabilitation from knee injuries, athletes requiring enhanced proprioception, or those simply seeking variety in their training routine—backwards running can serve a valuable purpose. For everyone else, it remains an interesting option worth considering if time and safety conditions permit, but not a priority that should displace more fundamental aspects of training.
We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.
A lot of outdoor speakers will float on water or be rated IP68 waterproof, but the only one that will also float facing up and be resistant to salt water is the Soundcore Boom 3i, making it the best portable speaker for water activities. It also has plenty of useful features that make it a great outdoor speaker. Right now, you can get it for $99.99 (originally $139.99).
I've been using Soundcore speakers and headphones for a while, and I've been impressed by their materials, sound quality, and well-designed companion app, especially when taking their price into account. The Soundcore Boom 3i is no different, considering you need to spend around $180 these days to break into portable speakers with similar features.
Being corrosion-proof means you can take it to the beach without worrying that the salt will mess up the speaker. In case it does, there's a “Buzz Clean” feature that plays low-frequency sounds to shake and clear debris out of the drivers. Floating facing up is a great addition, since the drivers won't end up underwater. It's small enough to carry around and comes with a shoulder strap. The 50W power output is great for its size, and can be loud enough to host a gathering of around eight people with no issues.
The app has a 9-band custom EQ so you can tweak the sound to your liking. There's also a bass boost button along with volume, play, pause, and other useful media controls on top of the speaker. The battery will last about 16 hours, and you can listen to music while charging it.
"Antifa" is in Donald Trump's sights. Following the assassination of conservative activist Charlie Kirk, Trump has blamed "radical leftists" for pushing political violence against those on the right—even as the assassin's motives remain under active investigation. As such, the president is going after Antifa, calling it a "domestic terrorist organization," despite the fact that Antifa is not actually an organization, and that the U.S. has no domestic terrorist designation.
When the president of the United States inflames tensions in this way, it's no surprise our discourse gets inflamed as well. Here's one such example: You may have seen posts on social media this week claiming that Threads is now attaching warnings to posts from users suspected of being members of Antifa, or posts that appear to carry the label themselves. One viral post discussing the subject comes from the account "Balleralert," which shared the following screenshot on Wednesday:
The label, affixed to an innocuous post by the account benballer, reads: "This user is suspected of being part of a terrorist organization called Antifa. Please report any suspicious behavior." Taken at face value, one might assume Threads, owned by Meta, is trying to get on the Trump administration's good side by identifying seemingly "leftist" accounts as members of Antifa.
The thing is, the label is not real: A Meta spokesperson confirmed this to me via email, saying that the label is just a meme, and not something created by Meta. That's not to say the label hasn't appeared in any Threads posts. It absolutely has, and you may have seen it. But if some Threads posts appear to have the label attached, it's because it's actually part of the original text of the post, formatted to look like an addition by Meta.
Some users appear to be adding the text to their posts in jest, such as in this example, which puts the label in context with a popular meme from the film Inglourious Basterds, or this post, which places the label on an innocent declaration that pumpkin pie is good. These are solid jokes, but they're also fueling confusion: Some commenters are concerned about the label, while others are sharing their own versions of the meme, which are more obviously jokes than the original "Antifa label."
Official labels are increasingly standard across social media platforms, which is likely why a handful of users are falling for this meme, especially given recent controversies over the relationships between the U.S. government, the media, and tech companies. If you're used to seeing community notes or warnings from companies like Meta, it's easy to assume this Antifa label is legitimate. Learning that it isn't should serve as a good reminder that the internet is a treasure trove of disinformation. You should never take a random post on Instagram as the unvarnished truth, especially if that post seems particularly controversial, or particularly aligned to your own worldview.
Before you believe something you see on your feeds, take a moment to think it through. Do some research to see if any trusted sources have confirmed the claims. If they haven't, remain skeptical, and refrain from spreading it around.
Cybercriminals are increasingly using AI-powered tools and (malicious) large language models to create convincing, error-free emails, deepfakes, online personas, lookalike/fake websites, and malware.
There’s even been a documented instance of an attacker using the agentic AI coding assistant Claude Code (along with Kali Linux) for nearly all steps of a data extortion operation.
More recently, Microsoft Threat Intelligence spotted and blocked an attack campaign delivering an LLM-obfuscated malicious attachment.
The attackers used a compromised small business email account to send messages, which looked like a notification to view a shared file.
Users who downloaded and opened the file – ostensibly a PDF, but actually an SVG (Scalable Vector Graphics) file – were redirected to a “CAPTCHA prompt” web page and then likely to a page created to harvest credentials.
What sets this attack apart is how the SVG file attempted to hide its malicious behavior: Rather than using encryption for obfuscating the content, the attackers disguised the payload with business language.
SVG files are beloved by attackers because they are text-based and allow them to embed JavaScript and other dynamic content into the file. This file type also allows them to include “invisible” elements, encoded attributes, and to delay script execution, which help them avoid static analysis and sandboxing.
For this campaign, the attackers padded the file with elements for a supposed Business Performance Dashboard, complete with chart bars and month labels. These elements were invisible to the user, as the attackers set their opacity to zero and their fill to transparent.
“Within the file, the attackers encoded the malicious payload using a long sequence of business-related terms. Words like revenue, operations, risk, or shares were concatenated into a hidden data-analytics attribute of an invisible element within the SVG. The terms in this attribute were later used by embedded JavaScript, which systematically processed the business-related words through several transformation steps,” Microsoft’s threat analysts noted.
“Instead of directly including malicious code, the attackers encoded the payload by mapping pairs or sequences of these business terms to specific characters or instructions. As the script runs, it decodes the sequence, reconstructing the hidden functionality from what appears to be harmless business metadata.”
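Microsoft hasn't published the malware's actual codebook, but the general scheme it describes (mapping ordered pairs of business terms to individual payload characters) can be sketched in a few lines of Python. Everything below, the word list, the pairing scheme, and the sample payload, is invented purely for illustration:

```python
import itertools

# Hypothetical business-term vocabulary; the real attackers' word list
# and mapping are not public.
TERMS = ["revenue", "operations", "risk", "shares", "growth", "margin",
         "forecast", "audit", "equity", "capital"]

def build_codebook(alphabet):
    """Assign one ordered pair of business terms to each payload character."""
    pairs = itertools.permutations(TERMS, 2)  # 90 distinct ordered pairs
    return dict(zip(alphabet, pairs))

def encode(payload, codebook):
    """Hide a payload as an innocuous-looking run of business words."""
    words = []
    for ch in payload:
        words.extend(codebook[ch])
    return " ".join(words)

def decode(blob, codebook):
    """What the embedded script would do: map word pairs back to characters."""
    reverse = {pair: ch for ch, pair in codebook.items()}
    words = blob.split()
    return "".join(reverse[pair] for pair in zip(words[::2], words[1::2]))

alphabet = "abcdefghijklmnopqrstuvwxyz:/."
book = build_codebook(alphabet)
hidden = encode("evil.example/track", book)  # reads like business metadata
assert decode(hidden, book) == "evil.example/track"
```

The point of the technique is that `hidden` contains nothing but plausible dashboard vocabulary, so a static scanner looking for encoded blobs or suspicious strings sees only business jargon until the script actually runs.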
Business-related terms converted into malicious code (Source: Microsoft)
The final payload can fingerprint the browser/system and, if the “conditions” are right, redirect the potential victim to a phishing page.
Microsoft has used Security Copilot, its AI cybersecurity assistant, to analyze the SVG file and decide whether it was written by a human or AI.
The tool flagged several artifacts that strongly suggested LLM generation, such as overly descriptive variable and function names, an over-engineered code structure, boilerplate comments, unnecessary code elements, and formulaic obfuscation techniques typical of LLM-generated code.
Because of these traits, the threat analysts concluded it was highly likely that the code was synthetic and likely generated by an LLM or a tool using one.
Luckily, blocking phishing attempts involves more than simply deciding if a payload is harmful. Nevertheless, AI-generated obfuscation often introduces synthetic artifacts, and these can become new detection signals, the analysts noted.
It’s therefore possible that the use of LLMs could occasionally make attacks easier to detect, not harder.
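As a toy illustration of how stylistic artifacts can become signals (this is not how Security Copilot works; it's just a hypothetical heuristic based on the traits the analysts listed, such as overly descriptive identifiers and boilerplate comments):

```python
import re

def llm_style_score(source: str) -> float:
    """Toy heuristic: fraction of very long identifiers plus comment density.
    Higher scores loosely suggest over-documented, machine-generated code."""
    identifiers = re.findall(r"[A-Za-z_][A-Za-z0-9_]{2,}", source)
    long_ids = [i for i in identifiers if len(i) >= 20]
    lines = [l for l in source.splitlines() if l.strip()]
    comments = [l for l in lines if l.strip().startswith(("//", "#", "/*"))]
    id_ratio = len(long_ids) / max(len(identifiers), 1)
    comment_ratio = len(comments) / max(len(lines), 1)
    return id_ratio + comment_ratio

terse = "var a=1;var b=2;"
verbose = (
    "// Initialize the primary revenue accumulator variable\n"
    "var primaryRevenueAccumulatorVariable = 1;\n"
    "// Compute the quarterly operations performance metric\n"
    "var quarterlyOperationsPerformanceMetric = 2;\n"
)
assert llm_style_score(verbose) > llm_style_score(terse)
```

A real detection pipeline would combine many such weak signals with behavioral analysis; the sketch only shows why "too tidy" code can itself be a tell.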

