Crypto-Gram
July 15, 2022
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram's web page.
Read this issue on the web
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
** *** ***** ******* *********** *************
In this issue:
If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.
M1 Chip Vulnerability
Attacking the Performance of Machine Learning Systems
Tracking People via Bluetooth on Their Phones
Hertzbleed: A New Side-Channel Attack
Hidden Anti-Cryptography Provisions in Internet Anti-Trust Bills
Symbiote Backdoor in Linux
On the Subversion of NIST by the NSA
On the Dangers of Cryptocurrencies and the Uselessness of Blockchain
2022 Workshop on Economics and Information Security (WEIS)
When Security Locks You Out of Everything
Ecuador’s Attempt to Resettle Edward Snowden
ZuoRAT Malware Is Targeting Routers
Analyzing the Swiss E-Voting System
NIST Announces First Four Quantum-Resistant Cryptographic Algorithms
Ubiquitous Surveillance by ICE
Apple’s Lockdown Mode
Nigerian Prison Break
Security Vulnerabilities in Honda’s Keyless Entry System
Post-Roe Privacy
New Browser De-anonymization Technique
Upcoming Speaking Engagements
** *** ***** ******* *********** *************
M1 Chip Vulnerability
[2022.06.15] This is a new vulnerability against Apple’s M1 chip. Researchers say that it is unpatchable.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, however, have created a novel hardware attack, which combines memory corruption and speculative execution attacks to sidestep the security feature. The attack shows that pointer authentication can be defeated without leaving a trace, and as it utilizes a hardware mechanism, no software patch can fix it.
The attack, appropriately called “Pacman,” works by “guessing” a pointer authentication code (PAC), a cryptographic signature that confirms that an app hasn’t been maliciously altered. This is done using speculative execution -- a technique used by modern computer processors to speed up performance by speculatively guessing various lines of computation -- to leak PAC verification results, while a hardware side-channel reveals whether or not the guess was correct.
What’s more, since there are only so many possible values for the PAC, the researchers found that it’s possible to try them all to find the right one.
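To make the brute-force point concrete, here is a toy sketch in Python. The pac_guess_is_correct() oracle is a hypothetical stand-in for the Pacman hardware side channel, which reveals whether a speculatively tested PAC guess validates without triggering a crash; the 16-bit PAC width is an illustrative assumption, not a detail from the paper.

    # Illustrative only: exhaustively guessing a small pointer-authentication-code space.
    # pac_guess_is_correct() stands in for the Pacman hardware side channel, which
    # reveals whether a speculatively tested PAC guess validates without crashing.

    PAC_BITS = 16  # assumption for illustration; the real width depends on the pointer layout

    def pac_guess_is_correct(guess):
        # Hypothetical oracle; in the real attack this is a hardware side channel.
        raise NotImplementedError

    def recover_pac():
        # With only 2**PAC_BITS possibilities, trying them all is entirely feasible.
        for guess in range(2 ** PAC_BITS):
            if pac_guess_is_correct(guess):
                return guess
        return None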
It’s not obvious how to exploit this vulnerability in the wild, so I’m unsure how important this is. I also don’t know whether it applies to Apple’s new M2 chip.
Research paper. Another news article.
** *** ***** ******* *********** *************
Attacking the Performance of Machine Learning Systems
[2022.06.16] Interesting research: “Sponge Examples: Energy-Latency Attacks on Neural Networks”:
Abstract: The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers’ focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully-crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning (ML) systems towards their worst-case performance. Sponge examples are, to our knowledge, the first denial-of-service attack against the ML components of such systems. We mount two variants of our sponge attack on a wide range of state-of-the-art neural network models, and find that language models are surprisingly vulnerable. Sponge examples frequently increase both latency and energy consumption of these models by a factor of 30×. Extensive experiments show that our new attack is effective across different hardware platforms (CPU, GPU and an ASIC simulator) on a wide range of different language tasks. On vision tasks, we show that sponge examples can be produced and a latency degradation observed, but the effect is less pronounced. To demonstrate the effectiveness of sponge examples in the real world, we mount an attack against Microsoft Azure’s translator and show an increase of response time from 1ms to 6s (6000×). We conclude by proposing a defense strategy: shifting the analysis of energy consumption in hardware from an average-case to a worst-case perspective.
Attackers were able to degrade the performance so much, and force the system to waste so many cycles, that some hardware would shut down due to overheating. Definitely a “novel threat vector.”
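The core idea is simple enough to caricature in a few lines: search for inputs that maximize the model’s measured latency (energy consumption roughly tracks it). The paper’s real attacks use genetic algorithms and white-box gradient methods; the Python sketch below is only a black-box toy, and the model callable is a placeholder of mine, not anything from the paper.

    import random
    import time

    def measure_latency(model, text, trials=5):
        # Median wall-clock time of one inference call; energy consumption roughly tracks this.
        times = []
        for _ in range(trials):
            start = time.perf_counter()
            model(text)
            times.append(time.perf_counter() - start)
        times.sort()
        return times[len(times) // 2]

    def find_sponge_example(model, seed, alphabet, steps=200):
        # Toy hill climb: mutate one character at a time, keep mutations that slow the model down.
        best, best_latency = seed, measure_latency(model, seed)
        for _ in range(steps):
            candidate = list(best)
            candidate[random.randrange(len(candidate))] = random.choice(alphabet)
            candidate = "".join(candidate)
            latency = measure_latency(model, candidate)
            if latency > best_latency:
                best, best_latency = candidate, latency
        return best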
** *** ***** ******* *********** *************
Tracking People via Bluetooth on Their Phones
[2022.06.17] We’ve always known that phones -- and the people carrying them -- can be uniquely identified from their Bluetooth signatures, and that we need security techniques to prevent that. This new research shows that that’s not enough.
Computer scientists at the University of California San Diego proved in a study published May 24 that minute imperfections in phones caused during manufacturing create a unique Bluetooth beacon, one that establishes a digital signature or fingerprint distinct from any other device. Though phones’ Bluetooth uses cryptographic technology that limits trackability, using a radio receiver, these distortions in the Bluetooth signal can be discerned to track individual devices.
[...]
The study’s scientists conducted tests to show whether multiple phones being in one place could disrupt their ability to track individual signals. Results in an initial experiment showed they managed to discern individual signals for 40% of 162 devices in public. Another, scaled-up experiment showed they could discern 47% of 647 devices in a public hallway across two days.
The tracking range depends on device and the environment, and it could be several hundred feet, but in a crowded location it might only be 10 or so feet. Scientists were able to follow a volunteer’s signal as they went to and from their house. Certain environmental factors can disrupt a Bluetooth signal, including changes in environment temperature, and some devices send signals with more power and range than others.
One might say “well, I’ll just keep Bluetooth turned off when not in use,” but the researchers said they found that some devices, especially iPhones, don’t actually turn off Bluetooth unless a user goes directly into settings to turn off the signal. Most people might not even realize their Bluetooth is being constantly emitted by many smart devices.
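The “minute imperfections” at issue are physical-layer artifacts of each radio chip. One well-known example of such an artifact is carrier frequency offset (CFO), a small per-device deviation from the nominal transmit frequency. As a rough illustration only -- my own, not the researchers’ method, and assuming you already have complex baseband samples from a software-defined radio -- a crude CFO estimate and a toy matcher look like this:

    import numpy as np

    def estimate_cfo_hz(iq, sample_rate_hz):
        # Average phase advance per sample between consecutive complex baseband samples,
        # converted to Hz. Real fingerprinting separates out the data modulation and
        # models other imperfections too; this is only a crude first-order estimate.
        phase_steps = np.angle(iq[1:] * np.conj(iq[:-1]))
        return float(np.mean(phase_steps)) * sample_rate_hz / (2 * np.pi)

    def same_device(cfo_a_hz, cfo_b_hz, tolerance_hz=500.0):
        # Toy matcher: two captures whose CFO estimates agree within tolerance
        # are treated as candidates for the same physical chip.
        return abs(cfo_a_hz - cfo_b_hz) < tolerance_hz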
** *** ***** ******* *********** *************
Hertzbleed: A New Side-Channel Attack
[2022.06.20] Hertzbleed is a new side-channel attack that works against a variety of microprocessors. Deducing cryptographic keys by analyzing power consumption has long been an attack, but it’s not generally viable because measuring power consumption is often hard. This new attack measures power consumption by measuring time, making it easier to exploit.
The team discovered that dynamic voltage and frequency scaling (DVFS) -- a power and thermal management feature added to every modern CPU -- allows attackers to deduce the changes in power consumption by monitoring the time it takes for a server to respond to specific carefully made queries. The discovery greatly reduces what’s required. With an understanding of how the DVFS feature works, power side-channel attacks become much simpler timing attacks that can be done remotely.
The researchers have dubbed their attack Hertzbleed because it uses the insights into DVFS to expose -- or bleed out -- data that’s expected to remain private.
[...]
The researchers have already shown how the exploit technique they developed can be used to extract an encryption key from a server running SIKE, a cryptographic algorithm used to establish a secret key between two parties over an otherwise insecure communications channel.
The researchers said they successfully reproduced their attack on Intel CPUs from the 8th to the 11th generation of the Core microarchitecture. They also claimed that the technique would work on Intel Xeon CPUs and verified that AMD Ryzen processors are vulnerable and enabled the same SIKE attack used against Intel chips. The researchers believe chips from other manufacturers may also be affected.
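The practical upshot of “power becomes timing” can be sketched very simply: time many responses to two crafted inputs and compare the distributions; if one input drives the CPU into a lower DVFS frequency, its median response time is measurably larger. The URL and payloads below are placeholders of mine, not part of the published attack.

    import statistics
    import time
    import urllib.request

    def median_response_time(url, payload, trials=200):
        # Median wall-clock time for the server to answer one crafted query.
        times = []
        for _ in range(trials):
            start = time.perf_counter()
            urllib.request.urlopen(url, data=payload).read()
            times.append(time.perf_counter() - start)
        return statistics.median(times)

    # Hypothetical usage: if processing payload_a keeps the CPU at a lower frequency
    # than payload_b (because of secret-dependent computation), t_a exceeds t_b by a
    # measurable margin, and that difference leaks information remotely.
    # t_a = median_response_time("https://victim.example/decap", payload_a)
    # t_b = median_response_time("https://victim.example/decap", payload_b)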
** *** ***** ******* *********** *************
Hidden Anti-Cryptography Provisions in Internet Anti-Trust Bills
[2022.06.21] Two bills attempting to reduce the power of Internet monopolies are currently being debated in Congress: S. 2992, the American Innovation and Choice Online Act; and S. 2710, the Open App Markets Act. Reducing the power of tech monopolies would do more to “fix” the Internet than any other single action, and I am generally in favor of them both. (The Center for American Progress wrote a good summary and evaluation of them. I have written in support of the bill that would force Google and Apple to give up their monopolies on their phone app stores.)
There is a significant problem, though. Both bills have provisions that could be used to break end-to-end encryption.
Let’s start with S. 2992. Sec. 3(c)(7)(A)(iii) would allow a company to deny access to apps installed by users, where those app makers “have been identified [by the Federal Government] as national security, intelligence, or law enforcement risks.” That language is far too broad. It would allow Apple to deny access to an encryption service provider that provides encrypted backups to the cloud (which Apple does not currently offer). All Apple would need to do is point to any number of FBI materials decrying the security risks of “warrant-proof encryption.”
Sec. 3(c)(7)(A)(vi) states that there shall be no liability for a platform “solely” because it offers “end-to-end encryption.” This language is too narrow. The word “solely” suggests that offering end-to-end encryption could be a factor in determining liability, provided that it is not the only reason. This is very similar to one of the problems with the encryption carve-out in the EARN IT Act. The section also doesn’t mention any other important privacy-protective features and policies, which also shouldn’t be the basis for creating liability for a covered platform under Sec. 3(a).
In Sec. 2(a)(2), the definition of business user excludes any person who “is a clear national security risk.” This term is undefined, and as such far too broad. It can easily be interpreted to cover any company that offers an end-to-end encrypted alternative, or a service offered in a country whose privacy laws forbid disclosing data in response to US court-ordered surveillance. Again, the FBI’s repeated statements about end-to-end encryption could serve as support.
Finally, under Sec. 3(b)(2)(B), platforms have an affirmative defense for conduct that would otherwise violate the Act if they do so in order to “protect safety, user privacy, the security of nonpublic data, or the security of the covered platform.” This language is too vague, and could be used to deny users the ability to use competing services that offer better security/privacy than the incumbent platform -- particularly where the platform offers subpar security in the name of “public safety.” For example, today Apple only offers unencrypted iCloud backups, which it can then turn over to governments that claim this is necessary for “public safety.” Apple can raise this defense to justify its blocking third-party services from offering competing, end-to-end encrypted backups of iMessage and other sensitive data stored on an iPhone.
S. 2710 has similar problems. Sec. 7(6)(B) contains language specifying that the bill does not “require a covered company to interoperate or share data with persons or business users that...have been identified by the Federal Government as national security, intelligence, or law enforcement risks.” This would mean that Apple could ignore the prohibition against private APIs, and deny access to otherwise private APIs, for developers of encryption products that have been publicly identified by the FBI. That is, end-to-end encryption products.
I want those bills to pass, but I want those provisions cleared up so we don’t lose strong end-to-end encryption in our attempt to rein in the tech monopolies.
EDITED TO ADD (6/23): A few DC insiders have responded to me about this post. Their basic point is this: “Your threat model is wrong. The big tech companies can already break end-to-end encryption if they want. They don’t need any help, and this bill doesn’t give the FBI any new leverage they don’t already have. This bill doesn’t make anything any worse than it is today.” That’s a reasonable response. These bills are definitely a net positive for humanity.
** *** ***** ******* *********** *************
Symbiote Backdoor in Linux
[2022.06.22] Interesting:
What makes Symbiote different from other Linux malware that we usually come across, is that it needs to infect other running processes to inflict damage on infected machines. Instead of being a standalone executable file that is run to infect a machine, it is a shared object (SO) library that is loaded into all running processes using LD_PRELOAD (T1574.006), and parasitically infects the machine. Once it has infected all the running processes, it provides the threat actor with rootkit functionality, the ability to harvest credentials, and remote access capability.
News article:
Researchers have unearthed a discovery that doesn’t occur all that often in the realm of malware: a mature, never-before-seen Linux backdoor that uses novel evasion techniques to conceal its presence on infected servers, in some cases even with a forensic investigation.
No public attribution yet.
So far, there’s no evidence of infections in the wild, only malware samples found online. It’s unlikely this malware is widely active at the moment, but with stealth this robust, how can we be sure?
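Since Symbiote rides into every process via LD_PRELOAD, the obvious first thing a defender might script is a check for preload entries and oddly located shared objects. A minimal Linux sketch follows -- with the big caveat that, per the reporting above, this malware is built to hide from exactly this kind of on-host inspection, so a clean result proves nothing.

    import os
    from pathlib import Path

    def preload_indicators():
        # Coarse, easily evaded checks for LD_PRELOAD-style injection on Linux.
        findings = []

        env_preload = os.environ.get("LD_PRELOAD")
        if env_preload:
            findings.append("LD_PRELOAD set in environment: " + env_preload)

        preload_file = Path("/etc/ld.so.preload")
        if preload_file.exists() and preload_file.read_text().strip():
            findings.append("/etc/ld.so.preload contains: " + preload_file.read_text().strip())

        # Shared objects mapped into this process from outside the usual library paths.
        seen = set()
        for line in Path("/proc/self/maps").read_text().splitlines():
            parts = line.split()
            if len(parts) >= 6 and ".so" in parts[-1]:
                path = parts[-1]
                if path not in seen and not path.startswith(("/lib", "/usr/lib", "/usr/local/lib")):
                    seen.add(path)
                    findings.append("unusual mapping: " + path)
        return findings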
** *** ***** ******* *********** *************
On the Subversion of NIST by the NSA
[2022.06.23] Nadiya Kostyuk and Susan Landau wrote an interesting paper: “Dueling Over DUAL_EC_DRBG: The Consequences of Corrupting a Cryptographic Standardization Process”:
Abstract: In recent decades, the U.S. National Institute of Standards and Technology (NIST), which develops cryptographic standards for non-national security agencies of the U.S. government, has emerged as the de facto international source for cryptographic standards. But in 2013, Edward Snowden disclosed that the National Security Agency had subverted the integrity of a NIST cryptographic standard -- the Dual_EC_DRBG -- enabling easy decryption of supposedly secured communications. This discovery reinforced the desire of some public and private entities to develop their own cryptographic standards instead of relying on a U.S. government process. Yet, a decade later, no credible alternative to NIST has emerged. NIST remains the only viable candidate for effectively developing internationally trusted cryptography standards.
Cryptographic algorithms are essential to security yet are hard to understand and evaluate. These technologies provide crucial security for communications protocols. Yet the protocols transit international borders; they are used by countries that do not necessarily trust each other. In particular, these nations do not necessarily trust the developer of the cryptographic standard.
Seeking to understand how NIST, a U.S. government agency, was able to remain a purveyor of cryptographic algorithms despite the Dual_EC_DRBG problem, we examine the Dual_EC_DRBG situation, NIST’s response, and why a non-regulatory, non-national security U.S. agency remains a successful international supplier of strong cryptographic solutions.
** *** ***** ******* *********** *************
On the Dangers of Cryptocurrencies and the Uselessness of Blockchain
[2022.06.24] Earlier this month, I and others wrote a letter to Congress, basically saying that cryptocurrencies are a complete and total disaster, and urging them to regulate the space. Nothing in that letter is out of the ordinary; it is in line with what I wrote about blockchain in 2019. In response, Matthew Green has written -- not really a rebuttal -- but “a general response to some of the more common spurious objections...people make to public blockchain systems.” In it, he makes several broad points:
Yes, current proof-of-work blockchains like bitcoin are terrible for the environment. But there are other modes like proof-of-stake that are not.
Yes, a blockchain is an immutable ledger making it impossible to undo specific transactions. But that doesn’t mean there can’t be some governance system on top of the blockchain that enables reversals.
Yes, bitcoin doesn’t scale and the fees are too high. But that’s nothing inherent in blockchain technology -- that’s just a bunch of bad design choices bitcoin made.
Blockchain systems can have a little or a lot of privacy, depending on how they are designed and implemented.
There’s nothing on that list that I disagree with. (We can argue about whether proof-of-stake is actually an improvement. I am skeptical of systems that enshrine a “they who have the gold make the rules” system of governance. And to the extent any of those scaling solutions work, they undo the decentralization blockchain claims to have.) But I also think that these defenses largely miss the point. To me, the problem isn’t that blockchain systems can be made slightly less awful than they are today. The problem is that they don’t do anything their proponents claim they do. In some very important ways, they’re not secure. They don’t replace trust with code; in fact, in many ways they are far less trustworthy than non-blockchain systems. They’re not decentralized, and their inevitable centralization is harmful because it’s largely emergent and ill-defined. They still have trusted intermediaries, often with more power and less oversight than non-blockchain systems. They still require governance. They still require regulation. (These things are what I wrote about here.) The problem with blockchain is that it’s not an improvement to any system -- and often makes things worse.
In our letter, we write: “By its very design, blockchain technology is poorly suited for just about every purpose currently touted as a present or potential source of public benefit. From its inception, this technology has been a solution in search of a problem and has now latched onto concepts such as financial inclusion and data transparency to justify its existence, despite far better solutions to these issues already in use. Despite more than thirteen years of development, it has severe limitations and design flaws that preclude almost all applications that deal with public customer data and regulated financial transactions and are not an improvement on existing non-blockchain solutions.”
Green responds: “‘Public blockchain’ technology enables many stupid things: today’s cryptocurrency schemes can be venal, corrupt, overpromised. But the core technology is absolutely not useless. In fact, I think there are some pretty exciting things happening in the field, even if most of them are further away from reality than their boosters would admit.” I have yet to see one. More specifically, I can’t find a blockchain application whose value has anything to do with the blockchain part, that wouldn’t be made safer, more secure, more reliable, and just plain better by removing the blockchain part. I postulate that no one has ever said “Here is a problem that I have. Oh look, blockchain is a good solution.” In every case, the order has been: “I have a blockchain. Oh look, there is a problem I can apply it to.” And in no cases does it actually help.
Someone, please show me an application where blockchain is essential. That is, a problem that could not have been solved without blockchain that can now be solved with it. (And “ransomware couldn’t exist because criminals are blocked from using the conventional financial networks, and cash payments aren’t feasible” does not count.)
For example, Green complains that “credit card merchant fees are similar, or have actually risen in the United States since the 1990s.” This is true, but has little to do with technological inefficiencies or existing trust relationships in the industry. It’s because pretty much everyone who can and is paying attention gets 1% back on their purchases: in cash, frequent flier miles, or other affinity points. Green is right about how unfair this is. It’s a regressive subsidy, “since these fees are baked into the cost of most retail goods and thus fall heavily on the working poor (who pay them even if they use cash).” But that has nothing to do with the lack of blockchain, and solving it isn’t helped by adding a blockchain. It’s a regulatory problem; with a few exceptions, credit card companies have successfully pressured merchants into charging the same prices, whether someone pays in cash or with a credit card. Peer-to-peer payment systems like PayPal, Venmo, MPesa, and AliPay all get around those high transaction fees, and none of them use blockchain.
This is my basic argument: blockchain does nothing to solve any existing problem with financial (or other) systems. Those problems are inherently economic and political, and have nothing to do with technology. And, more importantly, technology can’t solve economic and political problems. Which is good, because adding blockchain causes a whole slew of new problems and makes all of these systems much, much worse.
Green writes: “I have no problem with the idea of legislators (intelligently) passing laws to regulate cryptocurrency. Indeed, given the level of insanity and the number of outright scams that are happening in this area, it’s pretty obvious that our current regulatory framework is not up to the task.” But when you remove the insanity and the scams, what’s left?
EDITED TO ADD: Nicholas Weaver is also adamant about this. David Rosenthal is good, too.
EDITED TO ADD (7/8/2022): This post has been translated into German.
** *** ***** ******* *********** *************
2022 Workshop on Economics and Information Security (WEIS)
[2022.06.27] I did not attend WEIS this year, but Ross Anderson was there and liveblogged all the talks.
** *** ***** ******* *********** *************
When Security Locks You Out of Everything
[2022.06.28] Thought experiment story of someone who lost everything in a house fire, and now can’t log into anything:
But to get into my cloud, I need my password and 2FA. And even if I could convince the cloud provider to bypass that and let me in, the backup is secured with a password which is stored in -- you guessed it -- my Password Manager.
I am in cyclic dependency hell. To get my passwords, I need my 2FA. To get my 2FA, I need my passwords.
It’s a one-in-a-million story, and one that’s hard to take into account in system design.
This is where we reach the limits of the “Code Is Law” movement.
In the boring analogue world -- I am pretty sure that I’d be able to convince a human that I am who I say I am. And, thus, get access to my accounts. I may have to go to court to force a company to give me access back, but it is possible.
But when things are secured by an unassailable algorithm -- I am out of luck. No amount of pleading will let me in without the correct credentials. The company which provides my password manager simply doesn’t have access to my passwords. There is no-one to convince. Code is law.
Of course, if I can wangle my way past security, an evil-doer could also do so.
So which is the bigger risk?
An impersonator who convinces a service provider that they are me?
A malicious insider who works for a service provider?
Me permanently losing access to all of my identifiers?
I don’t know the answer to that.
Those risks are in the order of most common to least common, but that doesn’t necessarily mean that they are in risk order. They probably are, but then we’re left with no good way to handle someone who has lost all their digital credentials -- computer, phone, backup, hardware token, wallet with ID cards -- in a catastrophic house fire.
I want to remind readers that this isn’t a true story. It didn’t actually happen. It’s a thought experiment.
** *** ***** ******* *********** *************
Ecuador’s Attempt to Resettle Edward Snowden
[2022.06.29] Someone hacked the Ecuadorian embassy in Moscow and found a document related to Ecuador’s 2013 efforts to bring Edward Snowden there. If you remember, Snowden was traveling from Hong Kong to somewhere when the US revoked his passport, stranding him in Russia. In the document, Ecuador asks Russia to provide Snowden with safe passage to come to Ecuador.
It’s hard to believe this all happened almost ten years ago.
** *** ***** ******* *********** *************
ZuoRAT Malware Is Targeting Routers
[2022.06.30] Wired is reporting on a new remote-access Trojan that is able to infect at least eighty different targets:
So far, researchers from Lumen Technologies’ Black Lotus Labs say they’ve identified at least 80 targets infected by the stealthy malware, including routers made by Cisco, Netgear, Asus, and DrayTek. Dubbed ZuoRAT, the remote access Trojan is part of a broader hacking campaign that has existed since at least the fourth quarter of 2020 and continues to operate.
The discovery of custom-built malware written for the MIPS architecture and compiled for small-office and home-office routers is significant, particularly given its range of capabilities. Its ability to enumerate all devices connected to an infected router and collect the DNS lookups and network traffic they send and receive -- and remain undetected -- is the hallmark of a highly sophisticated threat actor.
More details in the article.
** *** ***** ******* *********** *************
Analyzing the Swiss E-Voting System
[2022.07.01] Andrew Appel has a long analysis of the Swiss online voting system. It’s a really good analysis of both the system and the official analyses.
** *** ***** ******* *********** *************
NIST Announces First Four Quantum-Resistant Cryptographic Algorithms
[2022.07.06] NIST’s post-quantum computing cryptography standard process is entering its final phases. It announced the first four algorithms:
For general encryption, used when we access secure websites, NIST has selected the CRYSTALS-Kyber algorithm. Among its advantages are comparatively small encryption keys that two parties can exchange easily, as well as its speed of operation.
For digital signatures, often used when we need to verify identities during a digital transaction or to sign a document remotely, NIST has selected the three algorithms CRYSTALS-Dilithium, FALCON and SPHINCS+ (read as “Sphincs plus”). Reviewers noted the high efficiency of the first two, and NIST recommends CRYSTALS-Dilithium as the primary algorithm, with FALCON for applications that need smaller signatures than Dilithium can provide. The third, SPHINCS+, is somewhat larger and slower than the other two, but it is valuable as a backup for one chief reason: It is based on a different math approach than all three of NIST’s other selections.
NIST has not chosen a public-key encryption standard. The remaining candidates are BIKE, Classic McEliece, HQC, and SIKE.
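For readers who haven’t used one: Kyber is a key-encapsulation mechanism (KEM), and the whole flow is short. The sketch below shows the generic KEM shape in Python; the kyber_* functions are hypothetical placeholders of mine, not a real library API, and any real deployment should use a vetted implementation of the final standard.

    # The kyber_* names below are hypothetical placeholders, not a real library API.

    def kyber_keypair():
        # Returns (public_key, secret_key).
        raise NotImplementedError

    def kyber_encapsulate(public_key):
        # Returns (ciphertext, shared_key): a fresh symmetric key plus the
        # ciphertext that transports it to the holder of the secret key.
        raise NotImplementedError

    def kyber_decapsulate(secret_key, ciphertext):
        # Recovers the same shared_key from the ciphertext.
        raise NotImplementedError

    def kem_flow():
        pk, sk = kyber_keypair()                          # receiver publishes pk
        ciphertext, sender_key = kyber_encapsulate(pk)    # sender derives key + ciphertext
        receiver_key = kyber_decapsulate(sk, ciphertext)  # receiver recovers the same key
        assert sender_key == receiver_key
        # Both sides now use the shared key with ordinary symmetric encryption (e.g., AES).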
I have a lot to say on this process, and have written an essay for IEEE Security & Privacy about it. It will be published in a month or so.
** *** ***** ******* *********** *************
Ubiquitous Surveillance by ICE
[2022.07.07] Georgetown’s Center on Privacy and Technology has published a comprehensive report on the surprising amount of mass surveillance conducted by Immigration and Customs Enforcement (ICE).
Our two-year investigation, including hundreds of Freedom of Information Act requests and a comprehensive review of ICE’s contracting and procurement records, reveals that ICE now operates as a domestic surveillance agency. Since its founding in 2003, ICE has not only been building its own capacity to use surveillance to carry out deportations but has also played a key role in the federal government’s larger push to amass as much information as possible about all of our lives. By reaching into the digital records of state and local governments and buying databases with billions of data points from private companies, ICE has created a surveillance infrastructure that enables it to pull detailed dossiers on nearly anyone, seemingly at any time. In its efforts to arrest and deport, ICE has without any judicial, legislative or public oversight reached into datasets containing personal information about the vast majority of people living in the U.S., whose records can end up in the hands of immigration enforcement simply because they apply for driver’s licenses; drive on the roads; or sign up with their local utilities to get access to heat, water and electricity.
ICE has built its dragnet surveillance system by crossing legal and ethical lines, leveraging the trust that people place in state agencies and essential service providers, and exploiting the vulnerability of people who volunteer their information to reunite with their families. Despite the incredible scope and evident civil rights implications of ICE’s surveillance practices, the agency has managed to shroud those practices in near-total secrecy, evading enforcement of even the handful of laws and policies that could be invoked to impose limitations. Federal and state lawmakers, for the most part, have yet to confront this reality.
** *** ***** ******* *********** *************
Apple’s Lockdown Mode
[2022.07.08] Apple has introduced lockdown mode for high-risk users who are concerned about nation-state attacks. It trades reduced functionality for increased security in a very interesting way.
** *** ***** ******* *********** *************
Nigerian Prison Break
[2022.07.11] There was a massive prison break in Abuja, Nigeria:
Armed with bombs, Rocket Propelled Grenade (RPGs) and General Purpose Machine Guns (GPMG), the attackers, who arrived at about 10:05 p.m. local time, gained access through the back of the prison, using dynamites to destroy the heavily fortified facility, freeing 600 out of the prison’s 994 inmates, according to the country’s defense minister, Bashir Magashi....
What’s interesting to me is how the defenders got the threat model wrong. That attack isn’t normally associated with a prison break; it sounds more like a military action in a civil war.
** *** ***** ******* *********** *************
Security Vulnerabilities in Honda’s Keyless Entry System
[2022.07.12] Honda vehicles from 2021 to 2022 are vulnerable to this attack:
On Thursday, a security researcher who goes by Kevin2600 published a technical report and videos on a vulnerability that he claims allows anyone armed with a simple hardware device to steal the code to unlock Honda vehicles. Kevin2600, who works for cybersecurity firm Star-V Lab, dubbed the attack RollingPWN.
[...]
In a phone call, Kevin2600 explained that the attack relies on a weakness that allows someone using a software defined radio -- such as HackRF -- to capture the code that the car owner uses to open the car, and then replay it so that the hacker can open the car as well. In some cases, he said, the attack can be performed from 30 meters (approximately 98 feet) away.
In the videos, Kevin2600 and his colleagues show how the attack works by unlocking different models of Honda cars with a device connected to a laptop.
The Honda models that Kevin2600 and his colleagues tested the attack on use a so-called rolling code mechanism, which means that -- in theory -- every time the car owner uses the keyfob, it sends a different code to open it. This should make it impossible to capture the code and use it again. But the researchers found that there is a flaw that allows them to roll back the codes and reuse old codes to open the car, Kevin2600 said.
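To see why this matters, here is a toy model (mine, not the researchers’ code) of a rolling-code receiver: replaying a captured code normally fails because the counter only moves forward, but any resynchronization path that lets the counter move backwards -- which is what the rollback flaw amounts to -- makes old captured codes valid again. The key, counter width, and window size are illustrative assumptions.

    import hashlib
    import hmac

    FOB_KEY = b"per-fob secret"  # illustrative; a real fob uses a device-specific key

    def code_for(counter):
        # Toy rolling code: a short MAC of the counter under the fob's secret key.
        return hmac.new(FOB_KEY, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:4]

    class Receiver:
        # Toy vehicle receiver with a look-ahead window and a (deliberately) flawed resync.
        WINDOW = 16

        def __init__(self):
            self.last_counter = 0

        def try_unlock(self, code):
            for c in range(self.last_counter + 1, self.last_counter + 1 + self.WINDOW):
                if hmac.compare_digest(code, code_for(c)):
                    self.last_counter = c  # counter only moves forward
                    return True
            return False

        def flawed_resync(self, new_counter):
            # The bug, in caricature: a resync that moves the counter backwards
            # makes every previously captured code valid again.
            self.last_counter = new_counter

    car = Receiver()
    captured = code_for(1)                 # eavesdropped from a legitimate button press
    assert car.try_unlock(captured)        # legitimate unlock; counter advances to 1
    assert not car.try_unlock(captured)    # straight replay is rejected
    car.flawed_resync(0)                   # rollback-style resynchronization
    assert car.try_unlock(captured)        # the old captured code works again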
** *** ***** ******* *********** *************
Post-Roe Privacy
[2022.07.13] This is an excellent essay outlining the post-Roe privacy threat model. (Summary: period tracking apps are largely a red herring.)
Taken together, this means the primary digital threat for people who take abortion pills is the actual evidence of intention stored on your phone, in the form of texts, emails, and search/web history. Cynthia Conti-Cook’s incredible article “Surveilling the Digital Abortion Diary” details what we know now about how digital evidence has been used to prosecute women who have been pregnant. That evidence includes search engine history, as in the case of the prosecution of Latice Fisher in Mississippi. As Conti-Cook says, Ms. Fisher “conduct[ed] internet searches, including how to induce a miscarriage, ‘buy abortion pills, mifepristone online, misoprostol online,’ and ‘buy misoprostol abortion pill online,’” and then purchased misoprostol online. Those searches were the evidence that she intentionally induced a miscarriage. Text messages are also often used in prosecutions, as they were in the prosecution of Purvi Patel, also discussed in Conti-Cook’s article.
These examples are why advice from reproductive access experts like Kate Bertash focuses on securing text messages (use Signal and auto-set messages to disappear) and securing search queries (use a privacy-focused web browser, and use DuckDuckGo or turn Google search history off). After someone alerts police, digital evidence has been used to corroborate or show intent. But so far, we have not seen digital evidence be a first port of call for prosecutors or cops looking for people who may have self-managed an abortion. We can be vigilant in looking for any indications that this policing practice may change, but we can also be careful to ensure we’re focusing on mitigating the risks we know are indeed already being used to prosecute abortion-seekers.
[...]
As we’ve discussed above, just tracking your period doesn’t necessarily put you at additional risk of prosecution, and would only be relevant should you both become (or be suspected of becoming) pregnant, and then become the target of an investigation. Period tracking is also extremely useful if you need to determine how pregnant you might be, especially if you need to evaluate the relative access and legal risks for your abortion options.
It’s important to remember that if an investigation occurs, information from period trackers is probably less legally relevant than other information from your phone.
See also EFF’s privacy guide for those seeking an abortion.
** *** ***** ******* *********** *************
New Browser De-anonymization Technique
[2022.07.14] Researchers have a new way to de-anonymize browser users, by correlating their behavior on one account with their behavior on another:
The findings, which NJIT researchers will present at the Usenix Security Symposium in Boston next month, show how an attacker who tricks someone into loading a malicious website can determine whether that visitor controls a particular public identifier, like an email address or social media account, thus linking the visitor to a piece of potentially personal data.
When you visit a website, the page can capture your IP address, but this doesn’t necessarily give the site owner enough information to individually identify you. Instead, the hack analyzes subtle features of a potential target’s browser activity to determine whether they are logged into an account for an array of services, from YouTube and Dropbox to Twitter, Facebook, TikTok, and more. Plus the attacks work against every major browser, including the anonymity-focused Tor Browser.
[...]
“Let’s say you have a forum for underground extremists or activists, and a law enforcement agency has covertly taken control of it,” Curtmola says. “They want to identify the users of this forum but can’t do this directly because the users use pseudonyms. But let’s say that the agency was able to also gather a list of Facebook accounts who are suspected to be users of this forum. They would now be able to correlate whoever visits the forum with a specific Facebook identity.”
** *** ***** ******* *********** *************
Upcoming Speaking Engagements
[2022.07.14] This is a current list of where and when I am scheduled to speak:
I’m speaking as part of a Geneva Centre for Security Policy course on Cyber Security in the Context of International Security, online, on September 22, 2022.
I’m speaking at IT-Security INSIDE 2022 in Zurich, Switzerland, on September 22, 2022.
The list is maintained on this page.
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a “security guru” by the Economist. He is the author of over one dozen books -- including his latest, We Have Root -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2022 by Bruce Schneier.
** *** ***** ******* *********** *************