Tag Archives: Security

For a year, gang operating rogue Tor node infected Windows executables

A flowchart of the infection process used by a malicious Tor exit node.

Attacks tied to gang that previously infected governments with highly advanced malware.

Three weeks ago, a security researcher uncovered a Tor exit node that added malware to uncompressed Windows executables passing through it. Officials with the privacy service promptly shut down the Russia-based node, but according to new research, the group behind the node had likely been infecting files for more than a year by that time, causing careless users to install a backdoor that gave attackers full control of their systems.

What’s more, according to a blog post published Friday by researchers from antivirus provider F-Secure, the rogue exit node was tied to the “MiniDuke” gang, which previously infected government agencies and organizations in 23 countries with highly advanced malware that uses low-level code to stay hidden. MiniDuke was intriguing because it bore the hallmark of viruses first encountered in the mid-1990s, when shadowy groups such as 29A engineered innovative pieces of malware for fun and then documented them in an E-zine of the same name. Written in assembly language, most MiniDuke files were tiny. Their use of multiple levels of encryption and clever coding tricks made the malware hard to detect and difficult to reverse engineer. The code also contained references to Dante Alighieri’s Divine Comedy and alluded to 666, the “mark of the beast” discussed in the biblical Book of Revelation.

“OnionDuke,” as the malware spread through the latest attacks is known, is a completely different malware family, but some of the command and control (C&C) channels it uses to funnel commands and stolen data to and from infected machines were registered by the same persona that obtained MiniDuke C&Cs. The main component of the malware monitored several attacker-operated servers to await instructions to install other pieces of malware. Other components siphoned login credentials and system information from infected machines.

Besides spreading through the Tor node, the malware also spread through other, undetermined channels. The F-Secure post stated:

During our research, we have also uncovered strong evidence suggesting that OnionDuke has been used in targeted attacks against European government agencies, although we have so far been unable to identify the infection vector(s). Interestingly, this would suggest two very different targeting strategies. On one hand is the “shooting a fly with a cannon” mass-infection strategy through modified binaries and, on the other, the more surgical targeting traditionally associated with APT [advanced persistent threat] operations.

The malicious Tor node infected uncompressed executable files passing through unencrypted traffic. It worked by inserting the original executable into a “wrapper” that added a second, malicious executable. Tor users downloading executables from an HTTPS-protected server were immune to the tampering; those who were careful to install only apps digitally signed by the developer would likely also be safe, although even a valid signature is no guarantee. It’s not uncommon for attackers to compromise legitimate signing keys and use them to sign malicious packages.
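One practical defense against this kind of in-transit tampering is to verify a download’s checksum, fetched separately over HTTPS, before running the file. A minimal sketch (the workflow and the throwaway filename are illustrative assumptions, not something the article prescribes):

```python
import hashlib
import tempfile

# Compute a file's SHA-256 digest in chunks so large installers
# don't have to fit in memory.
def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file; a real check would compare the result
# against the digest the developer publishes on an HTTPS page and
# refuse to run the binary on any mismatch.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
print(sha256_of(f.name))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

A checksum only helps if it is obtained over a channel the exit node cannot rewrite, which is why the comparison value must come from an HTTPS page rather than the same plaintext connection as the download.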

Tor officials have long counseled people to use encryption when using the privacy service, and OnionDuke provides a strong cautionary tale of what can happen when users fail to heed that advice.

This post was updated to remove incorrect statements concerning the use of virtual private networks.

For the complete story follow the source link below.

Source: Ars Technica

Android SMS worm Selfmite returns, more aggressive than ever


A new version of an Android worm called Selfmite has the potential to ramp up huge SMS charges for victims in its attempt to spread to as many devices as possible.

The first version of Selfmite was discovered in June, but its distribution was quickly disrupted by security researchers. The worm—a rare type of malware in the Android ecosystem—spread by sending text messages with links to a malicious APK (Android Package) to the first 20 entries in the address book of every victim.

The new version, found recently and dubbed Selfmite.b, has a similar, but much more aggressive spreading system, according to researchers from security firm AdaptiveMobile. It sends text messages with rogue links to all contacts in a victim’s address book, and does this in a loop.

“According to our data, Selfmite.b is responsible for sending over 150k messages during the past 10 days from a bit more than 100 infected devices,” Denis Maslennikov, a security analyst at AdaptiveMobile, said in a blog post Wednesday. “To put this into perspective that is over a hundred times more traffic generated by Selfmite.b compared to Selfmite.a.”

At an average of 1,500 text messages sent per infected device, Selfmite.b can be very costly for users whose mobile plans don’t include unlimited SMS messages. Some mobile carriers might detect the abuse and block it, but this might leave the victim unable to send legitimate text messages.
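The per-device figure follows directly from the numbers quoted above. The per-message price below is a hypothetical illustration, not a figure from the article:

```python
# Reproducing the article's arithmetic: ~150,000 messages from a bit
# more than 100 infected devices over ten days.
messages = 150_000
devices = 100

per_device = messages // devices
print(per_device)  # 1500 texts per device

# Assumed out-of-plan rate of $0.10 per SMS -- purely illustrative.
cost_per_sms_usd = 0.10
print(per_device * cost_per_sms_usd)  # 150.0 dollars per victim
```

Even at a fraction of that assumed rate, an infection that texts in a loop becomes expensive quickly, which is why carriers blocking the traffic (and possibly the victim’s legitimate texts along with it) is the likely outcome.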

Unlike Selfmite.a, which was found mainly on devices in North America, Selfmite.b has hit victims in at least 16 countries: Canada, China, Costa Rica, Ghana, India, Iraq, Jamaica, Mexico, Morocco, Puerto Rico, Russia, Sudan, Syria, the USA, Venezuela and Vietnam.

The first version of the worm used goo.gl shortened URLs in spam messages that pointed to an APK installer for the malware. Those URLs were hardcoded in the app’s code, so once they were disabled by Google, the operator of the goo.gl URL shortening service, Selfmite.a’s distribution stopped.

The worm’s authors took a different approach with the new version. They still use shortened URLs in text messages—this time generated with Go Daddy’s x.co service—but the URLs are specified in a configuration file that the worm downloads periodically from a third-party server.

“We notified Go Daddy about the malicious x.co URLs and at the moment both shortened URLs have been deactivated,” Maslennikov said. “But the fact that the author(s) of the worm can change it remotely using a configuration file makes it harder to stop the whole infection process.”

The goal of Selfmite is to generate money for its creators through pay-per-install schemes by promoting various apps and services. The old version distributed Mobogenie, a legitimate application that allows users to synchronize their Android devices with their PCs and to download Android apps from an alternative app store.

Selfmite.b creates two icons on the device’s home screen, one to Mobogenie and one to an app called Mobo Market. However, they act as Web links and clicking on them can lead to different apps and online offers depending on the victim’s IP (Internet Protocol) address location.

Fortunately, the worm’s distribution system does not use exploits and relies only on social engineering—users would have to click on the spammed links and then manually install the downloaded APK in order for their devices to be infected. Furthermore, their devices would need to be configured to allow the installation of apps from unknown sources—anything other than Google Play—which is not the default setting in Android. This further limits the attack’s success rate.

Source: Network World

Shellshock makes Heartbleed look insignificant


Somehow there always seems to be another Internet security disaster around the corner. A few months ago everyone was in a panic about Heartbleed.

Now a new bug, Shellshock (officially CVE-2014-6271), a far more serious vulnerability, is being exploited uncontrolled across the Internet. It’s never a good time to panic, but if you’re discouraged, I don’t blame you; I know I am.

In retrospect, the grave concern over Heartbleed seems misplaced. As information disclosure bugs go it was a really bad one, but it was only an information disclosure bug, and a difficult one to exploit. With Shellshock, the sky’s the limit on attacks, and it’s so easy to exploit that it is already being widely exploited, according to security firm FireEye, which says it has already observed several forms of attack:

• Malware droppers
• Reverse shells and backdoors
• Data exfiltration
• DDoS

Of course it’s not just FireEye; everyone is reporting widespread sightings of exploits. See Kaspersky, Trend Micro, HP Security Research and many others.

Speaking of HP, its TippingPoint unit states that its network IPS has been updated to recognize known attacks using Shellshock. A vigorously updated IPS, deployed not just at the perimeter but also at critical points within the network, may be the only effective systemic protection you have against Shellshock for now. HP’s is not the only IPS around, of course. And remember that an IPS protects against known exploits more than against the vulnerability in general.

This particular bug has been in the Bash shell for over two decades. The implications of this are really bad. First, it means that an extremely important and popular program was either poorly scrutinized or not scrutinized at all. Surely there are many other such problems out there. Don’t be surprised if several of them have been used carefully and surreptitiously in targeted attacks for years. In fact, don’t be surprised if Shellshock has been used in the past.
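The bug can be checked for locally: a vulnerable bash executes the command smuggled in after a function definition stored in an environment variable. A minimal sketch wrapping the widely circulated test in Python (the helper name is mine):

```python
import os
import subprocess

# CVE-2014-6271 check: export x='() { :;}; echo vulnerable' and run a
# harmless bash command. A vulnerable bash executes the trailing
# "echo vulnerable" merely while parsing the environment variable.
def bash_is_shellshock_vulnerable() -> bool:
    env = dict(os.environ, x="() { :;}; echo vulnerable")
    try:
        out = subprocess.run(
            ["bash", "-c", "echo test"],
            env=env, capture_output=True, text=True,
        ).stdout
    except FileNotFoundError:
        return False  # no bash on this system at all
    return "vulnerable" in out

print("vulnerable" if bash_is_shellshock_vulnerable() else "patched (or no bash)")
```

On a patched system the crafted variable is treated as inert data and only “test” is echoed; on an unpatched one, the extra “vulnerable” line appears, which is exactly the behavior remote attackers abuse through CGI, DHCP, and other services that pass attacker-controlled strings into the environment.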

All sorts of horrible scenarios are possible with Shellshock. It’s not just limited to web server attacks. FireEye shows how different Internet services, even DHCP and SSH, can be exploited to perform the attack, as long as Bash is the shell, and it usually is. They demonstrate automated click fraud, stealing the host password file, several DDoS attacks using the server, and several ways to establish a shell on the server without any malware running on it.

For more information and the original article follow the source link below. 

Source: ZDNet

Android Browser flaw a “privacy disaster” for half of Android users


Bug enables malicious sites to grab cookies, passwords from other sites.

A bug quietly reported on September 1 appears to have grave implications for Android users. Android Browser, the open source, WebKit-based browser that used to be part of the Android Open Source Platform (AOSP), has a flaw that enables malicious sites to inject JavaScript into other sites. That injected JavaScript can in turn read cookies and password fields, submit forms, grab keyboard input, or do practically anything else.

Browsers are generally designed to prevent a script from one site from being able to access content from another site. They do this by enforcing what is called the Same Origin Policy (SOP): scripts can only read or modify resources (such as the elements of a webpage) that come from the same origin as the script, where the origin is determined by the combination of scheme (which is to say, protocol, typically HTTP or HTTPS), domain, and port number.

The SOP should then prevent a script loaded from http://malware.bad/ from being able to access content at https://paypal.com/.
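The origin comparison described above can be sketched in a few lines. This is a simplified model (real browsers layer on extra rules for things like `document.domain` and non-HTTP schemes), but it captures the scheme/domain/port triple the article describes:

```python
from urllib.parse import urlsplit

# An "origin" is the (scheme, host, port) triple; default ports are
# filled in so http://a.com/ and http://a.com:80/ compare equal.
def origin(url: str) -> tuple:
    parts = urlsplit(url)
    port = parts.port or {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

print(same_origin("http://malware.bad/x", "https://paypal.com/"))      # False
print(same_origin("https://paypal.com/a", "https://paypal.com:443/b")) # True
```

The Android Browser flaw matters precisely because it lets a script bypass this check: a page at one origin gains the read/write access that should be reserved for scripts from the same origin as the target page.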

The Android Browser bug breaks the browser’s handling of the SOP. As Rafay Baloch, the researcher who discovered the problem, found, JavaScript constructed in a particular way could ignore the SOP and freely meddle with other sites’ content without restriction.

This means that potentially any site visited in the browser could be stealing sensitive data. It’s a bug that needs fixing, and fast.

As part of its attempts to gain more control over Android, Google has discontinued the AOSP Browser. Android Browser used to be the default browser on Android, but this changed in Android 4.2, when Google switched to Chrome. The core parts of Android Browser were still used to power embedded Web view controls within applications, but even this changed in Android 4.4, when Google switched to a Chromium-based engine.

But just as Microsoft’s end-of-life for Windows XP didn’t make that operating system magically disappear from the Web, Google’s discontinuation of the open source Browser app hasn’t made it disappear from the Web either. As our monthly look at Web browser usage shows, Android Browser has a little more real-world usage than Chrome for Android, with something like 40-50 percent of Android users using the flawed browser.

The Android Browser is likely to be embedded in third-party products, too, and some Android users have even installed it on their Android 4.4 phones because for one reason or another they prefer it to Chrome.

Google’s own numbers paint an even worse picture. According to the online advertising giant, only 24.5 percent of Android users are using version 4.4. The majority of Android users are using versions that include the broken component, and many of these users are using 4.1.x or below, so they’re not even using versions of Android that use Chrome as the default browser.

Baloch initially reported the bug to Google, but the company told him that it couldn’t reproduce the problem and closed his report. Since he wrote his blog post, a Metasploit module has been developed to enable the popular security testing framework to detect the problem, and Metasploit developers have branded the problem a “privacy disaster.” Baloch says that Google has subsequently changed its response, agreeing that it can reproduce the problem and saying that it is working on a suitable fix.

Just how this fix will reach users is unclear. While Chrome is updated through the Play Store, the AOSP Browser is generally updated only through operating system updates. Timely availability of Android updates remains a sticking point for the operating system, so even if Google develops a fix, it may well be unavailable to those who actually need it.

Users of Android 4.0 and up can avoid much of the exposure by switching to Chrome, Firefox, or Opera, none of which should use the broken code. Other third-party browsers for Android may embed the broken AOSP code, and unfortunately for end users, there’s no good way to know if this is the case or not.

Update: Google has offered the following statement:

We have reviewed this report and Android users running Chrome as their browser, or those who are on Android 4.4+ are not affected. For earlier versions of Android, we have already released patches (1, 2) to AOSP.

Source: Ars Technica

Offline attack shows Wi-Fi routers still vulnerable


An attack can break into some common Wi-Fi routers via a configuration feature.

A researcher has refined an attack on wireless routers with poorly implemented versions of Wi-Fi Protected Setup (WPS) that allows someone to quickly gain access to a router’s network.

The attack exploits weak randomization, or the lack of randomization, in a key used to authenticate hardware PINs on some implementations of Wi-Fi Protected Setup, allowing anyone to quickly collect enough information to guess the PIN using offline calculations. By calculating the correct PIN, rather than attempting to brute-force guess the numerical password, the new attack circumvents defenses instituted by companies.

While previous attacks required up to 11,000 guesses—a relatively small number—and approximately four hours to find the correct PIN to access the router’s WPS functionality, the new attack requires only a single guess and a series of offline calculations, according to Dominique Bongard, reverse engineer and founder of 0xcite, a Swiss security firm.

“It takes one second,” he said. “It’s nothing. Bang. Done.”

The problem affects the implementations provided by two chipset manufacturers: Broadcom and a second vendor, which Bongard asked not to be named until it has had a chance to remediate the problem. Broadcom did not provide a comment to Ars.

Because many router manufacturers use the reference software implementation as the basis for their customized router software, the problems affected the final products, Bongard said. Broadcom’s reference implementation had poor randomization, while the second vendor used a special seed, or nonce, of zero, essentially eliminating any randomness.

The Wi-Fi Alliance could not confirm whether the products impacted by the attack were certified, according to spokeswoman Carol Carrubba.

“A vendor implementation that improperly generates random numbers is more susceptible to attack, and it appears as though this is the case with at least two devices,” she said in a statement. “It is likely that the issue lies in the specific vendor implementations rather than the technology itself. As the published research does not identify specific products, we do not know whether any Wi-Fi certified devices are affected, and we are unable to confirm the findings.”

The research, originally demonstrated at the PasswordsCon Las Vegas 2014 conference in early August, builds on previous work published by Stefan Viehböck in late 2011. Viehböck found a number of design flaws in Wi-Fi Protected Setup, but most significantly, he found that the PIN needed to complete the setup of a wireless router could be broken into smaller parts and each part attacked separately. By breaking down the PIN, the number of attempts an attacker would have to try before finding the key shrank from an untenable 100 million to a paltry 11,000—a significant flaw for any access-control technology.
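The arithmetic behind that 11,000 figure is easy to reproduce. An eight-digit WPS PIN has its halves proven separately, and the eighth digit is a checksum of the first seven; the checksum routine below follows the algorithm published in the WPS specification and used by tools like Reaver, offered here as an illustrative sketch:

```python
# Checksum digit appended to the first seven digits of a WPS PIN,
# per the Wi-Fi Protected Setup specification.
def wps_pin_checksum(pin7: int) -> int:
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)  # odd-position digit, weight 3
        pin7 //= 10
        accum += pin7 % 10        # even-position digit, weight 1
        pin7 //= 10
    return (10 - accum % 10) % 10

# The common default PIN 12345670 really does end in its checksum:
print(wps_pin_checksum(1234567))  # 0

# The protocol confirms each half of the PIN independently, so a
# brute-forcer needs at most 10^4 tries for the first four digits
# plus 10^3 for the next three (the eighth is the checksum above).
print(10**4 + 10**3)  # 11000, versus 10^8 for a monolithic 8-digit PIN
```

Bongard’s refinement goes further still: with a predictable nonce, the attacker needs only one protocol exchange and can compute the PIN offline, so even the 11,000-guess defense (rate limiting, lockouts) no longer helps.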

Viehböck was not the only researcher to notice the flaws in the technology. Independently, Craig Heffner of Tactical Network Solutions discovered the issue and created a tool, Reaver, to use brute-force guessing of all 11,000 combinations to find the PIN. Ars Technica used the tool to confirm the original issue.

Bongard’s updated attack exploits the lack of randomization in the nonce, a number used to create the pseudo-random inputs to calculate the keys.

For more information follow the source link below.

Source: Ars Technica

Visit the Wrong Website, and the FBI Could End Up in Your Computer


Security experts call it a “drive-by download”: a hacker infiltrates a high-traffic website and then subverts it to deliver malware to every single visitor. It’s one of the most powerful tools in the black hat arsenal, capable of delivering thousands of fresh victims into a hacker’s clutches within minutes.

Now the technique is being adopted by a different kind of a hacker—the kind with a badge. For the last two years, the FBI has been quietly experimenting with drive-by hacks as a solution to one of law enforcement’s knottiest Internet problems: how to identify and prosecute users of criminal websites hiding behind the powerful Tor anonymity system.

The approach has borne fruit—over a dozen alleged users of Tor-based child porn sites are now headed for trial as a result. But it’s also engendering controversy, with charges that the Justice Department has glossed over the bulk-hacking technique when describing it to judges, while concealing its use from defendants. Critics also worry about mission creep, the weakening of a technology relied on by human rights workers and activists, and the potential for innocent parties to wind up infected with government malware because they visited the wrong website. “This is such a big leap, there should have been congressional hearings about this,” says ACLU technologist Chris Soghoian, an expert on law enforcement’s use of hacking tools. “If Congress decides this is a technique that’s perfectly appropriate, maybe that’s OK. But let’s have an informed debate about it.”

The FBI’s use of malware is not new. The bureau calls the method an NIT, for “network investigative technique,” and the FBI has been using it since at least 2002 in cases ranging from computer hacking to bomb threats, child porn to extortion. Depending on the deployment, an NIT can be a bulky full-featured backdoor program that gives the government access to your files, location, web history and webcam for a month at a time, or a slim, fleeting wisp of code that sends the FBI your computer’s name and address, and then evaporates.

What’s changed is the way the FBI uses its malware capability, deploying it as a driftnet instead of a fishing line. And the shift is a direct response to Tor, the powerful anonymity system endorsed by Edward Snowden and the State Department alike.

Tor is free, open-source software that lets you surf the web anonymously. It achieves that by accepting connections from the public Internet—the “clearnet”—encrypting the traffic and bouncing it through a winding series of computers before dumping it back on the web through any of over 1,100 “exit nodes.”
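The layered routing described above can be illustrated with a toy sketch. This is emphatically not Tor’s real cryptography: base64 stands in for each relay’s encryption layer, and the relay names are made up. The point is only the structure, namely that each hop peels one layer and learns nothing but the next hop, while the payload emerges only at the exit:

```python
import base64

# Wrap a payload in one "layer" per relay, innermost layer last,
# so the first relay sees only the outermost layer.
def wrap(payload: bytes, relays) -> bytes:
    for relay in reversed(relays):
        payload = base64.b64encode(relay.encode() + b"|" + payload)
    return payload

# Peel a single layer: recover this hop's name and the inner blob.
def peel(blob: bytes):
    relay, _, rest = base64.b64decode(blob).partition(b"|")
    return relay.decode(), rest

blob = wrap(b"GET /", ["guard", "middle", "exit"])
for _ in range(3):
    relay, blob = peel(blob)
    print(relay)   # guard, then middle, then exit
print(blob)        # only after the last peel is b'GET /' visible
```

This structure is also why a malicious exit node (as in the OnionDuke story above) is dangerous: it is the one hop that sees the plaintext payload, even though it never learns who sent it.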

The system also supports so-called hidden services—special websites, with addresses ending in .onion, whose physical locations are theoretically untraceable. Reachable only over the Tor network, hidden services are used by organizations that want to evade surveillance or protect users’ privacy to an extraordinary degree. Some users of such services have legitimate and even noble purposes—including human rights groups and journalists. But hidden services are also a mainstay of the nefarious activities carried out on the so-called Dark Net: the home of drug markets, child porn, murder for hire, and a site that does nothing but stream pirated My Little Pony episodes.

Law enforcement and intelligence agencies have a love-hate relationship with Tor. They use it themselves, but when their targets hide behind the system, it poses a serious obstacle. Last month, Russia’s government offered a $111,000 bounty for a method to crack Tor.

The FBI debuted its own solution in 2012, in an investigation dubbed “Operation Torpedo,” whose contours are only now becoming visible through court filings.

Operation Torpedo began with an investigation in the Netherlands in August 2011. Agents at the National High Tech Crime Unit of the Netherlands’ national police force had decided to crack down on online child porn, according to an FBI affidavit. To that end, they wrote a web crawler that scoured the Dark Net, collecting all the Tor onion addresses it could find.

The NHTCU agents systematically visited each of the sites and made a list of those dedicated to child pornography. Then, armed with a search warrant from the Court of Rotterdam, the agents set out to determine where the sites were located.

That, in theory, is a daunting task—Tor hidden services mask their locations behind layers of routing. But when the agents got to a site called “Pedoboard,” they discovered that the owner had foolishly left the administrative account open with no password. They logged in and began poking around, eventually finding the server’s real Internet IP address in Bellevue, Nebraska.

They provided the information to the FBI, who traced the IP address to 31-year-old Aaron McGrath. It turned out McGrath was hosting not one, but two child porn sites at the server farm where he worked, and a third one at home.

Instead of going for the easy bust, the FBI spent a solid year surveilling McGrath, while working with Justice Department lawyers on the legal framework for what would become Operation Torpedo. Finally, in November 2012, the feds swooped in on McGrath, seized his servers and spirited them away to an FBI office in Omaha.

A federal magistrate signed three separate search warrants: one for each of the three hidden services. The warrants authorized the FBI to modify the code on the servers to deliver the NIT to any computers that accessed the sites. The judge also allowed the FBI to delay notification to the targets for 30 days.


This NIT was purpose-built to identify the computer, and do nothing else—it didn’t collect keystrokes or siphon files off to the bureau. And it evidently did its job well. In a two-week period, the FBI collected IP addresses, hardware MAC addresses (a unique hardware identifier for the computer’s network or Wi-Fi card) and Windows hostnames on at least 25 visitors to the sites. Subpoenas to ISPs produced home addresses and subscriber names, and in April 2013, five months after the NIT deployment, the bureau staged coordinated raids around the country.

Today, with 14 of the suspects headed toward trial in Omaha, the FBI is being forced to defend its use of the drive-by download for the first time. Defense attorneys have urged the Nebraska court to throw out the spyware evidence, on the grounds that the bureau concealed its use of the NIT beyond the 30-day blackout period allowed in the search warrant. Some defendants didn’t learn about the hack until a year after the fact. “Normally someone who is subject to a search warrant is told virtually immediately,” says defense lawyer Joseph Gross Jr. “What I think you have here is an egregious violation of the Fourth Amendment.”

But last week U.S. Magistrate Judge Thomas Thalken rejected the defense motion, and any implication that the government acted in bad faith. “The affidavits and warrants were not prepared by some rogue federal agent,” Thalken wrote, “but with the assistance of legal counsel at various levels of the Department of Justice.” The matter will next be considered by U.S. District Judge Joseph Bataillon for a final ruling.

The ACLU’s Soghoian says a child porn sting is probably the best possible use of the FBI’s drive-by download capability. “It’s tough to imagine a legitimate excuse to visit one of those forums: the mere act of looking at child pornography is a crime,” he notes. His primary worry is that Operation Torpedo is the first step to the FBI using the tactic much more broadly, skipping any public debate over the possible unintended consequences. “You could easily imagine them using this same technology on everyone who visits a jihadi forum, for example,” he says. “And there are lots of legitimate reasons for someone to visit a jihadi forum: research, journalism, lawyers defending a case. ACLU attorneys read Inspire Magazine, not because we are particularly interested in the material, but we need to cite stuff in briefs.”

Soghoian is also concerned that the judges who considered NIT applications don’t fully understand that they’re being asked to permit the use of hacking software that takes advantage of software vulnerabilities to breach a machine’s defenses. The Operation Torpedo search warrant application, for example, never uses the words “hack,” “malware,” or “exploit.” Instead, the NIT comes across as something you’d be happy to spend 99 cents for in the App Store. “Under the NIT authorized by this warrant, the website would augment [its] content with some additional computer instructions,” the warrant reads.

From the perspective of experts in computer security and privacy, the NIT is malware, pure and simple. That was demonstrated last August, when, perhaps buoyed by the success of Operation Torpedo, the FBI launched a second deployment of the NIT targeting more Tor hidden services.

This one—still unacknowledged by the bureau—traveled across the servers of Freedom Hosting, an anonymous provider of turnkey Tor hidden service sites that, by some estimates, powered half of the Dark Net.


This attack had its roots in the July 2013 arrest of Freedom Hosting’s alleged operator, one Eric Eoin Marques, in Ireland. Marques faces U.S. charges of facilitating child porn—Freedom Hosting long had a reputation for tolerating child pornography.

Working with French authorities, the FBI got control of Marques’ servers at a hosting company in France, according to testimony in Marques’ case. Then the bureau appears to have relocated them—or cloned them—in Maryland, where the Marques investigation was centered.

On August 1, 2013, some savvy Tor users began noticing that the Freedom Hosting sites were serving a hidden “iframe”—a kind of website within a website. The iframe contained JavaScript code that used a Firefox vulnerability to execute instructions on the victim’s computer. The code specifically targeted the version of Firefox used in the Tor Browser Bundle—the easiest way to use Tor.

This was the first Tor browser exploit found in the wild, and it was an alarming development for the Tor community. When security researchers analyzed the code, they found a tiny Windows program hidden in a variable named “Magneto.” The code gathered the target’s MAC address and Windows hostname, and then sent them to a server in Virginia in a way that exposed the user’s real IP address. In short, the program nullified the anonymity that the Tor browser was designed to enable.

As they dug further, researchers discovered that the security hole the program exploited was already a known vulnerability called CVE-2013-1690—one that had theoretically been patched in Firefox and Tor updates about a month earlier. But there was a problem: Because the Tor browser bundle has no auto-update mechanism, only users who had manually installed the patched version were safe from the attack. “It was really impressive how quickly they took this vulnerability in Firefox and extrapolated it to the Tor browser and planted it on a hidden service,” says Andrew Lewman, executive director of the nonprofit Tor Project, which maintains the code.

The Freedom Hosting drive-by has had a lasting impact on the Tor Project, which is now working to engineer a safe, private way for Tor users to automatically install the latest security patches as soon as they’re available—a move that would make life more difficult for anyone working to subvert the anonymity system, with or without a court order.

Unlike with Operation Torpedo, the details of the Freedom Hosting drive-by operation remain a mystery a year later, and the FBI has repeatedly declined to comment on the attack, including when contacted by WIRED for this story. Only one arrest can be clearly tied to the incident—that of a Vermont man named Grant Klein who, according to court records, was raided in November based on an NIT on a child porn site that was installed on July 31, 2013. Klein pleaded guilty to a single count of possession of child pornography in May and is set for sentencing this October.

But according to reports at the time, the malware was seen not just on criminal sites, but on legitimate hidden services that happened to be hosted by Freedom Hosting, including the privacy-protecting webmail service TorMail. If true, the FBI’s drive-by strategy is already gathering data on innocent victims.

Despite the unanswered questions, it’s clear that the Justice Department wants to scale up its use of the drive-by download. It’s now asking the Judicial Conference of the United States to tweak the rules governing when and how federal judges issue search warrants. The revision would explicitly allow for warrants to “use remote access to search electronic storage media and to seize or copy electronically stored information” regardless of jurisdiction.

The revision, a conference committee concluded last May, is the only way to confront the use of anonymization software like Tor, “because the target of the search has deliberately disguised the location of the media or information to be searched.”

Such dragnet searching needs more scrutiny, Soghoian says. “What needs to happen is a public debate about the use of this technology, and the use of these techniques,” he says. “And whether the criminal statutes that the government relies on even permit this kind of searching. It’s one thing to say we’re going to search a particular computer. It’s another thing to say we’re going to search every computer that visits this website, without knowing how many there are going to be, without knowing what city, state or countries they’re coming from.”

“Unfortunately,” he says, “we’ve tiptoed into this area, because the government never gave notice that they were going to start using this technique.”

For more information follow the source link below.

Source: Wired

Researchers show how to turn a phone’s gyroscope into a crude microphone for eavesdropping

Did you ever think your phone’s gyroscope could be used to monitor your conversations? Apparently it can. According to Wired, in a presentation at the Usenix security conference next week, researchers from Stanford University and Israel’s defense research group Rafael will demonstrate a way to eavesdrop on conversations using a phone’s gyroscope rather than its microphone. According to the report, gyroscopes, the sensors designed to measure a phone’s orientation, can be repurposed as crude eavesdropping sensors. Using a piece of software the researchers built called “Gyrophone,” they were able to make the gyroscope sensitive enough to pick up some sound waves, effectively turning it into a basic microphone. Worse, there is no way to deny apps access to the gyroscope the way users can for the mics built into phones.

“Whenever you grant anyone access to sensors on a device, you’re going to have unintended consequences,” Dan Boneh, a computer security professor at Stanford, told Wired. “In this case the unintended consequence is that they can pick up not just phone vibrations, but air vibrations.”

However, the technique isn’t that practical for actual eavesdropping, the report said, noting that it works only well enough to pick up a fraction of the words spoken near a phone. When the researchers tested the technique’s ability to discern the numbers 1 through 10 and the syllable “oh” in a simulation of how credit card numbers could be stolen, they could identify as many as 65 percent of digits spoken in the same room as the device by a single speaker. See the Wired article for details.
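The lossiness comes down to sampling rate. As a rough illustration (the 200 Hz gyroscope rate below is an assumption for the sketch, not a figure from the article), a sensor sampling at 200 Hz can faithfully capture only frequencies below its 100 Hz Nyquist limit, so most speech energy is either lost or aliased into the wrong frequency:

```python
import math

GYRO_RATE_HZ = 200             # assumed mobile gyroscope sampling rate
NYQUIST_HZ = GYRO_RATE_HZ / 2  # highest frequency the "microphone" can capture

def sample_tone(freq_hz, rate_hz, n_samples):
    """Sample a pure tone at the given rate, the way a gyroscope axis would
    crudely sample air-pressure-induced vibrations."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz)
            for k in range(n_samples)]

# A 130 Hz tone (a plausible low speech fundamental) lies above the 100 Hz
# Nyquist limit of a 200 Hz sensor, so it aliases down to |200 - 130| = 70 Hz:
# the sensor records *something*, but not a faithful copy of the speech.
aliased = sample_tone(130, GYRO_RATE_HZ, 400)
true_70 = sample_tone(70, GYRO_RATE_HZ, 400)
# The aliased capture is indistinguishable from a (phase-flipped) 70 Hz tone.
assert all(abs(a + b) < 1e-9 for a, b in zip(aliased, true_70))
```

This is why the researchers could only distinguish a small vocabulary of digits and syllables rather than transcribe arbitrary speech.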

Source: Fierce Wireless

Backdoors and surveillance mechanisms in iOS devices

image

This paper is actually about half a year old, give or take, but it has gotten a lot of attention recently because its author uploaded a PowerPoint from a talk about these matters, which is obviously a bit more accessible than a proper scientific journal article.

For instance, despite Apple’s claims of not being able to read your encrypted iMessages, there’s this:

“In October 2013, Quarkslab exposed design flaws in Apple’s iMessage protocol demonstrating that Apple does, despite its vehement denial, have the technical capability to intercept private iMessage traffic if they so desired, or were coerced to under a court order. The iMessage protocol is touted to use end-to-end encryption, however Quarkslab revealed in their research that the asymmetric keys generated to perform this encryption are exchanged through key directory servers centrally managed by Apple, which allow for substitute keys to be injected to allow eavesdropping to be performed. Similarly, the group revealed that certificate pinning, a very common and easy-to-implement certificate chain security mechanism, was not implemented in iMessage, potentially allowing malicious parties to perform MiTM attacks against iMessage in the same fashion.”
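Certificate pinning itself is straightforward to implement, which is what makes its absence from iMessage notable. A minimal sketch in Python (hypothetical names and fingerprint; real clients often pin the public key rather than the whole certificate) shows the core check:

```python
import hashlib
import socket
import ssl

def verify_pin(peer_cert_der, pinned_fingerprint):
    """Accept the peer only if the SHA-256 hash of its DER-encoded
    certificate matches the fingerprint baked into the client."""
    return hashlib.sha256(peer_cert_der).hexdigest() == pinned_fingerprint

def connect_with_pinning(host, port, pinned_fingerprint):
    """Open a TLS connection and reject it if the peer certificate does not
    match the pin -- even a CA-signed MiTM certificate fails this check."""
    ctx = ssl.create_default_context()
    sock = socket.create_connection((host, port))
    tls = ctx.wrap_socket(sock, server_hostname=host)
    der = tls.getpeercert(binary_form=True)
    if not verify_pin(der, pinned_fingerprint):
        tls.close()
        raise ssl.SSLError("certificate pin mismatch: possible interception")
    return tls

# The pin is computed once from the server's known-good certificate and
# shipped inside the app, so a substituted key from a directory server
# (or a MiTM proxy) no longer goes unnoticed.
known_good_cert = b"...DER bytes of the real server certificate..."
pin = hashlib.sha256(known_good_cert).hexdigest()
assert verify_pin(known_good_cert, pin)         # legitimate server passes
assert not verify_pin(b"attacker's cert", pin)  # substituted cert fails
```

Without a check like this, any certificate that chains to a trusted CA is accepted, which is exactly the gap Quarkslab described.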

There are also several services in iOS that facilitate organisations like the NSA, yet have no legitimate reason to be there. They are not referenced by any (known) Apple software, do not require developer mode (so they are not debugging tools or anything of the sort), and are available on every single iOS device.

One example of these services is a packet sniffer, com.apple.pcapd, which “dumps network traffic and HTTP request/response data traveling into and out of the device” and “can be targeted via WiFi for remote monitoring”. It runs on every iOS device. Then there’s com.apple.mobile.file_relay, which “completely bypasses Apple’s backup encryption for end-user security”, “has evolved considerably, even in iOS 7, to expose much personal data”, and is “very intentionally placed and intended to dump data from the device by request”.

This second one, especially, gave only relatively limited access in iOS 2.x, but in iOS 7 it has grown to give access to pretty much everything, down to “a complete metadata disk sparseimage of the iOS file system, sans actual content”, meaning time stamps, file names, names of all installed applications and their documents, configured email accounts, and a lot more. As you can see, the exposed information goes quite deep.

Apple is a company that continuously claims it cares about security and your privacy, yet it actively makes it easy to get at all your personal data. There’s a massive contradiction between Apple’s marketing fluff on the one hand and the reality of the access iOS provides to your personal data on the other, down to outright lies about Apple not being able to read your iMessages.

Those of us who aren’t corporate cheerleaders are not surprised by this in the slightest – Apple, Microsoft, Google, they’re all the same – but I still encounter people online every day who seem to believe the marketing nonsense Apple puts out. People, it doesn’t get much clearer than this: Apple does not care about your privacy any more or less than its competitors.

Source: OS News

Note: this is not mentioned in the original article, but it is definitely worth noting that there is at least one company out there that cares about your privacy and always has, and that is the leader in security: BlackBerry, of course. They should be recognized for how great they are, yet they continually get overlooked unless it is for something negative. BlackBerry for life! Best mobile OS is BlackBerry 10, period.

Crooks Seek Revival of ‘Gameover Zeus’ Botnet

image

Cybercrooks today began taking steps to resurrect the Gameover ZeuS botnet, a complex crime machine that has been blamed for the theft of more than $100 million from banks, businesses and consumers worldwide. The revival attempt comes roughly five weeks after the FBI joined several nations, researchers and security firms in a global and thus far successful effort to eradicate it.

The researchers who helped dismantle Gameover Zeus said they were surprised that the botmasters didn’t fight back. Indeed, for the past month the crooks responsible seem to have kept a low profile.

But that changed earlier this morning when researchers at Malcovery [full disclosure: Malcovery is an advertiser on this blog] began noticing spam being blasted out with phishing lures that included zip files booby-trapped with malware.

Looking closer, the company found that the malware shares roughly 90 percent of its code base with Gameover Zeus. Part of what made the original Gameover ZeuS so difficult to shut down was its reliance on an advanced peer-to-peer (P2P) mechanism to control and update the bot-infected systems.

But according to Gary Warner, Malcovery’s co-founder and chief technologist, this new Gameover variant is stripped of the P2P code, and relies instead on an approach known as fast-flux hosting. Fast-flux is a kind of round-robin technique that lets botnets hide phishing and malware delivery sites behind an ever-changing network of compromised systems acting as proxies, in a bid to make the botnet more resilient to takedowns.
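A toy model makes the round-robin idea concrete. The class below is entirely hypothetical (real fast-flux lives in an attacker-controlled DNS zone, not a Python object), but it simulates a zone that answers each lookup with a different short-TTL slice of a large pool of compromised proxies:

```python
class FastFluxZone:
    """Toy model of a fast-flux DNS zone: a large pool of compromised hosts
    act as proxies, and each DNS answer exposes only a short-lived, rotating
    handful of them. The low TTL forces clients to re-resolve constantly,
    so takedowns of individual proxies barely dent the infrastructure."""

    def __init__(self, proxy_pool, answers_per_query=3, ttl_seconds=180):
        self.pool = list(proxy_pool)
        self.n = answers_per_query
        self.ttl = ttl_seconds
        self._cursor = 0

    def resolve(self):
        # Round-robin through the pool so successive lookups see new proxies.
        answers = [self.pool[(self._cursor + i) % len(self.pool)]
                   for i in range(self.n)]
        self._cursor = (self._cursor + self.n) % len(self.pool)
        return {"A": answers, "TTL": self.ttl}

pool = [f"203.0.113.{i}" for i in range(1, 21)]   # 20 hypothetical bot IPs
zone = FastFluxZone(pool)
first, second = zone.resolve(), zone.resolve()
assert first["A"] != second["A"]   # each lookup returns a different proxy set
```

Blocking the three addresses from any single lookup accomplishes little, which is the resilience Warner describes.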

Like the original Gameover, however, this variant also includes a “domain name generation algorithm” or DGA, which is a failsafe mechanism that can be invoked if the botnet’s normal communications system fails. The DGA creates a constantly changing list of domain names each week (gibberish domains that are essentially long jumbles of letters).

In the event that systems infected with the malware can’t reach the fast-flux servers for new updates, the code instructs the botted systems to seek out active domains from the list specified in the DGA. To regain control over his crime machine, all the botmaster needs to do is register just one of those domains and place the update instructions there.
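The article doesn’t describe Gameover’s actual algorithm, but the mechanics of a DGA are easy to illustrate. The sketch below is a hypothetical DGA, not Gameover’s: it derives a reproducible weekly list of gibberish domains from the calendar date, so every bot and the botmaster can compute the same list independently, with no prior communication:

```python
import hashlib
from datetime import date

def generate_domains(day, count=10, tlds=(".com", ".net", ".org")):
    """Derive a deterministic list of pseudo-random domains for the ISO week
    containing `day`. Any party running the same code for the same week gets
    the same list, which is what makes a DGA a viable rendezvous mechanism."""
    year, week, _ = day.isocalendar()
    domains = []
    for i in range(count):
        # Hash the (year, week, index) tuple for reproducible "gibberish".
        digest = hashlib.md5(f"{year}-{week}-{i}".encode()).hexdigest()
        # Map hex digits onto letters so each label is a jumble of letters.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:16])
        domains.append(label + tlds[i % len(tlds)])
    return domains

# Two dates in the same ISO week yield the identical domain list.
assert generate_domains(date(2014, 7, 10)) == generate_domains(date(2014, 7, 11))
```

Defenders would have to pre-register or sinkhole every domain on every weekly list to keep the botmaster locked out; the botmaster only needs one.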

Warner said the original Gameover botnet that was clobbered last month is still locked down, and that it appears whoever released this variant is essentially attempting to rebuild the botnet from scratch. “This discovery indicates that the criminals responsible for Gameover’s distribution do not intend to give up on this botnet even after suffering one of the most expansive botnet takeovers and takedowns in history,” Warner said.

Gameover is based on code from the ZeuS Trojan, an infamous family of malware that has been used in countless online banking heists. Unlike ZeuS — which was sold as a botnet creation kit to anyone who had a few thousand dollars in virtual currency to spend — Gameover ZeuS has since October 2011 been controlled and maintained by a core group of hackers from Russia and Ukraine. Those individuals are believed to have used the botnet in high-dollar corporate account takeovers that frequently were punctuated by massive distributed-denial-of-service (DDoS) attacks intended to distract victims from immediately noticing the thefts.

According to the U.S. Justice Department, Gameover has been implicated in the theft of more than $100 million in account takeovers, and the author of the ZeuS Trojan (and by extension the Gameover Zeus malware) is allegedly a Russian citizen named Evgeniy Mikhailovich Bogachev.

For more details, check out Malcovery’s blog post about this development.

For more information follow the source link below. 

Source: Krebs on Security

Android malware tool iBanking commands $5000 price for attackers

image

Evolving malicious tool adopts service model, grows increasingly complex

The market for malware tools is expanding to include pre-made tools purchased for a hefty fee from underground developers. One such tool aimed at Android, iBanking, promises to carry out a number of malicious actions, including intercepting text messages, stealing phone information, pulling geolocation data and constructing botnets from infected devices. The program commands $5,000, even after its source code leaked earlier in the year.

The iBanking malware has evolved from simply stealing SMS information into a much larger Trojan tool for would-be data thieves. Applications injected with the iBanking code have hit the marketplace disguised as legitimate banking and social media apps to convince users to install them.

The apps often appear to users who have already been infected on desktop machines, prompting them to fill in personal information which then leads to an SMS message with a download link. Once the app is downloaded and installed, it begins feeding information to the attacker.

According to Symantec, the tool is “one of the most expensive pieces of malware” the company has seen, especially for one that sets up a service business. Other malware applications have paved the way for things like customer support and HTML control panels, but not at such a high price.

Part of the larger problem with iBanking is that it resists most attempts to reverse engineer the software, which also shields it from those trying to craft similar tools, according to an article from Ars Technica. iBanking uses encryption and code obfuscation to hide the commands and actions it carries out, which prevents researchers from breaking down how the malware works and keeps others from using the code to clone more software.

Source: Electronista