Wednesday, May 31, 2017
News in brief: NASA sends probe to the Sun; subway gets phone coverage; Facebook pushes back
Your daily round-up of some of the other stories in the news
NASA to send probe to the Sun
Boldly planning to go where no human – or spacecraft – has gone before, NASA is to send a probe 93m miles to the Sun. The probe, which will launch next year, has been named the Parker Solar Probe in honour of Eugene Parker, the astrophysicist who predicted the stream of plasma that flows out from the Sun and into space, known as the solar wind.
The probe was announced earlier this week, but NASA said on Wednesday that it had decided to name the probe after Parker at a ceremony at the University of Chicago where Parker, who turns 90 in just over a week, is the S. Chandrasekhar Distinguished Service Professor Emeritus.
Parker said: “The solar probe is going to a region of space that has never been explored before. It’s very exciting that we’ll finally get a look. One would like to have more detailed measurements of what’s going on in the solar wind. I’m sure there will be some surprises. There always are.”
The probe is due to launch in July next year and will get as close as 3.9m miles from the solar surface, where it will have to withstand temperatures of up to 2,500 degrees Fahrenheit.
Subway to get cellphone coverage
As any visitor to London – or native Londoner – knows, talking to other human beings is something of a taboo on the British capital’s subway system, known as the Tube. Chatting to another passenger is right up there with standing on the left of the escalator (that’s the side you use to walk up or down), not moving down inside the carriages to make room for others and stealing candy from babies.
So the news that it could soon be possible to have a mobile phone conversation on the Tube has predictably been greeted as one of the worst things possible by Londoners: Twitter users said it was a “truly horrific idea”, the “worst idea ever” and bemoaned “the horror”.
Tube passengers already have Wi-Fi at most stations – though not in the tunnels between stations – and the move to extend mobile connectivity to the network is the result of an initiative from the mayor, Sadiq Khan, who is due to invite bids from telecoms providers next week, said the FT.
Facebook warns of effect of new law
Facebook continued to push back against moves across the EU to curb the spread of fake news and hate speech earlier this week, criticising a new German law that could force Facebook and other social media providers to pay a fine of up to €50m if they don’t take down infringing content within 24 hours.
Facebook warned that the new law, which has been approved in Germany but hasn’t yet come into force, could mean legal content would be deleted, saying: “The draft law provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fines.”
The California company made the not unreasonable point to Engadget that the law “would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies”, and added that it believes that the proposal isn’t compliant with EU law.
from Naked Security http://ift.tt/2qGyaKD
Wolf in sheep’s clothing: a SophosLabs investigation into delivering malware via VBA
Thanks to Graham Chantry of SophosLabs for the behind-the-scenes work on this article.
The document threat landscape has in recent years been dominated by Microsoft Word and Excel spreadsheet malware. This is thanks, in no small part, to the drastic resurgence of Visual Basic for Applications (VBA) being used as a delivery method for malicious payloads.
It’s a topic we’ve delved into before, most notably in this article from senior Sophos technologist Paul Ducklin. As he did back then, researcher Graham Chantry recently dug into the data and mechanics of the trend as seen from SophosLabs’ perspective for an updated picture of the problem. What follows are his findings from the last six months.
By the numbers
First, some statistics that show the current state of affairs. In the pie chart on the left, we see that 68% of the files used to deliver malware in the last six months were Word documents; Excel spreadsheets accounted for 15% and PDFs for 13%. When it comes to the threat type, the right-hand chart shows that 81% of these threats are VBA-based, while embedded droppers account for 10% and phishing for 6%.
VBA droppers first started to surface in July 2014 and became synonymous with the banking Trojan Dridex when its operators started using them in aggressive spam campaigns. Since then, we have seen VBA droppers used with a variety of other payloads, and the droppers themselves have evolved from simple 10-line scripts to verbose, complex and heavily obfuscated code.
And it’s not just the code that the bad guys have experimented with. In the same time period, Chantry said SophosLabs has seen attackers utilize a variety of file formats, such as the short-lived Office 2003 Standalone XML format, the MHTML Web Archive format and, in much rarer cases, embedding Office files within other document formats such as RTF and PDF.
The Matryoshka doll approach
The latter of these file formats has actually become far less rare. In just the last few weeks SophosLabs noticed a significant increase in the number of ransomware campaigns housing VBA droppers in PDF documents.
SophosLabs discovered one spam campaign where ransomware was downloaded and run by a macro hidden inside a Word document that was in turn nested within a PDF, like a Russian matryoshka doll. The ransomware in this case appeared to be a variant of Locky.
Most antivirus filters know how to recognize suspicious macros in documents, but hiding those documents inside a PDF could be a successful way to sidestep them.
These attachments arrived in spam emails where the body was entirely empty, but the subjects started with either “Document”, “File” or “Copy” followed by a series of random numbers (File_78564545). The distinct lack of social engineering suggests the crooks are relying on curiosity alone to get victims to open the enclosed PDF.
The PDF attachments themselves appear to always have a nonsensical filename such as “nm.pdf” (as shown in the screenshot above). If the recipient is naïve enough to open this attachment it will trigger the infection.
But before we replicate that infection, let’s have a look at what’s actually inside this PDF file.
Anatomy of a malicious PDF
SophosLabs started by opening nm.pdf in a text editor, and with little effort spotted an immediate red flag: the file contains an OpenAction event (see screenshot below). An OpenAction event defines what will happen when the user first opens the document. In this case, the PDF reader will execute a JavaScript function called submarine. So what does this submarine function actually do? In order to find its definition, SophosLabs had to parse the remainder of the PDF.
PDF files consist of objects that define all aspects of the document’s content, such as images, fonts and of course the actual text. The OpenAction screenshot (above) can also help illustrate the format of a simple PDF object. Each object starts with a unique Index number (in this case decimal 14) and a version number (in this case version 0). The actual contents of the object are housed between the header obj and the footer endobj.
PDF objects can also indirectly reference each other and they do so via these unique index numbers.
In the screenshot above we can see that Object 14 (which holds our OpenAction event) references object 13, which itself references object 11, which references object 7, which finally references object 6. Object 6 is what is known as a “stream object” and the format tells us that it is 380 bytes in length and that its content is Flate encoded. This content is illegible when viewed in a text editor, so SophosLabs deflated it.
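For readers who want to follow along, here is a minimal sketch of how one might pull out and inflate such a stream with Python. The filename and object number come from this sample; the naive regex is purely illustrative and no substitute for a proper PDF parser.

```python
# Minimal sketch: locate object 6's stream in nm.pdf and inflate it.
# A naive regex is used for illustration; real analysis tools walk the
# PDF cross-reference table rather than scraping the raw bytes.
import re
import zlib

with open("nm.pdf", "rb") as f:
    raw = f.read()

# Everything between the 'stream' and 'endstream' keywords of object 6
match = re.search(rb"\b6 0 obj.*?stream\r?\n(.*?)endstream", raw, re.DOTALL)
if match:
    stream_data = match.group(1).strip(b"\r\n")
    decoded = zlib.decompressobj().decompress(stream_data)  # FlateDecode is zlib deflate
    print(decoded.decode("latin-1"))                        # reveals the embedded JavaScript
```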
The screenshot above is Object 6’s deflated stream and right at the bottom is that submarine function for which Labs was searching. Unlike most modern JavaScript malware, this code was very straightforward, with little to no obfuscation.
Submarine consists of a single call to abc, which is a pointer to the inbuilt exportDataObject API. This API extracts an embedded file (in this case HGG4X.docm) and saves it to disk. If the nLaunch argument is non-zero the application will also open the extracted file in the default application. In this case the value of nLaunch is set to 2 which will result in the embedded file being saved to a temporary directory and then opened.
The next question was: where is HGG4X.docm? By tracking back to the root object (14), SophosLabs saw that Object 13 not only references JavaScript in object 11 but it also references “Embedded Files” in object 12.
Chantry said:
Unlike most document malware these days, the social engineering effort leaves a lot to the imagination: it simply asks you politely to open the embedded document. But if our user is naive enough to open an attachment from an unknown recipient, there is a good chance they’ll be naive enough to follow these instructions. We click “OK”; the JavaScript completes its mission and HGG4X.docm is dropped and opened into Microsoft Word.
As we anticipated, the second the user opens the attachment, the JavaScript kicks in and attempts to open the embedded VBA document. It’s not plain sailing, however, as Adobe Reader identifies that this might be something malicious and suspends the action. In order for the infection to continue, the user has to explicitly approve it.
Only two lines into the program, Labs found the first indicator of something malicious: an unconditional jump. As there is no label between the Goto statement and where it’s jumping to, the code wedged between them is unreachable (aka dead code). This is not very common in clean files, as developers will usually remove unused code. The trick is, however, very common among VBA downloaders, and aims to confuse analysts trying to reverse-engineer the code. Unlike most samples that use this trick, the dead code in this file appears to be clean code snippets, likely taken from MSDN or other online resources.
Ingenious methods
Jumping over the junk code we see that Synomati starts by creating an Object (of type Cooper) and immediately calls one of its methods. Strangely, though, it doesn’t reference the method directly; instead it uses the VBA function CallByName. This technique allows the caller to specify the name of the method as a string argument rather than hardcoding it. In this code, that name is stored in a TextBox component located on a VB form called Window1.
Above is the Window1 VB Form as it appears in the Visual Basic editor. The red boxes indicate their names. Various attributes of these components are referenced throughout the program’s code.
Storing strings within form components is an ingenious method of concealing the true intentions of malicious code, as it’s often the strings that give the game away – eg suspicious IP addresses or calls to processes such as powershell.exe. We first started to see samples using this method in early 2016, but the majority of VBA droppers still prefer to obfuscate their strings, usually with some variety of XOR, Base64 or RC4.
The CallByName function call from the previous screenshot was referring to the Text field of the TextBox T2. As seen in the bottom right corner of the form, that is the string “ratatu”. By searching for that expression in the Cooper class’s implementation, Labs found the method.
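To make that indirection concrete, here is a rough analogue of the same string-based dispatch in Python, with getattr standing in for CallByName. The class and method names mirror those in the sample, but the snippet itself is illustrative, not the macro’s actual code.

```python
# Illustrative only: string-based method dispatch in the style of CallByName.
# A scanner looking for a literal "ratatu()" call site never sees one,
# because the method name lives in data (here a plain variable; in the
# macro, the Text field of a form TextBox).
class Cooper:
    def ratatu(self):
        return "next stage of the dropper would run here"

method_name = "ratatu"                 # pulled from Window1.T2.Text in the real sample
obj = Cooper()
result = getattr(obj, method_name)()   # VBA equivalent: CallByName(obj, method_name, VbMethod)
print(result)
```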
Just like its caller, ratatu also references strings stored in form components. This time though it’s in the Tag field of the ComboBox imaginatively named ffrrggbb.
The attribute isn’t visible from the Form Designer View so Labs needed to look at the properties tab for ffrrggbb. As you can see at the bottom of the screenshot below, the Tag field contains a long jumbled string.
Ratatu uses the VBA split function to divide this string into an array of smaller strings using the delimiter “FSUKE.”
The resulting array is a veritable who’s who of VBA dropper strings and, based on this information alone, Labs confidently predicted that the code was likely to download and run something. The array is stored in the variable AsStringName which is global in scope. This means it will be accessible from every other subroutine or function.
Another global variable is Vaucher, which is assigned on the following line. The value it’s set to is the string at offset 0 of this newly created array, “Microsoft.XMLHTTP”. This is because FreshID is a constant set to 0, so (0 + 0 * 2 / 13) is just a deliberately verbose way of writing 0.
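Reproducing that deobfuscation outside Word is trivial once the Tag string has been dumped. The sketch below is a hypothetical reconstruction: the delimiter and the index arithmetic are taken from the walkthrough, while the Tag value itself is a stand-in built from the four ProgIDs named later in the analysis, not the sample’s literal string.

```python
# Hypothetical reconstruction of the string deobfuscation.
# The real Tag value is not reproduced here; this stand-in simply joins the
# four ProgIDs the dropper is described as using, with the quoted delimiter.
tag_value = "FSUKE.".join([
    "Microsoft.XMLHTTP",      # HTTP download
    "Adodb.Stream",           # write payload to disk
    "WScript.Shell",          # look up %TEMP%
    "Shell.Application",      # launch the decrypted executable
])

as_string_name = tag_value.split("FSUKE.")                 # the global array AsStringName

FRESH_ID = 0                                               # the constant FreshID
vaucher = as_string_name[FRESH_ID + FRESH_ID * 2 // 13]    # (0 + 0*2/13) == 0
print(vaucher)                                             # -> "Microsoft.XMLHTTP"
```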
The function proceeds to call SubMui whose code can be seen in the screenshot below. The IF statement at the start of the subroutine is always true for this file (the ActiveDocument.Kind property is 0) so it proceeds to create 4 ActiveX objects using the strings from the global array we populated earlier.
Crucially, SubMui also generates another string array using the exact same Split method. This time, however, the delimiter is stored in the Label component named Command (string value “V”). This array is stored in the global variable MovedPermanently and contains four URIs that all point to the payload. We’ve censored parts of the domain names.
So we now have four ActiveX objects and an array of URIs to download, but SubMui isn’t finished. It also generates a path to the user’s “temp” directory, and it does so by calling the Environment method of the recently created WScript.Shell ActiveX object. This method returns a dictionary of environment variables for the current process. Using this object, the code looks up the value of the environment variable “Temp” and assigns it to the variable PUKALA_LAKOPPC.
At this point the code passed the baton to the misleadingly named MoveSheets subroutine, as its name is wholly unrepresentative of its functionality. This subroutine actually loops through the MovedPermanently array (which contains all those dodgy URIs) and calls SaveDataCSVToolStripMenuItem_Click for each one.
Although we hadn’t yet analyzed the SaveDataCSVToolStripMenuItem_Click subroutine at this point, Labs predicted that it was likely downloading something, as the Status field of the Microsoft.XMLHTTP object (stored in CuPro) is checked immediately after the call.
The HTTP status code 200 signifies that a request has been successful, so the IF condition here will raise a runtime error if a download was unsuccessful. In VBA, runtime errors can be caught and processed by error handlers defined using an On Error statement. MoveSheets defines an error handler at the label d13. All this label does, though, is call Next, which moves on to the next iteration of the For loop. Essentially, if the download fails for any reason we get the next URI in the array and carry on.
By implementing this functionality, the bad guys ensure that if one of their domains is taken down before the victim opens the document there are still three others in the queue waiting to serve up the payload.
Cooper’s Challenge
So, on to another misleadingly named subroutine, SaveDataCSVToolStripMenuItem_Click, which starts by creating a full path to the current URI using the “http” string hidden in the ZK component. As in the Synomati function, we also create a Cooper object to call its Challenge method. Note that the IF condition is redundant, as the parameter e is always less than 488.
Cooper’s Challenge method has a pretty basic implementation. It consists of two calls to the same subroutine, Vgux: the first with the parameter value set to 1 and the second with it set to 8. If we navigate to Vgux’s code we can see that its behavior is in fact dependent on these values.
If the parameter is set to 1, it calls the Open method (of our “Microsoft.XMLHTTP” object) to initialize it as a GET request and set the URI to download. The second time round, when the parameter is 8, it calls setRequestHeader to initialize the User-Agent field.
So when we return to SaveDataCSVToolStripMenuItem_Click we now know our “Microsoft.XMLHTTP” object is initialized with all the right values and is ready to go. Predictably the next operation is to call Send on the object which will initiate the download of the payload.
Regardless of whether the download succeeds or fails, the code flow returns to the MoveSheets subroutine. As we touched on before, if any failure occurs we simply retrieve the next URI in the array and repeat the process until one succeeds or we run out of URIs, whichever happens first.
In the case of the Labs’ investigation, the first HTTP download was successful so they proceeded to call the function Assimptota6, which immediately calls PUKALA_ProjectSpeed. Again the bad guys have attempted to complicate analysis by embedding dead code but, if we disregard that, it’s clear it’s just responsible for creating file paths for the dropped payload.
The function makes use of the temporary directory path stored in PUKALA_LAKOPPC (which we saw populated in SubMui) to generate two file paths, stored in the global variables PUKALA_Project and PUKALA_ProjectBBB. Note that the integer value ProjectDarvin, which is included in both file paths, signifies which URI served up the payload: 20 indicates the first URI, 22 the second, 24 the third, and so on. The actual file paths generated can be seen in the table below.
Returning to the caller, Assimptota6, we find more redundant code. The conditional branch highlighted in red can never be true, as the parameter NumHoja is always 22.
When we filter out this irrelevant code, we can see that this function uses the Adodb.Stream object we created earlier to write the payload to a file on disk. It does so by first opening a stream of binary data, populating it with the content of the download and writing that stream back to disk using the SaveToFile method.
You might have noticed that the path of the file being written to is PUKALA_ProjectBBB. Let’s pause the program just after the SaveToFile call and take a look at what was actually written to eewadro20. The contents don’t appear to be in any recognizable file format; they look like a series of random bytes, so we can make an educated guess that the payload is encrypted in some manner. Let’s resume the code execution and see how the program makes use of it.
Assimptota6 finishes up by calling the similarly titled Assimptota4. As the screenshot above shows, it consists of only two lines of code. Before delving into the subroutine call on the first line, we can look ahead to the second line to see if that gives us any clues to what it’s trying to do. This line of code uses the Shell.Application object we created earlier to run the file pitupi20.exe. Of course, this file doesn’t exist yet, so we know WidthA must be responsible for creating it. Looking at the arguments passed into WidthA only strengthens this assumption, as they include:
- the path to the file containing the encrypted payload
- the path to the Windows executable file that will be executed
- a string that appears to be some form of decryption key
When we jump into WidthA’s definition, we can see that it reads the contents of the encrypted payload into the byte array Gbbb and later writes this array into the Windows executable file. Sandwiched between these two operations is a call to the subroutine Subfunc, which ominously takes our payload byte array and the decryption key as arguments. So it’s no longer a question of whether this is a decryption routine, just a question of how it decrypts.
Stepping into SubFunc, Labs saw that it started by translating the decryption string “QOfPWKYMzQzNuuzBQGeax2Lkh3Y0oWEl” into an array of bytes using the VBA function StrConv. It then proceeded to perform an exclusive or (Xor) operation on each byte in the encrypted payload with the bytes in the decryption key array. Note the function Ashnorog is just a wrapper function for the expression bb Xor aa.
The diagram below shows the first 8 bytes of the encrypted payload array (at the top), the bytes in the decryption key array (in the middle) and the encrypted byte array after the Xor operation which we have renamed Decrypted Payload for readability.
In the first iteration of the loop CeLaP4 (the variable that is used to index the arrays) is set to 0. So we take the byte at index 0 of the Encrypted Payload (1c) and we Xor it with the byte at index 0 of the Decryption Key (51). The result of this operation (4D) is then written back into the encrypted payload array at index 0. The next iteration in the loop will Xor the bytes at index 1 (15 and 4F) and the result is written back to offset 1 (5A). This process continues until every byte in the array has been decrypted.
Note here that the Encrypted Payload array is larger than the Decryption Key array, so we can’t XOR at the same offset in both arrays for every iteration. The code caters for this by taking the index into the Encrypted Payload array modulo the length of the Decryption Key array. This means that once the index passes the last byte of the Decryption Key array, the next iteration wraps around to the first byte.
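The same loop is easy to re-implement outside the macro, which is handy if you have the dumped blob and would rather not let the VBA run to completion. A minimal Python sketch, using the key quoted above (the output filename is illustrative):

```python
# Repeating-key XOR, as described for SubFunc: each payload byte is XORed
# with the key byte at (index mod key length).
def xor_decrypt(payload: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

key = b"QOfPWKYMzQzNuuzBQGeax2Lkh3Y0oWEl"

with open("eewadro20", "rb") as f:      # the encrypted blob saved by SaveToFile
    encrypted = f.read()

decrypted = xor_decrypt(encrypted, key)
print(decrypted[:2])                    # b'MZ' (0x4D 0x5A), matching the diagram above
with open("decrypted_payload.bin", "wb") as f:
    f.write(decrypted)
```

Those first two decrypted bytes, 4D 5A, spell “MZ”, the signature of a Windows executable, which is consistent with what Labs found in the next step.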
At this point, Labs let the decryption loop complete in the debugger and paused it just after WidthA had written the decrypted payload to “pitupi.exe”. Opening this file in a binary editor, Labs finally had a Windows executable payload.
Resuming the program, Assimptota4 proceeded to launch the newly decrypted payload using the Shell.Application object.
The Windows executable now runs hidden in the background, looking for files of interest and encrypting them. After a very short period of time, the inevitable ransom note and wallpaper change follow.
Enter Jaff
This ransomware calls itself Jaff, and the bad news for the user is that those treasured family pictures and tomorrow’s big presentation have all been renamed with .jaff extensions and their contents replaced with encrypted blobs. Chantry said:
The code analyzed in this paper is a far cry from those simple VBA downloader templates we saw at the start of the VBA boom back in September 2014. These samples conceal their strings in Form components, pollute useful code with redundant code and encrypt their payloads until the very last minute. All of this is no doubt in a bid to bypass AV detection that will look for specific strings or functions. The fact that the functionality is split between so many procedures, however, and that it intermixes clean code with malicious, suggests that it is also trying to prevent analysts from building a narrative when reverse engineering it.
Now what?
Just why the bad guys have decided to start hiding VBA downloaders in PDF documents we can only speculate, but a good argument could be the tarnished reputation of Office documents as email attachments and perhaps a misguided perception that PDFs are somehow safer. IT administrators might by now have decided to automatically block VBA documents from entering their network, but it’s less likely they will have done so for PDFs. For want of a better analogy, it’s very much the wolf in sheep’s clothing.
Any AV vendor worth its salt can easily extract these embedded files, and this sort of attack requires the victim to have both PDF and Office software installed. That, paired with the need for another layer of social engineering, means there are plenty of reasons the trend might not continue. In fact, this isn’t even the first time Labs has seen Office document malware being paired with PDF: the notorious CVE-2012-0158 vulnerability was exploited using PDF as a parent file, but only briefly. Could VBA PDF files meet the same fate?
SophosLabs certainly isn’t betting against it.
Sophos detects the PDF and embedded Office Document as Troj/DocDl-IYE and the dropped Jaff payload as Mal/Ransom-FD. Our customers are protected.
from Naked Security http://ift.tt/2rF24DN
Improve Your Balance During Your Lunch Break
Ever wish you had better balance? Today’s workout gives you a chance to work on that skill. It won’t leave you sore or sweaty, though! These exercises are all about neuromuscular training: getting your nerves and muscles to work together so you can control your body precisely.
Our host Pahla Bowers starts us on the floor, so even if you have trouble here, you won’t fall far. Marching bridges and one-legged bridges give you a butt workout (take breaks if you need to) before we move on to bird dogs on all fours, and then some standing moves.
The moves get really challenging by the end, culminating in a one-legged exercise she calls the “drinky bird.” I made it through, but I have years of roller skating experience, which really helps. Nobody’s watching, so put a hand on a chair if you need a little extra stability during those standing exercises.
Pahla encourages us to stop the workout when it gets too difficult, and just repeat the moves you’ve done so far. The full sequence takes 16 minutes.
from Lifehacker http://ift.tt/2rF5C90
Keybase adds end-to-end encryption to messages on the web
Is Keybase the public key encryption platform that security mavens have been waiting for?
It’s been kicking around in slow-burning development for three years, during which time it has released a website, desktop app (Windows, Mac, Linux), mobile (Android, iOS) and chat apps. Last week came an extension to embed Keybase in the Chrome browser.
If this sounds like a standard messaging app mashup, what underpins Keybase is actually far more daring and, potentially, important – which is why we’re writing about it.
Keybase can be described as a system for users to generate a public encryption key (or upload their own existing ones) to verify their online identity with a high degree of certainty.
If this sounds a bit arcane, identity is the fundamental problem that lies at the root of many of security’s woes: nobody has any way of knowing someone is who they say they are and so must proceed based on risky assumptions.
Public key cryptography has tried to solve this by using either a hierarchy of trust (ie certificates verified by an authority) or a “web of trust” (ie a network of users who vouch for each other), the latter a concept made famous by PGP, Phil Zimmermann’s encryption software.
Web of trust sounds intriguing but turned out to be complex, which is why Keybase wants to reprise the idea – minus the hard corners.
Users verify their public key in Keybase through Twitter, Facebook, GitHub, Reddit, or Hacker News, each one boosting verification, the more the merrier. A hacker wanting to impersonate someone using a fake key would come up against a wall. In a sense, Keybase is a database of these proofs that verify a public identity.
Keybase wants to build security applications on top of this. With the new Chrome extension loaded, a blue button appears on the profiles of each registered service (such as Twitter) that allows Keybase users to DM each other with end-to-end security.
It also functions as a sort of social network that tells people how to communicate with someone using public keys, including initiating secure file exchanges. Users can follow one another and use keys to communicate securely.
For now, Keybase remains a work in progress. Marketing and documentation aren’t great for a company that had a $10.8m funding round in 2015, perhaps because it doesn’t want an influx of users at this stage.
Keybase might simply be trying to build a set of security capabilities that popularise public key encryption, or it might be trying to create a bigger platform that could be used in a number of ways by third parties. It’s not yet clear.
The biggest challenge will be to get users engaged in a world where some of what Keybase does is already covered, albeit imperfectly, by apps such as WhatsApp. Verification, identity and public-private keys are all very well, but most users don’t understand their significance – or don’t care. Two decades ago, PGP struggled to break out for similar reasons. Security can’t afford to let history repeat itself.
from Naked Security http://ift.tt/2qG51iF
Vulnerability affecting 1,000+ apps is exposing terabytes of data
A newly discovered backend data exposure vulnerability, dubbed HospitalGown, highlights the connection between mobile apps and insecure backend databases.
Appthority documented more than 1,000 apps with this vulnerability, and researched in detail 39 applications with big data leaks, which exposed an estimated 280 million records. These records were accessible because of weakly secured backends that did not require any authentication to access the data.
“HospitalGown poses a direct risk to enterprises, opening them to an easy breach, exfiltration of sensitive data, and the costs from remediation, lawsuits, compliance infractions and loss of brand trust,” said Seth Hardy, Appthority Director of Security Research. “No amount of on-device application security can make up for relaxed security where the application stores user data. A breach at the backend takes the magnitude of the threat from being focused on a handful of devices to a much broader exposure for an entire enterprise, which could result in big data leaks or ransom of sensitive data.”
Researchers analyzed the network traffic of over a million enterprise mobile iOS and Android apps and discovered over 21,000 open Elasticsearch servers with unprotected data connected to apps frequently found on enterprise devices.
Key findings
- Affected apps are connecting to unsecured data stores on popular enterprise services, such as Elasticsearch and MySQL, which are leaking large amounts of sensitive data
- Apps using just one of these services revealed almost 43TB of exposed data
- Multiple affected apps leaked some form of PII, including passwords, location, travel and payment details, corporate profile data (including employees’ VPN PINs, emails, phone numbers), and retail customer data
- In multiple cases, data has already been accessed by unauthorized individuals and ransomed
- Even apps that have been removed from devices and the app stores still pose an exposure risk due to the sensitive data that remains stored on unsecured servers.
“The HospitalGown vulnerability isn’t just theoretical. Hundreds of apps are leaking terabytes of data, all due to simple human error – failure to secure the backend data stores. We recommend that, where possible, enterprises refrain from using apps that access or send sensitive information, particularly if the data is not encrypted in transit and at rest. If the use of an app impacted by HospitalGown is necessary, we suggest contacting the app developer or vendor to verify that the backend server has been secured,” said Seth Hardy, Director, Security Research at Appthority.
from Help Net Security http://ift.tt/2rUEtPs
Hackers blackmail patients of cosmetic surgery clinic
Hackers have been trying to blackmail patients of a Lithuanian plastic surgery clinic by threatening to publish their nude “before and after” photos online.
The breach and the leak
The photos were stolen earlier this year, along with other sensitive data – passport scans, national insurance numbers, etc. – from the servers of Grozio Chirurgija, which has clinics in Vilnius and Kaunas.
According to The Guardian, the stolen data was first offered for sale in March. At that time, the hackers, who call themselves “Tsar Team,” released a small portion of the database to prove the veracity of their claims and to entice buyers.
They asked for 300 bitcoin for the entire lot, and at the same time contacted some of the affected patients directly, offering to delete the sensitive data for a sum that varied between €50 and €2,000 (in bitcoin).
Apparently, among the patients of the clinic were celebrities, both Lithuanian and foreign, and individuals from various European countries, including 1,500 from the UK.
It is unknown if any of them paid the ransom, but the clinic did not try to buy back the stolen data. Instead, they called in the Lithuanian police, CERT and other authorities to help them prevent the spread of the data online, and to find the culprits.
They’ve also asked the affected patients to notify the police if they receive a ransom demand from the hackers; to notify news portals, forums or social networking sites of any links to the stolen data that may have been posted in comments on their sites and ask them to remove them; and to do the same if they find such a link through Google Search.
In the meantime, the hackers decided to leak online over 25,000 of the private photos they have stolen, more than likely in an attempt to force the affected patients’ hand and get at least some money.
Who are the hackers?
It’s interesting to note that the name of the hacker group – Tsar Team – is also a name that has been associated with the Pawn Storm attackers (aka APT28, aka Sofacy), a Russian cyberespionage group that has targeted a wide variety of high-profile targets, including NATO, European governments, the White House, and so on.
It is unclear, though, if this is the same group. Given that it is a very unusual target for APT28, it’s possible that these attackers have simply used the name to add weight to their demands.
from Help Net Security http://ift.tt/2rEa6Ne
Post-Quantum RSA
Interesting research on a version of RSA that is secure against a quantum computer:
Post-quantum RSA
Daniel J. Bernstein, Nadia Heninger, Paul Lou, and Luke Valenta
Abstract: This paper proposes RSA parameters for which (1) key generation, encryption, decryption, signing, and verification are feasible on today's computers while (2) all known attacks are infeasible, even assuming highly scalable quantum computers. As part of the performance analysis, this paper introduces a new algorithm to generate a batch of primes. As part of the attack analysis, this paper introduces a new quantum factorization algorithm that is often much faster than Shor's algorithm and much faster than pre-quantum factorization algorithms. Initial pqRSA implementation results are provided.
from Schneier on Security http://ift.tt/2sdI0FU
Chrome bug that lets sites secretly record you ‘not a flaw’, insists Google
Remember last year’s Google Chrome bug that gave pirates a way to steal streaming movies?
Well, we’re ready for our closeup, Mr DeMille! This time, we’re potentially the stars of hackers’ movies: there’s a Google Chrome “bug” (depending on who you ask) that allows sites to surreptitiously record audio and video, all without an indicator light.
As BleepingComputer reports, AOL web developer Ran Bar-Zik discovered the issue – which Google says is not a security vulnerability – while at work, when he was dealing with a website that ran WebRTC code.
WebRTC is a protocol for streaming audio and video content over the internet in real time via peer-to-peer connections.
On the “this is not a security bug” side of the coin, a user first has to grant a site permission before it can access audio and video. After a site receives permission to stream audio and video, it can run JavaScript code that records audio or video content before it sends the content to other participants of a WebRTC stream, as Bar-Zik’s bug report explains.
The thing is, the JavaScript doesn’t have to run in the same tab as where the permission was granted. It can record on a separate tab that doesn’t display the graphical red dot that indicates that WebRTC is recording. Thus, after permission is given, the site can listen to the user whenever it – or a hacker – wants to.
The recording process is done via the JavaScript-based MediaRecorder API, according to BleepingComputer.
Bar-Zik reported the issue and heard back from Google on the same day. Its argument was that the red circle and dot recording icon aren’t present in all versions of Chrome, so the real way to defend against an attack would be in the permissions popup. Google’s take on it:
This isn’t really a security vulnerability – for example, WebRTC on a mobile device shows no indicator at all in the browser. The dot is a best-first effort that only works on desktop when we have chrome UI space available. That being said, we are looking at ways to improve this situation.
Bar-Zik doesn’t buy it. He says that it would be pretty easy to trick a victim who’s suffering from “I’m not reading another pop-up, I’ll just click OK” permissions fatigue.
“Real-world attacks aren’t going to be very obvious,” he told BleepingComputer. From the writeup:
For example, Bar-Zik argues that an attacker could use very small popups to launch the attack code. This code can use the camera for a millisecond to take a user’s picture, or for hours, recording the user’s movements or nearby audio.
If the user doesn’t notice the popup in his toolbar, there’s no visual indicator to cue him that someone is accessing his audio and video components. One of the sneakiest scenarios would be if the attacker disguised the popup as a mundane ad. If the user doesn’t immediately close the ad’s popup, the attacker is left with a surveillance channel open on the user’s PC.
An attacker wouldn’t even have to create a website to steal the recording permission, he said. Rather, they could exploit a cross-site scripting (XSS) flaw – also known as one of the web attacks that refuse to die – on legitimate websites that have already been granted audio and video access.
Bug? Not bug? You can decide for yourself: Bar-Zik has put up a harmless demo that asks you for permission, launches a popup when you click OK, records 20 seconds of audio, and provides a download link for the recorded file.
The proof-of-concept code is also available for download here.
from Naked Security http://ift.tt/2qzH2qd
Cisco and IBM Security announce services and threat intelligence collaboration
In a new agreement, Cisco and IBM Security will work closer together across products, services and threat intelligence for the benefit of customers.
Cisco security solutions will integrate with IBM’s QRadar to protect organizations across networks, endpoints and cloud. Customers will also benefit from the scale of IBM Global Services support of Cisco products in their MSSP offerings. The agreement also establishes a new relationship between the IBM X-Force and Cisco Talos security research teams who will begin collaborating on threat intelligence research and coordinating on major cybersecurity incidents.
“In cybersecurity, taking a data-driven approach is the only way to stay ahead of the threats impacting your business,” said Bill Heinrich, Chief Information Security Director, BNSF Railway. “Cisco and IBM working together greatly increases our team’s ability to focus on stopping threats versus making disconnected systems work with each other. This more open and collaborative approach is an important step for the industry and our ability to defend ourselves against cybercrime.”
Integrating threat defenses across networks and cloud
As part of the collaboration, Cisco will build new applications for IBM’s QRadar security analytics platform. The first two new applications will be designed to help security teams understand and respond to advanced threats and will be available on the IBM Security App Exchange. These will enhance user experience, and help clients identify and remediate incidents more effectively when working with Cisco’s NGFW, NGIPS and AMP, and Threat Grid.
In addition, IBM’s Resilient Incident Response Platform (IRP) will integrate with Cisco’s Threat Grid to provide security teams with insights needed to respond to incidents faster. For example, analysts in the IRP can look up indicators of compromise with Cisco Threat Grid’s threat intelligence, or detonate suspected malware with its sandbox technology. This enables security teams to gain incident data in the moment of response.
“IBM has long been a proponent of open collaboration and threat sharing in cybersecurity,” said Marc van Zadelhoff, general manager, IBM Security. “With Cisco joining our immune system of defense, joint customers will greatly expand their ability to enhance their use of cognitive technologies like IBM Watson for Cybersecurity. Also, having our IBM X-Force and Cisco Talos teams collaborating is a tremendous advantage for the good guys in the fight against cybercrime.”
Threat intelligence and managed services
IBM X-Force and Cisco Talos research teams will collaborate on security research aimed at addressing the cybersecurity problems facing mutual customers by connecting their leading experts. For joint customers, IBM will deliver an integration between X-Force Exchange and Cisco’s Threat Grid. This integration greatly expands the historical and real-time threat intelligence that security analysts can correlate for insight.
For example, Cisco and IBM recently shared threat intelligence as part of the recent WannaCry ransomware attacks. The teams coordinated their response and researchers exchanged insights into how the malware was spreading.
Through this expanded collaboration, IBM’s Managed Security Services team, which manages security for over 3,700 customers globally, will work with Cisco to deliver new services. One of the first offerings is designed for the growing hybrid cloud market. As enterprise customers migrate security infrastructure to public and private cloud providers, IBM Security will provide Managed Security Services in support of Cisco security platforms in leading public cloud services.
from Help Net Security http://ift.tt/2rnC2E7
Attacks within the Dark Web
For six months, Trend Micro researchers operated a honeypot setup simulating several underground services on the Dark Web. The goal of their research was to see whether those hidden services would be subjected to attacks.
Hidden services under attack
The setup consisted of:
- A closed, invite only black market
- A blog offering customized services and solutions for the Dark Web
- An underground forum that could be used only by invited, registered members
- A (misconfigured) private file server that allowed access via FTP and SSH.
Each honeypot sported one or more vulnerabilities, to make successful attacks likely. The researchers automatically recorded all logs after every compromise and restored the environment to a clean state each day, to await more attacks.
What the researchers discovered
One discovery that was made pretty quickly is that the attacks did not come only from the Dark Web.
“Tor proxies like Tor2web made Tor hidden services reachable without requiring any additional configuration from the public internet. Our honeypot was automatically made available to traditional search engines, and implicitly dangled as a target for automated exploitation scripts,” the researchers shared.
In one month, the number of attacks spiked to over 170 per day – most of them successful.
“The majority of these attacks added web shells to the server, giving the attacker the ability to run system commands on our honeypot. This allowed the addition of other files, such as web mailers, defacement pages, and phishing kits,” they noted.
Using compromised hidden services for DDoS or spam attacks is a sweet deal for attackers, as the origin of the attack is automatically anonymized by Tor.
After they began filtering traffic from Tor proxies, the attacks decreased, and were limited to attackers from within the Dark Web.
Manual attackers
Unlike attacks from the “outside,” which were mostly performed with automated tools, Dark Web attackers preferred to tread more slowly and cautiously, and their attacks were manual.
“For example, once they gained access to a system via a web shell, they would gather information about the server first by listing directories, checking the contents of databases, and retrieving configuration/system files,” the researchers explained.
“These manual attackers often deleted any files they placed into our honeypot; some even went ahead and left messages for us (including ‘Welcome to the honeypot!’), indicating that they had identified our honeypot.”
Aside from defacements, which often functioned as promotion for competitor sites, the attackers also went after confidential data stored on the honeypot FTP file server, tried to hijack and spy on the communications originating to and from the honeypots, and targeted the forum application.
from Help Net Security http://ift.tt/2scOQez
Balancing act: Ensuring compliance with GDPR and US regulations
The impending GDPR, which will go into effect in a little less than a year from now, is going to have a significant impact on enterprise cybersecurity and data governance policies and practices beyond the European Union, particularly for global organizations based in the United States that handle data on EU citizens and residents.
Because of this, American companies with a global reach should take the GDPR seriously and start the process of implementing the necessary technologies, processes and people as soon as possible to ensure they are ready to comply with the law once it goes into effect on May 25, 2018. They must also make sure that this potentially monumental task doesn’t take away from efforts focused on ensuring compliance with their own stateside regulations.
As part of GDPR, many types of personally identifiable information (PII) will be protected, such as banking information, health records and government identity records, as well as any data that can be tied back to a data subject such as geo-location data from a cell phone, home address or data from a medical device. Organizations will need to gain a complete picture of all data that is collected, stored or processed. After that, companies must ensure that adequate means of protecting that data have been implemented, such as access being restricted to authorized personnel, proper authentication being used, proper procedures for backing up and archiving data and data retention and destruction policies. In addition, any third parties that have access to the data must be evaluated to ensure they too have adequate controls in place.
It also features lofty notification requirements modeled loosely after U.S. breach notification laws – the biggest difference being a new, shortened 72-hour time frame, which promises to be a major challenge for many organizations.
The US, of course, does not have an over-arching data protection law. Data protection measures are buried within numerous laws and regulations. Breach notification, for instance, is not mandated by federal law. Instead, it comes down to numerous state laws, with California and Massachusetts having the most stringent requirements (both states are also home to some of the largest technology companies in the world).
Organizations based in the US that hold data on European customers now have the daunting task of keeping track of each US regulation, while ensuring that they become one hundred percent compliant with GDPR. Given the numerous new requirements mentioned above, it’s enough to make any seasoned IT or data governance professional dizzy. So how do you balance it successfully?
The good news is that GDPR’s requirements for data protection are in line with most regulations in the U.S. For example, there is nothing in the NIST Cybersecurity Framework that conflicts with the data protection practices required by GDPR.
These organizations should not treat Americans’ and Europeans’ data in different ways. This would mean purchasing specific storage systems for EU customers and putting different policies and enforcement structures in place to achieve two separate compliance goals. Keep in mind that U.S. courts rely on case law, which often establishes a best or common practice standard. If EU data was better protected than U.S. data, that would lead to potential liability in civil courts.
The best solution is to create a unified compliance regime that accommodates both arenas. Since GDPR is more extensive than U.S. requirements, this will entail increased information lifecycle management (ILM) efforts. Through an in-depth ILM approach, organizations will be able to better manage the immense amounts of data and metadata collected through an information system, tracking it from creation and initial storage to the time when it’s no longer needed and is destroyed, while at the same time providing specific criteria for managing the data storage.
When ILM is implemented, there will be automated processes to classify data into tiers according to policies. This will enable companies to automate the migration of data from one tier to another based on the criteria within the policies.
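What that policy-driven classification looks like in practice will vary by organization; the sketch below is a deliberately simplified, hypothetical illustration (the record fields, tiers and retention periods are invented for this example) of the kind of automated tiering decision an ILM process encodes.

```python
# Hypothetical illustration of policy-driven data tiering; the tiers,
# retention periods and record fields are invented for this example.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    contains_pii: bool
    last_accessed: date

def assign_tier(record: Record, today: date) -> str:
    """Decide which storage tier (or disposal action) a record belongs to."""
    age = today - record.last_accessed
    if record.contains_pii and age > timedelta(days=3 * 365):
        return "destroy"       # retention window for PII has expired
    if age > timedelta(days=180):
        return "archive"       # migrate to cheaper, long-term storage
    return "primary"           # keep on fast, frequently accessed storage

example = Record(contains_pii=True, last_accessed=date(2013, 6, 1))
print(assign_tier(example, today=date(2017, 5, 31)))   # -> "destroy"
```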
Once information is collected, the decision must be made to keep only data that has been explicitly asked for. All other data, such as time and geo-location, will likely be classified as PII under GDPR. During data storage and long-term archiving, care should be taken to understand where it all resides – is it moved to a third party? Who has access to it? Are there backups? Knowing the answers to these questions will go a long way toward remaining compliant with all necessary regulations.
At the end of the day, an organization’s CEO and Board of Directors are ultimately responsible for GDPR compliance and ensuring that practices are balanced with all other cybersecurity and data privacy regulations that must be adhered to depending on location and industry. This can be accomplished through effective, smart delegation – including hiring the right team and providing them with the necessary resources to be successful. If not done properly, global organizations will leave themselves incredibly vulnerable to huge fines and consequences.
This is no small task, but the countdown is on – May 25, 2018 will be here before we know it.
from Help Net Security http://ift.tt/2rmWlBU
Tuesday, May 30, 2017
Luminoodle: The Next Evolution Of String Lights
Looking to light your campsite, balcony, patio, bedroom, or the back of your TV? Power Practical has you covered.
The flagship of Power Practical’s lineup is the Luminoodle Basecamp, a twenty-foot-long (!), 3,000-lumen (!), waterproof, illuminating rope with lighting modes that would make a gaming keyboard blush. The Basecamp includes plugs for both AC outlets and car sockets, plus numerous straps and magnets to cover any hanging scenario.
I installed the Basecamp on my balcony railing, and am going to need another one for camping. Bring along something like the Anker PowerHouse, which we’ll be covering soon, to keep it going for ridiculous amounts of time.
The significantly cheaper Luminoodle Color is a quarter of the length of the Basecamp, outputs 450 lumens, and is powered via USB. That opens up a huge number of ways to power the Color, from external battery packs to solar panels to camping stoves.
Beyond outdoor uses, the Luminoodle Color is a great option for accent lighting in places like your bedroom or under your desk – places where you probably already have spare USB ports. For gamers already rocking RGB light shows on their mice, keyboards, headsets, and mouse pads, under-desk RGB lighting will blend nicely.
Bias lighting is one of our most popular product categories on Kinja Deals, and Power Practical’s take is the best we’ve used, with all the lighting modes from the rest of the Luminoodle collection in tow.
One of the weird quirks of bias lighting is that most options turn on whenever your TV does, including middle of the night firmware updates. Luminoodle’s bias lighting has a remote to shut it off if/when you need to.
One of the weird quirks of Power Practical products is that while their remotes are very responsive, everything uses the same signals. This is great if you want to control multiple products at once at a campsite, but you can imagine scenarios where you might want to turn off your headboard lighting but not your bias lighting.
We’ll also be checking out Power Practical’s Sparkr products as soon as we can get our hands on them.
from Lifehacker http://ift.tt/2re57mo
News in brief: no laptop ban from EU for now; China warns on new laws; bug bounty scheme for DHS
Your daily round-up of some of the other stories in the news
No laptop ban from Europe to US after all – for now
You’d be forgiven for finding it hard to keep up with the latest status of the US ban on laptops and other devices in the cabins of inbound aircraft. One minute it’s being extended; the next we’re being reassured that it’s not.
Politico reported on Tuesday that the US had decided not to extend the ban to flights from Europe, citing a call between the Department of Homeland Security and European officials.
However, that came just two days after John Kelly, the head of homeland security, had told Fox News that he was still considering banning devices on all international flights coming to the US.
Kelly said that there’s “a real threat” and added in response to a direct question to whether he was going to extend the ban: “I might… That’s really the thing that they are obsessed with, the terrorists, the idea of knocking down an airplane in flight, particularly if it’s a US carrier, particularly if it’s full of mostly US folks.”
So for now it looks as if you’ll be able to travel to the US without having to give up your Kindle, laptop and tablet – but that could change, noted Politico.
Big fines for businesses breaking China’s new laws
China warned on Monday that companies violating its strict new internet laws “will face hefty fines”. The laws, rubber-stamped in November, come into force on Thursday and tighten Beijing’s grip on news as well as banning internet companies from collecting and selling their users’ data.
Critics fear that the new laws will mean accounts on China’s social media platforms will be shut down, silencing dissenting voices. Meanwhile, foreign businesses operating in China are concerned about the requirement that user data be stored on servers in China, thus putting that data within easy reach of the government.
Foreign companies are concerned that the new laws could introduce new compliance hurdles for them in China, while the EU Chamber of Commerce has urged China to “delay the implementation of either the law or its relevant articles”.
DHS could get bug bounty scheme
The US Department of Homeland Security could have to implement a bug bounty programme under a bill introduced last week by two senators.
The bill, from Democrat senator Maggie Hassan and Rob Portman, a Republican, would establish a programme along the lines of the Hack the Pentagon scheme, in which “white-hat” hackers found 138 vulnerabilities and the Department of Defense paid out $71,200 in bounties.
Hassan said: “Federal agencies like the DHS are under assault every day from cyberattacks. These attacks threaten the safety, security and privacy of millions of Americans and in order to protect the DHS and the American people from these threats, the Department will need help.”
from Naked Security http://ift.tt/2rkC7bA
Inmates Secretly Build and Network Computers while in Prison
This is kind of amazing:
Inmates at a medium-security Ohio prison secretly assembled two functioning computers, hid them in the ceiling, and connected them to the Marion Correctional Institution's network. The hard drives were loaded with pornography, a Windows proxy server, VPN, VOIP and anti-virus software, the Tor browser, password hacking and e-mail spamming tools, and the open source packet analyzer Wireshark.
Another article.
Clearly there's a lot about prison security, or the lack thereof, that I don't know. This article reveals some of it.
from Schneier on Security http://ift.tt/2qyeeKv
Shadow Brokers double down on zero-day subscription service
Shortly after its leak of NSA exploit tools enabled the spread of WannaCry, the Shadow Brokers hacking group promised to launch a monthly subscription service for more zero days. Tuesday, it started offering details.
To get in on the action, Shadow Brokers requires that subscribers send them 100 ZEC (the Zcash cryptocurrency, roughly $21,000) per month. The group emptied its Bitcoin wallet yesterday, then switched over to Zcash, though it said it could require a different currency the following month.
So what will this subscription service get you? A roll of the dice, essentially. Shadow Brokers put it this way on their site:
Monthly dump is being for high rollers, hackers, security companies, OEMs, and governments. Playing “the game” is involving risks.
They promise to continue with a seat-of-the-pants approach beyond June. Asked what will be in the next dump, the group said:
TheShadowBrokers is not deciding yet. Something of value to someone. See theshadowbrokers’ previous posts. The time for “I’ll show you mine if you show me yours first” is being over. Peoples is seeing what happenings when theshadowbrokers is showing theshadowbrokers’ first. This is being wrong question. Question to be asking “Can my organization afford not to be first to get access to theshadowbrokers dumps?”
Meanwhile, some on Twitter are suggesting it might be a good idea to set up crowdfunded access to the dump.
Sophos CTO Joe Levy warns that those who consider doing business with Shadow Brokers and others like them should tread very carefully.
As recent leaks show, the Shadow Brokers crew certainly seem to have acquired some high-value stolen goods, although their previous attempts to auction them off came to nothing and they ended up dumping the data for free. But there’s no reason to believe they have an ongoing supply, or that their subscription service is anything but a cash grab.
Would-be subscribers should ask themselves the following before diving in: what are you going to do if they don’t deliver? Ask for a refund? Report them to the ombudsman?
Sophos’s view is simple: don’t go there.
If you lie down with dogs, you’re likely to get up with fleas, and maybe attract the entirely understandable attention of law enforcement.
from Naked Security http://ift.tt/2qy85xV
Security of medical devices ‘is a life or death issue’, warns researcher
There are more than 8,000 vulnerabilities in the code that runs in seven analyzed pacemakers from four manufacturers, according to a new study.
And that’s just a subset of the overall medical device scene, in which devices have scarcely any security at all. A second, separate, study that looked at the broader market of medical devices found that only 17% of manufacturers have taken serious steps to secure their devices, and only 15% of healthcare delivery organizations (HDOs) have taken significant steps to thwart attacks.
Cyber-tampering with medical devices such as insulin pumps or pacemakers can seem far-fetched – the product of researchers’ wild imaginations and their theoretical scenarios, unlikely to happen in the real world.
Case in point: even after the US Food and Drug Administration (FDA) and the Department of Homeland Security (DHS) released advisories about potentially life-threatening bugs in cardiac monitoring technology from St Jude Medical — on the same day in January 2017 that St Jude issued security fixes — St Jude shrugged off what it called “extremely low cyber-security risks.”
St Jude’s attitude was far from surprising, given that the company had sued IoT security firm MedSec for defamation after MedSec published what St Jude said was bogus information about the bugs … as in, the same bugs it went on to fix.
Companies like St Jude may shrug off the risks, but according to the Ponemon Institute — which conducted research into medical device security that was sponsored by the Internet of Things (IoT) security company Synopsys — patients have already suffered adverse events and attacks. Its findings:
- 31% of device makers and 40% of HDOs surveyed by Ponemon Institute said that they’re aware of patients suffering from such incidents.
- Of those respondents, 38% of HDOs said they were aware of inappropriate therapy/treatment delivered to patients because of an insecure medical device.
- Another 39% of device makers confirmed that attackers have taken control of medical devices.
The study also identified the causes of those adverse events and harm to patients.
Granted, these things aren’t easy to secure. A majority — 80% — of medical device manufacturers and users said the gadgets are “very difficult” to secure. Only 25% of respondents said that security protocols or architecture built inside the devices adequately protect clinicians and patients.
Still, the lack of quality assurance and testing that leads to the vulnerabilities is pretty appalling. Respondents gave a range of reasons for the vulnerable code.
As far as the pacemaker-specific vulnerabilities go, researcher Billy Rios and Dr Jonathan Butts of security company WhiteScope found that few manufacturers encrypt or otherwise protect data stored on a device or transferred to monitoring systems.
None of the devices they examined was protected by even the most basic authentication, a login name and password, and none authenticated the devices or systems they connect to.
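To make the missing control concrete, here is a minimal Java sketch, not vendor code and using a hypothetical encryptTelemetry helper, of what authenticated encryption of a telemetry reading might look like before it leaves a device for a monitoring system:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class TelemetryEncryptionSketch {

    // Encrypts a telemetry reading with AES-GCM so it is both confidential
    // and tamper-evident on its way to a monitoring system.
    static byte[] encryptTelemetry(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];                    // 96-bit nonce, standard for GCM
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Prepend the nonce so the receiver can decrypt; GCM appends the auth tag itself.
        byte[] message = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, message, 0, iv.length);
        System.arraycopy(ciphertext, 0, message, iv.length, ciphertext.length);
        return message;
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);                            // small key sizes suit constrained devices
        SecretKey key = keyGen.generateKey();

        byte[] packet = encryptTelemetry(key, "HR=72;BATT=87%".getBytes(StandardCharsets.UTF_8));
        System.out.println("Encrypted telemetry packet: " + packet.length + " bytes");
    }
}
```

Even this baseline, a shared key and an authenticated cipher, was absent from the devices the researchers examined.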
Rios agreed with the Ponemon Institute: the small size and low computing power of these devices make it tough to apply the security standards that help keep other kinds of devices safe.
There’s still work to be done, he said in a longer paper (PDF):
To mitigate potential impact to patient care, it is recommended that vendors evaluate their respective implementations and validate that effective security controls are in place to protect against identified deficiencies that may lead to potential system compromise.
In the paper, WhiteScope provided questions that vendors can use to evaluate device security.
Dr Larry Ponemon, co-author of the study that looked at the security of the broader medical device market, said that it’s urgent for the industry to prioritize security:
The security of medical devices is truly a life or death issue for both device manufacturers and healthcare delivery organizations.
One thing that should help would be for manufacturers to implement advice from the US Food and Drug Administration (PDF) about how to secure devices.
As it is, the study found, only 49% of manufacturers are now following that advice.
from Naked Security http://ift.tt/2qvGIc6
What will it take to keep smart cities safe?
“Smart cities” use smart technologies in their critical infrastructure sectors: energy, transportation, environment, communications, and government.
This includes smart systems for energy management, parking management systems, public transportation information coordination, transportation sharing, traffic management, air quality monitoring, waste management, e-government, connectivity, and so on.
Smart cities are the future
Currently, over half of the world’s population resides in urban areas, and by 2050 that percentage is expected to rise to 66%. This influx will create considerable social, economic, and environmental challenges for those tasked with making these cities thrive – challenges that can be successfully addressed through the implementation and secure running of smart city technologies.
This reality makes it inevitable for most (if not all) cities to become “smart.” It’s also inevitable that there will be attackers who will want to take advantage of this situation.
“Malicious individuals may consider smart cities as playgrounds they can test their hacking skills on. They may toy with available technologies for personal satisfaction,” Trend Micro researchers pointed out.
“For cybercriminals, the interconnectedness of devices and systems in a smart city can be a means to steal money and data from citizens and local enterprises. State-sponsored actors can also abuse the pervasiveness of smart city technologies to launch their own espionage or hacktivist campaigns. In very extreme cases, smart implementations may even be exploited for acts of terror.”
Advice for developing secure smart cities
According to the researchers, the security of a smart city depends on two key factors: the limitations of the technologies used and how they are implemented.
The former includes limited computing power, which makes things like encryption a challenge, and the fact that software eventually goes out of date. The latter covers poor implementation, poor configuration, and poor firmware update practices.
“To guide smart city developers, we came up with a quick 10-step cybersecurity checklist they can refer to when implementing smart technologies,” the researchers offered.
They advise them to:
- Perform quality inspection and penetration testing of smart technology (often, and with the help of independent contractors)
- Prioritize security in service-level agreements for all vendors and service providers (noncompliance with specified conditions should lead to penalties)
- Establish a municipal computer emergency response team (CERT) or computer security incident response team (CSIRT)
- Ensure the consistency and security of software updates (regular updates, encrypted and digitally signed; see the sketch after this list)
- Plan around the life cycle of smart infrastructure (think about what to do when infrastructure becomes obsolete or requires maintenance)
- Process data with privacy in mind (anonymized data, access to which is restricted to few, and a clear information-sharing plan)
- Encrypt, authenticate, and regulate public communication channels (strong cryptography and authentication mechanisms)
- Always allow manual override
- Design a fault-tolerant system (reduced performance instead of failure)
- Ensure the continuity of basic services (think about alternative systems)
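To make the software-update item concrete, here is a minimal Java sketch, not taken from the Trend Micro report and using hypothetical file names, of how a smart-city component might refuse any update whose digital signature fails to verify against the vendor’s public key:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

public class UpdateVerifier {

    // Returns true only if the update file was signed with the private key
    // matching the vendor's published public key.
    static boolean updateIsAuthentic(byte[] updateBytes, byte[] signatureBytes,
                                     PublicKey vendorKey) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(vendorKey);
        verifier.update(updateBytes);
        return verifier.verify(signatureBytes);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical paths: the update package, its detached signature,
        // and the vendor's DER-encoded RSA public key shipped with the system.
        byte[] update    = Files.readAllBytes(Paths.get("traffic-controller-update.bin"));
        byte[] signature = Files.readAllBytes(Paths.get("traffic-controller-update.sig"));
        byte[] keyBytes  = Files.readAllBytes(Paths.get("vendor-public-key.der"));

        PublicKey vendorKey = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(keyBytes));

        if (updateIsAuthentic(update, signature, vendorKey)) {
            System.out.println("Signature valid: safe to stage the update.");
        } else {
            System.out.println("Signature check failed: rejecting the update.");
        }
    }
}
```

In a real deployment the vendor key would be pinned in protected storage and the update itself would also be encrypted, in line with the checklist item above.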
In the report they’ve also included an overview of the current situation and future plans regarding the implementation of smart tech in several cities around the world, like Yokohama, Singapore, Rotterdam and Jaipur.
from Help Net Security http://ift.tt/2qvTgAd
Why you should avoid Star Hop and Candy Link in Google Play
Thanks to Rowland Yu of SophosLabs for the behind-the-scenes work on this article.
When you see them in Google Play, Star Hop and Candy Link look like a couple of harmless games. But they hide malware that can switch on the wifi on your Android device and pummel you with spam.
SophosLabs researchers uncovered the apps – which have been downloaded some 50,000 times so far – during routine testing.
Star Hop is a game where the goal is to tap on two or more adjacent stars to destroy them.
Candy Link is billed as a game that helps users improve their concentration and cognitive abilities.
Researcher Rowland Yu said the apps hide malware SophosLabs has detected as Andr/Axent-EH. It appears the apps have been available on Google Play since March 2017.
The malware family is able to:
- Drop a malicious payload
- Enable wifi if it is off
- Connect to malicious remote websites
- Load spam messages on the home screen
How it works
The malware decrypts a .jar file in the “assets” folder, then drops a payload called decbiee.jar.
The payload can check the wifi status and turn it on if it’s off.
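For illustration only, and not the malware’s actual code, this is roughly the Android API that such behaviour relies on: an app holding the CHANGE_WIFI_STATE permission can check and enable wifi in a few lines.

```java
import android.content.Context;
import android.net.wifi.WifiManager;

public class WifiToggleSketch {

    // Requires the ACCESS_WIFI_STATE and CHANGE_WIFI_STATE permissions in the manifest.
    // setWifiEnabled() was deprecated in Android 10 (API 29), but apps targeting older
    // API levels can still call it.
    static void enableWifiIfOff(Context context) {
        WifiManager wifiManager =
                (WifiManager) context.getApplicationContext()
                                     .getSystemService(Context.WIFI_SERVICE);
        if (wifiManager != null && !wifiManager.isWifiEnabled()) {
            wifiManager.setWifiEnabled(true);   // switch wifi on without asking the user
        }
    }
}
```

The simplicity is part of the problem: once the payload is running, little stands between it and the device’s connectivity.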
The payload connects to lce9v.com, then redirects to the malicious website wi7cb.com, which Sophos has blocked.
Once the device is infected, the user receives spam messages every time they activate their home screen.
Defensive measures
As we mentioned above, SophosLabs has identified this as Andr/Axent-EH and protected Sophos users against it.
Our advice to non-Sophos customers is not to download these apps if you see them in Google Play. We’ve told Google Play about our discovery.
The continued onslaught of malicious Android apps demonstrates the need to use an Android anti-virus such as our free Sophos Mobile Security for Android.
By blocking the install of malicious and unwanted apps, even if they come from Google Play, you can spare yourself lots of trouble.
from Naked Security http://ift.tt/2sa1rPB