The Latest

The Cold War is a distant memory for most, but today we see a new struggle for dominance on the global stage – with cyber weapons being the latest focal point. The advance of sophisticated social engineering means that small but skilled groups of cyber attackers now have the potential to do more damage to a country’s infrastructure than a physical military strike.

Earlier this year, Brad Smith, President and Chief Legal Officer at Microsoft, gave a speech calling on governments to implement a Digital Geneva Convention to protect civilians from nation-state attacks. This convention would establish protocols for attacks that affect private enterprises and individuals, as well as civilian infrastructure such as power grids.

Mr Smith’s vision is commendable but unfortunately comes at a time when the combination of heightened international tensions and the proliferation of attack tools and threat actors makes the likelihood of a successful agreement more of a challenge than ever.

Indeed, United Nations negotiations on restricting cyber warfare collapsed in June, as members were unable to make progress on key issues. The talks had been in progress since 2004, with experts from 25 UN member states participating. However, incidents such as the 2016 hacking of the US Democratic National Committee (DNC) caused further splits along old Cold War lines, and the final straw proved to be the right to self-defence against cyber attacks.

In the face of this breakdown and increasing global friction, will we ever be able to reach an agreement on how international cyber activity should be controlled and regulated?

The challenge of attribution

One of the biggest hurdles standing in the way of a Digital Geneva Convention is the challenge of attack attribution, and of weighing the perpetrator’s intention against the actual impact of the incident. Standard military action is usually fairly clear cut, but cyber attacks are much murkier, with very little concrete evidence.

A good example of this is the infamous Stuxnet attacks of 2010, which targeted industrial control systems in Iran but eventually spread to hit more than 200,000 machines around the world. While Israel and the United States have both been strongly suspected of launching the attack against Iran, nothing was ever conclusively proven. Likewise, even if the perpetrator could be proven, it is impossible to demonstrate if they also intended to hit other countries such as India and Indonesia, or if this was accidental.

Similarly, with the recent WannaCry ransomware attack, North Korea is widely believed to be the perpetrator, but the country itself has denied responsibility and many of the signs could be the result of attackers reusing old code, or even a false-flag attack. While apparently a money-making exercise, the attack also caused serious issues for the NHS in the UK, as well as considerable damage to private enterprises around the world. Even if concrete attribution were possible, how could we determine whether it was intended as a revenue generator that spiralled out of control, or an attempt to harm nation states with the ransomware serving as camouflage?

The blurred lines between citizens and governments

Another foggy issue is the need to determine the difference between an attack on a nation and an attack on a private citizen. Take, for example, the phishing attack that gave criminals access to John Podesta’s emails during the 2016 US presidential elections.

Although the attack was clearly aimed at disrupting the campaign of Hillary Clinton by releasing sensitive material, it was actually Podesta’s personal account that was hit and many of the emails exposed were sent to him from people with no political role at all. The offenders could argue he was a legitimate military target and was just using the wrong kind of email account, but what type of collateral damage is reasonable to stay within the boundaries of a pledge?

Even fairly low-level criminal actors have access to a wide range of tools, such as VPNs and proxies, to hide their identity and evade the authorities. When it comes to activity by nation states, additional evasion techniques mean a country can have almost complete deniability.

In some cases, we may also see that nations don’t want to pursue cyber attackers on the international stage. Again, looking at the Podesta email attack, while Russia is generally accepted as the culprit, there are many who do not wish to pursue the case as it brings the legitimacy of the election into question. Particularly in politics, we are very likely to see future attacks denied even by the victim nation itself.

What can we do?

With the attribution of even the most notorious attacks of the last decade proving to be almost impossible, the traditional concept of a convention is extremely difficult to apply. How can sanctions and other standard international responses be effectively levied if the suspected perpetrator has complete deniability of their involvement?

Putting together an agreement is not only about finding terms that all potential signatories can accept, but also about making sure the agreement makes technical sense. It must start with a firm technical foundation, taking into consideration what actions cause damage, and to whom. This understanding is crucial to any kind of international agreement succeeding.

While we are likely going to be waiting many years for any kind of Digital Geneva Convention, it is up to governments and private organisations alike to develop their own security and protect their assets and citizens. As it stands, we need to see a higher level of understanding of the threats, particularly at the decision-making level.

Improving our collective understanding must start with conveying ideas and concepts in a meaningful way. We often see attacks described with the wrong terminology, with everything being simply described as hacking or phishing. This kind of over-simplification ignores distinctions such as the difference between phishing and Business Email Compromise (BEC) or malware delivered by email.

When these mistakes filter all the way up to the people making purchasing decisions, it means they will do the wrong thing – an issue in both private and public sector organisations. Indeed, in many cases, governments are far behind private enterprises in their understanding. Until this changes, we can’t expect to move forward on an international level either.


from Help Net Security http://ift.tt/2zh45Kt

Seagate announced its SkyHawk AI hard disk drive, the first drive created specifically for artificial intelligence enabled video surveillance solutions. SkyHawk AI provides bandwidth and processing power to manage always-on, data-intensive workloads, while simultaneously analyzing and recording footage from multiple HD cameras.

SkyHawk AI

The use of analytics in video surveillance hardware is growing rapidly: shipments are forecast to increase from 27.6 million in 2016 to 126 million in 2021, as hardware manufacturers continue to include analytics sensors on network video recorders (NVRs).

This will only increase as AI – particularly deep learning and machine learning applications, such as facial recognition and analyzing irregularities in behavior – becomes increasingly prevalent. In parallel, the need for fast video analytics will continue to rise, increasing the workload burden on NVR storage.

SkyHawk AI is ideal for intensive computational workloads that typically accompany AI work streams, as its high throughput and enhanced caching deliver low latency and excellent random read performance to locate and deliver video images and footage analysis. This enables on-the-edge decision making, eliminating the latency of exchanging cloud-based data and processing.

Equipped with Seagate ImagePerfect AI firmware, the drive reliably records high quality, sharp video footage with no dropped frames, while simultaneously facilitating AI-enabled NVR analytics – ensuring that intelligence gathered through video surveillance footage is not lost.

“Video analytics has been evolving over the past 10 years and is garnering a lot of attention these days due to the use of AI. Dahua Technology made an early start in AI applications and has since made several achievements in the industry, for example, the newly launched IVSS series,” said Yang Shengwei, products and solutions director, Domestic Sales Operation Center of Dahua Technology. “As a strategic partner, Seagate’s advanced technology will help Dahua to reach a new top in the AI field. We hope with the newly launched SkyHawk AI drive we can boost the AI application across the surveillance industry.”


from Help Net Security http://ift.tt/2ifh8Ba

Firefox 58, that’s the next but one version of the browser you all trust but don’t use, is going to become the first of the major browsers to do something about canvas fingerprinting – a devious, cookie-less way of tracking you on the web.

Canvas fingerprinting relies on websites being able to extract data from HTML <canvas> elements silently. In future, Firefox users will be asked to give their permission before that extraction can take place, just as users of the Tor Browser are.

The similarity in behaviour to Tor Browser is no accident. That privacy-first browser is actually based on Firefox ESR (Extended Support Release) and a trickle of Tor Browser features and settings have been flowing slowly back upstream and into Firefox for a while now.

In the case of this simple feature, four years slowly.

So let’s look at why it’s better late than never.

Browser fingerprints

Browser fingerprinting has risen to prominence in recent years as the go-to approach for companies who want to track you without giving you a say in the matter.

It works by tracking your browser itself, rather than by tracking a beacon that’s placed on your browser, such as a cookie, Flash LSO (local shared object) or DOM storage value.

Beacons can be blocked or deleted; fingerprints can’t.

Fingerprints use information that’s gathered passively from your browser such as the version number, operating system, screen resolution, language, list of browser plugins and the list of fonts you have installed.

There are many different ingredients that can be used to make up a fingerprint but the more ingredients that are included, and the more entropy available from each one, the easier it is to tell your browser from anybody else’s.
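To illustrate the principle (this is a sketch, not any real tracker’s code), here is a minimal Python example that combines a few such ingredients into a single fingerprint hash. The attribute names and values are hypothetical stand-ins for what a script might gather passively from a browser:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine browser attributes into a single stable ID.

    The more attributes there are, and the more they vary between
    users, the more identifying the resulting hash becomes.
    """
    # Sort the keys so the same set of attributes always hashes
    # to the same value, regardless of collection order.
    material = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha1(material.encode("utf-8")).hexdigest()

# Hypothetical values a script might read from a visitor's browser.
browser_a = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0) Firefox/58.0",
    "screen": "1920x1080",
    "language": "en-GB",
    "fonts": "Arial,Calibri,Verdana",
}
# A second visitor who differs by only one attribute.
browser_b = dict(browser_a, language="en-US")

print(fingerprint(browser_a))
print(fingerprint(browser_b))  # one changed ingredient, a different ID
```

Note that the ID is stable for as long as the visitor’s configuration stays the same, which is exactly why fingerprints are harder to shed than cookies.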

One of the most popular ingredients uses the HTML <canvas> element.

Canvas fingerprints

The <canvas> element is, as you might guess, a surface a browser can draw on.

In canvas fingerprinting your browser is given instructions to render something (perhaps a combination of words and pictures) on a hidden canvas element. The resulting image is extracted from the canvas and passed through a hashing function, producing an ID.

Different graphics cards and operating systems work slightly differently, which means that if you give two different website visitors identical drawing instructions, they’ll actually draw slightly different pictures.

Complex instructions can produce enough variation between visitors to make canvas fingerprinting a potent ingredient in a fingerprinting recipe.
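The render-extract-hash pipeline described above can be sketched as follows. Real extraction happens in the browser via something like canvas.toDataURL(), which needs a DOM, so here the extracted pixel data is mocked as raw bytes that differ slightly between two imagined machines:

```python
import hashlib

def canvas_id(pixel_data: bytes) -> str:
    # In a real fingerprinting script these bytes would come from
    # the hidden canvas (e.g. via toDataURL()); here we simply hash
    # whatever rendered output we are handed.
    return hashlib.sha1(pixel_data).hexdigest()

# Identical drawing instructions, but font rendering and anti-aliasing
# differ per graphics card and OS, so the extracted pixels differ
# by a byte or two between machines (mocked data for illustration).
machine_1 = b"...rendered pixels...\x10\x20\x30"
machine_2 = b"...rendered pixels...\x10\x21\x30"

print(canvas_id(machine_1))
print(canvas_id(machine_2))  # a tiny pixel difference, a completely new ID
```

Because a cryptographic hash amplifies any difference in its input, even a one-byte variation in rendering yields an entirely unrelated ID.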

The more complex the instructions, the easier it is to tease out differences between individuals’ browsers, but the basic principle can be seen with a simple test.

The pictures below show the letter T as rendered by Firefox (left) and Safari (right) on my system, with hashes of the images shown beneath. The differences are just about visible, but all that really matters for the purpose of fingerprinting is that they aren’t exactly the same and will therefore produce different hash values.

T rendered by Firefox 33 on OS X: 55b2257ad0f20ecbf927fb66a15c61981f7ed8fc
T rendered by Safari 8 on OS X: 17bc79f8111e345f572a4f87d6cd780b445625d3

A step in the right direction

Fingerprinting is difficult to stop because it turns the complexity, customisability and openness of modern browsers against them. The more personalised your browser is, and the more willing it is to share information about itself, the more it stands out in a crowd.

Plugins can help by intercepting known fingerprinting scripts, but they also make things worse by adding entropy to your browser’s fingerprint.

A balance needs to be struck between the usefulness of any given feature and its potential for abuse. Browser vendors also need to stay on top of how features are actually being used, rather than how they’re supposed to be used.

A case in point is the Battery Status API. The feature exists so that “web developers are able to craft web content and applications which are power-efficient”. In fact the ability to determine which of 14,172,310 different levels of charge your battery is at has been largely ignored by developers, but adopted enthusiastically as a fingerprinting technique.

About a year ago it was summarily dumped by Firefox.

To combat canvas fingerprinting, the Firefox developers have opted for the pragmatic opt-in approach of Tor Browser instead of outright rejection. That’s because although canvas fingerprinting is a bigger problem than battery status abuse, dropping <canvas> isn’t an option. It isn’t a white elephant like the Battery Status API; it’s a fantastically useful feature that just happens to be a very popular fingerprinting technique too.

At least for a few more months, anyway.



from Naked Security http://ift.tt/2zgIIsx