Not (yet!) of a sentient digital entity that could turn rogue and cause the end of mankind, but the exploitation of artificial intelligence and machine learning for nefarious goals.

What sorts of AI-powered attacks can we expect to see soon if adequate defenses are not developed?
According to a group of 26 experts from various universities, civil society organizations, and think-tanks, the threat landscape could undergo dramatic changes over the next five to ten years.
“The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence, and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets,” they noted in a recently released report.
“New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.”
And it’s not only our digital security that will be under attack; our physical and political security could suffer as well.
Plausible attack scenarios
The experts have come up with extremely plausible scenarios for attacks powered by artificial intelligence: automated social engineering attacks, automated vulnerability discovery, human-like denial-of-service, data poisoning attacks to surreptitiously maim or create backdoors in consumer machine learning models, and so on.
In the physical realm, AI can be used to increase the scale of attacks, power swarming attacks, or increasingly remove attacks from the actors initiating them.
Finally, AI can be used to create extremely targeted propaganda, to manipulate audio and video messages (creating fake news, impersonating targets), to automate hyper-personalised disinformation campaigns, and more. Some of these approaches are already used by various states, but with AI they could become even more effective.
“We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates,” the experts pointed out.
We have to do something, and we have to start now
“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions, and individuals across the globe,” says Dr. Seán Ó hÉigeartaigh, Executive Director of Cambridge University’s Centre for the Study of Existential Risk and one of the co-authors of the report.
“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”
They call on policymakers to collaborate closely with technical researchers to investigate and mitigate potential malicious uses of AI and say that these problems and solutions should be discussed by a wide range of stakeholders and domain experts.
It’s also of crucial importance that AI researchers and engineers keep in mind the dual-use nature of their work, and allow their research priorities and norms to be influenced by misuse-related considerations.
“AI researchers and the organisations that employ them are in a unique position to shape the security landscape of the AI-enabled world. We highlight the importance of education, ethical statements and standards, framings, norms, and expectations,” the experts advised.
They also urge them to learn from and with the cybersecurity community, identify best practices in research areas with more mature methods for addressing dual-use concerns and import them (where applicable), and to explore different openness models for the research.
from Help Net Security http://ift.tt/2GwVsLc
On May 25, the General Data Protection Regulation will bring sweeping changes to data security in the European Union. If your organisation collects personal data or behavioural information from anyone in an EU country, it’s subject to GDPR requirements.
Wherever your team stands on its path to readiness, this whitepaper will help you better understand GDPR and your company’s compliance obligations.
Download the document for insights as you prepare, including the steps to put a plan in place. No registration required.
from Help Net Security http://ift.tt/2FjVzKH
Preorder Bryker Hyde Quick Draw Wallet | $30 | Kickstarter
Kickstarter’s a veritable design playground for wallet makers, and Bryker Hyde’s new Quick Draw wallet offers great features for minimalists, card hoarders, and self-hating, card-hoarding, wannabe minimalists like myself.
Technically, this wallet is a bifold, since it folds in the middle, but because the spine doubles as a money clip, it doesn’t have the added bulk of a cash pocket. And unlike most bifolds, this one makes full use of its outside face, with two quick draw card slots on either side of the spine. Three of those pockets block RFID signals, but one purposely lets them through, so you can use a hotel room key without taking it out of your wallet.
Inside, you’ll find two more slots for cards, a transparent ID holder, and the aforementioned money clip. The whole package is exceptionally thin when empty, probably the thinnest folding wallet I’ve ever seen, but it was still totally usable after I stuffed nine cards plus an ID in there.
The Quick Draw is already fully funded on Kickstarter with two weeks to go, and you can put a preorder in for $30, or get two of them for $50.
We sort through the noise of Kickstarter to find you preorder discounts worth taking advantage of. Someone on our team has tested a prototype (or final version) of every Kickstarter we cover.
from Lifehacker http://ift.tt/2Cy9c5V
