The Latest

Technology companies could be doing much more to protect individuals and organizations from the threats posed by phishing, according to research by the University of Plymouth.


However, users also need to make themselves more aware of the dangers to ensure potential scammers do not obtain access to personal or sensitive information.

Academics from Plymouth’s Centre for Security, Communications and Network Research (CSCAN) assessed the effectiveness of the phishing filters employed by various email service providers.

They sent two sets of messages to victim accounts, using email content drawn from archives of reported phishing attacks. The first set was sent as plain text with links removed; the second retained the links, pointing to their original destinations.

They then examined which mailbox the messages reached within the email accounts, and whether they were explicitly labelled in any way as suspicious or malicious.

In the significant majority of cases (75% without links and 64% with links) the potential phishing messages made it into inboxes and were not in any way labelled to highlight them as spam or suspicious. Moreover, only 6% of messages were explicitly labelled as malicious.
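The check the researchers describe, seeding test messages and recording where each one lands, can be automated. Below is a minimal, hypothetical sketch: the IMAP folder names, the marker-in-subject approach, and the `summarize_placement` helper are all assumptions for illustration, not the study's actual tooling.

```python
import imaplib

# Assumed folder names; real providers vary (e.g. "[Gmail]/Spam").
FOLDERS = ["INBOX", "Junk", "Spam"]

def locate_test_message(host, user, password, marker):
    """Return the first folder containing a seeded message whose
    subject includes the given marker string, or None."""
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    try:
        for folder in FOLDERS:
            status, _ = conn.select(folder, readonly=True)
            if status != "OK":
                continue  # provider may not expose this folder
            status, data = conn.search(None, f'SUBJECT "{marker}"')
            if status == "OK" and data[0]:
                return folder
        return None
    finally:
        conn.logout()

def summarize_placement(results):
    """Tally outcomes like the study's headline percentages.

    results maps message id -> (folder, labelled_as_malicious).
    """
    total = len(results)
    inbox_unlabelled = sum(
        1 for folder, labelled in results.values()
        if folder == "INBOX" and not labelled
    )
    flagged = sum(1 for _, labelled in results.values() if labelled)
    return {
        "inbox_unlabelled_pct": round(100 * inbox_unlabelled / total, 1),
        "flagged_pct": round(100 * flagged / total, 1),
    }
```

For example, if two of four seeded messages reach the inbox unlabelled and one is flagged, `summarize_placement` reports 50.0% and 25.0% respectively.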

Professor Steven Furnell, leader of CSCAN, worked on the study with MSc student Kieran Millet and Associate Professor of Cyber Security Dr Maria Papadaki. He said: “The poor performance of most providers implies they either do not employ filtering based on language content, or that it is inadequate to protect users.

“Given users’ tendency to perform poorly at identifying malicious messages this is a worrying outcome. The results suggest an opportunity to improve phishing detection in general, but the technology as it stands cannot be relied upon to provide anything other than a small contribution in this context.”

The number of phishing incidents has risen dramatically since they were first recorded in 2003. In fact, global software giant Kaspersky Lab reported that its anti-phishing system was triggered 482,465,211 times in 2018, almost double the number for 2017.

It is also a significant problem for businesses, with 80% telling the Cyber Security Breaches Survey 2019 that they have encountered ‘fraudulent emails or being directed to fraudulent websites’ – placing this category well ahead of malware and ransomware.

Phishing is designed to trick victims into divulging sensitive information, such as identity and financial-related data, and the threat can actually take several forms:

  • Bulk-phishing – where the approach is not specially targeted or tailored toward the recipient
  • Spear-phishing – where the message is targeted at specific individuals or companies and tailored accordingly
  • Clone-phishing – where the scammers take a legitimate email containing an attachment or link, and replace it with a malicious version
  • Whaling – in these cases the phishing is specifically targeted towards high value or senior individuals.

Professor Furnell, who has previously led various projects relating to user-facing security, added: “Phishing has now been a problem for over a decade and a half. Unfortunately, just like malware, it’s proven to be the cybersecurity equivalent of an unwanted genie that we can’t put back in the bottle.

“Despite many efforts to educate users and provide safeguards, people are still falling victim. Our study shows the technology can identify things that we would ideally want users to be able to spot for themselves – but while there is a net, it clearly has big holes.”


from Help Net Security https://ift.tt/331gwFz

Flaws that allow attackers to bypass the payment limits on Visa contactless cards have been discovered by researchers Leigh-Anne Galloway and Tim Yunusov at Positive Technologies.


The attack was tested with five major UK banks, successfully bypassing the UK contactless verification limit of £30 on all tested Visa cards, irrespective of the card terminal.

The researchers also found that this attack is possible with cards and terminals outside of the UK. These findings are significant because contactless payment verification limits are used to safeguard against fraudulent losses, which have been increasing in recent years.

The attack works by manipulating two data fields that are exchanged between the card and the terminal during a contactless payment. First, if a payment requires additional cardholder verification (as it does in the UK for payments over £30), the card answers “I can’t do that,” which prevents payments over the limit.

Second, the terminal uses country-specific settings that demand the card or mobile wallet provide additional verification of the cardholder, such as entry of the card PIN or fingerprint authentication on the phone.

Both of these checks can be bypassed using a device which intercepts communication between the card and the payment terminal.

This device acts as a proxy, conducting what is known as a man-in-the-middle (MITM) attack. First, the device tells the card that verification is not necessary, even though the amount is greater than £30. The device then tells the terminal that verification has already been made by another means.
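The two-step manipulation above can be sketched as follows. This is purely illustrative: the dictionary keys are simplified stand-ins for fields in the EMV contactless exchange, not actual EMV tag names, and the real attack operates on raw protocol messages in transit.

```python
def mitm_modify_card_response(card_response):
    """Step 1: rewrite the card's reply so the terminal believes no
    cardholder verification is required, even above the £30 limit."""
    modified = dict(card_response)
    modified["cardholder_verification_required"] = False
    return modified

def mitm_modify_terminal_request(terminal_request):
    """Step 2: claim verification has already been performed by
    another means (e.g. on the device), so the terminal skips it."""
    modified = dict(terminal_request)
    modified["verification_performed_on_device"] = True
    return modified

# The proxy applies both rewrites in-flight; neither endpoint sees
# the original fields, so the payment proceeds without a PIN.
```

The point of the sketch is that neither the card nor the terminal validates the other's claims independently, which is why the researchers argue issuers need their own checks.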

This attack is possible because Visa does not require issuers and acquirers to have checks in place that block payments without presenting the minimum verification.

The attack can also be done using mobile wallets such as GPay, where a Visa card has been added to the wallet. Here, it is even possible to fraudulently charge up to £30 without unlocking the phone.

According to UK Finance, fraud on contactless cards and devices increased from £6.7 million in 2016 to £14 million in 2017. £8.4 million was lost to contactless fraud in the first half of 2018.

The discovery highlights the importance of additional security from the issuing bank, who shouldn’t be reliant on Visa to provide a secure protocol for payments. Instead, issuers should have their own measures in place to detect and block this attack vector and other payment attacks.

“The payment industry believes that contactless payments are protected by the safeguards they have put in place, but the fact is that contactless fraud is increasing,” said Tim Yunusov, Head of Banking Security for Positive Technologies.

“While it’s a relatively new type of fraud and might not be the number one priority for banks at the moment, if contactless verification limits can be easily bypassed, it means that we could see more damaging losses for banks and their customers.”

The researchers advise that contactless card users need to be vigilant in monitoring their bank account statements to catch fraud early and, if available with their bank, implement additional security measures such as payment verification limits and SMS notifications.

“It falls to the customer and the bank to protect themselves,” said Leigh-Anne Galloway, Head of Cyber Security Resilience at Positive Technologies.

“While some terminals have random checks, these have to be programmed by the merchant, so it is entirely down to their discretion. Because of this, we can expect to see contactless fraud continue to rise.

“Issuers need to be better at enforcing their own rules on contactless and increasing the industry standard. Criminals will always gravitate to the more convenient way to get money quickly, so we need to make it as difficult as possible to crack contactless.”


from Help Net Security https://ift.tt/2ynwpZ7

DevOps has transformed the way software engineers deliver applications by making it possible to collaborate, test and deliver software continuously. Dotscience, the pioneer in DevOps for machine learning (ML), emerged from stealth to signal the rise of a new paradigm where ML engineering should be just as easy, fast and safe as modern software engineering when using DevOps techniques.

For data science and ML organizations to achieve this DevOps for ML nirvana, the right tooling and processes need to be in place such as run tracking and collaboration, automated and full provenance (a complete record of all the steps taken to create an AI model) of AI model deployments and model health tracking throughout the AI lifecycle.

“Artificial Intelligence has the potential to reinvent the global economy, but as a discipline it’s the Wild West out there,” said Luke Marsden, founder and CEO at Dotscience.

“We’ve seen damaging levels of chaos and pain in efforts to operationalize AI due to insufficient tooling and ad-hoc processes. The lessons learned from DevOps sorely need to be applied to ML.”

History repeats itself: AI and data science today are like software engineering in the 1990s

In the 1990s, software engineering work was split across development, testing and operations silos. Developers would work on a feature until it was done, often finding out too late that somebody else had been working on another part of the code that clashed with theirs.

Without version control and continuous integration, software engineering was difficult. The advent of DevOps in the late 2000s was and continues to be transformative for software development.

In fact, Forrester declared 2018 to be the year of enterprise DevOps, with data confirming that 50% of organizations were implementing DevOps and that the movement had reached its “escape velocity.” Forrester also “emphasized the importance of a collaborative and experimental culture in order to develop, drive and sustain DevOps success.”

“Version control and the workflows that it enables now allow software teams to iterate quickly because they can easily reproduce application code and collaborate with each other,” explained Marsden.

“However, because ML is fundamentally different to software development, data science and AI teams today are stuck where software development was in the late 1990s.

“We are fixing that by creating tooling which respects the unique ways that working with data, code and models together is different to working with just code. This ‘DevOps’ approach to ML provides a fundamentally better and more collaborative work environment for data engineers, data scientists and AI teams.”

The disjointed state of AI development today

Reproducibility and productivity are inextricably linked. It is difficult to be productive when different team members cannot reproduce each other’s work.

In normal software development it is enough to version the code and configuration of an application and teams have seen dramatic increases in productivity working this way. In ML reproducibility, and therefore collaboration, is more difficult because putting the code in version control isn’t enough.

“Collaboration around ML projects is harder than in normal software engineering because teams need a way to track not just the versions of their code, but also the runs of their code which tie together input data with code versions, model versions and the corresponding hyperparameters and metrics,” said Mark Coleman, VP of Product and Marketing at Dotscience.

“While some of the largest and most engineering-inclined companies have invested in creating proprietary tooling to solve this problem, many companies don’t have the necessary ability or budget and instead turn to manual processes that are both inefficient and risky.

“These cumbersome processes are often opaque and discourage collaboration, creating knowledge silos within teams, increasing key person risk and significantly diminishing team performance.”

In addition to enabling efficient collaboration, accurately tracking the ML model development process through run tracking means that the full provenance of a given model’s creation is recorded.

This aids debugging and can be invaluable if businesses must defend a model’s actions to auditors, customers or in court—a key requirement for any AI application that is making life-changing decisions in production.

Dotscience’s market research report, “The State of Development and Operations of AI Applications,” found that the top three challenges respondents experienced with AI workloads are duplicating work (33.2%), rewriting a model after a team member leaves (27.8%) and difficulty justifying value (27%).

The report examines the AI maturity of businesses by how they are deploying AI today and investigates the need for accountability and collaboration when building, deploying and iterating on AI.

The study also found that 52.4% of respondents track provenance manually and 26.6% do not track but think it is important. When provenance is tracked manually this usually means that teams are using spreadsheets with no access controls to record how their models were created which is both risky and cumbersome.

It is now possible to achieve DevOps for ML, immediately

In a separate press release, Dotscience launched its software platform for collaborative, end-to-end ML data and model management, enabling ML and data science teams to achieve reproducibility, accountability, collaboration and continuous delivery across the AI model lifecycle.

Dotscience allows ML and data science teams to simplify, accelerate and control every stage of the AI model lifecycle by solving for critical issues when developing AI applications.

Dotscience delivers the following features to make AI projects faster and less risky and ML teams happier and more productive. By tracking data, code, models and metrics throughout the AI model lifecycle, it offers the simplest and fastest way to achieve DevOps for ML:

  • Concurrent collaboration across developer and operations teams
  • Version control of the model creation process
  • Automated tooling to maintain the provenance record in real time
  • The ability to explore and optimize hyperparameters when training a model
  • Tracked workflows that allow users to work with the open source tools they love and build better models by staying focused on the ML

from Help Net Security https://ift.tt/2Yq6UoH

Skyworks Solutions, an innovator of high performance analog semiconductors connecting people, places and things, unveiled its latest high reliability solutions for demanding military and space applications with stringent operating requirements.

Skyworks’ hermetically sealed, broadband low-noise and impedance-matched amplifiers function in harsh environments and can be leveraged in a multitude of communication platforms.

With all peripheral components integrated into an optimized ceramic QFN package, these devices simplify the design process and reduce board space while delivering robust performance for next generation aerospace and defense applications such as satellites and avionics systems.

“Skyworks is excited to introduce advanced products that operate seamlessly under severe conditions,” said Achim Soelter, general manager of defense and space for Skyworks.

“With the expansion of our portfolio, we continue to push the performance envelope, powering mission critical functions across navigation, communication and radar networks that must work day-in and day-out without fail.”

According to an estimate from BCC Research, the global satellite communications market, one segment of the aerospace and defense industry, is estimated to reach nearly $7.5 billion by 2022, up from nearly $4.6 billion in 2017, or a compounded annual growth rate of 11 percent.

About Skyworks’ high reliability solutions

Skyworks provides upscreened and hermetically sealed high-reliability optocouplers, RF diodes and RFICs including multi-chip modules (MCM) as part of its portfolio.

Product upscreening includes the equivalent of Class B and Class S of MIL-PRF-38535, Class H and Class K of MIL-PRF-38534, and the JANS, JANTX and JANTXV levels of MIL-PRF-19500. Select solutions include:

  • SKYH22001 – Hermetically sealed, integrated broadband low-noise amplifier with -55°C to +125°C performance. Internally tuned for 700 MHz to 2.7 GHz and tunable up to 3.8 GHz.
  • SKYH22002 – Hermetically sealed, integrated gain block amplifier with -55°C to +125°C performance. Internally tuned for 700 MHz to 2.7 GHz and tunable from 0.1 to 6 GHz.

from Help Net Security https://ift.tt/2K8oO5S

CoreSite, a premier provider of secure, reliable, high-performance data center and interconnection solutions in major U.S. metropolitan areas, announced that it is offering SDN inter-site connectivity between seven of its edge markets.

SDN connectivity between markets and campuses through CoreSite’s Open Cloud Exchange

With the CoreSite Inter-Site Connectivity solution customers can:

  • Secure their distributed IT infrastructure with private SDN connections, versus accessing data over the Internet
  • Simplify hybrid cloud architectures for multi-cloud and multi-site network capabilities
  • Improve performance and greatly reduce network provisioning times
  • Obtain access to more than 775 network, cloud and IT service providers

CoreSite continues to broaden its product portfolio to address the evolving demands of the enterprise. As hybrid and multi-cloud architectures continue to gain prominence, demands for availability, security, performance and redundancy become increasingly important.

CoreSite’s Inter-site Connectivity solution will allow customers to reach additional cloud providers, as well as access multiple cloud regions from a single market.

“We are pleased to offer SDN connectivity between our markets by leveraging the CoreSite Open Cloud Exchange through the reach of its capabilities and ease of its online portal,” said Maile Kaiser, CoreSite’s SVP of Sales.

“With the CoreSite Inter-Site Connectivity solution, we make it easier for customers to expand and connect to CoreSite’s rich ecosystem of cloud and network providers as well as other enterprise organizations.”


from Help Net Security https://ift.tt/2K6Xo1C

Spirent Communications, a leading provider of test, measurement, assurance, and analytics solutions for next-generation devices and enterprise networks, announced that at Black Hat USA in Las Vegas (August 7-8) it will demonstrate a number of new capabilities in its CyberFlood Data Breach Assessment solution and preview new use cases for security assessment in 5G networks.

The new Reconnaissance Mode feature in CyberFlood Data Breach Assessment mirrors the activity of an actual hacker to identify the processes, services and applications that comprise an enterprise network security zone and then automatically creates specific and accurate assessments based on that information.

The new feature will be demonstrated for the first time at Black Hat (booth #2404), where Spirent will also preview new CyberFlood use cases for security assessment in 5G networks.

CyberFlood is a powerful, automated solution that generates realistic application traffic and security threats within live or production networks, providing organizations with a continual security assessment of their enterprise network infrastructures.

Unlike assessment solutions that simulate attacks, CyberFlood uses actual attack components, true hacker activity, and malware executables accessed through continually updated threat intelligence services to validate an organization’s vulnerability to cybercrime.

At Black Hat, Spirent will also showcase:

  • The unique ability of CyberFlood to assess and recommend policy, rule and heuristics changes to security or network infrastructure in real time, based on vulnerabilities, misconfigurations and other security weaknesses discovered during ongoing CyberFlood network assessments;
  • The advantages of the NetSecOPEN test suite incorporation into CyberFlood, allowing organizations to easily use the full breadth of NetSecOPEN’s open network security and performance test standard methodologies to assess their security systems;
  • The on-premises version of Spirent’s upcoming SecurityLabs vulnerability assessment and management platform, which provides up-to-the-minute security assessments of an organization’s entire attack surface – without the need for specialized testing personnel – by automatically managing, monitoring, and evaluating ongoing vulnerability and penetration testing; and
  • Smart City hack demonstrations of the vulnerability of Supervisory Control and Data Acquisition (SCADA) infrastructure or Industrial Control Systems (ICS) to attackers, who could infiltrate and hijack these critical systems while operators in a control room remain unaware of any threat.

from Help Net Security https://ift.tt/2ylYXSK

Rambus, a premier silicon IP and chip provider making data faster and safer, announced it has signed a definitive agreement to acquire Northwest Logic, a market leader in memory, PCIe and MIPI digital controllers.

Northwest Logic’s high-performance, high-quality and silicon-proven digital IP controller cores are optimized for use in both ASICs and FPGAs. Interface IP solutions consisting of a physical interface (PHY) and companion digital controller make it possible to optimize the transfer of data between chips and electronic devices.

Every SoC design that uses a memory, PCIe or MIPI PHY also needs an associated controller. The combination of complementary digital and physical IP portfolios from Northwest Logic and Rambus will create a one-stop shop for customers.

“With this acquisition, we expand our leading product portfolio for high-performance markets such as data center, networking, artificial intelligence, machine learning and automotive,” said Hemant Dhulla, vice president and general manager of IP Cores at Rambus.

“Northwest Logic’s innovative, best-in-class digital controllers complement Rambus’ proven strength in high-speed physical IP cores. Together, we will offer one of the most comprehensive high-performance interface IP solutions in the industry, leveraging our core strength in semiconductors, strong go-to-market advantage and global reach.”

Brian Daellenbach, president and CEO, Northwest Logic said: “Northwest Logic’s category-leading digital controllers fit perfectly with Rambus’ leadership portfolio of high-speed PHY solutions.

“This deal creates a one-stop shop for SoC designers working on state-of-the-art applications across a broad range of high-performance markets. We look forward to continuing to serve our existing customers and working with our PHY partners.”

Critical to enabling the high performance of data center, networking, AI, ML and automotive applications, this acquisition will bring together the physical and digital IP core families from renowned market leaders to offer comprehensive memory and SerDes IP solutions for chip designers.

The transaction is expected to close in the current calendar quarter of 2019. Although this transaction will not materially impact 2019 results due to the expected timing of close and acquisition accounting, Rambus expects this acquisition to be accretive in 2020.


from Help Net Security https://ift.tt/2GCATQ1