Thursday, April 30, 2020

Growth of APIs for new services

This is the second of a series of articles that introduces and explains API security threats, challenges, and solutions for participants in software development, operations, and protection.

Growth of APIs

When Salesforce and eBay became the first major Internet players to focus on making their systems available to external programs via an API (versus traditional means such as command line interface), they ushered in a new era of so-called open computing.

What this meant was that rather than close off their software to the external world, as was the general practice before 2000, open computing encouraged companies to let outside software connect to their systems directly. As one might guess, the result was explosive growth on the Internet.

Imagine, for example, how difficult it might have been for Amazon (which published its first API just after Salesforce and eBay) to have grown so quickly if it had walled off its applications from other systems on the Internet. Without open computing, Amazon would have had trouble integrating security protections, purchasing partners, supply chain management, authentication services, and on and on. All the things we have come to expect from a modern Internet service now depend on open computing and APIs.

More recently, API usage has seen even greater exponential growth driven by several factors – the first of which is the ubiquitous mobile device. By making the Internet accessible anywhere, anytime, and to everyone – mobility increased the demand for more connected and integrated services. It’s hard to imagine API-heavy services such as Salesforce, eBay, and Amazon experiencing such great success without the explosion of mobile device usage.

Additional factors driving API usage might be less familiar to everyday users. Software designers have moved, for example, to modular applications built around standard interfaces, which makes it easier for them to add features and iterate more rapidly during software development. Network architects have also begun to adopt an approach known as a service mesh, which depends on hyper-connectivity between software workloads. As one might expect, this connectivity is achieved through the use of APIs.

Invention of the REST API

In 2000, Roy Fielding completed his PhD at the University of California at Irvine. His PhD thesis, unlike most such works, includes arguably the first meaningful description of what we would now refer to as an API. Specifically, “Architectural Styles and the Design of Network-Based Software Architectures” ushered in a new era of programming style for the web, using a technique referred to as Representational State Transfer or REST.

The specific details of REST APIs are beyond the scope of this short summary, but we can outline some of the more salient constraints that help define this uniform set of software connector interfaces. The first design constraint in the REST style of programming involves stateless processing for all client-server interactions. By reducing API requests to a single transaction (versus including history), it becomes much easier to create proper “visibility, reliability, and scalability,” as Fielding explains in his thesis.
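To make the stateless constraint concrete, here is a minimal, purely illustrative sketch in Python (the endpoint and token are hypothetical; the `requests` library is used only as a convenient HTTP client) showing a request that carries everything the server needs, credentials and the resource identifier included, so no session history has to be kept between calls:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical REST service
TOKEN = "example-access-token"        # hypothetical credential

def get_order(order_id: str) -> dict:
    """Fetch one order. The request is self-contained: authentication
    and the resource identifier travel with it, so the server keeps
    no memory of earlier calls."""
    response = requests.get(
        f"{BASE_URL}/orders/{order_id}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(get_order("12345"))
```

Because every call stands on its own, any server in a pool can answer it, which is exactly where the visibility, reliability, and scalability benefits come from.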

Additionally, cache constraints are added to the REST API model to reduce the latency of interactions. The most central design constraint of REST APIs, however, is the uniformity of the interfaces that is inherent in the overall design. This is complemented by design layering, which reduces the complexity at a given layer (via abstraction of lower layers) and code-on-demand, which “allows client functionality to be extended by downloading and executing code in the form of applets or scripts,” again, as Fielding describes in his work.
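To illustrate the cache constraint, here is a second minimal sketch, again purely illustrative (Python with the Flask micro-framework and a made-up product catalog), of a resource handler that marks its responses as cacheable so clients and intermediaries can reuse the representation and cut interaction latency:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical read-only resource exposed through a uniform interface.
CATALOG = {"1": {"name": "widget", "price": 9.99}}

@app.route("/products/<product_id>", methods=["GET"])
def get_product(product_id):
    product = CATALOG.get(product_id)
    if product is None:
        return jsonify(error="not found"), 404
    response = jsonify(product)
    # Cache constraint: tell clients and intermediaries that this
    # representation may be reused for up to 60 seconds.
    response.headers["Cache-Control"] = "public, max-age=60"
    return response

if __name__ == "__main__":
    app.run(port=8000)
```

The handler also hints at the uniform interface: the resource is addressed by a URL, manipulated through a standard method (GET), and returned as a self-describing JSON representation.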

The implications of REST API design from Fielding’s PhD thesis were immediately felt across the entire web community. Soon after publication of the thesis, companies like Salesforce and eBay began to demonstrate how the programming style and associated uniform connector model could substantially increase their reach on the web. They quickly saw that APIs not only made their interfaces more standard, but made the services they provided to the external community much more accessible and more popular.

Contributing author: Matthew Keil, Director of Product Marketing, Cequence.


from Help Net Security https://ift.tt/2KR05nq

Mitigating cybersecurity risks for employees working remotely

Many IT specialists are supporting fully remote teams for the first time ever, so it’s important for everyone to operate with the same caution as (if not more than) they would if everybody was in an office. With an increased risk of employees falling prey to cyber attacks, business leaders must leverage new policies and technologies to keep their companies and employees safe.

Here are five tips for IT specialists to mitigate the cybersecurity risks while employees are working remotely:

1. Employ basic input/output system (BIOS) technology

Hardware platform security has become even more important. Sophisticated hackers are able to compromise or bypass operating systems’ security protections by gaining root access or compromising the BIOS software underneath the OS. With a predominantly remote workforce, ensuring that employee devices have capabilities like BIOS resilience is more important than ever.

Technologies like self-healing BIOS can help mitigate the risks of attacks below the OS, where detection and remediation are challenging. Having these safeguards in place ensures employees will not need to replace or reinstall hardware, provides detection and automatic recovery of the firmware in case of BIOS corruption or compromise by malware, and offers peace of mind.

2. Strategize against unsecured access points

No longer is work done just within the confines of the corporate network and access points. While this is something we were starting to see long before COVID-19, what has changed now is the almost overnight shift to work taking place exclusively outside of the confines of the four walls of the office.

While most of the world is under shelter-in-place restrictions and using their devices from home, it’s only a matter of time before workers across the globe begin heading back to shared workspaces, coffee shops, planes and everywhere else in between.

Addressing the risks posed by potentially logging onto a rogue access point is a vital consideration. Employees must be diligent in making sure that they are not logging onto the wrong Wi-Fi network (rogue access points often use a slightly altered name or number). IT specialists should continue to hold employee training sessions on the danger of unsecured access points.

3. Streamline administrator rights and employee credentials

Credential and access management have long been a challenge for IT teams, many of which are over-burdened and short-staffed due to critical talent shortages. Addressing the basics of making sure users don’t have administrator rights, only have access to the systems, repositories, shares and networks that they need, and only for as long as they need them, goes a long way toward mitigating credential theft – and, as a result, malicious access to more sensitive data and systems.

4. Have a “better safe than sorry” mindset with zero trust security

Zero trust goes beyond the usual marketing hype to emphasize access and privileges. The reality is that attackers are becoming increasingly sophisticated and operate like criminal corporations (i.e., they have a chain of command, an organized structure and financial motivation).

By adopting a zero trust model, we assume a “guilty until proven innocent” mindset in security. To frame it more gently, it’s about giving access and privileges based on a “need-to-know” basis.

5. Leverage contextual AI

The estimated current cybersecurity workforce is 2.8 million professionals, while the amount of additional trained staff needed to close the skills gap is 4.07 million professionals, according to (ISC)2. Combine this with attacker sophistication, data sprawl, cloud adoption, exponential growth in devices and more, and you have a recipe for disaster. To tip the scales in your favor, you have to leverage artificial intelligence at the endpoint.

These solutions are able to detect malicious activities and respond almost automatically to isolate the attack from the network and auto-immunize the endpoints against newly discovered threats. Some even offer the ability to roll back an endpoint to its pre-infected state. However, there is a caveat all developers and employers should understand – not all AI is built the same. As a security team, it is important to understand your challenges and leverage contextual AI when applicable.

While COVID-19 has challenged businesses to think about security in a new way, the risks will not vanish once employees start getting back to the workplace. For example, if any machines were compromised while employees worked from home, once reconnected to the corporate network those machines can offer cybercriminals a door into your business. It is thus vital for business leaders to employ these security measures now, preventing the potential for a reputation-damaging breach down the road.


from Help Net Security https://ift.tt/2YmqwJD

New infosec products of the week: May 1, 2020

Guardicore Infection Monkey now maps its actions to MITRE ATT&CK knowledge base

The latest version of Guardicore Infection Monkey now maps its actions to the MITRE ATT&CK knowledge base, providing a new report with the utilized techniques and recommended mitigations, to help security and network infrastructure teams simulate APT attacks and mitigate real attack paths intelligently.

Datadog Security Monitoring: Detect threats in real time, investigate security alerts

Datadog Security Monitoring combines and analyzes traditional security signals with performance and environment data from applications to provide unique real-time insights. This allows the security, dev and ops teams to rapidly identify security issues, pinpoint the affected system and perform remediation quickly.

Moogsoft Enterprise 8.0: Enabling Ops teams to accelerate incident detection and resolution

Moogsoft Enterprise 8.0 consolidates visibility and control of monitoring tools to help entire IT Ops and DevOps teams reduce noise, prioritize incidents, reduce escalations and ensure uptime. Working from anywhere, the IT operator can now analyze alerts, logs, metrics and traces to find and resolve the root cause of incidents before they become outages.

Bugcrowd Classic Pen Test: Increase pen testing speed, scale and quality

Leveraging Bugcrowd’s global network of uniquely-skilled and proven pen testers, Bugcrowd Classic Pen Test adds to the company’s Pen Test Portfolio, helping organizations reduce testing timelines while meeting critical compliance requirements and adhering to security best practices.

Guardsquare ThreatCast: Protecting mobile apps against suspicious activities and malicious users

With a multi-layered approach to application protection and a mobile security console, Guardsquare ThreatCast provides all the tools needed to assess threats in real time and the intelligence to protect mobile applications against suspicious activities and malicious users.

Obsidian Security lets security teams monitor Zoom usage

Using Obsidian, Zoom customers have enterprise-level monitoring, detection, and response capabilities from a security, compliance, and risk perspective. Obsidian generates insights and alerts related to a variety of risks and threats.

Cygilant Endpoint Security: Detecting malware and critical threats

Cygilant Endpoint Security is an agent-based solution that collects real-time security data from a company’s critical assets, detects suspicious files, services and other activity – and then streams alerts to the 24×7 Cygilant SOC for further investigation and action.


from Help Net Security https://ift.tt/35nlYUM

Surge in phishing attacks using legitimate reCAPTCHA walls

Cyber scammers are starting to use legitimate reCAPTCHA walls to disguise malicious content from email security systems, Barracuda Networks has observed. The reCAPTCHA walls prevent email security systems from blocking phishing attacks and make the phishing site more believable in the eyes of the user.

reCAPTCHA walls are typically used to verify human users before allowing access to web content, so sophisticated scammers are starting to use the Google-owned service to prevent automated URL analysis systems from accessing the actual content of phishing pages.

Researchers observed that one email credential phishing campaign had sent out more than 128,000 emails to various organizations and employees using reCAPTCHA walls to conceal fake Microsoft login pages. The phishing emails used in this campaign claim that the user has received a voicemail message.

Once the user solves the reCAPTCHA in this campaign, they are redirected to the actual phishing page, which spoofs the appearance of a common Microsoft login page. Unsuspecting users will be unaware that any login information they enter will be sent straight to the cyber scammers, who will likely use this information to hack into the real Microsoft account.

Steve Peake, UK Systems Engineer Manager, Barracuda Networks, comments: “In this difficult time, it is no surprise to see that cyber scammers are seeking increasingly sophisticated methods of stealing log-in credentials and data from unsuspecting remote workers.

“Fortunately, there are a number of proactive measures employers and business owners can take to prevent a security breach. Most importantly, users must be educated about the threat so they know to be cautious instead of assuming a reCAPTCHA is a sign that a page is safe.

“Furthermore, whilst reCAPTCHA based scams make it harder for automated URL analysis to be conducted, sophisticated email security solutions can still detect these phishing attacks using AI-based email protection solutions. Ultimately, however, no security solution will catch everything, and the ability of the users to spot suspicious emails and websites is key.”


from Help Net Security https://ift.tt/2yfwfX9

What’s happening with all things cloud: Existing and future cloud strategies

Cloud spend exceeds budgets as organizations expect increased cloud use due to COVID-19, according to a Flexera report.

“With employees working from home and more business interactions going digital,” said Jim Ryan, President and CEO of Flexera, “more than half of enterprise respondents said their cloud usage will be higher than originally planned at the beginning of the year due to the pandemic.

“Companies plan to migrate more services to cloud, yet they’re already exceeding cloud budgets. They will need to focus on optimizing workloads as they migrate in addition to cost management and governance to ensure operational efficiency.”

Organizations embrace multi-cloud

  • 93% of enterprises have a multi-cloud strategy; 87% have a hybrid cloud strategy
  • Respondents use an average of 2.2 public clouds and 2.2 private clouds.

Public cloud adoption continues to accelerate

  • 20% of enterprises spend more than $12 million per year on public cloud
  • More than 50% of enterprise workloads and data are expected to be in a public cloud within 12 months
  • 59% of enterprises expect cloud usage to exceed prior plans due to COVID-19
  • The top challenge in cloud migration is understanding application dependencies.

Cloud cost optimization

  • Organizations are over budget for cloud spend by an average of 23 percent, and expect cloud spend to increase by 47 percent next year
  • Respondents estimate that 30 percent of cloud spend is wasted.

Cloud initiatives and metrics

  • 73% of organizations plan to optimize existing use of cloud (cost savings), making it the top initiative for the fourth year in a row
  • 61% of organizations plan to focus on cloud migration
  • 77% of organizations use cost efficiency and savings to measure cloud progress.

Organizational approach to cloud

  • 73% of enterprises have a central cloud team or cloud center of excellence
  • 57% of cloud teams are responsible for governing infrastructure-as-a-service (IaaS)/platform-as-a-service (PaaS) usage costs.

Cloud challenges

  • 83% of enterprises indicate that security is a challenge, followed by 82% for managing cloud spend and 79% for governance
  • For cloud beginners, lack of resources/expertise is the top challenge; for advanced cloud users, managing cloud spend is the top challenge
  • 56% of organizations report that understanding cost implications of software licenses is a challenge for software in the cloud.

from Help Net Security https://ift.tt/2SnNe0o

Citrix App Protection enables companies to protect apps and data on unmanaged endpoints

When remote work moved from something a few people did on occasion to a mandate for nearly all employees, companies around the world scrambled to scale up their resources and enable it. Many fell short, leaving employees to use personal devices to access the systems and information they need to do their jobs. And that’s created a gaping security hole.

To help plug it, Citrix Systems has launched App Protection, which enables companies to protect apps and data on unmanaged endpoints and ensure their corporate systems and information remain safe.

“Endpoints are the penultimate control point for the implementation of device, application, and data security. The rapid acceleration of remote work sparked by the COVID-19 pandemic and proliferation of unmanaged personal devices being used for business has created a special challenge, as decentralization is not the friend of security,” said Frank Dickson, Program Vice President, Security & Trust, IDC. “And specialized and sophisticated tools are required to overcome it.”

Dion Hinchcliffe, VP and Principal Analyst at Constellation Research – and Executive Fellow, Tuck School of Business, Center for Digital Strategies, agrees. “The recent mass global shift to remote work has in part been enabled by the ability to use available devices at hand, including unmanaged ones. Yet this has opened up a vast new cybersecurity attack surface area and put even more burdens on workers struggling to adapt to their new environment,” he says.

“App Protection provides an invaluable safety net so both workers and employers can rest assured that remote work devices are not leaking critical information, allowing everyone to focus on what matters most: a safe, secure, and productive digital workplace.”

Business is now personal

As employees around the world adjust to the new normal of working from home, many are using whichever endpoint gives them the quickest access to the resources they need to get work done. And this often includes personal devices such as laptops, tablets and phones.

“Key logging and screen capture malware are common on these endpoints and provide bad actors with easy entry to corporate networks and sensitive information,” said Eric Kenney, Senior Product Marketing Manager, Citrix.

Malware beware

When present on a device, key logging malware captures each keystroke entered by a user, including user names and passwords. Screen-capture malware periodically takes a snapshot of the user’s screen, saving it to a hidden folder on the device or directly uploading it to the attacker’s server, where the information can be exploited. App Protection is uniquely designed to prevent this.

A blank stare

The unique feature thwarts keylogging and screen-capturing malware that may live on personal devices by scrambling keystrokes entered into a device and sending the attacker undecipherable text. It also prevents data exfiltration via screenshot malware by turning all screenshots into blank pictures.

With App Protection enabled, employees can stay productive by working on a personal, unmanaged endpoint without sacrificing security.


from Help Net Security https://ift.tt/35wehfr

Citrix launches virtual series empowering employees to be and do their best while working remotely

It’s being touted as the “new normal.” But for most companies and their employees, remote work is anything but. To help them adapt, Citrix Systems has launched Remote Works, a new virtual series designed to share tips and best practices for staying engaged and productive while working from home.

“Working from home is perhaps the biggest change in the way business is done that the world has ever seen and the speed with which it moved from an experiment to a requirement has many companies reeling,” said Tim Minahan, Executive Vice President, Business Strategy and Chief Marketing Officer, Citrix.

“At Citrix, we have been enabling remote work for more than 30 years. And we’re committed to leveraging our experience to help businesses adjust and empower their employees to be and do their best no matter where they are working.”

A unique collection of engaging podcasts, on-demand webinars and interviews, Remote Works aims to provide companies with insights into what it takes to enable and support remote work and reap the benefits it can provide.

“Companies that invest in technology to provide access to the applications and information employees need to be informed, collaborate, and get work done from anywhere in a safe and secure manner can manage resources in the dynamic way that unpredictable business environments demand and position themselves well for the future,” Minahan said.

But it takes more than just technology to keep employees engaged and productive – particularly in uncertain and challenging times like these. Recognizing this, Remote Works takes on a broad range of topics, including:

  • Employee experience
  • Personal productivity
  • Work-life integration
  • Digital wellness
  • Security and reliability
  • Business readiness

“Remote work is top of mind for companies around the world. And while some see it as a short-term fix to the COVID-19 problem, smart companies recognize it may be a long-term solution as they plan for what promises to be a radically different future,” Minahan said.

“The very same approaches and technologies that are helping organizations keep their employees safe and connected and their businesses running during the current crisis will provide new levels of agility to capitalize on new opportunities and thrive in the future.”


from Help Net Security https://ift.tt/3f6t58P

dotData AI-FastStart: Adding AI/ML models to BI stacks and predictive analytics apps

dotData, focused on delivering full-cycle data science automation and operationalization for the enterprise, announced dotData AI-FastStart, a new all-inclusive bundle of technology and services that includes a one year license to a fully-hosted version of dotData’s autoML 2.0 platform, plus training and support.

Available exclusively to North American customers who are not existing dotData clients, the dotData AI-FastStart program is designed to empower business intelligence teams to quickly and efficiently add AI/ML models to their BI stacks and predictive analytics applications.

At the core of the new program is dotData’s full-cycle data science automation platform, dotData Enterprise, which accelerates ROI and lowers the total cost of model development by automating the entire data science process that is at the heart of AI/ML.

“We are seeing a huge demand for AI and ML capabilities in the market, but finding that many companies either do not have the internal resources to launch a data science program, or don’t know how to get one started,” said Ryohei Fujimaki, founder and CEO of dotData.

“The AI-FastStart program was created as an all-inclusive bundle to help enterprises fast-track AI/ML deployments, and immediately realize value from their data.”

The dotData AI-FastStart Program includes:

  • 1 year full license to the award-winning dotData Enterprise AutoML 2.0 platform
  • Full hosting by dotData on an enterprise-grade secure cloud infrastructure
  • 12 remote training sessions for an unlimited number of users
  • Support from dotData’s data science team to onboard and co-develop the first AI use-case
  • “Worry free” cancellation for any reason within 45 days of sign up
  • Discounts on additional years of licensing and on additional computation nodes in year one

dotData provides AutoML 2.0 solutions that help accelerate the process of developing AI and Machine Learning models for use in advanced predictive analytics BI dashboards and applications.

dotData makes it easy for BI developers and data engineers to develop AI/ML capabilities in just days by automating the full life-cycle of the data science process, from raw business data through feature engineering to implementation of ML in production, utilizing its proprietary AI technologies.

dotData’s AI-powered feature engineering automatically applies data transformation, cleansing, normalization, aggregation, and combination, and transforms hundreds of tables with complex relationships and billions of rows into a single feature table, automating the most manual data science projects that are fundamental to developing predictive analytics solutions.
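dotData’s own pipeline is proprietary, but as a rough, hypothetical illustration of the manual work this kind of automation replaces, the short pandas sketch below (table and column names are invented) performs the classic join-and-aggregate step that turns relational tables into a single feature table:

```python
import pandas as pd

# Hypothetical source tables a BI team might start from.
customers = pd.DataFrame({"customer_id": [1, 2],
                          "segment": ["retail", "enterprise"]})
orders = pd.DataFrame({"customer_id": [1, 1, 2],
                       "amount": [120.0, 80.0, 950.0],
                       "returned": [0, 1, 0]})

# Aggregate the transactional table down to one row per customer ...
order_features = orders.groupby("customer_id").agg(
    order_count=("amount", "size"),
    total_spend=("amount", "sum"),
    return_rate=("returned", "mean"),
).reset_index()

# ... then join it back to the entity table to form a single feature
# table that a downstream ML model can consume.
feature_table = customers.merge(order_features, on="customer_id", how="left")
print(feature_table)
```

Automated feature engineering performs variations of this transformation at much larger scale and across many candidate aggregations, without an analyst hand-writing each one.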

dotData democratizes data science by enabling BI developers and data engineers to make enterprise data science scalable and sustainable. dotData automates up to 100 percent of the AI/ML development workflow, enabling users to connect directly to their enterprise data sources to discover and evaluate millions of features from complex table structures and huge data sets with minimal user input.

dotData is also designed to operationalize AI/ML models by producing both feature and ML scoring pipelines in production, which IT teams can then immediately integrate with business workflows.

This can further automate the time-consuming and arduous process of maintaining the deployed pipeline to ensure repeatability as data changes over time. With the dotData GUI, AI/ML development becomes a five-minute operation, requiring neither significant data science experience nor SQL/Python/R coding.


from Help Net Security https://ift.tt/2SpBgDj

Verint unveils new program giving orgs insights into the productivity of work-from-home agents

Verint Systems, The Customer Engagement Company, announced a program that gives organizations metrics and multichannel insights into the productivity of work-from-home agents, and identifies connectivity and other engagement challenges.

The program is available to new customers as well as existing customers of Verint’s Desktop and Process Analytics (DPA) software. With a small services engagement, organizations can gain real-time insights to manage and support their work-from-home agents effectively.

“The transition to working from home has been challenging for everyone – agents, customers, managers and organizations,” says Verint’s Nancy Treaster, SVP and general manager, strategic operations.

“Verint is working diligently with organizations whose business operations have changed overnight – as they deploy remote agent environments that require accurate data, insights and better visibility into compliance and application usage. With application analysis, managers across departments can easily identify anomalies, technical or system issues, or employee training needs.”

A business analyst at a global insurer recently shared, “Our organization is clamoring for the Verint DPA solution now that we can’t physically see and coach employees. With the rapid transition to work-from-home, we anticipated an increase in idle time and it did spike initially.

“Now a month later, however, idle time has come down to regular levels. This is insight we would not have had without Verint DPA and its application analysis software.”


from Help Net Security https://ift.tt/2Wil2x5

PDI adds a new employee self-service mobile app to its PDI Enterprise Workforce software

PDI, a global provider of enterprise resource planning (ERP), fuel pricing, supply chain logistics, and loyalty solutions for the convenience retail and petroleum wholesale industries, announced it is adding a new employee self-service mobile app to its PDI Enterprise Workforce software.

PDI Employee Self-Service provides c-store employees real-time access to accurate shift coverage, schedule transparency and pay stub information. Not only will home office human resource teams save time avoiding manual requests and errors, but employees will also gain real-time information about what matters most to them — payment and scheduling.

“C-store operators will benefit from timely and accurate employee information, and workers now have control and visibility into their daily tasks and pay,” said Drew Mize, executive vice president and general manager of ERP Solutions at PDI.

“We worked closely with our customers to develop a feature rich app to provide much-needed transparency, especially during times like these when mobile-friendly tools make our lives easier.”

C-store employees can now interact with important information through a mobile app, message with managers and colleagues, or manage shifts and coordinate with other employees to provide coverage. The module has real-time management data and an updated site schedule.

With the coronavirus heavily impacting employee scheduling, this app can also help support flexible schedules for essential c-store workers, addressing the need for real-time shift coverage, while eliminating the vulnerability that manual errors or possible email spam filters can create.


from Help Net Security https://ift.tt/2yUQBFg

NextgenID’s new Identity-as-a-Service model features zero capital outlay for identity credentials

NextgenID, a technology leader in trusted identity assurance and credentialing solutions, announced its frictionless procurement model offering to provide federal agencies with additional payment options for the ID*Capture Kiosk and Supervised Remote In-person Proofing (SRIP).

With the Identity-as-a-Service (IDaaS) pay-as-you-go business model, agencies are able to immediately deploy and exercise state-of-the-art equipment and software on-site without the need for a capital expenditure. Instead, the system can be paid for over time through identity proofing transactions.

Current practices require agencies to set aside a tremendous budget for purchasing enrollment equipment in addition to paying for the staffing at each credentialing station. NextgenID’s all-in-one kiosk delivers automation, speed, trust, and security for capturing the personal biometrics and information mandated for PIV, PIV-I, CIV, TWIC, FRAC and CAC card enrollment.

The SRIP technology delivers everything needed to remotely manage the complete proofing process as if the operator were there in person. Operators conduct the process and guide the user remotely, while managing language barriers and facilitating an accurate and efficient workflow.

“Our frictionless model allows us to offer our revolutionary technology to agencies at a lower cost, in addition to saving them money long-term,” said Mohab Murrar, CEO of NextgenID.

“Customers have the choice of using capital to purchase the equipment upfront, paying completely by transaction, or a hybrid of a lower capital payment with a reduced transaction fee. This flexibility allows government agencies to take advantage of our solutions in an efficient and cost-effective way.”

The IDaaS frictionless model pays for itself and continues to reduce operational costs year after year. The solution combines the ID*Capture Kiosk and software with the SRIP solution, which is 100% compliant with NIST SP 800-63-3 requirements.

Direct savings: Personnel staffing requirements are reduced on the order of 10 to 1 and transaction times are reduced up to 50%.

Indirect savings: Physical footprint decreases, hours of service are extended, language preferences and special needs are supported.

NextgenID’s ID*Capture Kiosk and unique SRIP solution is the first significant upgrade to the HSPD-12 identity credential issuance process since HSPD-12 was issued years ago. The ID*Capture Kiosk can be shipped and installed within 30 days.


from Help Net Security https://ift.tt/2WjiVca

A-LIGN A-SCEND 2.0: Enabling an anytime, anywhere approach to compliance for anyone

A-LIGN, a technology-enabled security and compliance partner trusted by more than 2,400 companies, announced the launch of A-SCEND 2.0, its proprietary compliance management platform that enables an anytime, anywhere approach to compliance—for anyone.

A-SCEND 2.0 centralizes evidence collection and standardizes compliance requests, making it possible to consolidate multiple audits at once. This streamlined approach to compliance enables corporations to establish trust faster, so they can win new business sooner.

An additional ROI is achieved by cutting costs and saving time, which also enables businesses to focus on the more impactful work of digital transformation.

According to Gartner, “Consolidate audits when there is a need to obtain more than one certification or attestation. Consolidate audit planning, audit data gathering, interviews and evidence collection efforts to result in fewer audits with multiple security certifications/attestations, and leverage one certification provider.”

Business leaders are facing a mandate to embrace digital transformation to remain competitive—an imperative that has only become more urgent in the midst of a global pandemic. This new reality requires enabling a remote workforce, introducing a new class of cybersecurity risks and compliance challenges into an already complex and changing landscape.

The demand to demonstrate compliance to remain competitive has never been greater. Simultaneously, enterprise maturity models are pressuring organizations to improve their audit processes. Organizations that endorse a strategic approach to compliance and technology-enabled services are feeling less pain.

“Taking a strategic approach to compliance can save a lot of time and money, but you have to look at the big picture and plan ahead. Internally it’s critical to create standard policies and procedures, and externally the key is to consolidate audits with a one-stop shop,” said Nora Pan, VP, Products & Technology Standards & Compliance, PMO and Operations, TIBCO.

“We picked A-LIGN because of its knowledge across multiple standards. Its technology-enabled service streamlines and standardizes the audit process, so our teams can get back to their day jobs. The end result improves our efficiency, our interoperability and our integration across products and services—basically, we can do more with less.”

A-SCEND 2.0 streamlines compliance

A-LIGN’s compliance management platform is purpose-built for assessment, and designed for the end user with minimal jargon and maximum performance. A-LIGN combines its depth of experience working for more than 2,400 clients on 6,000 audits, with the breadth of its expertise across SOC, ISO, HITRUST, FedRAMP, FISMA, PCI DSS, HIPAA/HITECH, and others.

A-LIGN has empowered its clients to collect more than 1.2 million pieces of evidence for their audits, informing the design of A-SCEND 2.0 to streamline the audit process.

“Some companies struggle with compliance because they manage audits with spreadsheets and emails, but with A-SCEND 2.0 the tool used to collect evidence is the same tool used to conduct audits,” said Gene Geiger, CTO, A-LIGN.

“Compliance is complex and time-consuming, but it shouldn’t have to be a full-time job—that’s why A-LIGN is transforming compliance by enabling an anytime, anywhere approach to make audits accessible to anyone.”

Key features and benefits of A-SCEND 2.0 include:

  • Centralized evidence collection—Save time by centralizing evidence collection with one-click/batch uploading.
  • Standardized compliance requests—Eliminate duplicate requests by automatically generating requests that apply evidence to multiple framework criteria.
  • Consolidated audit process—Minimize capital and operational expenses by uploading evidence throughout the year to conduct a single annual audit.
  • Modern UI/UX—Centralize project management, track workflows, enhance visibility, and integrate communication and collaboration. A-SCEND 2.0 is intuitive and easy-to-use.
  • Security by design—A-LIGN maintains its own independent SOC 2 Type 2 report and hosts A-SCEND 2.0 on the Google Cloud. A-SCEND 2.0 delivers additional security controls, including two-factor authentication and database encryption.

“Technology plays a critical role in compliance, but it is important not to overlook the human element. Just as you wouldn’t complete your corporate taxes without an accountant, you need a qualified guide to help navigate the complexity of compliance,” said Scott Price, CEO, A-LIGN.

“A-LIGN is unifying technology and humanity with its end-to-end compliance management solution and best-in-class experience—it is more than a tech-enabled service, it is a human-enabled service. Together, we are equipping our clients with new efficiencies allowing them to elevate their business to new heights.”


from Help Net Security https://ift.tt/2SiXk2z

AtScale platform updates enable orgs to leverage multidimensional business analysis in the cloud

AtScale, the intelligent data virtualization provider for advanced analytics, announced expanded security features, Autonomous Data Engineering enhancements, and dynamic scaling capabilities in the latest AtScale 2020.2 platform.

The release enables organizations to leverage multidimensional business analysis in the cloud, or Cloud OLAP, and includes native availability in the AWS, Microsoft Azure and Google Cloud Marketplaces.

“AtScale provides enterprises with a single interface to manage any combination of cloud and traditional data platforms for business intelligence applications,” said Christopher Lynch, Executive Chairman and Chief Executive Officer, AtScale.

“Our customers are increasingly deploying AtScale to thousands of data analysts, both internal and external, who perform Cloud OLAP [COLAP] on massive data sets. The dynamic scaling capabilities in AtScale 2020.2 maximize performance while minimizing infrastructure costs for a virtually unlimited number of concurrent users.”

“Tyson Foods leverages AtScale for OLAP use cases on Google Big Query across a number of the company’s mission critical business functions,” said Chad Wahlquist, Director of Data Strategy and Technology, Tyson Foods.

“AtScale’s Cloud OLAP and Autonomous Data Engineering™ capabilities seamlessly deliver interactive query response times while minimizing load and improving concurrency across all of our operational data.”

AtScale’s intelligent data virtualization platform redefines traditional data virtualization, enabling enterprises to deliver on the promise of cloud transformation. The adaptive analytics platform provides customers with access to secure self-service analysis, while reducing compute costs, improving query performance and enhancing user concurrency.

AtScale’s 2020.2 platform release includes:

  • Dynamic scaling support for the company’s query engine.
  • Unified hybrid cloud authentication and authorization capabilities to bridge cloud, hybrid cloud and on-premises data stores.
  • Enhanced controls to boost Autonomous Data Engineering strategies and performance.
  • Availability in AWS, Azure and Google Cloud Marketplaces.
  • Support for Azure Active Directory, LDAPS and SAML.

“The recent ‘Big Data & Analytics Maturity 2020 Survey Report’ revealed that 79% of enterprises use multi-cloud or cloud strategies,” said Dave Mariani, Co-founder and Chief Strategy Officer, AtScale.

“In addition, 82% of survey respondents believe that it’s important to have consistent, integrated security and governance for their data in the cloud. AtScale’s new platform release enables enterprises to achieve that.”

This news comes on the heels of AtScale’s announcement that the company is offering free access to AtScale’s COVID-19 Cloud OLAP Model. The model was built to analyze COVID-19 data sets, including Boston Children’s Hospital’s COVIDNearYou.org and Starschema: COVID-19 Epidemiological Data, which is available through Snowflake’s Data Exchange.


from Help Net Security https://ift.tt/2VSDrRX

Google announces cull of low-quality, misleading Chrome extensions

With Google Chrome being by far the most widely used web browser, Google must constantly tweak protections, rules and policies to keep malicious, unhelpful and otherwise potentially unwanted extensions out of the Chrome Web Store. The latest change of that kind has been announced for August 27, 2020, when Google plans to boot “low-quality and misleading” Chrome extensions from the CWS.

The announced changes

According to Google, there are currently around 200,000 browser extensions on the CWS, and many users have trouble finding exactly what they want because they have to wade through a multitude of copycat apps, apps with fake reviews and ratings, apps with misleading functionalities, and so on.

In order to make life easier and safer for users, Google will forbid developers and their affiliates from submitting/publishing:

  • Multiple extensions that provide duplicate experiences or functionality (e.g., wallpaper extensions that have different metadata but provide the user with the same wallpaper when installed)
  • Extensions whose only purpose is to install or launch another app, theme, webpage, or extension
  • Extensions that abuse notifications by sending spam, ads, promotions, phishing attempts, or unwanted messages that harm the user’s browsing experience
  • Extensions that send messages on behalf of the user without giving the user the ability to confirm the content and intended recipients
  • Extensions that have misleading, improperly formatted, non-descriptive, irrelevant, excessive, or inappropriate metadata (e.g., description, developer name, title, icon, etc.). “Developers must provide a clear and well-written description. Unattributed or anonymous user testimonials in the app’s description are also not allowed,” Google explained.

Finally, developers are forbidden from artificially manipulating how the Chrome Web Store orders and displays their extension, from providing incentives for users to download their extension, and from inflating product ratings and reviews.

Developers are urged to review the changes, read the spam policy FAQ to better understand them, and to start reviewing their apps and removing those that fall afoul of the new spam policy before the August deadline.

While Google’s intentions are laudable, it remains to be seen how strict they will be about removing misleading Chrome extensions and how effective they will be in preventing such extensions from being published on the CWS in the first place.


from Help Net Security https://ift.tt/2VT8aP3

Support Workers by Avoiding These Companies Tomorrow


If you can, avoid using Amazon, Instacart, Whole Foods, Walmart, Target and FedEx tomorrow, as their workers are planning a walk-off to protest their employers’ unprecedented profits, which are coming at the cost of employees’ health and safety. On Friday, May 1, employees will either call in sick or walk off the job during their lunch break.

U.S. has weak labor protections

As we’ve reported before, when it comes to labor protections, the U.S. ranks at the bottom of developed countries. This includes unemployment benefits, workplace protections, as well as weakened collective bargaining powers.

When it comes to staying safe and weathering job losses, workers don’t have a lot of options, a fact that has become especially stark these past few months given the Department of Labor’s recently issued announcement that employees who refuse to work out of a general sense of fear are ineligible for pandemic unemployment assistance.

For a lot of the workers who are preparing to walk off the job, their demands have been simple: personal protective gear, health care benefits, paid leave, and hazard pay.

Demands for safer conditions still unanswered

Workers’ demands have, to a large extent, gone unanswered. For example, although Instacart supposedly offers paid sick leave to workers who get sick with COVID-19, accessing it is almost impossible, as the company doesn’t accept doctors’ notes.

These issues aren’t unique to a single company or industry, a fact that the organizers of this latest effort acknowledge.

In addition to dealing with hazardous working conditions and inadequate pay, workers are also watching their employers get richer off their hard work. Jeff Bezos’ net worth has increased by an estimated $25 billion since the start of this pandemic, all while his workers have reported inadequate pay and hazardous working conditions. Bezos is not an outlier—since the start of the pandemic, the billionaire class has added $308 billion to their net worth, at the same time unemployment claims have topped 26 million in the past five weeks.




from Lifehacker https://vitals.lifehacker.com/support-workers-by-avoiding-these-companies-tomorrow-1843164788

Protect Unlimited Devices for a Year With McAfee Total Protection, $30 Today Only

McAfee Total Protection 1-Year License (Unlimited Devices) | $30 | Amazon Gold Box

Whether you’re working from home or making some noobs upset on Call of Duty, your increased internet usage calls for increased caution against the digital dangers that lurk about. The least you can do is set up antivirus, and a year-long license to protect unlimited devices with McAfee’s total protection suite just happens to be $30 at Amazon Gold Box. You’ll be able to protect all your devices, and it’s compatible with PC, Mac, and mobile.



from Lifehacker https://ift.tt/2QY00lq

Bumper Adobe update fixes flaws in Magento, Bridge and Illustrator


After a light Patch Tuesday earlier this month, Adobe has issued an unexpectedly large bundle of critical security fixes for flaws affecting its Magento, Bridge and Illustrator products.

These might look like casual out-of-band fixes, but in fact Adobe often staggers its patches throughout the month.

Nevertheless, with a total of 35 CVEs to fix in this update, including 24 described as ‘critical’, it’s likely the company has been saving up this patching haul from its bug bounty programme for some time.

Users will be pleased to have them, however, given how many can be exploited remotely to compromise a target system and are given high CVSS ratings.

Unfortunately, there appear to be a few of these, with users of Bridge, a component of Adobe Creative Suite, getting the most work to do. There are 17 fixes, of which 14 are critical, collectively identified as APSB20-19. The vulnerabilities affect version 10.0.1 and earlier for Windows, and the update takes Bridge to version 10.0.4 for both Windows and macOS.

The update for the Magento ecommerce platform, covering both the Open Source and Enterprise versions (previously known as Community and Commerce), offers fixes for 13 CVEs, including six rated critical, in APSB20-22; each is individually listed with a PRODSECBUG number.

All six critical flaws allow either command injection or a security bypass. The affected Open Source version is 2.3.4, which updates to 2.3.4-p2. For Magento Enterprise Edition, the affected versions are 1.14.4.4 and earlier, which update to version 1.14.4.5.

Finally, Illustrator 2020 gets fixes for five critical flaws in update APSB20-20. The affected version is 24.0.2 and earlier, with the update taking the software to version 24.1.2.

Applying these patches isn’t window dressing – Adobe products are quickly targeted once vulnerabilities become known.

For instance, recent attacks on Magento have included an exploit targeting an SQL injection flaw in April last year. And there’s always the tide of card skimming attacks on the platform to contend with.

It doesn’t help that many sites still run Magento 1.x, which prompted Visa to warn earlier this month that sites should be upgraded to 2.x before the software’s end-of-life in June this year and to dodge the skimming threat posed by Magecart.



from Naked Security https://ift.tt/2YgRVNf

Coronavirus delays trial of alleged Russian hacker a third time


Starting in 2012 and on up to his arrest while mulling a menu in a Czech restaurant in 2016, Yevgeniy Nikulin allegedly triggered mega-breaches at big-name online companies LinkedIn, Dropbox and Formspring.

Justice has already been slow in this case, and the pandemic isn’t helping: His trial has been postponed for a third time.

Nikulin’s trial in San Francisco federal court began 9 March but was paused on 18 March because of the coronavirus. It was supposed to restart on 4 May, but on Tuesday evening, US District Judge William Alsup postponed it yet again, rescheduling it to 1 June because some jurors and witnesses were hesitant about showing up at the courthouse during the pandemic.

That’s cutting it close. At this point, San Francisco’s lockdown has been extended to the end of May. If the trial gets extended past 1 June, Judge Alsup may well declare a mistrial, meaning that it will all have to start over from the beginning.

“I’m a little bit worried”

You can read jurors’ thoughts on the matter in their responses to a court questionnaire proposed by the lawyers and approved by Judge Alsup. They range from a “let’s plough ahead” attitude to a juror who says they’re a doctor on the frontlines, others’ concerns about underlying conditions and age, and somebody else whose friend became ill and doesn’t know if they themselves were exposed or not.

More than half of the 14 jurors and alternates – a total of nine – expressed concern about resuming the trial.

From one juror’s response:

I’m a little bit worried as my immune system has not been very strong, but if we all have fair distance, I think it should be okay. … I’m wondering if there’s any way we can do this remotely via video or the phone at home.

Law360 reports that on Monday, prosecution and defense were in the midst of an argument over whether or not video depositions would constitute constitutional testimony when the trial resumed. Two of the witnesses are regarded as high risk because of underlying health conditions:

  • Ganesh Krishnan, a former LinkedIn employee who led its security response to an attack that led to millions of passwords being put up for sale on the dark web, and
  • Federal investigator Emily Odom, who’s set to testify about an alleged co-conspirator’s ties to Nikulin.

The defense said no to remote testimony, however: Nikulin said that it would violate his Sixth Amendment right to confront witnesses. The Sixth Amendment also grants criminal defendants the right to a public trial that’s speedy, but no absolute time limit has ever been explicitly established.

The charges

Nikulin, a Russian citizen from Moscow, was 29 when he was indicted in 2016. According to the indictment, he allegedly targeted a LinkedIn engineer with malware so as to steal his access credentials. Then, he allegedly did the same thing to Dropbox.

Nikulin stands accused of damaging the computers of both the LinkedIn employee and Dropbox by “transmitting a program, information, code, or command”. In other words, infecting the systems with malware. He and his co-conspirators were allegedly plotting to do the same to Formspring, a social networking service now known as Spring.me that’s a portal for the dating service Twoo.

He’s also suspected of trying to break into WordPress maker Automattic’s systems, by similarly posing as employees. He’s looking at a potential 10 years in prison if found guilty, though maximum sentences are rarely handed out.

Cold War tug-of-war

In March 2018, Nikulin was extradited from Prague to the US, where he pled not guilty. The extradition – the first delay in this much-delayed trial – took 18 months because Russia didn’t want it to happen. The country said it wanted to prosecute him for separate allegations, but as The Register has reported, it’s claimed that Russia actually recruited Nikulin to do hacking on its behalf and that the charges it wanted to bring against him were trivial in comparison to those in the US.

After he finally got to the US, Nikulin’s trial was delayed yet again by concerns over his mental health. He’s gone through multiple mental competency evaluations, ordered after his refusals to meet with psychiatrists or leave his cell.

Two doctors came to differing conclusions. A Russian doctor picked by the defense found that Nikulin was unfit to stand trial due to chronic post-traumatic stress disorder (PTSD) stemming from a family history of mental illness and trauma, abuse at the hands of his father, and his brother’s suicide.

The court didn’t put much stock in the evaluation, noting that it was the doctor’s first formal competency evaluation of a defendant and that he didn’t use common forensic tools.

The other doctor, picked by the prison board, concluded that Nikulin is fit to stand trial, his only problem being that he’s a narcissist: somebody prone to “a pervasive pattern of grandiosity, need for admiration, and lack of empathy.” In fact, according to court documents, Nikulin believes that he’s the only person competent enough to defend himself.

At any rate, the court agreed that he is in fact able to understand the charges he’s facing, to properly assist in his own defense, and to follow trial proceedings with the help of a Russian translator. … whenever these pandemic conditions allow that trial to proceed, that is.


from Naked Security https://ift.tt/2YlgCZ2

Turn Your Quarantine Video Chats into a Podcast


Screenshot: David Murphy (Anchor)

Even though most of your coronavirus-quarantine video chats are probably dull, that doesn’t mean that they’re meaningless. In fact, some might be so interesting that you might want to turn them into a podcast. (At least, that’s how I feel about my new virtual Dungeons & Dragons sessions.)

That, or perhaps you want to start a podcast and have no idea how. Well, we can fix that, too. And by “we,” I mean “Anchor,” a free service you can use to bring your thoughts to the masses in an episodic format. Anchor dropped a new feature yesterday that you can use to turn video recordings into podcasts, which makes it even easier to transform your dorky Zoom calls into an editable (and publishable) chunk of audio.

To get started, sign into your Anchor account (or sign up for one, if you don’t have one) and visit the “Create your episode” page. Click on the appropriate link to upload your video file, per the following restrictions:

Anchor will start extracting the audio from your video. When it’s done, you have a bunch of options: you can rename the snippet you uploaded, split it into multiple segments, add This American Life-style audio transitions, or simply click on “Save Episode” to start finalizing the details of the full podcast you’re looking to create. You can also trim and split your audio as needed, but you’ll have to be using Anchor’s iOS or Android app for that.

Me? I’d download the converted audio file from Anchor and process all my trimming in a third-party app like Audacity. I’d then reupload the files individually to Anchor and start building my podcast that way. This is kind of like the “advanced mode” of editing, but I think it would be even faster, ultimately, than using Anchor’s tools. However, if fussing around with waveforms isn’t your bag, stick with Anchor.

How to record your video chats

If you’re unsure how to even get started recording a video chat, don’t worry. It’s pretty easy to do, no matter what service you use. Anchor provided a handy list to the help pages for a bunch of video chat services, which we’ve lovingly copied below:

Note that Zoom and Google Meet require you to have a paid subscription to record video chats—free users need not apply. Though, I suppose you could also get fancy in this case and record the audio that’s coming out of your PC. That’ll defeat the point of uploading a video file to Anchor, but it will save you a good amount of space on your drive (and processing power). The things we do for personalized podcasts during a quarantine.


from Lifehacker https://ift.tt/2KOJAZ4

How to thwart human-operated ransomware campaigns?

Most ransomware campaigns hitting healthcare organizations and critical services right now are just the final act of a months-long compromise.

“Using an attack pattern typical of human-operated ransomware campaigns, attackers have compromised target networks for several months beginning earlier this year and have been waiting to monetize their attacks by deploying ransomware when they would see the most financial gain,” says the Microsoft Threat Protection Intelligence Team.

Organizations that have yet to witness the final act (data exfiltration, file encryption) may have time to prevent it altogether and boot the attackers out before more damage is done.

Of course, those who have checked for organization-wide compromise and found nothing are the luckiest ones, but should nevertheless put up protections and mitigations as soon as possible.

Skilled attackers and a common attack pattern

“Human-operated ransomware attacks represent a different level of threat because adversaries are adept at systems administration and security misconfigurations and can therefore adapt to any path of least resistance they find in a compromised network,” the team explained.

“If they run into a wall, they try to break through. And if they can’t break through a wall, they’ve shown that they can skillfully find other ways to move forward with their attack. As a result, human-operated ransomware attacks are complex and wide-reaching. No two attacks are exactly the same.”

They might not be exactly the same, but they are variations on a common attack pattern – the attackers achieve initial access via vulnerable and unmonitored internet-facing systems, steal credentials, move laterally, establish persistence on the systems and networks and, finally, deploy the ransomware payload.


Mitigation

For the initial step, attackers usually:

  • Brute-force RDP endpoints or Virtual Desktop endpoints without multi-factor authentication (MFA) – see the exposure-check sketch after this list
  • Exploit misconfigurations of web servers (e.g., IIS), backup servers, systems management servers, electronic health record (EHR) software, etc.
  • Exploit vulnerabilities in older, no longer supported platforms (e.g., Windows Server 2003 and 2008)
  • Exploit vulnerabilities in widespread solutions like the Citrix Application Delivery Controller (ADC) systems (e.g., CVE-2019-19781) and Pulse Secure VPN systems (CVE-2019-11510).
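
On the defensive side, the first bullet above is also the easiest to check for yourself. The sketch below is a minimal Python exposure check that simply tests whether anything in a list of addresses answers on the default RDP port; the addresses are placeholders, it says nothing about whether MFA is enforced, and it is no substitute for a proper external attack-surface scan.

    import socket

    # Placeholder addresses; in practice, feed in your own external IP ranges.
    HOSTS = ["203.0.113.10", "203.0.113.11"]
    RDP_PORT = 3389

    def is_port_open(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in HOSTS:
        if is_port_open(host, RDP_PORT):
            print(f"{host}: RDP is reachable - confirm it sits behind a gateway with MFA")
        else:
            print(f"{host}: no RDP listener reachable")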

Microsoft thinks it likely that CVE-2019-0604 (affecting Microsoft SharePoint servers), CVE-2020-0688 (affecting Microsoft Exchange Server), and CVE-2020-10189 (affecting Zoho ManageEngine Desktop Central) will also soon be exploited by these attackers.

Of course, attackers are not averse to simultaneously trying to deliver the ransomware via phishing emails or downloader Trojans that may already be present on enterprise systems.

While fixing those weak spots is imperative, it may already be too late, so enterprise administrators and cybersecurity teams must also search for indications that their systems and networks have been breached and, if they find any, start remediation immediately.

Detection and remediation

The Microsoft Threat Protection Intelligence Team has shared possible indicators of compromise for human-operated ransomware campaigns, such as presence of malicious PowerShell scripts, penetration testing tools, suspicious access to Local Security Authority Subsystem Service (LSASS) or suspicious registry modifications, security event logs that have been tampered with, and more.
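
One of those indicators, tampered event logs, lends itself to a quick self-check on Windows. The sketch below uses Python to call the built-in wevtutil utility and look for Event ID 1102 (“the audit log was cleared”) in the Security log; it assumes it is run with administrative rights on the host being checked, and it is only a starting point, not a replacement for Microsoft’s full guidance.

    import subprocess

    # Windows-only: Event ID 1102 in the Security log means "the audit log was cleared",
    # a common sign that someone has tried to cover their tracks. Requires admin rights.
    QUERY = "*[System[(EventID=1102)]]"

    result = subprocess.run(
        ["wevtutil", "qe", "Security", f"/q:{QUERY}", "/c:10", "/f:text"],
        capture_output=True,
        text=True,
    )

    if result.stdout.strip():
        print("Log-clearing events found - investigate who cleared the log and when:")
        print(result.stdout)
    else:
        print("No log-clearing events returned by this query.")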

They’ve also provided advice on how to go about eradicating the attackers’ presence completely and mitigating the fallout.

“As ransomware operators continue to compromise new targets, defenders should proactively assess risk using all available tools. You should continue to enforce proven preventive solutions—credential hygiene, minimal privileges, and host firewalls—to stymie these attacks, which have been consistently observed taking advantage of security hygiene issues and over-privileged credentials,” they noted.

Keith McCammon, co-founder and CSO of threat detection and response specialist Red Canary, says that the fact that ransomware actors continue to successfully leverage some textbook breach tactics underscores the need not just for better preventative controls, but for robust detection coverage, careful investigation, and proactive hunting for threats that other controls have missed.

“Microsoft’s dedication to preventing and stopping these everyday ransomware attacks is refreshing in a world where many security vendors focus their attention primarily on splashy detection of nation-state actors,” he added.


from Help Net Security https://ift.tt/2SCeW9V

Wednesday, April 29, 2020

As companies rely on digital revenue, the need for web and mobile app security skyrockets

As non-essential businesses have been forced to shut their doors around the world, many companies that previously relied heavily on the brick-and-mortar side of the business are now leaning more on revenue from their digital platforms. By 2023, according to research performed by Statista, applications may generate nearly $935 billion in revenue. With increased reliance on these applications and increasing customer traffic, security will play a critical role.


Although the use of applications has steadily increased, the difference in the ways that web and mobile applications are protected is not widely understood. Additionally, many companies that have been using security tools for their web applications may assume that extending those tools to mobile is difficult, but it isn’t.

Let’s delve deeper into the similarities and differences in mobile and web apps, and what protection for each of those platforms looks like.

Mobile applications

When it comes to mobile applications, the customer using a service has a device whose operating system stores data locally. Because sessions and identities are usually saved, the app knows who the user is when it’s opened, and the user’s data lives on that particular device. If the application is hacked, the cybercriminal can also access the personal information and sessions the application uses to remember and authenticate the user.

Web applications

Unlike mobile applications, web applications don’t have long-term memory (although they do perform some caching). This means that they do not save a large amount of data in the same way that mobile applications do. If a web application is hacked, the cybercriminal has gained a foothold, but not instant access to user data. The foothold can later be leveraged to access back-end databases or other sensitive places within the company’s network. This can lead to a potential breach.

Application security for both types of apps

Both types of applications can be protected through the right type of application security testing. Mobile application security testing (MAST) and web application security testing tools are easily accessible nowadays. According to research performed by WhiteHat Security, organizations that perform scans while the application is in production have a lower chance of being breached. Additionally, organizations that include security in DevOps are able to lower the risk of a breach, reduce costs, and improve time to market.

Security testing for web applications

Web application security testing focuses mainly on the relationship between the request and the response. Because the size and complexity of websites have increased over time, so has the need for web app security testing tools that can contextualize the risk carried by the vast amounts of data they collect while spotting anomalies and identifying vulnerabilities.

There are two types of tools that can achieve this: dynamic application security testing (DAST) and static application security testing (SAST).

DAST scans apps on an ongoing basis once they have been deployed, and SAST scans applications at the pre-production level. Combining both DAST and SAST is a great way to strengthen the application’s security not only through the DevOps lifecycle, but also into production when the app is live and in use.
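
To make the distinction concrete, here is a deliberately tiny example of the kind of check a DAST tool performs against a running application: send a request and inspect the response. The sketch assumes the Python requests library and a placeholder staging URL, and it covers only missing security headers, a small fraction of what commercial DAST suites test.

    import requests

    # Placeholder target; point this only at a staging copy of your own application.
    TARGET = "https://staging.example.com/"

    # A few response headers that scanners routinely flag when missing.
    EXPECTED_HEADERS = [
        "Content-Security-Policy",
        "Strict-Transport-Security",
        "X-Content-Type-Options",
        "X-Frame-Options",
    ]

    response = requests.get(TARGET, timeout=10)
    print(f"{TARGET} responded with HTTP {response.status_code}")

    for header in EXPECTED_HEADERS:
        if header not in response.headers:
            print(f"Missing security header: {header}")

SAST, by contrast, would inspect the application’s source code for the same classes of weakness before it is ever deployed.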

All about MAST

MAST looks at the coordination between the request and its response, and also how they are handled within the operating system. The best MAST approach uses both dynamic and static automated scanning in addition to manual mobile application-layer penetration testing. This offers coverage throughout the entire DevOps lifecycle. It also tackles compliance requirements, reduces risk and produces safer mobile apps that stay secure against potential attacks.

Adding a mobile app to a company’s product line-up should not be nerve-wracking, even in these stressful times. By adding application security testing when implementing these applications, companies can save themselves and their customers from major data breaches and give the business time to appreciate the benefits of having an application – rather than fretting over risks.


from Help Net Security https://ift.tt/2YiB5h2

Keeping your app’s secrets secret

The software development process has vastly changed in this past decade. Thanks to the relentless efforts of the cloud and virtualization technology providers, we now have nearly limitless compute and storage resources at our fingertips. One may think of this as the first wave of automation within the application development and deployment process.


With the rise in automation, machines must authenticate against each other. Authorization is nearly implicit in this handshake. Secrets are increasingly used by applications and (micro) services as a bootstrapping mechanism for initiation and continuity in operations. However, these secrets, which are largely credentials, need safe keeping and secure access in order to ultimately protect the end user. Left unmanaged, secrets sprawl over time, leading to leaks and their downstream consequences.

In the past, programmers, testers, and release managers found radically new ways to build and deliver applications from development sandboxes to production environments. The emphasis shifted to more rapid software delivery for teams, and the classic waterfall model was no longer as desirable to the consumers of the technology. Agile quickly became the buzzword, and nearly every software team strived to become leaner in size and methodology.

A critical requirement in the delivery lifecycle was the concept of a sprint, which divvied up each project into many short, fast cycles of articulation, programming, testing, and deployment. This drastically increased the quantity of code produced by each team and thereby put a greater emphasis on code quality and release processes. Testing and deployment thus began their rapid ascent into automation, which has since resulted in a gargantuan number of secrets being created and referenced within code. These secrets can be viewed as static or dynamic with respect to their use and longevity.

With the advent of container technology, the application team, referred to as DevOps, found newly empowered ways to build, test and release. The underlying need for hard resources faded away completely and each team now produced several copies of their software for all manner of consumption.

Containers gave new meaning to software lifecycle as many application components became fragmented with shortened lifespans. Containers would be summoned and discarded with such simplicity that application teams now had to think of their code merely as a (micro) service within a larger ecosystem. These applications would go from being stateful to stateless as services became context-aware only in the presence of secrets.

Containerization is gathering momentum, with Gartner reporting 60 percent of companies adopting it, up from 20 percent just 3 years ago. One can argue about whether Docker or Kubernetes is the more influential offering in this trend, but cloud providers are equally responsible for its adoption.

Regardless, the need for actively managing secrets is now front and center for every application team. The question is whether your application secrets are really a secret or simply a hard-to-reach set of variables. What is needed is a simple prescriptive plan for ensuring better application security for your team. It is no longer the job of DevOps but the collective responsibility of DevSecOps.

Building blocks of application security

Application and/or information security teams need more proactive prevention, while recognizing that reactive detection cannot be the main tool in the arsenal. Getting ahead of adversarial code isn’t trivial, but in practice it starts with a few simple steps. Secrets are the sentries to applications, and fortifying them requires a proactive approach, including:

1. Application inventory – Every information security leader should take it upon themselves to demand an audit of all applications within the enterprise. Armed with such a list, it is their responsibility to identify the domains which are critical for business and/or sensitive to the customer. This list is by no means static and should be re-evaluated periodically to keep pace with maturing security models and evolving threats. The list may comprise applications (and/or micro services) designed in-house or those leveraged externally from service providers.

Regardless, a matrix of all such applications and services needs to be audited for dependencies on code repositories, data storage, and cloud-augmented resources. Common externalities can be found at GitHub, GitLab, Amazon AWS, Google Cloud, Microsoft Azure, Digital Ocean, OpenStack, Docker Hub, etc. This is not a comprehensive list, so organizations should cautiously audit each application and service for its dependencies in-house as well as externally.

Upon discovery of the repositories housing the business-critical or customer-sensitive information, it is time to forge a plan for the security of the content residing in each. This acts as a manifesto for the enterprise, to which application teams must adhere. Established practices such as peer reviews and automation tools can ensure violations are mitigated in a timely manner, if not avoided entirely. Teams can appoint a Data Officer or Curator who is responsible for maintaining the standards and ensuring compliance.

2. Code and resource repository standards – At a bare minimum, applications must encrypt data at rest transparently and transmit it securely over the network or across processes. However, there are times when even computation on the data within a process needs to occur securely. These are usually privileged processes that act upon highly sensitive data and must do so using either homomorphic encryption or a secure enclave, after weighing the practicality of each approach.

The next best option is to tokenize all sensitive data so the encryption preserves the original format as per NIST publication 800-38G. Applications and services can continue to work with the tokenized content unless a privileged user or entity must ascertain the original content through an authorized request.
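
As a rough illustration of the idea (not of the NIST 800-38G format-preserving encryption itself), the sketch below implements vault-style tokenization in Python: the sensitive value is swapped for a random token, and only a privileged lookup can recover the original. The in-memory dictionary stands in for what would, in practice, be a hardened, access-controlled token vault.

    import secrets

    # Stand-in for a hardened token vault; in production this mapping would live
    # in a tightly access-controlled datastore, never in process memory.
    _vault = {}

    def tokenize(value):
        """Replace a sensitive value with a random token and remember the mapping."""
        token = "tok_" + secrets.token_hex(16)
        _vault[token] = value
        return token

    def detokenize(token):
        """Return the original value; only privileged, authorized callers should reach this."""
        return _vault[token]

    card_token = tokenize("4111 1111 1111 1111")   # test card number, not real data
    print(card_token)             # e.g. tok_9f1c...
    print(detokenize(card_token))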

Whether an application relies on encryption or tokenization, it needs to store, access, and assert the rights of users and other services. Hence, it all comes down to a core set of secrets that applications rely upon in order to function normally as per the rules set forth by its owners. When it comes to management of application secrets, several guidelines are available, ranging from the OWASP Top 10 to CSRF and ESCA.

Secrets were often used primarily to encrypt data at rest and in transit but are increasingly used for identity and access management. Secrets are littered across application delivery pipelines. They are found in the code or configurations directly as credentials themselves or as references to certificates or keys that are reused with suboptimal entropy to generate secrets.

Most often these secrets manifest themselves as environment variables that are passed to containers and/or virtual hosts. Securing the secrets – and, more importantly, providing the highest level of security for access to the secrets – becomes paramount to the application architecture.
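
Finding those littered secrets is a good first exercise for a DevSecOps team. The sketch below is a minimal Python secret scanner that walks a source tree and flags lines matching a few illustrative patterns; the patterns and the path are placeholders, and real teams typically rely on dedicated scanners with much larger rule sets.

    import os
    import re

    # Illustrative patterns only; dedicated scanners ship far larger rule sets.
    PATTERNS = {
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "private key header": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
        "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    }

    def scan_tree(root):
        """Walk a source tree and report lines that look like embedded secrets."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r", errors="ignore") as handle:
                        for lineno, line in enumerate(handle, start=1):
                            for label, pattern in PATTERNS.items():
                                if pattern.search(line):
                                    print(f"{path}:{lineno}: possible {label}")
                except OSError:
                    continue

    scan_tree("./my-service")   # placeholder path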

3. Centralize secrets with dynamic credentials – There is a multitude of services and products that claim to provide security for application secrets. A CISO should ask what makes a product or service secure. The answer comes down to a phrase – root of trust – which is now being uprooted by the concept of zero trust.

Almost all products and services offering secrets management are based on the former root of trust model, where the master key needs to be secured, which is not a trivial undertaking given the hybrid or complex nature of deployments and dependencies. DevOps or DevSecOps is eager to vault or conjure all secrets and summon them freely across containers, hosts, virtualized services etc. What many do not realize is that the processes running these secret repositories are quite vulnerable and leak a plethora of ancillary secrets.

Enterprises can no longer assume that teams are sufficiently mindful when it comes to application architecture, since there are so many options that merely check the security box so teams can stay on schedule or within budget. By allowing this to continue, enterprises have made human gatekeepers the critical bearers of information security and thereby increased their risk of exposure and leaks.

As NIST publication 800-207 comes to bear, many enterprises will realize the need for a true “Zero Trust” application architecture. This is available today for applications built on container orchestration platforms such as Google Kubernetes or OpenShift, as well as from leading cloud services rendered on Azure, Google and AWS. Authentication (AuthN) and authorization (AuthZ) have become intertwined and, with the advent of mutual authentication, they form the foundation for building zero trust within the application.

Fundamentally, a client is always requesting resources from a service (or server). Zero trust in this transaction translates to validated provenance of both the client and the server, so that claims on resources can be granted based on associated rights. Trustworthy JSON Web Tokens are increasingly becoming the standard in this paradigm of strong security with roots in cryptography. Servers will deny any resource claims backed by invalid or expired tokens and, similarly, clients need not accept unverifiable responses. Having centralized secrets management with strong access controls and a robust API is critical to application security.
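
As a concrete illustration of the server side of that exchange, here is a minimal sketch of token validation using the PyJWT library (an assumption, not a requirement of any particular platform); the key, audience, and scope names are placeholders. The point is simply that expired or unverifiable tokens are denied before any resource claim is considered.

    import jwt  # PyJWT

    def resource_allowed(token, public_key):
        """Grant a claim only if the token's signature, expiry, and audience all check out."""
        try:
            claims = jwt.decode(
                token,
                public_key,
                algorithms=["RS256"],
                audience="orders-service",   # placeholder audience
            )
        except jwt.ExpiredSignatureError:
            return False   # expired tokens are denied outright
        except jwt.InvalidTokenError:
            return False   # bad signature, wrong audience, malformed token, etc.
        # Further authorization decisions key off the validated claims.
        return "orders:read" in claims.get("scope", "")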

Secrets management: Summary

The age of automation is just beginning, and information security goes hand in hand with end-user privacy and business continuity. We should be forewarned by the steady stream of attacks that could often have been thwarted by simple practices established gradually over time at the core of the enterprise.

Application teams may find it easier to pilot a single service more securely in this manner rather than waiting for the information security leader or CISO to codify it across the enterprise. The need for a proven secrets management application or service is ever present. Pick a solution that is:

1. Flexible in its deployment model whether on-premises or natively in the cloud, or some combination (hybrid, multi-cloud etc.)
2. Secure in a way that goes beyond a simple key-value store that most secrets management providers ultimately provide
3. Capable of connecting to other applications and services through open standards such as OAuth, OpenID (SAML), LDAP, Trustworthy JWT and PKI
4. Proven to work for national agencies and regulatory bodies alike, since these entities have pivotal security considerations.


from Help Net Security https://ift.tt/3f6gySz

Suspicious business emails increase, imposters pretend to be executives

U.S. small businesses report an increase in suspicious business emails over the past year, a cyber survey by HSB shows, and employees are taking the bait as they fall for phishing schemes and transfer tens of thousands of dollars in company funds into fraudulent accounts.


“Whether it’s a phishing scheme, fraud or malware, most cyber-attacks start with an email,” said Timothy Zeilman, vice president for HSB, part of Munich Re. “Even companies that have information security training and fairly savvy employees fall victim to these deceptions.”

A rise in suspicious emails

Over half of business executives (58 percent) said suspicious emails had increased in the past year.

More than a third (37 percent) of the organizations received an email from someone pretending to be a senior manager or vendor requesting payments.

Almost half of employees receiving those emails (47 percent) responded by transferring company funds, resulting in losses most often in the $50,000 to $100,000 range (37 percent) and rarely less than $10,000 (only 11 percent).

Business email schemes could become an even bigger threat

The scam is convincing because cyber thieves in many cases gain access to business email accounts and assume the false identities of company managers.

With millions of Americans working remotely from home since the outbreak of the coronavirus, business email schemes could become an even bigger threat, Zeilman said.

“It’s more important than ever to pay attention to safe cybersecurity practices and make sure you verify requests for payments,” he said. “Don’t rely on email alone – call the person and confirm the payment is legitimate before releasing any funds.”


from Help Net Security https://ift.tt/2zJcOGE

Organizations look to build resiliency with hybrid and multi-cloud architectures

Hybrid and multi-cloud architectures have become the de-facto standard among organizations, with 53 percent embracing them as the most popular form of deployment.


Advantages of hybrid and multi-cloud architectures

Surveying over 250 worldwide business executives and IT professionals from a diverse group of technical backgrounds, Denodo’s cloud usage survey revealed that hybrid cloud configurations are the centre of all cloud deployments at 42 percent, followed by public (18 percent) and private clouds (17 percent).

The advantages of hybrid cloud and multi-cloud configurations according to respondents include the ability to diversify spend and skills, build resiliency, and cherry-pick features and capabilities depending on each cloud service provider’s particular strengths, all while avoiding the dreaded vendor lock-in.

The use of container technologies increased by 50 percent year-over-year, indicating a growing trend of using containers for scalability and portability to the cloud. DevOps professionals continue to look to containerization for production because it enables reproducibility and the ability to automate deployments.

About 80 percent of the respondents are leveraging some type of container deployment, with Docker being the most popular (46 percent), followed by Kubernetes (40 percent), which is gaining steam, as is evident from the consistent support of all the key cloud providers.

Most popular cloud service providers

A foundational metric of cloud adoption maturity: 78 percent of all respondents are running some kind of workload in the cloud. Over the past year, there has been a positive reinforcement of cloud adoption, with at least a 10 percent increase across beginner, intermediate, and advanced adopters.

About 90 percent of those embracing cloud are selecting AWS and Microsoft Azure as their service providers, demonstrating the continued dominance of these front-runners.

But users are not just lifting their on-premises applications and shifting them to either or both of these clouds; 35 percent said they would re-architect their applications for the best-fit cloud architecture.

Analytics and BI came out on top as the most popular cloud initiative, with two out of three (66 percent) participants claiming to use the cloud for big data analytics projects. AWS, Azure, and Google Cloud each has its own specific strengths, but analytics surfaced as the top use case across all three of them. This use case was followed closely by both logical data warehouse (43 percent) and data science (41 percent) in the cloud.


Data formats

When it comes to data formats, two thirds of the data being used is still in structured format (68 percent), while there is a vast pool of unstructured data that is growing in importance. Cloud object storage (47 percent) along with SaaS data (44 percent) are frequently used to maximize ease of computation and performance optimization.

Further, cloud marketplaces are growing at a phenomenal speed and becoming more popular. Half (50 percent) of those surveyed are leveraging cloud marketplaces, with utility/pay-as-you-go pricing being the most popular incentive (19 percent), followed by self-service capability/the ability to minimize IT dependency (13 percent). Avoiding a long-term commitment also played a role (6 percent).

“As data’s center of gravity shifts to the cloud, hybrid cloud and multi-cloud architectures are becoming the basis of data management, but the challenge of integrating data in the cloud has almost doubled (43 percent),” said Ravi Shankar, SVP and CMO of Denodo.

“Today, users are looking to simplify cloud data integration in a hybrid/multi-cloud environment without having to depend on heavy-duty data migration or replication, which may be why almost 50 percent of respondents said they are considering data virtualization as a key part of their cloud integration and migration strategy.”


from Help Net Security https://ift.tt/2yc7RWq