Friday, April 30, 2021

StorONE S1:Azure minimizes TCO of Azure storage

StorONE announced S1:Azure, which is available immediately. S1:Azure is a storage solution that minimizes the total cost of ownership (TCO) of Azure storage while also reducing the customer’s entire Azure investment, making the cloud more affordable.

“We are pleased to see the StorONE offering available to Azure customers. It provides organizations the ability to seamlessly move existing applications to Azure cloud and continue to benefit from the enterprise feature set they deploy on-premises, like snapshots, auto-tiering, and replication.

StorONE has done an outstanding job of testing and validation to bring a solution to Azure that enhances our platform without sacrificing the performance or reliability that customers need,” said Karl Rautenstrauch, Principal Program Manager for Azure Storage at Microsoft.

S1:Azure use cases

S1:Azure is a cloud enterprise storage solution that runs natively on Azure infrastructure. Customers can create cost-effective cloud DR and archive for on-premises workloads. They can also leverage its platform capabilities to create a hybrid cloud that handles sudden peak demands, returning only changed data back on-premises.

They can leverage maximum data protection to confidently migrate block (iSCSI) and file applications (NFS/SMB) to run permanently in Azure.

“I am delighted to partner with Microsoft to bring the advantages of StorONE technology to Azure users. StorONE is the only storage company that delivers an Enterprise Storage Platform with maximum data protection at a minimum TCO,” said Gal Naor, CEO and Co-Founder of StorONE.

Four keys to minimum Azure TCO

S1:Azure powers cloud applications with a minimum TCO across their entire Azure investment:

  • S1:Azure minimizes Azure TCO by delivering high performance from compute instances that are significantly less expensive and require less memory than competing solutions.
  • S1:Azure supports all Azure Managed Disks and automatically moves inactive data to lower-cost tiers.
  • Organizations can also quickly and safely power down unneeded S1:Azure instances when not in use.
  • S1:Azure is a platform, not a point solution. Instead of forcing customers to purchase and operate multiple cloud storage solutions, it consolidates databases and applications via iSCSI and unstructured data via NFS/SMB/S3.

Maximum data protection

S1:Azure lowers Azure TCO while providing maximum data protection. S1:Azure includes StorONE’s snapshot technology at no additional charge. Cloud Operators can apply unlimited, frequent, and space-efficient snapshots for backup, ransomware protection, and versioning of mission-critical data.

StorONE’s cascading replication continuously updates DR copies in other Azure regions or on-premises. Maximum data protection enables moving to the cloud with complete confidence.


from Help Net Security https://ift.tt/2Sm4Tsh

Avaya OneCloud CCaaS connects voice, digital and AI apps using a single visual design environment

Avaya introduced new capabilities for Avaya OneCloud CCaaS that deliver better outcomes for customers by connecting voice, digital and AI applications using a single visual design environment.

The graphical low code/no code conversation composer empowers domain experts to quickly integrate a wide range of AI-enabled insights and processes with advanced OneCloud CCaaS voice and digital capabilities. Contact center staff are now empowered to create more memorable customer experiences.

AI-based customer service continues to grow rapidly by identifying, predicting, and enabling better customer experiences faster than traditional approaches.

Gartner predicts that by 2023, 40 percent of enterprise applications will have embedded conversational AI, up from less than 5 percent today. These new AI-based benefits enable Avaya to deliver on the promise and potential of OneCloud CCaaS voice-based automation, in particular.

Key benefits of AI workflow:

  • Realize the power of voice-based automation and intelligent interactions using domain-led designs that are effortlessly composed
  • Compose and modify applications easily for hybrid cloud deployments using a single visual user interface
  • Leverage pluggable, pre-built AI, or bring or create your own, with multilingual Virtual Agent, Chatbot, and Agent Assist capabilities and OOTB integrations including Google Dialogflow, Microsoft LUIS, IBM Watson and the Alexa Skills Kit
  • Use built-in analytics and insights to better support decisions on what customers want and need
  • Take advantage of low code/no code features for flexible, agile integrations
  • Leverage 20+ languages including English, German, Spanish, Japanese, Chinese and more, while also supporting language-independent machine learning models

According to Gartner Peer Insights, global organizations that have implemented Avaya OneCloud CCaaS to improve customer experience have touted its functionality and performance, as well as the future vision for the solution.

Users have called Avaya OneCloud CCaaS a “hugely capable solution providing group-wide benefits for a digital business,” and a “simple and easy to use cloud solution.”

“Leveraging the power of AI, machine learning and a multi-cloud platform, Avaya is helping customers move beyond the traditional contact center to create composable customer experience centers that drive revenue and build real brand advocacy,” said Anthony Bartolo, Executive Vice President and Chief Product Officer, Avaya.

“Avaya’s AI-powered workflow capability enables users to easily compose and customize applications, providing full integration with data repositories, ensuring continual improvement of underlying machine learning algorithms within Avaya’s multi-cloud ecosystem across the Avaya OneCloud portfolio.

“We’re making it easier than ever for OneCloud CCaaS users to synchronize resources across the entire organization and deliver the right knowledge at the right time for the optimal outcomes.”

“Businesses are rapidly becoming composable organizations,” said Zeus Kerravala, Founder and Principal Analyst, ZK Research.

“Digital transformation, COVID-19 and other trends have taught us a valuable lesson: business agility is everything, particularly in the contact center, as customer experience is now the top brand differentiator.

“The new AI workflow capability for Avaya OneCloud CCaaS enables companies to realize the benefits of AI in a number of different ways. The CCaaS solution has a number of pre-built AI features but then customers can use the low code/no code features to build custom capabilities.”


from Help Net Security https://ift.tt/3gOZaFq

A1 Digital partners with Klarrio to provide big data and streaming solutions on EU cloud infrastructure

Klarrio is now offering its customers the opportunity to use EU-hosted infrastructure for their cloud needs by selecting any of Exoscale’s data center locations.

Combining Klarrio’s system integration expertise with Exoscale cloud infrastructure while adhering to initiatives such as Gaia-X will provide customers with best-of-breed technology and solutions.

“We chose to partner with Klarrio because of their outstanding expertise and experience in cloud native big data solutions.

“Jointly we are committed to providing our customers with cutting-edge, Gaia-X compliant cloud infrastructure and data services,” explains Mathias Nöbauer, Director Cloud at A1 Digital and CEO of Exoscale.

“Customers are requesting more flexibility and control over where their data resides, specifically within the EU,” explains Kurt Jonckheer, CEO of Klarrio.

“By partnering with A1 Digital and using its cloud platform Exoscale, we have found an EU-based partner that is able to offer the cloud infrastructure and services that our customers have come to expect.”

Klarrio strongly believes that data loses value over time. In a world where everything is connected, real-time data analysis is becoming more of a requirement than a luxury.

Evolutions such as 5G connectivity and autonomous driving will pave the way for new low-latency use-cases and grow the need for data stream processing and cloud services.

The company assists its customers with the implementation of big data and advanced analytics solutions, leveraging primarily cloud-native open source software which can be deployed on any cloud infrastructure.

This enables customers to leverage state-of-the-art technology and remain in control over where their data resides, while avoiding vendor lock-in, regardless of the vertical industry they are in.

Exoscale, an A1 Digital product, offers cloud services focusing on simplicity, scalability, and security for SaaS businesses and web applications.

With a simple and intuitive web administration interface, coupled with fixed pricing, Exoscale makes complex infrastructure concepts easy to implement.

Exoscale focuses on fast and flexible self-service solutions for customers. At the same time, trustworthy and reliable infrastructure components ensure maximum scalability, reliability, and performance.

Based in Lausanne, Switzerland, and with data centers throughout Switzerland as well as in Vienna, Frankfurt, Munich, and Sofia, Exoscale benefits from Swiss and European data protection regulations and therefore complies with all GDPR guidelines.


from Help Net Security https://ift.tt/2PEpyGX

IBM and NeuVector extend container security partnership

NeuVector announced that its Kubernetes-native, end-to-end container security solution is now available to IBM Cloud customers through the IBM Cloud catalog.

NeuVector is also announcing platform integration with IBM Security QRadar, IBM’s security information and event management solution.

The new integrations extend NeuVector’s collaboration with IBM to provide container security capabilities for IBM Cloud clients. NeuVector has already helped IBM Cloud clients secure their container environments through IBM Cloud Kubernetes Service (IKS).

NeuVector similarly provides its full lifecycle security solution to enterprises through Red Hat Marketplace, an open cloud marketplace that makes it easier to discover and access certified software for container-based environments across the hybrid cloud.

Now, with NeuVector available through the IBM Cloud catalog, IBM Cloud clients can secure their container architectures running on IBM Cloud across the entire container lifecycle – from pipeline to deployment.

Additionally, IBM customers using QRadar can now leverage NeuVector as part of their QRadar security intelligence deployments. By installing the NeuVector DSM application from the IBM Security App Exchange, QRadar users are able to easily ingest and analyze container security insights as part of a broader security analytics program.

This can allow users to detect and connect related threat activity, and efficiently triage and respond to these events.

NeuVector enables enterprises to secure container and Kubernetes environments throughout the full application lifecycle. The solution delivers defense-in-depth capabilities to defeat even zero-day attacks and threats of unknown origin.

Through behavioral learning, Security-as-Code and continually-added capabilities like compliance templates and serverless security, NeuVector identifies vulnerabilities and abnormal behavior to neutralize all threats while automating security throughout the CI/CD pipeline and at run-time.

“We’re proud to bring NeuVector into the IBM Cloud catalog, and to offer IBM Cloud customers a way to secure their container infrastructures – from development all the way through production,” said Fei Huang, Chief Strategy Officer, NeuVector.

“We’re also excited to be part of the QRadar ecosystem, and to offer enterprises using QRadar our unique and robust capabilities for automated threat and vulnerability mitigation.”

“As the pace of cloud migration continues to pick up, delivering the industry’s most secure cloud capabilities is critical in helping our clients modernize,” said Aki Duvvur, vice president, IBM Cloud.

“By working with companies like NeuVector, we are further helping our clients take advantage of the flexibility and speed of cloud, while ensuring their critical data remains secure.”

IBM Cloud customers can find NeuVector in the Security section of the IBM Cloud catalog, and then initiate a free trial or paid subscription.

Registry authentication and license generation are all automated as part of this process. Paid subscriptions are billed only for the actual usage of the NeuVector platform and appear on the IBM Cloud customer’s statement.

NeuVector is part of IBM’s partner ecosystem, an initiative to support partners of all types – whether they build on, service, or resell IBM technologies and platforms – to help clients manage and modernize workloads with Red Hat OpenShift for any cloud environment, including the IBM Cloud.

Red Hat OpenShift is the industry’s leading enterprise Kubernetes platform. The IBM Cloud is the industry’s most secure and open public cloud for business. With its security leadership, enterprise-grade capabilities and support for open source technologies, the IBM Cloud is designed to differentiate and extend on hybrid cloud capabilities for enterprise workloads.


from Help Net Security https://ift.tt/3eCVWSw

Brook Lovatt joins Cloudentity as CPO

Cloudentity announced its new CPO, Brook Lovatt, who joins the team to drive product innovation in 2021 and beyond.

With over 20 years of experience specific to Identity and Access Management (IAM) as an executive at IBM and several security-focused boutique consulting firms, Brook Lovatt is an industry expert who will play a key role in helping Cloudentity to strategically drive its product roadmap forward.

Cloudentity is preparing for a year of continued growth with this executive hire, along with its recent successes in the area of Open Banking and new partnerships with leading enterprises Okta, Axway and Simeio.

Cloudentity also recently released its Dynamic Authorization Open Banking Sandbox, which is unique to the industry and provides a reference point and fast start for financial organizations that need to deploy API-driven services for Open Banking.

The Open Banking Sandbox uses Dynamic Authorization based on contextual information about who, what, where, when and why in order to govern authorization for access to financial apps and services as well as to track and govern users’ consent to share private data between these services.

“Brook is an ideal fit as our Chief Product Officer and has a proven track record of large-scale SaaS product delivery, which is the direction Cloudentity continues to execute against as we reach our next phase of growth,” said Jasen Meece, CEO of Cloudentity.

“By investing in building out the C-suite, Cloudentity continues our exciting stage of product innovation and growth.”

Cloudentity offers the only OAuth 2.1 implementation on the market that is capable of providing strong customer authentication (SCA) for Open Banking-compliant transactional consent workflows. Cloudentity’s products also provide API security with its Dynamic Authorization and governance for other Open Data applications.

In addition to this new executive appointment, Cloudentity recently joined forces with a leader in the IAM space, Simeio, for a partnership that will provide dynamic authorization and services to joint customers.

“We partnered with Cloudentity to provide secure dynamic authorization for Open Banking applications because it makes authorization governance flexible and scalable like never before,” said Asif Savvas, Senior Vice President of Products at Simeio.

“As an industry, we are better when we all work together to protect identities with steadfast user authentication that enterprises can trust so our customers can focus on driving innovation.”


from Help Net Security https://ift.tt/3e8njVS

Why Fake Travel Sites Are Fooling More People

After a year without vacations, some people are a little rusty at booking travel, and scammers are taking full advantage. The Better Business Bureau has reported a spike in the number of people being scammed by fully functional, legit-looking travel booking sites that are actually fake—honey traps looking to steal your money and personal information. Here’s what to look for so you can avoid them.

How the scam works

While doing an online search for cheap flights or a hotel, you come across an unusually cheap deal. You book the flight or hotel using a credit card, either on the site directly or by calling a customer support line, and receive a confirmation email that doesn’t actually include the ticket or reservation you just booked. In some versions of this scam, you’ll then receive a call from a company “representative” who will try to charge you additional fees to finalize the booking. Later, when you contact the airline or hotel, they’ll have no record of the transaction.

Fake travel sites are increasingly sophisticated

Fake booking sites have been around for years, but they’ve become slicker and more functional, to the point where they don’t look all that different from legitimate low-budget travel sites with second-rate design. They also have functional search features, allowing you to pick a city, set departure and arrival dates, and choose from what look to be bargain-basement deals via a functional calendar view. The search fields often even have auto-complete capabilities, suggesting possible destinations as you type.

However, despite the polished veneer, these sites tend to be hastily built and poorly maintained, so you can still spot a fake if you know what to look for. One confirmed scam site I checked out appeared impressively legit at first, but a deeper investigation revealed multiple red flags:

  • Some destinations are listed as regions or states, not cities (“Alabama”), while the occasional airline company name (“Westjet”) is misspelled.
  • There are logos for VISA, Discover, and Mastercard, but the images are low-res, outdated designs.
  • A “DMCA” security shield is prominently displayed—but DMCA is related to copyright claims, not internet security, which makes no sense on the landing page of a travel-booking site.
  • There’s a functional FAQ page, but the copy is full of typos and doesn’t quite make sense—as if it was written to be seen, not actually read.

How to avoid fake booking sites

The BBB recommends you research any unfamiliar site before entering your personal information to make sure it’s legitimate (you can start by checking BBB.org for reviews and feedback from previous customers). Double-check URLs before entering credit card information (you should see “https://” with a padlock icon next to it in your address bar), and be wary of cheap-looking sites in general.

If you’ve been a victim of an airline ticket or other travel scam, report your experience at BBB.org/ScamTracker.


from Lifehacker https://ift.tt/3u8V3Yr

How Do I Install Windows 10 Without All Those Extra Updates?

Get free tech support from Lifehacker’s Senior Technology Editor

Do you have a tech question keeping you up at night? We’d love to answer it in a future Tech 911 column! Describe your problem in an email to david.murphy@lifehacker.com, and make sure you put “Tech 911" in the subject line.

Every Windows PC would benefit from a spring cleaning. Hell, it doesn’t even have to be spring. If you can’t remember the last time you saved all your critical data, wiped your drive, and reinstalled a fresh copy of the operating system, you’re due.

While I can’t promise massive performance improvements—that’ll vary with your particular hardware—I can say that I’ve always found Windows 10 snappier after a brand-new installation. As a bonus, you’ll also regain a ton of hard drive space you can proceed to fill with more apps, games, and data. And Windows Updates, too.

In this week’s Tech 911 Q&A, Lifehacker reader Susan writes:

Hi David, I have a Windows 10 question. I was going to do a factory reset to speed up and clean up my computer. I have done it in the past but can’t remember whether the system stays updated to the latest version or not. Mine is Version 20H2 (OS Build 19042.928). Does it keep this version or drop back to an older version and redownload all the updates?

How to install an (almost updated) version of Windows 10

I almost replied to your question immediately, Susan, but I wanted to double-check my answer—and I’m glad I did. I had in mind that running a cloud-based installation of Windows 10 (or grabbing the latest Windows 10 installation .ISO using the Media Creation Tool) would give you a fresh version of the operating system with all the latest updates already integrated.

Not so much.

To test this, I fired up a (virtualized) copy of Windows 10 I hadn’t touched in a few months, ran Windows Update, and noted which updates it was aiming to install.

I then reset my version of Windows 10, making sure to select “Cloud download” when prompted in the hopes that would get me the latest and greatest version of the OS.

But when Windows 10 finished reinstalling, I ran Windows Update, and lo and behold: the same batch of updates was waiting for me.

Perhaps the “Reset” method isn’t the way to go. I then tried using the aforementioned Media Creation Tool to download the latest Windows installation .ISO from Microsoft directly. I wiped the virtual hard drive, reinstalled Windows 10 using that .ISO as the source, ran through the initial setup process, and fired up Windows Update as soon as the operating system loaded. The results were identical.

So it seems you can get closer to the most up-to-date version of Windows 10 with either method, but you won’t be able to get a perfect installed copy of the operating system that won’t require an update the second you start using it.

That said, there are programs you should be able to use to integrate the latest Windows 10 updates directly into a Windows 10 installation .ISO. I haven’t tried them in some time, but they exist. Using the NTLite app, you mount the .ISO on your system like you would any disc image, load it up in the app, pick the version of Windows 10 you’ll be installing, locate and integrate any updates (again, all through the app), and create a brand-new .ISO once everything has downloaded and installed directly from Microsoft. You can even use the tool to strip versions of Windows 10 out of the .ISO that you’ll never install (like Professional, for example). You’ll then use this custom .ISO to install Windows 10 as normal.

The only reason I won’t walk you through this process directly is because I don’t think it saves you much time. The updates have to download one way or the other, after all. And the time you spend messing with creating a custom .ISO, you could use to... install Windows 10, run the updates, get dinner, and come back to a fresh-and-ready PC. Unless you’re planning to install Windows 10 on multiple systems at once (or many times on the same system), a custom .ISO feels unnecessary for a one-off reset.

However, the option is there if you want to take advantage of it! Otherwise, just reset Windows or manually reinstall it as you normally would. You won’t save time or bandwidth trying to integrate updates into the installation media itself—but definitely use a cloud download or the Media Creation Tool so you have to deal with as few updates as possible.


Do you have a tech question keeping you up at night? Tired of troubleshooting your Windows or Mac? Looking for advice on apps, browser extensions, or utilities to accomplish a particular task? Let us know! Tell us in the comments below or email david.murphy@lifehacker.com.


from Lifehacker https://ift.tt/3u6yWSD

Serious MacOS Vulnerability Patched

Apple just patched a MacOS vulnerability that bypassed malware checks.

The flaw is akin to a front entrance that’s barred and bolted effectively, but with a cat door at the bottom that you can easily toss a bomb through. Apple mistakenly assumed that applications will always have certain specific attributes. Security researcher Cedric Owens discovered that if he made an application that was really just a script—code that tells another program what to do rather than doing it itself—and didn’t include a standard application metadata file called “Info.plist,” he could silently run the app on any Mac. The operating system wouldn’t even give its most basic prompt: “This is an application downloaded from the Internet. Are you sure you want to open it?”
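
To see what that bundle shape looks like in practice, here is a minimal defensive sketch, not Apple’s own logic, that walks a directory tree and flags .app bundles whose Contents folder lacks an Info.plist. The scan root is a placeholder.

    import os

    def find_suspicious_bundles(root):
        """Yield .app bundles that are missing Contents/Info.plist."""
        for dirpath, dirnames, filenames in os.walk(root):
            if dirpath.endswith(".app"):
                contents = os.path.join(dirpath, "Contents")
                plist = os.path.join(contents, "Info.plist")
                # A bundle with a Contents folder but no Info.plist matches
                # the shape of app the bypass abused.
                if os.path.isdir(contents) and not os.path.isfile(plist):
                    yield dirpath

    for bundle in find_suspicious_bundles("/Applications"):
        print("missing Info.plist:", bundle)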


from Schneier on Security https://ift.tt/32ZTyQD

Shedding light on the threat posed by shadow admins

Few organizations would purposefully hand a huge responsibility to a junior staff member before letting them fly solo on their own personal projects, but that’s effectively what happens inside too many corporate networks: organizations delegate specific administrative access to user accounts so they can do a particular privileged task, and they promptly forget about it. These “shadow admin” accounts often get ignored by everyone except attackers and threat actors, for whom they are valuable targets.

Shadow admins pose a threat to organizations because these accounts have privileged access to perform limited administrative functions on Active Directory objects. AD administrators can delegate privileges to reset passwords, create and delete accounts, or perform other tasks.

The danger is that these can slip off the radar, meaning they often operate without the security team’s full scrutiny. If threat actors take control of one of these accounts, they can extend their attack in many ways, perhaps seeking opportunities for lateral movement or privilege escalation whilst staying incognito.

Typically, there is no straightforward way of finding these delegated administrator accounts except to conduct an exhaustive audit, meaning they can pose a threat that is often not fully quantified. If one can’t see a problem and gauge its extent, how can one prepare for it?

Into the darkness

Threat actors seek shadow admin accounts because of their privilege and the stealthiness they can bestow upon attackers. These accounts are not part of a group of privileged users, meaning their activities can go unnoticed. If an account is part of an Active Directory (AD) group, AD admins can monitor it, and unusual behaviour is therefore relatively straightforward to pinpoint.

However, shadow admins are not members of a group since they gain a particular privilege by a direct assignment. If a threat actor seizes control of one of these accounts, they immediately have a degree of privileged access. This access allows them to advance their attack subtly and craftily seek further privileges and permissions while escaping defender scrutiny.

Leaving shadow admin accounts in an organization’s AD is a considerable risk, best compared to handing over the keys to one’s kingdom for a particular task and then forgetting to track who has the keys and when to ask for them back. It pays to know who exactly has privileged access, which is where AD admin groups help.

Conversely, the presence of shadow admin accounts could be a sign that an attack is underway. If a threat actor can grant themselves permissions to create these accounts and then assign them with higher privileges, they can extend their attack in many directions.

What is a shadow admin?

Shadow admins gain privileges through permission assigned using an access control list (ACL) applied to an object located on the AD. These objects can be files, events, processes, or anything else which has a security descriptor. Crucially, shadow admins are accounts that are not members of a privileged AD group.

AD is composed of a tree of objects that define the network and all its accounts, assets, groups, systems, GPOs, and more. Each AD object has its own list of permissions, called ACEs (Access Control Entries), that make up the ACL; an object’s ACL defines who has permissions on that specific object and what actions they can perform on it. There are general permissions like “Full Control”, and individual permissions like “Write”, “Delete”, “Read” and even some “Extended Rights” such as “User-Force-Change-Password”.
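
For a concrete picture of what auditing those ACEs involves, here is a minimal sketch, assuming the Python ldap3 and impacket libraries and placeholder connection details; purpose-built tools such as BloodHound do this analysis far more thoroughly.

    from ldap3 import Server, Connection, ALL, SUBTREE
    from ldap3.protocol.microsoft import security_descriptor_control
    from impacket.ldap.ldaptypes import SR_SECURITY_DESCRIPTOR

    # Simplified access-mask bits for "Full Control" and "Write DACL"
    FULL_CONTROL = 0x000F01FF
    WRITE_DACL = 0x00040000

    server = Server("dc01.example.com", get_info=ALL)        # hypothetical DC
    conn = Connection(server, user="auditor@example.com",
                      password="...", auto_bind=True)

    # Ask the DC to return only the DACL part of each security descriptor
    controls = security_descriptor_control(sdflags=0x04)
    conn.search("dc=example,dc=com", "(objectClass=user)",
                search_scope=SUBTREE,
                attributes=["distinguishedName", "nTSecurityDescriptor"],
                controls=controls)

    for entry in conn.entries:
        sd = SR_SECURITY_DESCRIPTOR(data=entry["nTSecurityDescriptor"].raw_values[0])
        for ace in sd["Dacl"].aces:
            mask = ace["Ace"]["Mask"]["Mask"]
            # A powerful right granted directly to an individual trustee SID
            # (rather than via a privileged group) is the shadow-admin pattern;
            # resolving SIDs and filtering out known admin groups is omitted here.
            if mask & (FULL_CONTROL | WRITE_DACL):
                print(entry.distinguishedName, "<-",
                      ace["Ace"]["Sid"].formatCanonical())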

There are four main categories of privileged accounts:

  • Domain privileged accounts such as a domain admin user or DHCP (Dynamic Host Configuration Protocol) admin
  • Local privileged accounts such as local admins on endpoints and servers or “root” on Unix and Linux systems
  • Application and services accounts such as DB or SharePoint admins
  • Privileged business accounts such as finance users or the corporate social media account.

How to find shadow admins

Unfortunately, the nature of shadow admin accounts means that finding them is often easier said than done. The best cure, in this case, is prevention, which is fine if one is working with a newly installed AD, but tricky if the AD has been around for a while and carries the scars, knots, and gnarls accumulated over its lifetime – not to mention the increased havoc seen with mergers and acquisitions.

The native way to identify shadow admin accounts is to conduct an exhaustive audit of all ACL entries within AD. This process takes time and is inefficient, because its manual nature means there is an inevitable chance of overlooking these dangerous accounts.

The security community is now seeing the advent of innovations that can identify shadow admin accounts at the AD controller level as excess privilege exposures. Organizations that use these new tools can gain early and valuable insights that improve visibility and detect exposed API keys, credentials, and secrets, revealing shadow admins, access to domain controllers, and other risks.

Turning the tables with deception

Forward-looking organizations could also take advantage of the fact that shadow admins are attractive to adversaries by using fake accounts to detect and redirect them to decoys. Deception and concealment technologies can hide and deny access to accounts with privileges, such as domain or shadow admin accounts.

Defenders can then put decoy accounts in their place, which will trigger an alert if threat actors access them or even misdirect them away from production assets and into a decoy environment.

If the organization deploys decoys at other stages of the kill chain, they can snare attackers in a hall of mirrors to limit their damage. Meanwhile, the defenders can study their techniques and amass yet more information about system vulnerabilities or novel exploits the adversaries used. If threat actors access a decoy, security teams and systems can closely analyze their behaviour, amassing valuable threat intelligence, which helps fend off future attacks.

It’s a fair bet that mature organizations have shadow admins lurking in their networks. Perhaps it’s time to find them and even make them work to one’s advantage by using attack path visibility tools along with deception and concealment technologies.


from Help Net Security https://ift.tt/3aTaf4B

Thursday, April 29, 2021

APIs in the insurance industry: Accessing a growing world of data

The insurance industry is vast and varied. It can be found in nearly every country in the world, with the earliest references dating back as early as 1750 BC. Modern insurance, however, started around 1686 with Lloyd’s of London and with the U.S. founding its first fire insurance company in 1732.

Much of the industry is heavily regulated, has multiple markets (U.S., London, Switzerland, Bermuda, Singapore, and so forth), and covers a plethora of risk types. Spanning motor, health, pet, oil rig, terrorism, catastrophe, liability, and bespoke covers, these segments of insurance vary enormously, and they are proceeding at different speeds on their digital transformation journeys.

In many regions, personal lines insurance is already adopting features like self-service platforms and paperless policies, working with aggregators and automated quotes, not to mention automating claim handling with same-day payments.

Innovators have created new products applying Usage Based Insurance, working with manufacturers to have insurance built-in, and offering dynamic insurance that changes according to an individual’s behavior and needs. The customer experience has leapt forwards in many ways, with no shortage of ideas of how it will further evolve.

However, commercial insurance is much more complex and faces some challenges. For example, it remains document-centric, and becoming data-orientated is a considerable issue that is holding the industry back from progressing beyond the constraints of today. Yet there is room for optimism here: recent changes in technology are starting to move the data ingestion issue forward through artificial intelligence and machine learning, and alternative sources of data are coming online, like IoT, drones, augmented reality and wearables.

With usage of this tech widely forecast to become pervasive in the next five years, it will further expand the data available via API-connected devices. It is this data, critical to insurance, that is driving the push for more API usage.

From a business perspective, APIs are powering omnichannel capabilities that are increasingly important to ensure policyholders, agents, brokers and partners can consume data and insights during key processes in a way that suits them best. Beyond core operational processes, the industry is quickly moving towards newer capabilities that include different insurance models and a risk prevention focus.

These newer innovations only work by getting data that is distributed around the globe to the processor for decision-making and action. The application for data and APIs for the insurance industry is endless, together with the promise to decrease the harm, the inconvenience, and the inefficiency that risks bring when they materialize.

As the number of connected devices rapidly increase, APIs will be the default communications channel. Insurance will help drive both production and consumption of device endpoints as we strive to better understand and interact with our environment via digital means. The devices themselves will also need insurance for protection from physical damage, cyber risks, maintenance liability, and other relevant perils.

As the world continues to fill with data, and until AI takes another jump forward, we are likely to shift our focus from processing large volumes of unstructured data towards these IoT devices, which will become authoritative, trustworthy, and respected sources of data in insurance.

By enabling data to flow across the value chain more efficiently, APIs make individual processes paperless and greatly reduce frictional cost. Security will always remain an essential part of APIs, from development throughout the lifecycle; with more distributed compute and edge devices, and more ventures appearing without the necessary organizational maturity, security will become more challenging.

While InsurTechs are starting to significantly impact the industry, the regulation, capital requirements, and ecosystem dependencies are slowing the rate of disruption as the barrier to entry is often too high to compete with insurers. While a number have managed to enter the market (e.g., Lemonade, Hippo, Ping An), the majority are tech and data providers that are supporting existing insurers, brokers, and agents with new services to enhance the ecosystem, and nearly every single one is data orientated and API enabled. So, with the explosion of data and API endpoints, where do we get started?

The purpose is not to produce APIs; they are not the target. APIs are the construct to enable data exchange between parties and across ecosystems. Done well, they can also be channel enablers, a key asset to your business, and a representation of your technical brand. This isn’t about writing some code to develop or consume an API. This is about developing a scalable capability within your organization to take advantage of the data-driven API explosion that is coming.

  • Strategy: Be purposeful and explicit about what you want to achieve, what outcomes your business needs, your resources, tech preferences, and the capabilities you need. Plan to avoid common mistakes and minimize the rework. Plan the benefits and know your governance.
  • Standards: Designing and building APIs in line with a functional and technical foundational core standard is a critical primary step. This is important for your customers (e.g., a single authentication model, a single use of HTTP options, and consistent paging and searching; see the sketch after this list) and for internal reusability.
  • Operational platform: Deployed APIs should be managed so that their execution is controlled, secure, and operable for version-controlled change, audit, and insights. An API gateway is a good starting point, with full API management platforms available as requirements become more advanced.
  • Adoption: Outcomes can only be achieved with good adoption, dependent on beneficial business features, discoverability, and adaptability. An adoption stream and a passionate focus on the developer experience (DX) is important to build a community and grow your ecosystem.
  • Skills: APIs should not just be a development asset. For businesses that want to be leaders in their space, APIs should be managed as business assets. Providing the proper training, governance, role enablement and culture to the cross-technology, data, and business teams produces an effective capability.
  • Scaling execution: Standards, tooling, automation, and repeatable processes are all good ingredients. A C4E approach has proven to be highly effective for larger companies and has tangible value longer-term where a quantity and longevity of work is expected.
  • Stakeholders & partners: Business product owners, platform owners, data governance stewards, security architects, and ecosystem evangelists, to mention a few. These individuals are critical in realizing the benefits that APIs offer, in an effective and secure manner.
  • Executive sponsorship: API insurance products will often require investment and focus over a multi-year program. Ensuring that the strategy is held to account and the broader teams stay the course is essential to realizing the benefits.
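
As a toy illustration of the Standards point above, here is one way a single, reusable paging contract might look. The framework choice (FastAPI) and every field name here are illustrative assumptions, not a prescribed standard.

    from fastapi import FastAPI, Query
    from pydantic import BaseModel

    app = FastAPI()

    class Page(BaseModel):
        items: list
        offset: int
        limit: int
        total: int

    POLICIES = [{"id": i, "premium": 100 + i} for i in range(250)]  # toy data

    @app.get("/policies", response_model=Page)
    def list_policies(offset: int = Query(0, ge=0),
                      limit: int = Query(50, ge=1, le=200)):
        # Every collection endpoint pages the same way, so API consumers
        # (and internal reusers) learn the contract once.
        return Page(items=POLICIES[offset:offset + limit],
                    offset=offset, limit=limit, total=len(POLICIES))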

The insurance industry is changing, though not via a single disruptor or even at a pace that observers are calling for. However, it continues to move forwards with more acceleration than in recent decades, and with billions of endpoints to interact with, there’s significant opportunity ahead.


from Help Net Security https://ift.tt/3aMvkNW

What is threat modeling and why should you care?

While there is no single exact industry-wide definition, threat modeling can be summarized as a practice for proactively analyzing the cyber security posture of a system or system of systems. Threat modeling can be conducted both in the design/development phases and for live system environments.

It is often referred to as Designing for Security. In short, threat modeling answers questions such as “Where am I most vulnerable to attacks?”, “What are the key risks?”, and “What should I do to reduce these risks?”.

More specifically, threat modeling identifies cybersecurity threats and vulnerabilities and provides insights into the security posture, and what controls or defenses should be in place given the nature of the system, the high-value assets to be protected, the potential attackers’ profiles, the potential attack vectors, and the potential attack paths to the high-value assets.

Threat modeling can consist of the following steps:

1. Create a representation of the environment to be analyzed
2. Identify the high value assets, the threat actors, and articulate risk tolerance
3. Analyze the system environment from potential attackers’ perspective:

  • How can attackers reach and compromise my high-value assets? I.e., what are the possible attack paths by which attackers can reach and compromise my high-value assets?
  • Which of these paths are easier and harder for attackers?
  • What is my cyber posture, i.e., how hard is it for attackers to reach and compromise my high-value assets?

If the security is too weak/risks are too high:

4. Identify potential measures to improve security to acceptable/target levels
5. Identify the potential measures that should be implemented — the most efficient ways for your organization to reach acceptable/target risk levels.
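
As a toy illustration of step 3, the environment can be represented as a directed graph and attack paths enumerated and scored. The nodes, edges, and effort weights below are invented for illustration, and the networkx library is assumed; real threat modeling tools use far richer models.

    import networkx as nx

    env = nx.DiGraph()
    # Edges mean "an attacker can move from A to B"; weights model attack effort.
    env.add_edge("internet", "web_server", effort=1)     # exposed service
    env.add_edge("web_server", "app_server", effort=3)   # lateral movement
    env.add_edge("internet", "vpn", effort=5)            # hardened entry point
    env.add_edge("vpn", "app_server", effort=2)
    env.add_edge("app_server", "customer_db", effort=2)  # high-value asset

    # Enumerate all attack paths to the high-value asset, cheapest first;
    # the cheapest paths are where security measures pay off most.
    paths = nx.all_simple_paths(env, "internet", "customer_db")
    scored = sorted(
        (sum(env[a][b]["effort"] for a, b in zip(p, p[1:])), p) for p in paths
    )
    for effort, path in scored:
        print(f"effort={effort}: {' -> '.join(path)}")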

Why threat model: The business values

Threat modeling is a very effective way to make informed decisions when managing and improving your cybersecurity posture. It can be argued that threat modeling, when done well, is the single most effective way of managing and improving your cyber risk posture, as it enables you to identify and quantify risks proactively and holistically and to steer your security measures to where they create the best value.

Identify and manage vulnerabilities and risks before they are implemented and exploited
  • Before implementation: Threat modeling enables companies to “shift left” and identify and mitigate security risks already in the planning/design/development phases, which is often 10x, 100x, or even more cost-effective than fixing them in the production phase.
  • Before exploited: As rational and effective cyber defenders, we need both proactive and reactive cyber capabilities. Strengthening security proactively, before attacks happen, has clear advantages. However, it also comes with a cost. Effective threat modeling enables you to make risk-based decisions about which measures to implement proactively.
Prioritize security resources to where they create the best value
  • One of the key challenges in managing cybersecurity is determining how to prioritize and allocate scarce resources to manage risks with the best effect per dollar spent. The threat modeling process presented in the first section of this text determines exactly this. When done effectively, it takes into consideration all the key inputs guiding rational decision making.

There are several additional benefits to threat modeling. One is that all analyses are conducted on a model representation of your environment, which creates significant advantages: the analyses are non-intrusive, and analysts can test scenarios before implementation.

Another benefit is that threat models create a common ground for communication in your organization and increase cybersecurity awareness. To keep this text concise, we primarily highlight the values above; there are several other excellent descriptions of the value of threat modeling, and we encourage you to explore them.

Who does threat modeling and when?

On the question “Who should threat model?” the Threat Modeling Manifesto says “You. Everyone. Anyone who is concerned about the privacy, safety, and security of their system.” While we do agree with this principle in the long term, we want to nuance the view and highlight the need for automation.

Threat modeling in development

This is the “base case” for threat modeling. Threat modeling is typically conducted from the design phase onward in the development process. It is rational and common to do it more thoroughly for high-criticality systems and less thoroughly for low-criticality systems. Threat modeling work is typically done by a combination of development/DevOps teams and the security organization.

More mature organizations typically have more of the work done by the development/DevOps teams, while less mature organizations rely more on support from the security organization.

Threat modeling of live environments

Many organizations also do threat modeling on their live environments, especially for high-criticality systems. As with threat modeling in development, organizations have organized the work in different ways. Here, the work is typically done by a combination of operations/DevOps teams and the security organization.

Naturally, it is advantageous when threat models fit together and evolve over time from development through operations and DevOps cycles.


from Help Net Security https://ift.tt/3t0IA7P

Researchers develop program that helps assess encryption systems’ vulnerabilities

Anastasia Malashina, a doctoral student at HSE University, has proposed a new method to assess vulnerabilities in encryption systems, based on a brute-force search over the possible options for deciphering each symbol. The algorithm has also been implemented in a program, which can be used to find vulnerabilities in ciphers.

Most online messages are sent in encrypted form, since open communication channels are not protected from data interception. Messengers, cloud services, banking systems: all of these need to be protected from data breaches. Data encryption is one of the main problems cryptographers work on.

The problem of cipher vulnerability search

Searching for cipher vulnerabilities is an ever-relevant problem. To avoid hacks, it is necessary to reinforce ciphers against leaks and to test encryption systems for vulnerabilities.

All ciphers can be split into two big classes: block ciphers and stream ciphers. Stream ciphers have a big advantage: they provide an acceptable speed of information transmission, suitable for images and videos.

Stream ciphering is based on combining data with a random sequence generated by a special algorithm. Special keys are used for this kind of ciphering, and there are many requirements on the keys so that the data coded with them can be produced and stored safely. Meanwhile, it is not always possible to ensure that a reliable key is used. That’s why stream ciphering systems need to be pre-tested for vulnerabilities.

“I was interested not only in suggesting an algorithm that is able to recover the initial text of a transmitted message, but in finding opportunities to restore the text, both theoretically and practically, in a direct way, without finding the key,” said Anastasia Malashina.

How it works

To find vulnerabilities, she used a method that helps assess the possibility of restoring separate parts of a message without a key, in case a vulnerable cipher is used or there is a leak in the communication channel.

The algorithm uses information about the possible options for each of the enciphered symbols in the initial message and exhaustively searches the values for all the other symbols. If the initial cipher has a vulnerability, this method helps detect it.
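
As a toy illustration of the idea (not Malashina’s actual program): when a leak narrows some positions of a plaintext down to a few candidate symbols, the remaining positions can be brute-forced and filtered against a dictionary. The alphabet, candidates, and dictionary below are invented for the example.

    from itertools import product

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"
    DICTIONARY = {"cat", "cab", "car", "cut", "bat"}  # stand-in corpus dictionary

    # Per-position candidate symbols recovered from a leaky cipher; None = unknown
    candidates = [{"b", "c"}, {"a"}, None]

    def recover(candidates):
        options = [sorted(c) if c else ALPHABET for c in candidates]
        for combo in product(*options):
            word = "".join(combo)
            if word in DICTIONARY:            # dictionary-based attack step
                yield word

    print(list(recover(candidates)))          # ['bat', 'cab', 'car', 'cat']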

The suggested algorithm was implemented in a special program, part of which has recently been patented. This program helps assess encryption systems’ reliability and breach risks in case of data leaks.

“During my study, I looked at a corpus of socio-political texts and an open corpus of the Russian language. A statistical analysis of dictionaries helped me assess the entropy of the texts, from which I later assessed the possibility of partial deciphering. Furthermore, corpus-based dictionaries are used in the experimental part of the study to implement a dictionary-based attack. Similar results for the English language were reached based on the iWeb corpus,” said Malashina.


from Help Net Security https://ift.tt/2QAJj2I

Lack of visibility into IT assets impacting security priorities

Axonius released a report which reveals how sharply the pandemic escalated the lack of visibility into IT assets, and how that is impacting security priorities.

According to the study conducted by ESG, organizations report widening visibility gaps in their cloud infrastructure (79%, which was a 10% increase over 2020), end-user devices (75%), and IoT device initiatives (75%), leading to increased risk and security incidents.

Around nine out of 10 respondents report that automating IT asset visibility could materially improve a variety of security operations.

“Collectively, these assets represent an attack surface that organizations must protect against an ever-expanding threat landscape used by adversaries to compromise infrastructure and carry out malicious activities,” said Dave Gruber, ESG senior analyst.

“When IT and security teams lack visibility into any part of their attack surface, they lose the ability to meet security and operational objectives, putting the business at risk. In some cases, organizations were reporting 3.3 times more incidents caused by lack of visibility into IT assets.”

The report explores the impact that the pandemic has had on IT complexity and security, and explains the challenges that lie ahead. It also reveals how automating asset management can close visibility gaps caused by the rapid shift to remote work, IoT adoption, and accelerated digital transformation.

“This year’s survey once again reinforces that lack of visibility into assets is one of the most critical challenges facing every organization today. Building a comprehensive inventory remains a slow, arduous, often inadequate process, and as a result, more incidents are occurring,” said Dean Sysman, Axonius CEO.

“However, automating cybersecurity asset management can dramatically improve security and compliance efforts. According to the study, eliminating visibility gaps results in a 50% reduction in end-user device security incidents.”

Organizations plagued by pandemic-driven IT complexity

More than 70% of respondents report that additional complexity in their environments has contributed to increasing visibility gaps. More than half cite the rapid shift to remote work and changes to technology infrastructure necessitated by security and privacy regulations as key reasons for this increased complexity.

Nearly 90% of respondents say that the pandemic has accelerated public cloud adoption. The study also reveals that the majority of organizations have suffered more than five cloud-related security incidents in the last year. Half of the respondents report visibility and management challenges with public cloud infrastructure, mostly associated with data spread across different tools, clouds, and infrastructure.

Participants also anticipate a 74% increase in remote workers, even after pandemic restrictions lift. This requires organizations to develop long-term operating and security plans for hybrid work environments so that IT and security teams do not remain blind to the personal networks and devices supporting remote employees.

Although organizations furloughed many IoT projects during the pandemic, they may not be prepared when these initiatives reignite. Only 34% report they have a strong strategy for maintaining IoT device visibility, while 62% report facing continued challenges with the variety of devices in use.

Remote work shifts priorities and resources

The rapid move to remote work motivated a significant change in BYOD policies for 94% of organizations. Pre-pandemic, close to half of the organizations surveyed prohibited using personal devices for corporate activities, but this number has fallen to 29% in this year’s study, adding new management and security challenges.

As device diversity increases, IT and security teams are putting more focus on identity and access management (IAM) solutions, with 65% reporting that IAM is more challenging. And security teams are facing increased workloads for investigations, with incidents on the rise.

Investment in asset management on the rise

Organizations depend on an average of eight different tools to pull together asset inventories, and report intensive, manual processes to assemble the data. On average, it takes more than two weeks to generate an asset inventory, using a combination of tools that weren’t built for the task, including endpoint management and security tools, network access controls, network scanning, configuration and patch management, and vulnerability assessments.

With this kind of effort, 64% treat asset inventory as an event rather than a process, updating inventories only monthly or quarterly. This cadence leaves significant visibility gaps in between, resulting in unmeasurable business risk, and takes time away from other priority tasks, such as vulnerability assessments and improved threat investigation and response. Fortunately, realizing its critical importance, more than 80% report plans to increase investments this year to combat the problem.


from Help Net Security https://ift.tt/3t4eyAh

AI can alter geospatial data to create deepfake geography

A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity. Both images exemplify what a University of Washington-led study calls “location spoofing.”

The photos – created by different people, for different purposes – are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such deepfake geography could become a growing problem.

Identifying new ways of detecting fake satellite photos

So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking.

“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, assistant professor of geography at the UW and lead author of the study.

“The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”

As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That’s due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term “paper towns” describes discreetly placed fake cities, mountains, rivers or other features on a map to prevent copyright infringement.

On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of “Beatosu” and “Goblu,” a play on “Beat OSU” and “Go Blue,” because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map.

But with the prevalence of geographic information systems, Google Earth and other satellite imaging systems, location spoofing involves far greater sophistication, researchers say, and carries with it more risks.

AI-manipulated satellite images: A severe national security threat

In 2019, the director of the National Geospatial Intelligence Agency, the organization charged with supplying maps and analyzing satellite images for the U.S. Department of Defense, implied that AI-manipulated satellite images can be a severe national security threat.

To study how satellite images can be faked, Zhao and his team turned to an AI framework that has been used in manipulating other types of digital files. When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images from an urban area, then generates a deepfake image by feeding the learned characteristics onto a different base map — similar to how popular image filters can map the features of a human face onto a cat.

Comparing features and creating new images of one city

Next, the researchers combined maps and satellite images from three cities — Tacoma, Seattle and Beijing — to compare features and create new images of one city, drawn from the characteristics of the other two. They designated Tacoma their “base map” city and then explored how geographic features and urban structures of Seattle (similar in topography and land use) and Beijing (different in both) could be incorporated to produce deepfake images of Tacoma.

In the example below, a Tacoma neighborhood is shown in mapping software (top left) and in a satellite image (top right). The subsequent deep fake satellite images of the same neighborhood reflect the visual patterns of Seattle and Beijing.

Low-rise buildings and greenery mark the “Seattle-ized” version of Tacoma on the bottom left, while Beijing’s taller buildings, which AI matched to the building structures in the Tacoma image, cast shadows — hence the dark appearance of the structures in the image on the bottom right. Yet in both, the road networks and building locations are similar.

[Image: deepfake geography – a map and real satellite image of a Tacoma neighborhood alongside deepfake satellite images in the visual styles of Seattle and Beijing]

The untrained eye may have difficulty detecting the differences between real and fake, the researchers point out. A casual viewer might attribute the colors and shadows simply to poor image quality. To try to identify a “fake,” researchers homed in on more technical aspects of image processing, such as color histograms and frequency and spatial domains.
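
Those signals are straightforward to compute. The numpy sketch below (not the authors’ code) shows a color-histogram comparison and a high-frequency-energy measure of the sort a detector might use; the raw scores would feed a downstream classifier, and the bin counts and cutoff are illustrative assumptions.

    # A minimal sketch (not the study's method) of two of the signal
    # types mentioned above: color histograms and spectral (frequency
    # domain) statistics, which can differ between real and
    # GAN-generated imagery.
    import numpy as np

    def color_histogram(img, bins=32):
        """Per-channel normalized histogram for an HxWx3 uint8 image."""
        return np.concatenate([
            np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
            for c in range(3)
        ])

    def high_freq_energy(img, cutoff=0.25):
        """Fraction of spectral energy outside a low-frequency disk."""
        gray = img.mean(axis=2)
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
        h, w = gray.shape
        yy, xx = np.mgrid[0:h, 0:w]
        radius = np.hypot(yy - h / 2, xx - w / 2)
        low = radius < cutoff * min(h, w) / 2
        return spectrum[~low].sum() / spectrum.sum()

    # Compare a candidate tile against a reference tile of the same area
    # (random arrays stand in for real imagery here).
    real = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    candidate = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    hist_distance = np.abs(color_histogram(real) - color_histogram(candidate)).sum()
    print(hist_distance, high_freq_energy(candidate))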

Some simulated satellite imagery can serve a purpose, Zhao said, especially when representing geographic areas over periods of time to, say, understand urban sprawl or climate change.

There may be no images of a location for a certain period in the past, or a need to forecast its future, so creating new images based on existing ones (clearly identified as simulations) could fill in the gaps and help provide perspective.

Fact-checking geography

The study’s goal was not to show that geospatial data can be falsified, Zhao said. Rather, the authors hope to learn how to detect fake images so that geographers can begin to develop the data literacy tools, similar to today’s fact-checking services, for public benefit.

“As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data,” Zhao said. “We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary,” he said.


from Help Net Security https://ift.tt/3t0y1lb

PKI market valuation to cross $7 billion by 2027

The market valuation of public key infrastructure will cross $7 billion by 2027, according to Global Market Insights. Rising digital interaction, growing reliance on digital authentication, and regulatory compliance requirements across enterprises are expected to boost market growth.

What’s driving demand for PKI solutions?

The demand for PKI solutions and services is primarily driven by the increasing need across enterprises to improve security capabilities in response to the growing instances of file-based attacks and malware.

A PKI consists of the software, policies, roles, and procedures used to create, manage, and distribute digital certificates and the user-owned cryptographic keys they certify. The encryption algorithms PKI employs help companies secure communications and ensure the privacy of data sent from one computer to another.
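
As a concrete illustration of those building blocks, the sketch below uses Python’s cryptography package to generate a key pair and issue a self-signed certificate binding an identity to the public key; commercial PKI automates issuance, distribution, and revocation of such certificates at scale. The subject name and validity period are illustrative.

    # A minimal sketch of the PKI building blocks described above, using
    # the Python "cryptography" package: generate a key pair, then issue
    # a (self-signed) certificate binding an identity to the public key.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    subject = issuer = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, "example.internal")]
    )
    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)          # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=90))
        .sign(key, hashes.SHA256())   # the signature that conveys trust
    )
    print(cert.subject)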

The managed service segment of the PKI market is anticipated to grow at a 20% rate through 2027. Managed PKI services are hosted on redundant infrastructure, featuring robust data backup and intelligent monitoring capabilities; a service provider manages the complexity of the enterprise PKI and delivers comprehensive security.

A managed PKI service automates the configuration of encryption and signing applications across multiple browsers and platforms. It also verifies the integrity and origin of sensitive documents and emails from all devices.

Cloud-based PKI system to hold significant revenue share

The cloud deployment segment of the public key infrastructure market is estimated to hold a significant revenue share during the forecast period. A cloud-based PKI system significantly reduces an organization’s infrastructure investment, resources, and time by eliminating the need to deploy and maintain PKI infrastructure in-house.

PKI service providers handle the maintenance of all ongoing operations while preserving the availability and scalability of the service, keeping it hassle-free and efficient for enterprises.

The large enterprises segment will account for over 50% of the public key infrastructure market share by 2027 due to consistent and reliable services offered by PKI service providers with low operational cost and global reach.

Large enterprises are geographically dispersed and invest heavily in the latest technologies to increase their overall business productivity and efficiency. The expansive demand from established enterprises to eliminate cyber risks and ensure secure data transfer processes is poised to spur the market expansion.

PKI market for retail to show exponential growth

The public key infrastructure market for the retail segment is projected to show exponential growth during the forecast period. Rising internet and smartphone penetration has created a trend toward online shopping, and in response the industry is reporting an expansive number of digital payments and financial transactions.

Retailers need authentication and encryption services to secure their transactions and network infrastructure. The enterprise-wide need to process sensitive customer information, including financial data, is slated to foster the demand for PKI solutions and services across the retail sector.

Europe’s public key infrastructure market is expected to reach $1.5 billion by 2027 on account of increasing enterprise demand for encryption technologies to defend digital enterprise resources against cloud-based attacks.

The presence of several PKI vendors and the growing implementation of PKI solutions to detect and prevent threats at their early stages are set to impel the market value in the region.


from Help Net Security https://ift.tt/3aRDKDF

Code42 enhances Incydr to help identify insider risk related to file uploads to unsanctioned websites

Code42 is introducing enhanced capabilities to the Code42 Incydr data risk detection and response product for identifying insider risk related to file uploads to unsanctioned websites.

Incydr Browser Upload Detection is built to detect and alert security teams to unsanctioned browser upload activity, such as employees uploading business documents to personal cloud, email or social media accounts or source code repositories, regardless of the network or internet browser being used.

The risk to company data is ever-present and increasing. Not only do security teams have incomplete visibility into how data moves in and out of their organization, but employees are also finding ways – sanctioned or not – to get their jobs done through cloud collaboration tools.

Incydr helps stop data exposure and gives security teams more accurate alerts and context about file exposure events that happen via browsers.

The Incydr browser upload detection capability is more efficient for security teams to manage, since there are no browser plug-ins or proxies to maintain, and it makes investigation and response quicker and more accurate.

“Today, browsers are as critical and ubiquitous in our professional lives as they are in our non-work activities.

“If users can navigate somewhere by browser, they can upload data in seconds, thereby putting valuable company data and intellectual property at risk,” said Code42 CTO Rob Juncker.

“Incydr is uniquely equipped with market-leading technology to provide security teams with broad visibility and detailed context about file content and upload destinations happening via browsers so they can easily detect and respond to data exfiltration – whether malicious or unintentional.”

According to Code42 research from February 2020, the top four unauthorized tools (i.e., not sanctioned by their employer) that employees most commonly use to share files with colleagues are WhatsApp, Google Drive, Facebook and personal email.

Subsequent research from Code42 found that 71% of security teams lack complete visibility into sensitive data movement, regardless of whether the tools used are authorized or not.

At the same time, 59% of IT security leaders expect insider risk to increase in the coming two years.

Incydr allows security teams to:

  • Protect against web-based data leaks with patent-pending detection methodology, which works regardless of browser type or version.
  • Reduce their management burden by removing the need to configure or maintain proxies, SSL inspection, or browser plug-ins.
  • See and easily drill into critical context about file, vector and user for all data exposure events.
  • Define a list of trusted domains so that alerts are generated only for uploads to untrusted domains – actual data exposure events. The ability to exclude trusted domains minimizes the alert noise that occurs when employees upload documents to trusted domains for legitimate work purposes (a minimal sketch of this filtering logic follows this list).
  • Understand at-a-glance where files are moving by assigning untrusted uploads to destination categories, such as cloud storage, email, messaging, social media platform or source code repository.
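
Conceptually, the trusted-domain mechanism is an allowlist check in front of the alerting pipeline. The Python sketch below is a hypothetical illustration of that logic, not Code42’s implementation; all domains and categories are made up.

    # A hypothetical sketch (not Code42's implementation) of the
    # trusted-domain logic described above: uploads to allowlisted
    # domains are suppressed, everything else raises an alert tagged
    # with a destination category.
    TRUSTED_DOMAINS = {"sharepoint.com", "box.corp.example.com"}
    DESTINATION_CATEGORIES = {
        "drive.google.com": "cloud storage",
        "mail.google.com": "email",
        "web.whatsapp.com": "messaging",
        "facebook.com": "social media platform",
        "github.com": "source code repository",
    }

    def classify_upload(domain: str, filename: str):
        """Return an alert dict for untrusted uploads, or None if trusted."""
        if any(domain == d or domain.endswith("." + d) for d in TRUSTED_DOMAINS):
            return None  # legitimate work destination: no alert noise
        return {
            "file": filename,
            "destination": domain,
            "category": DESTINATION_CATEGORIES.get(domain, "uncategorized"),
        }

    print(classify_upload("drive.google.com", "roadmap.docx"))  # alert
    print(classify_upload("sharepoint.com", "roadmap.docx"))    # None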

Code42 Incydr is the purpose-built solution for Insider Risk Management.


from Help Net Security https://ift.tt/3nKr1It

Entrust PKIaaS simplifies cloud migration with pre-built secure solutions

Entrust announced Public Key Infrastructure (PKI) as-a-Service. The next generation of its high-assurance PKI, Entrust PKIaaS is secure, quick to deploy, scales on-demand, and runs in the cloud. This service helps reduce complexity and enhance the security of an organization’s cloud applications.

Entrust PKI as a Service (PKIaaS) simplifies cloud migration with pre-built secure solutions that are ready to implement quickly and efficiently, backed by more than 25 years of Entrust PKI expertise and innovation.

The PKIaaS architecture allows customers to scale on demand while maintaining simplicity by reducing on-premises services, applications, and software for use cases such as Active Directory PKI Service and Private TLS/SSL ACME Service.

The Entrust PKIaaS delivers four key benefits:

  • Scale: Modern use cases require more certificates and shorter-lived certificates. PKIaaS is a high-performance, cloud-native system that grows as required with nearly limitless capacity and is supported by expert Entrust professional services.
  • Speed: A customer’s PKI needs to operate fast and run where it does business. PKIaaS deploys and expands within minutes, delivering a quick solution to secure a range of business use cases.
  • Security: PKIaaS provides the assurance customers expect from Entrust, with dedicated CAs and keys protected in Entrust datacenters, secured by nShield HSMs.
  • Simplicity: Management becomes more challenging as deployments diversify and use cases grow more complex. Entrust PKIaaS manages the PKI so customers don’t have to. It is managed and deployed from the Entrust Central Certificate Services portal, allowing customers to manage their public and private certificate estate in one easy-to-use location.

The launch is part of the wider range of Entrust as-a-Service offerings, an evolution that’s designed to provide organizations with simplicity, security, speed, and scalability as they migrate to the cloud.

Together, these applications secure critical and complex digital security and identity use cases with turnkey cloud services that are easy to consume and backed by experts.

Additionally, PKIaaS supports other use cases such as credential-based authentication and integrates seamlessly with Entrust HSMs, providing customers a clear and trusted roadmap for their cloud deployments.

“As customers migrate to the cloud, their security and identity needs to go with them. Entrust is a pioneer in certificate solutions that enable trust between people, systems and things across public and private environments.

“By bringing Entrust PKI technology and experience to the cloud, PKIaaS provides a secure identity foundation with the scale, speed and simplicity required for businesses rapidly migrating critical services to the cloud,” said Jon Ferguson, Product Management Director of PKI & IoT at Entrust.

“Leveraging our secure datacenters, and managed by Entrust experts, our ‘born in the cloud’ solutions secure a wide range of critical security and identity use cases with turnkey cloud services making it quick and easy to deploy.”


from Help Net Security https://ift.tt/3gOHFVG

Kisi Intrusion Detection allows customers to implement alarm policies from their Kisi dashboard

Kisi has launched its own Intrusion Detection product, moving the company towards becoming a complete physical security solution.

Intrusion Detection allows customers to natively implement alarm policies from their Kisi dashboard.

Intrusion Detection detects intruders using contact sensors – the same sensors that may already be in place to identify access events.

By natively integrating intrusion detection, Kisi helps facility owners and admins address common security issues such as protecting the company’s intellectual property and employees without having to purchase additional security hardware.

Security can now be managed from one cloud dashboard, removing the complexity of legacy systems (that require server rooms and in-house IT specialists to function) and the tedious maintenance of multiple platforms that characterizes modern solutions.

Coordinating from the same solution translates into fewer false alarms and a lower risk of human error while operating the dashboards.

“Our vision has always been to connect people with spaces,” says Carl Pfeiffer, CTO and founder. This explains the move of expanding the platform into a more comprehensive physical security platform.

“For us, it was never about access control or intrusion,” adds Bernhard Mehl, CEO and founder. “It was more about customer enablement from the start.”

Mehl continues, “The pandemic has been an occasion for getting creative, for many tech companies, and we wanted to be creative in how we add new capabilities so that they are easily deployed.

“Adding more security to your facilities should be simple and that’s what we’ve done with Intrusion Detection.”

Kisi releases Intrusion Detection with the clear goal of reducing administrative effort and granting an additional security layer to its customers.

Normally, access and alarm policies are managed by different dashboards: one from the access control system and the other one from the alarm system. This means that the two applications do not collaborate but simply work on their individual tasks.

Having a solution that merges both reduces the risk of mismatches between the two platforms.

This means fewer false alarms, automatic disarming when an employee or visitor is authorized to access (no need to disarm through pin pads), a lower chance of human error (as there is no need to set up and maintain two platforms), and reduced costs, since there is no need to purchase and maintain two solutions.
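
In code, that coordination reduces to a shared event handler: access events and contact-sensor events feed the same state, so an authorized entry disarms the relevant sensor instead of tripping it. The sketch below is hypothetical and not Kisi’s implementation; all event fields and class names are illustrative.

    # A hypothetical sketch (not Kisi's code) of the coordination
    # described above: an authorized unlock pre-clears the matching
    # contact sensor, so the opening does not raise an alarm.
    from dataclasses import dataclass

    @dataclass
    class SensorEvent:
        door_id: str
        opened: bool

    class UnifiedSecuritySystem:
        def __init__(self):
            self.armed = True
            self.authorized_doors = set()  # doors unlocked via access control

        def on_access_granted(self, door_id: str, user: str):
            # Access control and intrusion detection share state.
            print(f"{user} authorized at {door_id}; auto-disarming that door")
            self.authorized_doors.add(door_id)

        def on_sensor_event(self, event: SensorEvent):
            if event.opened and self.armed and event.door_id not in self.authorized_doors:
                print(f"ALARM: unauthorized opening of {event.door_id}")
            else:
                print(f"{event.door_id} opened via authorized entry; no alarm")
            self.authorized_doors.discard(event.door_id)

    system = UnifiedSecuritySystem()
    system.on_access_granted("front-door", "alice@example.com")
    system.on_sensor_event(SensorEvent("front-door", opened=True))   # no alarm
    system.on_sensor_event(SensorEvent("server-room", opened=True))  # alarm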

Finally, because Intrusion Detection is a software product, no additional hardware is required to start using the feature.

Kisi Intrusion Detection works with your existing sensors and can be quickly set up and operated.

Kisi adds this extra layer of security in a way that favors scalability and customer support, making it a solution that grows with you without adding complexity for admins.


from Help Net Security https://ift.tt/3b8cEsn

Echoworx introduces biometric authentication to its Email Encryption platform

Echoworx announced the introduction of biometric authentication to its Echoworx Email Encryption platform, enabling secure passwordless authentication options.

By leveraging biometrics alongside its growing list of seven authentication options, Echoworx gives enterprises the option to access encrypted communications in seconds, without the need for registration, questions or passwords.

“People trust their devices and mobile is the fastest growing channel for reaching customers,” says Michael Ginsberg, CEO of Echoworx.

“That’s why we’ve decided to leverage biometric authentication, like fingerprint and facial recognition already built into devices, to access encrypted messages.

“Eliminating the need to manage and use passwords is the future, and we feel biometrics are the obvious frontrunner for achieving this goal.”
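
Under the hood, device biometrics typically gate access to a device-bound key pair rather than replacing the credential itself: the server verifies a signature over a fresh challenge instead of checking a password. The Python sketch below illustrates that challenge-response pattern conceptually; it is not Echoworx’s protocol, and the biometric unlock is simulated with a boolean.

    # A conceptual sketch (not Echoworx's protocol) of why device
    # biometrics enable passwordless access: fingerprint/face unlock
    # gates a device-bound private key, and the server verifies a
    # signature over a random challenge instead of a password.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Enrollment: the device generates a key pair; only the public key
    # leaves the device (the private key stays behind the biometric lock).
    device_key = ec.generate_private_key(ec.SECP256R1())
    server_stored_public_key = device_key.public_key()

    # Login: the server sends a random challenge...
    challenge = os.urandom(32)

    # ...and the device signs it, but only after a successful biometric
    # unlock (simulated here by a boolean).
    biometric_unlock_succeeded = True
    assert biometric_unlock_succeeded
    signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # The server verifies the signature; raises InvalidSignature on failure.
    server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("signature valid: user authenticated without a password")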

According to Gartner, over 60 per cent of large enterprises are looking to implement passwordless methods of authentication in over 50 per cent of their business by 2022.

In fact, industry leaders like Bank of America and Wells Fargo are already implementing and trusting biometrics to provide seamless access to secure information.

Echoworx’s customizable encryption offers organizations eight ways to deliver secure email, support for 27 languages and seven authentication options.

And, through their addition of biometrics, Echoworx further demonstrates its commitment to providing global enterprises a customizable platform required to accommodate their evolving digital workforce and connected customers.


from Help Net Security https://ift.tt/3aQ5SXL

Mirantis Lens IDE for Kubernetes helps accelerate adoption of cloud-native technologies

Mirantis announced a new version of Lens – the Kubernetes IDE (Integrated Development Environment).

Lens 5 unlocks teamwork and collaboration, eliminating the pain of accessing Kubernetes clusters and providing one-click access to clusters, services, tools, pipelines, automations, and any other related resources, regardless of where or how they are running.

Lens 5 introduces Lens Spaces, a centralized cloud-based service — integrated with Lens IDE — that lets teams create collaborative spaces for their cloud-native development needs.

Lens brings entire cloud-native technology stacks together, making developers more productive.

For example, in addition to team management functionality which allows easy onboarding for new users, Lens Spaces features a centralized catalog providing easy discovery and access to all clusters, services, tools, pipelines, automations, and related resources used by developer teams, regardless of how or where they are running.

“The trends in computing and modern software development are to empower developers, and move to the cloud,” said Miska Kaipiainen, principal of Lens IDE and senior director of engineering at Mirantis.

“We already have a wide spectrum of cloud-native technologies to support these trends and Kubernetes is at the center of gravity pulling it all together.

“While running and operating Kubernetes workloads is becoming easier, development is very complicated and adoption is slow. We are on a mission to change that. Not just for Kubernetes but everything around it.”

With Lens Spaces, users can access and work with Kubernetes clusters easily from anywhere, without sacrificing security or breaking the Kubernetes cluster role-based access control (RBAC) model.

Lens 5’s new Cluster Connect uses end-to-end encryption to secure connections between users and clusters, eliminating the need for a VPN.

One of the most significant advantages is that users do not need to manage kubeconfig files to gain access to their clusters. Lens Spaces admins can easily manage permissions and share access securely among Lens Spaces members and teams.

Lens 5 features include:

  • Lens Spaces — an integrated team environment that allows users to create collaborative spaces for cloud native development. Lens Spaces admins can easily organize, access, and share clusters from anywhere whether they are on premises or in the cloud.
  • Catalog — a system that provides easy discovery and access to all services, tools, pipelines, automation, clusters, and related resources used in cloud-native projects — a personal or a shared cloud-native directory used to enable a much more efficient workspace.
  • Hotbar — a new feature that serves as the main navigation, allowing users to build their own workflows and automation within the desktop application. Items in the Hotbar can be customized by assigning different labels, colors, and icons for easy recall. Items can also be arranged, for example, to prioritize or perform actions in a specific sequence.

from Help Net Security https://ift.tt/3xBcGCr

StackPulse helps enterprises deliver reliable production-grade Kubernetes applications

StackPulse announced a Kubernetes-centric “operations center” initiative as a part of its Reliability platform.

With these additions, StackPulse gives organizations running Kubernetes a powerful set of capabilities to augment their existing incident response practices, helping Site Reliability Engineers (SREs) understand and investigate issues faster and deploy well-tested outage mitigation strategies that prevent customer-facing downtime.

The 15-month-old company, which exited stealth mode in January with $28 million in funding, automates tasks associated with outage response so that SRE and DevOps teams can recover applications more quickly, avoiding lost revenue and degraded customer experiences.

Since Kubernetes is the de-facto standard for running containerized applications, StackPulse wanted to create a set of code-based tools engineers could use to operationalize incident response for production Kubernetes-based applications.

When an error is detected in a Kubernetes environment, StackPulse automatically executes diagnostic steps to gather information from the clusters, and assists engineers in performing the root-cause analysis.

This automation helps them quickly identify how to mitigate and resolve an issue. Additionally, StackPulse has released more than a dozen playbooks built by SRE experts that remediate common Kubernetes problems.

Using the StackPulse platform to automate these playbooks significantly reduces the time to resolution, helping teams restore services faster and meet SLOs.
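
A diagnostic step of the kind described above can be scripted with the official Kubernetes Python client. The sketch below is a generic illustration, not StackPulse’s playbook format; the namespace and pod names are placeholders.

    # A generic sketch (not StackPulse's playbook format) of an automated
    # diagnostic step: on an alert for a pod, gather its status, recent
    # events, and a log tail so the on-call engineer starts triage with
    # context already collected.
    from kubernetes import client, config

    def diagnose_pod(namespace: str, pod_name: str, log_lines: int = 50):
        config.load_kube_config()  # or config.load_incluster_config()
        v1 = client.CoreV1Api()

        pod = v1.read_namespaced_pod(pod_name, namespace)
        print(f"phase: {pod.status.phase}")
        for cs in pod.status.container_statuses or []:
            print(f"container {cs.name}: ready={cs.ready}, restarts={cs.restart_count}")

        events = v1.list_namespaced_event(
            namespace, field_selector=f"involvedObject.name={pod_name}"
        )
        for ev in events.items[-5:]:
            print(f"event: {ev.reason} - {ev.message}")

        print(v1.read_namespaced_pod_log(pod_name, namespace, tail_lines=log_lines))

    diagnose_pod("production", "checkout-7d4b9c")  # placeholder names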

“If you’re serious about cloud-native, you’re using Kubernetes, but it requires learning new concepts and tuning applications alongside infrastructure for best performance,” said Leonid Belkind, CTO and co-founder of StackPulse.

“While developer teams push to adopt K8s due to the benefits in velocity it brings, it can be hard for Ops teams or on-call developers to know how to respond to alerts, or fix issues in production.

“This leads to costly incidents and outages. What we’re releasing today is a set of automated tools for diagnostics, mitigation, and remediation that help any Kubernetes environment operate with the best practices of planet-scale Kubernetes shops.”

All the Kubernetes tools and automated diagnostics are available to teams in the same platform as StackPulse’s incident response functionality so teams can communicate during outages, centralize event data, and take action to remediate.

From detecting issues by correlating signals from multiple sources to enriching alerts sent to on-call teams with root cause and remediation information, StackPulse drastically decreases the customer impact of production issues, helping stop outages in their tracks.


from Help Net Security https://ift.tt/3u4SnLH

Snyk enables Bitbucket Cloud users to manage and mitigate their open source risk

Snyk announced that its platform is now integrated into Bitbucket tooling, giving Bitbucket Cloud users rich security insights without having to leave the product.

In addition, and as a further sign of the two companies’ continued commitment to the ongoing partnership, Atlassian has designated Snyk as the company’s featured security partner for its critical Open DevOps initiative.

This newest collaboration will surface Snyk’s developer-first security solution in the Bitbucket Cloud platform for the first time, empowering all Bitbucket Cloud users to now manage and mitigate their open source risk as part of the development process and throughout Bitbucket workflows.

This enables the following:

  • Individual developers: While building their applications on Bitbucket Cloud, these users can seamlessly integrate Snyk’s security insights and automated remediation to more easily find, prioritize and fix vulnerabilities in their open source dependencies and containers.
  • Developer team managers: Team leaders can understand exactly what risk exists within the codebases their teams contribute to daily in order to proactively resolve issues before they are escalated to security teams (with minimal interruption to their efficient, fast workflows).
  • Security analysts: This integration offers security practitioners greater visibility into existing vulnerabilities and license issues to better understand their cloud application risk and identify how to better prioritize fixes.

“Atlassian is deepening our existing partnership with Snyk so our millions of worldwide users can leverage the company’s unparalleled, actionable security intelligence to eliminate risk before production, which is vital to developers to build software securely,” said Suzie Prince, Head of Product, DevOps, Atlassian.

“This new joint development is a crucial part of our overall commitment and ongoing effort to ensure security is front and center and fully embraced by all teams committed to the future of Open DevOps.”

With a shared mission to provide developers with a more integrated, accessible security experience directly within Bitbucket, the new integration provides:

  • Repo scanning during coding, allowing developer teams to prioritize fixes during development (vs. waiting for security to flag urgent issues after shipping to production).
  • Automated pull requests within Bitbucket Cloud to fix vulnerabilities, with security analysis for those pull requests surfaced in Code Insights.
  • Security embedded into continuous integration/continuous delivery (CI/CD) workflows via Bitbucket Pipes (a minimal scripted-scan sketch follows this list).
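
For teams scripting their own pipeline steps, the same dependency scan can be driven from the Snyk CLI. The Python sketch below wraps snyk test; the CLI command and severity flag are real, but the JSON fields parsed are assumptions about the output format and may vary by CLI version.

    # A minimal sketch of scripting a dependency scan in a CI step with
    # the Snyk CLI. "snyk test --json" is the real CLI entry point; the
    # exact JSON fields read below are assumptions and may need
    # adjusting for your CLI version.
    import json
    import subprocess
    import sys

    result = subprocess.run(
        ["snyk", "test", "--json", "--severity-threshold=high"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    vulns = report.get("vulnerabilities", [])  # assumed field name
    for v in vulns:
        print(f"{v.get('severity', '?').upper()}: {v.get('title')} in {v.get('packageName')}")

    # Fail the pipeline step if anything at or above the threshold was found.
    sys.exit(1 if vulns else 0)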

“Snyk has long admired Atlassian’s focus on the developer experience, which is also fundamental to our company ethos,” said Geva Solomonovich, CTO, Global Alliances, Snyk.

“In a world where developers need to continuously manage and connect multiple tools, making Bitbucket and Snyk now so tightly interoperable is removing a major pain point from the developer’s day-to-day experience.

“Snyk is also honored to be the featured security partner for Atlassian’s important Open DevOps initiative, working in lockstep to help more developers worldwide embrace and evangelize a security mindset.”

Building on the long-standing partnership between the two companies, Snyk is also a Strategic Sponsor for Atlassian Team ’21 alongside AWS and Slack.

Prior to this announcement, the Snyk and Atlassian partnership included integrations with Bitbucket Cloud, Bitbucket Pipelines and Jira.


from Help Net Security https://ift.tt/32YwIsD

Split and Atlassian offer bidirectional integration for Jira issues and feature flags

Split announced a new integration with Jira Software in support of Open DevOps, an open toolchain allowing software development teams to use Atlassian products with third-party tools as a seamless, all-in-one solution.

The integration unites Split’s feature flagging capabilities with Jira project planning, giving engineering and product teams greater visibility, enhancing coordination when tracking release progress, and enabling greater efficiencies from flag creation to rollout to code cleanup.

“We’re particularly excited to partner with our investor Atlassian on this new Open DevOps launch,” said Trevor Stuart, Co-founder and President at Split.

“Every customer we work with has their own unique software development and business requirements, and brings a complex and diverse toolchain built to serve those requirements.

“The faster and easier Split can integrate with that toolchain, the better we set up that customer for success. Open DevOps makes this all possible.

“Our integration with Jira Software is the first of many we aim to deliver with Atlassian’s best-of-breed tools in unifying the DevOps and feature delivery lifecycles.”

Split is the only feature flagging vendor with an integration that enables users to connect, view, and share Jira issue and flag information in both platforms.

As issues and flags scale, this informs teams working on either platform when features are safe to roll out or flags are ready for code cleanup, mitigating release risk and technical debt.

Moreover, all of this can be achieved without the upfront investment of engineering time spent stitching together disparate tools.

With Jira Software as the backbone of their toolchain, teams can deploy code faster on Split and Jira without constantly switching between tools or tracking down the owner responsible for every project and release.

“We saw teams bringing their tools together in Jira naturally and wanted to make this a little easier to do by drastically reducing the barriers to integrate tools,” remarked Suzie Prince, Head of Product at Atlassian.

“The tight integrations with best-of-breed tools, coupled with insights in Jira, mean teams will be operating at a much higher velocity than ever before.”

Now when creating a new Jira issue, an engineer can connect a feature flag and deploy code on their own schedule without the risk of disrupting the user experience.

When ready for release, the engineer targets the new feature to a small percentage of the user base in just a few clicks within Split.

When the feature is deemed safe to rollout to more users, a teammate or a product manager in Split quickly reviews the associated issue status before dialing up the percentage of users targeted.

Once the feature is 100% released and no outstanding issues remain, the engineer can then safely perform code cleanup. The feature status is continuously reflected in Jira so that the entire development team has visibility.
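
Percentage rollouts of this kind are conventionally implemented by hashing a stable user key into a fixed number of buckets, so each user’s assignment stays deterministic as the percentage dials up. The sketch below is a hypothetical illustration of that technique, not Split’s SDK; the flag name and user keys are made up.

    # A hypothetical sketch (not Split's SDK) of percentage-based flag
    # targeting: hash a stable user key into one of 100 buckets and
    # enable the feature for users whose bucket falls under the current
    # rollout percentage. Deterministic hashing keeps each user's
    # experience stable as the percentage increases.
    import hashlib

    def bucket(user_id: str, flag_name: str) -> int:
        digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100

    def is_enabled(user_id: str, flag_name: str, rollout_percent: int) -> bool:
        return bucket(user_id, flag_name) < rollout_percent

    # Dialing up from 5% to 100% only adds users; nobody flips back off.
    for pct in (5, 50, 100):
        on = sum(is_enabled(f"user-{i}", "new-checkout", pct) for i in range(1000))
        print(f"{pct}% rollout -> {on} of 1000 users enabled")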

After observing how teams have used Jira and Split together to complete this process in the past, the companies concluded that an integration would deliver immediate efficiency gains and be a prime candidate for helping software development teams discover the value of Open DevOps.

“Split’s combination of feature flags and data is instrumental to a team’s ability to ship better software, faster,” said Chris Hecht, Head of Corporate Development at Atlassian.

“We know developers will want Split in their toolchain and require a seamless experience with Atlassian products. We are thrilled to be deepening our partnership with Split through this Jira integration and Open DevOps launch.”


from Help Net Security https://ift.tt/3gX6EpI

Nintex offers business solution templates to help organizations accelerate their DX

Nintex announced it is helping organizations accelerate their digital business transformation initiatives with pre-built configurable process maps, workflow and forms automation, as well as robotic process automation templates.

Downloadable business solution templates from Nintex span common use cases, industries, and departments and are available in the Nintex Solution Accelerator Gallery and integrated with Nintex Workflow Cloud.

“Our pre-built and easily-configurable digital business solution templates are designed to save every organization valuable time while accelerating how fast processes can be documented and automated,” said Nintex Chief Product Officer Neal Gottsacker.

“Every process map and automation template is built to meet specific business process scenarios across departments and industries like government, financial services, manufacturing, and more.”

With nearly 290 templates and more than 15,000 template downloads, the Nintex Solution Accelerator Gallery is a free online resource to help organizations of all sizes accelerate digital transformation with a best-practice approach to process mapping and automation.

The gallery is easily searchable with filters, which makes it fast to find an ideal template for a business process to be documented, reengineered or automated. Filters include:

  • Industry – banking, financial services, health and lifestyle services, energy, government, manufacturing, technology, education, and food and beverage
  • Department – customer services, finance and legal, human resources, information technology, operations and procurement, and sales and marketing
  • Capability – process maps, workflows, RPA Botflows, connectors and Nintex K2 Cloud

Nintex Workflow Cloud customers can also quickly access every Nintex Solution Accelerator Gallery template from within their Nintex Workflow Cloud tenant via integrated links to the gallery.

This helps organizations quickly auto-import their Nintex tenant details into relevant templates to efficiently deploy solutions even faster.

Popular Nintex templates include employee onboarding process maps and workflow templates, as well as process maps for invoice processing, workflow templates for work from home agreements, and templates to quickly convert SharePoint 2010/2013 workflows.


from Help Net Security https://ift.tt/3eFVFP0

Fusion Risk Management helps financial institutions meet Bank of England, PRA, FCA regulatory requirements

Fusion Risk Management announced that it has further strengthened its offerings to help financial institutions meet and exceed new Bank of England, PRA, and FCA regulatory requirements which take effect in early 2022, in addition to the recently formalized guidance shared by the Basel Committee.

Fusion consulted with regulated firms, industry advocacy groups, and supervisory authorities to bolster necessary processes in anticipation of the regulations, enhancing its already comprehensive operational resilience approach.

Fusion continues to grow rapidly and now counts five global systemically important banks (GSIBs), 50% of the top 10 largest US domestic banks, and more than 120 leading financial institutions globally as customers.

Fusion’s collaborative ENGAGE customer community fosters a common understanding and best practices between those working toward greater operational resilience in financial services.

Each week, more than 90 organizations meet to discuss their most critical issues, in sessions often led by regulated banks and financial services participants.

“Financial institutions today must navigate an increasingly complex and demanding regulatory environment, and they need a partner that understands the landscape and anticipates their operational resilience needs,” said Michael Campbell, Chief Executive Officer, Fusion Risk Management.

“Many institutions do not have the necessary processes and framework to adequately respond to new resilience regulations. Fusion’s proven track record as a provider of best-in-class service ensures our customers stay ahead of regulatory expectations.

“Our community of customers is at the leading edge of operational resilience, spanning risk management, third-party management, cyber security, disaster recovery, and business continuity, and we’re proud to help them remain resilient regardless of unforeseen occurrences.”

“Fusion’s mission has always been to keep businesses in business and safeguard our customers’ ability to deliver on their brand promises, regardless of the disruption,” said Rich Cooper, Global Head of Financial Service Go-To-Market, Fusion Risk Management.

“Financial services providers in particular trust the Fusion Framework System to maintain a robust operational resilience program that exceeds regulatory requirements and optimizes their operational efficiency.

“We take pride in meeting our global customers’ needs and look forward to continuing to provide ever evolving solutions. We are pleased that our capabilities are tested, available, and ready to implement today.”


from Help Net Security https://ift.tt/3gP4JDD