The Truth About Privileged Access Security On AWS And Other Public Clouds

 

Bottom Line: Amazon’s Identity and Access Management (IAM) centralizes identity roles, policies, and Config Rules, yet it doesn’t go far enough to provide the Zero Trust-based approach to Privileged Access Management (PAM) that enterprises need today.

AWS provides a baseline level of support for Identity and Access Management at no charge as part of their AWS instances, as do other public cloud providers. Designed to provide customers with the essentials to support IAM, the free version often doesn’t go far enough to support PAM at the enterprise level. To AWS’s credit, they continue to invest in IAM features while fine-tuning how Config Rules in their IAM can create alerts using AWS Lambda. AWS’s native IAM can also integrate at the API level with HR systems and corporate directories, and suspend users who violate access privileges.
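To make the Config Rules-plus-Lambda pattern concrete, here is a minimal sketch of the kind of compliance evaluation such a rule might run against an IAM user. The input shape and field names (`mfaEnabled`, `inlinePolicies`) are simplified assumptions for illustration, not the exact AWS Config configuration-item schema:

```python
# Sketch of the evaluation logic an AWS Config custom rule (backed by
# Lambda) might apply to an IAM user. The input dict shape is a
# simplified assumption, NOT the actual AWS Config event schema.

def evaluate_iam_user(user):
    """Return 'NON_COMPLIANT' for users without MFA, or with an inline
    policy statement granting wildcard Action on wildcard Resource."""
    if not user.get("mfaEnabled", False):
        return "NON_COMPLIANT"
    for policy in user.get("inlinePolicies", []):
        for stmt in policy.get("Statement", []):
            if stmt.get("Action") == "*" and stmt.get("Resource") == "*":
                return "NON_COMPLIANT"
    return "COMPLIANT"
```

In a real deployment, a function like this would be invoked by Lambda on configuration changes, with non-compliant results surfaced as the alerts described above.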

In short, the native IAM capabilities offered by AWS, Microsoft Azure, Google Cloud, and others provide enough functionality to help an organization get up and running and control access in their respective homogeneous cloud environments. However, they often lack the scale to fully address the more challenging, complex areas of IAM and PAM in hybrid or multi-cloud environments.

The Truth about Privileged Access Security on Cloud Providers Like AWS

The essence of the Shared Responsibility Model is assigning responsibility for the security of the cloud itself (the infrastructure, hardware, software, and facilities) to AWS, and assigning the securing of operating systems, platforms, and data to customers. The AWS version of the Shared Responsibility Model, shown below, illustrates how Amazon has defined securing the data itself, managing the platform, applications and how they’re accessed, and various configurations as the customers’ responsibility:

AWS provides basic IAM support that protects its customers against privileged credential abuse in a homogeneous AWS-only environment. Forrester estimates that 80% of data breaches involve compromised privileged credentials, and a recent survey by Centrify found that 74% of all breaches involved privileged access abuse.

The following are the four truths about privileged access security on AWS (and, generally, other public cloud providers):

  1. Customers of AWS and other public cloud providers should not fall for the myth that cloud service providers can completely protect their customized and highly individualized cloud instances. As the Shared Responsibility Model above illustrates, AWS secures the core areas of their cloud platform, including infrastructure and hosting services. AWS customers are responsible for securing operating systems, platforms, and data and, most importantly, privileged access credentials. Organizations need to consider the Shared Responsibility Model the starting point for creating an enterprise-wide security strategy with a Zero Trust Security framework being the long-term goal. AWS’s IAM is an interim solution to the long-term challenge of achieving Zero Trust Privilege across an enterprise ecosystem that is going to become more hybrid or multi-cloud as time goes on.
  2. Despite what many AWS integrators say, adopting a new cloud platform doesn’t require a new Privileged Access Security model. Many organizations that have adopted AWS and other cloud platforms are using the same Privileged Access Security Model they have in place for their existing on-premises systems. The truth is the same Privileged Access Security Model can be used for on-premises and IaaS implementations. Even AWS itself has stated that conventional security and compliance concepts still apply in the cloud. For an overview of the most valuable best practices for securing AWS instances, please see my previous post, 6 Best Practices For Increasing Security In AWS In A Zero Trust World.
  3. Hybrid cloud architectures that include AWS instances don’t need an entirely new identity infrastructure and can rely on advanced technologies, including Multi-Directory Brokering. Creating duplicate identities increases cost, risk, and overhead and the burden of requiring additional licenses. Existing directories (such as Active Directory) can be extended through various deployment options, each with their strengths and weaknesses. Centrify, for example, offers Multi-Directory Brokering to use whatever preferred directory already exists in an organization to authenticate users in hybrid and multi-cloud environments. And while AWS provides key pairs for access to Amazon Elastic Compute Cloud (Amazon EC2) instances, their security best practices recommend a holistic approach should be used across on-premises and multi-cloud environments, including Active Directory or LDAP in the security architecture.
  4. It’s possible to scale existing Privileged Access Management systems in use for on-premises systems today to hybrid cloud platforms that include AWS, Google Cloud, Microsoft Azure, and other platforms. There’s a tendency on the part of system integrators specializing in cloud security to oversell cloud service providers’ native IAM and PAM capabilities, saying that a hybrid cloud strategy requires separate systems. Look for system integrators and experienced security solutions providers who can use a common security model already in place to move workloads to new AWS instances.

Conclusion

The truth is that Identity and Access Management solutions built into public cloud offerings such as AWS, Microsoft Azure, and Google Cloud are stop-gap solutions to a long-term security challenge many organizations are facing today. Instead of relying only on a public cloud provider’s IAM and security solutions, every organization’s cloud security goals need to include a holistic approach to identity and access management and not create silos for each cloud environment they are using. While AWS continues to invest in their IAM solution, organizations need to prioritize protecting their privileged access credentials – the “keys to the kingdom” – that, if ever compromised, would allow hackers to walk in the front door of the most valuable systems an organization has. The four truths defined in this article are essential for building a Zero Trust roadmap for any organization that will scale with them as they grow. By taking a “never trust, always verify, enforce least privilege” strategy when it comes to their hybrid- and multi-cloud strategies, organizations can avoid costly breaches that harm the long-term operations of any business.

What Needs To Be On Your CPQ Channel Roadmap In 2019

Bottom Line: Adding new features to your CPQ channel selling platform directly benefits your resellers and channel partners, driving greater revenue, channel loyalty, and expansion into new markets.

Personalization Is Key To CPQ Succeeding In Channels

Sustaining and strengthening relationships across all indirect selling channels succeeds when dealers, multi-tier distributors, resellers, intermediaries, and service providers each can personalize the CPQ applications and platforms they use. Larger dealers, distributors, and resellers are adept at personalizing CPQ selling portals by the various roles in their organization. Personalization combined with a highly intuitive, configurable interface improves CPQ applications’ ease of use, enabling channel partners to get more done. The more intuitive and easy a CPQ application is to use, the more channel partners rely on it to place orders. When distributors are representing, on average, 12 different manufacturers, the one with the most intuitive, easily used CPQ system often gets the majority of sales.

Another aspect of personalization is defining levels of resellers. When many organizations first launch their CPQ channel selling strategies, one of the first requests they make is to organize all channel partners into performance categories. Differentiating channel partners on sales performance, customer satisfaction, and aftermarket revenue, then gamifying how each of them can move up a level, is proving very effective at increasing channel sales. Competing with one another to be the top reseller for the manufacturing and service companies they represent lifts an entire channel network to higher performance.

Every dealer, multi-tier distributor, reseller, intermediary, and service provider also has a unique way of selling that works best for their business. Another must-have feature on any CPQ channel roadmap is greater workflow flexibility to support increasingly complex, IoT- and AI-enabled configurable products. Smart, connected products are the future of manufacturing and channel sales. Capgemini estimates that the size of the connected products market will be $519B to $685B by 2020. Workflows like the one shown below of an internal sales rep using a multichannel CPQ system to order a customized product are due for a refresh to support even greater flexibility for more channels and greater product options.


Most Valuable Features For A CPQ Channel Roadmap In 2019

There’s a direct link between how effective a CPQ platform is across multi-tier distribution networks and the productivity of sales teams using them. 83% of sales teams are using CPQ apps today based on Accenture Interactive’s recent study, Empowering Your Sales Force: It’s Not Just Automation, It’s Personal (8 pp., PDF, no opt-in). There’s ample evidence that the more effective a CPQ platform is at equipping dealers, multi-tier distributors, resellers, intermediaries, and service providers, the greater the sales they achieve. The 2019 B2B Buyers Survey Report, by DemandGen in collaboration with DemandBase, found that B2B buyers are more likely to purchase from sales representatives who demonstrate a stronger knowledge of the solution area and the business landscape (65%) compared to competitors. B2B buyers also give high praise for sales teams who can provide quotes quickly and respond to their inquiries promptly (63%), in addition to providing higher-quality content (61%). Each of these benefits is derived from a CPQ platform that can scale across every phase of the selling lifecycle.

The following are the key features needed on CPQ channel roadmaps in 2019 to stay competitive and scale sales and revenue on pace with market growth:

  • Greater personalization for each type of partner portal supported by real-time integration to CRM and ERP systems, designed to scale for sales team turnover across multi-tier distribution networks. Channel partners’ sales teams tend to churn quickly, and it’s best to design in intuitive, easily configured portals by sales role to help new hires get up to speed fast. Channel sales associates are typically the fastest-churning area of any selling business. With greater personalization comes the need for greater integration to provide the data needed to enable partner portals to have a greater depth of functionality. The following graphic from Deloitte’s recent study, Configure, Price, and Quote (CPQ) Capabilities, illustrates this point:

  • Support for multi-tier pricing, price management, price optimization, price enforcement, and special workflows, including Special Pricing Requests (SPR). Baseline CPQ platforms support price management and have successfully transitioned multi-tier distribution networks off of Microsoft Excel spreadsheets to a single pricing model that scales across all products and channels. Consider adopting advanced pricing logic to support SPRs so sales operations teams don’t have to handle this process manually. Among manufacturers that have transitioned from manual to automated SPR approvals, average deal sizes have increased over 60% and productivity has jumped over 76%, according to a recent Gartner survey.
  • Augment advanced product configuration tools by making them more intuitive and easier to use to sell the more advanced products in your catalog. It’s time to push the boundaries of CPQ channel selling systems to sell more complex products and drive greater revenue and margins. Forward-thinking manufacturers are taking a virtual design and 3D-based design approach to accomplish this. Enabling channel partners to take larger orders for more complex products is paying off.
  • Upgrade guided selling strategies to be more than catalog-based selection systems, mining customer data using machine learning to see which products they have the greatest propensity to buy when. It’s time to migrate off of the guided selling systems that are selecting products from catalogs that may deliver the best gross margins or have a traditionally high attach rate with the product the customer is buying. Machine learning is making it possible to provide greater accuracy and precision to recommendations than ever before.
  • Improve the usability of sales promotions, rebates, and most importantly, Market Development Funds (MDF). It’s amazing how much time manufacturers are spending manually handling MDF claims today. It’s time to automate this area of the CPQ channel roadmap and save thousands of hours and dollars a year while enabling resellers to get reimbursed faster or get the funds they need to grow their businesses.
  • Contract management is a must-have for CPQ channel roadmaps today. Integrating a cloud-based contract management system into a CPQ platform is vital for taking one more step towards an end-to-end quote-to-cash workflow being in place. Real-time integration to contract management can save days of waiting for contract approvals, all leading to more closed deals and faster, more lucrative sales cycles.
  • Manufacturers can realize greater revenue potential through their channels by combining machine learning insights to find those aftermarket customers most ready to buy while accelerating sales closing cycles with CPQ. Manufacturers want to make sure they are getting their fair share of the aftermarket. Using a machine learning-based application, they can help their resellers increase average deal sizes by knowing which products and services to offer when. They’ll also know when to present upsell and cross-sell offers into an account at a specific point in time when they will be most likely to lead to additional sales, all based on machine learning-based insights. Combining machine learning-based insights to guide resellers to the most valuable and highest probability customer accounts ready to buy with an intuitive CPQ system increases sales efficiency leading to higher revenues.
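The Special Pricing Request automation described above can be sketched as simple routing logic: auto-approve small discounts and escalate larger ones to the right approver. The thresholds and approver roles below are hypothetical illustrations of the pattern, not rules from any specific CPQ product:

```python
# Illustrative Special Pricing Request (SPR) routing. The 10% / 25%
# discount thresholds and the approver roles are hypothetical
# assumptions, not taken from any vendor's actual workflow.

def route_spr(list_price, requested_price):
    """Route an SPR based on the size of the requested discount."""
    discount = 1 - requested_price / list_price
    if discount <= 0.10:
        return "auto-approved"
    if discount <= 0.25:
        return "sales-manager-review"
    return "pricing-committee-review"
```

A workflow like this is what removes the manual approval step for routine discounts while keeping humans in the loop for exceptional ones.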

Conclusion

Now that the solutions exist for resellers to simplify CPQ selling strategies, it’s up to each manufacturer to decide how competitive they want their channel partner roadmap to be. With any given manufacturer’s quoting and configuration tools competing with 11 others, on average, for a reseller’s time, it is clear that roadmaps need a refresh to stay competitive. Suggested options include offering greater personalization, multi-tier pricing and a more thorough approach to price management, advanced product configuration support, revamped guided selling strategies, and improved usability of sales promotions, rebates, and Market Development Funds (MDF). Manufacturers need to prioritize each of these features relative to their product- and revenue-specific goals by channel. A fascinating company with deep expertise in designing, implementing, and scaling analytics, service, sales, IoT, and CPQ solutions for manufacturers is eLogic. The company’s mission is to enable manufacturers to achieve the highest value customer engagement and product & service lifecycle performance. eLogic is regarded as the leading system integration partner in CPQ and product configuration and is considered a global leader in delivering business solutions for manufacturers across SAP configuration technologies and Microsoft Dynamics 365, Power Platform & Azure.

Your Mobile Phone Is Your Identity. How Do You Protect It?


  • The average cost of a data breach has risen 12% over the past 5 years and is now $3.92M.
  • U.S.-based breaches average $8.19M in losses, leading all nations.
  • Not integrating mobile phone platforms and protecting them with a Zero Trust Security framework can add up to $240K to the cost of a breach.
  • Companies that fully deploy security automation technologies experience around half the cost of a breach ($2.65M on average) compared to those that do not deploy these technologies ($5.16M on average).

These and many other fascinating insights are from the 14th annual IBM Security Cost of a Data Breach Report, 2019. IBM is making a copy of the report available here for download (76 pp., PDF, opt-in). IBM and Ponemon Institute collaborated on the report, recruiting 507 organizations that have experienced a breach in the last year and interviewing more than 3,211 individuals who are knowledgeable about the data breach incident in their organizations. A total of 16 countries and 17 industries were included in the scope of the study. For additional details regarding the methodology, please see pages 71 – 75 of the report.

Key insights from the report include the following:

  • Lost business costs are 36.2% of the total cost of an average breach, making it the single largest loss component of all. Detection and escalation costs are second at 31.1%, as it can take up to 206 days to first identify a breach after it occurs and an additional 73 days to contain the breach. IBM found the average breach lasts 279 days. Breaches take a heavy toll on the time resources of any organization as well, eating up 76% of an entire year before being discovered and contained.

  • U.S.-based breaches average $8.19M in losses, leading all nations with the highest country average. The cost of U.S.-based breaches far outdistances that of all other countries and regions of the world due to the value and volume of data exfiltrated from enterprise IT systems based in North America. North American enterprises are also often the most likely to rely on mobile devices to enable greater communication and collaboration, further exposing that threat surface. The Middle East has the second-highest average breach loss of $5.97M. In contrast, Indian and Brazilian organizations had the lowest total average cost at $1.83M and $1.35M, respectively.

  • Data breach costs increase quickly in integration-intensive corporate IT environments, especially where there is a proliferation of disconnected mobile platforms. The study found the highest contributing costs associated with a data breach are caused by third parties, compliance failures, extensive cloud migration, system complexity, and extensive IoT, mobile and OT environments. This reinforces that organizations need to adopt a Zero Trust Security (ZTS) framework to secure the multiple endpoints, apps, networks, clouds, and operating systems across perimeter-less enterprises. Mobile devices are enterprises’ fastest-growing threat surface, making them one of the highest priorities for implementing ZTS frameworks. Companies to watch in this area include MobileIron, which has created a mobile-centric, zero-trust enterprise security framework. The framework is built on the foundation of unified endpoint management (UEM) and additional zero trust-enabling technologies, including zero sign-on (ZSO), multi-factor authentication (MFA), and mobile threat detection (MTD). This approach to securing access and protecting data across the perimeter-less enterprise is helping to alleviate the high cost of data breaches, as shown in the graphic below.

  • Accidental, inadvertent breaches from human error and system glitches are still the root cause for nearly half (49%) of the data breaches. And phishing attacks on mobile devices that are lost, stolen, or compromised in workplaces are a leading cause of breaches due to human error. While less expensive than malicious attacks, which cost an average of $4.45M, system glitches and human error still result in costly breaches, with an average loss of $3.24M and $3.5M respectively. To establish complete control over data, wherever it lives, organizations need to adopt Zero Trust Security (ZTS) frameworks that are determined by “never trust, always verify.” For example, MobileIron’s mobile-centric zero-trust approach validates the device, establishes user context, checks app authorization, verifies the network, and detects and remediates threats before granting secure access to a device or user. This zero-trust security framework is designed to stop accidental, inadvertent and maliciously-driven, intentional breaches. The following graphic compares the total cost for three data breach root causes:

Conclusion

Lost business is the single largest cost component of any breach, and it takes years to fully recover from one. IBM found that 67% of the costs of a breach accrue in the first year, 22% accrue in the second year and 11% in the third. The more regulated a company’s business, the longer a breach will accrue costs and impact operations. Compounding this is the need for a more Zero Trust-based approach to securing every endpoint across an organization.
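Applying IBM’s reported accrual pattern (67% in year one, 22% in year two, 11% in year three) to the $3.92M average breach cost gives a quick sense of how long a breach stays on the books:

```python
# Worked example: spreading IBM's $3.92M average breach cost across the
# reported 67% / 22% / 11% three-year accrual pattern (in $ millions).

AVERAGE_BREACH_COST_M = 3.92
ACCRUAL = {"year 1": 0.67, "year 2": 0.22, "year 3": 0.11}

costs = {year: round(AVERAGE_BREACH_COST_M * share, 2)
         for year, share in ACCRUAL.items()}
# Roughly $2.63M in year one, $0.86M in year two, $0.43M in year three.
```

Even in year three, the average organization is still absorbing hundreds of thousands of dollars in breach-related costs.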

Not integrating mobile phone platforms and protecting them with a Zero Trust Security (ZTS) framework can add up to $240K to the cost of a breach. Companies working to close this gap by securing mobile devices with ZTS frameworks include MobileIron, which has created a mobile-centric, zero-trust enterprise security framework. There’s a significant amount of innovation happening with Identity and Access Management that thwarts privileged account abuse, which is the leading cause of breaches today. Centrify’s most recent survey, Privileged Access Management in the Modern Threatscape, found that 74% of all breaches involved access to a privileged account. Privileged access credentials are hackers’ most popular technique for initiating a breach to exfiltrate valuable data from enterprise systems and sell it on the Dark Web.

How To Deal With Ransomware In A Zero Trust World

  • Lake City, Florida’s city government paid ransomware attackers about $530,000, or 42 Bitcoins, to restore access to systems and data last month.
  • The City of Riviera Beach, Florida, paid ransomware attackers about $600,000 to regain access to their systems last month.
  • Earlier this month, LaPorte County, Indiana paid over $130,000 worth of Bitcoins to ransomware hackers to regain access to part of its computer systems.
  • This week, Louisiana Governor John Bel Edwards activated a state of emergency in response to a wave of ransomware infections that have hit multiple school districts in North Louisiana.

The recent ransomware attacks on Lake City, Florida; Riviera Beach, Florida; LaPorte County, Indiana; the City of Baltimore, Maryland; and a diverse base of enterprises including Eurofins Scientific, COSCO, Norsk Hydro, the UK Police Federation, and Aebi Schmidt reflect that higher ransoms are being demanded than in the past to release high-value systems. There’s been a 44% decline in the number of organizations affected by ransomware in the past two years, yet an 89% increase in ransom demands over the last 12 months, according to the Q1, 2019 Ransomware Marketplace Report published by Coveware. The Wall Street Journal’s article “How Ransomware Attacks Are Forcing Big Payments From Cities, Counties” provides an excellent overview of how Ryuk, a ransomware variant, works and is being used to hold unprepared municipalities’ IT networks for ransom.

How To Handle A Ransomware Attack

Because I was interested in learning more about ransomware and how municipalities and manufacturers can protect themselves against it, I attended Centrify’s recent webinar, “5 Steps To Minimize Your Exposure To Ransomware Attacks”. Dr. Torsten George, noted cybersecurity evangelist, delivered a wealth of insights and knowledge about how any business can protect itself and recover from a ransomware attack. Key insights from his webinar include the following:

  • Ransomware attackers are becoming more sophisticated, using spear-phishing emails that target specific individuals and seeding legitimate websites with malicious code – it’s helpful to know the anatomy of an attack. Some recent attacks have even started exploiting smartphone vulnerabilities to penetrate corporate networks, according to Dr. George. The following graphic from the webinar explains how attackers initiate their ransomware attempts by sending a phishing email that might include a malicious attachment or link that leads to a malicious website. When a user clicks on the file/webpage, it unloads the malware and starts executing. It then establishes communications to the Command and Control Server – more often than not via TOR, which is free, open-source software for enabling anonymous communication. In the next step, the files get encrypted, and the end-user gets the infamous ransomware screen. From there on, communications with the end-user are conducted via TOR or similar technologies. Once the ransom is paid – often via Bitcoin to avoid any traces to the attacker – the private key is delivered to the users to regain access to their data.
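Knowing this anatomy points to a defensive counterpart: the encryption phase typically shows up in telemetry as a burst of files being renamed to a single unfamiliar extension. The sketch below is a hypothetical detection heuristic illustrating that idea, not a production detector; the extension list and threshold are illustrative assumptions:

```python
# Hypothetical heuristic for spotting the encryption phase of a
# ransomware attack: many file renames to a suspicious extension in a
# short window. Extension names and the threshold are illustrative.

SUSPICIOUS_EXTENSIONS = {".locked", ".encrypted", ".ryk"}

def looks_like_ransomware(rename_events, threshold=50):
    """rename_events: iterable of (old_name, new_name) pairs observed
    in a short time window. Returns True when enough renames gain a
    suspicious extension to suggest mass encryption."""
    hits = sum(
        1 for _old, new in rename_events
        if any(new.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS)
    )
    return hits >= threshold
```

Real endpoint detection products use far richer signals (entropy changes, shadow-copy deletion, C2 traffic), but the burst-of-renames pattern follows directly from the attack flow described above.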

  • To minimize the impact of a ransomware attack on any business, Business Continuity and Prevention strategies need to be in place now. A foundation of any successful Business Continuity strategy is following best practices defined by the U.S. Government Interagency Technical Guidance. These include performing regular data backups, penetration testing, and securing backups, as the graphic below illustrates:

  • There are six preventative measures every business can take today to minimize the risk and potential business disruption of ransomware, according to the U.S. Government Interagency Technical Guidelines and the FBI. One of the most valuable insights gained from the webinar was learning how every business needs to ingrain cybersecurity best practices into its daily routines. Calling it “cyber hygiene,” Dr. George provided insights into the following six preventative measures:

  • Stopping privileged access abuse with a Zero Trust Privilege-based approach reduces the ability of ransomware attacks and breaches to proliferate. Centrify found that 74% of all data breaches involve access to a privileged account. In a separate study, The Forrester Wave™: Privileged Identity Management, Q4 2018 (PDF, 19 pp., no opt-in), Forrester found that at least 80% of data breaches have a connection to compromised privileged credentials. Dr. George observed that hackers don’t hack in anymore; they log in using weak, default, stolen, or otherwise compromised credentials. Zero Trust Privilege requires granting least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment.
  • One of the most valuable segments of the webinar covered five steps for minimizing an organization’s exposure to ransomware by taking a Zero Trust-based approach. The five steps every organization needs to consider to reduce the threat of ransomware are the following:
  1. Immediately Establish A Secure Admin Environment. To prevent malware from spreading during sessions that connect servers with privileged access, establish policies that only authorize privileged access from a “clean” source. This will prevent direct access from user workstations that are connected to the Internet and receive external email messages, which are too easily infected with malware.
  2. Secure remote access from a Zero Trust standpoint first, especially if you are working with remote contractors, outsourced IT, or development staff. When remote access is secured through a Zero Trust-based approach, it alleviates the need for a VPN and handles all the transport security between the secure client and distributed server connector gateways. Ransomware can travel through VPN connections and spread across entire corporate networks. With a reverse proxy approach, there is no logical path to the network, so ransomware is unable to spread from system to system.
  3. Zoning off access is also a must-have to thwart ransomware attacks from spreading across company networks. The webinar showed how it’s a very good idea to create and enforce a series of access zones that restrict access by privileged users to specific systems and requires multi-factor authentication (MFA) to reach assets outside of their zone. Without passing an MFA challenge, ransomware can’t spread to other systems.
  4. Minimizing attack surfaces is key to stopping ransomware, reducing its potential to enter and spread throughout a company’s network. Dr. George made the point that vaulting away shared local accounts is a very effective strategy for minimizing attack surfaces. Ransomware does not always need elevated privileges to spread, but when it gains them, the impact is far more damaging.
  5. Least privilege access is foundational to Zero Trust and a must-have on any network to protect against ransomware. When least privilege access is in place, organizations have much tighter, more granular control over which accounts and resources admins and users have access to. Ransomware gets stopped in its tracks when it can’t install files or gain the elevated privileges needed to complete the installation of a script or code base.
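The steps above combine three checks on every privileged request: an explicit least-privilege grant, an access-zone boundary, and an MFA challenge when a request crosses zones. A minimal sketch of that logic follows; the zone model, user names, and grants are hypothetical illustrations, not Centrify's implementation:

```python
# Illustrative Zero Trust Privilege access check.
# Zones, users, and grants below are hypothetical, for illustration only.
ZONES = {"web-01": "dmz", "db-01": "data", "hr-01": "corp"}
GRANTS = {("alice", "db-01"): {"read"}, ("bob", "web-01"): {"read", "deploy"}}

def authorize(user, user_zone, target, action, passed_mfa):
    """Grant access only with an explicit grant, and MFA when crossing zones."""
    # 1. Least privilege: the user must hold an explicit grant for this action.
    if action not in GRANTS.get((user, target), set()):
        return False
    # 2. Access zones: reaching assets outside the user's zone requires MFA.
    if ZONES.get(target) != user_zone and not passed_mfa:
        return False
    return True

print(authorize("alice", "corp", "db-01", "read", passed_mfa=True))   # granted
print(authorize("alice", "corp", "db-01", "read", passed_mfa=False))  # denied: cross-zone, no MFA
print(authorize("alice", "corp", "db-01", "drop", passed_mfa=True))   # denied: no such grant
```

Because ransomware running under a compromised account cannot answer the MFA challenge or mint new grants, requests that fall outside the explicit grant table simply fail, which is how the spread is contained.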

Conclusion

Ransomware is the latest iteration of a criminal strategy used for centuries for financial gain. Holding someone or something for ransom has now graduated to holding entire cities and businesses hostage until a Bitcoin payment is made. The FBI warns that paying ransomware attackers only fuels more attacks and subsidizes an illegal business model. That’s why taking the preventative steps provided in the Centrify webinar is something every business needs to consider today.

Staying safe from ransomware in the modern threatscape is a challenge, but a Zero Trust Privilege approach can reduce the risk your organization will be the next victim forced to make a gut-wrenching decision of whether or not to pay a ransom.

AI Is Predicting The Future Of Online Fraud Detection

Bottom Line: Combining supervised and unsupervised machine learning as part of a broader Artificial Intelligence (AI) fraud detection strategy enables digital businesses to quickly and accurately detect automated and increasingly complex fraud attempts.

Recent research from the Association of Certified Fraud Examiners (ACFE), KPMG, PwC, and others reflects how organized crime and state-sponsored fraudsters are increasing the sophistication, scale, and speed of their fraud attacks. One of the most common types of emerging attack uses machine learning and other automation techniques to commit fraud that legacy approaches can’t catch. The most common legacy approaches to fighting online fraud rely on rules and predictive models that are no longer effective against today’s more advanced, nuanced fraud attempts. Online fraud detection needs AI to stay at parity with the quickly escalating complexity and sophistication of those attempts.

Why AI is Ideal for Online Fraud Detection

It’s been my experience that digitally-based businesses that have the best track record of thwarting online fraud rely on AI and machine learning to do the following:

  • Actively use supervised machine learning to train models so they can spot fraud attempts more quickly than manual approaches. Digitally-based businesses I’ve talked with say having supervised machine learning categorize and then predict fraudulent attempts is invaluable from a time-saving standpoint alone. Adopting supervised machine learning first is easier for many businesses, as they have analytics teams on staff who are familiar with the foundational concepts and techniques. Digital businesses with high-risk exposure, given their business models, are adopting AI-based online fraud detection platforms to equip their fraud analysts with the insights they need to identify and stop threats early.
  • Combine supervised and unsupervised machine learning into a single fraud prevention payment score to excel at finding anomalies in emerging data. Integrating the results of fraud analysis based on supervised and unsupervised machine learning into one risk score is one way AI enables online fraud prevention to scale today. Leaders in this area of online fraud prevention can deliver payment scores in 250 milliseconds, using AI to interpret the data and provide a response. A more integrated approach to online fraud prevention that combines supervised and unsupervised machine learning can deliver scores that are twice as predictive as previous approaches.
  • Capitalize on large-scale, universal data networks of transactions to fine-tune and scale supervised machine learning algorithms, improving fraud prevention scores in the process. The most advanced digital businesses are looking for ways to fine-tune their machine learning models using large-scale universal data sets. Many businesses initially rely on years of their own transaction data for this purpose. Online fraud prevention platforms also have large-scale universal data networks that often include billions of transactions captured over decades, from thousands of customers globally.
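The idea of blending a supervised and an unsupervised signal into one payment risk score can be sketched in a few lines. The model weights, the anomaly measure, and the 50/50 blend below are illustrative assumptions, not Kount's actual Omniscore methodology:

```python
import math

# Illustrative blend of a supervised and an unsupervised fraud signal.
# All weights and the blending ratio are assumptions, for illustration only.
def supervised_score(txn, weights, bias=-4.0):
    """Toy logistic model trained on labeled fraud: returns P(fraud) in [0, 1]."""
    z = bias + sum(weights[k] * txn[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

def unsupervised_score(value, mean, stdev):
    """Toy anomaly score: distance from the norm in 3-sigma units, capped at 1."""
    return min(abs(value - mean) / (3.0 * stdev), 1.0)

def payment_risk_score(txn, weights, amount_mean, amount_stdev):
    """Blend both signals into a single 0-100 payment risk score."""
    s = supervised_score(txn, weights)
    u = unsupervised_score(txn["amount_usd"], amount_mean, amount_stdev)
    return round(100 * (0.5 * s + 0.5 * u), 1)

weights = {"amount_usd": 0.001, "is_new_device": 2.0}
risky = {"amount_usd": 2500.0, "is_new_device": 1}
print(payment_risk_score(risky, weights, amount_mean=80.0, amount_stdev=120.0))
```

A production system would replace both toy functions with trained models, but the structure is the same: the supervised term captures known fraud patterns, the unsupervised term flags anomalies the training data has never seen, and one combined score is returned in milliseconds.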

The integration of these three factors forms the foundation of online fraud detection and defines its future growth trajectory. One of the most rapid areas of innovation in these three areas is the fine-tuning of fraud prevention scores. Kount’s unique approach to creating and scaling its Omniscore indicates how AI is immediately redefining the future of online fraud detection.

Kount is distinct from other online fraud detection platforms due to the company’s ability to factor in all available historical data in its universal data network, which includes billions of transactions accumulated over 12 years from 6,500 customers across more than 180 countries and territories and multiple payment networks.

Insights into Why AI is the Future of Online Fraud Detection

Recent research studies provide insights into why AI is the future of online fraud detection. According to the Association of Certified Fraud Examiners (ACFE) inaugural Anti-Fraud Technology Benchmarking Report, the amount organizations spend on AI and machine learning to thwart online fraud is expected to triple by 2021. The ACFE study also found that only 13% of organizations currently use AI and machine learning to detect and deter fraud. Another 25% plan to adopt these technologies in the next year or two, an increase of nearly 200%. The ACFE study found that AI and machine learning are the technologies most likely to be adopted in the next two years to fight fraud, followed by predictive analytics and modeling.

PwC’s 2018 Global Economic Crime and Fraud Survey is based on interviews with 7,200 C-level and senior management respondents across 123 different nations and territories, conducted to determine the true state of digital fraud prevention across the world. The study found that 42% of companies said they had increased the funds used to combat fraud or economic crime. In addition, 34% of the C-level and senior management executives said that existing approaches to combatting online fraud were generating too many false positives. The solution is to rely more on machine learning and AI in combination with predictive analytics, as the graphic below illustrates. Kount’s unique approach to combining these technologies to define its Omniscore reflects the future of online fraud detection.

AI is a necessary foundation of online fraud detection, and for platforms built on these technologies to succeed, they must do three things extremely well. First, supervised machine learning algorithms need to be fine-tuned with decades’ worth of transaction data to minimize false positives and provide extremely fast responses to inquiries. Second, unsupervised machine learning is needed to find emerging anomalies that may signal entirely new, more sophisticated forms of online fraud. Finally, for an online fraud platform to scale, it needs a large-scale, universal data network of transactions to fine-tune and scale the supervised machine learning algorithms, improving the accuracy of fraud prevention scores in the process.

AWS Certifications Increase Tech Pay Up To $12K A Year


  • AWS and Google certifications are among the most lucrative in North America, paying average salaries of $129,868 and $147,357 respectively.
  • Cross-certifying on AWS is providing a $12K salary bump to IT professionals who already have Citrix and Red Hat/Linux certifications today.
  • Globally, four of the five top-paying certifications are in cloud computing.

These and many other insights into which certifications provide the highest salaries by region of the world are from the recently published Global Knowledge 2019 IT Skills and Salary Report. The report is downloadable here (27 pp., PDF, free, opt-in). The methodology is based on 12,271 interviews across non-management IT staff (29% of interviews), mid-level professionals including managers and team leads (43%), and senior-level and executive roles (28%) across four global regions. For additional details regarding the study’s methodology, please see page 24 of the report.

Key insights from the report include the following:

  • Cross-certifying on AWS is providing a $12K salary bump to IT professionals who already have Citrix and Red Hat/Linux certifications. Citrix certifications pay an average salary of $109,546, and those earning an AWS certification see a $12,339 salary bump on average. Red Hat/Linux certification-based jobs pay an average of $113,165 and see an average salary bump of $12,553. Cisco-certified IT professionals who gain AWS certification increase their salaries on average from $101,533 to $111,869, a 10.2% increase. The following chart compares the salary bump AWS certifications provide to IT professionals with seven of the more popular certifications (please click on the graphic to expand for easier reading).

  • AWS and Google certifications are among the most lucrative in North America, paying average salaries of $129,868 and $147,357 respectively, while the most popular certification category is cybersecurity, governance, compliance, and policy: 27% of all respondents to Global Knowledge’s survey have at least one certification in this category, and nearly 18% are ITIL certified. In North America, the most popular certification categories beyond cybersecurity are CompTIA, Microsoft, and Cisco. The following table from the report provides an overview of salary by certification category (please click on the graphic to expand for easier reading).

  • AWS Certified Solutions Architect – Associate is the most popular AWS certification today, with 72% of respondents having achieved its requirements. It leads the five most commonly held AWS certifications according to the survey; AWS Certified Developer – Associate (33%), AWS Certified SysOps Administrator – Associate (24%), AWS Certified Solutions Architect – Professional (16%), and AWS Certified Cloud Practitioner round out the top five across the 12,271 global respondents to the Global Knowledge survey.
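The percentage gains implied by the salary figures above are easy to sanity-check; the dollar amounts below are the ones quoted from the Global Knowledge report:

```python
# Salary bump expressed as a percentage of the base salary,
# using the Global Knowledge 2019 figures quoted above.
def bump_pct(base_salary, bump):
    """Percentage increase a bump represents over a base salary."""
    return round(100 * bump / base_salary, 1)

print(bump_pct(109546, 12339))            # Citrix base + AWS bump  -> 11.3
print(bump_pct(113165, 12553))            # Red Hat base + AWS bump -> 11.1
print(bump_pct(101533, 111869 - 101533))  # Cisco before/after      -> 10.2
```

The Cisco result reproduces the 10.2% increase stated in the report, and the Citrix and Red Hat bumps work out to roughly 11% each.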

10 Charts That Will Change Your Perspective Of Amazon’s Patent Growth


  • Amazon has grown its patent portfolio from fewer than 1,000 active patents in 2010 to nearly 10,000 in 2019, a ten-fold increase in less than a decade.
  • Amazon heavily cites Microsoft, IBM, and Alphabet, at 39%, 32%, and 28% of Amazon’s total Patent Asset Index, respectively.
  • Amazon’s patent portfolio is dominated by Cloud Computing, with the majority of the patents contributing to AWS’ current and future services roadmap. AWS achieved 41% year-over-year revenue growth in the latest fiscal quarter, reaching $7.6B in revenue.

Patents are fascinating because they provide a glimpse into the potential plans and roadmaps tech companies are considering. Amazon has one of the most interesting patent portfolios today, encompassing a wide spectrum of technologies, from aircraft technology and drones to cloud computing and machine learning. Interested in learning more about Amazon’s unique patent portfolio, I contacted PatentSight, a LexisNexis company and one of the leading providers of patent analytics, whose PatentSight analytics platform was used to create the ten charts shown below.

  • Amazon’s patents grew at a Compound Annual Growth Rate (CAGR) of above 35% between 2010 and 2019. PatentSight’s analysis shows that Amazon’s patent portfolio has increased tenfold in the last decade and is comprised almost entirely of organically developed patents, with only a small percentage gained from acquisitions. PatentSight also finds that Amazon’s patents have a falling average quality as measured by their Competitive Impact score, shown on the vertical axis of the chart below: as Amazon’s patent portfolio has grown, quality has trended downward. William Mansfield, Head of Consulting and Customer Success at LexisNexis PatentSight, explains why: “To maintain a high quality when growing the portfolio is difficult, as each patent would need to be equally as good as or better than the previous,” he said. Mr. Mansfield’s analysis found that Amazon’s portfolio has an average Competitive Impact of 2 today, double the PatentSight database average of 1.
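The CAGR cited above follows from the standard formula; the start and end counts below are illustrative round numbers chosen to be consistent with the portfolio sizes quoted ("less than 1,000" to "nearly 10,000"), not PatentSight's exact data:

```python
# Compound Annual Growth Rate (CAGR): the constant yearly growth rate that
# turns a starting value into an ending value over n years.
def cagr(start, end, years):
    return (end / start) ** (1.0 / years) - 1.0

# Illustrative round numbers only, not PatentSight's exact counts:
# ~650 active patents in 2010 growing to ~9,800 in 2019, i.e. nine years.
print(round(100 * cagr(650, 9800, 9), 1))  # -> 35.2 (% per year)
```

A portfolio growing at this rate roughly doubles every two and a half years, which is why the quality-versus-growth trade-off Mansfield describes becomes so hard to manage.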

  • Amazon’s patent portfolio is unique in that 100% of it is protected in the U.S. “The protection strategy of Amazon is also uncommon. While it can be the case that US firms tend to be US-centric, Amazon is an extreme case,” said William Mansfield. It’s surprising how many Amazon patents are active only in the USA (86%) and invented in the USA and active only in the USA (81%). William explained that “one factor for this US-centricity could be the great acceptance of software patents in the USA; we do also see high US-only filing for other tech giants, but at a level of around 60% vs. Amazon’s 86%.”

  • PatentSight found that the majority of the Amazon portfolio falls in the 2nd decile of Competitive Impact (top 20% – 10%). Comparable technology-based organizations have a higher density of patents in the top 10% of Competitive Impact, which is another unusual aspect regarding Amazon’s patent growth. “This is unusual compared to other big tech companies which have more in the top 10%, it could be Amazon is holding onto more lower value assets than required,” William Mansfield remarked.

  • Amazon’s patents most often cite Microsoft, IBM, and Alphabet, at 39%, 32%, and 28% of Amazon’s total Patent Asset Index. Interestingly, PatentSight’s analysis finds the reciprocal is not the case: a much smaller percentage of companies cite Amazon in return, which can be attributed to few other firms having the breadth and depth of patent development that Amazon does today. PatentSight found that less than 10% of their respective portfolios even mention Amazon. William Mansfield explains that “one factor here is the larger size of these companies, vs. Amazon. However, even in absolute terms, Microsoft and IBM cite Amazon much less than the other way round. However, citation value is close to equal in absolute terms between Amazon and Alphabet.”

  • Relying on patents to keep AWS’ rapid growth going appears to be Amazon’s highest-priority patent strategy today. As can be seen from the portfolio below, Cloud Computing patents dominate Amazon’s patent portfolio. In the latest fiscal quarter, ending March 31, 2019, AWS delivered $7.7B in revenue and $2.2B in operating income, growing 41% year-over-year. “Amazon’s ongoing developments in alternative delivery methods in Urban Logistics and Drones are noteworthy, with Drones being one area of particular strength in the portfolio as seen from the high Competitive Impact, despite the smaller portfolio size,” notes William Mansfield.

  • Amazon’s prioritization of cloud computing, AI, and machine learning patents is evident when 18 years of patent history is compared. The proliferation of AI and machine learning-based services on the AWS platform is apparent in the trend line starting in 2014. The success of Amazon’s SageMaker machine learning platform is a case in point. Amazon SageMaker enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at scale.

  • Amazon is already one of the top 10 patent holders in Drone technology, just behind Alphabet and Toyota Motors. PatentSight defines Drone technology as encompassing aviation, autonomous robots, and autonomous driving. Amazon’s rapid ascent in this area is attributable to the logistics and supply chain efficiencies possible when Drones and their related technologies are applied to their supply chain’s more complex challenges.

  • PatentSight finds that FinTech is an area of long-standing strength in the Amazon patent portfolio, attributable to its payment systems being the backbone of its e-commerce business. Reflecting how diverse its business model has become, Amazon is now one of the top 15 patent holders in this area, even with cloud computing, AI, and machine learning taking precedence. “FinTech is a highly competitive field with many established players, and while Amazon is not in the top 10, but top 15 players, it’s still an impressive achievement,” said William Mansfield.

  • Amazon’s patent portfolio in speech recognition encompasses Alexa, its related patents, and Amazon Lex, an AWS service used for creating conversational interfaces for applications. Alphabet, Apple, Microsoft, and Samsung are patent leaders, according to PatentSight’s analysis. The fact that Amazon is in the top 10 speaks to the level of activity and patent production going on in the Alexa research and development and product teams.

  • Amazon’s patent strategy is eclectic yet always anchored to cloud computing to make AWS the platform of choice. The following selected patents reflect how broad the Amazon patent portfolio is. What they share in common is a reliance on AWS as the platform to ensure service consistency, reliability, and scale. An example is their patents for video game streaming.
