
How To Improve Your CPQ Pricing Strategies

Manufacturers can capture more than their fair share of channel sales and margins by improving price management for every dealer, distributor, and reseller they sell through. When pricing is consistent channel-wide, it’s possible to expand earnings by 50% on only slight increases in volume. McKinsey’s latest research on the topic, Pricing: Distributors’ Most Powerful Value-Creation Lever, shows how the highest-performing distributors use pricing to create value. For manufacturers competing for sales through distributors they share with competitors, improving channel partners’ margins is the single best strategy for winning more sales and long-term loyalty.

  • A 1% price increase yields a 22% increase in Earnings before Interest & Taxes (EBITDA) margins for distribution-based businesses.
  • It would take a 7.5% reduction in fixed costs to achieve the same 22% increase in EBITDA that a 1% increase in pricing achieves.
  • A distribution-based business would need to increase volume by 5.9% while holding operating expenses flat to achieve the same impact as a 1% price increase.
  • Channel partners are more loyal to margin than manufacturers, which is why price management needs urgent attention on CPQ roadmaps.

CPQ Strategies Need To Deliver More Margin Back To The Channel

The typical manufacturer with over $100M in sales generates 40% or more of its revenue through indirect channels. The channel partners it recruits and sells through are also reselling 12 competing products on average. Which factors most influence a distributor or channel partner’s decision to steer a sale to one manufacturer versus another? The following are steps manufacturers can take now to improve price management and drive more channel sales:

  • Upgrade the pricing module in CPQ to deliver more than configurable price lists, including pricing waterfalls, automated approval levels for pricing requests, and discounts. Distributors drive more deals to manufacturers whose CPQ systems give them greater freedom to tailor pricing to every customer and selling situation. Automating approval levels with supervised machine learning algorithms that act as pricing guardrails on every quote a channel partner creates is proving effective at delivering the 1% price increase that drives margin back to resellers (a minimal sketch of this approach follows this list). The more a manufacturer can make margins flow back to its channel partners, the faster those partners can grow. The following graphic from McKinsey’s latest pricing research illustrates why.

  • Distributors will drive more deals to manufacturers who automate pricing approvals, guiding their sales teams to the largest and most profitable deals first. One of the best ways to compete and win more deals through channel partners is to achieve the ambitious goal of delivering pricing approvals within seconds on a 24/7 basis. Pricing needs to provide guardrails that guide channel sales reps to the largest, most profitable, and most ready-to-buy new and aftermarket sales opportunities. Manufacturers capturing more channel sales are relying on machine learning-based pricing systems that optimize price approvals while recommending only those new and aftermarket deals that will drive a 1% or greater price increase. Machine learning is making solid contributions to automating pricing approvals. It’s proving most effective when balanced with the flexibility to respond to subjective competitive situations where pricing on specific products needs discounts to win deals in aggregate. The following workflows from Deloitte explain how this is being accomplished today:

  • Helping distributors solve sales compensation problems by improving price management drives more deals in the short term and keeps distributors in business long term. Distributors start out building their sales comp plans on volume and growth alone. The problem is that such comp plans reward revenue growth at the expense of profits, which makes it harder every year for distributors to stay in business. Manufacturers delivering new price management and optimization apps in their CPQ platforms need to provide real-time guidance on margin potential by deal, pricing waterfall logic that includes margins, contract pricing overrides for margins, and more if they are going to help their distributors stay in business.
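To make the guardrail idea in the first bullet concrete, here is a minimal, hypothetical sketch in Python: a supervised model trained on historical quote outcomes auto-approves discounts it predicts will protect margin and routes everything else to a pricing desk. The feature names, thresholds, and model choice are illustrative assumptions, not a description of any specific CPQ vendor’s implementation.

```python
# Illustrative sketch: a supervised "pricing guardrail" that auto-approves quotes
# predicted to protect margin and routes the rest to a human approver.
# Feature names, thresholds, and the model choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for historical channel quotes: [discount_pct, deal_size_usd, base_margin_pct]
X = rng.uniform([0, 1_000, 10], [40, 500_000, 45], size=(5_000, 3))
# Label: 1 if the historical quote both won and held margin above target, else 0
y = ((X[:, 0] < 25) & (X[:, 2] - X[:, 0] * 0.5 > 12)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
guardrail = GradientBoostingClassifier().fit(X_train, y_train)

def review_quote(discount_pct, deal_size_usd, base_margin_pct, auto_approve_at=0.8):
    """Return an approval decision for a single quote."""
    p = guardrail.predict_proba([[discount_pct, deal_size_usd, base_margin_pct]])[0, 1]
    return "auto-approve" if p >= auto_approve_at else "route to pricing desk"

print(review_quote(12, 85_000, 30))   # healthy margin -> likely auto-approved
print(review_quote(35, 85_000, 30))   # deep discount  -> likely manual review
```

In practice the label would come from actual won/lost quotes and realized margins, and the approval threshold would be tuned to the manufacturer’s own risk tolerance.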

Conclusion – Pricing Is the Engine Powering CPQ’s Market Growth Today

Manufacturers who excel at growing indirect product and services revenue through channels realize that every one of their channel partners is more loyal to pricing and margins than to any specific vendor they resell. Providing a CPQ application or platform that partners can personalize and use to automate workflows is just the beginning. The bottom line is that manufacturers need to put more intensity into improving pricing today if they’re going to hold onto the distributors they have and attract new ones.

Pricing is the primary catalyst driving the CPQ market’s growth as well. According to Gartner, the CPQ market grew 36% in 2017, reaching $1.084B, with the majority of growth attributable to cloud-based solutions. It’s no wonder CPQ is considered one of the hottest CRM technologies for the foreseeable future, projected to grow at a 25% Compound Annual Growth Rate (CAGR) through 2020. Supervised machine learning algorithms capable of providing real-time guardrails for every potential deal a reseller sales representative has are what’s needed to protect a distributor’s margins. Winning more deals with channel partners starts by respecting how vital margins are to their success and improving price management as part of a broader CPQ strategy that delivers results.

Sources:

Configure, Price, and Quote (CPQ) Capabilities: Why the right CPQ capability is key to transitioning to a flexible consumption model, 8 pp., PDF, no opt-in, Deloitte, 2019.

Pricing: Distributors’ Most Powerful Value-Creation Lever, McKinsey & Company, September 2019.

It’s Time To Solve K-12’s Cybersecurity Crisis


  • There were a record 160 publicly-disclosed security incidents in K-12 during the summer months of 2019, exceeding the total number of incidents reported in all of 2018 by 30%.
  • 47% of K-12 organizations are making cybersecurity their primary investment, yet 74% do not use encryption.
  • 93% of K-12 organizations rely on native client/patch management tools that have a 56% failure rate, with 9% of client/patch management failures never recovered.

These and many other fascinating insights are from Absolute’s new research report, Cybersecurity and Education: The State of the Digital District in 2020, focused on the state of security, staff and student safety, and endpoint device health in K-12 organizations. The study’s findings reflect the crisis the education sector faces as it grapples with high levels of risk exposure – driven in large part by complex IT environments and a digitally savvy student population – that have made schools a prime target for cybercriminals and ransomware attackers. The methodology is based on data from 3.2M devices running Absolute’s endpoint visibility and control platform, active in 1,200 K-12 organizations in North America (U.S. and Canada). Please see the full report for complete details on the methodology.

Here’s the backdrop:

  • K-12 cybersecurity incidents are skyrocketing, with over 700 reported since 2016 with 160 occurring during the summer of 2019 alone. Educational IT leaders face the challenge of securing increasingly complex IT environments while providing access to a digitally savvy student population capable of bypassing security controls. Schools are now the second-largest pool of ransomware victims, just behind local governments and followed by healthcare organizations. As of today, 49 school districts have been hit by ransomware attacks so far this year.

“Today’s educational IT leaders have been tasked with a remarkable feat: adopting and deploying modern learning platforms, while also ensuring student safety and privacy, and demonstrating ROI on security and technology investments,” said Christy Wyatt, CEO of Absolute.

Research from Absolute found:

K-12 IT leaders are now responsible for collectively managing more than 250 unique OS versions, and 93% are managing up to five versions of common applications. The following key insights from the study reflect how severe K-12’s cybersecurity crisis is today:

  • Digital technologies’ rapid proliferation across school districts has turned into a growth catalyst for K-12’s cybersecurity crisis. 94% of school districts have high-speed internet, and 82% provide students with school-funded devices through one-to-one and similar initiatives. Absolute found that funding for educational technology has increased by 62% in the last three years. The Digital Equity Act goes into effect this year, committing additional federal dollars to bring even more technology to the classroom. K-12 IT leaders face the daunting challenge of securing, on average, 11 device types, 258 unique operating system versions, and over 6,400 unique Chrome OS extensions, reflecting the broad scale of today’s K-12 cybersecurity crisis. Google Chromebooks dominate the K-12 device landscape. The following graphic illustrates how rapidly digital technologies are proliferating in K-12 organizations:

  • 42% of K-12 organizations have staff and students who regularly bypass endpoint security controls using web proxies and rogue VPN apps, inadvertently creating gateways for malicious outsiders to breach their schools’ networks. Absolute found an average of 10.6 devices with web proxy/rogue VPN apps per school and 319 unique web proxy/rogue VPN apps in use today, including “Hide My Ass” and “IP Vanish.” Many of the rogue VPN apps originate in China, and all of them are designed to evade web filtering and other content controls. With an average of 10.6 devices per school harboring web proxies and rogue VPN apps, schools are also at risk of non-compliance with the Children’s Internet Protection Act (CIPA).

  • While 68% of education IT leaders say that cybersecurity is their top priority, 53% rely on client/patch management tools that are proving ineffective at securing their proliferating IT infrastructures. K-12 IT leaders rely on client/patch management tools to secure the rapidly proliferating number of devices, operating systems, Chrome extensions, educational apps, and unique application versions. Client/patch management agents fail 56% of the time, however, and 9% never recover. There are, on average, nine encryption agent failures per day, 44% of which never recover. The cybersecurity strategy of relying on native client/patch management isn’t working, leading to funds being wasted on K-12 security controls that don’t scale:

Wyatt continued: “This is not something that can be achieved by simply spending more money… especially when that money comes from public funds. The questions they each need to be asking are if they have the right foundational security measures in place, and whether the controls they have already invested in are working properly. Without key foundational elements of a strong and resilient security approach in place – things like visibility and control – it becomes nearly impossible to protect your students, your data, and your investments.”

  • Providing greater device visibility and endpoint security controls while enabling applications and devices to be more resilient is a solid first step to solving the K-12 cybersecurity crisis. Thwarting the many breach and ransomware attacks K-12 organizations receive every day needs to start by considering every device as part of the network perimeter. Securing K-12 IT networks to the device level delivers asset management and security visibility that native client/patch management tools lack. Having visibility to the device level also gives K-12 IT administrators and educators insights into how they can tailor learning programs for broader adoption. The greater the visibility, the greater the control. K-12 IT administrators can ensure internet safety policies are being adhered to while setting controls to be alerted of suspicious activity or non-compliant devices, including rogue VPNs or stolen devices. Absolute’s Persistence platform provides a persistent connection to each endpoint in a K-12’s one-to-one program, repairing or replacing critical apps that have been disabled or removed.

You can download the full Absolute report here.

What’s New In Gartner’s Hype Cycle For AI, 2019


  • Between 2018 and 2019, organizations that have deployed artificial intelligence (AI) grew from 4% to 14%, according to Gartner’s 2019 CIO Agenda survey.
  • Conversational AI remains at the top of corporate agendas spurred by the worldwide success of Amazon Alexa, Google Assistant, and others.
  • Enterprises are making progress with AI as it grows more widespread, and they’re also making more mistakes that contribute to their accelerating learning curve.

These and many other new insights are from the Gartner Hype Cycle for AI, 2019, published earlier this year and summarized in the recent Gartner blog post, Top Trends on the Gartner Hype Cycle for Artificial Intelligence, 2019. Gartner’s definition of Hype Cycles includes five phases of a technology’s lifecycle and is explained here. Gartner’s latest Hype Cycle for AI reflects the growing popularity of AutoML, intelligent applications, and AI platform as a service or AI cloud services as enterprises ramp up their adoption of AI. The Gartner Hype Cycle for AI, 2019, is shown below:

Details Of What’s New In Gartner’s Hype Cycle For AI, 2019

  • Speech Recognition is less than two years from mainstream adoption and is predicted to deliver the most significant transformational benefits of all technologies on the Hype Cycle. Gartner advises its clients to consider including speech recognition on their short-term AI technology roadmaps. Gartner observes that, unlike other technologies within the natural-language processing area, speech to text (and text to speech) is a stand-alone commodity whose modules can be plugged into a variety of natural-language workflows. Leading vendors in this technology area include Amazon, Baidu, Cedat 85, Google, IBM, Intelligent Voice, Microsoft, NICE, Nuance, and Speechmatics.
  • Eight new AI-based technologies are included in this year’s Hype Cycle, reflecting Gartner enterprise clients’ plans to scale AI across DevOps and IT while supporting new business models. The latest technologies to be included in the Hype Cycle for AI reflect how enterprises are trying to demystify AI to improve adoption while at the same time, fuel new business models. The new technologies include the following:
  1. AI Cloud Services – AI cloud services are hosted services that allow development teams to incorporate the advantages inherent in AI and machine learning.
  2. AutoML – Automated machine learning (AutoML) is the capability of automating the process of building, deploying, and managing machine learning models.
  3. Augmented Intelligence – Augmented intelligence is a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making, and new experiences.
  4. Explainable AI – AI researchers define “explainable AI” as an ensemble of methods that make black-box AI algorithms’ outputs sufficiently understandable.
  5. Edge AI – Edge AI refers to the use of AI techniques embedded in IoT endpoints, gateways, and edge devices, in applications ranging from autonomous vehicles to streaming analytics.
  6. Reinforcement Learning – Reinforcement learning has the primary potential for gaming and automation industries and has the potential to lead to significant breakthroughs in robotics, vehicle routing, logistics, and other industrial control scenarios.
  7. Quantum Computing – Quantum computing has the potential to make significant contributions to the areas of systems optimization, machine learning, cryptography, drug discovery, and organic chemistry. Although outside the planning horizon of most enterprises, quantum computing could have strategic impacts in key businesses or operations.
  8. AI Marketplaces – Gartner defines an AI Marketplace as an easily accessible place supported by a technical infrastructure that facilitates the publication, consumption, and billing of reusable algorithms. Some marketplaces are used within an organization to support the internal sharing of prebuilt algorithms among data scientists.
  • Gartner considers the following AI technologies to be on the rise and part of the Innovation Trigger phase of the AI Hype Cycle. AI Marketplaces, Reinforcement Learning, Decision Intelligence, AI Cloud Services, Data Labeling and Annotation Services, and Knowledge Graphs are now showing signs of potential technology breakthroughs, as evidenced by early proof-of-concept stories. Technologies in the Innovation Trigger phase of the Hype Cycle often lack usable, scalable products, with commercial viability not yet proven.
  • Smart Robots and AutoML are at the peak of the Hype Cycle in 2019. In contrast to the industrial robotics systems rapidly adopted by manufacturers facing labor shortages, Smart Robots are defined by Gartner as having electromechanical form factors that work autonomously in the physical world, learning in short-term intervals from human-supervised training and demonstrations or from their supervised experiences, including taking direction from human voices on a shop floor. The Whiz robot from SoftBank Robotics is an example of a Smart Robot that will be sold under a robot-as-a-service (RaaS) model and will initially be available only in Japan. AutoML is one of the most hyped technologies in AI this year (a toy sketch of the AutoML idea follows this list). Gartner defines automated machine learning (AutoML) as the capability of automating the process of building, deploying, or managing machine learning models. Leading vendors providing AutoML platforms and applications include Amazon SageMaker, Big Squid, dotData, DataRobot, Google Cloud Platform, H2O.ai, KNIME, RapidMiner, and SkyTree.
  • Nine technologies were removed or reassigned from this year’s Hype Cycle for AI compared to 2018. Gartner has removed nine technologies, often reassigning them into broader categories. Augmented reality and virtual reality are now part of augmented intelligence, a more general category, and remain on many other Hype Cycles. Commercial UAVs (drones) are now part of edge AI, a more general category. Ensemble learning had already reached the Plateau in 2018 and has now graduated from the Hype Cycle. Human-in-the-loop crowdsourcing has been replaced by data labeling and annotation services, a broader category. Natural language generation is now included as part of NLP. Knowledge management tools have been replaced by insight engines, which are more relevant to AI. Predictive analytics and prescriptive analytics are now part of decision intelligence, a more general category.
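As a rough illustration of the AutoML idea described above, the sketch below automates model and hyperparameter selection over a small search space with scikit-learn. It is a toy example under simplifying assumptions, not Gartner’s definition or any listed vendor’s product.

```python
# Rough illustration of the AutoML idea: automate model selection and tuning
# over a small search space. This is a toy sketch, not any vendor's AutoML product.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate models and their hyperparameter grids
search_space = [
    (LogisticRegression(max_iter=5_000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300], "max_depth": [None, 8]}),
]

best_score, best_model = 0.0, None
for estimator, params in search_space:
    search = GridSearchCV(estimator, params, cv=5).fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(type(best_model).__name__, round(best_model.score(X_test, y_test), 3))
```

Commercial AutoML products automate far more of the pipeline (feature engineering, deployment, monitoring), but the core loop of searching over models and settings is the same idea.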

Sources:

Hype Cycle for Artificial Intelligence, 2019, Gartner, published 25 July 2019 (client access required).

Top Trends on the Gartner Hype Cycle for Artificial Intelligence, 2019, Gartner, published September 12, 2019.

10 Ways AI And Machine Learning Are Improving Endpoint Security

  • Gartner predicts $137.4B will be spent on Information Security and Risk Management in 2019, increasing to $175.5B in 2023, reaching a CAGR of 9.1%. Cloud Security, Data Security, and Infrastructure Protection are the fastest-growing areas of security spending through 2023.
  • 69% of enterprise executives believe artificial intelligence (AI) will be necessary to respond to cyberattacks, with the majority of telecom companies (80%) saying they are counting on AI to help identify threats and thwart attacks, according to Capgemini.
  • Spending on AI-based cybersecurity systems and services reached $7.1B in 2018 and is predicted to reach $30.9B in 2025, attaining a CAGR of 23.4% over the forecast period, according to Zion Market Research.

Traditional approaches to securing endpoints based on the hardware characteristics of a given device aren’t stopping breach attempts today. Bad actors are using AI and machine learning to launch sophisticated attacks that shorten the time it takes to compromise an endpoint and successfully breach systems. According to Ponemon, attackers need just 7 minutes after compromising an endpoint to gain access to internal systems and be ready to exfiltrate data. The era of trusted and untrusted domains at the operating system level, and of “trust, but verify” approaches, is over. Security software and services spending is soaring as a result, as the market forecasts above show.

AI & Machine Learning Are Redefining Endpoint Security

AI and machine learning are proving to be effective technologies for battling increasingly automated, well-orchestrated cyberattacks and breach attempts. Attackers are combining AI, machine learning, bots, and new social engineering techniques to thwart endpoint security controls and gain access to enterprise systems with an intensity never seen before. It’s becoming so prevalent that Gartner predicts that more than 85% of successful attacks against modern enterprise user endpoints will exploit configuration and user errors by 2025. Cloud platforms are enabling AI and machine learning-based endpoint security control applications to be more adaptive to the proliferating types of endpoints and corresponding threats. The following are the top ten ways AI and machine learning are improving endpoint security:

  • Using machine learning to derive risk scores based on previous behavioral patterns, geolocation, time of login, and many other variables is proving to be effective at securing and controlling access to endpoints. Combining supervised and unsupervised machine learning to fine-tune risk scores in milliseconds is reducing fraud, thwarting breach attempts that use privileged access credentials, and securing every identity on an organization’s network (a minimal sketch of this approach follows this list). Supervised machine learning models rely on historical data to find patterns not discernible with rules or predictive analytics. Unsupervised machine learning excels at finding anomalies, interrelationships, and valid links between emerging factors and variables. Combining both approaches is proving very effective at spotting anomalous behavior and reducing or restricting access.
  • Mobile devices represent a unique challenge to achieving endpoint security control, one that machine learning combined with Zero Trust is proving to be integral to solving. Cybercriminals would rather steal a mobile device, its passwords, and its privileged access credentials than hack into an organization. That’s because passwords are the quickest onramp they have to the valuable data they want to exfiltrate and sell. Abandoning passwords for new techniques, including MobileIron’s zero sign-on approach, shows potential for thwarting cybercriminals from gaining access while hardening endpoint security control. Securing mobile devices using a zero-trust platform built on a foundation of unified endpoint management (UEM) capabilities enables enterprises to scale zero sign-on for managed and unmanaged services for the first time. Below is a graphic illustrating how enterprises are adopting machine learning to improve mobile endpoint security control:
  • Capitalizing on the core strengths of machine learning to improve IT asset management is making direct contributions to greater security. IT management and security initiatives continue to become more integrated across organizations, creating new challenges to managing endpoint security across each device. Absolute Software is taking an innovative approach to solving the challenge of improving IT asset management so that endpoint protection is strengthened at the same time. Recently I had a chance to speak with Nicko van Someren, Ph.D. and Chief Technology Officer at Absolute Software, who shared with me how machine learning algorithms are improving security by providing greater insights into asset management. “Keeping machines up to date is an IT management job, but it’s a security outcome. Knowing what devices should be on my network is an IT management problem, but it has a security outcome. And knowing what’s going on and what processes are running and what’s consuming network bandwidth is an IT management problem, but it’s a security outcome. I don’t see these as distinct activities so much as seeing them as multiple facets of the same problem space.” Nicko added that Absolute’s endpoint security controls begin at the BIOS level of over 500M devices that have its endpoint code embedded in them. The Absolute Platform is comprised of three products: Persistence, Intelligence, and Resilience, each building on the capabilities of the other. Absolute Intelligence standardizes the data around asset analytics and security advocacy analytics to allow security managers to ask any question they want (“What’s slowing down my device? What’s working and what isn’t? What has been compromised? What’s consuming too much memory? How does this deviate from normal performance?”). An example of Absolute’s Intelligence providing insights into asset management and security is shown below:
  • Machine learning has progressed to become the primary detection method for identifying and stopping malware attacks. Machine learning algorithms initially contributed to improving endpoint security by supporting the back-end of malware protection workflows. Today more vendors are designing endpoint security systems with machine learning as the primary detection method. Machine learning trained algorithms can detect file-based malware and learn which files are harmful or not based on the file’s metadata and content. Symantec’s Content & Malware Analysis illustrates how machine learning is being used to detect and block malware. Their approach combines advanced machine learning and static code file analysis to block, detect, and analyze threats and stop breach attempts before they can spread.
  • Supervised machine learning algorithms are being used to determine when given applications are unsafe to use, assigning them to containers so they’re isolated from production systems. Taking into account an application’s threat score or reputation, machine learning algorithms decide whether dynamic application containment needs to run for a given application. Machine learning-based dynamic application containment algorithms and rules block or log unsafe actions of an application based on containment and security rules. Machine learning algorithms are also being used to build predictive analytics that gauge the extent of a given application’s threat.
  • Integrating AI, machine learning, and SIEM (Security Information and Event Management) in a single unified platform is enabling organizations to predict, detect, and respond to anomalous behaviors and events. AI and machine learning-based algorithms and predictive analytics are becoming a core part of SIEM platforms today as they provide automated, continuous analysis and correlation of all activity observed within a given IT environment. Capturing, aggregating, and analyzing endpoint data in real-time using AI techniques and machine learning algorithms is providing entirely new insights into asset management and endpoint security. One of the most interesting companies to watch in this area is LogRhythm. They’ve developed an innovative approach to integrating AI, machine learning, and SIEM in their LogRhythm NextGen SIEM Platform, which delivers automated, continuous analysis and correlation of all activity observed within an IT environment. The following is an example of how LogRhythm combines AI, machine learning, and SIEM to bring new insights into securing endpoints across a network.
  • Machine learning is automating the more manual, routine incident analysis and escalation tasks that are overwhelming security analysts today. Capitalizing on supervised machine learning’s innate ability to fine-tune algorithms in milliseconds based on the analysis of incident data, endpoint security providers are prioritizing this area in product development. Demand from potential customers remains strong, as nearly everyone is facing a cybersecurity skills shortage while facing an onslaught of breach attempts. “The cybersecurity skills shortage has been growing for some time, and so have the number and complexity of attacks; using machine learning to augment the few available skilled people can help ease this. What’s exciting about the state of the industry right now is that recent advances in machine learning methods are poised to make their way into deployable products,” Absolute’s CTO Nicko van Someren added.
  • Performing real-time scans of all processes with an unknown or suspicious reputation is another way machine learning is improving endpoint security. Commonly referred to as Hunt and Respond, supervised and unsupervised machine learning algorithms are being used today to seek out and resolve potential threats in milliseconds instead of days. Supervised machine learning algorithms are used to discover patterns in known or stable processes, where anomalous behavior or activity will create an alert and pause the process in real-time. Unsupervised machine learning algorithms are used for analyzing large-scale, unstructured data sets to categorize suspicious events, visualize threat trends across the enterprise, and take immediate action at a single endpoint or across the entire organization.
  • Machine learning is accelerating the consolidation of endpoint security technologies, a market dynamic that is motivating organizations to trim back from the ten agents they have on average per endpoint today. Absolute Software’s 2019 Endpoint Security Trends Report found that a typical device has ten or more endpoint security agents installed, each often conflicting with the others. The study also found that enterprises are using a diverse array of endpoint agents, including encryption, AV/AM, and Endpoint Detection and Response (EDR). The wide array of endpoint solutions makes it nearly impossible to standardize a specific test to ensure security and safety without sacrificing speed. By helping to accelerate the consolidation of endpoint security agents, machine learning is helping organizations see that the more complex and layered the endpoint protection, the greater the risk of a breach.
  • Keeping every endpoint in compliance with regulatory and internal standards is another area where machine learning is contributing to improving endpoint security. In regulated industries, including financial services, insurance, and healthcare, machine learning is being deployed to discover, classify, and protect sensitive data. This is especially the case with HIPAA (Health Insurance Portability and Accountability Act) compliance in healthcare. Amazon Macie is representative of the latest generation of machine learning-based cloud security services. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides organizations with dashboards, alerts, and contextual insights that give visibility into how data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects the risk of unauthorized access or inadvertent data leaks. An example of one of Amazon Macie’s dashboards is shown below:
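As a companion to the first bullet above, here is a minimal, hypothetical sketch of blending a supervised model trained on labeled historical logins with an unsupervised anomaly detector to produce a risk score that gates endpoint access. The feature set, weights, and thresholds are assumptions for illustration only, not any vendor’s scoring model.

```python
# Minimal sketch of the risk-scoring idea: blend a supervised model trained on
# labeled historical logins with an unsupervised anomaly detector, then gate
# access on the combined score. Features, weights, and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Stand-in features per login: [hour_of_day, km_from_usual_location, failed_attempts]
X_hist = rng.uniform([0, 0, 0], [23, 50, 2], size=(2_000, 3))
y_hist = (X_hist[:, 1] > 40).astype(int)  # label: 1 = previously confirmed risky login

supervised = LogisticRegression().fit(X_hist, y_hist)
anomaly = IsolationForest(random_state=0).fit(X_hist)

def risk_score(login, w_supervised=0.6, w_anomaly=0.4):
    """Blend known-pattern risk with novelty risk into a 0..1 score."""
    p_known = supervised.predict_proba([login])[0, 1]
    # decision_function: positive = looks normal, negative = anomalous;
    # squash to 0..1 so that higher means riskier
    novelty = 1.0 / (1.0 + np.exp(10 * anomaly.decision_function([login])[0]))
    return w_supervised * p_known + w_anomaly * novelty

login = [3, 1200, 5]  # 3 a.m., 1,200 km from the usual location, 5 failed attempts
score = risk_score(login)
print("deny" if score > 0.7 else "step-up MFA" if score > 0.4 else "allow", round(score, 2))
```

A production system would score logins against per-user baselines and retrain continuously, but the two-part structure (known fraud patterns plus novelty detection) is the point being illustrated.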

Three Reasons Why Killing Passwords Improves Your Cloud Security

Jack Dorsey’s Twitter account getting hacked by having his telephone number transferred to another account without his knowledge is a wake-up call to everyone about how vulnerable mobile devices are. The hackers relied on SIM swapping and on convincing Dorsey’s telecom provider to bypass requiring a passcode to modify his account. With the telephone number transferred, the hackers accessed the Twitter founder’s account. If the telecom provider had adopted zero trust at the customer’s mobile device level, the hack would never have happened.

Cloud Security’s Weakest Link Is Mobile Device Passwords

The Twitter CEO’s account getting hacked is the latest in a series of incidents that reflect how easy it is for hackers to gain access to cloud-based enterprise networks using mobile devices. Verizon’s Mobile Security Index 2019 revealed that the majority of enterprises, 67%, are less confident in the security of their mobile assets than in that of any other device. Mobile devices are one of the most porous threat surfaces a business has. They’re also the fastest-growing threat surface, as every employee now relies on their smartphone as their ID. IDG’s recent survey completed in collaboration with MobileIron, titled Say Goodbye to Passwords, found that 89% of security leaders believe mobile devices will soon serve as your digital ID to access enterprise services and data.

Because they’re porous, proliferating and turning into primary forms of digital IDs, mobile devices and their passwords are a favorite onramp for hackers wanting access to companies’ systems and data in the cloud. It’s time to kill passwords and shut down the many breach attempts aimed at cloud platforms and the valuable data they contain.

Three Reasons Why Killing Passwords Improves Your Cloud Security

Killing passwords improves cloud security by:

  1. Eliminating privileged access credential abuse. Privileged access credentials are best sellers on the Dark Web, where hackers bid for credentials to the world’s leading banking, credit card, and financial management systems. Forrester estimates that 80% of data breaches involve compromised privileged credentials, and a recent survey by Centrify found that 74% of all breaches involved privileged access abuse. Killing passwords shuts down the most common technique hackers use to access cloud systems.
  2. Eliminating the threat of unauthorized mobile devices accessing business cloud services and exfiltrating data. Acquiring privileged access credentials and launching breach attempts from mobile devices is the most common hacker strategy today. By killing passwords and replacing them with a zero-trust framework, breach attempts launched from any mobile device using pirated privileged access credentials can be thwarted. Leaders in the area of mobile-centric zero trust security include MobileIron, whose innovative approach to zero sign-on solves the problem of passwords at scale. When every mobile device is secured through a zero-trust platform built on a foundation of unified endpoint management (UEM) capabilities, zero sign-on for managed and unmanaged services becomes achievable for the first time.
  3. Giving organizations the freedom to take a least-privilege approach to granting access to their most valuable cloud applications and platforms. Identities are the new security perimeter, and mobile devices are their fastest-growing threat surface. Long-standing traditional approaches to network security, including “trust but verify,” have proven ineffective in stopping breaches. They’ve also shown a lack of scale when it comes to protecting a perimeter-less enterprise. What’s needed is a zero-trust network that validates each mobile device, establishes user context, checks app authorization, verifies the network, and detects and remediates threats before granting secure access to any device or user, as sketched below. If Jack Dorsey’s telecom provider had this in place, his and thousands of other people’s telephone numbers would be safe today.
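The third point describes a decision flow more than a product, so here is a hedged sketch in Python of what such a zero-trust access check could look like: device posture, user risk, app authorization, and network trust are all evaluated before access is granted. The signal names and policy rules are illustrative assumptions, not MobileIron’s or any vendor’s API.

```python
# Hedged sketch of a zero-trust access decision: every request is evaluated on
# device, user, app, and network signals before access is granted.
# Signal names and policy rules are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    device_managed: bool      # enrolled in UEM and compliant
    os_patched: bool
    user_risk_score: float    # 0 (low) .. 1 (high), e.g. from behavioral analytics
    app_authorized: bool
    network_trusted: bool     # not on a known-bad or unverified network

def grant_access(req: AccessRequest) -> str:
    if not (req.device_managed and req.os_patched):
        return "deny: remediate device posture first"
    if not req.app_authorized:
        return "deny: app not authorized for this resource"
    if req.user_risk_score > 0.7:
        return "deny: elevated user risk, alert security team"
    if not req.network_trusted or req.user_risk_score > 0.4:
        return "allow with step-up authentication"
    return "allow"

print(grant_access(AccessRequest(True, True, 0.2, True, True)))   # allow
print(grant_access(AccessRequest(True, True, 0.5, True, False)))  # step-up auth
print(grant_access(AccessRequest(False, True, 0.1, True, True)))  # remediate first
```

The key design choice is that no signal alone grants access; a request must pass every check, and degraded signals trigger step-up authentication rather than a password prompt.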

Conclusion

The sooner organizations move away from being so dependent on passwords, the better. The three reasons why killing passwords improves cloud security are just the beginning. Imagine how much more effective distributed DevOps teams will be when security isn’t a headache for them anymore, and they can get to the cloud-based resources they need to get apps built. And with more organizations adopting a mobile-first development strategy, it makes sense to have a mobile-centric zero-trust network ingrained in key steps of the DevOps process. That’s the future of cloud security, starting with the DevOps teams creating the next generation of apps today.

State Of AI And Machine Learning In 2019

  • Marketing and Sales prioritize AI and machine learning higher than any other department in enterprises today.
  • In-memory analytics and in-database analytics are the most important to Finance, Marketing, and Sales when it comes to scaling their AI and machine learning modeling and development efforts.
  • R&D’s adoption of AI and machine learning is the fastest of all enterprise departments in 2019.

These and many other fascinating insights are from Dresner Advisory Services’ 6th annual 2019 Data Science and Machine Learning Market Study (client access required), published last month. The study found that advanced initiatives related to data science and machine learning, including data mining, advanced algorithms, and predictive analytics, ranked as the 8th priority among the 37 technologies and initiatives surveyed in the study. Please see page 12 of the survey for an overview of the methodology.

“The Data Science and Machine Learning Market Study is a progression of our analysis of this market, which began in 2014 as an examination of advanced and predictive analytics,” said Howard Dresner, founder and chief research officer at Dresner Advisory Services. “Since that time, we have expanded our coverage to reflect changes in sentiment and adoption, and have added new criteria, including a section covering neural networks.”

Key insights from the study include the following:

  • Data mining, advanced algorithms, and predictive analytics are among the highest-priority projects for enterprises adopting AI and machine learning in 2019. Reporting, dashboards, data integration, and advanced visualization are the leading technologies and initiatives strategic to Business Intelligence (BI) today. Cognitive BI (artificial-intelligence-based BI) ranks comparatively lower at 27th among priorities. The following graphic prioritizes the 27 technologies and initiatives strategic to business intelligence:

  • 40% of Marketing and Sales teams say data science encompassing AI and machine learning is critical to their success as a department. Marketing and Sales lead all departments in how significant they see AI and machine learning to pursue and accomplish their growth goals. Business Intelligence Competency Centers (BICC), R&D, and executive management audiences are the next most interested, and all top four roles cited carry comparable high combined “critical” and “very important” scores above 60%. The following graphic compares the importance levels by department for data science, including AI and machine learning:

  • R&D, Marketing, and Sales’ high level of shared interest across multiple feature areas reflects combined efforts to define new revenue growth models using AI and machine learning. Marketing, Sales, R&D, and Business Intelligence Competency Center (BICC) respondents report the most significant interest in having a range of regression models to work with in AI and machine learning applications. Marketing and Sales are also most interested in the next three top features, including hierarchical clustering, textbook statistical functions, and having a recommendation engine included in the applications and platforms they purchase. Dresner’s research team believes that the high shared interest in multiple feature areas by R&D, Marketing, and Sales is a leading indicator that enterprises are preparing to pilot AI and machine learning-based strategies to improve customer experiences and drive revenue. The following graphic compares interest and probable adoption by functional area of the enterprises interviewed:

  • 70% of R&D departments and teams are most likely to adopt data science, AI, and machine learning, leading all functions in an enterprise. Dresner’s research team sees the high level of interest by R&D teams as a leading indicator of broader enterprise adoption in the future. The study found 33% of all enterprises interviewed have adopted AI and machine learning, with the majority of enterprises having up to 25 models. Marketing & Sales lead all departments in their current evaluation of data science and machine learning software.

  • Financial Services & Insurance, Healthcare, and Retail/Wholesale say data science, AI, and machine learning are critical to their succeeding in their respective industries. 27% of Financial Services & Insurance, 25% of Healthcare and 24% of Retail/Wholesale enterprises say data science, AI, and machine learning are critical to their success. Less than 10% of Educational institutions consider AI and machine learning vital to their success. The following graphic compares the importance of data science, AI, and machine learning by industry:

  • The Telecommunications industry leads all others in interest and adoption of recommendation engines and model management governance. The Telecommunications, Financial Services, and Technology industries have the highest level of interest in adopting a range of regression models and hierarchical clustering across all industry respondent groups interviewed. Healthcare respondents have much lower interest in these latter features but high interest in Bayesian methods and text analytics functions. Retail/Wholesale respondents are often least interested in analytical features. The following graphic compares industries by their level of interest and potential adoption of analytical features in data science, AI, and machine learning applications and platforms:

  • A broad range of regression models, hierarchical clustering, and commonly used textbook statistical functions are the top features enterprises need in data science and machine learning platforms (a minimal sketch of these three techniques follows this list). Dresner’s research team found these three features are considered the most important or “must-have” when enterprises evaluate data science, AI, and machine learning applications and platforms. All enterprises surveyed also expect any data science application or platform they evaluate to include a recommendation engine and model management and governance. The following graphic prioritizes the most and least essential features enterprises expect to see in data science, AI, and machine learning software and platforms:

  • The top three usability features enterprises are prioritizing today include support for easy iteration of models, access to advanced analytics, and an intuitive, simple process for continuous modification of models. Support and guidance in preparing analytical data models and fast cycle time for analysis with data preparation are among the highest-priority usability features enterprises expect to see in AI and machine learning applications and platforms. It’s interesting to see the usability attribute of not requiring a specialist to create, test, and run analytical models toward the lower end of the usability rankings. Many AI and machine learning software vendors rely on not needing a specialist to use their applications as a differentiator, while the majority of enterprises place a higher value on support for easy iteration of models, as the graphic below shows:

  • 2019 is a record year for enterprises’ interest in the data science, AI, and machine learning features they perceive as most needed to achieve their business strategies and goals. Enterprises most expect AI and machine learning applications and platforms to support a range of regression models, followed by hierarchical clustering and textbook statistical functions for descriptive statistics. Recommendation engines grew in popularity, tying as the second most important feature to respondents in 2019. Geospatial analysis and Bayesian methods were flat or slightly less important compared to 2018. The following graphic compares six years of interest in data science, AI, and machine learning techniques:
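For readers who want to see what the three most-requested features above look like in practice, here is a minimal sketch on placeholder data: a regression model, hierarchical clustering, and textbook descriptive statistics, using scikit-learn and SciPy. The data and parameters are arbitrary illustrations, not anything from the Dresner study.

```python
# Minimal sketch of the three most-requested features on placeholder data:
# a regression model, hierarchical clustering, and textbook descriptive statistics.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                  # placeholder feature matrix
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# 1. Regression model
reg = LinearRegression().fit(X, y)
print("R^2:", round(reg.score(X, y), 3))

# 2. Hierarchical clustering (Ward linkage, cut into 4 clusters)
labels = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])

# 3. Textbook descriptive statistics
print("mean:", y.mean().round(3), "std:", y.std().round(3),
      "skew:", round(stats.skew(y), 3))
```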

5 Key Insights From Absolute’s 2019 Endpoint Security Trends Report

  • Endpoint security tools are 24% of all IT security spending, and by 2020 global IT security spending will reach $128B according to Morgan Stanley Research.
  • 70% of all breaches still originate at endpoints, despite the increased IT spending on this threat surface, according to IDC.

To better understand the challenges organizations have securing the proliferating number and type of endpoints, Absolute launched and published their 2019 Endpoint Security Trends Report. You can get a copy of the report here. Their findings and conclusions are noteworthy for every organization that is planning and implementing a cybersecurity strategy. Data gathered from over 1B change events on over 6M devices forms the basis of the multi-phased methodology. The devices represent data from 12,000 anonymized organizations across North America and Europe. Each device had Absolute’s Endpoint Resilience platform activated. The second phase of the study is based on exploratory interviews with senior executives from Fortune 500 organizations. For additional details on the methodology, please see page 12 of the study.

Key insights from the report include the following:

  1. Increasing security spending on protecting endpoints doesn’t increase an organization’s safety and, in certain cases, reduces it. Organizations are spending more on cybersecurity than ever before, yet they aren’t achieving greater levels of safety and security. Gartner forecasts global information security and risk management spending will reach $174.5B in 2022, attaining a five-year Compound Annual Growth Rate (CAGR) of 9.2%. Improving endpoint controls is one of the highest-priority investments driving increased spending. Over 70% of all breaches still originate at endpoints, despite the millions of dollars organizations spend every year. It’s possible to overspend on endpoint security and reduce its effectiveness, which is a key finding of the study. IBM Security’s most recent Cost of a Data Breach Report 2019 found that the average cost of a data breach in the U.S. grew from $3.54M in 2006 to $8.19M in 2019, a 130% increase over that period.
  2. The more complex and layered the endpoint protection, the greater the risk of a breach. One of the more fascinating findings from the study is that the greater the number of agents a given endpoint has, the higher the probability it will be breached. Absolute found that a typical device has ten or more endpoint security agents installed, each often conflicting with the others. MITRE’s cybersecurity research practice found there are, on average, ten security agents on each device, and over 5,000 common vulnerabilities and exposures (CVEs) were found on the top 20 client applications in 2018 alone. Enterprises are using a diverse array of endpoint agents, including encryption, AV/AM, and Endpoint Detection and Response (EDR). The wide array of endpoint solutions makes it nearly impossible to standardize a specific test to ensure security and safety without sacrificing speed. Absolute found organizations are validating their endpoint configurations using live deployments that often break and take valuable time to troubleshoot. The following graphic from the study illustrates how endpoint security is driving risk:

  3. Endpoint security controls and their associated agents degrade and lose effectiveness over time. Over 42% of endpoints experience encryption failures, leaving entire networks at risk of a breach. Encryption agents are most commonly disabled by users, malfunction or throw error conditions, or were never installed correctly in the first place. Absolute found that endpoints often failed due to the fragile nature of their encryption agents’ configurations. 2% of encryption agents fail every week, and over half of all encryption failures occur within two weeks, fueling a constant 8% rate of decay every 30 days. 100% of devices experienced encryption failures within one year. Multiple endpoint security solutions conflict with each other and create more opportunities for breaches than they avert:

  4. One in five endpoint agents will fail every month, jeopardizing the security and safety of IT infrastructure while prolonging security exposures. Absolute found that 19% of endpoints on a typical IT network require at least one client or patch management repair monthly. The patch and client management agents often require repairs as well: 75% of IT teams reported at least two repair events, and 50% reported three or more. Additionally, 5% of endpoints could be considered inoperable, with 80 or more repair events in the same one-month period. Absolute also looked at the impact of families of applications on endpoint vulnerability and discovered another reason why endpoint security is so difficult to attain with multiple agents. The 20 most common client applications had over 5,000 vulnerabilities published against them in 2018. If every device had only the top ten of those applications, that could still mean as many as 55 vulnerabilities per device just from those top ten apps, including browsers, OSs, and publishing tools. The following graphic summarizes the rates of failure for Client/Patch Management Agent Health:

  5. Activating security at the device level creates a persistent connection to every endpoint in a fleet, enabling greater resilience organization-wide. By having a persistent, unbreakable connection to data and devices, organizations can achieve greater visibility and control over every endpoint. Organizations choosing this approach to endpoint security are unlocking the value of their existing hardware and network investments. Most important, they attain resilience across their networks. When an enterprise network has persistence designed in at the device level, there’s a constant, unbreakable connection to data and devices that identifies and thwarts breach attempts in real-time.

Bottom Line: Identifying and thwarting breaches needs to start at the device level, relying on secured, persistent connections that enable organizations to better detect vulnerabilities, defend endpoints, and achieve greater resilience overall.

How AI Is Protecting Against Payments Fraud

  • 80% of fraud specialists using AI-based platforms believe the technology helps reduce payments fraud.
  • 63.6% of financial institutions that use AI believe it is capable of preventing fraud before it happens, making it the most commonly cited tool for this purpose.
  • Fraud specialists unanimously agree that AI-based fraud prevention is very effective at reducing chargebacks.
  • The majority of fraud specialists (80%) have seen AI-based platforms reduce false positives, reduce payments fraud, and prevent fraud attempts.

AI is proving to be very effective in battling fraud, based on results achieved by financial institutions as reported by senior executives in a recent survey, the AI Innovation Playbook, published by PYMNTS in collaboration with Brighterion. The study is based on interviews with 200 financial executives from commercial banks, community banks, and credit unions across the United States. For additional details on the methodology, please see page 25 of the study. One of the more noteworthy findings is that financial institutions with over $100B in assets are the most likely to have adopted AI; the study found 72.7% of firms in this asset category are currently using AI for payment fraud detection.

Taken together, the findings from the survey reflect how AI thwarts payments fraud and deserves to be a high priority in any digital business today. Companies, including Kount and others, are making strides in providing AI-based platforms, further reducing the risk of the most advanced, complex forms of payments fraud.

Why AI Is Perfect For Fighting Payments Fraud

Of the advanced technologies available for reducing false positives, reducing and preventing fraud attempts, and reducing manual reviews of potential payment fraud events, AI is ideally suited to provide the scale and speed needed to take on these challenges. More specifically, AI’s ability to interpret trend-based insights from supervised machine learning, coupled with entirely new knowledge gained from unsupervised machine learning algorithms, is reducing the incidence of payments fraud. By combining both machine learning approaches, AI can discern whether a given transaction or series of financial activities is fraudulent, alerting fraud analysts immediately and taking action through predefined workflows. The following are the main reasons why AI is perfect for fighting payments fraud:

  • Payments fraud-based attacks are growing in complexity and often have a completely different digital footprint or pattern, sequence, and structure, which make them undetectable using rules-based logic and predictive models alone. For years e-commerce sites, financial institutions, retailers, and every other type of online business relied on rules-based payment fraud prevention systems. In the earlier years of e-commerce, rules and simple predictive models could identify most types of fraud. Not so today, as payment fraud schemes have become more nuanced and sophisticated, which is why AI is needed to confront these challenges.
  • AI brings scale and speed to the fight against payments fraud, providing digital businesses with an immediate advantage in battling the many risks and forms of fraud. What’s fascinating about the AI companies offering payments fraud solutions is how they’re trying to out-innovate each other when it comes to real-time analysis of transaction data. Real-time transactions require real-time security. Fraud solutions providers are doubling down on this area of R&D today, delivering impressive results. The fastest I’ve seen is a 250-millisecond response rate for calculating risk scores using AI on the Kount platform, basing queries on more than a decade’s worth of data in its universal data network. By combining supervised and unsupervised machine learning algorithms, Kount is delivering fraud scores that are twice as predictive as previous methods and faster than competitors’ (a rough sketch of this kind of real-time scoring follows this list).
  • AI’s many predictive analytics and machine learning techniques are ideal for finding anomalies in large-scale data sets in seconds. The more data a machine learning model has to train on, the more accurate its predictive value. The breadth and depth of the data a machine learning algorithm learns from matters more than how advanced or complex the algorithm is. That’s especially true when it comes to payments fraud detection, where machine learning algorithms learn what legitimate versus fraudulent transactions look like from a contextual intelligence perspective. By analyzing historical account data from a universal data network, supervised machine learning algorithms can gain a greater level of accuracy and predictability. Kount’s universal data network is among the largest, including billions of transactions over 12 years, 6,500 customers, 180+ countries and territories, and multiple payment networks. The data network includes different transaction complexities, verticals, and geographies, so machine learning models can be properly trained to predict risk accurately. That analytical richness includes data on physical real-world and digital identities, creating an integrated picture of customer behavior.
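To ground the real-time scoring point above, here is a minimal, hypothetical sketch of the supervised half of such a system: a model trained offline on labeled historical transactions scores a single payment, and the scoring latency is measured. The features, labels, and threshold are illustrative assumptions, not Kount’s or Brighterion’s implementation.

```python
# Illustrative sketch (not any vendor's implementation): score a single payment in
# real time with a model trained offline on labeled historical transactions, and
# time the scoring call. Features, labels, and the threshold are assumptions.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Stand-in history: [amount_usd, minutes_since_last_txn, new_device, country_mismatch]
X_hist = np.column_stack([
    rng.lognormal(mean=4, sigma=1, size=20_000),
    rng.exponential(scale=600, size=20_000),
    rng.integers(0, 2, size=20_000),
    rng.integers(0, 2, size=20_000),
])
y_hist = ((X_hist[:, 0] > 500) & (X_hist[:, 3] == 1)).astype(int)  # toy fraud label

model = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0).fit(X_hist, y_hist)

txn = np.array([[950.0, 2.0, 1, 1]])  # large amount, seconds after last txn, new device abroad
start = time.perf_counter()
fraud_probability = model.predict_proba(txn)[0, 1]
elapsed_ms = (time.perf_counter() - start) * 1_000

print(f"fraud score={fraud_probability:.2f} scored in {elapsed_ms:.1f} ms")
if fraud_probability > 0.8:
    print("hold transaction and alert a fraud analyst")
```

The expensive work (training) happens offline; the per-transaction call is a single model lookup, which is what makes sub-second scoring feasible at all.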

Bottom Line: Payments fraud is insidious, difficult to stop, and can inflict financial harm on any business in minutes. Battling payment fraud needs to start with a pre-emptive strategy to thwart fraud attempts by training machine learning models to quickly spot and act on threats, then building out that strategy across every selling and service channel a digital business relies on.

Why Manufacturing Supply Chains Need Zero Trust

  • According to the 2019 Verizon Data Breach Investigation Report, manufacturing has experienced an increase in financially motivated breaches over the past couple of years, with most breaches involving phishing and the use of stolen credentials.
  • 50% of manufacturers report experiencing a breach over the last 12 months, 11% of which were severe according to Sikich’s 5th Manufacturing and Distribution Survey, 2019.
  • The data most commonly compromised in manufacturing includes credentials (49%), internal operations data (41%), and company secrets (36%), according to the 2019 Verizon Data Breach Investigation Report.
  • Manufacturers’ supply chains and logistics partners targeted by ransomware which have either had to cease operations temporarily to restore operations from backup or have chosen to pay the ransom include Aebi SchmidtASCO Industries, and COSCO Shipping Lines.

Small Suppliers Are A Favorite Target; Just Ask A.P. Møller-Maersk

Supply chains are renowned for how unsecured and porous they are, multiple layers deep. That’s because manufacturers often rely on little more than password-protected administrator access privileges and trusted-versus-untrusted domains at the operating system level of Windows NT Server, haven’t implemented multi-factor authentication (MFA), and apply a trust-but-verify mindset only to their top suppliers. Many manufacturers don’t define, much less enforce, supplier security past the first tier of their supply chains, leaving the most vulnerable attack vectors unprotected.

It’s the smaller suppliers that hackers exploit to bring down many of the world’s largest manufacturing companies. A case in point: an accounting software package from a small supplier, Linkos Group, was infected with a powerful ransomware agent, NotPetya, bringing one of the world’s leading shipping providers, A.P. Møller-Maersk, to a standstill. Linkos Group’s accounting software was first installed in the A.P. Møller-Maersk offices in Ukraine. The NotPetya ransomware took control of the local office servers and then propagated itself across the entire A.P. Møller-Maersk network. A.P. Møller-Maersk had to reinstall 4,000 servers, 45,000 PCs, and 2,500 applications, and the damages ran between $250M and $300M. Security experts consider the ransomware attack on A.P. Møller-Maersk one of the most devastating cybersecurity attacks in history. The hackers succeeded in using an accounting software update from one of A.P. Møller-Maersk’s smallest suppliers to bring down one of the world’s largest shipping networks. My recent post, How To Deal With Ransomware In A Zero Trust World, explains how taking a Zero Trust Privilege approach minimizes the risk of falling victim to ransomware attacks. Ultimately, treating identity as the new security perimeter needs to be how supply chains are secured. The following geographical analysis of the attack, provided by CargoSmart, shows how quickly NotPetya ransomware can spread through a global network:

CargoSmart also provided a Vessel Monitoring Dashboard to track vessels during the recovery from the cyberattack.

Supply Chains Need To Treat Every Supplier In Their Network As A New Security Perimeter

The more integrated a supply chain, the greater the potential for breaches and ransomware attacks. And in supply chains that rely on privileged access credentials, it’s a certainty that hackers outside the organization, and even insiders, will use compromised credentials for financial gain or to disrupt operations. Treating every supplier and their integration points in the network as a new security perimeter is critical if manufacturers want to maintain operations in an era of accelerating cybersecurity threats.

Taking a Zero Trust Privilege approach to securing privileged access credentials will help alleviate the leading cause of breaches in manufacturing today, which is privileged access abuse. By taking a “never trust, always verify, and enforce least privilege” approach, manufacturers can protect the “keys to the kingdom,” which are the credentials hackers exploit to take control over an entire supply chain network.
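
As a rough illustration of what “never trust, always verify, and enforce least privilege” can look like in code, the sketch below shows a hypothetical access-decision check in Python. The roles, resources, and risk thresholds are invented for this example; an actual Zero Trust Privilege deployment would centralize these decisions in a privileged access management platform rather than in application code.

```python
# Hypothetical sketch of a "never trust, always verify, enforce least privilege"
# access decision. Policy fields and thresholds are assumptions, not a vendor API.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str               # e.g. "supplier-tier-2", "plant-admin"
    resource: str           # e.g. "erp/purchase-orders"
    action: str             # e.g. "read", "write"
    mfa_verified: bool      # identity verified on this request
    device_managed: bool    # request comes from a managed device
    geo_risk: float         # 0 (known location) .. 1 (high-risk location)

# Least-privilege policy: every role gets only the actions it needs.
LEAST_PRIVILEGE = {
    "supplier-tier-2": {"erp/purchase-orders": {"read"}},
    "plant-admin":     {"erp/purchase-orders": {"read", "write"},
                        "scada/line-3":        {"read"}},
}

def authorize(req: AccessRequest) -> bool:
    """Verify identity, context, and risk before granting the minimum access."""
    allowed = LEAST_PRIVILEGE.get(req.role, {}).get(req.resource, set())
    if req.action not in allowed:
        return False                      # enforce least privilege (deny by default)
    if not req.mfa_verified:
        return False                      # always verify identity
    if req.action == "write" and (not req.device_managed or req.geo_risk > 0.5):
        return False                      # step up scrutiny for risky context
    return True

print(authorize(AccessRequest("acme-supplier", "supplier-tier-2",
                              "erp/purchase-orders", "read",
                              mfa_verified=True, device_managed=False,
                              geo_risk=0.2)))   # True: minimal read access
print(authorize(AccessRequest("acme-supplier", "supplier-tier-2",
                              "erp/purchase-orders", "write",
                              mfa_verified=True, device_managed=True,
                              geo_risk=0.1)))   # False: write was never granted
```

The point of the sketch is the ordering of the checks: requests are denied by default, identity is verified on every call, and even a verified user only ever receives the narrowest set of actions their role requires.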

Instead of relying on trust-but-verify or trusted-versus-untrusted domains at the operating system level, manufacturers need a consistent security strategy that scales from their largest to their smallest suppliers. Zero Trust Privilege could have saved A.P. Møller-Maersk from being crippled by ransomware by making ZTP-based security guardrails a prerequisite for every supplier that does business with them.

Conclusion

Supply chains are the lifeblood of any production business, yet they are also among the most porous and easily compromised areas of manufacturing. As hackers become more brazen in their ransomware attempts against manufacturers, and privileged access credentials are increasingly sold on the Dark Web, manufacturers need a sense of urgency to combat these threats. Taking a Zero Trust approach to securing their supply chains and operations helps manufacturers implement least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment. By implementing least privilege access, manufacturers can minimize the attack surface, improve audit and compliance visibility, and reduce risk, complexity, and costs for the modern, hybrid manufacturing enterprise.

Top 10 Most Popular Cybersecurity Certifications In 2019

  • IT decision-makers (ITDMs) report that cybersecurity is the hardest area to find qualified talent, followed by cloud computing skills.
  • 56% of ITDMs report that certified personnel close organizational skills gaps.
  • 48% of ITDMs report that certifications boost productivity.
  • 44% of ITDMs report that certifications help meet client requirements.

Knowing which cybersecurity certifications are in the greatest demand is invaluable in planning a career in the field. I asked Global Knowledge, the world’s largest dedicated IT training company, which hosts over 3,000 unique IT courses delivered by over 1,100 subject matter experts, for help in finding out which cybersecurity certifications are the most sought after in North America this year. Their 2019 IT Skills and Salary Report is considered the gold standard of IT skills, certification, and salary data, and many IT professionals rely on it to plan their careers. Human Resource professionals also consider the report an invaluable reference to guide their recruiting efforts. Thank you, Global Knowledge, for providing custom research on the current state of demand for cybersecurity certifications.

Ranking The Most Sought-After Cybersecurity Certifications

Of the 63% of North American IT professionals who are pursuing or plan to pursue a certification in 2019, 23% are pursuing a cybersecurity certification, according to the latest Global Knowledge IT Skills and Salary Report. The certifications reflect how quickly unique, specialized areas of knowledge are gaining in popularity. “Traditionally, cybersecurity senior leadership-level certifications have been dominated in popularity by the administrative and Governance, Risk Management, and Compliance accreditations. This continues to be reflected in the latest data with the most popular (ISC)2 and ISACA certification bodies represented well in the list,” said Brad Puckett, Global Knowledge’s global product director for cybersecurity. Brad used the Global Knowledgebase of survey data to produce the ten most sought-after cybersecurity certifications in North America in 2019, shown below:

1. (ISC)2: CISSP – Certified Information Systems Security Professional

2. ISACA: CISM – Certified Information Security Manager

3. EC-Council: CEH – Certified Ethical Hacker

4. ISACA: CRISC – Certified in Risk and Information Systems Control

5. (ISC)2: CCSP – Certified Cloud Security Professional

6. ISACA: CISA – Certified Information Systems Auditor

7. (ISC)2: CISSP-ISSMP – Information Systems Security Management Professional

8. (ISC)2: CISSP-ISSAP – Information Systems Security Architecture Professional

9. ISACA: CGEIT – Certified in the Governance of Enterprise IT

10. EC-Council: CHFI – Computer Hacking Forensic Investigator
