

5 Proven Ways Manufacturers Can Get Started With Analytics


Going into 2020, manufacturers are at an inflection point in their adoption of analytics and business intelligence (BI). Analytics applications and tools make it possible for them to gain greater insights from the massive amount of data they produce every day. And with manufacturing generating more operational data daily than any other industry, the potential to improve shop floor productivity has never been more within reach for those adopting analytics and BI applications.

Analytics and BI Are High Priorities In Manufacturing Today

Increasing the yield rates and quality levels of each shop floor, machine, and work center is a high priority for manufacturers today. Add to that the pressure to stay flexible, take on configure-to-order and engineer-to-order special products fulfilled through short-notice production runs, and gain more insight into how each phase of production can be improved. Gartner’s latest survey of heavy manufacturing CIOs (2019 CIO Agenda: Heavy Manufacturing, Industry Insights, by Dr. Marc Halpern, October 15, 2018, Gartner subscription required) reflects the reality all manufacturers are dealing with today. I believe they’re in a tough situation, with customers wanting short-notice production time while supply chains often need to be redesigned to reduce or eliminate tariffs. They’re turning to analytics to gain the insights they need to take on these challenges and more. The graphic below, from Gartner’s survey, indicates the technology areas where heavy manufacturing CIOs’ organizations will be spending the largest amount of new or additional funding in 2019, as well as the areas where they will be reducing funding by the highest amount compared with 2018:

Knowing Which Problems To Solve With Analytics

Manufacturers getting the most value from analytics start with a solid business case first, based on a known problem they’ve been trying to solve in their supply chains, production, or fulfillment operations. The manufacturers I’ve worked with focus on how to get more orders produced in less time while gaining greater visibility across production operations. They’re all under pressure to stay in compliance with customers and regulatory reporting, in many cases needing to ship product quality data with each order and host 60 to 70 customer audits a year in their plants. Analytics is becoming popular because it automates reporting drudgery that would otherwise take IT teams days or weeks to complete manually.

As one CIO put it as we walked his shop floor, “we’re using analytics to do the heavy data crunching when we’re hosting customer audits so we can put our quality engineers to work raising the bar of product excellence instead of having them run reports for a week.” As we walked the shop floor he explained how dashboards are tailored to each role in manufacturing, and the flat-screen monitors provide real-time data on how five key areas of performance are doing. Like many other CIOs facing the challenge of improving production efficiency and quality, he’s relying on the five core metrics below in the initial roll-out of analytics across manufacturing operations, finance, accounting, supply chain management, procurement, and service:

  • Manufacturing Cycle Time – One of the most popular metrics in manufacturing, Cycle Time quantifies the elapsed time from when an order is placed until the product is manufactured and entered into finished goods inventory. Cycle times vary by segment of the manufacturing industry, size of operation, global location, and the relative stability of the supply chains supporting operations. Real-time integration, applying Six Sigma to find process bottlenecks, and re-engineering systems to be more customer-focused all improve this metric’s performance. Cycle Time is also a leading indicator of overall operational health, as it immediately captures improvements made across systems and processes.
  • Supplier Inbound Quality Levels – Measuring how effective a given supplier is at consistently meeting a high level of product quality and on-time delivery is valuable in orchestrating a stable supply chain. Inbound quality levels often vary from one shipment to the next, so it’s helpful to have Statistical Process Control (SPC) charts that quantify and show trends in quality levels over time. Nearly all manufacturers rely on Six Sigma programs to troubleshoot specific problem areas of suppliers whose product quality varies widely in a given period. This metric is often used for ranking which suppliers are the most valuable to a factory and production network as well.
  • Production Yield Rates By Product, Process, and Plant Location – Yield rates reflect how efficient a machine or entire process is in transforming raw materials into finished products. Manufacturers rely on both automated and manual approaches to capture this metric, with the latest generation of industrial machinery capable of reporting its own yield rates over time. Process-related manufacturers rely on this metric to manage every production run they do. Microprocessor, semiconductor, and integrated circuit manufacturers continually monitor yield rates to determine how they are progressing against plans and goals. Greater real-time integration, improved quality management systems, and greater supply chain quality and compliance all have a positive impact on yield rates, making this one of the best measures of how well-orchestrated entire production processes are.
  • Perfect Order Performance – Perfect order performance measures how effective a manufacturer is at delivering complete, accurate, damage-free orders to customers on time. The Perfect Order Index (POI) is calculated as (percent of orders delivered on time) × (percent of orders complete) × (percent of orders damage-free) × (percent of orders with accurate documentation) × 100, with each percentage expressed as a decimal. The majority of manufacturers are attaining a perfect order performance level of 90% or higher, according to the American Productivity and Quality Center (APQC). The more complex the product lines and configuration options, including build-to-order, configure-to-order, and engineer-to-order, the more challenging it is to attain a high perfect order level. Greater analytics and insights gained from real-time integration and monitoring help complex manufacturers attain higher perfect order levels over time.
  • Return Material Authorization (RMA) Rate as % Of Manufacturing – This metric defines the percentage of products shipped to customers that are returned due to defective parts or otherwise failing to meet requirements. RMAs are a good leading indicator of potential quality problems, and a good measure of how well integrated PLM, ERP, and CRM systems are, since tighter integration results in fewer product errors.
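The Perfect Order Index calculation described above can be sketched in a few lines. The component rates below are hypothetical sample figures, and the function name is mine, not a standard library call:

```python
# Illustrative sketch of the Perfect Order Index (POI) described above.
# Each component rate is expressed as a decimal fraction (0-1); multiplying
# them together and scaling by 100 yields the POI as a percentage.

def perfect_order_index(on_time, complete, damage_free, accurate_docs):
    """Return the POI as a percentage from four component rates (0-1)."""
    return on_time * complete * damage_free * accurate_docs * 100

# Hypothetical monthly figures for one plant: even four strong component
# rates compound down to roughly 90%, which is why complex product lines
# make a high POI difficult to attain.
poi = perfect_order_index(on_time=0.97, complete=0.98,
                          damage_free=0.99, accurate_docs=0.96)
print(f"POI: {poi:.1f}%")  # -> POI: 90.3%
```

Because the components multiply, a weakness in any single dimension drags the whole index down, which is exactly why the metric is tracked as a product rather than an average.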

Conclusion

The manufacturers succeeding with analytics start with a compelling business case, one that has an immediate impact on the operations of their organizations. CIOs are prioritizing analytics and BI to gain greater insights and visibility across every phase of manufacturing. They’re also adopting analytics and BI to reduce the reporting drudgery their engineering, IT, and manufacturing teams are faced with as part of regular customer audits. There are also a core set of metrics manufacturers rely on to manage their business, and the five mentioned here are where many begin.

5 Strategies Healthcare Providers Are Using To Secure Networks


  • Healthcare records are bestsellers on the Dark Web, ranging in price from $250 to over $1,000 per record.
  • The growing, profitable market for Protected Health Information (PHI) is attracting sophisticated cybercriminal syndicates, several of which are state-sponsored.
  • Medical fraud is slower to detect and report than financial fraud (e.g., stolen credit cards), contributing to its popularity with cybercriminals globally.
  • Cybercriminals prefer PHI data because it’s easy to sell and contains information that is harder to cancel or secure once stolen. Examples include insurance policy numbers, medical diagnoses, Social Security Numbers (SSNs), credit card, checking and savings account numbers.

These and many other insights into why healthcare provider networks are facing a cybersecurity crisis are from the recently declassified U.S. Department of Health & Human Services HC3 Intelligence Briefing Update, Dark Web PHI (Protected Health Information) Marketplace, presented April 11th of this year. You can download a copy of the slides here (PDF, 13 pp., no opt-in). The briefing provides a glimpse into how the dark web values the “freshness” of healthcare data and the relative ease of obtaining records for elderly patients and children, skewing stolen identities toward those two groups. Protenus found that the single largest healthcare breach this year involves 20 million patient records stolen from a medical collections agency; the breach was discovered after the records were found for sale on the dark web. Please see their 2019 Mid-Year Breach Barometer Report (opt-in required) for an analysis of 240 of the reported 285 breach incidents affecting 31,611,235 patient records in the first six months of this year. Cybercriminals capitalize on medical records to drive one or more of the following strategies as defined by the HC3 Intelligence Briefing:

Stopping A Breach Can Avert A HIPAA Meltdown

To stay in business, healthcare providers need to stay in compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which provides data privacy and security provisions for safeguarding medical information. Staying in compliance with HIPAA can be a challenge given how mobile healthcare provider workforces are and the variety of mobile devices they use to complete tasks today. 33% of healthcare employees work outside the office at least once a week, and with government incentives for decentralized care expected to expand mobile workforces industry-wide, this figure is expected to increase significantly. Health & Human Services provides a Breach Portal that lists all cases under investigation today, and it reflects the severity of healthcare providers’ cybersecurity crisis. Over 39 million medical records have been compromised this year alone, according to HHS’ records from over 340 different healthcare providers. Factoring in HIPAA fines that can range from $25,000 to $1.5M per year, it’s clear that healthcare providers need endpoint security on their roadmaps now to avert the high costs of non-compliance.

Securing endpoints across their networks is one of the most challenging ongoing initiatives any Chief Information Security Officer (CISO) at a healthcare provider has today. 39% of healthcare security incidents are caused by stolen or misplaced endpoints. CISOs are balancing their workforces’ need for greater device agility against the need for stronger endpoint security, and they are solving this paradox by taking an adaptive approach to endpoint security that capitalizes on strong asset management. “Keeping machines up to date is an IT management job, but it’s a security outcome. Knowing what devices should be on my network is an IT management problem, but it has a security outcome. And knowing what’s going on and what processes are running and what’s consuming network bandwidth is an IT management problem, but it’s a security outcome,” said Nicko van Someren, Ph.D., Chief Technology Officer at Absolute Software.

5 Strategies Healthcare Providers Are Using To Secure Networks

Thwarting breaches to protect patients’ valuable personal health information starts with an adaptive, strong endpoint strategy. The following are five proven strategies for protecting endpoints, assuring HIPAA compliance in the process:

  1. Implementing an adaptive IT asset management program delivers endpoint security at scale. Healthcare providers prioritizing IT asset management control and visibility can better protect every endpoint on their network. Advanced features, including real-time asset management to locate and secure devices, geolocation fencing so devices can only be used in a specific area, and device freeze options, are very effective for securing endpoints. Healthcare providers are also relying more and more on remote data delete, which wipes lost or stolen devices within seconds.
  2. Improve security and IT operations with faster discovery and remediation across all endpoints. Implement strategies that enable greater remediation and resilience of every endpoint. Healthcare providers are having success with this strategy, relying on IT asset management to scale remediation and resilience to every endpoint device. Absolute’s patented, firmware-embedded Persistence technology is a leader in this area, providing scalable, secure endpoint resiliency and self-healing endpoints that can repair applications on compatible devices.
  3. Design in HIPAA & HITECH compliance and reporting to each endpoint from the first pilot. Any endpoint security strategy needs to build in ongoing compliance checks and automated reports that are audit-ready. It also needs to be able to probe for violations across all endpoints. Advanced endpoint security platforms are capable of validating patient data integrity with self-healing endpoint security. All of these factors add up to reduce time to prepare audits with ongoing compliance checks across your endpoint population.
  4. A layered security strategy that includes real-time endpoint orchestration needs to anchor any healthcare network merger or acquisition, ensuring patient data continues to be protected. Private Equity (PE) firms continue acquiring providers to create healthcare networks that open up new markets. The best breach prevention, especially in merged or acquired healthcare networks, is a comprehensive layered defense strategy that spans endpoints and networks. If one of the layers fails, there are other layers in place to ensure your organization remains protected. Healthcare providers’ success with layered security models is predicated on how successful they are achieving endpoint resiliency. Absolute’s technology is embedded in the core of laptops and other devices at the factory. Once activated, it provides healthcare providers with a reliable two-way connection so they can manage mobility, investigate potential threats, and take action if a security incident occurs.
  5. Endpoint security needs to be tamper-proof at the operating system level on the device yet still provide IT and cybersecurity teams with device visibility and access to modify protections. Healthcare providers need an endpoint visibility and control platform that provides a persistent, self-healing connection between IT, security teams, and every device, whether it is active on the network or not. Every identity is a new security perimeter. Healthcare providers’ endpoint platforms need to be able to secure all devices across different platforms, automate endpoint hygiene, speed incident detection and remediation, and reduce IT asset loss by being able to self-diagnose and repair endpoint devices in real time.

Scaling Cloud Services Is Key To Growing A Digital Business

  • 93% of enterprises are securing remote locations with a centralized approach that rarely scales to secure every endpoint and identity of remote branch locations, leaving an enterprise more vulnerable to a breach.
  • Enabling network security is the greatest challenge enterprises face when managing a highly distributed network with numerous remote locations.
  • In an era of cloud-first networks, 9 out of 10 companies are still relying on centrally managed networks that don’t scale for remote system users, creating productivity bottlenecks.
  • 75% of enterprises experience branch and remote location network interruptions several times a year or more frequently, costing an organization thousands of dollars an hour in lost productivity.

The challenges of scaling cloud services to grow a digital business are many, and they are well-explained in the recent research report, Remote Office Networks Pose Business and Reliability Risk: A Survey of IT Professionals (27 pp., PDF, no opt-in), published in August 2019 by Dimensional Research in collaboration with Infoblox. This report provides valuable insights into why scaling cloud services is essential for growing a digital business. The study’s findings reflect how the lack of IT security and site personnel at remote branch and production locations is one of the most challenging constraints enterprises must overcome to keep growing their business. Please see page 22 of the study for specifics on the methodology.

99% of enterprises with distributed operations suffer adverse business impacts from network interruptions. Of the many causes of network disruption, one of the most common is not directing traffic to the closest point of entry into cloud platforms. Taking a software-defined approach to wide-area networking (SD-WAN) is proving effective in improving cloud-based application performance, including that of Microsoft Office 365. The report shows how SD-WAN is replacing outdated centralized IT models that lack the scale to flex and support new digital business models.

Key insights from the research report include the following:

  • Enterprises realize the model of relying on centralized IT security isn’t scaling to support and protect the proliferation of user devices with internet access, leaving branch offices less secure than ever before. Every IT architect, IT Director, or CIO needs to consider how taking an SD-WAN-based approach to network management reduces the risk of a breach and data exfiltration. 93% of enterprises are securing remote locations with a centralized approach that rarely scales to secure every endpoint and identity of remote branch locations, leaving an enterprise more vulnerable to a breach. Enterprises are upgrading their core network services, including DNS, DHCP, and IP address management, on cloud-based DDI platforms to bring greater security, scale, and reliability across their enterprise networks. Enterprises are also devising Zero Trust Security (ZTS) frameworks to secure every network, cloud and on-premise platform, operating system, and application across their branch offices. Chase Cunningham, Principal Analyst at Forrester, is the leading authority on Zero Trust Security, and his recent video, Zero Trust in Action, is worth watching to learn more about how enterprises can secure their IT infrastructures. You can find his blog here.

  • 75% of enterprises’ branch offices experience network interruptions several times a year, with 49% of outages requiring three or more hours to resolve. Enterprises continue to pay a very high price in lost productivity due to network interruptions and the time it takes to troubleshoot them and get a branch or remote location back online. Enterprises are upgrading their core network services, including DNS, DHCP, and IP address management, on cloud-based DDI platforms to bring greater scale and reliability across their enterprise networks. Cloud-based DDI platforms enable enterprises to manage networking for hundreds to thousands of remote sites with unprecedented cost-efficiency.

  • Relying on centralized IT creates many challenges and security threats for remote offices, the most costly being the lack of IT staff at remote sites. Network security at remote locations is the greatest challenge enterprises face when managing a highly distributed network with numerous remote locations. A contributing factor is the lack of IT employees at remote branches: 65% of enterprises routinely send IT employees to remote branches just to resolve networking issues. Travel costs, combined with the lost productivity of sending IT technicians out for a week or longer to solve network performance issues, are another reason why enterprises are adopting cloud-based DDI platforms.

  • Enterprises are adopting cloud-based DDI platforms that enable them to simplify the management of highly distributed remote networks as well as to optimize the network performance of cloud-based applications. Dimensional Research’s study reflects how enterprises are meeting the challenge of increasingly complex, distributed networks with a proliferating number of remote locations and endpoints. The majority of enterprises, 71%, are looking to integrate core network services, including DNS, DHCP, and IP address management, into a single cloud-based DDI platform. The problem is, conventional DDI solutions for branch locations are too slow or complicated for a cloud-first world. The following graphic from the study shows what is motivating enterprises to adopt SD-WAN today.

What’s New In Gartner’s Hype Cycle For AI, 2019


  • Between 2018 and 2019, organizations that have deployed artificial intelligence (AI) grew from 4% to 14%, according to Gartner’s 2019 CIO Agenda survey.
  • Conversational AI remains at the top of corporate agendas spurred by the worldwide success of Amazon Alexa, Google Assistant, and others.
  • Enterprises are making progress with AI as it grows more widespread, and they’re also making more mistakes that contribute to their accelerating learning curve.

These and many other new insights are from Gartner’s Hype Cycle for AI, 2019, published earlier this year and summarized in the recent Gartner blog post, Top Trends on the Gartner Hype Cycle for Artificial Intelligence, 2019. Gartner’s definition of Hype Cycles, which includes the five phases of a technology’s lifecycle, is explained here. Gartner’s latest Hype Cycle for AI reflects the growing popularity of AutoML, intelligent applications, and AI platform as a service (AI cloud services) as enterprises ramp up their adoption of AI. The Gartner Hype Cycle for AI, 2019, is shown below:

Details Of What’s New In Gartner’s Hype Cycle For AI, 2019

  • Speech Recognition is less than two years from mainstream adoption and is predicted to deliver the most significant transformational benefits of all technologies on the Hype Cycle. Gartner advises its clients to consider including speech recognition on their short-term AI technology roadmaps. Gartner observes that, unlike other technologies within the natural-language processing area, speech to text (and text to speech) is a stand-alone commodity whose modules can be plugged into a variety of natural-language workflows. Leading vendors in this technology area include Amazon, Baidu, Cedat 85, Google, IBM, Intelligent Voice, Microsoft, NICE, Nuance, and Speechmatics.
  • Eight new AI-based technologies are included in this year’s Hype Cycle, reflecting Gartner enterprise clients’ plans to scale AI across DevOps and IT while supporting new business models. The latest technologies to be included in the Hype Cycle for AI reflect how enterprises are trying to demystify AI to improve adoption while at the same time, fuel new business models. The new technologies include the following:
  1. AI Cloud Services – AI cloud services are hosted services that allow development teams to incorporate the advantages inherent in AI and machine learning.
  2. AutoML – Automated machine learning (AutoML) is the capability of automating the process of building, deploying, and managing machine learning models.
  3. Augmented Intelligence – Augmented intelligence is a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making, and new experiences.
  4. Explainable AI – AI researchers define “explainable AI” as an ensemble of methods that make black-box AI algorithms’ outputs sufficiently understandable.
  5. Edge AI – Edge AI refers to the use of AI techniques embedded in IoT endpoints, gateways, and edge devices, in applications ranging from autonomous vehicles to streaming analytics.
  6. Reinforcement Learning – Reinforcement learning shows its greatest near-term potential in gaming and industrial automation and could lead to significant breakthroughs in robotics, vehicle routing, logistics, and other industrial control scenarios.
  7. Quantum Computing – Quantum computing has the potential to make significant contributions to the areas of systems optimization, machine learning, cryptography, drug discovery, and organic chemistry. Although outside the planning horizon of most enterprises, quantum computing could have strategic impacts in key businesses or operations.
  8. AI Marketplaces – Gartner defines an AI Marketplace as an easily accessible place supported by a technical infrastructure that facilitates the publication, consumption, and billing of reusable algorithms. Some marketplaces are used within an organization to support the internal sharing of prebuilt algorithms among data scientists.
  • Gartner considers the following AI technologies to be on the rise and part of the Innovation Trigger phase of the AI Hype Cycle. AI Marketplaces, Reinforcement Learning, Decision Intelligence, AI Cloud Services, Data Labeling and Annotation Services, and Knowledge Graphs are now showing signs of potential technology breakthroughs, as evidenced by early proof-of-concept stories. Technologies in the Innovation Trigger phase often lack usable, scalable products, and their commercial viability is not yet proven.
  • Smart Robots and AutoML are at the peak of the Hype Cycle in 2019. In contrast to the industrial robotics systems manufacturers have rapidly adopted due to labor shortages, Smart Robots are defined by Gartner as electromechanical form factors that work autonomously in the physical world, learning in short-term intervals from human-supervised training and demonstrations or from their supervised experiences, including taking direction from human voices on a shop floor. The Whiz robot from SoftBank Robotics is an example of a Smart Robot that will be sold under a robot-as-a-service (RaaS) model and will initially be available only in Japan. AutoML is one of the most hyped technologies in AI this year. Gartner defines automated machine learning (AutoML) as the capability of automating the process of building, deploying, or managing machine learning models. Leading vendors providing AutoML platforms and applications include Amazon SageMaker, Big Squid, dotData, DataRobot, Google Cloud Platform, H2O.ai, KNIME, RapidMiner, and Skytree.
  • Nine technologies were removed or reassigned from this year’s Hype Cycle for AI compared to 2018. Gartner has removed nine technologies, often reassigning them into broader categories. Augmented reality and virtual reality are now part of augmented intelligence, a more general category, and remain on many other Hype Cycles. Commercial UAVs (drones) are now part of edge AI, a more general category. Ensemble learning had already reached the Plateau in 2018 and has now graduated from the Hype Cycle. Human-in-the-loop crowdsourcing has been replaced by data labeling and annotation services, a broader category. Natural language generation is now included as part of NLP. Knowledge management tools have been replaced by insight engines, which are more relevant to AI. Predictive analytics and prescriptive analytics are now part of decision intelligence, a more general category.
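For readers unfamiliar with what “automating the process of building, deploying, or managing machine learning models” looks like in practice, the fragment below is a deliberately minimal stand-in for the AutoML idea, using scikit-learn’s GridSearchCV to automate model tuning on a toy dataset. Commercial AutoML platforms from the vendors named above automate far more, including feature engineering, deployment, and monitoring:

```python
# Minimal stand-in for the AutoML concept: automate hyperparameter search
# and model selection with cross-validation instead of tuning by hand.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# GridSearchCV tries every hyperparameter combination with 5-fold
# cross-validation and keeps the best-scoring model automatically.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 3, 5, None], "min_samples_leaf": [1, 5, 10]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The point of the sketch is the division of labor: the practitioner declares the search space, and the machinery handles building and evaluating every candidate model.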

Sources:

Hype Cycle for Artificial Intelligence, 2019, Published 25 July 2019, (Client access reqd.)

Top Trends on the Gartner Hype Cycle for Artificial Intelligence, 2019 published September 12, 2019

10 Ways AI And Machine Learning Are Improving Endpoint Security

  • Gartner predicts $137.4B will be spent on Information Security and Risk Management in 2019, increasing to $175.5B in 2023, reaching a CAGR of 9.1%. Cloud Security, Data Security, and Infrastructure Protection are the fastest-growing areas of security spending through 2023.
  • 69% of enterprise executives believe artificial intelligence (AI) will be necessary to respond to cyberattacks, with the majority of telecom companies (80%) saying they are counting on AI to help identify threats and thwart attacks, according to Capgemini.
  • Spending on AI-based cybersecurity systems and services reached $7.1B in 2018 and is predicted to reach $30.9B in 2025, attaining a CAGR of 23.4% in the forecast period, according to Zion Market Research.

Traditional approaches to securing endpoints based on the hardware characteristics of a given device aren’t stopping breach attempts today. Bad actors are using AI and machine learning to launch sophisticated attacks that shorten the time it takes to compromise an endpoint and successfully breach systems. They’re down to just 7 minutes from compromising an endpoint to gaining access to internal systems, ready to exfiltrate data, according to Ponemon. The era of trusted and untrusted domains at the operating system level, and of “trust, but verify” approaches, is over. Security software and services spending is soaring as a result, as the market forecasts above show.

AI & Machine Learning Are Redefining Endpoint Security

AI and machine learning are proving to be effective technologies for battling increasingly automated, well-orchestrated cyberattacks and breach attempts. Attackers are combining AI, machine learning, bots, and new social engineering techniques to thwart endpoint security controls and gain access to enterprise systems with an intensity never seen before. It’s becoming so prevalent that Gartner predicts that more than 85% of successful attacks against modern enterprise user endpoints will exploit configuration and user errors by 2025. Cloud platforms are enabling AI and machine learning-based endpoint security control applications to be more adaptive to the proliferating types of endpoints and corresponding threats. The following are the top ten ways AI and machine learning are improving endpoint security:

  • Using machine learning to derive risk scores based on previous behavioral patterns, geolocation, time of login, and many other variables is proving to be effective at securing and controlling access to endpoints. Combining supervised and unsupervised machine learning to fine-tune risk scores in milliseconds is reducing fraud, thwarting breach attempts that use privileged access credentials, and securing every identity on an organization’s network. Supervised machine learning models rely on historical data to find patterns not discernable with rules or predictive analytics. Unsupervised machine learning excels at finding anomalies, interrelationships, and valid links between emerging factors and variables. Combining the two is proving to be very effective in spotting anomalous behavior and reducing or restricting access.
  • Mobile devices represent a unique challenge to achieving endpoint security control, one that machine learning combined with Zero Trust is proving to be integral at solving. Cybercriminals would rather steal a mobile device, its passwords, and privileged access credentials than hack into an organization, because passwords are the quickest onramp to the valuable data they want to exfiltrate and sell. Abandoning passwords for new techniques, including MobileIron’s zero sign-on approach, shows potential for thwarting cybercriminals while hardening endpoint security control. Securing mobile devices using a zero-trust platform built on a foundation of unified endpoint management (UEM) capabilities enables enterprises to scale zero sign-on for managed and unmanaged services for the first time. Below is a graphic illustrating how enterprises are adopting machine learning to improve mobile endpoint security control:
  • Capitalizing on the core strengths of machine learning to improve IT asset management is making direct contributions to greater security. IT management and security initiatives continue to become more integrated across organizations, creating new challenges in managing endpoint security across each device. Absolute Software is taking an innovative approach to solving the challenge of improving IT asset management so that endpoint protection is strengthened at the same time. Recently I had a chance to speak with Nicko van Someren, Ph.D., Chief Technology Officer at Absolute Software, who shared with me how machine learning algorithms are improving security by providing greater insights into asset management. "Keeping machines up to date is an IT management job, but it's a security outcome. Knowing what devices should be on my network is an IT management problem, but it has a security outcome. And knowing what's going on and what processes are running and what's consuming network bandwidth is an IT management problem, but it's a security outcome. I don't see these as distinct activities so much as seeing them as multiple facets of the same problem space." Nicko added that Absolute's endpoint security controls begin at the BIOS level of over 500M devices that have its endpoint code embedded in them. The Absolute Platform comprises three products: Persistence, Intelligence, and Resilience, each building on the capabilities of the others. Absolute Intelligence standardizes the data around asset analytics and security advocacy analytics to allow security managers to ask any question they want ("What's slowing down my device? What's working and what isn't? What has been compromised? What's consuming too much memory? How does this deviate from normal performance?"). An example of Absolute Intelligence providing insights into asset management and security is shown below:
  • Machine learning has progressed to become the primary detection method for identifying and stopping malware attacks. Machine learning algorithms initially contributed to endpoint security by supporting the back end of malware protection workflows. Today more vendors are designing endpoint security systems with machine learning as the primary detection method. Trained machine learning algorithms can detect file-based malware and learn which files are harmful based on each file's metadata and content. Symantec's Content & Malware Analysis illustrates how machine learning is being used to detect and block malware. Their approach combines advanced machine learning and static code file analysis to block, detect, and analyze threats and stop breach attempts before they can spread.
  • Supervised machine learning algorithms are being used to determine when given applications are unsafe to use, assigning them to containers so they're isolated from production systems. Taking into account an application's threat score or reputation, machine learning algorithms decide whether dynamic application containment needs to run for a given application. Machine learning-based dynamic application containment algorithms and rules block or log unsafe actions of an application based on containment and security rules. Machine learning algorithms are also being used to build predictive analytics that define the extent of a given application's threat.
  • Integrating AI, machine learning, and SIEM (Security Information and Event Management) in a single unified platform is enabling organizations to predict, detect, and respond to anomalous behaviors and events. AI and machine learning-based algorithms and predictive analytics are becoming a core part of SIEM platforms today, as they provide automated, continuous analysis and correlation of all activity observed within a given IT environment. Capturing, aggregating, and analyzing endpoint data in real time using AI techniques and machine learning algorithms is providing entirely new insights into asset management and endpoint security. One of the most interesting companies to watch in this area is LogRhythm. They've developed an innovative approach to integrating AI, machine learning, and SIEM in their LogRhythm NextGen SIEM Platform, which delivers automated, continuous analysis and correlation of all activity observed within an IT environment. The following is an example of how LogRhythm combines AI, machine learning, and SIEM to bring new insights into securing endpoints across a network.
  • Machine learning is automating the more manual, routine incident analysis and escalation tasks that are overwhelming security analysts today. Capitalizing on supervised machine learning's innate ability to fine-tune algorithms in milliseconds based on the analysis of incident data, endpoint security providers are prioritizing this area in product development. Demand from potential customers remains strong, as nearly everyone is facing a cybersecurity skills shortage while facing an onslaught of breach attempts. "The cybersecurity skills shortage has been growing for some time, and so have the number and complexity of attacks; using machine learning to augment the few available skilled people can help ease this. What's exciting about the state of the industry right now is that recent advances in machine learning methods are poised to make their way into deployable products," Absolute's CTO Nicko van Someren added.
  • Performing real-time scans of all processes with an unknown or suspicious reputation is another way machine learning is improving endpoint security. Commonly referred to as Hunt and Respond, supervised and unsupervised machine learning algorithms are being used today to seek out and resolve potential threats in milliseconds instead of days. Supervised machine learning algorithms discover patterns in known or stable processes, where anomalous behavior or activity creates an alert and pauses the process in real time. Unsupervised machine learning algorithms are used to analyze large-scale, unstructured data sets to categorize suspicious events, visualize threat trends across the enterprise, and take immediate action at a single endpoint or across the entire organization.
  • Machine learning is accelerating the consolidation of endpoint security technologies, a market dynamic that is motivating organizations to trim back from the ten agents they have on average per endpoint today. Absolute Software's 2019 Endpoint Security Trends Report found that a typical device has ten or more endpoint security agents installed, each often conflicting with the others. The study also found that enterprises are using a diverse array of endpoint agents, including encryption, AV/AM, and Endpoint Detection and Response (EDR). This wide array of endpoint solutions makes it nearly impossible to standardize a specific test to ensure security and safety without sacrificing speed. By helping to accelerate the consolidation of security endpoints, machine learning is helping organizations see that the more complex and layered their endpoint protection is, the greater the risk of a breach.
  • Keeping every endpoint in compliance with regulatory and internal standards is another area where machine learning is contributing to improved endpoint security. In regulated industries, including financial services, insurance, and healthcare, machine learning is being deployed to discover, classify, and protect sensitive data. This is especially the case with HIPAA (Health Insurance Portability and Accountability Act) compliance in healthcare. Amazon Macie is representative of the latest generation of machine learning-based cloud security services. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides organizations with dashboards, alerts, and contextual insights that give visibility into how data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects the risk of unauthorized access or inadvertent data leaks. An example of one of Amazon Macie's dashboards is shown below:
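The first technique above, blending supervised and unsupervised machine learning into a single risk score, can be sketched in a few lines. This is a minimal illustration using scikit-learn on synthetic login data; the features, weights, and thresholds are assumptions for demonstration, not any vendor's production model.

```python
# Minimal sketch: combine a supervised classifier (learns from labeled
# history) with an unsupervised anomaly detector into one risk score.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic login events: [hour_of_login, geo_distance_km, failed_attempts]
normal = rng.normal(loc=[10, 5, 0], scale=[2, 3, 0.5], size=(500, 3))
fraud = rng.normal(loc=[3, 4000, 4], scale=[1, 500, 1], size=(25, 3))
X = np.vstack([normal, fraud])
y = np.array([0] * len(normal) + [1] * len(fraud))  # labels for supervised model

# Supervised model learns known fraud patterns from historical data.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Unsupervised model learns "normal" and flags deviations without labels.
iso = IsolationForest(random_state=0).fit(normal)

def risk_score(event):
    """Blend both signals into a 0-1 risk score (weights are assumed)."""
    p_fraud = clf.predict_proba([event])[0, 1]
    # score_samples: higher means more normal, so negate and squash to 0-1.
    anomaly = 1.0 / (1.0 + np.exp(iso.score_samples([event])[0] * 5))
    return 0.6 * p_fraud + 0.4 * anomaly

low = risk_score([10, 4, 0])      # typical login pattern
high = risk_score([3, 5000, 5])   # odd hour, distant geo, many failures
print(f"low={low:.2f} high={high:.2f}")
```

In a production system the score would be computed per request and fed into an access policy; here the point is only how the two learning styles complement each other.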

Three Reasons Why Killing Passwords Improves Your Cloud Security

Jack Dorsey's Twitter account getting hacked by having his telephone number transferred to another account without his knowledge is a wake-up call to everyone about how vulnerable mobile devices are. The hackers relied on SIM swapping, convincing Dorsey's telecom provider to bypass the passcode required to modify his account. With the telephone number transferred, the hackers accessed the Twitter founder's account. If the telecom provider had adopted zero trust at the customer's mobile device level, the hack would never have happened.

Cloud Security’s Weakest Link Is Mobile Device Passwords

The Twitter CEO's account getting hacked is the latest in a series of incidents that reflect how easy it is for hackers to gain access to cloud-based enterprise networks using mobile devices. Verizon's Mobile Security Index 2019 revealed that 67% of enterprises are less confident in the security of their mobile assets than in that of any other device. Mobile devices are one of the most porous threat surfaces a business has. They're also the fastest-growing threat surface, as every employee now relies on a smartphone as their ID. IDG's recent survey, completed in collaboration with MobileIron and titled Say Goodbye to Passwords, found that 89% of security leaders believe that mobile devices will soon serve as your digital ID to access enterprise services and data.

Because they're porous, proliferating, and turning into primary forms of digital ID, mobile devices and their passwords are a favorite onramp for hackers wanting access to companies' systems and data in the cloud. It's time to kill passwords and shut down the many breach attempts aimed at cloud platforms and the valuable data they contain.

Three Reasons Why Killing Passwords Improves Your Cloud Security

Killing passwords improves cloud security by:

  1. Eliminating privileged access credential abuse. Privileged access credentials are best sellers on the Dark Web, where hackers bid for credentials to the world’s leading banking, credit card, and financial management systems. Forrester estimates that 80% of data breaches involve compromised privileged credentials, and a recent survey by Centrify found that 74% of all breaches involved privileged access abuse. Killing passwords shuts down the most common technique hackers use to access cloud systems.
  2. Eliminating the threat of unauthorized mobile devices accessing business cloud services and exfiltrating data. Acquiring privileged access credentials and launching breach attempts from mobile devices is the most common hacker strategy today. By killing passwords and replacing them with a zero-trust framework, breach attempts launched from any mobile device using pirated privileged access credentials can be thwarted. Leaders in the area of mobile-centric zero-trust security include MobileIron, whose innovative approach to zero sign-on solves the problem of passwords at scale. When every mobile device is secured through a zero-trust platform built on a foundation of unified endpoint management (UEM) capabilities, zero sign-on across managed and unmanaged services becomes achievable for the first time.
  3. Giving organizations the freedom to take a least-privilege approach to granting access to their most valuable cloud applications and platforms. Identities are the new security perimeter, and mobile devices are their fastest-growing threat surface. Long-standing traditional approaches to network security, including “trust but verify,” have proven ineffective in stopping breaches. They've also shown a lack of scale when it comes to protecting a perimeter-less enterprise. What's needed is a zero-trust network that validates each mobile device, establishes user context, checks app authorization, verifies the network, and detects and remediates threats before granting secure access to any device or user. If Jack Dorsey's telecom provider had this in place, his and thousands of other people's telephone numbers would be safe today.
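The zero-trust sequence in the third point (validate the device, establish user context, check app authorization, verify the network, screen for threats, then grant least-privilege access) can be sketched as a deny-by-default policy check. All field names, roles, and scopes below are illustrative assumptions, not MobileIron's or any vendor's actual API.

```python
# Illustrative zero-trust access decision: every signal must pass before any
# access is granted, and failure of any single check denies by default.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    device_managed: bool    # device enrolled in UEM
    os_patched: bool        # OS at the required patch level
    user_role: str          # user context, used for least-privilege scoping
    app_authorized: bool    # requesting app is on the allow list
    network_trusted: bool   # not coming from a known-hostile network
    threat_detected: bool   # endpoint threat signal raised

# Least-privilege map: each role sees only the resources it needs (assumed roles).
ROLE_SCOPES = {"engineer": {"ci", "repos"}, "finance": {"erp"}}

def grant_access(req: AccessRequest) -> set:
    """Return granted resource scopes, or an empty set (deny by default)."""
    checks = [req.device_managed, req.os_patched, req.app_authorized,
              req.network_trusted, not req.threat_detected]
    if not all(checks):
        return set()
    return ROLE_SCOPES.get(req.user_role, set())

ok = grant_access(AccessRequest(True, True, "engineer", True, True, False))
bad = grant_access(AccessRequest(True, False, "engineer", True, True, False))
print(ok, bad)
```

The design choice to return an empty scope set rather than raise an error mirrors the zero-trust principle: absence of proof means absence of access, with no password anywhere in the flow.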

Conclusion

The sooner organizations move away from being so dependent on passwords, the better. The three reasons why killing passwords improves cloud security are just the beginning. Imagine how much more effective distributed DevOps teams will be when security isn't a headache for them anymore and they can get to the cloud-based resources they need to build apps. And with more organizations adopting a mobile-first development strategy, it makes sense to have a mobile-centric zero-trust network ingrained in key steps of the DevOps process. That's the future of cloud security, starting with the DevOps teams creating the next generation of apps today.

State Of AI And Machine Learning In 2019

  • Marketing and Sales prioritize AI and machine learning higher than any other department in enterprises today.
  • In-memory analytics and in-database analytics are the most important to Finance, Marketing, and Sales when it comes to scaling their AI and machine learning modeling and development efforts.
  • R&D’s adoption of AI and machine learning is the fastest of all enterprise departments in 2019.

These and many other fascinating insights are from Dresner Advisory Services' 6th annual 2019 Data Science and Machine Learning Market Study (client access required), published last month. The study found that advanced initiatives related to data science and machine learning, including data mining, advanced algorithms, and predictive analytics, rank 8th among the 37 technologies and initiatives surveyed in the study. Please see page 12 of the survey for an overview of the methodology.

“The Data Science and Machine Learning Market Study is a progression of our analysis of this market, which began in 2014 as an examination of advanced and predictive analytics,” said Howard Dresner, founder and chief research officer at Dresner Advisory Services. “Since that time, we have expanded our coverage to reflect changes in sentiment and adoption, and have added new criteria, including a section covering neural networks.”

Key insights from the study include the following:

  • Data mining, advanced algorithms, and predictive analytics are among the highest-priority projects for enterprises adopting AI and machine learning in 2019. Reporting, dashboards, data integration, and advanced visualization are the leading technologies and initiatives strategic to Business Intelligence (BI) today. Cognitive BI (artificial-intelligence-based BI) ranks comparatively lower at 27th among priorities. The following graphic prioritizes the 27 technologies and initiatives strategic to business intelligence:

  • 40% of Marketing and Sales teams say data science, encompassing AI and machine learning, is critical to their success as a department. Marketing and Sales lead all departments in how significant they consider AI and machine learning to pursuing and accomplishing their growth goals. Business Intelligence Competency Centers (BICC), R&D, and executive management audiences are the next most interested, and all top four roles cited carry comparably high combined “critical” and “very important” scores above 60%. The following graphic compares the importance levels by department for data science, including AI and machine learning:

  • R&D, Marketing, and Sales' high level of shared interest across multiple feature areas reflects combined efforts to define new revenue growth models using AI and machine learning. Marketing, Sales, R&D, and Business Intelligence Competency Center (BICC) respondents report the most significant interest in having a range of regression models to work with in AI and machine learning applications. Marketing and Sales are also most interested in the next three top features, including hierarchical clustering, textbook statistical functions, and having a recommendation engine included in the applications and platforms they purchase. Dresner's research team believes that the high shared interest in multiple feature areas by R&D, Marketing, and Sales is a leading indicator that enterprises are preparing to pilot AI and machine learning-based strategies to improve customer experiences and drive revenue. The following graphic compares interest and probable adoption by functional area of the enterprises interviewed:

  • 70% of R&D departments and teams are most likely to adopt data science, AI, and machine learning, leading all functions in an enterprise. Dresner’s research team sees the high level of interest by R&D teams as a leading indicator of broader enterprise adoption in the future. The study found 33% of all enterprises interviewed have adopted AI and machine learning, with the majority of enterprises having up to 25 models. Marketing & Sales lead all departments in their current evaluation of data science and machine learning software.

  • Financial Services & Insurance, Healthcare, and Retail/Wholesale say data science, AI, and machine learning are critical to succeeding in their respective industries. 27% of Financial Services & Insurance, 25% of Healthcare, and 24% of Retail/Wholesale enterprises say data science, AI, and machine learning are critical to their success. Less than 10% of educational institutions consider AI and machine learning vital to their success. The following graphic compares the importance of data science, AI, and machine learning by industry:

  • The Telecommunications industry leads all others in interest and adoption of recommendation engines and model management governance. The Telecommunications, Financial Services, and Technology industries have the highest level of interest in adopting a range of regression models and hierarchical clustering across all industry respondent groups interviewed. Healthcare respondents have much lower interest in these latter features but high interest in Bayesian methods and text analytics functions. Retail/Wholesale respondents are often least interested in analytical features. The following graphic compares industries by their level of interest and potential adoption of analytical features in data science, AI, and machine learning applications and platforms:

  • Support for a broad range of regression models, hierarchical clustering, and commonly used textbook statistical functions are the top features enterprises need in data science and machine learning platforms. Dresner's research team found these three features are considered the most important, or “must-have,” when enterprises are evaluating data science, AI, and machine learning applications and platforms. All enterprises surveyed also expect any data science application or platform they evaluate to include a recommendation engine as well as model management and governance. The following graphic prioritizes the most and least essential features enterprises expect to see in data science, AI, and machine learning software and platforms:

  • The top three usability features enterprises are prioritizing today are support for easy iteration of models, access to advanced analytics, and an intuitive, simple process for continuous modification of models. Support and guidance in preparing analytical data models and fast cycle times for analysis with data preparation are among the highest-priority usability features enterprises expect to see in AI and machine learning applications and platforms. It's interesting to see the usability attribute of not requiring a specialist to create, test, and run analytical models at the lower end of the usability rankings. Many AI and machine learning software vendors rely on not needing a specialist to use their applications as a differentiator, while the majority of enterprises place a higher value on support for easy iteration of models, as the graphic below shows:

  • 2019 is a record year for enterprises' interest in the data science, AI, and machine learning features they perceive as most needed to achieve their business strategies and goals. Enterprises most expect AI and machine learning applications and platforms to support a range of regression models, followed by hierarchical clustering and textbook statistical functions for descriptive statistics. Recommendation engines grew in popularity, tying as the second most important feature to respondents in 2019. Geospatial analysis and Bayesian methods were flat or slightly less important compared to 2018. The following graphic compares six years of interest in data science, AI, and machine learning techniques:
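As a concrete illustration of two of the most-requested capabilities above, regression models and hierarchical clustering, here is a minimal sketch using SciPy on toy customer data; the data, features, and cluster count are assumptions for demonstration only.

```python
# Toy demonstration of hierarchical clustering plus a simple regression,
# the two feature categories survey respondents ranked highest.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Synthetic customers: [annual_spend, visits_per_month] in two rough groups.
group_a = rng.normal([100, 2], [10, 0.5], size=(20, 2))
group_b = rng.normal([500, 8], [30, 1.0], size=(20, 2))
X = np.vstack([group_a, group_b])

# Hierarchical (agglomerative) clustering with Ward linkage, cut at 2 clusters.
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")

# Textbook regression: does annual spend predict visit frequency?
fit = linregress(X[:, 0], X[:, 1])
print(sorted(set(labels.tolist())), round(fit.slope, 4))
```

The same two primitives, a clustering routine and a regression fit, are what the surveyed enterprises expect any data science platform to expose out of the box.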

5 Key Insights From Absolute’s 2019 Endpoint Security Trends Report

  • Endpoint security tools are 24% of all IT security spending, and by 2020 global IT security spending will reach $128B according to Morgan Stanley Research.
  • 70% of all breaches still originate at endpoints, despite the increased IT spending on this threat surface, according to IDC.

To better understand the challenges organizations face in securing the proliferating number and type of endpoints, Absolute launched and published their 2019 Endpoint Security Trends Report. You can get a copy of the report here. Their findings and conclusions are noteworthy for every organization planning and implementing a cybersecurity strategy. Data gathered from over 1B change events on over 6M devices is the basis of the multi-phased methodology. The devices represent data from 12,000 anonymized organizations across North America and Europe. Each device had Absolute's Endpoint Resilience platform activated. The second phase of the study is based on exploratory interviews with senior executives from Fortune 500 organizations. For additional details on the methodology, please see page 12 of the study.

Key insights from the report include the following:

  1. Increasing security spending on protecting endpoints doesn't increase an organization's safety and, in certain cases, reduces it. Organizations are spending more on cybersecurity than ever before, yet they aren't achieving greater levels of safety and security. Gartner forecasts global information security and risk management spending to reach $174.5B in 2022, attaining a five-year Compound Annual Growth Rate (CAGR) of 9.2%. Improving endpoint controls is one of the highest-priority investments driving increased spending. Over 70% of all breaches still originate at endpoints, despite the millions of dollars organizations spend every year. It's possible to overspend on endpoint security and reduce its effectiveness, which is a key finding of the study. IBM Security's most recent Cost of a Data Breach Report 2019 found that the average cost of a data breach in the U.S. grew from $3.54M in 2006 to $8.19M in 2019, a 130% increase in 13 years.
  2. The more complex and layered the endpoint protection, the greater the risk of a breach. One of the fascinating findings from the study is that the greater the number of agents a given endpoint has, the higher the probability it's going to be breached. Absolute found that a typical device has ten or more endpoint security agents installed, each often conflicting with the others. MITRE's cybersecurity research practice found there are, on average, ten security agents on each device, and over 5,000 common vulnerabilities and exposures (CVEs) were found on the top 20 client applications in 2018 alone. Enterprises are using a diverse array of endpoint agents, including encryption, AV/AM, and Endpoint Detection and Response (EDR). This wide array of endpoint solutions makes it nearly impossible to standardize a specific test to ensure security and safety without sacrificing speed. Absolute found organizations are validating their endpoint configurations using live deployments that often break and take valuable time to troubleshoot. The following graphic from the study illustrates how endpoint security is driving risk:

  3. Endpoint security controls and their associated agents degrade and lose effectiveness over time. Over 42% of endpoints experience encryption failures, leaving entire networks at risk of a breach. Encryption agents are most commonly disabled by users, malfunctioning, in an error state, or never correctly installed in the first place. Absolute found that endpoints often failed due to the fragile nature of their encryption agents' configurations. 2% of encryption agents fail every week, and over half of all encryption failures occurred within two weeks, fueling a constant 8% rate of decay every 30 days, with 100% of devices experiencing an encryption failure within one year. Multiple endpoint security solutions conflict with each other, creating more opportunities for breaches than they avert:

  4. One in five endpoint agents will fail every month, jeopardizing the security and safety of IT infrastructure while prolonging security exposures. Absolute found that 19% of endpoints on a typical IT network require at least one client or patch management repair monthly. The patch and client management agents often require repairs as well: 75% of IT teams reported at least two repair events, and 50% reported three or more. Additionally, 5% of endpoints could be considered inoperable, with 80 or more repair events in the same one-month period. Absolute also looked at the impact of families of applications to see how they affected the vulnerability of endpoints and discovered another reason why endpoint security is so difficult to attain with multiple agents. The 20 most common client applications had over 5,000 vulnerabilities published in 2018. If every device had only the top ten of those applications, that could still result in as many as 55 vulnerabilities per device just from those top ten apps, including browsers, OSs, and publishing tools. The following graphic summarizes the rates of failure for client/patch management agent health:

  5. Activating security at the device level creates a persistent connection to every endpoint in a fleet, enabling greater resilience organization-wide. By having a persistent, unbreakable connection to data and devices, organizations can achieve greater visibility and control over every endpoint. Organizations choosing this approach to endpoint security are unlocking the value of their existing hardware and network investments. Most important, they attain resilience across their networks. When an enterprise network has persistence designed in at the device level, there's a constant, unbreakable connection to data and devices that identifies and thwarts breach attempts in real time.
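A back-of-the-envelope calculation shows how quickly the failure rates cited above compound over a year. This sketch assumes independent failures per period, a simplification of the report's device-level methodology, so the outputs are illustrative rather than a reproduction of Absolute's figures.

```python
# Compound the per-period failure rates from the report into the probability
# of at least one failure over a year, assuming independent periods.
weekly_encryption_failure = 0.02   # 2% of encryption agents fail every week
monthly_decay = 0.08               # 8% rate of decay every 30 days
monthly_agent_repair = 0.19        # 19% of endpoints need a repair monthly

def cumulative(p_per_period: float, periods: int) -> float:
    """P(at least one failure) = 1 - P(no failure in any period)."""
    return 1 - (1 - p_per_period) ** periods

year_weekly = cumulative(weekly_encryption_failure, 52)
year_monthly = cumulative(monthly_decay, 12)
year_repairs = cumulative(monthly_agent_repair, 12)
print(f"{year_weekly:.0%} {year_monthly:.0%} {year_repairs:.0%}")
```

Even under this simplified independence assumption, a majority of a fleet experiences at least one encryption failure within a year, which is directionally consistent with the report's finding that failures become near-universal over twelve months.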

Bottom Line: Identifying and thwarting breaches needs to start at the device level, relying on secured, persistent connections that enable endpoints to better detect vulnerabilities, defend themselves, and achieve greater resilience overall.

How AI Is Protecting Against Payments Fraud

  • 80% of fraud specialists using AI-based platforms believe the technology helps reduce payments fraud.
  • 63.6% of financial institutions that use AI believe it is capable of preventing fraud before it happens, making it the most commonly cited tool for this purpose.
  • Fraud specialists unanimously agree that AI-based fraud prevention is very effective at reducing chargebacks.
  • The majority of fraud specialists (80%) have seen AI-based platforms reduce false positives, reduce payments fraud, and prevent fraud attempts.

AI is proving very effective in battling fraud, based on results achieved by financial institutions as reported by senior executives in a recent survey, the AI Innovation Playbook, published by PYMNTS in collaboration with Brighterion. The study is based on interviews with 200 financial executives from commercial banks, community banks, and credit unions across the United States. For additional details on the methodology, please see page 25 of the study. One of the more noteworthy findings is that financial institutions with over $100B in assets are the most likely to have adopted AI; the study found 72.7% of firms in this asset category are currently using AI for payment fraud detection.

Taken together, the findings from the survey reflect how AI thwarts payments fraud and deserves to be a high priority in any digital business today. Companies, including Kount and others, are making strides in providing AI-based platforms, further reducing the risk of the most advanced, complex forms of payments fraud.

Why AI Is Perfect For Fighting Payments Fraud

Of the advanced technologies available for reducing false positives, reducing and preventing fraud attempts, and reducing manual reviews of potential payment fraud events, AI is ideally suited to provide the scale and speed needed to take on these challenges. More specifically, AI's ability to interpret trend-based insights from supervised machine learning, coupled with entirely new knowledge gained from unsupervised machine learning algorithms, is reducing the incidence of payments fraud. By combining both machine learning approaches, AI can discern whether a given transaction or series of financial activities is fraudulent, alerting fraud analysts immediately and taking action through predefined workflows. The following are the main reasons why AI is perfect for fighting payments fraud:

  • Payments fraud-based attacks are growing in complexity and often have a completely different digital footprint or pattern, sequence, and structure, making them undetectable using rules-based logic and predictive models alone. For years, e-commerce sites, financial institutions, retailers, and every other type of online business relied on rules-based payment fraud prevention systems. In the earlier years of e-commerce, rules and simple predictive models could identify most types of fraud. Not so today, as payment fraud schemes have become more nuanced and sophisticated, which is why AI is needed to confront these challenges.
  • AI brings scale and speed to the fight against payments fraud, providing digital businesses with an immediate advantage in battling the many risks and forms of fraud. What's fascinating about the AI companies offering payments fraud solutions is how they're trying to out-innovate each other when it comes to real-time analysis of transaction data. Real-time transactions require real-time security. Fraud solution providers are doubling down on this area of R&D today, delivering impressive results. The fastest I've seen is a 250-millisecond response rate for calculating risk scores using AI on the Kount platform, basing queries on a decade's worth of data in their universal data network. By combining supervised and unsupervised machine learning algorithms, Kount is delivering fraud scores that are twice as predictive as previous methods and faster than competitors'.
  • AI's many predictive analytics and machine learning techniques are ideal for finding anomalies in large-scale data sets in seconds. The more data a machine learning model has to train on, the more accurate its predictions. The breadth and depth of data a given machine learning algorithm learns from matters more than how advanced or complex the algorithm is. That's especially true when it comes to payments fraud detection, where machine learning algorithms learn what legitimate versus fraudulent transactions look like from a contextual intelligence perspective. By analyzing historical account data from a universal data network, supervised machine learning algorithms can attain greater accuracy and predictability. Kount's universal data network is among the largest, including billions of transactions over 12 years, 6,500 customers, 180+ countries and territories, and multiple payment networks. The data network includes different transaction complexities, verticals, and geographies, so machine learning models can be properly trained to predict risk accurately. That analytical richness includes data on physical real-world and digital identities, creating an integrated picture of customer behavior.
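The anomaly-detection point above can be made concrete with a small, self-contained sketch: a streaming detector that flags transactions deviating sharply from an account's learned history. The z-score threshold and minimum history length are assumed values, a toy stand-in for the unsupervised side of a production fraud pipeline, not Kount's or any vendor's actual model.

```python
# Streaming anomaly detection on transaction amounts using a running
# mean/variance (Welford's algorithm) and a z-score cutoff.
import math

class StreamingAnomalyDetector:
    """Flag transactions far outside an account's running history."""
    def __init__(self, threshold: float = 3.0, min_history: int = 10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.threshold = threshold
        self.min_history = min_history

    def observe(self, amount: float) -> bool:
        """Return True if `amount` is anomalous, then fold it into history."""
        anomalous = False
        if self.n >= self.min_history:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(amount - self.mean) / std > self.threshold:
                anomalous = True
        # Welford's online update keeps the scan O(1) per transaction.
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)
        return anomalous

detector = StreamingAnomalyDetector()
history = [42.0, 38.5, 45.0, 40.0, 41.2, 39.9, 44.1, 43.3, 37.8, 42.6]
flags = [detector.observe(a) for a in history]   # normal account activity
spike = detector.observe(5000.0)                 # sudden large transfer
print(any(flags), spike)
```

A real pipeline would score many features per transaction and blend this unsupervised signal with a supervised model trained on labeled fraud history, but the constant-time update is what makes millisecond-scale scoring feasible.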

Bottom Line: Payments fraud is insidious, difficult to stop, and can inflict financial harm on any business in minutes. Battling payments fraud needs to start with a pre-emptive strategy: train machine learning models to quickly spot and act on threats, then build that strategy out across every selling and service channel a digital business relies on.

Why Manufacturing Supply Chains Need Zero Trust

  • According to the 2019 Verizon Data Breach Investigation Report, manufacturing has experienced an increase in financially motivated breaches over the past couple of years, with most breaches involving phishing and the use of stolen credentials.
  • 50% of manufacturers report experiencing a breach over the last 12 months, 11% of which were severe according to Sikich’s 5th Manufacturing and Distribution Survey, 2019.
  • The data most commonly compromised in manufacturing includes credentials (49%), internal operations data (41%), and company secrets (36%), according to the 2019 Verizon Data Breach Investigation Report.
  • Manufacturers’ supply chain and logistics partners targeted by ransomware, including Aebi Schmidt, ASCO Industries, and COSCO Shipping Lines, have either had to cease operations temporarily while restoring from backup or have chosen to pay the ransom.

Small Suppliers Are A Favorite Target, Ask A.P. Møller-Maersk

Supply chains are renowned for how unsecured and porous they are, multiple layers deep. That’s because manufacturers often only password-protect administrator access privileges for trusted versus untrusted domains at the operating system level, haven’t implemented multi-factor authentication (MFA), and apply a “trust but verify” mindset only to their top suppliers. Many manufacturers don’t define, much less enforce, supplier security past the first tier of their supply chains, leaving the most vulnerable attack vectors unprotected.

It’s the smaller suppliers that hackers exploit to bring down many of the world’s largest manufacturing companies. An example of this is how an accounting software package from a small supplier, Linkos Group, was infected with a powerful ransomware agent, NotPetya, bringing one of the world’s leading shipping providers, A.P. Møller-Maersk, to a standstill. Linkos Group’s accounting software was first installed in the A.P. Møller-Maersk offices in Ukraine. The NotPetya ransomware took control of the local office servers, then propagated itself across the entire A.P. Møller-Maersk network. A.P. Møller-Maersk had to reinstall their 4,000 servers, 45,000 PCs, and 2,500 applications, and the damages were between $250M and $300M. Security experts consider the ransomware attack on A.P. Møller-Maersk to be one of the most devastating cybersecurity attacks in history. The attackers succeeded in using an accounting software update from one of A.P. Møller-Maersk’s smallest suppliers to bring down one of the world’s largest shipping networks. My recent post, How To Deal With Ransomware In A Zero Trust World, explains how taking a Zero Trust Privilege approach minimizes the risk of falling victim to ransomware attacks. Ultimately, treating identity as the new security perimeter needs to be how supply chains are secured. The following geographical analysis of the attack was provided by CargoSmart, showing how quickly NotPetya ransomware can spread through a global network:

CargoSmart provided a Vessel Monitoring Dashboard to monitor vessels during this time of recovery from the cyber attack.

Supply Chains Need To Treat Every Supplier In Their Network As A New Security Perimeter

The more integrated a supply chain, the greater the potential for breaches and ransomware attacks. And in supply chains that rely on privileged access credentials, it’s a certainty that hackers outside the organization, and even insiders, will use compromised credentials for financial gain or to disrupt operations. Treating every supplier and their integration points in the network as a new security perimeter is critical if manufacturers want to maintain operations in an era of accelerating cybersecurity threats.

Taking a Zero Trust Privilege approach to securing privileged access credentials will help alleviate the leading cause of breaches in manufacturing today, which is privileged access abuse. By taking a “never trust, always verify, and enforce least privilege” approach, manufacturers can protect the “keys to the kingdom,” which are the credentials hackers exploit to take control over an entire supply chain network.

Instead of relying on trust but verify or trusted versus untrusted domains at the operating system level, manufacturers need a consistent security strategy that scales from their largest to smallest suppliers. Zero Trust Privilege could have saved A.P. Møller-Maersk from being crippled by a ransomware attack by making ZTP-based security guardrails a prerequisite for every supplier doing business with them.
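The “never trust, always verify, and enforce least privilege” model described above can be sketched as a deny-by-default access decision. This is a minimal illustration with hypothetical names, not any vendor’s actual API: every request is evaluated on verified identity, granted roles, and contextual risk, and deeper supply chain tiers face stricter checks.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool   # e.g. MFA passed for this session
    supplier_tier: int        # 1 = direct supplier, higher = deeper in the chain
    requested_role: str       # role the supplier is asking to exercise
    granted_roles: frozenset  # least-privilege roles this identity actually holds
    risk_flags: int           # contextual risk signals, e.g. new device, unusual geography

def authorize(req: AccessRequest) -> bool:
    """Deny by default; grant only when every check passes."""
    if not req.identity_verified:                      # never trust: verify every request
        return False
    if req.requested_role not in req.granted_roles:    # enforce least privilege
        return False
    if req.risk_flags > 0 and req.supplier_tier > 1:   # deeper tiers get stricter scrutiny
        return False
    return True

ok = AccessRequest(True, 1, "read_po", frozenset({"read_po"}), 0)
third_tier = AccessRequest(True, 3, "read_po", frozenset({"read_po"}), 1)
print(authorize(ok))          # True
print(authorize(third_tier))  # False: risky context deep in the chain
```

The design choice worth noting is that there is no code path that grants access without an explicit check passing, which is exactly what distinguishes Zero Trust from the “trusted domain” model, where access inside the perimeter is assumed.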

Conclusion

Supply chains are the lifeblood of any production business, yet they are also among manufacturing’s most porous and easily compromised areas. As hackers become more brazen in their ransomware attempts against manufacturers and privileged access credentials are increasingly sold on the Dark Web, manufacturers need a sense of urgency to combat these threats. Taking a Zero Trust approach to securing their supply chains and operations helps manufacturers implement least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment. By implementing least privilege access, manufacturers can minimize the attack surface, improve audit and compliance visibility, and reduce risk, complexity, and costs for the modern, hybrid manufacturing enterprise.
