Why Security Needs To Be Integral To DevOps

Bottom Line: DevOps and security teams need to leave one-time gating inspections in the past and pursue a more collaborative real-time framework to achieve their shared compliance, security and time-to-market goals.

Shorter product lifecycles, the need to out-innovate competitors and the drive to exceed customer expectations with each new release are a few of the many reasons why DevOps is so popular today. Traditional approaches to collaboration between DevOps and security teams aren't working, however, and product releases are either falling behind schedule or being rushed to market with security gaps as a result.

Based on conversations with DevOps team leaders and my own experience on a DevOps team, the following factors are driving the urgency to integrate security into DevOps workflows:

  • Engineering, DevOps and security teams each have their own lexicon and way of communicating, reinforced by siloed systems.
  • Time-to-market and launch delays are common when engineering, DevOps and security don’t have a unified system to use that includes automation tools to help scale tasks and updates.
  • Developers are doing Application Security Testing (AST) with tools that aren’t integrated into their daily development environments, making the process time-consuming and challenging to get done.
  • Limiting security to the testing and deployment phases of the Software Development Lifecycle (SDLC) is a bottleneck that jeopardizes the critical path, launch date and compliance of any new project.
  • According to a DevSecOps Global Skills survey, 70% of DevOps team members have not been adequately trained on how to secure software.

Consider the volume of builds DevOps teams at software companies and enterprises produce daily, and the need to integrate security into DevOps becomes clear. Facebook does 50,000 to 60,000 builds a day on Android alone, according to research cited by Checkmarx, which is taking on the challenge of integrating DevOps and security into a unified workflow. Its Software Security Platform unifies DevOps with security, providing static and interactive application security testing, newly launched software composition analysis and developer AppSec awareness and training programs to reduce and remediate risk from software vulnerabilities.

Synchronizing Security Into DevOps Delivers Much Needed Speed & Scale

DevOps teams thrive in organizations built for speed, continuous integration, delivery and improvement. Contrast the high-speed, always-on nature of DevOps teams with the one-time gating inspections security teams use to verify regulatory, industry and internal security and compliance standards, and it's clear security's role in DevOps needs to change. Integrating security into DevOps is proving to be very effective at breaking through the roadblocks that stand in the way of getting projects done on time and launched into the market. Getting the security and DevOps teams onto the same development platform is needed to close the gaps between the two teams and accelerate development. Of the many approaches available for accomplishing this, Checkmarx's approach to integrating Application Security Testing into DevOps, shown below, is among the most comprehensive:

[Graphic: Checkmarx's approach to integrating Application Security Testing into DevOps]
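To make the pipeline integration concrete, here is a minimal sketch of a CI security gate that runs a static scan and fails a build on high-severity findings. The sast-scanner command and its JSON output format are hypothetical stand-ins for whichever AST tool a team wires into its pipeline, not Checkmarx's actual CLI.

```python
# Minimal CI security-gate sketch: run a static analysis scan and break the
# build on high-severity findings. "sast-scanner" and its JSON schema are
# hypothetical placeholders for a team's chosen AST tooling.
import json
import subprocess
import sys

MAX_HIGH_SEVERITY = 0  # fail the build on any high-severity finding

def run_scan(source_dir: str = ".") -> list:
    result = subprocess.run(
        ["sast-scanner", "--format", "json", source_dir],  # hypothetical CLI
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("findings", [])

def main() -> int:
    findings = run_scan()
    high = [f for f in findings if f.get("severity") == "HIGH"]
    for finding in high:
        print(f"HIGH: {finding.get('rule')} in {finding.get('file')}:{finding.get('line')}")
    if len(high) > MAX_HIGH_SEVERITY:
        print(f"Security gate failed: {len(high)} high-severity findings")
        return 1
    print("Security gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point of a gate like this is that it runs on every build or pull request, so developers see findings inside their normal workflow rather than at a late-stage inspection.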

Making DevOps A Core Strength Of An Organization

By 2025 nearly two-thirds of enterprises will be prolific software producers with code deployed daily to meet constant demand and over 90% of new apps will be cloud-native, enabling agility and responsiveness according to IDC FutureScape: Worldwide IT Industry 2020 Predictions. IDC also predicts there will be 1.6 times more developers than now, all working in collaborative systems to enable innovation. The bottom line is that every company will be a technology company in the next five years according to IDC’s predictions.

To capitalize on the pace of change happening today driven by DevOps, organizations need frameworks that deliver the following:

  • Greater agility and market responsiveness – Organizations need to create operating models that integrate business, operations and technology into stand-alone businesses-within-the-business domains.
  • Customer Centricity at the core of business models – The best organizations leverage a connected economy to ensure that they can meet and exceed customer expectations.  By creating an ecosystem that caters to every touchpoint of the customer journey using technology, these organizations seem to anticipate their customer needs and deliver the goods and services needed at the right time via the customer’s preferred channel.  As a result, successful organizations see growth from their existing customer base while they acquire new ones.
  • Have a DNA that delivers a wealth of actionable insights – Organizations well-positioned to turn data into insights that drive actions to serve and anticipate customer needs are ahead of competitors today regarding time-to-market. These organizations know how to pull all the relevant information, capabilities and people together so they can act quickly and efficiently in making the right decisions. They are the companies that will know the outcome of their actions before they take them and they will be able to anticipate their success.

BMC's Autonomous Digital Enterprise framework, shown below, highlights how companies with an innovation mindset and the three common traits of agility, customer centricity and actionable insights at their foundation have greater consistency and technology maturity in their business model characteristics compared to competitors. They can also flex and support fundamental operating model characteristics and key technology-enabled tenets. These tenets include delivering a transcendent customer experience, automating customer transactions, providing automation everywhere, treating enterprise DevOps as a natural evolution of DevOps, enabling the business to be more data-driven and achieving more adaptive cybersecurity in a Zero-Trust framework.

[Graphic: BMC's Autonomous Digital Enterprise framework]

Conclusion

Meeting the challenge of integrating security in DevOps provides every organization with an opportunity to gain greater agility and market responsiveness, become more customer-centric and develop the DNA to be more data-driven. These three goals are achievable when organizations look to how they can build on their existing strengths and reinvent themselves for the future. As DevOps goes, so goes the success of any organization. Checkmarx's approach to putting security at the center of DevOps is helping to break down the silos that exist between engineering, DevOps and security. To attain greater customer-centricity, become more data-driven and out-innovate competitors, organizations are adopting frameworks including BMC's Autonomous Digital Enterprise to reinvent themselves and be ready to compete in the future now.

 

 

 

 

Dissecting The Twitter Hack With A Cybersecurity Evangelist

Bottom Line: Shattering the false sense of security in tech, the recent Twitter hack blended altruism, fame, greed, social engineering via SIM swapping and insider threats to steal $120,000 from victims when the economic and political damage could have been far worse.

Targeting the most influential celebrities on Twitter, hackers orchestrated a social engineering-based attack Wednesday promoting a cryptocurrency scam. Business leaders, celebrities, politicians and billionaires' accounts were hacked using Twitter's administrative tools. Personal Twitter accounts hacked include those of Amazon CEO Jeff Bezos, Joe Biden, Tesla CEO Elon Musk, President Barack Obama, Bill Gates, Warren Buffett and others. Apple and Uber's Twitter accounts were also hacked.

Using SIM swapping, in which threat actors trick, coerce or bribe employees of their victims to gain access to privileged account credentials and administrative tools, hackers were able first to change the email address of each targeted account. Next, two-factor authentication was turned off, so when an alert about the account change was sent, it went to the hacker's email address. With the targeted accounts under their control, hackers began promoting their cryptocurrency scam. While not all details of the attack have surfaced, Motherboard's story of how hackers convinced a Twitter employee to help them hijack accounts makes for fascinating reading.

Dissecting The Hack

Interested in dissecting the hack from a cybersecurity standpoint, I contacted Dr. Torsten George, Cybersecurity Evangelist and industry expert from Centrify. Torsten is also a leading authority on privileged access management and how to thwart breaches involving privileged access credentials.

Louis: What was your initial impression when news of the hack broke and what did you believe caused such a massive hack of celebrity and leading political figures' accounts this past week?

Torsten: When the news broke, the media probably polled other security experts and the first reaction was, 'Oh, that's a massive attack, most likely a credential-based attack,' because 80% of today's data breaches go back to privileged access abuse. They are typically first triggered by phishing attacks, the precursor to many attacks, where attackers try to capture these credentials and then leverage them to attack their victims' organizations.

So, the breaking news indicated that most likely, somebody was able to leverage a compromised credential to enter into the Twitter environment and take over accounts. However, more and more information became available, with screenshots being shared of internal Twitter tools. For me, that raised a red flag, because in a typical attack pattern we’re seeing three distinct phases in the cyber-attack lifecycle: the compromise, the exploration phase and the exfiltration of sensitive data, which includes covering up tracks and potentially creating a backdoor for future attacks.

When performing reconnaissance, hackers commonly try to identify regular IT schedules, security measures, network traffic flows and scan the entire IT environment to gain an accurate picture of the network resources, privileged accounts and services. Domain controllers, Active Directory and servers are prime reconnaissance targets to hunt for additional privileged credentials and privileged access.

They wouldn’t necessarily look for administrative tools that could be leveraged for their attack unless they have intimate knowledge that those tools exist in the victim’s environment — be it by having worked for the company in the past or representing an insider threat.

Louis: What’s the anatomy of an insider attack, based on your experience?

Torsten: As was later confirmed by Twitter, it became very apparent that this is a case of insider threats, where you have an insider that has been leveraged for this attack. The most common insider threats can be defined by the intent and motivation of the individuals involved. The 2019 Verizon Insider Threat Report defines five distinct insider threats based on data breach scenarios and they all have excellent, accurate names: the Careless Worker, the Inside (often recruited) Agent, the Disgruntled Employee, the Malicious Insider and the Feckless Third-Party.

Considering the global environment we’re facing right now, with Covid-19 and other related economic hardships, the risk of insider threats is exacerbated, as pending furloughs or pay cuts may tempt employees to exfiltrate data to secure a new job or make up for income losses.

So a privileged administrator might be more open to people that approach them and say, ‘Would you be willing to share with us your access credentials, or would you do something on our behalf to exfiltrate data or to manipulate data?’ That risk has increased dramatically across all industries.

So the first suspicion was a phishing attack, followed by compromised credentials, but it turned out to be an insider threat. Organizations need to be prepared for that.

Louis: What can companies do to reduce the likelihood a malicious insider will hack them?

Torsten: It becomes a little bit trickier when you deal with a malicious insider because they most likely know your environment, they might know your defense mechanisms and they might know the security tools that you're likely using. So they can bypass these security controls and try to gain control of data that they can then profit from.

Organizations have to rethink the way that they've structured their defense controls and truly take a defense-in-depth approach with different layers of defenses. The first layer that comes to mind in this particular case is multi-factor authentication (MFA), which is still low-hanging fruit. There are still many organizations out there that are not taking advantage of implementing MFA.

While MFA is highly recommended, it isn’t as effective against insider threats because they have that second factor of authentication and can pass those challenges. Organizations need to go beyond MFA if they want to have a layered security strategy.

Louis: What are some of the ways they can go beyond MFA to avoid being the victim of an insider threat?

Torsten: A very important component of your defense strategy should be the approach of zero standing privileges, which is something Gartner recommends to its clients. That means that I have normal privileges and entitlements to do my job, like answering emails and using the Internet, but that’s probably all I need. If I need more access, I’ll have to elevate my privilege for the time needed to do that particular task but then rescind that privilege once it’s done.

If I have zero standing privileges – even if somebody compromises my credential, even if I'm an insider – I don't have immediate access to the keys to the kingdom to do whatever I want.

And before privilege elevation, organizations should require context through a formal request. For example, require the user to submit a ticket through ServiceNow or any other IT Service Management platform to detail what they need to access, for how long and to do what. That way, there is an auditing trail and an approval process. If the threat actor – whether insider or not – doesn’t do this they don’t get privileged access to that target system.
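To illustrate the flow Torsten describes, here is a minimal sketch of a just-in-time elevation check gated on an approved, unexpired ticket. The classes, fields and grant window are illustrative assumptions, not ServiceNow's or any PAM vendor's actual API.

```python
# Minimal zero-standing-privileges sketch: privilege is elevated only when an
# approved, unexpired ticket covers the exact user/system combination, and the
# grant is time-boxed so it rescinds automatically. All names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Ticket:
    ticket_id: str
    requester: str
    target_system: str
    approved: bool
    expires_at: datetime

def request_elevation(user: str, target: str, ticket: Ticket,
                      max_window: timedelta = timedelta(hours=1)) -> dict:
    """Grant a short-lived privilege only if the approved ticket matches."""
    now = datetime.utcnow()
    if not ticket.approved:
        raise PermissionError("Ticket not approved; standing privilege stays at zero")
    if ticket.requester != user or ticket.target_system != target:
        raise PermissionError("Ticket does not cover this user and target system")
    if now >= ticket.expires_at:
        raise PermissionError("Ticket expired; a new approval is required")
    # The grant carries its own expiry, creating an auditable, revocable window.
    return {"user": user, "target": target, "granted_at": now,
            "revoke_at": min(ticket.expires_at, now + max_window)}
```

The value of this pattern is the audit trail and the automatic revocation: access exists only for the approved task and window, so a compromised credential or malicious insider gains nothing that stands open indefinitely.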

Louis: Besides those perhaps expected controls, what other controls might have helped in this particular scenario?

Torsten: Organizations should also take advantage of modern tools that leverage machine learning to look at user behavior and risk factors, so they can get a handle on these insider attacks. Most other security controls are tailored toward external threats first. Once you implement machine learning and user behavior analytics, that's where you can also capture insider threats.

Machine learning can look for suspicious activity, such as a target system being accessed outside of a typical maintenance window or an administrator logging in from a different location or device than usual. It can then trigger an MFA request and also issue a real-time alert, regardless of whether the MFA challenge is successfully resolved.
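As a rough illustration of that kind of risk scoring, the sketch below combines a few behavioral signals into a score that triggers step-up MFA or a real-time alert. The signals, weights and thresholds are illustrative assumptions, not any vendor's model.

```python
# Minimal risk-scoring sketch for a privileged session request. Signals,
# weights and thresholds are illustrative, not a production UBA model.
def score_login(event: dict) -> float:
    """Return a risk score in [0, 1] from simple boolean behavior signals."""
    weights = {
        "outside_maintenance_window": 0.35,
        "new_device": 0.25,
        "new_location": 0.25,
        "dormant_account": 0.15,
    }
    return sum(w for signal, w in weights.items() if event.get(signal))

def enforce(event: dict) -> str:
    risk = score_login(event)
    if risk >= 0.7:
        return "block_and_alert"   # real-time alert to the SOC, session denied
    if risk >= 0.3:
        return "step_up_mfa"       # challenge the user before granting access
    return "allow"

# Example: an admin session from a new device outside the maintenance window
print(enforce({"outside_maintenance_window": True, "new_device": True}))  # step_up_mfa
```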

Furthermore, in the case of Twitter, there are privacy and regulatory concerns that could also be additional triggers for real-time alerts and to shut down this activity automatically. Regulations like the CCPA (California Consumer Privacy Act) and GDPR (General Data Protection Regulation) mean that platforms like Twitter have to be very careful with any access to or manipulation of a customer’s feed. That could – and should have – instantly triggered a real-time alert when an administrator was posting on behalf of a user.

Louis: Do you think this is going to be the start of an entirely new era of hacks where hackers will pay off internal employees for promotional messages?

Torsten: Quite frankly, we have seen an uptick since the start of the Covid-19 pandemic. And I believe now that this Twitter attack has been covered in the press so much, you will have copycats that will try to do the same. Some of them will also target social media platforms, but others might be a little bit smarter, because on social media it's easy to detect when something goes wrong. An industry like healthcare could be a prime target and there is already news that Russian hackers are attacking healthcare providers and research labs to try to gain access to vaccine research.

Louis: Given how significant this hack is in terms of the progression or the growing sophistication of threats, what are the top three predictions you have for the rest of 2020?

Torsten: Ransomware is an example of a technique that has changed quite significantly in two ways. It is no longer delivered only via email, but also via social media platforms, SMS messages and more. And ransomware is no longer focused only on shutting down business operations. The most recent example, with EDP Renewables North America, a subsidiary of a European electric utility, showed that hackers leveraged ransomware not to lock data down, but to exfiltrate it and then ask for a ransom from their victim to not publish the data on the Dark Web.

Second, as I’ve already covered, the current economic hardships of the pandemic will cause more people to jump on the bandwagon and become cybercriminals. And these aren’t the people you see in movies – dark characters in hoodies using sophisticated hacking techniques to breach the government. These are your neighbors, the little boys next door. For them it’s not a big deal to become a cyber-criminal.

Third, as you’d expect, the number of cyber-attacks will increase as a result and they will continue to find new and innovative ways to find the easiest way in. The Twitter incident taught us that there was no technology “breach” required. It was just finding the right person with the right privileges and paying them to do 25 Tweets. That’s an easy payday.

I think this whole crisis that we’re going through will see a major uptick in attacks from the traditional cyber hackers, but also from a whole bunch of newbies and greenhorns that will try out their luck and see if they can make a buck. Either by ransomware attacks, phishing attacks, social engineering or any combination thereof.

10 Ways AI Is Improving New Product Development

  • Startups' ambitious AI-based new product development is driving AI-related investment, with $16.5B raised across 695 deals in 2019, according to the PwC/CB Insights MoneyTree Report, Q1 2020.
  • AI expertise is a skill product development teams are ramping up their recruitment efforts to find, with over 7,800 open positions on Monster, over 3,400 on LinkedIn and over 4,200 on Indeed as of today.
  • One in ten enterprises now uses ten or more AI applications, expanding the Total Available Market for new apps and related products, including chatbots, process optimization and fraud analysis, according to MMC Ventures.

From startups to enterprises racing to get new products launched, AI and machine learning (ML) are making solid contributions to accelerating new product development. There are 15,400 job positions for DevOps and product development engineers with AI and machine learning today on Indeed, LinkedIn and Monster combined. Capgemini predicts the size of the connected products market will range between $519B to $685B this year with AI and ML-enabled services revenue models becoming commonplace.

Rapid advances in AI-based apps, products and services will also force the consolidation of the IoT platform market. The IoT platform providers concentrating on business challenges in vertical markets stand the best chance of surviving the coming IoT platform shakeout. As AI and ML get more ingrained in new product development, the IoT platforms and ecosystems supporting smarter, more connected products need to plan now for how they're going to keep up. Relying on technology alone, as many IoT platforms do today, isn't going to be enough to keep pace with the change that's coming. The following are 10 ways AI is improving new product development today:

  • The 14% of enterprises that are the most advanced in using AI and ML for new product development earn more than 30% of their revenues from fully digital products or services and lead their peers in successfully using nine key technologies and tools. PwC found that Digital Champions are significantly ahead in generating revenue from new products and services, with 29% of them earning more than 30% of revenues from products introduced within the last two years. Digital Champions also have high expectations for gaining greater benefits from personalization. The following graphic from Digital Product Development 2025: Agile, Collaborative, AI-Driven and Customer Centric, PwC, 2020 (PDF, 45 pp.) compares Digital Champions' success with AI and ML-based new product development tools versus their peers:

[Graphic: Digital Champions vs. their peers in the use of AI and ML-based new product development tools, PwC]

 

  • 61% of enterprises who are the most advanced using AI and ML (Digital Champions) use fully integrated Product Lifecycle Management (PLM) systems, compared to just 12% of organizations not using AI/ML today (Digital Novices). Product development teams that are the most advanced in their use of AI & ML achieve greater economies of scale, efficiency and speed gains across the three core areas of development shown below. Digital Champions concentrate on gaining time-to-market and speed advantages in the areas of Digital Prototyping, PLM, co-creation of new products with customers, Product Portfolio Management and Data Analytics and AI adoption:

[Graphic: Where Digital Champions concentrate to gain time-to-market and speed advantages, PwC]

  • AI is actively being used in the planning, implementation and fine-tuning of interlocking railway equipment product lines and systems. Engineer-to-order product strategies introduce an exponential number of product, service and network options. Optimizing product configurations requires an AI-based logic solver that can factor in all constraints and create a Knowledge Graph to guide deployment. Siemens' approach to using AI to find the optimal configuration out of 10⁹⁰ possible combinations provides insights into how AI can help with new product development on a large scale; a minimal constraint-solving sketch follows the graphic below. Source: Siemens, Next Level AI – Powered by Knowledge Graphs and Data Thinking, Siemens China Innovation Day, Michael May, Chengdu, May 15, 2019.

[Graphic: Siemens' AI-based approach to optimizing interlocking railway system configurations]
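To make the constraint-solving idea concrete, the sketch below enumerates valid configurations from a handful of hypothetical component options and compatibility rules. A production solver of the kind Siemens describes prunes an astronomically larger space with a knowledge graph; this only illustrates the underlying idea.

```python
# Minimal constraint-satisfaction sketch for product configuration.
# Components, options and rules are hypothetical; a real solver would
# prune the combinatorial space rather than enumerate it.
from itertools import product

OPTIONS = {
    "signal_module": ["S1", "S2"],
    "controller": ["C-basic", "C-redundant"],
    "power_supply": ["P-120", "P-240"],
}

def is_valid(cfg: dict) -> bool:
    # Rule 1: the redundant controller needs the higher-rated power supply.
    if cfg["controller"] == "C-redundant" and cfg["power_supply"] != "P-240":
        return False
    # Rule 2: signal module S2 is only certified with the redundant controller.
    if cfg["signal_module"] == "S2" and cfg["controller"] != "C-redundant":
        return False
    return True

def valid_configurations():
    keys = list(OPTIONS)
    for combo in product(*(OPTIONS[k] for k in keys)):
        cfg = dict(zip(keys, combo))
        if is_valid(cfg):
            yield cfg

for cfg in valid_configurations():
    print(cfg)  # 4 of the 8 raw combinations survive the constraints
```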

  • Eliminating the roadblocks to getting new products launched starts with using AI to improve demand forecast accuracy. Honeywell is using AI to reduce energy costs and negative price variance by tracking and analyzing price elasticity and price sensitivity. Honeywell is also integrating AI and machine-learning algorithms into procurement, strategic sourcing and cost management, getting solid returns across the new product development process; a minimal price-elasticity sketch follows the graphic below. Source: Honeywell Connected Plant: Analytics and Beyond (23 pp., PDF, no opt-in), 2017 Honeywell User's Group.

[Graphic: Honeywell Connected Plant analytics applied to procurement, strategic sourcing and cost management]
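As a simple illustration of the price-elasticity analysis mentioned above, the sketch below fits a log-log model to hypothetical price and demand data; the slope is the estimated elasticity, which can then be used to project demand at a proposed price.

```python
# Minimal price-elasticity sketch: in a log-log model ln(Q) = a + b*ln(P),
# the slope b is the price elasticity of demand. Data here is illustrative.
import numpy as np

prices = np.array([95.0, 100.0, 105.0, 110.0, 120.0])
units = np.array([1300.0, 1210.0, 1150.0, 1080.0, 960.0])

slope, intercept = np.polyfit(np.log(prices), np.log(units), 1)
print(f"Estimated price elasticity: {slope:.2f}")  # roughly -1.3 for this data

def projected_demand(price: float) -> float:
    """Project unit demand at a proposed price using the fitted model."""
    return float(np.exp(intercept) * price ** slope)

print(f"Projected units at $115: {projected_demand(115.0):.0f}")
```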

  • Relying on AI-based techniques to create and fine-tune propensity models that define the product line extensions and add-on products delivering the most profitable cross-sell and up-sell opportunities by product line, customer segment and persona. It's common to find data-driven new product development and product management teams using propensity models to define the products and services with the highest probability of being purchased. Too often, propensity models are based on imported data and built in Microsoft Excel, making their ongoing use time-consuming. AI is streamlining the creation, fine-tuning and revenue contribution of up-sell and cross-sell strategies by automating the entire process. The screen below is an example of a propensity model created in Microsoft Power BI, followed by a minimal modeling sketch.

[Screen: Example propensity model created in Microsoft Power BI]
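Here is a minimal propensity-model sketch using logistic regression; the features, training data and scores are invented for illustration and unrelated to the Power BI example above.

```python
# Minimal propensity-model sketch: estimate the probability a customer buys
# an add-on product with logistic regression. Data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [orders_last_12m, avg_order_value, owns_base_product (0/1)]
X = np.array([
    [2, 150, 0], [8, 420, 1], [1, 90, 0], [6, 310, 1],
    [4, 200, 1], [9, 510, 1], [0, 60, 0], [5, 275, 0],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = bought the add-on

model = LogisticRegression().fit(X, y)

# Score new customers and rank them for an up-sell or cross-sell campaign.
prospects = np.array([[7, 380, 1], [1, 110, 0]])
for features, p in zip(prospects, model.predict_proba(prospects)[:, 1]):
    print(f"{features} -> propensity to buy: {p:.2f}")
```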

  • AI is enabling the next generation of frameworks that reduce time-to-market while improving product quality and flexibility in meeting unique customization requirements on every customer order. AI is making it possible to better synchronize suppliers, engineering, DevOps, product management, marketing, pricing, sales and service to ensure a higher probability of a new product succeeding in the market. Leaders in this area include BMC's Autonomous Digital Enterprise (ADE). BMC's ADE framework shows the potential to deliver next-generation business models for growth-minded organizations looking to run and reinvent their businesses with AI/ML capabilities and deliver value with competitive differentiation enabled by agility, customer centricity and actionable insights. The ADE framework is capable of flexing and responding more quickly to customer requirements than competitive frameworks due to the following five factors: a proven ability to deliver a transcendent customer experience; automated customer interactions and operations across distributed organizations; treating enterprise DevOps as a natural evolution of software DevOps; creating the foundation for a data-driven business that operates with a data mindset and the analytical capabilities to enable new revenue streams; and a platform well-suited for adaptive cybersecurity. Taken together, BMC's ADE framework is what the future of digitally-driven business frameworks that can scale to support AI-driven new product development looks like. The following graphic compares the BMC ADE framework (left) and the eight factors driving digital product development as defined by PwC (right) through their extensive research. For more information on BMC's ADE framework, please see BMC's Autonomous Digital Enterprise site. For additional information on PwC's research, please see the document Digital Product Development 2025: Agile, Collaborative, AI-Driven and Customer Centric, PwC, 2020 (PDF, 45 pp.).

[Graphic: BMC's ADE framework (left) compared with PwC's eight factors driving digital product development (right)]

  • Using AI to analyze and provide recommendations on how product usability can be improved continuously. It's common for DevOps, engineering and product management to run A/B tests and multivariate tests to identify the usability features, workflows and app & service responses customers prefer; a minimal A/B-testing sketch follows this list. Based on personal experience, one of the most challenging aspects of new product development is designing an effective, engaging and intuitive user experience that turns usability into a strength for the product. When AI techniques are part of the core new product development cycle, including usability, delivering enjoyable customer experiences becomes possible. Instead of a new app, service or device being a chore to use, AI can provide insights to make the experience intuitive and even fun.
  • Forecasting demand for new products, including the causal factors that most drive new sales, is an area where AI is being applied today with strong results. Approaches vary widely, from the pragmatic – asking channel partners and indirect and direct sales teams how many units of a new product they will sell – to advanced statistical models. AI and ML are proving valuable at taking into account causal factors that influence demand but weren't previously known.
  • Designing the next generation of Nissan vehicles using AI is streamlining new product development, trimming weeks off new vehicle development schedules. Nissan's pilot program for using AI to fast-track new vehicle designs is called DriveSpark. It was launched in 2016 as an experimental program and has since proven valuable for accelerating new vehicle development while ensuring compliance and regulatory requirements are met. Nissan has also used AI to extend the lifecycles of existing models. For more information, see the DriveSpark article, "Nissan's Idea: Let An Artificial Intelligence Design Our Cars," September 2016.
  • Using generative design algorithms that rely on machine learning techniques to factor in design constraints and provide an optimized product design. Having constraint-optimizing logic within a CAD design environment helps GM attain the goal of rapid prototyping. Designers provide definitions of the functional requirements, materials, manufacturing methods and other constraints. In May 2018, General Motors adopted Autodesk generative design software to optimize for weight and other key product criteria essential for the parts being designed to succeed with additive manufacturing. The solution was recently tested with the prototyping of a seatbelt bracket part, which resulted in a single-piece design that is 40% lighter and 20% stronger than the original eight-component design. Please see the Harvard Business School case analysis, Project Dreamcatcher: Can Generative Design Accelerate Additive Manufacturing?, for additional information.
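As referenced in the usability item above, here is a minimal sketch of the significance check behind an A/B test of two user flows, using a two-proportion z-test on task-completion counts; the counts and threshold are illustrative.

```python
# Minimal A/B test sketch: two-proportion z-test comparing task-completion
# rates for two onboarding flows. Counts are illustrative.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return p_a, p_b, z, p_value

# Variant B (redesigned workflow) vs. variant A (current flow)
p_a, p_b, z, p_value = two_proportion_z(conv_a=420, n_a=1000, conv_b=468, n_b=1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p_value:.3f}")
# With p below 0.05, the team would treat B's higher completion rate as real.
```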

Additional reading:

2020 AI Predictions, Five ways to go from reality check to real-world payoff, PwC Consulting

Accenture, Manufacturing The Future, Artificial intelligence will fuel the next wave of growth for industrial equipment companies (PDF, 20 pp., no opt-in)

AI Priorities February 2020 5 ways to go from reality check to real-world pay off, PwC, February, 2020 (PDF, 16 pp.)

Anderson, M. (2019). Machine learning in manufacturing. Automotive Design & Production, 131(4), 30-32.

Bruno, J. (2019). How the IIoT can change business models. Manufacturing Engineering, 163(1), 12.

Digital Factories 2020: Shaping The Future Of Manufacturing, PwC DE., 2017 (PDF, 48 pp.)

Digital Product Development 2025: Agile, Collaborative, AI Driven and Customer Centric, PwC, 2020 (PDF, 45 pp.)

Enabling a digital and analytics transformation in heavy-industry manufacturing, McKinsey & Company, December 19, 2019

Global Digital Operations 2018 Survey, Strategy&, PwC, 2018

Greenfield, D. (2019). Advice on scaling IIoT projects. ProFood World

Hayhoe, T., Podhorska, I., Siekelova, A., & Stehel, V. (2019). Sustainable manufacturing in industry 4.0: Cross-sector networks of multiple supply chains, cyber-physical production systems and AI-driven decision-making. Journal of Self-Governance and Management Economics, 7(2), 31-36.

Industry’s fast-mover advantage: Enterprise value from digital factories, McKinsey & Company, January 10, 2020

Kazuyuki, M. (2019). Digitalization of manufacturing process and open innovation: Survey results of small and medium-sized firms in Japan. St. Louis: Federal Reserve Bank of St Louis.

‘Lighthouse’ manufacturers lead the way—can the rest of the world keep up?  McKinsey & Company, January 7, 2019

Machine Learning in Manufacturing – Present and Future Use-Cases, Emerj Artificial Intelligence Research, last updated May 20, 2019, published by Jon Walker

Machine learning, AI are most impactful supply chain technologies. (2019). Material Handling & Logistics

MAPI Foundation, The Manufacturing Evolution: How AI Will Transform Manufacturing & the Workforce of the Future by Robert D. Atkinson, Stephen Ezell, Information Technology and Innovation Foundation (PDF, 56 pp., opt-in)

Mapping heavy industry’s digital-manufacturing opportunities, McKinsey & Company, September 24, 2018

McKinsey, AI in production: A game changer for manufacturers with heavy assets, by Eleftherios Charalambous, Robert Feldmann, Gérard Richter and Christoph Schmitz

McKinsey, Digital Manufacturing – escaping pilot purgatory (PDF, 24 pp., no opt-in)

McKinsey, Driving Impact and Scale from Automation and AI, February 2019 (PDF, 100 pp., no opt-in).

McKinsey, 'Lighthouse' manufacturers lead the way—can the rest of the world keep up?, by Enno de Boer, Helena Leurent and Adrian Widmer, January 2019.

McKinsey, Manufacturing: Analytics unleashes productivity and profitability, by Valerio Dilda, Lapo Mori, Olivier Noterdaeme and Christoph Schmitz, March, 2019

McKinsey/Harvard Business Review, Most of AI’s business uses will be in two areas,

Morey, B. (2019). Manufacturing and AI: Promises and pitfalls. Manufacturing Engineering, 163(1), 10.

Preparing for the next normal via digital manufacturing’s scaling potential, McKinsey & Company, April 10, 2020

Reducing the barriers to entry in advanced analytics. (2019). Manufacturing.Net,

Scaling AI in Manufacturing Operations: A Practitioners Perspective, Capgemini, January, 2020

Seven ways real-time monitoring is driving smart manufacturing. (2019). Manufacturing.Net,

Siemens, Next Level AI – Powered by Knowledge Graphs and Data Thinking, Siemens China Innovation Day, Michael May, Chengdu, May 15, 2019

Smart Factories: Issues of Information Governance Manufacturing Policy Initiative School of Public and Environmental Affairs Indiana University, March 2019 (PDF, 68 pp., no opt-in)

Smartening up with Artificial Intelligence (AI) – What’s in it for Germany and its Industrial Sector? (52 pp., PDF, no opt-in) McKinsey & Company.

Team predicts the useful life of batteries with data and AI. (2019, March 28). R & D.

The AI-powered enterprise: Unlocking the potential of AI at scale, Capgemini Research, July 2020

The Future of AI and Manufacturing, Microsoft, Greg Shaw (PDF, 73 pp., no opt-in).

The Rise of the AI-Powered Company in the Postcrisis World, Boston Consulting Group, April 2, 2020

Top 8 Data Science Use Cases in Manufacturing, ActiveWizards: A Machine Learning Company Igor Bobriakov, March 12, 2019

Walker, M. E. (2019). Armed with analytics: Manufacturing as a martial art. Industry Week

Wang, J., Ma, Y., Zhang, L., Gao, R. X., & Wu, D. (2018). Deep learning for smart manufacturing: Methods and applications. Journal of Manufacturing Systems, 48, 144–156.

Zulick, J. (2019). How machine learning is transforming industrial production. Machine Design

5 Mistakes That Threaten Infrastructure Cybersecurity And Resilience

 

Bottom line: With many IT budgets under scrutiny, cybersecurity teams are expected to do more with less, prioritizing spending that delivers the greatest ROI while avoiding the top five mistakes that threaten their infrastructures.

In a rush to reduce budgets and spending, cybersecurity teams and the CISOs that lead them need to avoid the mistakes that can thwart cybersecurity strategies and impede infrastructure performance. Cutting budgets too deep and too fast can turn into an epic fail from a cybersecurity standpoint. What I’ve found is that CIOs are making decisions based on budget requirements, while CISOs are looking out for the security of the company.

Based on its ongoing interviews with CIOs, Gartner is predicting an 8% decline in worldwide IT spending this year. Cybersecurity projects that don't deliver a solid ROI are already out of IT budgets. Prioritizing and trimming projects to achieve tighter cost optimization is how CIOs and their teams are reshaping their budgets today. CIOs say the goal is to keep the business running as securely as possible, not to attain perfect cybersecurity.

Despite the unsettling, rapid rise of cyber-attacks, including a 667% increase in spear-phishing email attacks related to Covid-19 since February alone, CIOs often trim IT budgets starting with cybersecurity. The current economic downturn is making it clear that cybersecurity is more of a business strategy than an IT one, as spending gets prioritized by the best-to-worst business case.

Five Mistakes No CISO Wants To Make

One of the hardest parts of a CISO’s job is deciding which projects will continue to be funded and who will be responsible for leading them, so they deliver value. It gets challenging fast when budgets are shrinking and competitors actively recruit the most talented team members. Those factors taken together create the perfect conditions for the five mistakes that threaten the infrastructure cybersecurity and resilience of any business.

The five mistakes no CISO wants to make include the following:

1.   No accountability for the company's crown jewels. Privileged access credentials continue to be the primary target for cyber-attackers. However, many companies just went through a challenging sprint to make sure all employees have secure remote access to enable Covid-19 work-from-home policies. Research by Centrify reveals that 41% of UK businesses aren't treating outsourced IT and other third parties likely to have some form of privileged access as an equal security concern.

And while a password vault helps rotate credentials, it still relies on shared passwords and doesn’t provide any accountability to know who is doing what with them. That accountability can be introduced by moving to an identity-centric approach where privileged users log in as themselves and are authenticated using existing identity infrastructures (such as Microsoft Active Directory) to federate access with Centrify’s Privileged Access Service.

CISOs and their teams also continue to discount or underestimate the importance of privileged non-human identities that far outweigh human users as a cybersecurity risk in today’s business world. What’s needed is an enterprise-wide approach enabling machines to protect themselves across any network or infrastructure configuration.

2.   Cybersecurity budgets aren’t revised for current threatscapes. Even though many organizations are still in the midst of extensive digital transformation, their budgets often reflect the threatscape from years ago. This gives hackers the green light to get past antiquated legacy security systems to access and leverage modern infrastructures, such as cloud and DevOps. IT security leaders make this even more challenging by not listening to the front-line cybersecurity teams and security analysts who can see the patterns of breach attempts in data they review every day. In dysfunctional organizations, the analyst teams are ignored and cybersecurity suffers.

3. Conflicts of interest when CISOs report to CIOs and the IT budget wins. This happens in organizations that get hacked because the cybersecurity teams aren't getting the tools and support they need to do their jobs. With IT budgets facing the greatest scrutiny they've seen in a decade, CISOs need to have their own budget to defend. Otherwise, too many cybersecurity projects will be cut without thinking through the business implications of each. The bottom line is CISOs need to report to the CEO and have the autonomy to plan, direct, evaluate and course-correct their strategies with their teams.

4. The mistake of thinking cloud platforms’ Identity and Access Management (IAM) tools can secure an enterprise on their own. Cloud providers offer a baseline level of IAM support that might be able to secure workloads in their clouds adequately but is insufficient to protect a multi-cloud, hybrid enterprise. IT leaders need to consider how they can better protect the complex areas of IAM and Privileged Access Management (PAM) with these significant expansions of the enterprise IT estate.

Native IAM capabilities offered by AWS, Microsoft Azure, Google Cloud and other vendors provide enough functionality to help an organization get up and running to control access in their respective homogeneous cloud environments. However, often they lack the scale to fully address the more challenging, complex areas of IAM and PAM in hybrid or multi-cloud environments. Please see the post, The Truth About Privileged Access Security On AWS and Other Public Clouds, for additional information.

5. Exposing their organizations to a greater risk of breach and privileged access credential abuse by staying with legacy password vaults too long. Given the severity, speed and scale of breach attempts, IT leaders need to re-think their vault strategy and make them more identity-centric. Just as organizations have spent the past 5 – 10 years modernizing their infrastructure, they must also consider how to modernize how they secure access to it. More modern solutions can enforce a least privilege approach based on Zero Trust principles that grant just enough, just-in-time access to reduce risk. Forward-thinking organizations will be more difficult to breach by reorienting PAM from being vault-centric to identity-centric.

Conclusion

Decisions about what stays or goes in cybersecurity budgets this year could easily make or break careers for CISOs and CIOs alike. Consider the five mistakes mentioned here and the leading cause of breaches – privileged access abuse. Prioritizing privileged access management for human and machine identities addresses the most vulnerable threat vector for any business. Taking a more modern approach that is aligned to digital transformation priorities can often allow organizations to leverage their existing solutions to reduce risk and costs at the same time.

 

 

 

Why Cybersecurity Is Really A Business Problem

Bottom Line: Absolute’s 2020 Endpoint Resilience Report illustrates why the purpose of any cybersecurity program needs to be attaining a balance between protecting an organization and the need to keep the business running, starting with secured endpoints.

Enterprises that have taken a blank-check approach to cybersecurity spending in the past are facing the stark reality that all that spending may have made them more vulnerable to attacks. While cybersecurity spending grew at a Compound Annual Growth Rate (CAGR) of 12% in 2018, Gartner's latest projections predict a decline to only 7% CAGR through 2023. Nearly every CISO I've spoken with in the last three months says prioritizing cybersecurity programs by their ROI and contribution to the business is how funding gets done today.

Cybersecurity Has Always Been A Business Decision

Overcoming the paradox of keeping a business secure while fueling its growth is the essence of why cybersecurity is a business decision. Securing an entire enterprise is an unrealistic goal; balancing security and ongoing operations is. CISOs speak of this paradox often and the need to better measure the effectiveness of their decisions.

This is why the findings from Absolute's 2020 State of Endpoint Resilience Report are so timely, given the shift to more spending accountability on cybersecurity programs. The report's methodology is based on anonymized data from enterprise-specific subsets of nearly 8.5 million Absolute-enabled devices active across 12,000+ customer organizations in North America and Europe. Please see the last page of the study for additional details regarding the methodology.

Key insights from the study include the following:

  • More than one of every three enterprise devices had an Endpoint Protection (EP), client management or VPN application out of compliance, further exposing entire organizations to potential threats. More than 5% of enterprise devices were missing one or more of these critical controls altogether. Endpoints, encryption, VPN and client management are more, not less, fragile, despite millions of dollars being spent to protect them before the downturn. The following graphic illustrates how fragile endpoints are by showing average compliance rates alongside installation rates:
  • When cybersecurity spending isn’t being driven by a business case, endpoints become more complex, chaotic and nearly impossible to protect. Absolute’s survey reflects what happens when cybersecurity spending isn’t based on a solid business decision, often leading to multiple endpoint security agents. The survey found the typical organization has 10.2 endpoint agents on average, up from 9.8 last year. One of the most insightful series of findings in the study and well worth a read is the section on measuring Application Resilience. The study found that the resiliency of an application varies significantly based on what else it is paired with. It’s interesting to see that same-vendor pairings don’t necessarily do better or show higher average compliance rates than pairings from different vendors. The bottom line is that there’s no guarantee that any agent, whether sourced from a single vendor or even the most innovative vendors, will work seamlessly together and make an organization more secure. The following graphic explains this point:
  • 60% of breaches can be linked to a vulnerability where a patch was available but not applied. When there's a compelling business case to keep all machines current, patches get distributed and installed. When there isn't, operating system patches are, on average, 95 days late. Counting the total number of vulnerabilities addressed on Patch Tuesdays from February through May 2020 alone shows that the average Windows 10 enterprise device has hundreds of potential vulnerabilities without a fix applied – including four zero-day vulnerabilities. Absolute's data shows that post-Covid-19, the average patch age has gone down slightly, driven by the business case of supporting an entirely remote workforce.
  • Organizations that had defined business cases for their cybersecurity programs are better able to adapt and secure vulnerable endpoint devices being used at home by employees, along with the sensitive data piling up on those devices. Absolute's study showed that the amount of sensitive data – like Personal Identifiable Information (PII), Protected Health Information (PHI) and Personal Financial Information (PFI) – identified on endpoints soared as the Covid-19 outbreak spread and devices went home to work remotely. Without autonomous endpoints that have an unbreakable digital tether to ensure the health and security of the device, the chance of this kind of data being exposed grows, along with the potential for damages, compliance violations and more.

Conclusion

Absolute's latest study on the state of endpoints amplifies what many CISOs and their teams are doing today. They're prioritizing cybersecurity endpoint projects on ROI, looking to quantify agent effectiveness and moving beyond the myth that greater compliance is going to get them better security. The bottom line is that increasing cybersecurity spending is not going to make any business more secure; knowing the effectiveness of that spending will. Being able to track how resilient and persistent every autonomous endpoint in an organization is makes defining the ROI of endpoint investments possible, which is what every CISO I've spoken with is focusing on this year.

How To Improve Channel Sales With AI-Based Knowledge Sharing Networks

Bottom Line: Knowledge-sharing networks have been improving supply chain collaboration for decades; it’s time to enhance them with AI and extend them to resellers to revolutionize channel selling with more insights.

The greater the accuracy and speed of supply chain-based data integration and knowledge, the greater the accuracy of custom product orders. Add to that the complexity of selling configure, price and quote (CPQ) deals and product configurations through channels, and the value of using AI to improve knowledge-sharing networks becomes a compelling business case.

Why Channels Need AI-Based Knowledge Sharing Networks Now

Automotive, consumer electronics, high tech, and industrial products manufacturers are combining IoT sensors, microcontrollers, and modular designs to sell channel-configurable smart vehicles and products. AI-based knowledge-sharing networks are crucial to the success of their next-generation products. Likewise, to sell to any of these manufacturers, suppliers need to be pursuing the same strategy. AI-based services including Amazon Alexa, Microsoft Cortana, and Google Voice rely on knowledge-sharing networks to collaborate with automotive supply chains and strengthen OEM partnerships. The following graphic reflects how successful Amazon's Alexa Automotive OEM sales team is at using knowledge-sharing networks to gain design wins across their industry.

The following are a few of the many reasons why creating and continually fine-tuning an AI-based knowledge-sharing network is an evolving strategy worth paying attention to:

  • Supply chains are the primary source of knowledge that must permeate an organization’s structure and channels for the company to stay synchronized to broader market demands. For CPQ channel selling strategies to thrive, they need real-time pricing, availability, available-to-promise, and capable-to-promise data to create accurate, competitive quotes that win deals. The better the supplier collaboration across supply chains and with channel partners, the higher the probability of selling more. A landmark study of the Toyota Production System by Professors Jeffrey H Dyer & Kentaro Nobeoka found that Toyota suppliers value shared data more than cash, making knowledge sharing systems invaluable to them (Dyer, Nobeoka, 2000).
  • Smart manufacturing metrics also need to contribute real-time data to the knowledge-sharing systems channel partners use, relying on AI to create quotes for products that can be built the fastest and are the most attractive to each customer; a minimal buildability-scoring sketch follows this list. Combining manufacturing's real-time monitoring data stream of ongoing order progress and production availability with supply chain pricing, availability, and quality data, all integrated into a cloud-based CPQ platform, gives channel partners what they need to close deals now. AI-based knowledge-sharing networks will link supply chains, manufacturing plants, and channel partners to create smart factories that drive more sales. Manufacturers are planning to launch 40% more smart factories in the next five years and to increase their annual investments by 1.7 times compared to the previous three years, according to Capgemini's recent Smart factories @ scale survey. The following graphic illustrates the percentage growth of smart factories across key geographic regions, a key prerequisite for enabling AI-based knowledge-sharing networks with real-time production data:
  • By closing the data gaps between suppliers, manufacturing, and channels, AI-based knowledge-sharing networks give resellers the information they need to sell with greater insight. Amazon's Alexa OEM marketing teams succeeded in getting the majority of design-in wins with automotive manufacturers designing their next generation of vehicles with advanced electronics and AI features. The following graphic from Dr. Dyer's and Dr. Nobeoka's study defines the foundations of a knowledge-sharing network. Applying AI to a mature knowledge-sharing network creates a strong network effect where every new member of the network adds greater value.
  • Setting the foundation for an effective knowledge-sharing network needs to start with platforms that have AI and machine learning designed in, with a structure that can flex for unique channel needs. There are several platforms capable of supporting AI-based knowledge-sharing networks available, each with its strengths and approach to adapting to supply chain, manufacturing, and channel needs. One of the more interesting frameworks not only uses AI and machine learning across its technology pillars but also takes into consideration that a company's operating model needs to adjust to leverage a connected economy and adapt to changing customer needs. BMC's Autonomous Digital Enterprise (ADE) is differentiated from many others in how it is designed to capitalize on AI and machine learning's core strengths to create innovation ecosystems in a knowledge-sharing network. Knowledge-sharing networks thrive on continuous learning. It's good to see major providers using adaptive learning and machine learning to strengthen their platforms, with BMC's Automated Mainframe Intelligence (AMI) emerging as a leader. Their approach to using adaptive learning to maintain data quality during system state changes and to link exceptions with machine learning to deliver root cause analysis is prescient of where continuous learning needs to go. The following graphic explains the ADE's structure.
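As referenced above, here is a minimal sketch of how a CPQ engine might rank quotable configurations by "buildability", combining supply chain availability and plant capacity signals drawn from a knowledge-sharing network. All field names, weights, and data are illustrative assumptions, not any vendor's scoring model.

```python
# Minimal buildability-ranking sketch for CPQ: score quotable configurations
# by component availability, open plant capacity, and margin. Illustrative only.
from dataclasses import dataclass

@dataclass
class Config:
    name: str
    components_in_stock: float   # share of BOM components available now (0-1)
    plant_capacity_free: float   # share of required capacity open this week (0-1)
    margin: float                # projected gross margin (0-1)

def buildability_score(c: Config) -> float:
    # Availability and capacity are weighted ahead of margin so resellers
    # quote what can actually ship quickly; weights are illustrative.
    return 0.45 * c.components_in_stock + 0.35 * c.plant_capacity_free + 0.20 * c.margin

configs = [
    Config("Base model", components_in_stock=0.98, plant_capacity_free=0.80, margin=0.22),
    Config("Premium package", components_in_stock=0.75, plant_capacity_free=0.40, margin=0.35),
    Config("Custom engineered", components_in_stock=0.55, plant_capacity_free=0.20, margin=0.41),
]

for c in sorted(configs, key=buildability_score, reverse=True):
    print(f"{c.name:18s} buildability={buildability_score(c):.2f}")
```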

Conclusion

Knowledge-sharing networks have proven very effective in improving supply chain collaboration, supplier quality, and removing barriers to better inventory management. The next step is to extend knowledge-sharing networks to resellers and enable knowledge-sharing applications that use AI to tailor product and service recommendations for every customer being quoted and sold to. Imagine resellers being able to create quotes based on the most buildable products that could be delivered to buying customers in days. That's possible using a knowledge-sharing network. Amazon's success with Alexa design wins shows how its use of knowledge-sharing systems helped provide the insights automotive OEMs needed to add voice-activated AI technology to their next-generation vehicles.

References

BMC, Maximizing the Value of Hybrid IT with Holistic Monitoring and AIOps (10 pp., PDF).

BMC Blogs, 2019 Gartner Market Guide for AIOps Platforms, December 2, 2019

Cai, S., Goh, M., De Souza, R., & Li, G. (2013). Knowledge sharing in collaborative supply chains: twin effects of trust and power. International Journal of Production Research, 51(7), 2060-2076.

Capgemini Research Institute, Smart factories @ scale: Seizing the trillion-dollar prize through efficiency by design and closed-loop operations, 2019.

Columbus, L, The 10 Most Valuable Metrics in Smart Manufacturing, Forbes, November 20, 2020

Jeffrey H Dyer, & Kentaro Nobeoka. (2000). Creating and managing a high-performance knowledge-sharing network: The Toyota case. Strategic Management Journal: Special Issue: Strategic Networks, 21(3), 345-367.

Myers, M. B., & Cheung, M. S. (2008). Sharing global supply chain knowledge. MIT Sloan Management Review, 49(4), 67.

Wang, C., & Hu, Q. (2020). Knowledge sharing in supply chain networks: Effects of collaborative innovation activities and capability on innovation performance. Technovation, 94, 102010.

 
