Posts from the ‘AI’ Category

7 Ways AI Reduces Mobile Fraud Just In Time For The Holidays

  • There has been a 680% increase in global fraud transactions from mobile apps from October 2015 to December 2018, according to RSA.
  • 70% of fraudulent transactions originated in the mobile channel in 2018.
  • RSA’s Anti-Fraud Command Center saw phishing attacks increase 178% after leading banks in Spain launched instant transfer services.
  • Rogue mobile apps are proliferating, with 20% of all reported cyberattacks originating from mobile apps in 2018 alone.

On average, there are 82 new rogue applications submitted per day to any given AppExchange or application platform, all designed to defraud consumers. Mobile and digital commerce are cybercriminals’ favorite attack surfaces because they are succeeding with a broad base of strategies for defrauding people and businesses.

Phishing, malware, and smishing (the use of SMS texts rather than email to launch phishing attempts) are succeeding in gaining access to victims’ account credentials, credit card numbers, and personal information, fueling identity theft breaches. RSA is seeing an arms race between cybercriminals and mobile OS providers, with criminals improving their malware to stay at parity with, or leapfrog, new versions and security patches of mobile operating systems.

Improving Mobile Fraud Prevention With AI And Machine Learning

Creating a series of rogue applications and successfully uploading them into an AppExchange or application store gives cybercriminals immediate access to global markets. Hacking mobile apps and devices is one of the fastest-growing cybercriminal markets, one with 6.8B mobile users worldwide this year, projected to increase to 7.3B in 2023, according to The Radicati Group. The total number of mobile devices, including both phones and tablets, will be over 13B by the end of 2019, according to the research firm. Only a small percentage of mobile fraud transactions get reported: reported mobile fraud losses total just over $40M across 14,392 breaches, according to the U.S. Federal Trade Commission. Mobile fraud is an epidemic that needs to be fought with state-of-the-art approaches based on AI and machine learning’s innate strengths.

Traditional approaches to thwarting digital fraud rely on rules engines that thrive on detecting and taking action based on established, known patterns and are often hard-coded into a merchant’s system. Fraud analyst teams further customize rules engines to reflect the unique requirements of the merchant’s selling strategies across each channel. Fine-tuning rules engines makes them effective at recognizing and taking action on known threat patterns. The challenge for every merchant relying on a fraud rules engine is that rules often don’t catch the latest patterns in cybercriminal activity. Where rules-based approaches to digital fraud don’t scale, AI and machine learning do.

Exploring The 7 Ways AI Is Reducing Mobile Fraud

Where rules engines are best suited for spotting existing trends in fraud activity, machine learning excels at classifying observations (called supervised machine learning) and finding anomalies in data by finding entirely new patterns and associations (called unsupervised machine learning). Combining supervised and unsupervised machine learning algorithms is proving to be very effective at reducing mobile fraud; a short code sketch of the combined approach follows the list below. The following are the seven ways AI and machine learning are reducing mobile fraud today:

  1. AI and machine learning reduce false positives by interpreting the nuances of specific behaviors and accurately predicting if a transaction is fraudulent or not. Merchants are relying on AI and machine learning to reduce false positives, saving their customers from having to re-authenticate who they are and their payment method. A false positive at that first interaction with a customer is going to reduce the amount of money that they spend with a merchant, so it’s very important to interpret each transaction accurately.
  2. Identifying and thwarting merchant fraud based on anomalous activity from a compromised mobile device. Cybercriminals are relying on SIM swapping to gain control of mobile devices and commit fraud, as the recent hack of Twitter’s founder Jack Dorsey illustrates. Hackers transferred his telephone number using SIM swapping, talking Dorsey’s mobile service provider into bypassing the account passcode. Fortunately, only his Twitter account was hacked. Any app or account accessible on his phone could have been breached, leading to fraudulent bank transfers or purchases. The attack could have been thwarted if Jack Dorsey’s mobile service provider had been using AI-based risk scoring to detect and act on anomalous activity.
  3. AI and machine learning-based techniques scale across a wider breadth of merchants than any rules-based approach to mobile fraud prevention can. Machine learning-based models scale and learn across different industries in real-time, accumulating valuable data that improves payment fraud prediction accuracy. Kount’s Universal Data Network is noteworthy, as it includes billions of transactions over 12 years, 6,500 customers, 180+ countries and territories, and multiple payment networks. That rich data feeds Kount’s machine learning models to detect anomalies more accurately and reduce false positives and chargebacks.
  4. Combining supervised and unsupervised machine learning algorithms translates into a formidable speed advantage, with fraudulent transactions identified on average in 250 milliseconds. Merchants’ digital business models’ scale and speed are increasing, and with the holidays coming up, there’s a high probability many will set mobile commerce sales records. The merchants who will gain the most sales are focusing on how security and customer experience can complement each other. Being able to approve or reject a transaction within a second or less is the cornerstone of an excellent customer buying experience.
  5. Knowing when to require two-factor authentication via SMS or Voice PIN, and when not to, reduces false negatives while preserving customer relationships in the process. Rules engines will often take a brute-force approach to authentication if any of the factors they’re tracking show a given transaction is potentially fraudulent. Requesting customers authenticate themselves after they’re logged into a merchant’s site when they attempt to buy an item is a sure way to lose a customer for life. By being able to spot anomalies quickly, fewer customers are forced to re-authenticate themselves, and customer relationships are preserved. And when transactions are indeed fraudulent, losses are averted in less than a second.
  6. Provide a real-time transaction risk score that combines the strengths of supervised and unsupervised machine learning into a single fraud prevention payment score. Merchants need a real-time transaction risk score that applies to every channel they sell through. Fraud rules engines had to be tailored to each specific selling channel, with specific rules for each type of transaction. That’s no longer the case due to machine learning’s ability to scale across all channels and provide a transaction risk score in milliseconds. Leaders in this area include Kount, whose Omniscore is an actionable transaction safety rating produced by its AI, combining patented, proprietary supervised and unsupervised machine learning algorithms and technologies.
  7. Combining insights from supervised and unsupervised machine learning with the contextual intelligence of transactions frees up fraud analysts to do more investigations and fewer transaction reviews. AI and machine learning-based fraud prevention systems’ first contribution is often reducing the time fraud analysts take for manual reviews. Digitally-based businesses I’ve talked with say having supervised machine learning categorize and then predict fraudulent attempts is invaluable from a time-saving standpoint alone. Merchants are finding that AI and machine learning-based scoring enables them to approve more orders automatically, reject more orders automatically, and focus on gray-area orders, freeing up fraud analysts to do more strategic, rewarding work. They’re able to find more sophisticated, nuanced abuse attacks like refer-a-friend abuse, promotion abuse, or seller collusion in a marketplace. Letting the model do the work of true payment fraud prevention frees up fraud analysts to do other work that adds value.
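
To make the combined approach concrete, here is a minimal Python sketch (using scikit-learn) of how a supervised classifier and an unsupervised anomaly detector can be blended into a single transaction risk score. The features, synthetic data, and blend weights are illustrative assumptions, not any vendor’s actual method:

```python
# Minimal sketch: blending supervised and unsupervised machine learning into
# a single transaction risk score. Features, data, and weights are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(7)

# Hypothetical features: amount, hour of day, device age (days), geo distance (km)
X_train = rng.normal(size=(5000, 4))
y_train = (rng.random(5000) < 0.02).astype(int)  # ~2% labeled fraud

# Supervised model learns known fraud patterns from labeled history
clf = RandomForestClassifier(n_estimators=100, random_state=7).fit(X_train, y_train)

# Unsupervised model flags transactions unlike anything seen before
iso = IsolationForest(contamination=0.02, random_state=7).fit(X_train)

def fraud_score(tx):
    """Blend the supervised fraud probability with an anomaly signal."""
    p_fraud = clf.predict_proba(tx.reshape(1, -1))[0, 1]
    anomaly = -iso.score_samples(tx.reshape(1, -1))[0]  # higher = more anomalous
    return 0.7 * p_fraud + 0.3 * anomaly  # blend weights are an assumption

print(f"risk score: {fraud_score(rng.normal(size=4)):.3f}")  # higher = riskier
```

In practice, a score like this would be calibrated against chargeback outcomes, with thresholds tuned to balance false positives against fraud losses.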

Conclusion

With the holiday season rapidly approaching, it’s time for merchants to look at how they can protect mobile transactions at scale across all selling channels. AI and machine learning are proving themselves viable replacements for traditional rules engines that rely on predictable, known fraud patterns. With 70% of fraudulent transactions originating in the mobile channel in 2018 and the influx of orders coming in the next three months, now would be a good time for merchants to increase their ability to thwart mobile fraud while reducing the false positives that alienate customers.

Sources:

RSA 2019 Current State of Cybercrime Report (11 pp., PDF, opt-in)

The Radicati Group, Mobile Statistics Report, 2019 – 2023 (3 pp., PDF, no opt-in)

U.S. Federal Trade Commission, Consumer Sentinel Network, Data Book 2018 (90 pp., PDF, no opt-in)

10 Ways AI And Machine Learning Are Improving Endpoint Security

  • Gartner predicts $137.4B will be spent on Information Security and Risk Management in 2019, increasing to $175.5B in 2023, reaching a CAGR of 9.1%. Cloud Security, Data Security, and Infrastructure Protection are the fastest-growing areas of security spending through 2023.
  • 69% of enterprise executives believe artificial intelligence (AI) will be necessary to respond to cyberattacks, with the majority of telecom companies (80%) saying they are counting on AI to help identify threats and thwart attacks, according to Capgemini.
  • Spending on AI-based cybersecurity systems and services reached $7.1B in 2018 and is predicted to reach $30.9B in 2025, attaining a CAGR of 23.4% in the forecast period, according to Zion Market Research.

Traditional approaches to securing endpoints based on the hardware characteristics of a given device aren’t stopping breach attempts today. Bad actors are using AI and machine learning to launch sophisticated attacks that shorten the time it takes to compromise an endpoint and successfully breach systems. They’re down to just seven minutes from compromising an endpoint to gaining access to internal systems and being ready to exfiltrate data, according to Ponemon. The era of trusted and untrusted domains at the operating system level, and of “trust, but verify” approaches, is over. Security software and services spending is soaring as a result, as the market forecasts above show.

AI & Machine Learning Are Redefining Endpoint Security

AI and machine learning are proving to be effective technologies for battling increasingly automated, well-orchestrated cyberattacks and breach attempts. Attackers are combining AI, machine learning, bots, and new social engineering techniques to thwart endpoint security controls and gain access to enterprise systems with an intensity never seen before. It’s becoming so prevalent that Gartner predicts that more than 85% of successful attacks against modern enterprise user endpoints will exploit configuration and user errors by 2025. Cloud platforms are enabling AI and machine learning-based endpoint security control applications to be more adaptive to the proliferating types of endpoints and corresponding threats. The following are the top ten ways AI and machine learning are improving endpoint security:

  • Using machine learning to derive risk scores based on previous behavioral patterns, geolocation, time of login, and many other variables is proving to be effective at securing and controlling access to endpoints. Combining supervised and unsupervised machine learning to fine-tune risk scores in milliseconds is reducing fraud, thwarting breach attempts that use privileged access credentials, and securing every identity on an organization’s network. Supervised machine learning models rely on historical data to find patterns not discernable with rules or predictive analytics. Unsupervised machine learning excels at finding anomalies, interrelationships, and valid links between emerging factors and variables. Combining both unsupervised and supervised machine learning is proving to be very effective at spotting anomalous behavior and reducing or restricting access; a brief sketch of this kind of behavioral risk scoring appears after this list.
  • Mobile devices represent a unique challenge to achieving endpoint security control, one that machine learning combined with Zero Trust is proving to be integral at solving. Cybercriminals prefer to steal a mobile device, its passwords, and its privileged access credentials rather than hack into an organization. That’s because passwords are the quickest onramp they have to the valuable data they want to exfiltrate and sell. Abandoning passwords for new techniques, including MobileIron’s zero sign-on approach, shows potential for thwarting cybercriminals from getting access while hardening endpoint security control. Securing mobile devices using a zero-trust platform built on a foundation of unified endpoint management (UEM) capabilities enables enterprises to scale zero sign-on for managed and unmanaged services for the first time. Below is a graphic illustrating how they’re adopting machine learning to improve mobile endpoint security control:
  • Capitalizing on the core strengths of machine learning to improve IT asset management is making direct contributions to greater security. IT management and security initiatives continue to become more integrated across organizations, creating new challenges to managing endpoint security across each device. Absolute Software is taking an innovative approach to solving the challenge of improving IT asset management so that endpoint protection is strengthened at the same time. Recently I had a chance to speak with Nicko van Someren, Ph.D., Chief Technology Officer at Absolute Software, who shared with me how machine learning algorithms are improving security by providing greater insights into asset management. “Keeping machines up to date is an IT management job, but it’s a security outcome. Knowing what devices should be on my network is an IT management problem, but it has a security outcome. And knowing what’s going on and what processes are running and what’s consuming network bandwidth is an IT management problem, but it’s a security outcome. I don’t see these as distinct activities so much as seeing them as multiple facets of the same problem space.” Nicko added that Absolute’s endpoint security controls begin at the BIOS level of over 500M devices that have its endpoint code embedded in them. The Absolute Platform comprises three products: Persistence, Intelligence, and Resilience, each building on the capabilities of the other. Absolute Intelligence standardizes the data around asset analytics and security analytics to allow security managers to ask any question they want (“What’s slowing down my device? What’s working and what isn’t? What has been compromised? What’s consuming too much memory? How does this deviate from normal performance?”). An example of Absolute’s Intelligence providing insights into asset management and security is shown below:
  • Machine learning has progressed to become the primary detection method for identifying and stopping malware attacks. Machine learning algorithms initially contributed to improving endpoint security by supporting the back end of malware protection workflows. Today more vendors are designing endpoint security systems with machine learning as the primary detection method. Machine learning-trained algorithms can detect file-based malware and learn which files are harmful based on the file’s metadata and content. Symantec’s Content & Malware Analysis illustrates how machine learning is being used to detect and block malware. Their approach combines advanced machine learning and static code file analysis to block, detect, and analyze threats and stop breach attempts before they can spread.
  • Supervised machine learning algorithms are being used to determine when given applications are unsafe to use, assigning them to containers so they’re isolated from production systems. Taking into account an application’s threat score or reputation, machine learning algorithms decide whether dynamic application containment needs to run for a given application. Machine learning-based dynamic application containment algorithms and rules block or log unsafe actions of an application based on containment and security rules. Machine learning algorithms are also being used to build predictive analytics that define the extent of a given application’s threat.
  • Integrating AI, machine learning, and SIEM (Security Information and Event Management) in a single unified platform is enabling organizations to predict, detect, and respond to anomalous behaviors and events. AI and machine learning-based algorithms and predictive analytics are becoming a core part of SIEM platforms today, as they provide automated, continuous analysis and correlation of all activity observed within a given IT environment. Capturing, aggregating, and analyzing endpoint data in real-time using AI techniques and machine learning algorithms is providing entirely new insights into asset management and endpoint security. One of the most interesting companies to watch in this area is LogRhythm. They’ve developed an innovative approach to integrating AI, machine learning, and SIEM in their LogRhythm NextGen SIEM Platform, which delivers automated, continuous analysis and correlation of all activity observed within an IT environment. The following is an example of how LogRhythm combines AI, machine learning, and SIEM to bring new insights into securing endpoints across a network.
  • Machine learning is automating the more manual, routine incident analysis and escalation tasks that are overwhelming security analysts today. Capitalizing on supervised machine learning’s innate ability to fine-tune algorithms in milliseconds based on the analysis of incident data, endpoint security providers are prioritizing this area in product development. Demand from potential customers remains strong, as nearly everyone is facing a cybersecurity skills shortage while facing an onslaught of breach attempts. “The cybersecurity skills shortage has been growing for some time, and so have the number and complexity of attacks; using machine learning to augment the few available skilled people can help ease this. What’s exciting about the state of the industry right now is that recent advances in Machine Learning methods are poised to make their way into deployable products,” Absolute’s CTO Nicko van Someren added.
  • Performing real-time scans of all processes with an unknown or suspicious reputation is another way machine learning is improving endpoint security. Commonly referred to as hunt and respond, supervised and unsupervised machine learning algorithms are being used today to seek out and resolve potential threats in milliseconds instead of days. Supervised machine learning algorithms are used to discover patterns in known or stable processes, where anomalous behavior or activity will create an alert and pause the process in real-time. Unsupervised machine learning algorithms are used for analyzing large-scale, unstructured data sets to categorize suspicious events, visualize threat trends across the enterprise, and take immediate action at a single endpoint or across the entire organization.
  • Machine learning is accelerating the consolidation of endpoint security technologies, a market dynamic that is motivating organizations to trim back from the ten clients they have on average per endpoint today. Absolute Software’s 2019 Endpoint Security Trends Report found that a typical device has ten or more endpoint security agents installed, each often conflicting with the others. The study also found that enterprises are using a diverse array of endpoint agents, including encryption, AV/AM, and Endpoint Detection and Response (EDR). The wide array of endpoint solutions makes it nearly impossible to standardize a specific test to ensure security and safety without sacrificing speed. By helping to accelerate the consolidation of security endpoints, machine learning is helping organizations see that the more complex and layered the endpoint protection, the greater the risk of a breach.
  • Keeping every endpoint in compliance with regulatory and internal standards is another area where machine learning is contributing to improving endpoint security. In regulated industries, including financial services, insurance, and healthcare, machine learning is being deployed to discover, classify, and protect sensitive data. This is especially the case with HIPAA (Health Insurance Portability and Accountability Act) compliance in healthcare. Amazon Macie is representative of the latest generation of machine learning-based cloud security services. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides organizations with dashboards, alerts, and contextual insights that give visibility into how data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects the risk of unauthorized access or inadvertent data leaks. An example of one of Amazon Macie’s dashboards is shown below:
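
To make the behavioral risk-scoring idea from the first bullet concrete, here is a minimal Python sketch that trains an anomaly detector on a single user’s login history and maps its score to an access decision. The features, synthetic data, and thresholds are illustrative assumptions, not any vendor’s product logic:

```python
# Minimal sketch: unsupervised risk scoring of endpoint logins based on
# behavioral features. All data and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical logins for one user: [hour_of_day, km_from_usual_location, failed_attempts]
history = np.column_stack([
    rng.normal(9, 1.5, 1000),   # usually logs in around 9 a.m.
    rng.exponential(5, 1000),   # usually close to the office
    rng.poisson(0.1, 1000),     # rarely mistypes the password
])

model = IsolationForest(contamination=0.01, random_state=42).fit(history)

def login_decision(hour, distance_km, failures):
    score = model.score_samples([[hour, distance_km, failures]])[0]
    # Lower scores are more anomalous; cutoffs below are illustrative
    if score < -0.60:
        return "block and alert"
    if score < -0.50:
        return "challenge with MFA"
    return "allow"

print(login_decision(hour=3, distance_km=4200, failures=4))  # anomalous login
```

A production system would also weigh device posture, privilege level, and resource sensitivity before deciding whether to step up authentication or block access outright.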

How AI Is Protecting Against Payments Fraud

  • 80% of fraud specialists using AI-based platforms believe the technology helps reduce payments fraud.
  • 63.6% of financial institutions that use AI believe it is capable of preventing fraud before it happens, making it the most commonly cited tool for this purpose.
  • Fraud specialists unanimously agree that AI-based fraud prevention is very effective at reducing chargebacks.
  • The majority of fraud specialists (80%) have seen AI-based platforms reduce false positives and payments fraud and prevent fraud attempts.

AI is proving to be very effective in battling fraud, based on results achieved by financial institutions as reported by senior executives in a recent survey, the AI Innovation Playbook, published by PYMNTS in collaboration with Brighterion. The study is based on interviews with 200 financial executives from commercial banks, community banks, and credit unions across the United States. For additional details on the methodology, please see page 25 of the study. One of the more noteworthy findings is that financial institutions with over $100B in assets are the most likely to have adopted AI, with 72.7% of firms in this asset category currently using AI for payment fraud detection.

Taken together, the findings from the survey reflect how AI thwarts payments fraud and deserves to be a high priority in any digital business today. Companies including Kount are making strides in providing AI-based platforms that further reduce the risk of the most advanced, complex forms of payments fraud.

Why AI Is Perfect For Fighting Payments Fraud

Of the advanced technologies available for reducing false positives, reducing and preventing fraud attempts, and reducing manual reviews of potential payment fraud events, AI is ideally suited to provide the scale and speed needed to take on these challenges. More specifically, AI’s ability to interpret trend-based insights from supervised machine learning, coupled with entirely new knowledge gained from unsupervised machine learning algorithms, is reducing the incidence of payments fraud. By combining both machine learning approaches, AI can discern whether a given transaction or series of financial activities is fraudulent, alerting fraud analysts immediately and taking action through predefined workflows when it is. The following are the main reasons why AI is perfect for fighting payments fraud:

  • Payments fraud-based attacks are growing in complexity and often have a completely different digital footprint or pattern, sequence, and structure, which make them undetectable using rules-based logic and predictive models alone. For years e-commerce sites, financial institutions, retailers, and every other type of online business relied on rules-based payment fraud prevention systems. In the earlier years of e-commerce, rules and simple predictive models could identify most types of fraud. Not so today, as payment fraud schemes have become more nuanced and sophisticated, which is why AI is needed to confront these challenges.
  • AI brings scale and speed to the fight against payments fraud, providing digital businesses with an immediate advantage in battling the many risks and forms of fraud. What’s fascinating about the AI companies offering payments fraud solutions is how they’re trying to out-innovate each other when it comes to real-time analysis of transaction data. Real-time transactions require real-time security. Fraud solutions providers are doubling down on this area of R&D today, delivering impressive results. The fastest I’ve seen is a 250-millisecond response rate for calculating risk scores using AI on the Kount platform, basing queries on more than a decade’s worth of data in its universal data network. By combining supervised and unsupervised machine learning algorithms, Kount is delivering fraud scores that are twice as predictive as previous methods and faster than competitors. A short sketch of what a millisecond-level scoring check looks like follows this list.
  • AI’s many predictive analytics and machine learning techniques are ideal for finding anomalies in large-scale data sets in seconds. The more data a machine learning model has to train on, the more accurate its predictive value. The breadth and depth of data a given machine learning algorithm learns from matters more than how advanced or complex the algorithm is. That’s especially true when it comes to payments fraud detection, where machine learning algorithms learn what legitimate versus fraudulent transactions look like from a contextual intelligence perspective. By analyzing historical account data from a universal data network, supervised machine learning algorithms can gain a greater level of accuracy and predictability. Kount’s universal data network is among the largest, including billions of transactions over 12 years, 6,500 customers, 180+ countries and territories, and multiple payment networks. The data network includes different transaction complexities, verticals, and geographies, so machine learning models can be properly trained to predict risk accurately. That analytical richness includes data on physical real-world and digital identities, creating an integrated picture of customer behavior.
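
To illustrate the millisecond-level scoring mentioned above, here is a minimal Python sketch that times a single risk-score lookup against a response budget. The model, features, and decision threshold are illustrative stand-ins; only the 250 ms target comes from the figures cited in this article:

```python
# Minimal sketch: timing a single real-time fraud-score lookup against a
# millisecond response budget. Model and features are illustrative.
import time
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))             # hypothetical transaction features
y = (rng.random(1000) < 0.05).astype(int)  # ~5% labeled fraud
model = LogisticRegression(max_iter=1000).fit(X, y)

BUDGET_MS = 250  # response-time target cited above

tx = rng.normal(size=(1, 4))
start = time.perf_counter()
risk = model.predict_proba(tx)[0, 1]
elapsed_ms = (time.perf_counter() - start) * 1000

decision = "review" if risk > 0.5 else "approve"
print(f"risk={risk:.3f} decision={decision} "
      f"scored in {elapsed_ms:.2f} ms (budget {BUDGET_MS} ms)")
```

Scoring a single transaction with a trained linear model takes well under a millisecond; in production, most of the latency budget typically goes to assembling features from the data network rather than to the model itself.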

Bottom Line: Payments fraud is insidious, difficult to stop, and can inflict financial harm on any business in minutes. Battling payments fraud needs to start with a pre-emptive strategy to thwart fraud attempts by training machine learning models to quickly spot and act on threats, and then building out the strategy across every selling and service channel a digital business relies on.

How To Get Your Data Scientist Career Started

The most common request from this blog’s readers is how to further their careers in analytics, cloud computing, data science, and machine learning. I’ve invited Alyssa Columbus, a Data Scientist at Pacific Life, to share her insights and lessons learned on breaking into the field of data science and launching a career there. The following guest post is authored by her.

Earning a job in data science, especially your first, isn’t easy given the surplus of analytics job seekers relative to open analytics roles.

Many people looking to break into data science, from undergraduates to career changers, have asked me how I attained my current data science position at Pacific Life. I’ve referred them to many different resources, including discussions I’ve had on the Dataquest.io blog and the Scatter Podcast. In the interest of providing job seekers with a comprehensive view of what I’ve learned that works, I’ve put together the five most valuable lessons learned. I’ve written this article to make your data science job hunt as easy and efficient as possible.

  • Continuously build your statistical literacy and programming skills. Currently, there are 24,697 open Data Scientist positions on LinkedIn in the United States alone. I used data mining techniques to analyze all open positions in the U.S. and create the following list of the top 10 data science skills. As of April 14, the top three most common skills requested in LinkedIn data scientist job postings are Python, R, and SQL, closely followed by Jupyter Notebooks, Unix Shell/Awk, AWS, and TensorFlow. The following graphic provides a prioritized list of the most in-demand data science skills mentioned in LinkedIn job postings today. Please click on the graphic to expand for easier viewing. (A simplified sketch of this kind of posting analysis appears after the next paragraph.)

Hands-on training is the best way to develop and continually improve statistical and programming skills, especially with the languages and technologies LinkedIn’s job postings prioritize. Getting your hands dirty with a dataset is often much better than reading through abstract concepts and not applying what you’ve learned to real problems. Your applied experience is just as important as your academic experience, and taking statistics and computer science classes helps translate theoretical concepts into practical results. The toughest thing to learn (and also to teach) about statistical analysis is the intuition for what the big questions to ask of your dataset are. Statistical literacy, or “how” to find the answers to your questions, comes with education and practice. Strengthening your intellectual curiosity, or insight into asking the right questions, comes through experience.
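
As a simplified illustration of the posting analysis mentioned above, the following Python sketch counts how often a fixed list of skills appears across a handful of job-posting texts. The postings and skill list here are stand-ins, not LinkedIn data:

```python
# Minimal sketch: counting skill mentions across job-posting texts.
# The postings and the skill list are illustrative stand-ins.
import re
from collections import Counter

SKILLS = ["python", "r", "sql", "jupyter", "aws", "tensorflow", "spark"]

postings = [
    "Seeking a data scientist with Python, SQL, and TensorFlow experience.",
    "Must know R or Python; Jupyter notebooks and AWS a plus.",
    "SQL, Spark, and Python required for this analytics role.",
]

counts = Counter()
for text in postings:
    words = set(re.findall(r"[a-z]+", text.lower()))  # tokenize each posting once
    counts.update(skill for skill in SKILLS if skill in words)

for skill, n in counts.most_common():
    print(f"{skill:<12} appears in {n} of {len(postings)} postings")
```

Run against a real corpus of scraped postings, the same counting logic yields the kind of prioritized skill list described above.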

  • Continually be creating your own unique portfolio of analytics and machine learning projects. Having a good portfolio is essential to being hired as a data scientist, especially if you don’t come from a quantitative background or have prior experience in data science. Think of your portfolio as proof to potential employers that you are capable of excelling in the role of a data scientist, with both the passion and the skills to do the job. When building your data science portfolio, select and complete projects that qualify you for the data science jobs you’re most interested in. Use your portfolio to promote your strengths and innate abilities by sharing projects you’ve completed on your own. Some skills I’d recommend you highlight in your portfolio include:
    • Your programming language of choice (e.g., Python, R, Julia, etc.).
    • The ability to interact with databases (e.g., your ability to use SQL).
    • Visualization of data (static or interactive).
    • Storytelling with data. This is a critical skill. In essence, can someone with no background in whatever area your project is in look at your project and gain some new understandings from it?
    • Deployment of an application or API. This can be done with small sample projects (e.g., a REST API for an ML model you trained or a nice Tableau or R Shiny dashboard). A minimal sketch of such an API follows this list.
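
Here is a minimal sketch of the API-deployment idea from the last bullet: a Flask REST endpoint that serves predictions from a scikit-learn model trained on the classic Iris dataset. The route name and payload shape are illustrative choices:

```python
# Minimal sketch: serving a trained scikit-learn model behind a Flask REST API.
# Model choice, route name, and payload shape are illustrative.
import numpy as np
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
model = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [5.1, 3.5, 1.4, 0.2]}
    features = np.array(request.get_json()["features"]).reshape(1, -1)
    pred = int(model.predict(features)[0])
    return jsonify({"species": str(iris.target_names[pred])})

if __name__ == "__main__":
    app.run(port=5000)
```

A quick test once the server is running: curl -X POST localhost:5000/predict -H "Content-Type: application/json" -d '{"features": [5.1, 3.5, 1.4, 0.2]}'.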

Julia Silge and Amber Thomas both have excellent examples of portfolios that you can be inspired by. Julia’s portfolio is shown below.

  • Get (or git!) yourself a website. If you want to stand out, along with a portfolio, create and continually build a strong online presence in the form of a website.  Be sure to create and continually add to your GitHub and Kaggle profiles to showcase your passion and proficiency in data science. Making your website with GitHub Pages creates a profile for you at the same time, and best of all it’s free to do. A strong online presence will not only help you in applying for jobs, but organizations may also reach out to you with freelance projects, interviews, and other opportunities.
  • Be confident in your skills and apply for any job you’re interested in, starting with opportunities available in your network.  If you don’t meet all of a job’s requirements, apply anyway. You don’t have to know every skill (e.g., programming languages) on a job description, especially if there are more than ten listed. If you’re a great fit for the main requirements of the job’s description, you need to apply. A good general rule is that if you have at least half of the skills requested on a job posting, go for it. When you’re hunting for jobs, it may be tempting to look for work on company websites or tech-specific job boards. I’ve found, as have many others, that these are among the least helpful ways to find work. Instead, contact recruiters specializing in data science and build up your network to break into the field. I recommend looking for a data science job via the following sources, with the most time devoted to recruiters and your network:
    • Recruiters
    • Friends, family, and colleagues
    • Career fairs and recruiting events
    • General job boards
    • Company websites
    • Tech job boards

Alyssa Columbus is a Data Scientist at Pacific Life and member of the Spring 2018 class of NASA Datanauts. Previously, she was a computational statistics and machine learning researcher at the UC Irvine Department of Epidemiology and has built robust predictive models and applications for a diverse set of industries spanning retail to biologics. Alyssa holds a degree in Applied and Computational Mathematics from the University of California, Irvine and is a member of Phi Beta Kappa. She is a strong proponent of reproducible methods, open source technologies, and diversity in analytics and is the founder of R-Ladies Irvine. You can reach her at her website: alyssacolumbus.com.

Which Analytics And BI Technologies Will Be The Highest Priority In 2019?

  • 82% of enterprises are prioritizing analytics and BI as part of their budgets for new technologies and cloud-based services.
  • 54% say AI, Machine Learning and Natural Language Processing (NLP) are also a high investment priority.
  • 50% of enterprises say their stronger focus on metrics and Key Performance Indicators (KPIs) company-wide are a major driver of new investment in analytics and BI.
  • 43% plan to both build and buy AI and machine learning applications and platforms.
  • 42% are seeking to improve user experiences by automating discovery of data insights and 26% are using AI to provide user recommendations.

These and many other fascinating insights are from the recent TDWI Best Practices Report, BI and Analytics in the Age of AI and Big Data. An executive summary of the study is available online here. The entire study is available for download here (39 pp., PDF, free, opt-in). The study found that enterprises are placing a high priority on augmenting existing systems and replacing older technologies and data platforms with new cloud-based BI and predictive analytics ones. Transforming Data with Intelligence (TDWI) is a global community of AI, analytics, data science, and machine learning professionals interested in staying current in these and related technology areas as part of their professional development. Please see page 3 of the study for specifics regarding the methodology.

Key takeaways from the study include the following:

  • 82% of enterprises are prioritizing analytics and BI applications and platforms as part of their budgets for new technologies and cloud-based services. 78% of enterprises are prioritizing advanced analytics, and 76% data preparation. 54% say AI, machine learning and Natural Language Processing (NLP) are also a high investment priority. The following graphic ranks enterprises’ investment priorities for acquiring or subscribing to new technologies and cloud-based services by analytics and BI initiatives or strategies. Please click on the graphic to expand for easier reading.

  • Data warehouse or mart in the cloud (41%), data lake in the cloud (39%), and BI platform in the cloud (38%) are the top three types of technologies enterprises are planning to use. Based on this finding and others in the study, cloud platforms are the new normal in enterprises’ analytics and BI strategies going into 2019. Cloud data storage (object, file, or block) and data virtualization or federation (both 32%) are the next-most-planned-for technologies when it comes to investing in analytics and BI initiatives. Please click on the graphic to expand for easier reading.

  • The three most important factors in delivering a positive user experience include good query performance (61%), creating and editing visualizations (60%), and personalizing dashboards and reports (also 60%). The three activities that lead to the least amount of satisfaction are using predictive analytics and forecasting tools (27% dissatisfied), “What if” analysis and deriving new data (25%) and searching across data and reports (24%). Please click on the graphic to expand for easier reading.

  • 82% of enterprises are looking to broaden the base of analytics and BI platforms they rely on for insights and intelligence, not just stay with the solutions they have in place today. Just 18% of enterprises plan to add more instances of existing platforms and systems. Cloud-native platforms (38%), a new analytics platform (35%) and cloud-based data lakes (31%) are the top three system areas enterprises are planning to augment or replace existing BI, analytics, and data warehousing systems in. Please click on the graphic to expand for easier reading.

  • The majority of enterprises plan to both build and buy Artificial Intelligence (AI) and machine learning (ML) solutions so that they can customize them to their specific needs. 43% of enterprises surveyed plan to both build and buy AI and ML applications and platforms, a figure higher than any other recent survey on this aspect of enterprise AI adoption. 13% of responding enterprises say they will exclusively build their own AI and ML applications.

  • Capitalizing on machine learning’s innate strengths of applying algorithms to large volumes of data to find actionable new insights (54%) is what’s most important to the majority of enterprises. 47% of enterprises look to AI and machine learning to improve the accuracy and quality of information. And 42% are configuring AI and machine learning applications and platforms to augment user decision making by giving recommendations. Please click on the graphic to expand for easier reading.

Tech Leaders Look To IoT, AI & Robotics To Fuel Growth Through 2021

  • 30% of tech leaders globally predict blockchain will disrupt their businesses by 2021.
  • IoT, Artificial Intelligence (AI) and Robotics have the greatest potential to digitally transform businesses, making them more customer-centered and efficient.
  • 26% of global tech leaders say e-Commerce apps and platforms will be the most disruptive new business model in their countries by 2021.
  • IDC predicts worldwide IoT spending will reach $1.1T by 2021.

These and many other insights are from KPMG’s recent research study Tech Disruptors Outpace The Competition. The study can be downloaded here (PDF, 42 pp., no opt-in). The methodology is based on interviews with 750 global technology industry leaders, 85% of whom are C-level executives. For additional details on the methodology, please see pages 32 and 33 of the study. The study found that the three main benefits of adopting IoT, AI, and robotics include improved management of personal information, increased personal productivity, and improved customer experience through personalized, real-time information. Key insights gained from the study include the following:

  • IoT, Artificial Intelligence (AI) and Robotics have the greatest potential to digitally transform businesses, making them more customer-centered and efficient. Tech leaders also see these three core technologies enabling the next indispensable consumer technology and driving the greatest benefit to life, society, and the environment. KPMG’s research team found that tech companies are integrating these three technologies to create growth platforms for new business ventures while digitally transforming existing business processes. Tech leaders in the U.K. (21%), Japan (20%) and the U.S. (16%) lead all other nations in their plans for IoT digitally transforming their businesses by 2021. Please click on the graphic below to expand for easier reading.

  • 30% of tech leaders globally predict blockchain will disrupt their businesses by 2021. 50% of Japanese tech leaders predict that blockchain will digitally transform their industries and companies by 2021, leading all nations included in the survey. Tech leaders predict that IoT processes, and the rich, real-time data streams sensors and systems can deliver, will be the primary catalyst enabling blockchain to digitally transform their businesses. 27% of tech leaders globally expect IoT data and applications combined with blockchain to redefine their companies, supply chains, and industries. Identity authentication (24%), automated trading (22%), and contracts (14%) are the second- through fourth-most disruptive aspects of blockchain by 2021, according to tech leaders. Please click on the graphic below to expand for easier reading.

  • 26% of global tech leaders say e-Commerce apps and platforms will be the most disruptive new business model in their countries by 2021. 19% see social media platforms creating the majority of new business models, followed by autonomous vehicle platforms (14%) and entertainment platforms (11%). KPMG’s analysis includes a ranking of top business models by country, with e-Commerce dominating four of the five regions included in the survey.

  • 50% of tech leaders expect media, transportation, and healthcare to experience the greatest digital transformation in the next three years. Respondents most often mentioned Amazon, Netflix, Alibaba, Uber, Google, and Facebook as examples of companies that will digitally transform their industries by 2021. The following table provides insights into which industries by country will see the greatest digital transformations in the next three years. Entertainment platforms are predicted by tech leaders to have the greatest potential to digitally transform the media industry in the U.S. by 2021.

  • Tech leaders predict IoT’s greatest potential for adoption by 2021 is in consumer products, education, services, industrial manufacturing, and telecom. AI’s greatest potential to digitally transform business models is in healthcare and industrial manufacturing (both 11%), consumer products, financial, and services (10% each).  As would be expected, Robotics’ adoption and contribution to digitally transforming businesses will be most dominant in industrial manufacturing (15%), followed by healthcare (11%) and consumer, financial and services (10%). Please click on the graphic to expand for easier reading.

The State Of Cloud Business Intelligence, 2018

  • Cloud BI adoption is soaring in 2018, nearly doubling 2016 adoption levels.
  • Over 90% of Sales & Marketing teams say that Cloud BI is essential for getting their work done in 2018, leading all categories in the survey.
  • 66% of organizations that consider themselves completely successful with Business Intelligence (BI) initiatives currently use the cloud.
  • Financial Services (62%), Technology (54%), and Education (54%) have the highest Cloud BI adoption rates in 2018.
  • 86% of Cloud BI adopters name Amazon AWS as their first choice, 82% name Microsoft Azure, 66% name Google Cloud, and 36% identify IBM Bluemix as their preferred provider of cloud BI services.

These and many other fascinating insights are from Dresner Advisory Services’ 2018 Cloud Computing and Business Intelligence Market Study (client access reqd.) of the Wisdom of Crowds® series of research. The 7th annual edition of the study seeks to quantify end-user deployment trends and attitudes toward cloud computing and business intelligence (BI), defined as the technologies, tools, and solutions that employ one or more cloud deployment models. Dresner Advisory Services defines the scope of Business Intelligence (BI) tools and technologies to include query and reporting, OLAP (online analytical processing), data mining and advanced analytics, end-user tools for ad hoc query and analysis, and dashboards for performance monitoring. Please see page 10 of the study for the methodology. The study found the primary barriers to greater Cloud BI adoption are enterprises’ concerns regarding data privacy and security.

Key takeaways from the study include the following:

  • Cloud BI’s importance continues to accelerate in 2018, with the majority of respondents considering it an important element of their broader analytics strategies. The study found that the mean level of sentiment rose from 2.68 to 3.22 (above the level of “important”) between 2017 and 2018, indicating the increased importance of Cloud BI over the last year. By region, Asia-Pacific respondents continue to be the strongest proponents of cloud computing regarding both adjusted mean (4.2, or “very important”) and levels of criticality. The following graphic illustrates Cloud BI’s growing importance between 2012 and 2018.

  • Over 90% of Sales & Marketing teams say Cloud BI apps are important to getting their work done in 2018, leading all respondent categories in the survey. The study found that Cloud BI importance in 2018 is highest among Sales/Marketing and Executive Management respondents. A key factor driving this is that both groups increasingly rely on cloud-based front-office applications and services that are integrated with, and generate, cloud-based data to track progress toward goals.

  • Cloud BI is most critical to the Financial Services & Insurance, Technology, and Retail & Wholesale Trade industries. The study recorded its highest-ever levels of Cloud BI importance in 2018. Financial Services has the highest weighted mean interest in Cloud BI (3.8, approaching the “very important” status shown in the figure below). Technology organizations, where half of the respondents say Cloud BI is “critical” or “very important,” are the next most interested. Close to 90% of Retail/Wholesale respondents say SaaS/Cloud BI is at least “important” to them. As it has been over time, Healthcare remains the industry least open to managed services for data and business intelligence.

  • Cloud BI adoption is soaring in 2018, nearly doubling 2016 adoption levels. The study finds that the percentage of respondents using Cloud BI in 2018 nearly doubled from 25% of enterprise users in 2016. Year over year, current use rose from 31% to 49%. In the same time frame, the percentage of respondents with no plans to use Cloud BI dropped by half, from 38% to 19%. The study has been run for the last seven years, showing a steady progression of Cloud BI awareness and adoption, with 2018 showing the most significant rise in adoption levels yet.

  • Sales & Marketing leads all departments in current use of and planning for Cloud BI applications. Business Intelligence Competency Centers (BICC) are a close second, each with over 60% adoption rates for Cloud BI today. Operations, including manufacturing, supply chains, and services, are the next most likely to use Cloud BI currently. Marketing and BICC lead current adoption and are contributing catalysts of Cloud BI’s soaring growth between 2016 and 2018. Both of these departments often have time-constrained, revenue-driven goals where quantifying contributions to company growth and achievement is critical.

  • Financial Services (62%), Technology (54%), and Education (54%) industries have the highest Cloud BI adoption rates in 2018. The Retail/Wholesale industry has the fourth-highest level of Cloud BI adoption and the greatest number of companies currently evaluating Cloud BI. The least likely current or future users are found in manufacturing and security-sensitive healthcare organizations, where 45% of respondents report no plans for cloud-based BI/analytics.

  • Dashboards, advanced visualization, ad-hoc query, data integration, and self-service are the most-required Cloud BI features in 2018. Sales & Marketing teams need real-time feedback on key initiatives, programs, strategies, and progress toward goals, and the dominance of dashboards and advanced visualization among feature requirements reflects that need. Reporting, data discovery, and end-user data blending (data preparation) make up the next tier of importance.

  • Manufacturers have the greatest interest in dashboards, ad-hoc query, production reporting, search interfaces, location intelligence, and the ability to write to transactional applications. Education respondents report the greatest interest in advanced visualization along with data integration, data mining, end-user data blending, data catalogs, and collaborative support for group-based analysis. Financial Services respondents are highly interested in advanced visualization and lead all industries in self-service. Healthcare industry respondents lead interest only in in-memory support. Retail/Wholesale and Healthcare industry respondents are the least feature-interested overall.

  • Interest in cloud application connections to Salesforce, NetSuite, and other cloud-based platforms has increased 12% this year. Getting end-to-end visibility across supply chains, manufacturing centers, and distribution channels requires Cloud BI apps be integrated with cloud-based platforms and on-premises applications and data. Expect to see this accelerate in 2019 as Cloud BI apps become more pervasive across Marketing & Sales and Executive Management, in addition to Operations including supply chain management and manufacturing where real-time shop floor monitoring is growing rapidly.

  • Retail/Wholesale, Business Services, Education, and Financial Services & Insurance industries are most interested in Google Analytics connectors to obtain data for their Cloud BI apps. Respondents from Technology industries prioritize Salesforce integration and connectors above all others. Education respondents are most interested in MySQL and Google Drive integration and connectors. Manufacturers are most interested in connectors to Google AdWords and SurveyMonkey. Healthcare industry respondents prioritize SAP Cloud BI services and are also interested in ServiceNow connectors.

Machine Learning’s Greatest Potential Is Driving Revenue In The Enterprise

  • Enterprise investments in machine learning will nearly double over the next three years, reaching 64% adoption by 2020.
  • International Data Corporation (IDC) is forecasting spending on artificial intelligence (AI) and machine learning will grow from $8B in 2016 to $47B by 2020.
  • 89% of CIOs are either planning to use or are using machine learning in their organizations today.
  • 53% of CIOs say machine learning is one of their core priorities as their role expands from traditional IT operations management to business strategists.
  • CIOs are struggling to find the skills they need to build their machine learning models today, especially in financial services.

These and many other insights are from the recently published study, Global CIO Point of View. The entire report is downloadable here (PDF, 24 pp., no opt-in). ServiceNow and Oxford Economics collaborated on this survey of 500 CIOs in 11 countries on three continents, spanning 25 industries. In addition to the CIO interviews, leading experts in machine learning and its impact on enterprise performance contributed to the study. For additional details on the methodology, please see page 4 of the study and an online description of the CIO Survey Methodology here.

Digital transformation is a cornerstone of machine learning adoption. 72% of CIOs have responsibility for digital transformation initiatives that drive machine learning adoption. The survey found that the greater the level of digital transformation success, the more likely machine learning-based programs and strategies would succeed. IDC predicts that 40% of digital transformation initiatives will be supported by machine learning and artificial intelligence by 2019.

Key takeaways from the study include the following:

  • 90% of CIOs championing machine learning in their organizations today expect improved decision support that drives greater topline revenue growth. CIOs who are early adopters are most likely to pilot, evaluate and integrate machine learning into their enterprises when there is a clear connection to driving business results. Many CIO compensation plans now include business growth and revenue goals, making the revenue potential of new technologies a high priority.
  • 89% of CIOs are either planning to use or using machine learning in their organizations today. The largest share, 40%, are in the research and planning phases of deployment, with an additional 26% piloting machine learning. 20% are using machine learning in some areas of their business, and 3% have successfully deployed it enterprise-wide. The following graphic shows the percentage of respondents by stage of their machine learning journey.

  • Machine learning is a key supporting technology informing the majority of Finance, Sales & Marketing, and Operations Management decisions today. Human intervention is still required across the spectrum of decision-making areas, including Security Operations, Customer Management, Call Center Management, Operations Management, Finance, and Sales & Marketing. The study predicts that by 2020, machine learning apps will have automated 70% of Security Operations queries and 30% of Customer Management ones.

  • Automation of repetitive tasks (68%), making complex decisions (54%), and recognizing data patterns (40%) are the top three machine learning capabilities CIOs are most interested in. Establishing links between events and supervised learning (both 32%), making predictions (31%), and assisting in making basic decisions (18%) are additional capabilities CIOs are looking to machine learning to accelerate. In financial services, machine learning apps are reviewing loan documents, sorting applications according to broad parameters, and approving loans faster than had been possible before.

  • Machine learning adoption and confidence by CIOs vary by region, with North America in the lead (72%), followed by Asia-Pacific (61%). Just over half of European CIOs (58%) expect value from machine learning and decision automation to their company’s overall strategy. North American CIOs are more likely than others to expect value from machine learning and decision automation across a range of business areas, including overall strategy (72%, vs. 61% in Asia-Pacific and 58% in Europe). North American CIOs also expect greater results from sales and marketing (63%, vs. 47% in Asia-Pacific and 38% in Europe); procurement (50%, vs. 34% in Asia-Pacific and 34% in Europe); and product development (48%, vs. 29% in Asia-Pacific and 29% in Europe).
  • CIOs challenging the status quo of their organization’s analytics direction are more likely to rely on roadmaps for defining and selling their vision of machine learning’s revenue contributions. More than 70% of early adopter CIOs have developed a roadmap for future business process changes, compared with just 33% of average CIOs. Among CIOs and senior management teams in financial services, the majority are looking at how machine learning can increase customer satisfaction and lifetime customer value while improving revenue growth. 53% of CIOs in our survey say machine learning is one of their core priorities as their role expands from traditional IT operations to business-wide strategy.

Sources: CIOs Cutting Through the Hype and Delivering Real Value from Machine Learning, Survey Shows

Data Scientist Is The Best Job In America According To Glassdoor

  • Data Scientist has been named the best job in America for three years running, with a median base salary of $110,000 and 4,524 job openings.
  • DevOps Engineer is the second-best job in 2018, paying a median base salary of $105,000 and 3,369 job openings.
  • There are 29,187 Software Engineering jobs available today, making it the job with the most openings on Glassdoor according to the study.

These and many other fascinating insights are from Glassdoor’s 50 Best Jobs In America For 2018. The Glassdoor report is viewable online here. Glassdoor’s annual report highlights the 50 best jobs based on each job’s overall Glassdoor Job Score. The Glassdoor Job Score is determined by weighting three key factors equally: earning potential based on median annual base salary, job satisfaction rating, and the number of job openings. Glassdoor’s 2018 report lists jobs that excel across all three dimensions of its Job Score metric. For an excellent overview of the study by Karsten Strauss of Forbes, please see his post, The Best Jobs To Apply For In 2018.
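
Because the report states the three factors are weighted equally, the scoring can be illustrated with a short sketch. Glassdoor does not publish how it normalizes each factor, so the min-max ranges and 0–5 scaling below are assumptions; only the equal weighting comes from the methodology described above.

```python
# Hypothetical equal-weight Job Score. Glassdoor's real normalization is not
# public, so the min-max ranges here are illustrative assumptions.
def job_score(salary, satisfaction, openings,
              salary_range=(40_000, 150_000), openings_range=(0, 30_000)):
    def scale(value, lo, hi):
        # Clamp, then map onto the same 0-5 scale as the satisfaction rating
        return 5 * (min(max(value, lo), hi) - lo) / (hi - lo)
    parts = [scale(salary, *salary_range),
             satisfaction,  # already a 0-5 Glassdoor rating
             scale(openings, *openings_range)]
    return round(sum(parts) / 3, 1)  # equal weighting, per the report

# Data Scientist: $110,000 median base salary and 4,524 openings (from the
# report); the 4.2 satisfaction rating is illustrative
print(job_score(110_000, 4.2, 4_524))
```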

LinkedIn’s 2017 U.S. Emerging Jobs Report found that there are 9.8 times more Machine Learning Engineers working today than five years ago, with 1,829 open positions listed on its site as of last month. Data science and machine learning are generating more jobs than there are candidates right now, making them the fastest-growing areas of tech employment today.

Key takeaways from the study include the following:

  • Six analytics and data science jobs are included in Glassdoor’s 50 best jobs in America for 2018: Data Scientist, Analytics Manager, Database Administrator, Data Engineer, Data Analyst, and Business Intelligence Developer. Software Engineer, with a record 29,817 open jobs today, also makes the list.

  • The median base salary of the 50 best jobs in America is $91,000, while the average base salary of the six analytics and data science jobs is $94,167.
  • Across all six analytics and data science jobs, there are 16,702 openings as of today, according to Glassdoor.
  • Tech jobs make up 20 of Glassdoor’s 50 Best Jobs in America for 2018, up from 14 jobs in 2017.

Source: Glassdoor Reveals the 50 Best Jobs in America for 2018

Gartner’s Top 10 Predictions For IT In 2018 And Beyond

  • In 2020, AI will become a positive net job motivator, creating 2.3M jobs while eliminating only 1.8M jobs.
  • By 2020, IoT technology will be in 95% of electronics for new product designs.
  • By 2021, 40% of IT staff will be versatilists, holding multiple roles, most of which will be business, rather than technology-related.

These and many other insights were presented earlier this month at Gartner Symposium/ITxpo 2017, held in Orlando, Florida. Gartner’s predictions and the assumptions supporting them illustrate how CIOs must seek out and excel in the role of business strategist first, technologist second. In 2018 and beyond, CIOs will be more accountable than ever for revenue generation, value creation, and the development and launch of new business models using proven and emerging technologies. Gartner’s ten predictions point to a future in which CIOs are collaborators in new business creation, selectively using technologies to accomplish that goal.

The following are Gartner’s ten predictions for IT organizations for 2018 and beyond:

  1. By 2021, early adopter brands that redesign their websites to support visual and voice search will increase digital commerce revenue by 30%. Gartner has found that voice-based queries are the fastest-growing type of mobile search. Voice and visual search are accelerating mobile browser- and mobile app-based transactions and will continue to do so in 2018 and beyond. Mobile browser- and app-based transactions account for as much as 50% of all transactions on many e-commerce sites today. Apple, Facebook, Google, and Microsoft’s investments in AI and machine learning will be evident in how quickly their visual- and voice-search technologies advance over the next two years.
  2. By 2020, five of the top seven digital giants will willfully “self-disrupt” to create their next leadership opportunity. The top digital giants include Alibaba, Amazon, Apple, Baidu, Facebook, Google, Microsoft, and Tencent. Examples of self-disruption include AWS Lambda versus traditional cloud virtual machines, Alexa versus screen-based e-commerce, and Apple Face ID versus Touch ID.
  3. By the end of 2020, the banking industry will derive $1B in business value from the use of blockchain-based cryptocurrencies. Gartner estimates that the combined value of cryptocurrencies in circulation worldwide is currently $155B (as of October 2017), and this value has been increasing as tokens continue to proliferate and market interest grows. Cryptocurrencies will represent more than half of worldwide blockchain business value-add through year-end 2023, according to the Gartner predictions study.
  4. By 2022, most people in mature economies will consume more false information than true information. Gartner warns that while AI is proving very effective at creating new information, it is just as effective at distorting data to create false information. Gartner predicts that before 2020, untrue information will fuel a major financial fraud, enabled by high-quality falsehoods moving financial markets worldwide. By 2020, no significant internet company will fully succeed in its attempts to mitigate this problem, and within three years a significant country will pass regulations or laws seeking to curb the spread of AI-generated false information.
  5. By 2020, AI-driven creation of “counterfeit reality,” or fake content, will outpace AI’s ability to detect it, fomenting digital distrust. AI and machine learning systems today can categorize the content of images faster and with more consistent accuracy than humans. Gartner cautions that by 2018, a counterfeit video used in a satirical context will spark a public debate after being accepted as real by one or both sides of the political spectrum. Over the next year, there will be a 10-fold increase in commercial projects to detect fake news, according to the predictions study.
  6. By 2021, more than 50% of enterprises will spend more per annum on bot and chatbot creation than on traditional mobile app development. Gartner predicts that by 2020, 55% of all large enterprises will have deployed (used in production) at least one bot or chatbot. Rapid advances in natural-language processing (NLP) make today’s chatbots much better at recognizing user intent than previous generations. According to Gartner’s predictions study, NLP is used to determine the entry point into a chatbot’s decision tree, but a majority of chatbots still use scripted responses within that tree; a minimal sketch of this pattern follows the list below.
  7. By 2021, 40% of IT staff will be versatilists, holding multiple roles, most of them business- rather than technology-related. By 2019, IT technical specialist hires will fall by more than 5%. Gartner predicts that 50% of enterprises will formalize IT versatilist profiles and job descriptions, that 20% of IT organizations will hire versatilists to scale digital business, and that IT technical specialist headcount will fall to 75% of 2017 levels.
  8. In 2020, AI will become a positive net job motivator, creating 2.3M jobs while eliminating only 1.8M. By 2020, AI-related job creation will cross into positive territory, reaching 2 million net-new jobs in 2025. Global IT services firms will see massive job churn in 2018, adding 100,000 jobs and dropping 80,000. By 2021, Gartner predicts, AI augmentation will generate $2.9T in business value and recover 6.2B hours of worker productivity.
  9. By 2020, IoT technology will be in 95% of electronics for new product designs. Gartner predicts IoT-enabled products with smartphone activation will begin emerging at the start of 2019.
  10. Through 2022, half of all IoT security budgets will go to fault remediation, recalls, and safety failures rather than protection. Gartner predicts IoT security spending will increase sharply after 2020, once better methods of applying cross-industry security patterns to IoT security architectures emerge, growing at more than a 50% compound annual growth rate (CAGR) over current rates. The total market for IoT security products will reach $840.5M by 2020, reflecting a 24% CAGR from 2013 through 2020. Combining IoT security services, safety systems, and physical security will create a fast-growing global market that Gartner predicts will exceed $5B in global spending by year-end 2020.
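
Prediction 6 describes a pattern that is easy to make concrete: an intent classifier selects the entry point, and scripted responses then walk a decision tree. Below is a minimal sketch under that assumption; keyword matching stands in for a real NLP intent model, and every intent, prompt, and response is hypothetical.

```python
# Minimal chatbot sketch: intent detection picks the decision-tree entry
# point, then scripted responses drive the conversation. Keyword matching
# stands in for a real NLP model; all intents and responses are hypothetical.
DECISION_TREE = {
    "billing": {
        "prompt": "Is this about an unrecognized charge, or a refund?",
        "branches": {
            "charge": "Let me pull up the recent charges on your account.",
            "refund": "Refunds usually take 3-5 business days to process.",
        },
    },
    "support": {
        "prompt": "Is the issue with login or with the mobile app?",
        "branches": {
            "login": "Try resetting your password from the sign-in page.",
            "app": "Please update to the latest app version and retry.",
        },
    },
}

INTENT_KEYWORDS = {
    "billing": {"bill", "charge", "charged", "refund", "payment"},
    "support": {"help", "login", "app", "error", "crash"},
}

def classify_intent(utterance):
    """Pick the decision-tree entry point; a real bot would use an NLP model."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return None

def respond(utterance, branch_choice=None):
    intent = classify_intent(utterance)
    if intent is None:
        return "Sorry, I didn't catch that. Could you rephrase?"
    node = DECISION_TREE[intent]
    # Scripted response selected from the decision tree, as most bots do today
    return node["branches"].get(branch_choice, node["prompt"])

print(respond("I was double charged on my bill"))            # asks the prompt
print(respond("I was double charged on my bill", "charge"))  # scripted reply
```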

Gartner has also made available an infographic of the Top 10 Strategic Technology Trends for 2018, along with an insightful article on Smarter with Gartner. You can find the article here: Gartner Top 10 Strategic Technology Trends for 2018.

Sources:

Gartner Reveals Top Predictions for IT Organizations and Users in 2018 and Beyond

Smarter With Gartner, Gartner Top 10 Strategic Technology Trends for 2018

Top Strategic Predictions for 2018 and Beyond: Pace Yourself, for Sanity’s Sake (Gartner client access required)
