Every hour, threat actors launch new scans of the public web for vulnerable systems, moving at a far quicker pace than global enterprises trying to identify serious vulnerabilities on their own networks.

The adversaries’ efforts increase significantly when critical vulnerabilities emerge, with new internet-wide scans starting within minutes of disclosure.

Mind the gap

Attackers are tireless in their quest for new victims and strive to reach vulnerable systems before they are patched. Companies, meanwhile, work to identify issues on their networks before it’s too late, but they move at a much slower pace.

The data comes from the Palo Alto Networks Cortex Xpanse research team, which between January and March this year monitored scans of 50 million IP addresses belonging to 50 global enterprises, some of them in the Fortune 500.

The researchers found that companies take an average of 12 hours to find a new, serious vulnerability. Almost a third of all identified issues related to the Remote Desktop Protocol, a common target for ransomware actors as they can use it to gain admin access to servers.

Misconfigured database servers, zero-day vulnerabilities in critical products from vendors like Microsoft and F5, and insecure remote access (Telnet, SNMP, VNC) complete the list of high-priority flaws.

According to Palo Alto Networks, companies identified one such issue every 12 hours, in stark contrast with the threat actors’ mean time to inventory of just one hour.

In some cases, though, adversaries scanned as often as every 15 minutes when news emerged about a remotely exploitable, critical bug in a networking device; the interval dropped to five minutes after the disclosure of the ProxyLogon bugs in Microsoft Exchange Server and Outlook Web Access (OWA).

Palo Alto Networks recommends security teams look at the following list of services and systems to limit the attack surface.

The researchers note that they compiled the list based on two principles: certain assets (insecure protocols, admin portals, VPNs) should never be exposed to the public web, and assets that are secure today may become vulnerable over time.

  1. Remote access services (e.g., RDP, VNC, TeamViewer)
  2. Insecure file sharing/exchange services (e.g., SMB, NetBIOS)
  3. Unpatched systems vulnerable to public exploit and end-of-life (EOL) systems
  4. IT admin system portals
  5. Sensitive business operation applications (e.g., Jenkins, Grafana, Tableau)
  6. Unencrypted logins and text protocols (e.g., Telnet, SMTP, FTP)
  7. Directly exposed Internet of Things (IoT) devices
  8. Weak and insecure/deprecated crypto
  9. Exposed development infrastructure
  10. Insecure or abandoned marketing portals (which tend to run on Adobe Flash)
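As a first pass at checking for some of the exposures above, a defender could probe their own hosts for reachable high-risk TCP services. The sketch below is a minimal illustration, not part of the Cortex Xpanse research; the `find_exposed` helper and the port selection are assumptions drawn from the list above, and a TCP connect check only shows reachability, not vulnerability.

```python
import socket

# High-risk services from the checklist above, keyed by their default TCP port.
# (Illustrative subset; real inventories cover far more port-protocol pairs.)
HIGH_RISK_PORTS = {
    3389: "RDP",
    5900: "VNC",
    445: "SMB",
    139: "NetBIOS",
    23: "Telnet",
    25: "SMTP",
    21: "FTP",
}

def find_exposed(host, ports=HIGH_RISK_PORTS, timeout=1.0):
    """Return {port: service} for every port on `host` accepting TCP connections."""
    exposed = {}
    for port, service in ports.items():
        try:
            # A completed TCP handshake means the service is reachable.
            with socket.create_connection((host, port), timeout=timeout):
                exposed[port] = service
        except OSError:
            pass  # closed, filtered, or timed out
    return exposed
```

Only probe hosts you own or are authorized to test; unsolicited scanning of third-party networks is exactly the adversary behavior the article describes.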

Why companies fall behind

One explanation for this lag in identifying risks on the network is a flawed vulnerability management process that relies on a database of known vulnerabilities.

Scanners using such a database won’t find new issues until the database receives an update, which may come hours or even days later. Furthermore, scanners don’t see all devices on the network.

“Typically, discovery of assets happens just once per quarter and uses a patchwork of scripts and programs the pen-testers have put together to find some of the infrastructure that is potentially vulnerable. Their methods are rarely comprehensive, however, and regularly fail to find all vulnerable infrastructure of a given organization” – Palo Alto Networks

At the other end, attackers take advantage of the cheap cloud computing power that enables them to run internet-wide scans.

Scanning the internet is no longer restricted to well-funded actors. Cloud technology makes it possible to set up infrastructure that can “talk” over one port-protocol pair with every device on the public web in just 45 minutes.
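A quick back-of-envelope calculation shows what that 45-minute figure implies. Treating the full IPv4 address space as the target (ignoring reserved and unrouted ranges, so this is an upper bound on the work, not a precise model of any real scanner):

```python
# Probing the whole IPv4 space on one port-protocol pair in 45 minutes.
ipv4_space = 2 ** 32          # ~4.29 billion addresses
scan_window_s = 45 * 60       # 45 minutes expressed in seconds
probes_per_second = ipv4_space / scan_window_s

print(f"{probes_per_second:,.0f} probes/second")  # ≈ 1.6 million probes/second
```

A sustained rate on the order of 1.6 million probes per second is well within reach of a handful of cheap cloud instances running stateless scanners, which is why the barrier to internet-wide scanning has effectively disappeared.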