NonStop Insider

Cybercrime: Now the World’s 3rd Largest Economy

Justin Simonds, HPE Master Technologist


There is an awful lot going on in the world today, but I believe these numbers are still accurate. The United States leads the world’s largest economies with a GDP of $30.5 trillion; China is second with a GDP of $19.2 trillion. Surprisingly, and I would say tragically, cybercrime, at $10.2 trillion (Microsoft estimate), comes in as the third largest economy in the world. The aggregate cost of cybercrime now exceeds the GDPs of Germany and Japan, so if we were to treat it as an “economy” it would rank third. Isn’t that both sad and a bit terrifying?

Microsoft’s $10.2 trillion cybercrime estimate (https://tinyurl.com/3bzykkkk) represents the total annual global cost of cyberattacks, including a wide range of direct and indirect damages such as data destruction, financial theft, lost productivity, intellectual property theft, fraud, reputational harm, business disruption, forensic investigations, and recovery costs.

Definition and Components

All these costs are aggregated based on incident reports.

This ‘economy’ is not limited to immediate monetary loss—it also involves broad economic impacts felt across all sectors, including government, enterprise, and individuals, making it the third largest “economy” globally.

To reduce the amount of cybercrime, customers need a multi-layered defense strategy that spans technology, policy, user behavior, and global collaboration.

To address this growing threat, there are some key actions to consider. The first is to apply overlapping security controls at multiple layers: networks, devices, identities, applications, and data. Use advanced protection methods, especially multi-factor authentication, up-to-date anti-malware, regular patching, and encryption. Invest in automated detection, machine learning, behavioral analytics, and extensive threat-intelligence sharing to rapidly identify and respond to new threats. Continuously monitor network and user activity for anomalies and regularly run vulnerability scans (see NonStop partners and the new Aegis Scan tool). Additionally, provide ongoing security training for users and staff, and test employees. HPE sends us phishing emails from time to time: if we report them, we are congratulated; if we click on anything, we get additional security training.
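Continuous monitoring for anomalies can start very simply. The sketch below, a hypothetical example and not any specific product's method, flags hours whose login volume deviates sharply from the baseline using a z-score; real deployments would use richer behavioral analytics.

```python
import statistics

def login_anomalies(counts_by_hour, threshold=2.5):
    """Flag hours whose login volume deviates sharply from the mean.

    counts_by_hour: list of login counts, one per hour.
    Returns indices of hours whose z-score exceeds the threshold.
    """
    mean = statistics.mean(counts_by_hour)
    stdev = statistics.pstdev(counts_by_hour) or 1.0  # avoid divide-by-zero
    return [i for i, c in enumerate(counts_by_hour)
            if abs(c - mean) / stdev > threshold]

# A quiet day with one burst (e.g., credential stuffing) stands out:
counts = [12, 10, 11, 13, 9, 250, 12, 11]
print(login_anomalies(counts))  # the burst at hour 5 is flagged
```

In practice the flagged hours would feed an alerting pipeline rather than a print statement, and the threshold would be tuned per system.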

This is all good and fairly standard advice, but we do need to ask how AI, and specifically generative models and LLMs, changes our financial threat landscape.

Generative and agentic AI raise several specific risks for banks. Attackers can generate highly tailored phishing, vishing scripts, fake documentation, and deepfakes at scale, using public and leaked data about specific banks, products, and even your staff. AI agents can systematically crawl public portals, APIs, documentation, and leaked code to map an institution’s attack surface more quickly than human teams can. Worse yet, models tuned on code (including legacy COBOL/Java, payment protocols, and API specs) can help attackers spot recurring vulnerability patterns, misconfigurations, and weak controls in similar systems. If a bank exposes internal error messages or configuration hints in front-end flows, an external agent can use those signals iteratively to discover exploitable states. So the ideas of Customer Relationship Management (CRM) and Customer eXperience (CX) are turned on their heads, since making things easier for a customer can now make it easier for cybercriminals to find and exploit weaknesses!

Owing to the ease of using AI, and with generative tools already known on the dark web, less-skilled bad actors can create or adapt malware, obfuscate payloads, and customize them to specific environments more quickly. AI can “co-pilot” live intrusions: interpreting logs, correlating signals, and suggesting next moves in real time. Autonomous agents making trading or liquidity decisions can herd, amplify misinformation, or misinterpret signals, increasing the chance of flash-crash-like events if poorly constrained.

The mantra for humans has for many years been continuous learning, and for us it requires dedication, perseverance, and foresight about what to learn. AI systems ‘continuously’ learn because they have nothing competing for their attention and are not distracted by life; in other words, they have nothing else to do. Therefore, great care and oversight are needed before allowing uncontrolled feedback loops between production systems and LLMs, because these create new classes of vulnerabilities. If logs, tickets, chat transcripts, or transactions are fed back into models without strict filtering, sensitive details about infrastructure, configurations, and exceptions can become part of the model’s internal representation. If those models, or derivative models, are exposed to less-trusted users or vendors, you’ve unintentionally leaked a map of your defenses. Yikes.

An external agent probing your systems could indirectly shape model behavior if its interactions are used as training data—e.g., error patterns that hint at what inputs “move the needle” on a fraud classifier or WAF rule. Over time, this can act like adversarial training from the attacker’s side: the model learns that certain patterns are effective at bypassing controls. If internal models sit behind thin access control layers, external users (or compromised clients) can probe for edge‑case responses that reveal implementation details, specific control flows, or error handling that were “memorized” from logs and code snippets.

For banks, the safe baseline assumption should be to never treat production logs and configs as “just another data source” for training without a dedicated security and privacy review.

There are some things you can do to blunt the “LLMs learning your weaknesses” problem. Wherever you expose models (to customers, staff, or partners) you want to put the LLM behind an application layer that enforces business rules, rate limits, and strict input/output filtering. The model never talks directly to core banking systems; it goes through orchestrators that enforce security policies.
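The orchestrator idea can be sketched in a few lines. This is a minimal, hypothetical gateway, not any particular product: the patterns and the stub model are illustrative, and a real deployment would use a full policy engine, rate limiting, and audit logging.

```python
import re

# Illustrative patterns a bank might refuse to pass to or from a model.
BLOCKED_INPUT = [
    re.compile(r"(?i)stack trace|internal error code|admin password"),
]
BLOCKED_OUTPUT = [
    re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),     # IP addresses
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),    # leaked secrets
]

def gatewayed_call(prompt, model_fn):
    """Route every prompt through policy checks before and after the model."""
    for pat in BLOCKED_INPUT:
        if pat.search(prompt):
            return "Request refused by policy."
    answer = model_fn(prompt)  # the model never talks to core systems directly
    for pat in BLOCKED_OUTPUT:
        answer = pat.sub("[REDACTED]", answer)
    return answer

# Stub model for illustration:
demo = gatewayed_call("What is my balance?", lambda p: "Server 10.0.0.5 says: $100")
print(demo)  # the internal IP is redacted before the answer leaves the gateway
```

The key design point is that the filtering lives in the application layer, so it applies uniformly no matter which model sits behind it.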

Also, no direct access from “chatbots” to production credentials or admin‑grade APIs; only well‑scoped, audited service accounts. Implement allow‑lists for tool calls (e.g., which APIs can be invoked, which parameters are legal), and deny‑lists for queries (e.g., requests for internal error codes, stack traces, infrastructure details). Log and rate‑limit suspicious prompt patterns (mass enumeration, injection attempts, repeated boundary testing). Do not use a single, multi‑tenant, generally trained model to directly interact with payment rails, AML systems, and customer help. Use smaller, domain‑specific models or deterministic rules for truly critical decisions (e.g., release of funds, modification of limits) and keep the LLM in an advisory or triage role.
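An allow-list for tool calls can be enforced before any request reaches a backend. The sketch below is hypothetical; the tool names, parameters, and registry are invented for illustration, and production systems would also check scopes, rate limits, and audit every call.

```python
# Hypothetical allow-list: which tools the assistant may invoke, and with
# which parameters. Anything else is rejected before reaching a backend.
ALLOWED_TOOLS = {
    "get_account_balance": {"customer_id"},
    "list_recent_transactions": {"customer_id", "limit"},
}

def dispatch(tool_name, params, registry):
    """Reject any tool call that is not explicitly permitted."""
    allowed = ALLOWED_TOOLS.get(tool_name)
    if allowed is None:
        raise PermissionError(f"tool not on allow-list: {tool_name}")
    extra = set(params) - allowed
    if extra:
        raise PermissionError(f"illegal parameters: {sorted(extra)}")
    return registry[tool_name](**params)

# Stub backends standing in for well-scoped, audited service APIs:
registry = {"get_account_balance": lambda customer_id: 100.0,
            "list_recent_transactions": lambda customer_id, limit=10: []}
print(dispatch("get_account_balance", {"customer_id": "c42"}, registry))
```

A call to an unlisted tool such as a funds-transfer endpoint fails closed with a `PermissionError`, which is exactly the behavior you want from a deny-by-default gateway.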

Design your training pipeline as a first-class security boundary. Classify logs, tickets, emails, and code as confidential or restricted; automatically strip or mask PII, secrets, architectural details (hostnames, IPs, path names, API keys), and precise error messages before any of it touches an ML corpus. Use differential privacy or aggregation where possible so individual high-risk events are not directly “memorized.” Maintain completely separate datasets (and ideally separate model instances) for your internal engineering support, fraud/risk analytics, and customer-facing assistants. Prevent cross-contamination: customer chatbot models should not train on detailed SOC runbooks or firewall configs, and SOC copilots should not train on raw customer chats. Any new data source (e.g., a new log stream, internal wiki, or vendor tickets) requires a security review covering what fields are included, retention, masking, and permitted downstream use cases. And you must document for regulators which data feeds into which models and for what purpose.
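The masking step can be as simple as a scrub pass that runs before anything enters a corpus. These rules are illustrative only; a production pipeline would be reviewed by security and privacy teams and cover far more field types.

```python
import re

# Illustrative masking rules: order matters (IPs, then emails, then secrets).
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"(?i)\b(?:api[_-]?key|secret|password)\s*[:=]\s*\S+"), "<SECRET>"),
    (re.compile(r"(?i)traceback \(most recent call last\):.*", re.S), "<STACKTRACE>"),
]

def scrub(record: str) -> str:
    """Mask PII, secrets, and infrastructure details before a log line
    is allowed anywhere near an ML training corpus."""
    for pattern, token in MASKS:
        record = pattern.sub(token, record)
    return record

line = "auth fail for bob@bank.example from 192.168.1.7, api_key=abc123"
print(scrub(line))  # auth fail for <EMAIL> from <IP>, <SECRET>
```

Regex masking alone is not sufficient for differential-privacy-grade guarantees, but it removes the most obvious “map of your defenses” material from the corpus.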

Banks can use the same technology to test and defend. Use controlled internal agents to systematically probe your public websites, APIs, login flows, documentation, and chatbot interfaces, looking for misconfigurations, data leakage, or prompt‑injection vectors. Treat those agents as part of your security testing program, with findings going into a formal remediation pipeline. Use models to correlate large volumes of security telemetry, summarize related events, and propose hypotheses (“this pattern matches known credential‑stuffing campaigns targeting financial APIs”). Use LLMs as assistants—not decision makers—for triaging alerts, writing detection rules, and reviewing code for obvious flaws in controls and authentication.
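A controlled internal agent's leakage check can start as a signature scan over responses from your own endpoints. The markers below are invented for illustration; a real program would maintain a curated, environment-specific list and feed findings into a formal remediation pipeline.

```python
import re

# Signals that suggest an endpoint is leaking implementation detail.
LEAK_MARKERS = {
    "stack_trace": re.compile(r"(?i)traceback|at [\w.$]+\(.*\.java:\d+\)"),
    "internal_ip": re.compile(r"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"),
    "sql_error": re.compile(r"(?i)syntax error.*sql|ORA-\d{5}"),
}

def audit_response(body: str):
    """Return the names of leak signatures found in an HTTP response body."""
    return sorted(name for name, pat in LEAK_MARKERS.items() if pat.search(body))

# Example finding, as an internal probing agent might record it:
body = "HTTP 500: ORA-00933 near line 3; host 10.1.2.3"
print(audit_response(body))  # ['internal_ip', 'sql_error']
```

Each non-empty result becomes a ticket: the endpoint is leaking signals that an external AI agent could use iteratively to map exploitable states.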

Given banking’s regulatory environment, structural safeguards will help keep “runaway LLM learning” in check. Apply the same discipline used for market/credit models to gen-AI. This will include inventories, model owners, documented training data, performance and drift monitoring, independent validation, and change control. Explicitly assess “information leakage” risks for each model: know what internal knowledge it can expose, to whom, and via what interfaces.
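A model inventory can be a plain structured record per model. This is a minimal, hypothetical sketch of the governance fields just listed (owner, training data, validation status, and an explicit leakage-risk assessment), not a regulatory template.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One inventory entry per deployed model, with explicit leakage fields."""
    name: str
    owner: str
    training_sources: list = field(default_factory=list)
    leakage_risks: list = field(default_factory=list)   # what could it expose?
    exposed_to: list = field(default_factory=list)      # who can query it?
    independently_validated: bool = False

inventory = [
    ModelRecord(
        name="fraud-triage-assist",
        owner="risk-analytics",
        training_sources=["masked fraud tickets"],
        leakage_risks=["fraud-rule thresholds"],
        exposed_to=["SOC analysts"],
        independently_validated=True,
    ),
]

# Governance check: flag any model lacking validation or a risk assessment.
flagged = [m.name for m in inventory
           if not m.independently_validated or not m.leakage_risks]
print(flagged)  # [] when every model passes the check
```

The same records can double as the documentation regulators ask for: which data feeds into which models, and for what purpose.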

Treat AI agents as untrusted identities inside the network. They get their own service identities, limited scopes, and must authenticate and be logged like any other process. No broad database access “so the model can answer anything.” Instead, build narrow, well‑audited APIs (e.g., “GetAccountBalance(customerId)”) and give the agent only those.
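The narrow-API pattern looks roughly like this. Everything here is hypothetical (the scope name, the endpoint, the in-memory audit log); the point is that the agent's service identity carries explicit scopes, the endpoint answers exactly one question, and every call, allowed or denied, is logged.

```python
AUDIT_LOG = []  # stand-in for a real audit sink

def get_account_balance(identity, customer_id, balances):
    """Narrow endpoint: one question, one scope, every call logged."""
    if "read:balance" not in identity["scopes"]:
        AUDIT_LOG.append((identity["name"], "DENIED", customer_id))
        raise PermissionError("missing scope read:balance")
    AUDIT_LOG.append((identity["name"], "OK", customer_id))
    return balances[customer_id]

# The agent gets its own service identity with only the scopes it needs:
agent = {"name": "chat-assistant", "scopes": {"read:balance"}}
balances = {"c42": 100.0}
print(get_account_balance(agent, "c42", balances))
```

Contrast this with giving the model broad database access “so it can answer anything”: here, a compromised agent can do exactly one thing, and the audit trail shows every attempt.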

For external LLMs, insist on enterprise terms that guarantee:

For on‑prem or private‑cloud models, treat them as critical infrastructure assets: hardened, segmented, with strict access control and regular security assessments.

The core shift for banks is to stop thinking of AI as a monolithic “brain” that naturally gets smarter, and instead treat it as a set of bounded, auditable components that:

Success requires the combined efforts of individuals, organizations, and governments to both deny attackers entry and impose meaningful consequences on malicious actors. Many of these measures may be out of scope for your team, but figure out what you can do in your environment and harden it as much as possible.