
A Tale to Make Your Blood Run Cold

On The Eve of the Big Cyber Wars?

For many of us in the Northern Hemisphere it is winter, so a good time to close the curtains, gather round the fire and tell stories that make the blood run cold and the hairs on the back of your neck rise in horror. And this is one such story.

It was a peaceful day in the international company’s computer operations centre until, at 13.07, the monitoring services detected that there were several simultaneous attempts to probe a non-existent workstation. Four minutes later, a VHDL server attempted to access a Google search. And, four minutes after that, external friends confirmed that they were seeing potential broadcasts from the company to known bad sites. (External friends are other companies who mutually monitor sites.) At 13.20 – 13 minutes after the first recorded incident –  instructions were issued that all the company’s sites should close down their IT activities. By 13.25 all external connections, including landline telephones, were closed down and all named machines were locked down. By 13.40 all on-site networks and machines were shut down – including printers and other intelligent peripherals – and remote users were being instructed to shut down. At 13.45 all the named representatives of a pre-defined Incident Response Team (IRT) left their remote sites for the central location and by 15.50 it was clear that all remote users were shut down.

At 16.00 the IRT had a summary of the situation. It said:

  • As of 2 hours ago all networks on all sites and all machines on those networks are off
  • All connections to the Internet and other networks are disconnected
  • We have no landline communications
  • We cannot communicate with our staff
  • Our customers and people in the field have started to ring mobiles

They then set out the purpose of the meeting:

  • To determine what we will do, and in what order, to understand the attack technically and to recover
  • To determine our communication plan
  • To inform the Board if we are unable to recover.

This was clearly a stressful situation: the whole company was depending on the IRT’s ability to get things running safely again. At 22.00 the stress became too much for the deputy leader of the team, and he was hospitalised (where he remained for a month). This provided a marker for later review: while the deputy was enormously talented technically, no one had thought to assess his ability to withstand such serious stress.

Three days after the attack (T+3) the IRT prepared a report for the board.

The team had determined that, 14 days before the attack, there had been what they described as an “immature” attack that compromised four networked workstations on two sites. These had been removed, and, at the time of the report, it appeared that little damage had been done. The same attacker was responsible for launching the major attack, in which at least six workstations and a server were polluted. There was a high level of probability that this attack targeted specific individuals who had particular roles in developing certain kinds of assets.

There was also a high probability that the asset database of the company’s third-party service provider had been used to guide the attack. There was some evidence that the attack also used the staff directory in conjunction with the asset database.

It was also highly likely, though not entirely certain, that standard system- and user-management tools deployed by the service provider (in particular, Symantec’s Altiris) were used as the vector of attack.

A significant issue was that the administrator of the configuration management database was a particular target of the attack. It seemed likely that the passwords for this role were among the items transmitted from that workstation before it was shut down. The system was subsequently air-gapped and was no longer accessible through the same attack vector, and, by the time of the report, there was no evidence of any unusual attempts to access the configuration management database. However, it was impossible to guarantee that this had not happened, and, consequently, the IRT could not guarantee that the codebase was intact.
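That inability to guarantee the codebase was intact is essentially an integrity-checking problem: without a trusted record of what every file should look like, tampering cannot be ruled out. The Python sketch below shows the general technique of comparing file hashes against a previously stored, trusted manifest; the paths and manifest format are illustrative assumptions, not anything from the report.

# Sketch of codebase integrity checking against a trusted manifest of
# SHA-256 hashes. Paths and manifest format are illustrative assumptions;
# this shows the general technique, not the company's actual process.

import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Record a hash for every file under the codebase root."""
    return {str(p.relative_to(root)): hash_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def compare(trusted: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Report files that changed, disappeared, or newly appeared."""
    return {
        "modified": [f for f in trusted if f in current and current[f] != trusted[f]],
        "missing":  [f for f in trusted if f not in current],
        "added":    [f for f in current if f not in trusted],
    }

if __name__ == "__main__":
    root = Path("codebase")                                   # hypothetical checkout location
    trusted = json.loads(Path("manifest.json").read_text())   # hypothetical trusted manifest
    print(json.dumps(compare(trusted, build_manifest(root)), indent=2))

The obvious limitation, and the one the IRT ran into, is that the comparison is only as trustworthy as the stored manifest: without a record made before the compromise, a check like this cannot prove that the codebase is intact.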

For at least the following 12 months, the company was regularly on the receiving end of significant targeted attacks. Thirteen were known to have succeeded to some extent, and 11 of these were attributable to the same entity as the initial attack. The attackers regularly targeted eight individuals in specific roles in the company, as this gave them the opportunity to corrupt the code base, access information on customers, and access information on individuals in high-threat territories. During this year, two of the targeted people left the company and the industry.

The attacks, while using recognised techniques, were fully customised to the target company, were industrial in scale, and were so well directed that they frequently created the effect of an insider attack.

Reluctantly, the company had to accept that it was impossible to build a wall strong enough to keep out a determined attacker while still continuing to operate. The speed at which the attacks kept coming made it impossible to counter all the threats simultaneously. Instead, the company set critical parameters for defending the network: the time it takes to detect and assess the impact of an attack, the limits that can be placed on the scale of cleaning up after an attack, and the defence against “insider” attacks.

And this brought in the issue of attacks on home environments, since, by 2012, there was no longer any separation between home and work computing on home-based machines. This potentially exposed employees’ families to threats. It also made it easier for the attackers to continue their activities.

We mentioned earlier that analysis found that certain individuals in defined roles were specifically targeted as a route for attacks; for example, they would be used to transmit code-corrupting bots. When their home computers and other devices were examined, bots were found there as well, clearly from the same source as those found in the company (there were identical code fragments and a very similar writing style). These bots can be described as “high-value attack assets”, and they had evaded leading-edge cyber-detection tools for at least five years. An analysis of a small sample of home computers showed a 100% correlation between people under attack at work and those with personality-profiled software on their home computers. There was also evidence of malware on a further 11 machines belonging to younger employees whose psychological and experience profiles indicated that they would develop into the roles being targeted.
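The attribution here rested on identical code fragments and a very similar coding style across the work and home samples. One crude way to quantify that kind of overlap is to measure how many raw byte fragments two samples share; the Python sketch below computes a Jaccard similarity over 8-byte fragments. The sample file names are hypothetical, and this is only an illustration of the idea, not the analysts’ method.

# Crude similarity measure between two malware samples based on shared
# byte n-grams (an illustration of "identical code fragments", not the
# analysts' actual tooling).

def byte_ngrams(data: bytes, n: int = 8) -> set[bytes]:
    """All overlapping n-byte fragments of a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard_similarity(a: bytes, b: bytes, n: int = 8) -> float:
    """Fraction of n-byte fragments the two samples have in common."""
    grams_a, grams_b = byte_ngrams(a, n), byte_ngrams(b, n)
    if not grams_a or not grams_b:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a | grams_b)

if __name__ == "__main__":
    # Hypothetical sample paths, for illustration only.
    with open("work_sample.bin", "rb") as f1, open("home_sample.bin", "rb") as f2:
        score = jaccard_similarity(f1.read(), f2.read())
    print(f"shared-fragment similarity: {score:.2%}")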

This malware “grooming” was traced back in some cases to at least 2006, ten years before the attack. And, even more revealingly, when one person, who had been tracked for at least three years, made a lifestyle/career change, the tracking stopped. There was evidence that malware was wiped from the home computers of people whose career development was not leading toward one of the interesting roles. When someone left the company, there was often unusual behaviour in the immediate period before they left, which might have been the launch of under-prepared attacks, and in many cases malware was removed from their workstations. When others left, there was activity showing that the attackers had not realised they were gone. The attackers displayed knowledge of the holiday dates, notice periods, and even sickness absences of their target people.

When someone joined in a relevant role, there was evidence that the attackers quickly targeted them to learn more about their activities within the company, and, when people retired, it appeared to be standard practice that their home computer was cleared of malware, presumably to protect the bots.

This long-term attack indicates significant investment on the part of the attacker. It also shows how the internet – not just the social media aspects, but also what we would consider “normal” activities, such as company web sites, press releases announcing appointments or products with named sources, papers at conferences, articles on news sites, and official blogs – gives anyone with resources the opportunity to build profiles of potential targets, even without breaking into corporate personnel databases.

Now this story was not told in front of a fire while the winds howled outside, but in daylight, in a lecture room, during a one-day conference on cybersecurity at the University of Southampton. The story-teller was Peter Davies from Thales e-Security, and, to be fair, he said it was a synthesis from several companies, some purely UK based and some operating internationally. He was careful not to point a finger at who might be the attackers, but he implied that the timescale, size, and frequency of the attacks indicated a large organisation, or perhaps a group of co-operating organisations. He also said that he felt that organised crime, if it was not already involved, would soon be active in this area.

This presentation was about planned attacks on large organisations. Another case study at the conference was about an attack on a small organisation: a charity running eight hospices, with a number of fund-raising shops manned mainly by volunteers, and around 400 staff. The computer network, which covered the hospices, shops, and administration, was supported by a third-party IT company. The presenter, Matt Argyle, had been in post for a matter of days, as head of a two-person in-house IT team, when a member of the care team received what appeared to be an email from the CEO asking why an attached invoice had not been paid. Naturally she clicked on the invoice (it was from the CEO, after all, and she had never heard the word phishing) and, a few minutes later, called the help desk to say she was getting strange messages when she tried to open files. The help desk was then overwhelmed with messages from other users about ransom notes appearing when they tried to open documents from the file server. Within an hour, large chunks of the network were out of use.

By that evening there was a replacement network up and running, but even three weeks later they were finding files that the virus had encrypted. Much of that day's work was lost, as back-ups were run only nightly, and the estimate was that the time wasted while staff were unable to work represented, conservatively, around £20,000-£30,000 ($30,000-$45,000) – let alone the cost of rebuilding the network and retrieving the encrypted documents. It didn’t help that, during the post-mortem, they discovered that the IT support company had not kept up to date with patching.

The charity trained all staff on the issues, added stronger email filtering, and instituted all-day incremental backups. It then ran a training exercise using fake phishing emails. Despite the training, 40 people (10% of the staff) opened a fake email, including the CEO, who knew about the exercise.
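The move from nightly-only to all-day incremental backups addresses exactly the gap that cost the charity a day’s work: if only the files changed since the last pass are copied, a short interval becomes affordable. Below is a minimal Python sketch of that idea; the directory names and the 15-minute interval are assumptions, and a real deployment would keep versioned, off-site copies rather than overwriting a single backup target.

# Minimal sketch of an all-day incremental backup loop: every interval,
# copy only files modified since the previous pass. Directory names and
# the interval are assumptions; real deployments would keep versioned,
# off-site (and offline) copies to resist ransomware.

import shutil
import time
from pathlib import Path

SOURCE = Path("shared_documents")   # hypothetical file-server share
DEST = Path("backup_incremental")   # hypothetical backup target
INTERVAL_SECONDS = 15 * 60          # run every 15 minutes

def incremental_pass(last_run: float) -> float:
    """Copy files modified since last_run; return the start time of this pass."""
    now = time.time()
    for src in SOURCE.rglob("*"):
        if src.is_file() and src.stat().st_mtime > last_run:
            dst = DEST / src.relative_to(SOURCE)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
    return now

if __name__ == "__main__":
    last = 0.0  # first pass copies everything
    while True:
        last = incremental_pass(last)
        time.sleep(INTERVAL_SECONDS)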

I said earlier that Peter Davies didn’t point any fingers at possible suspects, but, as I was writing this, the British Government, with what is assumed to be the implicit support of other governments, did just that, following an investigation by the UK’s National Cyber Security Centre (NCSC) into the June 2017 NotPetya cyber-attack. It is believed that the NotPetya malware was released as part of the continuing attacks by Russia against Ukraine’s infrastructure (see https://www.eejournal.com/article/20161027-cyberwarfare/) but quickly spread much more widely.

Shipping company Maersk had to rebuild its entire IT infrastructure of 4,000 servers, 45,000 PCs, and 2,500 applications. While they managed this in a frankly incredible ten-day effort, for those ten days they lost a huge amount of business, as their ships were unable to unload or load their containers.

The NCSC found that “the Russian military was almost certainly responsible for the destructive NotPetya cyber-attack of June 2017.” “Almost certainly” is their highest rating, so on February 15th the British Government put out a statement by the Foreign Office Minister for Cyber Security, which said, “The UK Government judges that the Russian Government, specifically the Russian military, was responsible for the destructive NotPetya cyber-attack of June 2017.” It went on to say, “The United Kingdom is identifying, pursuing and responding to malicious cyber activity regardless of where it originates, imposing costs on those who would seek to do us harm. We are committed to strengthening coordinated international efforts to uphold a free, open, peaceful and secure cyberspace.”

It was swiftly followed by a brief statement from the White House:

“In June 2017, the Russian military launched the most destructive and costly cyber-attack in history. … This was also a reckless and indiscriminate cyber-attack that will be met with international consequences.”

I don’t think that you can get a clearer declaration of cyber war than those two statements. Cyberspace is going to get nastier and nastier, and part of the cost of doing business for any organisation is going to be constant vigilance. It feels like those old westerns where the settler could not get on with growing his crops without also fighting a continuous defensive battle against the weather, bandits, Indians, and the cattle ranchers.

I forgot to mention that, late last year, various governments, including the US and UK, blamed North Korea for the WannaCry malware attack, which is believed to have hit more than 300,000 computers in 150 nations and to have caused billions of dollars of damage.

