Carl Philipp Gottfried von Clausewitz, the Prussian general and military theorist, wrote, “War is a mere continuation of politics by other means.” What exactly he meant by this is the subject of serious debate. Today, however, we are seeing another “continuation of politics by other means”, as cyber attacks move beyond data gathering and financial fraud and theft by criminals to attacks on physical systems, including elements of national infrastructures, by nation states or organisations closely linked to nation states – in short, cyberwarfare.
I was forcefully made aware of this at a recent conference – System Safety and Cyber Security – organised by the IET, in some ways the British version of IEEE. I’d originally signed up for the system safety sessions but drifted – fascinated – into the cyber sessions, driven in part by a keynote session on national cyber security strategy. Martyn Thomas, whose credentials would fill several pages and include stints in official capacities, was pessimistic about the future. He felt that, despite the British National Security Council declaring cybersecurity a Tier One risk, there is no national strategy; instead, the problem is being treated like food poisoning – where the approach is better hygiene and treating outbreaks – accepting it as part of living in a cyber world. Among the contributory factors, in his view, are: general acceptance of poor-quality software (where there is typically one error in every 30 lines of code); providing users with tools that encourage certain actions and then telling them not to use those features (embedded URLs and attachments in email, for example); and insecure IoT products. The recent Distributed Denial of Service (DDoS – an attack that brings a website down by swamping it with multiple requests) attack on the website of security journalist Brian Krebs, one of the largest ever seen, which brought down a major web hosting service, used messages sent from Internet-connected security cameras and digital video recorders (DVRs). These had been identified and hijacked by a malware product called Mirai, which scanned the Internet looking for IoT products protected only by factory-default usernames and passwords. When one was found, it was infected with software that turned it into a “bot” that regularly checked in with a control server. When Krebs exposed some bad guys, they turned their “botnet” on Krebs’s site to overwhelm it with messages. At peak, it is estimated that the site was being hit with 620 Gbps of traffic. The code for Mirai is now publicly available.
(This seems to me the perfect counter-argument to people who ask, “Why should I be bothered if someone hacks my DVR?”) Thomas, using this as an example of the weakness of security, said that manufacturers who ship devices with default usernames should be fined, and that the insecure IoT poses an existential threat to the Internet as a whole.
In contrast, the second keynoter, Ian Levy, was much more positive and upbeat. Levy is the Technical Director of the UK’s recently established National Cyber Security Centre (NCSC). This brings together a number of units, such as the Computer Emergency Response Team UK, that were previously spread across different government departments, and it is part of GCHQ – the UK’s version of America’s NSA. Its role is to “act as a bridge between industry and government, providing a unified source of advice and support on cyber security, including the management of cyber security incidents.” He disagreed with Thomas, saying that there is a national strategy – it just hasn’t yet been published. (Apparently it was due for publication in June but had to be held over until the dust settled after the Brexit vote.) He confirmed a frequent argument of Thomas’s – that software, unlike real engineering, never learns from its mistakes – by demonstrating that buffer overflow, a favourite way of causing damage to a program or system, was first identified as a problem as far back as October 1972. And it is still not fixed: buffer overflows remain a regular and frequent problem. In addition to talking about the work of the NCSC (see more at www.ncsc.gov.uk), he highlighted some of the common ways that organisations provide an open door for cyber attacks. Some of these will come up again later, but they include using administrator accounts for email and web browsing, connecting together different networks – particularly older ones – and giving users bad advice, such as telling them to change passwords regularly.
Every paper in the two sessions was worthy of detailed reporting, but I will attempt to synthesise the two days.
Firstly – who are the cyber attackers? There was a general agreement that there is a hierarchy and that the scene is fluid. Some actors appear and then disappear; the motives for action can vary from destruction or damage to information gathering, which often includes stealing industrial secrets.
State entities: there is agreement in the cyber community that national governments, despite protestations to the contrary, are active in cyber attacks. The Stuxnet attack on Iran’s nuclear processing plant is generally agreed to have been a joint exercise by the American and Israeli governments. Russia is almost certainly behind massive cyber attacks in Ukraine and the Baltic states. State entities have huge resources of both time and expertise, and their cyber teams have access to state-gathered intelligence, the academic community, and other partners. Except in cases of open warfare, like Stuxnet, their attacks are hard to attribute, and if the work is information gathering – replacing traditional spying – it may leave no trace in the target systems.
Semi-state entities: these are organisations often found in the developing world that are working for the state, usually with access to state resources and support but not acknowledged officially. They carry out similar activities to the state entities, but they may, for example, be engaged in commercial activities.
State-tolerated or -permitted entities: these often carry out activities where the state needs complete deniability. However, they enjoy informal state support and may even exchange personnel. As with the semi-state entities, they frequently have high levels of competence.
Issue-based entities: varying from environmental and similar organisations all the way up to Daesh. (Daesh appears to be the preferred name in the intelligence communities for ISIL – ISIS – Islamic State.) There is an enormous variation in objectives, competence, and even levels of rationality. Some of these, like Wikileaks, merely publish material that is embarrassing or controversial; others deface web sites as propaganda, but some are more destructive.
Criminal entities: again with a wide variety of competence, but they are becoming increasingly sophisticated, and, to quote one speaker, “They are running rings around the state.” The successful ones have massive resources. There is geographically widespread co-operation between them. Some provide contract services to other groups and draw on academic and commercial resources. They have moved the centuries-old crimes of ransom and extortion into the cyber arena, holding entire organisations’ networks captive until a ransom is paid, or merely using the cyber equivalent of the criminal’s threat: “Nice little web site you have here – shame if something nasty happened to it.”
Enthusiasts, hobbyists and nut-jobs: these are the stereotypical hackers, with a wide range of skills and motives. While some of them act innocently, they have penetrated systems that should be very secure. One speaker said that they were playing big-boy games and had to accept big-boy rules, and that the penetrated organisations had suffered a sense-of-humour loss and were reacting severely. Typical of these is Lauri Love, an electronic engineering student with Asperger’s syndrome. The US is seeking his extradition from Britain to face charges of data theft from NASA, the Federal Reserve, the Department of Defense, and the FBI, with a possible 99-year sentence.
Before we move to what these entities are doing – where are they getting their tools? Easily: they go onto the Internet and buy them. The sites are in what is called the Dark Net – or, more correctly, that huge area of the web that blocks indexing by Google. On these sites it is possible to buy malware, paying with PayPal or credit cards as well as Bitcoin, or to rent a DDoS attack – you specify the target and pay a rate based on how long you want it attacked. And the sums involved are not great – in the hundreds of dollars rather than the thousands.
A fairly standard approach to mounting a cyber attack has been developed, often called an Advanced Persistent Threat (APT), which uses social engineering first and then technology. Before the attack begins, the actor researches the target organisation, using its web site and social media to identify individuals, often trying to spot likely administrators. The next phase is spear phishing – sending the identified individuals emails that appear to come from others in the organisation or some other trusted source and that contain links to a web site or an attachment. The web site may be legitimate, but it will have been compromised in advance so that, when contacted, it can download malware; the attachment can appear to be a boring Word document but carry embedded macros that act inside the target computer. Word no longer executes macros by default, so the message encourages the reader to run the macro “for an enhanced reading experience”. If the spear has hit someone who uses an administrator account to browse the web or read email, the attackers have struck gold. They infiltrate the network and place malware in strategic places, together with multiple gateways through which to regain access, then go away and do nothing for several weeks. They return cautiously to see whether they have been detected; if not, they move from reconnaissance to action. What follows depends on what they want to do, which could be anything from detailed information gathering – downloading email archives, for example – to active destruction of infrastructure.
According to Oleh Starodubov, Digital Forensic Investigator in the Department of Information Security of the Security Service of Ukraine, who spoke over a Skype link from Ukraine, this approach is part of a continuing and growing attack on the Ukrainian infrastructure. The number of identified attacks rose from 79 in 2012/13 to 239 in 2014/15, and there is a strong probability that more have not yet been found. They all seem to have been initiated during the office hours of the UTC+3 time zone – which includes Moscow. Targets have ranged from diplomatic missions to Ukraine to power supply systems. There have been several attacks on power supplies, blacking out areas for several days at a time, with malware disrupting the internal phone system and the network control interfaces before breaking electrical connectivity. Operators could only watch helplessly as the lights went out all around them, with no way of communicating with each other. A big factor in delaying the restoration of power was that the entire control software had to be re-installed at multiple locations before it was possible to reconfigure the electrical supply network. If the SCADA (Supervisory Control And Data Acquisition) network had not been connected to other networks in the power companies, this attack would have been much more difficult.
Present in the room during the Ukrainian presentation was Andrey Nikishin, Special Projects Director and Head of Future Technologies at the Russia-based Kaspersky Lab. He spoke later and gave some examples of threats that Kaspersky has found, identifying humans as possibly the weakest link in a target company. In one instance, for example, a nuclear power station computer was infected by an operator inserting a USB memory stick into a control computer. He also pointed out how espionage has changed, quoting the latest James Bond film, where Q is no longer a “mad-professor” gadget pusher but a young, spotty-faced geek who says, “I’ll hazard I can do more damage on my laptop sitting in my pyjamas before my first cup of Earl Grey than you can do in a year in the field.” In another nuclear plant, someone with sysadmin authority wanted to watch video, so he loaded onto a control PC a media player that came with malware. The infected computer held 42,000 documents, including emails – which may have been transmitted to a third party, but no one knows whether they were or where they might have gone.
References to Daesh appeared regularly as a major topic at the conference. The group is well funded, mainly from selling oil, with income estimates ranging from $20 million a month to $1.5 million a day. Some of these funds are being used to hire specialist skills from Arabic-speaking countries, including computer skills. The public view of Daesh’s cyber activities has been limited to DDoS and other attacks on web sites, which, as we discussed, can simply be bought. More concerning are views that it may also be undertaking APTs. Press rumours that Daesh is planning to steal nuclear materials to make dirty bombs make great headlines, but why should it bother with that if it is possible to take control of an entire power station remotely? Daesh could then threaten to blow the station up unless certain conditions are granted, simply disconnect it from the grid, or, in what one has to consider the worst case, blow it up without any warning.
This was briefly covered in a presentation on the way in which systems are being developed for Hinkley Point C, a new nuclear power station for the UK. The plant is being designed and run by the French state-owned company Électricité de France (EDF) and is being financed by EDF and two Chinese state-owned companies. There is a distinction between physical security – which has always been important – and cyber security, but there is also the issue of reconciling the requirements of safety with those of security. The speaker expressed confidence that everything was under control, but there were mutterings – at least in my part of the audience.
There were many suggestions on how to protect against cyber attacks, but one of the significant problems is that decisions have already been made that don’t just compromise security but blow vast holes in it. The starting point is people. There are countless reports of even aware people falling for fake-email phishing and spear-phishing attacks. People with admin powers to reconfigure and adapt systems and networks shouldn’t use the same account for mail and web, but they do. Organisations are fallible. They decide to connect systems for different activities together, often for very good operational reasons, only to create a happy hunting ground for the cyber criminal. Obviously, connecting the SCADA network to a business network makes sense by providing management with timely reports, but it then opens SCADA to malware from people’s personal activities. Similarly, connecting the security camera network can be seen to have some advantages – but has every camera in the network had its password reset before going live? The list is endless. When did you last back up your personal devices? Is your Wi-Fi router running the latest firmware, and have you changed its passwords from the default settings?
While I have been writing this, either I have become more aware of the issues or things have been getting worse.
The Democratic National Committee in the US has been hacked. (And other, linked, organisations and individuals as well.) But who did it? And why?
It has been suggested that the Mirai malware and its variants, described by security expert Bruce Schneier as simple and childish, are now resident in half a million devices after the code was published. And there is no way to clean the devices.
After an Austrian supplier to the aerospace industry lost $59 million to a “fake president” attack, in which an employee was fooled into transferring funds to a fraudulent bank account, aerospace companies are looking again at the security of their systems, in which suppliers are closely linked to manufacturers’ IT systems to share information.
Even our own Bryon Moyer is worried about the security of fitness devices.
Irritating commercials are appearing on British TV for Hive, a British competitor to Google’s Nest: “The clever way to control your heating and hot water from your phone.” Having browsed the website, I am still unsure how secure the Hive hub – the link between the controls, the heating system, and your broadband router – is.
At the end of the two-day conference, many of the people I spoke to, who have in-depth experience in system safety and related issues, were distinctly downbeat. The aggressors in cybersecurity, including the ones who target western governments, are always going to have the advantage. They choose the ground on which to attack, and they are investing heavily in the next generation of tools. The vast IoT will continue to grow with unprotected edge devices, offering ever more recruits for botnets.
We just have to hope that the whack-a-mole approach to cybersecurity doesn’t leave us too far behind.
Postscript
While writing the final draft of this article on October 21, I became aware of significant delays on the Internet. It has emerged that there was a major DDoS attack, using the Mirai malware discussed earlier, on Dyn, a company whose DNS services direct a great deal of Internet traffic. Since for much of the media this was a new phenomenon, there have been acres of discussion by “experts” on how this could have happened, with shadowy groups of criminals being blamed for building a botnet just for this exploit. Nowhere have I seen something that I find much more disturbing: the attack could have been carried out by a single disgruntled person renting a DDoS run from the half-million Mirai-resident IoT devices for a few hundred dollars.
@Dick – please provide references to back up your assertion that released product software “Among the contributory factors, in his view, are: general acceptance of poor quality software (where there is typically one error in every 30 lines of code)”
This appears to be bigoted software-engineering bashing that probably goes well with bigoted EE types. Broad studies have shown that specification, design, coding, and integration errors range between 15 and 50 defects per KLOC across nearly all languages. I’m pretty certain this includes Verilog and VHDL written by EEs, too.
More specifically: http://www.isixsigma.com/topic/industry-average-defect-rate/…
For system software, the typically observed rates are:
Requirements: 2–3 defects/KLOC
Analysis: 3–4 defects/KLOC
Design: 5–6 defects/KLOC
Coding: 14–16 defects/KLOC
Unit testing: 4–5 defects/KLOC
System integration testing: 2–3 defects/KLOC
This, however, does not state the errors per KLOC (1,000 lines of code) remaining after product delivery, which is certainly not one error in every 30 lines of code.
These metrics are easily obtained from rigorous software life-cycle measurement, based on frequent check-ins during the software product life cycle.
From blue-wire ECOs to board and chip revs, EEs have their own issues, with similar numbers.
So … please stop this senseless Software Engineer bashing.
I, and many other engineers, do both hardware and software development … error rates are very similar in both, from initial product specification to post-first-ship ECOs. I’ve debugged a number of arrogant EEs’ boards while bringing up systems software on them … they are certainly not perfect either — and that would be better documented if they were forced to check in every minor design change along the development path.
As for the IoT devices that became part of the botnets, it was not coding errors that allowed access.
It was sales, marketing, and engineering specifications for the product that did not include strict security as a “must have” requirement. As a result, the project was poorly staffed, and did not include an experienced security advocate/engineer on the design team.
Flawed by design … followed by flawed deployment by customers.
I do wish that, before you accuse me of software-engineering bashing, you would read what I have written. (For your information, I have a lot of time for proper software engineers but save the bashing for code monkeys.)
1) It was not my claim. It was made by Martyn Thomas, who is a vastly experienced and authoritative software and system expert.
2) One error per 30 lines of code is 33 per KLOC, and I think that was Martyn’s estimate for all software, not for software written by skilled engineers, which is only a tiny proportion of the code written.
3) I never said that it was code in the bots that caused the problem. You are making sweeping claims, from a position of ignorance, about how the security cameras and DVRs were developed. The problem was a firmware-held password that was preset in the factory and never changed.
4) I see you feel no qualms about bashing EEs. 5) Can you tell me what tools you use in your development tool chain to ensure that your code is pristine?
Can you match the code quality of the Tokeneer project?
Dick,
Like it or not, you bear responsibility for what you choose to quote or re-print from other sources as fact.
15–50 per KLOC are errors fixed in development, of which 33 is the median. The effective claim you made by quoting Thomas’s claim as fact was to wrongly assert that this is the typical error rate in released code.
I’m certain we can find a few pieces of poor code in the market that might come close to that … but those are the less-than-1% of products with software. There are certainly a few equally poor hardware designs that have been shipped and failed in the market.
You choose what you quote/reprint … I suggest you take some journalistic responsibility and choose to quote only verifiable facts that do not disrespect other professions out of bigotry.
Quoting and perpetuating the bigotry of others, claiming some vindication as a journalist doesn’t cut it. Those bigoted claims become your words when you choose to exercise such poor standards of conduct.
So again … I challenge you to back up this false bigoted claim you made, using Thomas’s assertion — show us where this is industry wide as a problem, at the level of 33 per KLOC in released production code:
@Dick – please provide references to back up your assertion that released product software “Among the contributory factors, in his view, are: general acceptance of poor quality software (where there is typically one error in every 30 lines of code)”
The number of errors fixed from specification to release integration ARE NOT errors in released products. Nor are they in ANY WAY a number that can be rightly used to bash software engineers as you and Thomas choose to do.
I don’t think you understand. I don’t have to justify the claim made by someone whose speech I report. I didn’t report him as saying those errors made it to production because that is not what he said, although that is what you chose to read.
I don’t understand why you have to be offensive in your comments, assigning motives to me that I don’t have. As I said before I have great respect for real software engineers. Thomas is a real software engineer. He has contributed massively to the production of tools like SPARK, that make it possible for other software engineers to produce really high quality code.
Dick … we have a few hundred/thousand years of history where people refused to be held accountable for racial, religious, place of origin, and other forms of bigotry.
Many also held the false belief that the disparagements were true/fact because highly respected members of the community declared them so, as you have done by claiming that “1 error for 30 lines of code is 33 per KLoC and I think was Martyn’s estimate for all software, not software written by skilled engineers, which is only a tiny proportion of code written.”
You openly claim that only a tiny proportion of code written makes it to product release with less than 33 per KLOC. And that only a tiny proportion of software engineers are skilled.
Bigotry … without any proof to stand behind YOUR assertion, or real facts to back it up.
I’m certain that the software engineers in aerospace, transportation, banking, security, networking, operating systems, compiler tools, point-of-sale systems, ASIC/FPGA tool chains, office (word/spreadsheet/presentation) systems, and even most successful high-income game software are ALL significantly high-quality producers, as teams from design to QA, whose work exceeds your bigoted claims.
You are right … you can choose to be a bigot … but you MUST also accept that you will be called out for it. Just as many learned men chose to believe that their bigotry was justified.
I’m pretty sure the term bigot applies, when you purposefully demean the quality of work produced by the vast majority of software engineers on the planet, by claiming “I think was Martyn’s estimate for all software, not software written by skilled engineers, which is only a tiny proportion of code written.”
Full Definition of bigot
: a person who is obstinately or intolerantly devoted to his or her own opinions and prejudices; especially : one who regards or treats the members of a group (as a racial or ethnic group) with hatred and intolerance
It’s such a tragedy, and a dark stain upon our history the way software engineers have faced violence and discrimination. Remember when software engineers couldn’t use the same drinking fountains as other people? Remember when the police turned dogs and hoses on software engineers?
And software engineers are still subject to the fear of police violence and discrimination in housing and employment.
It wasn’t that long ago that software engineers weren’t allowed to marry whoever they wanted. And now lots of state legislatures are trying to amend non-discrimination laws so they can continue to discriminate against software engineers in the name of their religion. Some people want to legislate where software engineers are allowed to use the bathroom.
It’s tragic the way law enforcement demonizes and spies on communities of software engineers because some software engineers have committed terrorist acts.
I hate that one of the people running for president of the US has called software engineers rapists and criminals, said we should deport software engineers, said we should build a wall between us and software engineers, said a federal judge couldn’t possibly do a good and unbiased job because members of his family are software engineers. I’m disgusted at the way he constantly demeans software engineers and brags about sexually assaulting them.
… or maybe none of this has happened and it’s a little bit ridiculous and insulting to compare someone’s thoughts about software errors to the actual harm suffered by actual marginalized groups.
@Laura – implying the vast majority of software engineers are “unskilled” and produce highly buggy code is not a fair discussion about software quality and the reasons for it.
There are similar slights against women engineers, maybe all of them are also unskilled, except for their exceptional talents of sleeping their way to the top of the management chain.
Or is it you want to side with the boys, and keep the witch hunt against software engineers going to provide cover for your own career?
Mock it if you may, but the reality in the work place needs some adjustment, rather than open disrespect claiming software engineers are rarely skilled.
I’ve mentored enough women engineers against that same bigotry.
I’m pretty sure the women are claiming they are being marginalized with the equal pay and glass ceiling marginalization … I’ve seen the same with talented software engineers in development groups dominated by EE’s and ME’s.
So … shall we ignore it for all groups in the work place, and let the bigotry proceed?
In case you haven’t been in industry lately … it’s rare to find a woman EE or ME … about half software engineers are women, so in the work place, these two forms of bigotry are often combined.
Just to clarify, I’m not and have never been a “woman engineer.” Prior to coming over to EE Journal, I worked in civil rights law.
And just to clarify, your relationship with Kevin and the editorial board for this forum is?
The point is that bigotry takes many forms … and is never just.
EEs and MEs attacking software engineers is just another form of “scientific racism”, with the assertion that somehow their degree, experience, and talents are far superior to those of the lesser software engineers.
Dick crossed the line, and clearly branded himself a bigot by his own words.
You and the rest of the EE Journal staff have a clear choice between right and wrong here … and can side with Dick’s flavor of Scientific racism and support his false claims against software engineers.
Or publish an open apology about why this is not accepted at EE Journal.
Kevin Morris is my father. I’m on the editorial staff for EEJ.
Having worked as a lawyer, I know a thing or two about having one’s occupation disparaged. Perhaps you are familiar with the hundreds of lawyer jokes and comments about ambulance chasers and “first we kill all the lawyers”? I get it. But I’ve also worked on behalf of many people who are the legitimate victims of bias and it’s really not the same thing. People who have been victims of horrific violence (or who have lost their homes, jobs, or medical care because they belonged to a marginalized group).
When you allude to “scientific racism” you’re talking about a movement that sought to justify eugenics and sterilizing people against their will. If you think your challenges as a software engineer are like that, I think you lack a sense of perspective.
So, your stand is that it is absolutely OK to demean software engineers, because it doesn’t hurt anyone and is just a joke that nobody believes?
And that doesn’t show up in pay checks, or career advancement either?
Good luck with that … and tell that to the many hard working women software engineers with a family to support.
Just because someone isn’t a holocaust victim doesn’t mean they are not being harmed in other REAL and TANGIBLE ways by other, lesser forms of bigotry.
The basis of “scientific racism”, in a nutshell, was some bigots declaring themselves experts with a status well above their victims and using false statements/science to justify taking away their victims’ rights – to justify slavery and other injustices.
That is what Dick is doing: asserting his position of experience as a respected journalist of EE Journal to falsely claim that the vast majority of software engineers are unskilled and produce poor-quality products. When questioned on these false assertions and asked to provide proof … he simply doubled down and basically said it’s true because HE SAYS SO.
And you defend that …
I don’t think “bigotry” is a valid issue here.
First, editorially, I don’t see Dick or Dick’s article as disparaging software engineers. He is clearly reporting what a keynote speaker said during a keynote. If Dick were reporting on a political campaign, for example, and quoted a candidate, I would not hold him accountable for what the candidate said, and I’d expect him to report it accurately.
Going a step farther, I don’t see the original speaker’s quote as disparaging software engineers. Saying that we have a culture that practices “general acceptance of poor quality software (where there is typically one error in every 30 lines of code);” does not seem to me like an attack on software engineers. I don’t believe that saying that the public accepts poor quality software is a condemnation of the community of programmers. People accept poor quality in a lot of things, and sometimes “poor quality” is still the best available option.
In every engineering discipline there are mistakes and quality issues. In most of them, these quality issues can manifest themselves in terrible consequences such as loss of life and enormous property damage. Software engineering is a relatively new and immature art. I knew this when I decided to become a software engineer. The world also is in an era where we require an enormous amount of new software. The gap between the amount and quality of software we need, and the amount and quality of software we as a community of software engineers can produce is enormous. As a result, today, software engineering is under more pressure than any other segment of engineering. And, we are still all learning our craft. Software is the single most complex thing ever produced by humans, and it will take a long time for us to learn how to make it anything close to perfect. Yet, we cannot afford to just stop and wait until we get it right.
Adding fuel to the fire, unlike many other professions, there is no community enforcement of standards to practice software engineering. Doctors and lawyers maintain rigorous professional standards for education, certification, and peer review. If you don’t meet those standards, you can’t practice the profession. Any twelve-year-old with a laptop can declare themselves a “software engineer”.
Finally, I don’t believe that disparaging a career choice, or a community of professionals is remotely the same as bigotry. We all decided to become software engineers, lawyers, whatever. We made those decisions while fully aware of the public perception of our chosen profession. We made those decisions with the knowledge that the choice carried a large weight of social responsibility. Nobody was ever born a poor, underprivileged, misunderstood software engineer. I am proud of the work I did as a software engineer, and I do not think looking critically at the state of software engineering constitutes bigotry in the least.
@Kevin … it’s your journal … you can defend Dick if you want.
What you say is almost true … except for when Dick doubled down with:
“I think was Martyn’s estimate for all software, not software written by skilled engineers, which is only a tiny proportion of code written.”
Again … show me where some majority of released code has uncorrected faults at a rate of 33 per KLOC and is of poor quality.
This is not TRUE anywhere in the industry as a norm; most of the following markets have rigorous controls from design to deployment in the field, many with mandatory government-level reporting of faults. Certainly not in medical, banking, aerospace, flight-control systems, transportation systems, FPGA/ASIC tools, software development tools, operating systems, point of sale, or in ANY PRODUCT THAT IS SUCCESSFUL.
Again … back up this claim that everything is of poor quality and was written by unskilled software engineers, who for the most part are highly skilled in their field with a four-year degree in either Computer Engineering or Computer Science.
And yes, there are a few self-taught people with English degrees, or no degrees, working in both software development and hardware development … but they are certainly not recognized as trained engineers with a vetted formal degree in the field of practice. These are certainly not the vast majority.
@Kevin — ” I do not think looking critically at the state of software engineering constitutes bigotry in the least.”
I agree, as long as the criticism is disciplined and without agendas. “Critically” implies objectively, with clear facts and guidelines, which are absent in this case.
And when challenged, responding critically means responding with supporting facts … responding with “because I said so” is bigoted.
The vast majority of practicing EEs, MEs, Computer Engineers, and Software Engineers are considered highly skilled, but are also unlicensed. Licensing is mandatory in certain positions, but not universally required.
Contrary to your statement, both Computer Engineers and Software Engineers have formal licensing available.
http://insight.ieeeusa.org/insight/content/careers/97473
Not having a license is not the same as being unskilled … it just means you cannot legally sign certain regulated documents that require a license.
I like to think that EE Journal belongs to all of us as a community. We value your participation here. In past comments, you’ve even compared members of our editorial team to Nazis, and we didn’t delete your comments.
As for “defending Dick”, hmmm… He and I usually disagree. If we both had the same ideas and views, one of us wouldn’t be necessary.
I don’t need to back up a claim that “everything is of poor quality and was written by unskilled software engineers” because I don’t see anyone actually making that claim – or anything close to it.
Looks to me like you strongly disagree with Martyn Thomas’s statement that “…there is typically one error in 30 lines of code.”
That’s fine. You’re free to disagree with Martyn. You’re free to post that disagreement here. I don’t know the scope of code Martyn was talking about, and I haven’t done studies on defect rates in various types of software, so I’m not well qualified to comment on that.
I don’t agree with your assertion that Dick, by simply accurately reporting what Martyn said, is somehow “bigoted software engineering bashing”.
Calling Dick “bigoted” and then expanding that to the whole community of EEs (“that probably goes well with bigoted EE types”) seems out of line to me. While I don’t see anybody above directly saying bad things about software engineers or making judgments on their character (other than implying that software typically has a larger number of defects than you claim), I do see you disparaging Dick and the entire community of EEs with this “bigoted” label.
I’m a software engineer, and I was not offended. I understand that each of us has a different tolerance for being offended, however. For me, debating whether software has 3 defects or 30 per thousand lines of code – particularly when we haven’t defined what type of software we’re talking about or at what stage in the development process, really doesn’t seem like “bashing” anyone.
Saying that women engineers advance by sleeping with their bosses is bigoted, even when giving dozens of examples where it was true. The explicit goal of the statement is to disparage women engineers, not to create any valid critical assessment of women engineers skills.
Strongly asserting that software engineers are unskilled and produce low-quality software with high error rates is equally bigoted, even with a few valid examples. Doing so with a false figure of 33 faults per KLOC in released code is certainly more damning.
I started my career in banking and financials while going to school for my degree … a highly structured environment that did not tolerate errors from design to production release, with extensive testing, validation, and QA from design conception to production rollout. Errors after release have regulatory-mandated disclosure.
I’ve worked in medical, with the same tight controls and regulatory oversight, for both hardware and software changes/faults.
I’ve worked for nearly every major large and mid-level server vendor from Amdahl to Sun Microsystems … extensive review and fault management from design to field deployment, with regulatory oversight for some markets, for both software and hardware changes.
I’ve also worked for a few small shops without any significant oversight … that’s rare … most are really worried about company name and quality.
Yeah … any high school kid can publish an Android app, and in some cases an Apple app, with little to no oversight.
But for real products … with a significant number of jobs and investor money on the line … there is always a significant process in place inside the company to ensure quality and market reputation, starting with deliberately high-quality hires, all the way to high-coverage testing and QA prior to release.
That experience solidly refutes “I think was Martyn’s estimate for all software, not software written by skilled engineers, which is only a tiny proportion of code written.”
If we are critically assessing the state of the industry … then provide facts to back up this claim.
Continuing to double down with “I said so” is bigotry.
Full Definition of bigot
: a person who is obstinately or intolerantly devoted to his or her own opinions and prejudices; especially : one who regards or treats the members of a group (as a racial or ethnic group) with hatred and intolerance
@Kevin — There are multiple statements, all with the same target and the same false assertion. Bashing software engineering as not “real engineering”, one that “never learns from its mistakes”, is not critically looking at the problem … it’s clearly singling out a particular sub-group to brand as a flawed class of engineers.
“He confirmed a frequent argument of Thomas’s – that software, unlike real engineering, never learns from its mistakes – by demonstrating that buffer overflow – a favourite way of causing damage to a program or system – was first identified as a problem as far back as October 1972. And it is still not fixed. Buffer overflows are a regular and frequent problem.”
There has been a three-decade effort to clean up buffer overflows in legacy code, tools, and libraries: code that was written before the Internet era and global attacks.
http://www.cert.org/secure-coding/tools/compiler-enforced-buffer-ov…
So the claim that software engineers never learn from their mistakes is just that: a targeted, bigoted attack.
We have also known about ESD (electrostatic discharge) failures since before 1972 … yet even to this day products continue to have excessive ESD failures.
Do we also attack EEs as a class and group, as never learning from their mistakes?
We have known about fatigue failures in mechanical systems for over 100 years, yet we produce mechanical systems today that fail from preventable fatigue … even in aviation, where this is extensively studied, with clear, strict regulatory practices. And things still fail. Do we attack all mechanical engineers as never learning from these mistakes?
Kevin … it’s not just a single choice of words; it’s a clear, specific attack on software engineers, who are being held to a standard that every other engineer isn’t.
That is bigotry … deny it if you like … but the assertion is clear in statements like “that software, unlike real engineering, never learns from its mistakes”.
WTF is “real engineering”? Why is software engineering being targeted with a higher standard?
Bigotry
You said “While I don’t see anybody above directly saying bad things about software engineers or making judgments on their character,”
I differ … that prose wasn’t marked as a quotation, so the choice of words was most likely Dick’s. He could very easily have used words that didn’t demean software engineers as not being real engineers. The entire article is about social-engineering attacks that were successful, like passwords and phishing.