Let's face it, software is broken and it has been for a long time. There are some programs that are damned near bulletproof, but they are used in things like the old Space Shuttle and other military and aerospace equipment that absolutely must work right every single time. The Shuttle is a good example of that reliability.
Even though computer technology has made leaps and bounds since the first Shuttle flew, NASA didn't want to upgrade the computer hardware because the software was rock solid and had thousands and thousands of hours of testing and flight use behind it. Upgrading the computers would have required rewriting all of the software with no guarantee it would work properly with the new hardware. All of the work the software engineers had put into the original software would have been thrown out. So even though the computers on board the Shuttle were the equivalent of an old Radio Shack TRS-80 in computing power, they were proven designs, and it was known with certainty that they would work with the software, and work every time.
Another example is our land-based nuclear arsenal. The computer systems are an ancient design, using 8” floppy disks for launch programming and authentication. Their very age means a new programming bug is unlikely to show up, and it also makes them effectively impossible to hack. (There are no Internet or other network connections to the launch computers, and it's unlikely any present-day hacker even knows the programming language used for those systems.)
The two examples above are extreme cases, but they show that it is possible to create 99.999% flawless programming code. The downside: it's very expensive to do so.
Does that mean we're doomed to a life using barely adequate software in the rest of the world? Probably.
I have seen the “It works good enough for now, so ship it!” attitude at my place of employment. The engineers usually want to spend more time making it right, but the marketing department wants the product out there yesterday. That has come back to haunt us more than once over the years as both known and unknown software bugs reveal themselves in the field, making our product more difficult, if not impossible, to use properly.
That attitude also seems to exist with many of our more critical computer systems that control everything from banking to traffic control to utilities.
It’s hard to explain to regular people how much technology barely works, how much the infrastructure of our lives is held together by the IT equivalent of baling wire. It's kind of scary to think that just about every computer on the 'Net, either directly or indirectly, is vulnerable. About the only machines that aren't are those isolated from the 'Net. And that still doesn't take into account the gawd-awful operating systems and programs that don't work the way they're supposed to. As Glenn Reynolds puts it, “If houses were built like software, one woodpecker could destroy a city.”
It was my exasperated acknowledgement (sic) that looking for good software to count on has been a losing battle. Written by people with either no time or no money, most software gets shipped the moment it works well enough to let someone go home and see their family. What we get is mostly terrible.
This is because all computers are reliably this bad: the ones in hospitals and governments and banks, the ones in your phone, the ones that control light switches and smart meters and air traffic control systems. Industrial computers that maintain infrastructure and manufacturing are even worse. I don’t know all the details, but those who do are the most alcoholic and nihilistic people in computer security. Another friend of mine accidentally shut down a factory with a malformed ping at the beginning of a pen test. For those of you who don’t know, a ping is just about the smallest request you can send to another computer on the network. It took them a day to turn everything back on.
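To give a sense of just how small a ping really is, here's a sketch (in Python, purely as an illustration, not anything from the pen test described above) that builds an ICMP echo request packet by hand rather than sending it, since actually transmitting one requires raw-socket privileges. With no payload, the entire request is just eight bytes:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad to an even number of bytes
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold carry bits back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"") -> bytes:
    """ICMP echo request: type 8, code 0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum zeroed
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

pkt = build_echo_request(ident=0x1234, seq=1)
print(len(pkt))  # 8 -- the whole request, with no payload, is 8 bytes
```

Eight bytes on the wire was enough, in that story, to take a factory offline for a day.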
I do have two computers here at The Manse not on the network that I use for such things as word processing and a few amateur radio applications. They both use Linux for an OS. Even then they aren't entirely secure, because I use USB keys to move files from them to other machines that are on the network. Even when the USB keys are 'erased' or reformatted after I use them, that is no guarantee that some bit of malicious software didn't make its way onto the non-networked machines by way of those keys. It's just less likely. The risk still isn't zero, nor will it ever be.
Still, I would be willing to wait longer and pay a bit more to get software that actually works well, isn't buggy (at least for the features I use), and doesn't leave my computer vulnerable to exploits by crackers.