Computer Reliability

Computer Reliability in Black Mirror: Hated in the Nation

Chapter 8 covers the various ways computer systems can be unreliable and the impact these failures can have on people. In Black Mirror's "Hated in the Nation," the drone system does not suffer a straightforward software defect; it is compromised through security weaknesses. What is interesting about the case Black Mirror presents is that the failure of the ADI drone system results from two factors working together: a rogue ex-employee and a government-mandated back door. Beyond the obvious security implications, this is an example of how a computer system is only as reliable as its weakest link. The back door gave a malicious attacker an opening to seize control of the system, and his prior employment gave him the tools and knowledge he needed to execute the attack, circumventing the drones' onboard security.
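
To make the weakest-link point concrete, here is a minimal sketch in Python. It is entirely hypothetical - the episode never shows Granular's actual code, and every name, key, and function below is invented for illustration. The point is the OR in authorize(): once a second, static acceptance path exists, the system's effective security drops to that of the weakest accepted credential.

# Hypothetical sketch (not from the episode): a strongly encrypted
# command channel whose mandated back door becomes its weakest link.
import hmac
import hashlib

# Strong per-operator secret, assumed to be rotated regularly.
OPERATOR_SECRET = b"rotated-operator-secret"

# Government-mandated master key, baked into every unit and never
# rotated. Anyone who learns it - e.g., a former employee - bypasses
# everything else.
GOVERNMENT_BACKDOOR_KEY = b"static-master-key"

def sign(secret: bytes, command: bytes) -> bytes:
    """Compute an HMAC tag for a command using the given secret."""
    return hmac.new(secret, command, hashlib.sha256).digest()

def authorize(command: bytes, tag: bytes) -> bool:
    """Accept a command if it carries a valid operator tag OR a valid
    back-door tag. The OR is the weakest link: the static key now
    bounds the security of the whole system."""
    if hmac.compare_digest(tag, sign(OPERATOR_SECRET, command)):
        return True
    if hmac.compare_digest(tag, sign(GOVERNMENT_BACKDOOR_KEY, command)):
        return True
    return False

# An attacker who knows only the back-door key can issue any command:
cmd = b"retarget-swarm"
assert authorize(cmd, sign(GOVERNMENT_BACKDOOR_KEY, cmd))

No amount of "military-grade encryption" on the operator path helps here, because the attacker never has to touch that path at all.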

In the post-chapter interview, Avi Rubin explains how digital voting systems can actually be less reliable than paper systems because they present many more opportunities for tampering. Similarly, the ADI system, despite its military-grade encryption and specialized tooling, could be less reliable than a simpler alternative that would not put so many lives at risk. The episode mentions that the ADIs supplement the remaining bee population, implying some real bees survive on the brink of extinction - would nursing that remnant back to a stable population have been safer and less intrusive?

Perhaps another potential source of reliability errors, though not fully explored in the episode, is the national ID system. The text gives an example of how the NCIC has led to false arrests because of data entry errors. A nationwide database carries the same potential for errors in entry or access, and at the scale Black Mirror imagines (tracking people's exact positions), those errors could cause far larger problems than false arrests. For the antagonist, this database was his means of locating targets in a general area, after which the bees could finish the job. Even with the back door in the bees, this system should never have been accessible in a two-way manner - the creators failed to ensure that the government system could not be exploited by an attacker, as the sketch below suggests.
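
As a hedged illustration - the episode never shows the actual architecture, so every class and field here is an assumption of mine - one way to prevent this kind of two-way exploitation is to expose the drones only to a narrow, read-only facade with no path back into citizen records:

# Hypothetical sketch: restricting a tracking database to one-way,
# narrowly scoped reads. All names and fields are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class PollinationZone:
    """The only data an ADI legitimately needs: where to pollinate."""
    zone_id: str
    latitude: float
    longitude: float

class CitizenDatabase:
    """Full national ID records - should never be reachable from
    field units like the ADIs."""
    def __init__(self):
        self._records = {}  # id -> {"name": ..., "location": ...}

    def locate_person(self, name: str):
        # The dangerous capability: exactly what the attacker needed.
        ...

class ADIFacade:
    """Read-only facade exposed to the drones. It can hand out
    pollination zones but holds no reference to CitizenDatabase,
    so a compromised drone cannot query people by name."""
    def __init__(self, zones: list[PollinationZone]):
        self._zones = tuple(zones)

    def next_zone(self, index: int) -> PollinationZone:
        return self._zones[index % len(self._zones)]

With this separation, even a fully compromised drone can only request pollination zones; the capability the antagonist needed - looking people up - simply is not reachable from the field units.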

Additionally, Sjolberg's work can be analyzed using the Software Engineering Code of Ethics (software was not the entirety of the work he performed, but the code is still relevant). It is clear that he could have done better. Sjolberg violates, albeit unintentionally, clause 1.03: his work harmed the public despite his good intentions. It is reasonable to expect him to take extraordinary measures to test and validate the ADIs - after all, these are robots released in massive quantities into public spaces. Sjolberg failed to adequately ensure the security and safety of his system, which led to thousands of deaths. This also violates clause 3.10: Sjolberg did not test the software adequately enough to catch the path by which a malicious actor could gain control of the entire system. While Sjolberg was not entirely at fault, the problems with the ADIs could potentially have been prevented by better development ethics.
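
To suggest what "adequate testing" under clause 3.10 might have looked like, here is a hypothetical negative-path test against the authorize() sketch from earlier (assume that sketch is saved as adi_auth.py; the file name and both tests are my own invention, not anything from the episode):

# Hypothetical adversarial tests, runnable under pytest.
from adi_auth import GOVERNMENT_BACKDOOR_KEY, authorize, sign

def test_forged_tag_is_rejected():
    # Baseline: a tag built without any key must be refused.
    cmd = b"retarget-swarm"
    assert not authorize(cmd, bytes(32))

def test_no_undocumented_acceptance_path():
    # This test FAILS against the back-doored authorize(), which is
    # the point: adversarial testing surfaces the second acceptance
    # path before deployment instead of after a catastrophe.
    cmd = b"retarget-swarm"
    assert not authorize(cmd, sign(GOVERNMENT_BACKDOOR_KEY, cmd))

Testing only that legitimate commands succeed would have passed; it is the attacker's-eye tests that clause 3.10 implicitly demands.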

Written by: Dylan