As we approach National Cyber Security Awareness Month, we kick off our Author Q&A series with Courtney Bowman, John K Grant and Ari Gesher, who along with Daniel Slate, authored The Architecture of Privacy, published this month by O’Reilly Media. In this practical guide, the authors describe how software teams can make privacy-protective features a core part of product functionality.
On October 6, Safari will host an author talk in Washington, DC with Bowman, Gesher, and Grant. If you’re in town and can join us, RSVP at the link below before all the spots are filled:
Before our Q&A, the authors had a few words on how privacy architecture relates to the concept of information security:
For the purposes of the book, privacy protection is mainly about regulating authorized access to and use of data. Information security (infosec for short, or cyber-security), which is primarily about stopping unauthorized access to information, is what makes privacy protection possible. Without controlling unauthorized access, building a privacy protection regime for authorized users is moot because any protection that can be easily circumvented is no true protection at all.
Although privacy and security implementations guard against different threats, they make use of many of the same technologies: encryption, auditing, logging, access controls, separation of concerns, alerting, active monitoring, and investigation. It is therefore understandable for an organization that has not thought through the underlying distinctions to mistake privacy for security. But an architecture is an arrangement of parts that constitutes a whole with desired properties, and the desired properties for protecting privacy and for securing against unauthorized access are not the same. Each requires its own design considerations.
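The distinction can be made concrete with a small sketch. All names here (ACL, PURPOSE_LIMITS, the access function) are illustrative, not from the book: the same access-control and audit-logging machinery answers the security question (is this user allowed in at all?) and the privacy question (is this authorized use consistent with an approved purpose?), and every decision is logged so later review is possible.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

# Security concern: which users may touch which resources at all.
ACL = {"alice": {"patient_records"}, "bob": {"billing"}}

# Privacy concern: which purposes an *authorized* access may serve.
PURPOSE_LIMITS = {"patient_records": {"treatment", "billing"}}

def access(user, resource, purpose):
    # Security check: reject unauthorized access outright.
    if resource not in ACL.get(user, set()):
        audit_log.info(f"DENY {user} -> {resource} (unauthorized)")
        return False
    # Privacy check: authorized users are still bound to approved purposes.
    if purpose not in PURPOSE_LIMITS.get(resource, set()):
        audit_log.info(f"DENY {user} -> {resource} (purpose: {purpose})")
        return False
    # Every decision is audited, enabling the monitoring discussed later.
    audit_log.info(f"ALLOW {user} -> {resource} (purpose: {purpose})")
    return True
```

Here `access("alice", "patient_records", "treatment")` is allowed, while the same user requesting the same record for "marketing" is denied on privacy grounds rather than security grounds.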
Question: What is the biggest misconception IT leaders have today when it comes to securing the private data in their systems?
Answer: Some of the finer points of data protection and privacy engineering are often left as afterthoughts to deploying information systems. IT leaders often assume that measures like content access restriction, user auditing, and data retention practices can be configured after a system has been stood up and is ready for launch. However, this approach can be deeply problematic when the system hasn’t been architected from the start to provide the necessary configurability and flexibility to implement these types of capabilities in a contextually appropriate way. Systems that are effective at enhancing privacy and anticipating and mitigating many of the sizable organizational risks that we see in the news every day (e.g., hacking, private information leakage, etc.) need to be built from the ground up with these considerations in mind, ideally as seamless extensions of core operating functionality.
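As a hypothetical illustration of building such a capability in from the start rather than bolting it on: a retention policy can live in the data layer itself, so every read or purge path respects it by construction. The category names and windows below are assumptions for the sketch.

```python
from datetime import datetime, timedelta, timezone

# Per-category retention windows (illustrative values, not prescriptive).
RETENTION = {
    "web_logs": timedelta(days=90),
    "billing": timedelta(days=365 * 7),
}

def purge_expired(records, category, now=None):
    """Drop records older than the category's retention window.

    Each record is a dict with a timezone-aware 'created' timestamp.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION[category]
    return [r for r in records if r["created"] >= cutoff]
```

Because the policy is a first-class configuration rather than an afterthought, changing a retention window is a one-line edit instead of a retrofit of the storage system.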
Q: For organizations that are behind in data privacy practices and working with old information systems, what are realistic first steps that can be taken to improve sensitive data handling practices?
A: Start by identifying the key risks to personal information, prioritizing those threats, and accounting for all of the related vulnerabilities. Then explore what it would take to sufficiently mitigate those data security risks. No system (old or new) can be made 100% risk-free, and mitigation will often be a discussion of tolerances, costs, and benefits. Knowing what the risks are – both in terms of technical facets and corresponding costs – will help inform the best path forward. Retrofitting information systems to become more privacy enhancing may prove to be a more costly venture than replacing the system altogether, but you won’t know until you’ve had an honest reckoning with what you’re up against.
Q: It seems like a huge challenge to keep up with rapidly evolving data privacy threats and technologies. Do you have advice on how IT professionals and teams can effectively do this?
A: Although the space of threats and attendant defensive technology is always changing, a well-architected system should be fairly stable in the face of these changes. While threats evolve with the discovery of new vulnerabilities, these new attack vectors are often addressed by vendors in a timely manner, and short-term workarounds are usually available to help secure the system until the vulnerability is patched. New technology is not something architects should be eager to adopt (“If it ain’t broke, don’t fix it”) unless it fixes a known shortcoming of the system; the vendor ecosystem is driven more by market forces (what CIOs and CISOs will buy) than by a true accounting of what it takes to build secure systems. This is why careful architectural choices are paramount: choosing the right security and privacy-protection architecture should make a system robust against any single point of failure, anticipated or otherwise. The entire point of the design exercise is to anticipate that failures can and will happen and to limit the scope of damage any failure can cause.
Narrowing the scope to privacy protections, the main worry is the behavior of authorized users. Authorized users tend to probe the bounds of what they are allowed to do rather than trying to crack the security of the system. Authorized users will, however, maliciously, negligently, or accidentally circumvent privacy controls and policies, and this can only truly be mitigated through active monitoring of the system (using both automation and human oversight and review) to quickly detect and respond to these events.
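A minimal sketch of what the automated half of that monitoring might look like, under assumed data shapes (the function name, event format, and threshold are all hypothetical): scan an audit trail for authorized users whose access volume is far outside their established baseline, and surface them for human review.

```python
from collections import Counter

def flag_anomalies(audit_events, baseline, threshold=3.0):
    """Flag authorized users whose access volume looks anomalous.

    audit_events: list of (user, resource) tuples that were already
        permitted by access control.
    baseline: expected number of accesses per user for the period.
    Returns a sorted list of users for a human reviewer; automation
    narrows the field, people make the call.
    """
    counts = Counter(user for user, _ in audit_events)
    return sorted(
        user for user, n in counts.items()
        if n > threshold * baseline.get(user, 1)
    )
```

A user who normally reads two records a day but suddenly reads ten would be flagged; a colleague within their baseline would not. The threshold and baseline model are deliberately simple here; real deployments would tune both.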