Sometimes life just doesn’t make sense. As humans, we are capable of creating and implementing so many remarkable things, and yet we often struggle to make the most basic connections. From the great pyramids of Egypt to the voyage of Apollo 11 to the theory of relativity, humanity has demonstrated the ability to craft innovations that go far beyond the obvious limitations of body and mind. We have proven, again and again, our capacity to engineer solutions for nearly every corner of daily life. But for some reason, organizations around the world are still struggling with many aspects of cybersecurity, from writing bug-free code to establishing effective regulations. True, the rules of cyberspace are different from those of the physical world, and yes, we are in the midst of an unprecedented digital revolution. And yes, the nodal nature of a lightning-fast network makes controlling its boundaries difficult. But still: can’t we make computers easier and safer to use?
VP and Principal Scientist at Comodo Cybersecurity, Dr. Phillip Hallam-Baker, believes we can. This morning, in his RSAC 2018 presentation, Why Did We Make Security So Hard?, he approaches the subject with startling simplicity: usability. “The only security application we can expect users to use is one that demands nothing from them.” An easier interface makes for better security, especially for people who don’t know the difference between a router and a firewall, so perhaps the solution isn’t so elusive after all. Provide people with clear, efficient, and intuitive systems, and they will handle them more appropriately as a result. Of course, educating users is always a smart decision, but the truth is that most of the population is far too busy completing their own online work to sit and ponder the security of their network, their email, or their social media. They have their own fish to fry, as it were.
As Hallam-Baker reminds us, “secure applications and their features usually don’t get used because they require the user to be thinking about security,” when what users want to be thinking about is buying a microwave on Amazon or meeting their boss’s deadline. So, while technological responses to cybersecurity are obviously critical, understanding the natural behavior of human users is equally valuable. Why should someone go through 17 different steps to enable S/MIME encryption (and click an extra button every time a message is sent) when the process could, in principle, be completed with far less effort? The point is, they shouldn’t have to. “We have to strip out all unnecessary steps in securing data and make encryption the default and not the exception.” Combining this simple approach with effective managed security services, like those implemented by Comodo Cybersecurity, is a sure-fire way to create systems that are both safe and highly usable.
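The “default, not the exception” principle is ultimately an API design choice: protection should be applied unless the caller explicitly opts out, never the reverse. The sketch below is illustrative only. `MailClient` is a hypothetical class, and its HMAC integrity check is a stand-in for real S/MIME machinery, not anything Hallam-Baker or Comodo ships.

```python
import hashlib
import hmac
import secrets

# Hypothetical messaging client illustrating "secure by default".
# An HMAC stands in for real S/MIME encryption, which would require
# certificate handling well beyond this sketch.
class MailClient:
    def __init__(self):
        # Key provisioning happens once, invisibly to the user.
        self._key = secrets.token_bytes(32)

    def send(self, message: str, protect: bool = True) -> dict:
        """Protection is on unless the caller explicitly opts out."""
        envelope = {"body": message}
        if protect:
            envelope["mac"] = hmac.new(
                self._key, message.encode(), hashlib.sha256
            ).hexdigest()
        return envelope

client = MailClient()
envelope = client.send("quarterly report attached")
# The user typed one line; integrity protection was applied automatically.
```

The user never sees the 17 steps; the single secure path is also the path of least resistance.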
As long as we continue to treat cybersecurity as just a technical problem, and not a design one, we will continue to fail. But if we honestly address the results of usability testing, which define the efficacy of our products, applications, websites, software packages, and devices, the goal of improved usability and security is fully attainable. By shifting our focus to the optimization of UI designs, workflows, and user understanding, we can learn more about how people and systems come together to achieve real progress. The data collected from usability labs can give engineers ideas for future innovation that answers the need for both stronger security and better human understanding. This effort includes identifying issues with products and paying attention to how users:
- Complete specific tasks
- Meet usability objectives
- Feel about the overall experience
- Complete tasks within a set time period
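Those observations reduce to simple metrics. A minimal sketch, assuming hypothetical session records from a usability lab (the field names and time budget are invented for illustration):

```python
# Hypothetical usability-lab session records; field names are assumptions.
sessions = [
    {"task": "enable encryption", "completed": True,  "seconds": 42},
    {"task": "enable encryption", "completed": False, "seconds": 180},
    {"task": "enable encryption", "completed": True,  "seconds": 95},
]

TIME_BUDGET = 120  # illustrative target completion time, in seconds

# How many users finished the task at all, and how many finished in time.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
on_time = [s for s in sessions if s["completed"] and s["seconds"] <= TIME_BUDGET]
on_time_rate = len(on_time) / len(sessions)
```

Tracking numbers like these across design iterations is what turns “pay attention to users” into evidence engineers can act on.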
The bottom line is, computers are smart and can do a lot of the heavy lifting for us. So why make usability hard on the user? Hallam-Baker tells us, “any instructions you can write for the user can be turned into code and executed by the machine,” which makes perfect sense. Handing the more complex actions over to the computer side of the exchange is a far better option than establishing unrealistic expectations for people who won’t (and often can’t) meet them.
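One concrete instance of that idea: a password policy written as instructions to the user (“use 16 characters, mix upper and lower case, include a digit”) can instead be executed by the machine. A minimal sketch using Python’s standard `secrets` module; the function name and exact policy are illustrative:

```python
import secrets
import string

# Instead of instructing the user to "pick a 16-character password with
# mixed case and a digit", the machine executes the rule itself.
def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until the policy the instruction described is satisfied.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

password = generate_password()
```

The instruction has become code: the user gets a compliant credential without ever reading the policy.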
The hypothesis fleshed out through Hallam-Baker’s presentation asserts that it is possible to solve any security usability issue by introducing an additional layer of PKI (public key infrastructure). This set of roles, policies, and procedures supports the distribution and identification of public encryption keys, enabling users and computers to exchange information securely over networks while verifying the identity of the other party. Without such a system, sensitive data could still be encrypted and shared, but neither side could confirm who it was actually talking to. Digital certificates sit at the heart of PKI because they affirm the identity of the certificate subject and bind that identity to a public key. As a solution layer, the Mathematical Mesh is a cloud repository for configuration data. Mesh tools pull configurations from the cloud and make devices run properly with no user effort. This improves security because it automates the administrative process, implementing protections with minimal compromise and error. As a security tool, strong end-to-end encryption works internally to enable stronger application management of email, web, and SSH.
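The certificate binding described above can be sketched in miniature. This is a toy model: a real CA signs with an asymmetric private key and verifiers check it with the CA’s public key, whereas here a secret-prefix hash stands in for the signature so the sketch stays dependency-free. All names (`issue_certificate`, `CA_SECRET`) are invented for illustration.

```python
import hashlib
import hmac
import json
import secrets

# Toy model of a PKI-style identity/key binding. NOT real cryptography:
# a secret-prefix hash stands in for the CA's asymmetric signature.
CA_SECRET = secrets.token_hex(32)  # stand-in for the CA's signing key

def issue_certificate(subject: str, public_key: str) -> dict:
    """Bind an identity to a public key, as a digital certificate does."""
    payload = json.dumps({"subject": subject, "public_key": public_key},
                         sort_keys=True)
    signature = hashlib.sha256((CA_SECRET + payload).encode()).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_certificate(cert: dict) -> bool:
    """Check that the identity/key binding is untampered."""
    expected = hashlib.sha256(
        (CA_SECRET + cert["payload"]).encode()).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

alice_cert = issue_certificate("alice@example.com", "alice-public-key")
```

Tampering with either the subject or the key breaks verification, which is precisely the property that lets a relying party trust the claimed identity.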
While this is not the only solution to the larger question of how to simplify usability, it shows just how feasible it would be to change the way we think about computers and how we interact with them. Yes, new technology will always be an essential part of our digital evolution; however, strange as it sounds, we will likely never reach the stars if we don’t also remember our own limitations.