October 10, 2022
5 min. read

Automation, awareness, and design thinking for improved cybersecurity (and fantasy football rosters)

In the CISO Phish Bowl, 12 highly capable human beings had to choose whether to draft a fantasy football team on their own, let a draft robot do it for them, or opt for a combination of both. Each week, the same 12 humans must choose which players to start and which to bench; we can do that ourselves, or defer to the robot and its player performance projections. There have been mixed results for each approach, but Manager of the Week Bill Bonney just smoked League Commissioner Eliot Baker in a triumph of machine over man-powered decision-making. So how do you know when to hand the clipboard over to the robots? This got Bill thinking about cybersecurity: how do you get the most out of automation and AI to stay secure? Where and when do you start? What’s the sweet spot of human-machine collaboration?



The question is: “How can automation help protect users?”

The answer should involve everyone who creates, implements, and uses digital products and services. What I’m talking about is integrating security into the design thinking behind digital applications. And cybersecurity awareness can serve as the glue that bonds it all together, as I’ll explain.

Security was an afterthought in the digital revolution. Hence, cybercrime has flourished in a landscape where everything is built on a security-flawed foundation. The objective should thus shift toward making applications more secure by default.

When people use an application, they should be automatically more secure. That can sound like a platitude, but it’s actually pretty straightforward: applications should come out of the box with secure, privacy-enhancing default configurations.

That can manifest in various ways. It can mean security and privacy features initially ship in their most secure settings. It might mean unneeded ports are locked down. Or it could involve a host of established secure-development practices that produce safer products and enhance privacy.
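To make that concrete, here’s a minimal sketch in Python of what secure-by-default configuration can look like. The setting names and values are hypothetical, not drawn from any particular product; the point is simply that every default is the most restrictive option, so the user has to make a deliberate choice to loosen anything.

```python
from dataclasses import dataclass

# Hypothetical application settings. Every default below is the most
# restrictive, privacy-preserving option; the user must make a deliberate
# choice to loosen anything.
@dataclass
class AppSettings:
    require_mfa: bool = True            # strong authentication on by default
    session_timeout_minutes: int = 15   # short sessions unless extended
    telemetry_sharing: bool = False     # no data sharing until opted in
    public_file_links: bool = False     # file sharing is private by default
    open_ports: tuple = (443,)          # only the port the app needs

# Out of the box, this object already represents the secure configuration.
settings = AppSettings()
```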

While these are surface attributes, they illustrate the point. There are many ways apps should function more securely, and design thinking, in the sense I’m using it here, mostly concerns how humans interact with the app – but the concept applies deep within the technology.

Think: Security by Design.

Now let’s return to the human/app interaction. It seems like there are at least two obstacles to the objective of secure configuration by default. The first is that security is often a bolt-on in product design. Products are traditionally designed and then secured, rather than securely designed. 

The second obstacle is that default settings are often either fully open or broadly open under the assumption that usability is essential, at least when the user begins interacting with the product. So, in a very real sense, security is not automatic - it takes manual effort AND specialized knowledge for the user to secure many of the applications they use. 

The first obstacle tends to make the secure configuration much less useful, or even unpalatable, because the product’s usability was never tested in a fully secure mode. By usability, I mean in the “does the product delight the user?” sense, not the “does the feature still work?” sense. That’s where design thinking comes into play.

We need to add security criteria to the design phase and use design thinking to identify and overcome those usability issues before the product exits design. That way, the fully secure configuration doesn’t feel clunky, restricted, or deficient. The question becomes: “Does the product fully delight the user while being truly secure and protective of the user’s privacy?”

In a sense, this is automation. Without thought or manual effort, the product is automatically secure.

The second obstacle is very likely a first-order derivative of the first obstacle. If the product is clunky in secure mode, it’s not as likely to be shipped (delivered, installed, deployed) in a secure mode.

It is tempting to dismiss this concern as consumer-centric rather than end-user-centric, where an end-user in this context is someone using an application provided by their company; fortunately, many of those applications come much more locked down than the typical consumer product. While that’s often true, these obstacles still apply for two reasons.

One reason is that corporate apps don’t actually always come locked down, as any CISO who has ever had to lock down a SaaS or shadow IT product post-deployment will tell you. The second reason is that, even if locked down in advance, the lack of design thinking can result in a clunky, deficient product that does not delight the user. And while the consumer may simply not want to sacrifice their user experience, or may lack the knowledge (and possibly the motivation) to make the application more secure, the end-user actively seeks workarounds that make their job easier and their interactions with the product less secure. Unfortunately, many of these workarounds bypass the app entirely in favor of side-channel data manipulation, which also bypasses the approved (secure? compliant?) in-app business logic.

Using awareness for user feedback 

Awareness training can help solve this problem by providing a design feedback loop after deployment. As good as we might become at designing delight and security into new products, we’re going to get it wrong sometimes, especially at first. We also have a ton of poorly designed (from a security point of view) products already in use. Thus, feedback is essential: the firm conducting the training, the firm administering the training, and the firm building the products (which might not be related in any way) all must be in on this feedback and truly take it in and act on it. This is a radical change from existing feedback loops, where the input solicited is so often about how to improve the training itself. Feedback on how to make the business process better, and how to make the apps used in that business process better, is an opportunity we aren’t tapping. This type of feedback is a staple of post-mortem reviews of incidents; it should be a staple of cyber awareness training too.

Where AI can help is twofold, at opposite ends of the process. First, AI can automate the testing of more secure designs to uncover things we missed or new problems we created. Second, AI can help detect post-deployment problems by scenario-testing settings as they are adjusted and providing real-time feedback to the consumer or end-user on the consequences of what they are about to do. A warning like “74% of users who set this setting this way experienced a data loss within 90 days – are you sure you want to continue?” might just stop a mistake here and there.
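As a rough sketch of that second idea, here is what such a check might look like in Python. The incident rates, threshold, and function names are all hypothetical and purely illustrative; a real system would derive these figures from telemetry rather than a hard-coded table.

```python
# Hypothetical sketch: warn a user before a risky settings change, based on
# historical outcome data for users who made the same change. All figures
# here are illustrative, not real statistics.
INCIDENT_RATES = {
    # (setting, new_value) -> share of users who experienced a data loss
    # within 90 days of making this change.
    ("public_file_links", True): 0.74,
    ("require_mfa", False): 0.61,
}

RISK_THRESHOLD = 0.25  # assumed cutoff for surfacing a warning


def confirm_change(setting: str, new_value) -> bool:
    """Return True if the change should proceed (low-risk or user-confirmed)."""
    rate = INCIDENT_RATES.get((setting, new_value), 0.0)
    if rate >= RISK_THRESHOLD:
        answer = input(
            f"{rate:.0%} of users who set {setting}={new_value} experienced "
            "a data loss within 90 days. Are you sure you want to continue? [y/N] "
        )
        return answer.strip().lower() == "y"
    return True  # low-risk change; apply without interruption


if confirm_change("public_file_links", True):
    print("Setting applied.")
else:
    print("Change cancelled.")
```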

So what choice did I make? I let the bot draft my team, and I am 4-1. The one game I lost…that would be when I made my first manual roster move. The power of automation is on display. 

Screenshot of fantasy football stats