Product Updates

Tailor interventions for a secure and exceptional user experience

July 23, 2024

Last week, we launched our data protection capabilities, which included the ability to send “nudges” to steer employees towards more secure use of generative AI. 

Over the past week, we’ve been working to make our intervention modal even more powerful and customizable, creating an incredible end-user experience that promotes secure behavior. 

This blog provides an inside look at the four core drivers behind successfully changing user behavior without preventing users from getting their jobs done.

1) Train the users inline

There’s a slew of security awareness tools, and security training has progressed massively in the last few years. 

But the reality is that most of this training happens after the fact, often as follow-up security training modules. The most effective coaching is in the moment, and it must be immediate. If the user has to wait more than 1 second, it's too long.

Sure, you might want to take action after this, such as for repeat offenders, but this can be achieved separately with our webhooks (read more on our Tines integration options here!). 

Inline end-user training is critical to the success of the following steps.
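As a rough sketch of what a webhook consumer for repeat offenders might look like (the event fields, action names, and threshold here are hypothetical illustrations, not Harmonic's actual webhook schema):

```python
from collections import Counter

# Hypothetical event shape for illustration; the real webhook
# payload delivered to Tines (or any other consumer) may differ.
EVENTS = [
    {"user": "alice@example.com", "action": "nudge_shown",   "app": "chatgpt.com"},
    {"user": "alice@example.com", "action": "nudge_ignored", "app": "chatgpt.com"},
    {"user": "bob@example.com",   "action": "nudge_shown",   "app": "gemini.google.com"},
    {"user": "alice@example.com", "action": "nudge_ignored", "app": "chatgpt.com"},
]

def repeat_offenders(events, threshold=2):
    """Return users who ignored a nudge at least `threshold` times."""
    ignores = Counter(e["user"] for e in events if e["action"] == "nudge_ignored")
    return sorted(user for user, count in ignores.items() if count >= threshold)

print(repeat_offenders(EVENTS))  # ['alice@example.com']
```

A consumer like this could then open a ticket or enroll the user in additional training, keeping the follow-up workflow separate from the inline nudge itself.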

2) Give users enough context

Don’t just tell users they can’t do something without explaining why. Can you imagine trying to do your job and just being told “no” without any context? It’s extremely frustrating and only contributes to the opinion that security is a blocker, rather than an enabler.

Instead, when a user is attempting to upload sensitive content, we want to help guide them to understand what is sensitive. In this modal, we not only tell them what we detect, but the user can click to dive deeper and have the issue described in plain English. 

Not only is this a better experience for the end user, but it will also save you (the security team) time manually reaching out to users. 

3) Enable users to do their job

The increasing adoption of generative AI – especially tools that train on user data – can cause a headache for security teams, as there is no easy way to understand whether sensitive data is being shared with these LLMs. However, we want to assume good intent. The vast majority of employees have good intentions and are simply trying to do their job.

To this end, you can choose whether to allow the user to ignore the notification or redirect to an approved tool. In the example below, we show the flow for moving from ChatGPT to Microsoft Copilot. Throughout all of this, you’ll see that we’re removing as many friction points as possible.
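In pseudocode, the ignore-or-redirect choice amounts to a small policy lookup. The app names and policy fields below are hypothetical placeholders, not Harmonic's actual configuration:

```python
# Hypothetical policy: which generative AI apps are approved, and where
# to redirect users who reach an unapproved one. Illustrative only.
POLICY = {
    "approved_apps": {"copilot.microsoft.com"},
    "redirects": {"chatgpt.com": "copilot.microsoft.com"},
    "allow_ignore": True,  # let the user dismiss the nudge and proceed
}

def intervene(app, policy=POLICY):
    """Decide what the intervention modal should offer for `app`."""
    if app in policy["approved_apps"]:
        return {"action": "allow"}
    offer = {"action": "nudge", "can_ignore": policy["allow_ignore"]}
    if app in policy["redirects"]:
        offer["redirect_to"] = policy["redirects"][app]
    return offer

print(intervene("chatgpt.com"))
# {'action': 'nudge', 'can_ignore': True, 'redirect_to': 'copilot.microsoft.com'}
```

Keeping the decision in a declarative policy like this is what lets the same modal either allow a dismissal or steer the user to the approved tool, without hard-coding either behavior.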

4) Make it look awesome

Let’s face it, most security tools look terrible when they interact with the end user. To change user behavior, consider adopting a modern design that meets user expectations.

With our new customization options, we give you easy ways to make it look great to the end user. Choose your colors, text, logo, and even link to your company policies.


See Harmonic In Action

At Harmonic, we’re serious about building for security teams, but also for their users. We aim to create security products that are fresh, modern, and unique. Above all, we want to help you adopt generative AI and guide end users toward secure practices.

Take 60 seconds out of your day to watch the video below.

Not a customer but interested in learning more? Request a demo with our team now: https://www.harmonic.security/get-demo


Concerned about the data privacy implications of generative AI? You're not alone. Get in touch to learn more about Harmonic Security's approach.
Madeline Miller