Newsletter

05.27.2021 | 22 min read

Q&A: Heather Adkins, director of information security, Google

by Ryan Naraine

[ Transcript presented by Eclypsium ]

This is the transcript of a Security Conversations podcast interview with Google security leader Heather Adkins.   We discuss her role at the search advertising giant, the priorities around securing the software supply chain, expanding the concept of zero-trust and the future of modern desktop computing.  It has been edited for brevity and clarity.


Ryan Naraine:  What does the director of information security and privacy at Google do?  What are your primary responsibilities? 

Heather Adkins:  So, I have been at Google for a very long time and I’ve played just about every security role that you can possibly imagine, but I have settled quite nicely into this space of big-picture thinking, balanced with day-to-day decisions to actually keep the hackers out of Google’s systems.

I have a series of teams that work on detection and response, and that’s where my love in the security space is. So, I’m really happy I get to stay a little bit deep down, but at the same time, thinking about the big-picture strategy, how we educate people, how we bring security awareness everywhere.

When you look back at computer security over the last 20 years, have you seen the type of progress you expected we’d see by now?  It’s now 2021 and we’re in the midst of massive supply chain hacks, Exchange zero-days, ransomware everywhere. It feels like things are just really bad…

Yeah. I think it’s very tempting to feel that way, Ryan. The example I actually give is Cliff Stoll’s The Cuckoo’s Egg, which is almost the first post-mortem of a security incident that I ever read. And when I read that book today and then read about some of these cryptocurrency compromises and cloud instances, it’s all the same stuff. It’s like the same post-mortem over and over again. So, I think it is really tempting to think of it in that way.

But I do think we’ve made significant progress. Take a look at two-factor authentication as an example. When I started in the industry, the standard was, like, a one-time token. And even then, we barely had hardware tokens, and people hated to use them because they rotated the number every X seconds. You had to wait, and the usability of it was really terrible. 
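
For readers unfamiliar with the rotating tokens she describes, the sketch below shows roughly how a time-based one-time password (TOTP) produces a number that changes every X seconds. It is an illustration only, using the Python standard library; the secret is a made-up example, not anything tied to Google.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code (RFC 6238 style)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # the code rotates every `period` seconds
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret; prints a 6-digit code that changes every 30 seconds
```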

Today, we have security keys. On your phone, you have facial recognition and fingerprint readers. The usability of these things has made them more adoptable. And I think that, especially in the last 10 years, we’ve seen some evolutions in that space that are going to be drastically different for consumers, for billions of people. 

While we are seeing the same kinds of compromises — people not setting strong passwords, not patching their systems — I think we have all of the things in place to make this better. If you are a new company starting up, you can be a cloud native company and by default, you don’t have this on-prem stuff to take care of. You don’t have to worry about Exchange. You can use one of the big mail providers. You have choices that didn’t exist, say 20 years ago. 

I think if you’re making the right choices, technology-wise, then you’re in a much better situation. If you’re making bad choices for yourself, you’re going to face problems.

I think it’s better [than it was 10 years ago].  I am just sad that we’re not further along. We are still talking and thinking about passwords and what makes a good, complex password.  But I’m encouraged that my bank turns on two-factor authentication, or that, as we are now doing by default in Gmail, we turn on two-factor authentication for people.

There’s still too much friction around implementing 2FA/MFA for consumers?  Google does it one way, my bank might do it a different way, it’s a nightmare for consumers to understand and implement. 

I think that’s right. And I think that we as solutions providers understand that. I think what you’re seeing is a phase of technology where there’s a lot of diversity of solutions and a lot of experimenting.  Look at how cars existed in the 1920s: there was no single ignition system. Some of them had cranks, right? Some of them had different kinds of mechanisms, and then eventually the solution space converged on what worked best for consumers.

And now we all push a button to turn on the car, right? We’ll see the same thing with two-factor authentication solutions and others over time, once we figure out what the right thing looks like and what the best path is for users. We will figure this one out.

Are you still surprised that in 2021 enterprises are still almost universally using VPNs? Is this one of the things you expected would have been dead by now?

I am not surprised, but I am encouraged by what we’ve seen over the past year. I think the pandemic has really opened our eyes. Once you take a large workforce and you have them working from home, you start to see the fragility of these heavyweight solutions that actually don’t provide any real security at all.

In some cases, they introduce more risk and expand attack surfaces.

Yes, exactly. It’s great for transport security, for stopping man-in-the-middle sniffing of traffic, but I think that extending the corporate network into thousands of people’s houses is probably not the wisest of choices.

I’ve heard some CISOs at large organizations talk about the difficulties of this ‘digital transformation’ forced by the pandemic.  And they say that fully implementing zero trust, backporting zero trust into a traditional network, is incredibly difficult.  There’s real friction.

Yeah, I think it’s a fair thing. If you’re working in a traditional company, maybe it’s been around for 10-plus years, you’re going to have a long journey. I think if you’re a new company, you’re just setting up for the first time, you’re going to be able to get it by default, but these transformations take time.

It took Google time. It took Google 10 years. We were inventing a lot of stuff at the beginning that didn’t exist yet. That stuff now exists. Companies don’t have to go through that for themselves. But if you are, for example, in the process of moving your on-prem applications into the cloud, that will take some time.

And that might be a prerequisite to some kind of proxy-based BeyondCorp zero-trust solution, but you’re going to need to make those transitions anyway to stay competitive in your field.
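
As a rough illustration of the proxy-based model she mentions (a sketch, not Google’s actual BeyondCorp implementation), the snippet below shows the kind of per-request decision an identity-aware proxy makes: user identity, device state and session context are checked on every request instead of trusting the network location. All names and policies here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    user_groups: set[str]
    device_managed: bool      # device is enrolled in inventory and policy-compliant
    mfa_verified: bool        # e.g., a security-key assertion on this session
    app: str

# Hypothetical per-application policy: which groups may reach which app,
# and whether a managed device is required.
APP_POLICY = {
    "payroll": {"groups": {"hr"}, "managed_device": True},
    "wiki":    {"groups": {"employees"}, "managed_device": False},
}

def authorize(req: Request) -> bool:
    """Forward the request only if user, device and session context satisfy policy."""
    policy = APP_POLICY.get(req.app)
    if policy is None:
        return False                                   # unknown app: deny by default
    if not req.mfa_verified:
        return False                                   # strong authentication required
    if policy["managed_device"] and not req.device_managed:
        return False                                   # untrusted device: no access
    return bool(policy["groups"] & req.user_groups)    # group membership decides the rest

# Example: an HR user on a managed, MFA-verified session reaches payroll.
print(authorize(Request("alice", {"hr", "employees"}, True, True, "payroll")))   # True
print(authorize(Request("alice", {"hr", "employees"}, False, True, "payroll")))  # False
```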

I’m excited about the transformations because we can get a security uplift at the same time. The reality is that banking, health, oil pipelines, all of these folks are doing an IT transformation because it’s what their business needs.  So, they’re going to do it anyway. You might as well get the security uplift at the same time. It might be a long journey, but if your company is going to be around for 50 or 100 years, you’ve got to make that investment at some point.

You talk in the book about building security principles into the foundation of the organization at the very beginning, playing the long game.  For some folks, it feels overwhelming when their infrastructure looks like duct-taped and bubble-wrapped pieces of code and things everywhere.  Is it ever too late to go back to the beginning? 

When we talk about the foundation, the foundation is culture, not necessarily technologies. 

If there is a culture for transformation, if your organization wants to make the investment and understands that it’s long-term, understands that it’s going to need to make tough decisions, this is usually a leadership question of sorts, right? That’s the foundation. 

Because yes, you’re going to need to spend money.  Yes, you’re going to have conversations with your staff about saying, “No, you can’t just log in over the VPN with a password anymore. You need one of our computers. We need to verify it. You need a security key.”  There are conversations with the staff, and that has to be part of this journey.

That’s actually the foundation. And I would say to people, if your company doesn’t have that, it will never have an appetite for a ten-year transformation, a five-year transformation. 

So it starts at the very top, even up at the level of the board of directors: this need for a long-term commitment to setting this up as a foundational thing.

Yes, especially if you are in a regulated industry, where you’re going to need an appetite for a little bit of risk and a little bit of change. These tend to be risk-averse areas because you’re navigating complicated stakeholders in your ecosystem. 

Are these conversations getting easier with your leadership? Is it easier to get security resources today than it was, say, three years ago?

At Google, I’ve never had any trouble with that (laughs).  Again, this is company culture. Are we the kind of company that responds to headlines by just saying, ‘don’t ever let this happen,’ or are we the kind of company that says, ‘oh, we need to transform to be better here’? I think this ties in really closely to the new [Biden/U.S. government] executive order that talks about having this kind of NTSB-style board to help with transparency.

We are seeing the value of transparency and the headlines in terms of moving the conversation along at the board level, the CEO level, and just general awareness that this isn’t a hypothetical thing we hear about in the movies. This is actually something that happens every day and to all kinds of organizations.

So I think it’s helpful, as long as we know how to frame it as leaders, right? That it doesn’t just become a scary monster, but something that we can get traction on and improve over time. 

The modern CISO is so busy putting out never-ending fires — ransomware, spear-phishing, passwords, people clicking on malicious links — that there’s no time to take a step back and restart anything…

Yep, that’s fair. We have this problem too. At Google, we have thousands of employees worldwide who click on links. Browsing the web is what we do at Google (laughs).  

But, sometimes, we over-fixate, right? Yes, we should try to make those things as safe as possible, but ask yourself what happens after that. Is malware getting installed on the machine? Why can that happen? If the attacker gets control of the machine, can they move laterally to other machines in the network? Why are you allowing that to happen? If that particular employee has access to sensitive data, can the attacker immediately get it on their machine, or have you put up more roadblocks that would actually stop them, right?

Let’s say someone clicks on a bad link: can you contain them long enough for you to get to it? And then how quickly can you rebuild that system?  What I see happen, especially in this remote world, is that a machine gets compromised and it takes weeks to replace it because you don’t have the ability to reinstall it remotely, or there’s lots of sensitive data on it…

In reality, at Google, we try to be able to rebuild any machine in a couple of hours. Even if somebody clicked on a link and bypassed all the malware protections, the attackers are trapped there, and we can just reinstall that system immediately. This is also very helpful with ransomware because you just get rid of the machine and move on with life.

Chrome OS, for example, is designed with all this in mind. If you get compromised, just reset it and move on. So yes, people are going to click on links, but you should also be thinking about the entire kill chain and making sure you’re disrupting all of that as well.  Then you don’t have to worry as much about people clicking on things. 

Yeah, but this requires that the foundation is in place with all the right processes, the culture, the right mix of people and tools to get it right…

Yes, but it also depends on the solutions you choose and how you opt to implement them. Let’s just take a traditional Microsoft implementation, Active Directory, where you set up a bunch of laptops, right?  The default for a very long time was to just have everything trust everything in the environment. But if you were a trained Windows SRE, you would set that up differently, such that those two or three, or 1,000, laptops don’t trust each other. 
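
As a loose sketch of what “laptops that don’t trust each other” means in practice (an illustration, not a Google or Microsoft tool), the script below probes whether peer workstations accept connections on ports commonly used for lateral movement. The host list and ports are assumptions for the example.

```python
import socket

PEER_WORKSTATIONS = ["10.0.1.21", "10.0.1.22", "10.0.1.23"]   # hypothetical peer laptops
LATERAL_PORTS = {445: "SMB", 3389: "RDP"}                      # common lateral-movement services

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in PEER_WORKSTATIONS:
    for port, name in LATERAL_PORTS.items():
        if is_open(host, port):
            # In a host-isolated setup, ordinary workstations should not
            # expose these services to one another.
            print(f"{host} exposes {name} (port {port}) to peer workstations")
```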

Can you linger here on the SRE? It feels like the security specialist is disappearing in favor of these newer disciplines.  What’s an SRE?

The Site Reliability Engineer is a concept that we developed over a decade. This idea that our systems were so advanced that we not only needed systems administrators, but we needed developers who understood the stack more deeply. 

So they’re not just installing the software that the developers build and watching it run and keeping it up there, they’re actively involved every day in fixing bugs and code, finding ways to optimize deployments. They’re doing a development job. It’s that other piece of the development life cycle, which is not just writing the code, but making sure it’s maintainable and reliable as well.

And you have your processes in place to make sure that when code gets checked in, it goes through the mandatory security checks, and that all gets automated…

Yes, you have to automate it, especially in a big software engineering shop. But even if you are a 10- or 20-person company making a single software product, it’s really valuable because you don’t have a lot of staff.  If we can build things like auto-fuzzing and static analysis into these cloud-native pipelines, these kinds of checks can really assist the developer by default.
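
A minimal sketch of that kind of automated pre-merge gate follows. The specific tools and the fuzz harness path are assumptions for illustration, not a description of Google’s pipeline: the idea is simply that static analysis and a short, time-boxed fuzz run happen on every change, and a failure blocks the merge.

```python
import subprocess
import sys

CHECKS = [
    # Static analysis over the source tree (bandit is one common Python-world choice).
    ["bandit", "-q", "-r", "src/"],
    # A short, time-boxed run of a hypothetical fuzz harness.
    ["python", "fuzz/fuzz_parser.py", "--max-seconds", "60"],
]

def main() -> int:
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("check failed:", " ".join(cmd))
            return 1            # a non-zero exit fails the CI job and blocks the merge
    return 0

if __name__ == "__main__":
    sys.exit(main())
```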

Do you think the need for security specialists will disappear altogether?

It depends a little bit on the topic. There will always be a need for humans and specialists who understand security. We will always have cryptographers. Nobody should be rolling their own crypto; we should leave that to the experts. We emphasize a lot in the book that specialists are great for inventing the kinds of things you want to happen. Auto-fuzzing, for example, is a very special, unique thing. And for people doing research on CPU side-channel bugs, you’re always going to have a need for specialists.

But what we want to do is get the specialists out of the business of doing things that can be automated, and things that should be done by default, right? They should be building templating systems, not auditing code line by line for cross-site scripting bugs.
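
A tiny example of the templating point, using only the Python standard library: when the rendering layer escapes by default, the cross-site scripting bug never exists, so there is nothing for a specialist to audit line by line.

```python
import html

user_input = '<script>alert("xss")</script>'

# Hand-built concatenation: every call site like this is a potential XSS bug.
unsafe = "<p>Hello, " + user_input + "</p>"

# An escape-by-default rendering helper: injection is neutralized centrally.
def render_greeting(name: str) -> str:
    return "<p>Hello, {}</p>".format(html.escape(name))

print(unsafe)                        # the script tag survives intact
print(render_greeting(user_input))   # &lt;script&gt;... rendered as harmless text
```

Auto-escaping template engines apply the same idea across an entire codebase by default.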

Is good security even affordable?  Can we buy our way to being secure?  

I think where we are and where we want to be are two very different places.  Today, the reality is a lot of companies are forced to go to the market and buy solutions that they’re told do a thing, but don’t actually do a thing.

Whereas, where I think we’re going is that more and more of those needs are built into the platforms by default, probably with some continued optionality, right, and choices of what kinds of solutions you plug in. 

But it shouldn’t be something you have to think about. When you buy a car, you don’t pick which seatbelt you’re going to get. The seatbelt just comes in the car. It’s standard. It works the same in every car, pretty much.

That brings up another friction point, where the security of the product is itself profitable. We are at a place where vendors sell you a product, then upsell you on the tools to secure that product.  Your seatbelt example is perfect and it goes back to paying the security tax. We don’t pay extra for the seatbelt. There’s a website called sso.tax that actually documents this, where there’s an extra cost for implementing MFA…

It comes back to this idea that we’ve not converged on the right solutions for everything yet, but if you think about it, a lot of the banks are doing MFA.

Now, Google is doing 2FA by default and that’s part of the product. You’re not paying extra to get it, because I think, to some extent, we’ve largely converged on the solution, and so we understand there are no R&D costs anymore. But I think when we look at things like the EDR market…

Are you bullish on that as part of this long game? That security programs should invest in EDR and this world of log all the things, security analytics, push everything into dashboards…

I think it depends on what configuration it takes. I’ve got strong opinions; that’s probably a whole different podcast (laughs).  I think EDR is an interesting example because there’s so much experimentation still going on. You actually do have quite a bit of instrumentation by default from the vendors; you’re already getting those signals and telemetry. Apple has XProtect, Microsoft has Defender. What you’re seeing in the EDR market is that those solutions are still not sufficient, and so you’re seeing lots of experimentation in the field over time. Some of that will converge and end up in the platforms by default.

Google is taking a different approach with Chrome OS, which is to remove a bunch of the attack surface and make a lot of that irrelevant. I like that approach because you’re not going to have tons and tons of malware problems on the Chrome OS device because we’ve just eliminated the class of problem. 

But as long as you’re still handing users a supercomputer that can do anything from databases to running video games, you’re gonna have this gap of trying to catch malware and bad stuff after the fact.

But the future is these ‘supercomputers’, where everyday machines are capable of doing all those things.  Do you think the future is the Chrome OS model?

The most popular devices are mobile phones and those are not supercomputers. They have smaller attack surfaces and are purpose-built.  When we think about the 7 billion people on the planet, most people are going to have those devices, not supercomputers. 

Let’s also discuss the special-purpose stuff.  The device that runs the IV pump in the hospital does not need a supercomputer operating system.  It is a special device that does one thing, so why does it need to be able to run cryptocurrency miners, for example?

Google is seen very much as an industry pioneer, driving a lot of change throughout the industry, from TAG to Project Zero and their disclosure policies, all the way down. When you look back, is there a specific thing out of Google Security you’re most proud of? 

Yes, that we’ve been able to give back: all the things we talked about earlier, whether it’s two-factor authentication, zero trust, BeyondCorp, BeyondProd, Safe Browsing.  These are things we built for ourselves, but we also built them to give back and, in many cases, you can use these solutions every day without needing to pay for them.

All of our open-source contributions, our ability to innovate and give that back for free, contributes to a healthy culture within the community and I’m really proud of how we’ve shown up in that way. 

Is there a specific thing that you wish we had fixed by now? What’s that one problem statement you think we should have solved by now?

Open-source security. The security of our open-source supply chain remains problematic.  It’s really incredible how the democratization of coding means that anyone can create open-source projects, contribute to the community, and make a positive difference.  But, it comes with a major downside.

There’s a really great example from a little while ago, a project called dnsmasq, which is built into almost everything you use; it’s invisible, it’s underneath, and you don’t ever really see it. It’s running in satellites and stuff in space. And if that code has a bug, we have to patch the whole world. How does that work? We need to get a handle on all this. 

Is that even fixable? What would something like securing the open-source software supply chain even look like?

As a community, we are really good at adopting best practices and building a culture of things that make sense. If it’s the right thing to secure our development machines in a certain way, developers are going to take on that responsibility.

It’s part of this idea of contributing and giving back. I think the question here is what standards do we want to converge on?  There are a few very passionate people in this space. I’ll just mention one effort that Google’s participating in, the OpenSSF, and the idea is that we give every open-source project a framework of things: ‘Here’s what you need to do. And if you’re an OpenSSL or an Apache, regardless of which license you’re using, if you want to participate in a healthy ecosystem, here’s what we need to do as a community.’

This is a starting point. We’re going to be talking about this for a very, very long time but if we build that, then every new developer who comes in, whether they’re 15 or 50 and are doing it for the first time, when you set up your open-source project, here’s what you get by default, here’s the framework you need to work in by default…

It’s going to be a journey and will take a lot of time to get there.  You mentioned Dan Kaminsky earlier; he was really passionate about DNSSEC, right? That’s a whole journey that people are still working through today.  These transformations take time.  IPv6 is another example of the community converging on a new protocol, adopting it and starting to use it. 

With SolarWinds, Codecov and the heightened interest in securing the software supply chain, does it feel like we’re at a major inflection point?  It seems like software supply chain security is on the front burner for everyone these days…

When you’ve been doing it for 20 years, there are lots of inflection points. And if there are all those inflection points, maybe there are no inflection points. 

But I do see these incidents and headlines as catalysts and we should take every opportunity we get to raise the visibility on the issues and keep the conversation moving forward. I don’t mind when these come out and then all the vendors get excited and the governments get excited because it’s them opening the door for important conversations. 

We should walk through those doors and we should lead with our messaging and lead with our ideas and help them get to where we’re going.  But these are long journeys. There’s not a one-, two-year project to fix this. This is 10 to 20 years of investment in human societies and we have to be willing to make those long walks. 

The cybersecurity marketplace can be noisy and confusing.  From your perch, where do you see security innovation actually happening?  What excites you about the future?

Philosophically, I like any solution that turns the problem upside-down and completely eliminates classes of vulnerabilities. I like platforms like Chrome OS.  I like the isolation technologies, certainly on mobile platforms. One thing I will say is that we don’t talk enough about data and the role that data plays, not only in security, but in other parts of the decision-making process. 

Can you expand on that?  I’m starting to hear smart folks talk about data firewalls and extending zero-trust all the way down to the data.

Yeah. So I was thinking about data more in the sense of data logging and using data analytics, machine learning and computing to assist us in making decisions.

But let me pivot for a moment, because I think that when we talk about data security, there are lots of solutions thrown out. ‘How do I trust my cloud provider with my data? Maybe I’ll just encrypt all my data inside.’

More importantly, there are things that we can be doing as data custodians. If you have your Gmail with Google, you should know any time a human at Google goes and looks at it. So we actually offer this through our cloud programs. Access Transparency is what we call it.

I also think about zero trust from an SRE perspective: if I’m going to access data, maybe I need a peer review for that. Maybe I need to do multi-factor authentication. Maybe I need to write down exactly why I’m going to do that. And maybe someone’s going to review that, and this has benefits for reliability and security because it prevents people from making mistakes; that second party is like, ‘hey, did you realize you had a typo?’ But it also means that if an attacker takes your account, the barriers are much higher for them to steal the data using your access privileges.

So I think we can extend zero trust all the way down to the data layer as well. But I think we should also be using the metadata around all of those activities to make better decisions.
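
A minimal sketch of that data-access pattern, with hypothetical names and structures: access to sensitive data requires a recorded justification, fresh multi-factor authentication and a second person’s approval, and every decision is logged so the metadata can feed later analysis.

```python
import time

ACCESS_LOG = []   # in practice, an append-only audit log, not an in-memory list

def request_data_access(user: str, dataset: str, justification: str,
                        mfa_verified: bool, approver: str | None) -> bool:
    """Grant access only when a justification, MFA and peer approval are all present."""
    allowed = (bool(justification.strip())
               and mfa_verified
               and approver not in (None, user))      # a second, distinct person must approve
    ACCESS_LOG.append({
        "time": time.time(), "user": user, "dataset": dataset,
        "justification": justification, "approver": approver, "allowed": allowed,
    })
    return allowed

# Denied without a second-party approval, granted with one.
print(request_data_access("alice", "payments-db", "debugging ticket 1234",
                          mfa_verified=True, approver=None))    # False
print(request_data_access("alice", "payments-db", "debugging ticket 1234",
                          mfa_verified=True, approver="bob"))   # True
```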

Right, and you do “AI and machine learning” at a real level, where it’s not just marketing buzzwords…

Yep. And it’s an assistant. I see people saying they’re going to use machine learning for detecting malware, but actually where I see machine learning working best in that environment is looking for patterns of anomalies that help the analyst figure out what’s going on, rather than trying to be a true detective. 
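
One way to read “machine learning as an assistant” in code, as a sketch rather than anyone’s production system: an unsupervised model such as scikit-learn’s IsolationForest scores how unusual each session looks, and the analyst triages the strangest ones first instead of receiving a malware verdict. The features and data below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins_per_hour, MB_downloaded, distinct_hosts_contacted]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5, 50, 3], scale=[1, 10, 1], size=(500, 3))
suspicious = np.array([[40, 900, 60]])          # a bursty, high-volume session
sessions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
scores = model.score_samples(sessions)           # lower score = more anomalous

# Hand the analyst the handful of most anomalous sessions instead of a verdict.
most_anomalous = np.argsort(scores)[:5]
print("sessions to triage first:", most_anomalous)
```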
