What is our primary use case?
We use it mostly to look for secrets in our repositories so we can inform the developers not to do that.
How has it helped my organization?
The recommendation is always to get this out of your code. One of the things that they added over the year was the ability to reach out to the developer directly to get feedback. This helps us know whether the developer is aware of it or whether it is actually not a secret, so we don't have to break out of the app, then go into Slack and ask.
We consider all secrets in the source code a Priority 1. We expect every developer to remediate them as soon as they are notified. We don't have a ranking of what is important. We consider them all Priority 1, getting them done first.
It definitely helps us catch these secrets earlier, instead of after they have made it into production.
With the new feedback system, it has definitely improved our lives.
When my security team gets alarms and we can't immediately tell that something is a false positive (for example, because it is in the test directory), we sometimes have questions about whether it is a real secret. We then need to work with the developers to find out what the secret can actually do. The security team can immediately reach out to the developer and get feedback via email in a portal, where the developer can see what we see and put comments on it, which has drastically improved our lives. We are a worldwide company, so we have engineers in a dozen countries. Sometimes, the engineer who made the bad commit isn't even awake, so sending a Slack message doesn't get a response. This is more pressing, so it helps us.
Every engineer has to use it. As we grow, obviously more engineers will be using it. We will probably be at about 100 engineers by this time next year. I don't think that they have any other features or things that we would grow into on the internal side.
What is most valuable?
The scanning on pull requests has been the most useful feature. When someone checks in code and they are waiting for another engineer to approve that code, they have a tool that scans it for secrets. There are three places where engineers could realize that they are about to do something dangerous:
- On their own machine. They have to set up tools on their machine to do that, and a lot of the time, they are not going to do that.
- On pull requests, before the code gets into our main code branch.
- Once it is already in our code branches, which is the least optimal place.
The pull request stage is where we can inject a check before code makes it into our main branch. That is the most valuable spot, since we are stopping bad code from making it into production; a rough sketch of that kind of check follows below.
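As an illustration of what that pull request check can look like, here is a minimal sketch of a blocking secret scan. This is not GitGuardian's implementation; the two patterns and the "origin/main" base branch name are placeholder assumptions, and a real detector covers far more secret types.

```python
#!/usr/bin/env python3
"""Minimal sketch of a blocking pull request secret check.

NOT GitGuardian's implementation. The patterns below are a tiny, hypothetical
subset of what a real secrets detector ships with, and "origin/main" is an
assumed base branch name.
"""
import re
import subprocess
import sys

# Two illustrative patterns; real tools ship hundreds of tuned detectors.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic assigned secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}


def added_lines(base_ref: str = "origin/main") -> list[str]:
    """Return the lines this branch adds relative to the base branch."""
    diff = subprocess.run(
        ["git", "diff", "--unified=0", base_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        line[1:]
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]


def main() -> int:
    findings = []
    for line in added_lines():
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((name, line.strip()))
    for name, line in findings:
        print(f"[BLOCKED] {name}: {line}")
    # A non-zero exit fails the CI job, so the pull request cannot merge.
    return 1 if findings else 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired in as a required status check, the non-zero exit is what prevents the merge.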
In terms of false positives, the solution's detection is 90% to 95% accurate.
The only time that it is not accurate is when we purposely check in fake secrets for unit tests, and that is on us. They give us the ability to fix this by excluding the test directory; we are just too nervous to do that.
What needs improvement?
It could be easier. They have a CLI tool that engineers can run on their laptops, but getting engineers to install the tool is a manual process. I would like to see them have it integrated into one of those developer tools, e.g., VS Code or JetBrains, so developers don't have to think about it. However, it is moving in the right direction.
I would like to see them take their CLI tooling and make first-level plugins for major development platforms so I don't have to write a script to help engineers set up the CLI tool for their own workstations. That could use some improvement.
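For what it is worth, the setup script I end up writing is nothing fancy. Here is a minimal sketch of one, with the caveat that everything in it is a placeholder assumption: the binary name ("secrets-cli"), the environment variable, and the hook command are stand-ins, not the vendor's documented interface.

```python
#!/usr/bin/env python3
"""Sketch of a workstation setup helper for a secrets-scanning CLI.

Everything here is hypothetical: "secrets-cli", SECRETS_CLI_API_KEY, and the
"scan pre-commit" subcommand are placeholders for whatever the real CLI
documents, not GitGuardian's actual interface.
"""
import os
import shutil
import stat
import sys
from pathlib import Path

CLI_BINARY = "secrets-cli"            # placeholder binary name
API_KEY_VAR = "SECRETS_CLI_API_KEY"   # placeholder environment variable

HOOK_BODY = f"""#!/bin/sh
# Scan staged changes before every commit; a non-zero exit aborts the commit.
exec {CLI_BINARY} scan pre-commit "$@"
"""


def main() -> int:
    if shutil.which(CLI_BINARY) is None:
        print(f"{CLI_BINARY} is not installed; install it first.", file=sys.stderr)
        return 1
    if API_KEY_VAR not in os.environ:
        print(f"Set {API_KEY_VAR} before running this script.", file=sys.stderr)
        return 1
    hook = Path(".git/hooks/pre-commit")
    if not hook.parent.is_dir():
        print("Run this from the root of a git repository.", file=sys.stderr)
        return 1
    hook.write_text(HOOK_BODY)
    hook.chmod(hook.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    print("Pre-commit secret scanning hook installed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A first-party IDE plugin would make even this unnecessary, which is the point.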
When we add new repositories, they don't immediately get a historical scan. Every now and then, when I log into the interface, it is like, "You have five repositories that haven't had a historical scan," and I have to go enable it. That seems weird. It should be automatic.
The developer feedback mechanism is email, so it is out-of-band, which is what we need. It would be cooler if it could be done through Slack or some other means for more urgency. However, it meets our needs. Most of the time, our security team is US-based. A lot of our engineers are in European countries and even places like Australia, so there is a lot of asynchronous work.
For how long have I used the solution?
This is our second year of using this solution.
What do I think about the stability of the solution?
It has never gone down, so it seems pretty stable.
Besides clicking the button to say, "Go do historical scans," it takes care of itself once it has been set up. Every now and then, I just happen to be in there, see it, and push the button. So, it amounts to maybe a week a year when I get around to doing this. We almost never need to go into the console; going into the console is just something you do as a check-up to make sure everything is healthy.
What do I think about the scalability of the solution?
We have over 500 repositories. We get detections within seconds of people making those commits.
It seems like it can scale to any size that we would need.
We are a very flat organization. Everybody is essentially a software engineer, including our security team. We have about 70 engineers today who are all just building software.
How are customer service and support?
I haven't actually needed to use the technical support. I would assume it is great. Everything that we have done with them so far has been great.
Which solution did I use previously and why did I switch?
The breadth of the solution's detection capabilities is the best out there.
I came from a very large Fortune 100 insurance company where we used a couple of different products. They were full of false positives and noise, and in my opinion, not that valuable. Since we have been using this product, I have not received a single false positive that wasn't quickly apparent as something like a test credential.
We had some internal scanning previously. I don't have really strong metrics on how it was before, but there was always a concern: "Are there things we are missing?" When you use homegrown tools, you don't know. Now, we have a mean time to remediation of about 20 hours, which is less than a day. That is really good. We have scanned over 20,000 commits in the last month and found 256 secrets that would have made it to production. That is very impactful to me.
We have tried a bunch of open-source solutions, the biggest one being truffleHog. The main reason for switching was the lack of good detection. It pretty much thinks any complex string is a password, so the signal-to-noise ratio was extremely low. That was a huge toil for us, trying to tune it and get rid of all the noise so the engineers could actually work.
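To show what I mean by noise, here is a toy entropy check of my own (not truffleHog's actual code, and the 3.5 bits-per-character cutoff is an arbitrary choice for the demo). Harmless high-entropy strings like commit hashes and UUIDs get flagged, while an obviously weak password slips through.

```python
"""Toy demo of why entropy-only secret detection is noisy.

A simplified sketch, not truffleHog's implementation: it only shows that many
harmless random-looking strings score like credentials.
"""
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Estimate bits of entropy per character from character frequencies."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())


THRESHOLD = 3.5  # arbitrary "looks random" cutoff chosen for this demo

candidates = {
    "AWS-style secret (fake)": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "git commit hash": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b",
    "random UUID": "3f2504e0-4f89-11d3-9a0c-0305e82c3301",
    "weak real password": "password123",
}

for label, value in candidates.items():
    entropy = shannon_entropy(value)
    flagged = entropy > THRESHOLD
    print(f"{label:>24}: entropy={entropy:.2f} flagged={flagged}")
```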
How was the initial setup?
It was very painless. We just had to give it access to our GitHub environment, then we immediately got value. The only place where it takes preparation is if you want to move it all the way into a developer's workstation because they need an API key and a binary. They have to configure Git to use it. That is six or seven steps, which is a little toilsome.
There was one requirement. When we set up SSO, the documentation wasn't super clear. We had to go back and forth during implementation to get the right settings so we could single sign-on into it. There were some requirements where we had to get information from their implementation on what we needed to put into Okta and how to configure it.
What was our ROI?
We have definitely seen a return on investment when it finds things that are real. We have caught a couple of things before they made it to production, and had they made it to production, that would have been dangerous. For example, AWS secrets, if they ever got leaked, would have allowed people full access to our environment. Just catching two or three of those a year is our return on investment.
It definitely increased our secrets detection rate. My personal opinion is that our custom-built tooling was basically useless, so it has increased our detection rate by 100% because we didn't have metrics prior to it. Our engineers were shocked and surprised at how often they were getting notifications, which tells me that our secrets detection rate has vastly improved.
The solution has helped to increase our security team's productivity.
We don't have to spend our time running scans in repositories to see if they contain secrets. Within 10 seconds of a commit, we know whether it contains a secret.
I would probably spend a couple hours a week just running open-source tools, trying to find secrets and seeing if anything bad was going on. Now, we just get low-priority service tickets when they get opened, and whoever is on call deals with those. I see a couple a week now and then, but they usually take five to 10 minutes to resolve.
The solution has reduced our mean time to remediation. We are down to less than a day. In the past, without context, without knowing who made the commit or what kind of secret it was, it sometimes took us a lot longer to determine the impact and what actions needed to be taken.
What's my experience with pricing, setup cost, and licensing?
I know they do public monitoring, which is a different product, but it is a little expensive and we don't have anything public. So, we probably wouldn't go that way.
The internal side is cheap per user. It is annual pricing based on the number of users.
It was a trivial cost compared to pretty much any security tool in our organization. It was a no-brainer for me to do.
It is a trivial cost compared to static code analysis, where we are paying something like $50 a user. I don't know what this is per user, but it is probably less than $10. It provides a lot more value and is just the right thing to do.
Which other solutions did I evaluate?
We looked at Snyk, GitHub CodeQL that has some secrets detection, and another solution. They either lacked depth or were more expensive.
What other advice do I have?
Read the news. Source code is a huge wealth of knowledge. It also happens to exist on pretty much every developer's workstation, which they probably take home with them. You probably don't want your secrets being all over the country.
Make the detection of a secret a blocking action so you can't deploy until you have resolved it.
When we first started, we had it as a non-blocking informative action and were shocked at how many times an engineer just wants to go home on a weekend and pushes the button anyway. Then, you have clean-up and investigative work to do. Make it blocking so they have to do the right thing. One of the things that we have as a motto is, "Our goal is security. Make it easy to do the right thing so you do the right thing and don't try to work around it." If you know this will block, then you will make sure it doesn't happen.
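The difference between blocking and informative mode usually comes down to one small piece of CI wiring. A sketch, assuming a placeholder scanner command called "secret-scan" and an environment toggle of my own invention:

```python
"""Sketch of wiring a secret scan as blocking vs. merely informative in CI.

"secret-scan" and SECRET_SCAN_BLOCKING are placeholders, not a real tool's
interface; the point is only that informative mode swallows the failure.
"""
import os
import shutil
import subprocess
import sys

SCANNER = "secret-scan"  # placeholder command for whatever scanner is in use
BLOCKING = os.environ.get("SECRET_SCAN_BLOCKING", "true").lower() == "true"

if shutil.which(SCANNER) is None:
    print(f"{SCANNER} not found; nothing to do in this sketch.", file=sys.stderr)
    sys.exit(0)

result = subprocess.run([SCANNER, "--staged"])
if result.returncode != 0:
    if BLOCKING:
        # CI marks the job failed, so the change cannot merge until it is fixed.
        print("Secrets detected: failing the pipeline until they are removed.")
        sys.exit(result.returncode)
    # Informative mode: the finding is logged but nothing stops the merge,
    # which is exactly how things slipped through for us on weekends.
    print("Secrets detected, but this check is informational only.")
sys.exit(0)
```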
There is a lot of disagreement on what a secret is. For example, Slack has webhook URLs; when you send a message to one, it gets posted into the company's Slack. A lot of developers have said that those URLs are effectively public on the Internet, and anyone who finds one can post to it, so they are not secrets. I would disagree, because you can use one for phishing attacks or to confuse the company; they can trigger bad actions or sometimes start automations. We spend a lot of time discussing whether a finding is a real secret when, from my perspective, it probably always is, but we have to convince developers that it is.
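To make the Slack example concrete, here is a tiny sketch that treats webhook URLs as secrets. The URL in it is a fake value in the documented hooks.slack.com format; anyone holding a real one can post arbitrary messages into the workspace, which is exactly why I count them.

```python
"""Tiny illustration of treating Slack incoming-webhook URLs as secrets.

The sample value is fake; the regex follows the public hooks.slack.com URL
shape and is a simplified sketch, not a vendor's tuned detector.
"""
import re

SLACK_WEBHOOK = re.compile(
    r"https://hooks\.slack\.com/services/T[A-Z0-9]+/B[A-Z0-9]+/[A-Za-z0-9]+"
)

sample = (
    'WEBHOOK_URL = "https://hooks.slack.com/services/'
    'T0000EXAMPLE/B0000EXAMPLE/xxxxxxxxxxxxxxxxxxxxxxxx"'
)

match = SLACK_WEBHOOK.search(sample)
if match:
    # Possessing this URL is enough to post into the workspace, so treat it
    # like any other credential and rotate it if it leaks.
    print(f"Treat as a secret: {match.group(0)}")
```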
Secrets detection as a security program for application development is table stakes. You need to have it.
I would rate GitGuardian Internal Monitoring as 9 out of 10. The CLI needs to be easier. The rest of it is perfect.