Oracle’s Chief Security Officer, Mary Ann Davidson, took to her corporate blog today to rant about security and how Oracle has been pursuing its own customers who break its license terms in the name of software security.
The post said that some customers have been hiring security consultants, who often use ‘static analysis’ tools to reverse engineer Oracle’s software and find security vulnerabilities.
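For readers unfamiliar with the term, static analysis means examining code without running it and flagging patterns that look like bugs or vulnerabilities. As a toy illustration only — real commercial analyzers like the ones at issue here are far more sophisticated and often work on compiled binaries — here’s a minimal Python sketch that scans source code for calls to functions commonly considered dangerous:

```python
import ast

# Toy static analyzer: parse source code into a syntax tree (without
# executing it) and flag calls to known-dangerous builtins.
DANGEROUS = {"eval", "exec"}

def find_dangerous_calls(source):
    """Return (line_number, function_name) pairs for risky calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
print(find_dangerous_calls(sample))  # flags the eval call on line 2
```

The point of the technique — and the crux of the dispute — is that nothing here ever runs the target program, which is why researchers can apply it to software they only hold a license to use.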
Davidson’s post outlines that researchers are wasting the company’s time with false positives and breaking its terms — so it’ll be going after them for reporting these bugs.
The post drips with a special kind of corporate “we know best” arrogance, starting by attacking researchers who have been reverse engineering the company’s code to find security holes:
Recently, I have seen a large-ish uptick in customers reverse engineering our code to attempt to find security vulnerabilities in it. <Insert big sigh here.> This is why I’ve been writing a lot of letters to customers that start with “hi, howzit, aloha” but end with “please comply with your license agreement and stop reverse engineering our code, already.”
It seems Oracle would prefer people didn’t find the glaring security holes that keep cropping up, and would rather keep them quietly to itself.
Next up, Davidson deflects, blaming customers for neglecting basic security hygiene elsewhere and arguing that they shouldn’t be so worried about Oracle’s security flaws when they should be worrying about their own infrastructure instead:
That said, you would think that before gearing up to run that extra mile, customers would already have ensured they’ve identified their critical systems, encrypted sensitive data, applied all relevant patches, be on a supported product release, use tools to ensure configurations are locked down – in short, the usual security hygiene – before they attempt to find zero day vulnerabilities in the products they are using.
Wow, so, you’re blaming companies that might have other security problems for being concerned about your buggy, insecure software? The argument here is ridiculous and misguided, but we’ll let it fly for now.
What happens if someone does find a security hole and notifies Oracle?
If we determine as part of our analysis that scan results could only have come from reverse engineering (in at least one case, because the report said, cleverly enough, “static analysis of Oracle XXXXXX”), we send a letter to the sinning customer, and a different letter to the sinning consultant-acting-on-customer’s behalf – reminding them of the terms of the Oracle license agreement that preclude reverse engineering, So Please Stop It Already.
Hahahahaahahahaha, insanity — Oracle sends an angry letter telling the person who found the bug that they broke its terms. That’s one way to make researchers prefer to just put zero-day flaws out in the open instead.
More like, “I do not need you to analyze the code since we already do that, it’s our job to do that, we are pretty good at it, we can – unlike a third party or a tool – actually analyze the code to determine what’s happening and at any rate most of these tools have a close to 100% false positive rate so please do not waste our time on reporting little green men in our code.”
Oh, so Oracle is so good at security that it found all of these bugs on its own. Surely Oracle isn’t arrogant enough to think it knows best?
I want to reiterate that customers Should Not and Must Not reverse engineer our code. However, if there is an actual security vulnerability, we will fix it. We may not like how it was found but we aren’t going to ignore a real problem – that would be a disservice to our customers. We will, however, fix it to protect all our customers, meaning everybody will get the fix at the same time. However, we will not give a customer reporting such an issue (that they found through reverse engineering) a special (one-off) patch for the problem. We will also not provide credit in any advisories we might issue. You can’t really expect us to say “thank you for breaking the license agreement.”
We’re only three quarters of the way through the post by this point and there’s still more of this dripping arrogance.
Davidson’s post today displays a childish ignorance of how the security industry actually works. It paints the company as the only one that knows best and throws security researchers who have been doing it a favor under the bus.
Information security relies on external parties being able to test software for its resiliency, but Oracle clearly thinks it’s above all of that.
Update: Oracle has pulled the post. There’s a full copy available on Scribd, which we’ve embedded below:
No, You Really Can’t (Mary Ann Davidson Blog)
Update 2: Oracle responded to our request for comment, with Edward Screen, Executive Vice President and Chief Corporate Architect, saying:
“The security of our products and services has always been critically important to Oracle. Oracle has a robust program of product security assurance and works with third party researchers and customers to jointly ensure that applications built with Oracle technology are secure. We removed the post as it does not reflect our beliefs or our relationship with our customers.”
Update 3: Chris Wysopal, CTO and CISO at Veracode, one of the vendors that received a letter from Oracle, said:
“We now rely on software for everything – health, safety and well-being – and crafting a policy of ‘see something, say nothing’ puts us all at risk.
Application security is an enormous software supply chain issue for both enterprises and software vendors because we all rely on software provided by others. Vendors need to be responsive to their customers’ valid requests for assurance, and to security researchers who are trying to make the software we all consume better. Leaders in the industry – Google, Apple, Microsoft, Adobe – all encourage third-party code audits and bug bounty programs as a valuable extension of their own security processes.
Discouraging customers from reporting vulnerabilities or telling them they are violating license agreements by reverse engineering code, is an attempt to turn back the progress made to improve software security.”
Top image credit: Max Herman / Shutterstock.com