This is a list of common terms we use throughout our site to describe the risk levels of the software we analyze.
- ASLR (Address Space Layout Randomization)
- Attackers will try to piece existing computer code together to make the system do what they want, like using words out of different magazines to write a ransom note. If each application's code is loaded at a predictable location, this is pretty easy to do, so modern operating systems randomize the locations of code segments. To follow our metaphor, if the attacker doesn't know which words are where, they can't write the message they want to write in their note.
- Binary
- When software compilers translate human-readable source code into machine language, the output files are commonly referred to as "binaries". These are the files that get put onto your computer when you install a program or application.
- CFI (Control Flow Integrity)
- Control Flow Integrity (CFI) is one of the newest safety features to be added to Windows, and refers to any safety features that try to prevent attackers from redirecting the path of code execution. Many attacks involve hijacking software execution, so that the attacker can direct the computer to a different code path than originally intended. Sometimes this is to get their own code executed, and other times they're trying to cobble together a new effect from snippets of existing code, sort of like someone writing a ransom note by cutting out words from a magazine. CFI is available in some form on most modern desktop systems, but is implemented differently on each.
- Code Complexity
- More complex code is harder to review and maintain, and is more likely to contain bugs. This is why NASA/JPL put limits on things like function size for code that's going into critical systems. Commercial code doesn't have to live up to the same standards as the code in the space shuttle, but programmers should still try to "Keep it Simple, Stupid." We measure a few different characteristics of the code in order to assess complexity, and we highlight cases where the complexity is significantly different than average. This includes the sizes, numbers, and scores of the libraries code uses, as well as the overall code size.
- Code Hygiene
- There are things we can learn about a developer's security skill and knowledge based on what functions are used in the code they write. We evaluate about 500 functions that fall into these categories, and by looking at the frequency, count, and consistency of the functions used, we can learn a lot about the development practices of a particular software vendor.
- Code Size
- A software application mostly consists of code sections and data sections. Code is the instructions that tell the computer what to do, and data is the information the computer needs in order to follow those instructions. The total size of all code sections is the code size.
- Crash Testing
- The most commonly accepted method of testing software today is crash testing, or fuzzing. Software is given malformed inputs, to see if and how it crashes. You can learn a lot about how something is built from how it breaks, and fuzzing provides a lot of insight into how robust or potentially exploitable software is.
- DEP (Data Execution Prevention)
- DEP tries to make sure attacker-introduced data is not treated as executable instructions by the computer. Some operating systems have separate application armoring settings for Stack DEP and Heap DEP, while others protect both with a single setting. The stack and the heap are the two locations where data is stored by the application.
- Executable
- When used as a noun, an executable is a file that can be run by a computer. A large application, like a browser or a word processor, usually consists of many executable files, with one main one that ties all the rest together.
- When used as an adjective, "executable" indicates that a particular part of the computer's memory has been identified as code/instructions to be run, not data/user input. It is dangerous for a part of memory where user input resides to be marked as code, because this allows the user to potentially introduce and run their own code.
- Extremely Risky Functions
- These are the rare functions that have no place in commercial software, due to their extreme insecurity. Their use is fairly rare, but a big red flag.
- Fortification
- Some functions are known to be difficult to use correctly, or have been known to introduce vulnerabilities in the past due to misuse. In many cases, safer, alternative versions of these functions exist, but programmers still frequently use the original, less safe versions. If fortification is enabled, then the compiler will try to replace these functions with safer versions. If the compiler can't figure out how to do a replacement correctly, it won't change anything, so the use of fortification doesn't guarantee that *all* unsafe functions get fortified.
- Function
- A function is a named section of code that performs a particular task or procedure. Some functions are written by the programmers specifically for the project they're working on, and other times they use functions from libraries, so that they don't have to write that same procedure every time they need it in a new project.
- Good Functions
- These are much safer replacements for historically bad and risky functions. For example, “strlcpy” is the good version of strncpy and strcpy, which are in the risky and bad categories, respectively. Use of these functions is relatively rare, but indicates that the developers prioritize secure coding.
- Hardened Software
- Hardened software is software that has all the industry-standard safety features enabled and no unnecessary calls to risky functions. With closed source software, you get what you get from the vendor, but with open source you can edit and recompile, which means you can have more control over how hardened the end product is.
- Library
- Instead of writing code from scratch, sometimes a developer will use functions or modules from code that someone else already wrote. This way, they aren't "reinventing the wheel" if someone else has already written and shared the code they need. These software projects obtained from third parties are called libraries.
- Vulnerabilities inside libraries frequently have higher overall impact than vulnerabilities within proprietary code, because the same library could be used by many different applications. For example, Heartbleed was a problem within a commonly used library.
- Open Source
- When code is shared with the public, so that they can review or modify it themselves, that code is open source. "Source" in this case refers to source code. When code is proprietary and not available for review or download, that is called closed source. Most commercial products are closed source.
- RELRO (Relocate Read-Only)
- RELRO stands for Relocate Read-Only, and is a Linux-specific mitigation. Certain particularly sensitive parts of a binary need to be made read-only after the program is loaded, but before it starts running, so that an attacker can't overwrite them to hijack operations.
- Risky Functions
- These are slightly safer functions than the bad functions, sometimes ones which were originally introduced specifically to fix flaws in those bad functions, but which are still tricky to use without introducing vulnerable bugs into code. More security-savvy programmers will generally use the “good” versions of these functions instead.
- Safety Features
- Modern compilers, linkers, and loaders come with lots of safety features, but they have to be enabled by the software vendor. These features are to software what airbags and seatbelts are to cars: things that are known and proven to improve safety, and whose use should be established by now as industry-standard. If your car doesn't have airbags, you're entitled to know that before you buy it.
- Some safety features are designed to prevent vulnerabilities from being exploited. Others seek to contain exploits, to keep them from having too much impact. Attempts at exploit containment are generally referred to as sandboxing.
- SEH (Safe Structured Exception Handling)
- SEH stands for Structured Exception Handler. When an error occurs during the software's execution, the SEH tells the software how to handle that error. Safe SEH makes sure that an attacker can't introduce changes to the SEH pointers and handler code.
- Software Compiler
- After a programmer writes the code for a program, a compiler translates that human-readable code into instructions that a machine can read and execute. As with any translation, the end result can vary a lot, depending on the translator that was used. The more modern a compiler is, the better the safety features it will be able to add into the software at compile time.
- Source Code
- Source code is what programmers write, before a compiler translates it into something that a computer can execute. Some projects are open source, meaning that the code is available for anyone to view and/or download, but most commercial software is closed source, meaning that you can't get access to the underlying source code without special access from the vendor (and most likely a non-disclosure agreement).
- Stack Guards
- Buffer overflows, one of the most commonly exploited vulnerability types today, involve an attacker overwriting values on the stack so that they can change what code is being executed. Stack guards, or canaries, place known values on the stack and watch for changes to them, so that these attacks can be detected in progress and stopped.
- Very Risky Functions
- These are functions which are difficult to use without introducing vulnerabilities, and/or which have been known to introduce buffer overflows or other vulnerable bugs into software. They can be used safely, but often aren’t, and there are safer alternatives available, so their use can be indicative of a lack of security awareness.
- 64 Bit
- Many safety features are more effective for 64-bit binaries, or are only available in their strongest form for 64-bit binaries, so we treat this as a separate safety feature.