CISSP Blog Post 14, Domain 3: Cryptanalysis


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

Ok, we’ve spent a lot of time looking at how cryptography is generated; now we’re going to look at cryptanalysis which, as mentioned in previous posts, is the practice of defeating or breaking encryption.

Here’s a summary of attacks, the goal of each attack, and whether the attack requires copies of the original plaintext, ciphertext, or both.

| Attack | Requires Plaintext? | Requires Ciphertext? | Immediate Goal |
| --- | --- | --- | --- |
| Ciphertext only | N | Y | Derive the plaintext or key |
| Known plaintext | Y | Y | Derive the key |
| Chosen plaintext | Y – step 1) choose the plaintext sample | Y – step 2) use a device, or trick someone, into encrypting the chosen plaintext | Derive / verify the key, then trick the victim into continuing to use the compromised key |
| Chosen ciphertext | Y – step 2) access a device, or trick someone, into decrypting the chosen message | Y – step 1) choose the ciphertext | Uncover the private key of a public / private key pair |

Password files should be stored with one-way encryption, i.e. hashed. You can attack encrypted or hashed passwords with a dictionary attack, which hashes known plaintext guesses and looks for matches against the stolen values. This is different from rainbow tables, which are pre-computed tables of encrypted / hashed passwords that are compared directly against the ciphertext.
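
To make the distinction concrete, here is a minimal sketch of a dictionary attack against unsalted SHA-256 password hashes (the stolen hash and the wordlist are made up for illustration):

```python
import hashlib

# Hypothetical stolen file of unsalted SHA-256 password hashes (hex digests)
stolen_hashes = {"5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8"}

wordlist = ["letmein", "password", "dragon"]   # small, made-up dictionary

for guess in wordlist:
    digest = hashlib.sha256(guess.encode()).hexdigest()   # hash each guess
    if digest in stolen_hashes:                           # compare to the stolen hashes
        print(f"cracked: {guess}")

# A rainbow-table attack skips the hashing step at crack time: the attacker
# looks the stolen digest up in a pre-computed table of digest -> plaintext.
```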

A Birthday Attack is used only against hashes. It exploits the aforementioned collision problem, where two different documents produce the same hash (the specific target value). The larger the hash size, the lower the chance of collisions and the more secure the hash. Essentially, you hash lots of candidate files until one is found whose hash matches the hash of the original text (typically a contract), then you claim the substituted file is the original in order to commit fraud.
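
To see why digest size matters, here is a toy sketch that truncates SHA-256 to 16 bits so a collision appears after only a few hundred attempts; against a full-size digest the same search would be computationally infeasible:

```python
import hashlib
from itertools import count

def tiny_hash(data: bytes) -> str:
    # SHA-256 truncated to 16 bits (4 hex chars) -- deliberately weak for the demo
    return hashlib.sha256(data).hexdigest()[:4]

seen = {}
for i in count():
    doc = f"contract draft {i}".encode()
    h = tiny_hash(doc)
    if h in seen:                      # two different documents, same (tiny) hash
        print(f"collision: {seen[h]!r} and {doc!r} both hash to {h}")
        break
    seen[h] = doc
```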

CISSP Blog Post 13, Domain 3: Message Authentication Code or Message Integrity Code (MACs and MICs)


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

Message Authentication Codes aka Message Integrity Codes are a piece of information used to authenticate a message – i.e. confirm the message came from the expected sender and/or to ensure that the message is the original message and has not been tampered with.

The term MIC is a bit ambiguous. It’s sometimes preferred because MAC can also refer to a Media Access Control address (MAC address); however, MIC is also sometimes used to refer specifically to message digests, which are only one of the two types of Message Authentication Codes.

There are two types of Message Authentication Codes. The first type detects accidental changes in content / integrity and does not use encryption keys; these are called checksums, hashes, or message digests. The second type detects both accidental and deliberate changes in content through the use of encryption keys. These come in two flavors as well: those using symmetric keys (examples are Cipher Block Chaining Message Authentication Code (CBC-MAC) and Hashed Message Authentication Code (HMAC), aka keyed hash) and those using asymmetric keys, such as digital signatures.

Let’s look at the first type, checksums. These require a second channel of verification to ensure deliberate changes aren’t made, and one drawback is that different plaintext can produce the same checksum. For example, the names Dennis and Michelle have the same "checksum":

| Dennis | Value | Michelle | Value |
| --- | --- | --- | --- |
| D | 3 | M | 12 |
| E | 4 | I | 8 |
| N | 13 | C | 2 |
| N | 13 | H | 7 |
| I | 8 | E | 4 |
| S | 18 | L | 11 |
| | | L | 11 |
| | | E | 4 |
| Total | 59 | Total | 59 |
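
That additive scheme (A = 0, B = 1, … Z = 25) is easy to express in code, and the collision falls right out:

```python
def letter_checksum(name: str) -> int:
    # A=0, B=1, ... Z=25 -- simply sum the letter values
    return sum(ord(c) - ord("A") for c in name.upper())

assert letter_checksum("Dennis") == letter_checksum("Michelle") == 59
```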

Example hash algorithms include the MD series created by Ron Rivest – MD2, MD4, and MD5 (128-bit hashes) and MD6 (up to a 512-bit hash) – and the Secure Hash Algorithm (SHA) series: SHA-1 (160-bit), SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512), and SHA-3 (SHA3-224, SHA3-256, SHA3-384, SHA3-512), where the number in the name is the hash size in bits. The larger the hash, the longer it takes to calculate.

This simple additive checksum contrasts with a hash or message digest, which runs each block of text through an algorithm (typically built on XOR operations) to produce a hash of the content.

For the second type, which uses encryption keys, there are CBC-MAC calculations. These require sharing the file, the calculated CBC-MAC, and the symmetric key. So long as no one else has the symmetric key, you can prove that no deliberate changes were made to the file.

[Figure: CBC-MAC Process]

HMAC / Keyed Hash Calculations run faster than CBC-MAC and are created by:

  1. Join the symmetric key with the file
  2. Hash the combined key + file
  3. Append the original key to that hash
  4. Hash a second time (hash 1 + key) – this becomes your HMAC

To use it, you send the file and the HMAC, and the receiver uses the shared symmetric key to recompute the HMAC from the file and verify that it matches (see the sketch below).
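
In practice you would lean on a library rather than hand-roll those steps; a minimal sketch with Python’s built-in hmac module (the key and message are illustrative):

```python
import hashlib
import hmac

key = b"shared-secret-key"          # the symmetric key both parties hold
message = b"contents of the file"

sender_mac = hmac.new(key, message, hashlib.sha256).digest()    # sent along with the file
receiver_mac = hmac.new(key, message, hashlib.sha256).digest()  # recomputed on receipt
assert hmac.compare_digest(sender_mac, receiver_mac)            # constant-time comparison
```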

Digital Signatures are created by:

  1. Hash file
  2. Encrypt hash with private key = digital signature
  3. Receiver reverses the process with the sender’s public key: decrypt the signature and compare the result to their own hash of the file
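
Here is a hedged sketch of that sign-and-verify flow using the third-party cryptography package (assuming it is installed); RSA with SHA-256 and PSS padding is just one common choice:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"the contract text"

# Steps 1-2: hash the file and encrypt that hash with the private key
# (the library performs both inside sign())
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Step 3: the receiver reverses the process with the public key;
# verify() raises InvalidSignature if the message or signature was altered
private_key.public_key().verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```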

Digital signatures can be legally binding if:

  1. Laws legalize their use, and
  2. The parties agree to use them

Digital Certification of Public Keys (X.509) is done by a Certificate Authority (CA). To generate a Digital Certificate you:

  1. Fill out a verification form
  2. The CA builds your digital certificate, which includes your information & your public key
  3. The CA hashes the certificate and encrypts that hash with its private key – this encrypted hash is the CA’s signature on your certificate

Certificates can be used for non-repudiation of Sender (Origin) and non-repudiation of Delivery (Receipt).

A Digital Certificate typically includes:

  • The CA’s name
  • Public key (of certificate owner)
  • Certificate owner name
  • Unique serial # issued by CA
  • Start and end (expiration) dates
  • CA’s digital signature
  • Etc.

Certificates may be revoked if:

  • Private key was compromised (CA’s or cert owner’s)
  • They were fraudulently issued
  • Certificate wasn’t paid for (e.g. check bounced), etc.

Trust models for issuing / using Digital Certificates include:

  1. The Hierarchical Trust Model, which is used by most CAs. The CA acts as a central hub between customers. You can have multiple layers of CAs, and it can get complicated if different CAs don’t trust each other.
  2. The Web of Trust Model (e.g. PGP, OpenPGP, etc.) uses certificates signed by mutually known signers. The drawbacks are that there’s no way to revoke a certificate and they can be easy to compromise.
  3. The Dual Control Model requires multiple people to sign off on approval for a certificate.
  4. There’s also a Complex Hybrid System where systems using different certificate models can exchange and authenticate certificates.

CISSP Blog Post 12, Domain 3: Symmetric and Asymmetric Encryption


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

Symmetric Encryption is faster than asymmetric algorithms and can encrypt large content and high volumes of data quickly. The drawback to symmetric encryption is that key management becomes much more complex as you add users: every pair of users needs its own shared key.

To calculate the number of keys needed: keys = n(n – 1)/2, where n is the number of users. For 10 people: 10(10 – 1)/2 = 45 keys, so for 1,000 people you need 499,500 keys!
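
The same arithmetic in code:

```python
def symmetric_keys_needed(users: int) -> int:
    # every pair of users needs its own shared secret key: n(n - 1) / 2
    return users * (users - 1) // 2

assert symmetric_keys_needed(10) == 45
assert symmetric_keys_needed(1000) == 499_500
```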

Public Key Cryptography aka Asymmetric Cryptography was first publicized by Diffie & Hellman. It requires the creation of key pairs (a public key and a private key). Your private key is required to decrypt data encrypted with your public key, and vice-versa.

There are multiple Asymmetric Encryption methods:

  1. Confidentiality or Secure Message Format encrypts data with the receiver’s public key, so only the receiver can decrypt it.
  2. Proof of Origin or Open Message Format is where you sign with the sender’s private key. This allows anyone to decrypt it, but proves who sent/encrypted the data.
  3. And you can combine Confidentiality + Proof of Origin into what is known as Secure & Signed Message Format where you encrypt the plaintext with the sender’s private key (proof of origin), then encrypt that with the receiver’s public key (to provide confidentiality, so that only the receiver can unlock it).

There are two Mathematical Methods for Asymmetric Cryptography:

  1. Factoring the product of 2 large prime integers – this uses the difficulty of reversing the math P × Q = N (recovering P and Q from N alone) as a shield against breaking the encryption.
  2. Discrete logarithms in a finite field – this uses the difficulty of reversing modular exponentiation as a shield: computing y = 3^x (mod 13) is easy, but recovering the exponent x from y is hard.

Here are some of the most common Asymmetric Key Algorithms:

The Rivest-Shamir-Adleman (RSA) Algorithm uses factoring and has 3 uses:

  1. Can encrypt
  2. Can create digital signatures
  3. Can exchange session keys (symmetric keys used 1 time)

ElGamal uses discrete logarithms and has the same 3 uses as RSA.

Elliptic Curve Cryptography (ECC) also uses discrete logarithms (over elliptic curve groups) and has the same 3 uses as RSA. It provides equivalent encryption strength with a smaller key size than any other asymmetric algorithm.

The Diffie-Hellman Algorithm uses discrete logarithms but is used only for negotiating session keys. The negotiation process is (a toy walkthrough follows the list):

  1. Agree on 2 shared public numbers (a prime modulus and a generator)
  2. Each party selects a random private key
  3. Create public keys A & B
  4. Exchange the public keys
  5. Compute the session key by combining the other person’s public key with your own private key – both sides arrive at the same key
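
Here is a toy walkthrough of those five steps with deliberately tiny numbers (real exchanges use primes that are thousands of bits long, and the private keys are chosen at random):

```python
p, g = 23, 5                 # 1. the two publicly shared numbers (prime modulus, generator)
a, b = 6, 15                 # 2. each party's randomly chosen private key
A = pow(g, a, p)             # 3. Alice's public key
B = pow(g, b, p)             #    Bob's public key
                             # 4. A and B are exchanged over the open channel
alice_key = pow(B, a, p)     # 5. combine the other side's public key with your private key
bob_key = pow(A, b, p)
assert alice_key == bob_key  # both sides compute the same session key
```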

To summarize the benefits & disadvantages of Symmetric & Asymmetric cryptography:

| Services Offered | Symmetric Encryption | Confidentiality / Secure Message | Proof of Origin / Open Message | Digital Signatures | Secure and Signed Message |
| --- | --- | --- | --- | --- | --- |
| Confidentiality | Y | Y | N | N | Y |
| Access control to data | Y | Y | N | N | Y |
| Authenticates sender | N | N | Y | Y | Y |
| Message integrity (when used with hashing) | Y | Y | Y | Y | Y |
| Non-repudiation of sender (can’t deny sending) | N | N | Y | Y | Y |
| Example algorithms | DES, 2DES, 3DES, AES, Blowfish, Twofish, RC2, RC4, RC5, and RC6 | RSA, ECC, & ElGamal | RSA, ECC, & ElGamal | RSA, ECC, & ElGamal | RSA, ECC, & ElGamal |

| | Symmetric | Asymmetric |
| --- | --- | --- |
| Minimum number of keys for N people | N(N – 1) / 2 | 2N |
| Encrypting a 1 GB file (CPU at 100 Mbps) | ~80 seconds (much faster) | ~222 hours ≈ 9.25 days (much slower) |

There is also the concept of hybrid cryptography which combines symmetric & asymmetric encryption. For example, you use symmetric to encrypt the content and asymmetric to share the key. Examples of technology that utilize hybrid cryptography include VPNs, PGP, S/MIME, SSL, TLS, etc.
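
A rough sketch of the hybrid idea using the third-party cryptography package (assuming it is installed): a symmetric Fernet key encrypts the bulk content, and the receiver’s RSA public key protects that key in transit. The names are illustrative.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

# Symmetric pass: fast encryption of the (potentially large) content
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"a large message body...")

# Asymmetric pass: protect just the small session key with the receiver's public key
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
wrapped_key = receiver_public.encrypt(session_key, oaep)

# The receiver unwraps the session key with their private key, then decrypts the content
recovered_key = receiver_private.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"a large message body..."
```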

CISSP Blog Post 11, Domain 3: Modern Encryption Standards / Algorithms


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

Data Encryption Standard (DES) was one of the first commercially used encryption algorithms for computers. It was developed by IBM in the 1970s to win a competition held by the National Bureau of Standards. It is based on the Data Encryption Algorithm (DEA), which grew out of the Lucifer algorithm developed by Dr. Horst Feistel (also at IBM).

DES has a 64-bit block size and a 64-bit key, of which 56 bits are the true key and 8 bits are parity bits that protect against corruption. That gives a key space of 2^56, or roughly 72 quadrillion possibilities. DES performs 16 rounds of combined substitution and transposition encryption per block.

For perspective, in 1997 you could brute-force a single DES key in about 4 months. In 1998 it took only 3 days. By 1999 it took only 23 hours.

Double-DES (2DES) is simply DES done twice.

[Figure: Double-DES (2DES) Encryption Process]

Double-DES is vulnerable to a meet-in-the-middle attack: you build a table of the known plaintext encrypted under every possible key (much like a rainbow table), then decrypt the ciphertext under every possible key and look for a match "in the middle". The result is that 2DES is barely stronger than single DES, so the second encryption pass buys almost nothing.

Triple-DES (3DES) is applying DES 3 times to the same text. It can be done through different approaches:

  1. Three passes of straight encryption with 3 different keys (EEE3)
  2. Encrypt with key 1, decrypt with key 2, encrypt with key 3 (EDE3)
  3. Three encryption passes using only 2 keys – key 1, then key 2, then key 1 again (EEE2)
  4. Encrypt with key 1, decrypt with key 2, then re-encrypt with key 1 (EDE2)

A drawback to Triple-DES is that it is very slow – you have to do 16 rounds × 3 = 48 rounds per block of plaintext.

[Figure: Triple-DES (3DES) Encryption Process]

Advanced Encryption Standard (AES) is a very common encryption standard. It is based on the algorithm “Rijndael”. For comparison:

| | Rijndael | AES |
| --- | --- | --- |
| Block size | 128, 192, or 256 bits | 128 bits |
| Key size | 128, 192, or 256 bits | 128, 192, or 256 bits |
| Encryption rounds (depends on key size) | 10 (128-bit key), 12 (192-bit key), or 14 (256-bit key) | 10 (128-bit key), 12 (192-bit key), or 14 (256-bit key) |

The International Data Encryption Algorithm (IDEA) is a substitute for DES with a larger (128-bit) key size.

Ron Rivest is a famous cryptographer who created RC2, RC5, and RC6 (block ciphers) and RC4 (a stream cipher) – all of them support variable key sizes.

Bruce Schneier created Blowfish and Twofish (block ciphers) – both support variable key sizes.

Let’s take a second to talk about block cipher modes – there are 5 in total:

Block modes are designed to encrypt data at rest and include:

  1. Electronic Code Book which is typically used for encrypting small, independent blocks of data.
  2. Cipher Block Chaining (CBC) is typically used for encrypting documents, files, and storage media. An Initialization Vector (IV) is XOR’d with the 1st plaintext block before that block is encrypted; the resulting ciphertext block is then XOR’d with the next plaintext block before it is encrypted, and so on (a usage sketch follows).
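
A minimal usage sketch of CBC with the third-party cryptography package (assuming it is installed); the plaintext here is exactly one 16-byte block, so no padding step is needed:

```python
from os import urandom

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = urandom(32)                    # 256-bit AES key
iv = urandom(16)                     # random Initialization Vector
plaintext = b"exactly 16 bytes"      # one full block (real code would pad, e.g. PKCS7)

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == plaintext
```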

Stream Modes are designed to encrypt data in transit – they are NOT stream ciphers!

  1. Cipher feedback is typically used for encrypting streaming communications. It has 5 phases:
    1. Initialization Vector (IV) is encrypted
    2. Encrypted IV is XOR’d with plaintext block 1 to give ciphertext 1
    3. Encrypt ciphertext1
    4. XOR the encrypted ciphertext 1 with plaintext block 2 to give ciphertext 2.
    5. Repeat from step 3 with ciphertext 2 to produce ciphertext 3, etc.
  2. Output feedback (OFB) is also used for encrypting in-transit data. It provides pre-processing so it is faster than cipher feedback because you can encrypt keystreams before encrypting data.
  3. Counter mode (CTR) is also used for encrypting data in transit. Instead of a random IV it uses a counter variable that simply increments by 1 for each block. It allows pre-processing of the keystream and, because blocks can be processed in parallel, it is even faster than output feedback mode (see the sketch below).
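
To show why counter mode allows pre-processing, here is a sketch that builds the keystream ahead of time by encrypting an incrementing counter (again using the third-party cryptography package; a real implementation would also mix a random nonce into the counter blocks):

```python
from os import urandom

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = urandom(32)
plaintext = b"data in transit, any length works"

# Pre-process the keystream: encrypt counter blocks 0, 1, 2, ... under the key.
# This can happen before the data even arrives, which is what makes CTR fast.
blocks_needed = (len(plaintext) + 15) // 16
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
keystream = b"".join(
    enc.update(counter.to_bytes(16, "big")) for counter in range(blocks_needed)
)

# Encryption and decryption are then just an XOR against the keystream.
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
assert bytes(c ^ k for c, k in zip(ciphertext, keystream)) == plaintext
```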

In our next blog post, we will look at Symmetric and Asymmetric Encryption.

CISSP Blog Post 10, Domain 3: Cryptographic Ciphers


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

This is a short post to cover the different types of cryptographic ciphers – I hope it’s useful in your studies!

Substitution Ciphers are mono-alphabetic substitutions, where you replace letters 1 for 1. They are vulnerable to frequency analysis, which looks at how often each letter shows up in the ciphertext.

Transposition Ciphers, aka Permutation Ciphers, rearrange the letters rather than replacing them. A famous historical example is the Spartan Scytale. Transposition ciphers are vulnerable to pattern recognition.

Polyalphabetic Ciphers, such as the Vigenère Cipher, use multiple substitution alphabets. They are still vulnerable to frequency analysis but take longer to break.
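
A small sketch of a Vigenère-style encryption, shifting each letter of the message by the corresponding letter of a repeating key (letters only, for simplicity):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    out = []
    for i, c in enumerate(text.upper()):
        shift = ord(key[i % len(key)].upper()) - ord("A")   # key letter sets the shift
        if decrypt:
            shift = -shift
        out.append(chr((ord(c) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

ciphertext = vigenere("ATTACKATDAWN", "LEMON")              # -> "LXFOPVEFRNHR"
assert vigenere(ciphertext, "LEMON", decrypt=True) == "ATTACKATDAWN"
```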

A Running Key Cipher is a type of polyalphabetic substitution cipher where a public text is used as a key (typically a book), to provide a long keystream similar to a one-time pad.

Stream-Based Ciphers (recall these were invented by Gilbert Vernam) use exclusive OR arithmetic (XOR), which follows two simple rules:
1) Same inputs (0 + 0 or 1 + 1) = 0
2) Different inputs (0 + 1 or 1 + 0) = 1

Stream-based ciphers are used to encrypt data in transit. You first run a key through a pseudo-random number generator to create a key “stream”, which minimizes the risk of a repeating key.
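
A conceptual sketch of the key -> keystream -> XOR flow (Python’s random module stands in for the generator here and is NOT cryptographically secure; it just keeps the example self-contained):

```python
import random

def keystream(key: int, length: int) -> bytes:
    # Seed a PRNG with the key so sender and receiver generate the same stream.
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(length))

plaintext = b"data in transit"
ks = keystream(key=42, length=len(plaintext))

ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))           # XOR to encrypt
assert bytes(c ^ k for c, k in zip(ciphertext, ks)) == plaintext   # XOR again to decrypt
```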

Block ciphers are slower and more complex than stream ciphers. They are used to encrypt data in transit and at rest. Examples include DES and AES.

Encryption occurs in 2 stages:
1) Break data into equal sized blocks
2) Encrypt the blocks

CISSP Blog Post 9, Domain 3: Cryptography Basics


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

Cryptography is the art and science of creating secrets. Cryptanalysis is the practice of defeating or breaking encryption.

Steganography is not encryption! It is the science of hiding information, aka “concealed writing”. It is used in place of cryptography so as not to draw suspicion. An example is the Null Cipher, where you mix the plaintext with a large amount of non-secret content. Null ciphers have been used by many high-profile lovers and prisoners over the years to send messages back and forth without arousing suspicion. Creating the concealing text can be very time consuming, especially if you want it to read naturally rather than look random.
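
As a tiny illustration of a null cipher, only the first letter of each word below carries meaning (the cover text is made up):

```python
cover_text = "Have everyone leave promptly Monday evening"
hidden = "".join(word[0] for word in cover_text.split()).upper()
assert hidden == "HELPME"
```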

Ok, let’s start with a brief history of cryptography methods:

Leon Battista Alberti’s Cipher Disk is the first example of polyalphabetic substitution encryption. It consisted of two disks, one inside the other, each with characters around its edge; you would rotate the inner disk a set number of steps to change the substitution for each letter. The encryption’s protection relied on keeping the starting position (the key) a secret.

[Figure: Alberti’s Cipher Disk]

Auguste Kerckhoffs’ Principle: the strength of encryption should not rely on keeping the encryption algorithm a secret. The strength should rely on keeping the keys a secret, so long as you can change them when needed.

The Enigma Machine was a pre-WWII invention made famous by Nazi Germany’s use of it during WWII. It had a keyboard and a lamp board (two sets of “keys”) connected by a set of scramblers, or rotors. The Germans would publish code books of scrambler settings to be used on a schedule for encrypting and decrypting text.

[Figure: Enigma Machine]

Gilbert Vernam in the early 1900’s invented numerous encryption methodologies that led to modern techniques. These included the one-time pad and the Stream Cipher.

We’ll discuss stream ciphers in another article. One-Time Pads are exactly what they sound like – both the sender & receiver of a message have identical pads of sheets of random characters. To encode a message, you keep rolling the cipher key through the pad of random characters. A one-time pad is not vulnerable to frequency analysis so long as the following 3 rules are followed:

  1. Key must contain random characters
  2. Key must be at least as long as the message itself
  3. Key must be used only once

So what is some of the terminology you need to understand cryptography?

First is Cryptovariable (aka key) which is defined as a value used with the algorithm to lock and unlock information.

Key Space is the maximum number of keys possible. The larger the key size (number of character positions), the larger the key space and the more secure your key can become. For example:

  • For a numeric key (digits 0–9) of length 2, you have 10² = 100 possible values.
  • For a lowercase alphanumeric key (a–z plus 0–9, 36 characters) of length 2, you have 36² = 1,296 possible values.
  • For a mixed-case alphanumeric key (a–z, A–Z, and 0–9, 62 characters) of length 2, you have 62² = 3,844 possible values.
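
The key-space arithmetic in code (character-set sizes as above):

```python
def key_space(charset_size: int, key_length: int) -> int:
    # total possible keys = (number of possible symbols) ** (key length)
    return charset_size ** key_length

assert key_space(10, 2) == 100        # digits only
assert key_space(36, 2) == 1_296      # lowercase letters + digits
assert key_space(62, 2) == 3_844      # upper/lowercase letters + digits
assert key_space(2, 56) == 2 ** 56    # a DES-style 56-bit binary key
```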

An Algorithm is the set of rules performing the locking & unlocking of information.

Ideally, the same plaintext (P), when encrypted by two different keys (K), will give you two different ciphertexts (C). Key clustering occurs when the same plaintext, encrypted by two different keys, gives you the exact same ciphertext. Analyzing those two ciphertexts can make it much easier to figure out what the original plaintext was, because the plaintext effectively acts as the "key" in this case and you’re just trying to strip the two actual keys out of the ciphertext.

Normal Keys:

[Figure: Normal Key Encryption]

Key Clustering:

[Figure: Key Clustering]

Symmetric Key Encryption – aka Single Key, Shared Key, Secret Key, or Session Key Encryption (all starting with “S”) is where the sender & receiver share a key to lock / unlock a cipher. You need to share the key through out-of-band techniques away from the primary system or environment in order to protect the keys.

[Figure: Symmetric Key Encryption Process]

Work Factor is the estimated time &/or effort required to defeat a cipher algorithm. It is often measured by timing a brute force approach. The work factor should always be much greater than the time the data needs to be kept a secret.

CISSP Blog Post 8, Domain 3: Security Testing and Evaluation Criteria


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

First, some vocabulary so we know what type of testing we’re talking about:

  • Functionality Testing – What does product do? What are its features? Does it meet requirements?
  • Assurance Testing – How well does product work? Is it reliable?

There are different frameworks for testing security systems including:

The Trusted Computer System Evaluation Criteria (TCSEC) Classes, aka the “Orange Book”, is a U.S.-based test for Confidentiality. It includes the following classes and criteria:

| Class | Criteria | Notes |
| --- | --- | --- |
| D | Minimal protection (failure) | |
| C1 | Discretionary access control | Identification and authorization |
| C2 | C1 + object reuse protection and protected audit trail | Clear out media & protect logs |
| B1 | C2 + mandatory access control and labeled protection | Mandatory access control needs all 3 for the system to enforce access: 1. an access control policy – any operation in the system is tested against the rules to determine if it’s allowed; 2. classification / sensitivity labels for objects (e.g. data) – assigned by the data owner (can be implemented by the data custodian); 3. clearance / privilege labels for subjects (e.g. users) – assigned by management (can be implemented by the data custodian) |
| B2 | B1 + trusted path (integrity indicator for the user) and covert storage channel analysis | |
| B3 | B2 + trusted recovery (protect data in case of failure) and covert timing channel analysis | |
| A1 | B3 + formal design verification | An example of a product that achieved A1 was Honeywell SCOMP |

The Information Technology Security Evaluation Criteria (ITSEC) is a European-based system which can be used to test for Confidentiality, Integrity, or Availability capabilities. It uses a 2-part grade, one for functionality and one for assurance. Example functionality scores: High Integrity Services = F-IN, High Availability Services = F-AV.

The Common Criteria (ISO 15408) is an internationally used set of evaluation criteria and can test for Confidentiality, Integrity, or Availability capabilities. It also uses a 2-part grade – functionality and assurance. The standard leaves it up to each industry to create its own security rating scale (which is approved by the Common Criteria Group).

Common Criteria specific terminology you need to know for the CISSP exam include:

  • Protection Profile – Community or industry specific security rating scale – can be used for testing security products or security components within products.
  • Target of Evaluation – Product to be tested
  • Security Target – Document specifying security properties and vendor’s desired ratings for product. Typically includes: Vendor, product (target of evaluation), purpose of product, protection profile to be used for test, desired grades (e.g. CS3/EAL5), etc.

CISSP Blog Post 7, Domain 3: Access Control Models


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

There are many models for designing Access Controls. Famous models include:

The Graham-Denning Model – this model accounts for Confidentiality, Integrity, and Availability (CIA). It includes a mechanism called a Monitor that enforces access from subjects to objects.

[Figure: Graham-Denning Access Control Model]

The James Anderson Model is similar to the Graham-Denning Model, however this model has a “Reference Monitor” that is an enhanced ruleset version of the Graham-Denning “Monitor” mechanism.

[Figure: James Anderson Access Control Model]

The Harrison, Ruzzo, Ullman (HRU) Model formalized the access control matrix as a model.

[Figure: Harrison, Ruzzo, Ullman (HRU) Access Control Model]
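
An access control matrix is essentially a table of subjects against objects, with each cell listing the subject’s rights; a minimal sketch (the subjects, objects, and rights are illustrative):

```python
# rows = subjects, columns = objects, cells = rights held
access_matrix = {
    "alice": {"payroll.xlsx": {"read", "write"}, "handbook.pdf": {"read"}},
    "bob":   {"handbook.pdf": {"read"}},
}

def is_allowed(subject: str, obj: str, right: str) -> bool:
    return right in access_matrix.get(subject, {}).get(obj, set())

assert is_allowed("alice", "payroll.xlsx", "write")
assert not is_allowed("bob", "payroll.xlsx", "read")
```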

CISSP Blog Post 6, Domain 3: Security Models


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

In our last post, we looked at Security Design concepts and Architecture frameworks. In this post, we will continue to build upon that by looking at Security models.

Security Architecture Frameworks differ from Security Models, which are tools for building systems. Let’s start by looking at some more elementary / older security models.

  • The State Machine is used for building systems providing consistent security from 1 transaction stage to another. E.g. an ATM machine
  • The Lattice Model is used for creating a mathematical structure using a hierarchy of security levels. E.g. Data Classification

Some newer security models include: Bell-LaPadula which is a Confidentiality Model for classification of access rights.

[Figure: Bell-LaPadula Security Model]

Example (from the perspective of a subject cleared at the Confidential level):

| Object Level | Read Data | Write Data | Read + Write Data |
| --- | --- | --- | --- |
| Top Secret | N (unauthorized access) | Y | N |
| Confidential | Y | Y | Y |
| Public | Y | N (unauthorized disclosure) | N |
| Property name | Simple | Star (*) | Strong Star (*) |
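
The two Bell-LaPadula rules translate directly into code: the Simple Security Property (“no read up”) and the Star Property (“no write down”). A minimal sketch with illustrative labels:

```python
LEVELS = {"Public": 0, "Confidential": 1, "Top Secret": 2}

def can_read(subject_clearance: str, object_label: str) -> bool:
    # Simple Security Property: no read up
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance: str, object_label: str) -> bool:
    # Star (*) Property: no write down
    return LEVELS[subject_clearance] <= LEVELS[object_label]

# A Confidential-cleared subject can read Public but not Top Secret data,
# and can write up to Top Secret but not down to Public.
assert can_read("Confidential", "Public") and not can_read("Confidential", "Top Secret")
assert can_write("Confidential", "Top Secret") and not can_write("Confidential", "Public")
```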

The Biba Integrity Model deals with the integrity of data, NOT confidentiality. The goal of the model is to prevent unauthorized / outside users from tampering with the system. An example could be a water treatment plant’s rules (from the perspective of water at the assigned “drinkable” level):

| Water Level | Drink (Read) | Pour Out (Write) |
| --- | --- | --- |
| Top Quality (pure / distilled) | Y | N |
| Assigned Level (drinkable) | Y | Y |
| Lower Quality (dirty) | N | Y |
| Property name | Simple Property | Star (*) Property |

Clark-Wilson Integrity Model – This model has 3 goals:

  1. Prevent unauthorized users (outsiders) from tampering with the system
  2. Prevent authorized users (insiders) from tampering with the system
  3. Transactions must maintain internal and external consistency

Internal consistency is where a transaction must fit the rules of the system, for example a U.S. ATM must have $USD inside, not $AUD

External consistency is where a transaction must fit reality or make sense. For example, you cannot attempt to withdraw $500 with only $50 in your bank account.

In the Clark-Wilson Integrity Model, a program decides whether subjects have rights to objects, enforces Separation of Duties (SoD), and creates well-formed (high-integrity) transactions using 3 rules (you can remember them with the acronym TLC):

T – No Tampering of Transactions
L – Proper Logging of Transactions
C – Consistency (Internal & External) of Transactions

[Figure: Clark-Wilson Integrity Model]

The Brewer and Nash Secrecy Model, aka the Chinese Wall Model, has the goal of preventing conflicts of interest. It creates a virtual wall between classes of data. For example, doctors don’t share one patient’s data with another patient. Another, more famous example is Martha Stewart’s insider trading case:

[Figure: Brewer & Nash Secrecy Model]

CISSP Blog Post 5, Domain 3: Security Design Concepts and Architecture Frameworks


Credit: Post based on CISSP course presented by Dennis Lee, November 2018

Today we’re going to take a look at Security Design and Architecture.

Security Design Concepts (or Goals) are pretty simple and I’ve included a basic acronym to try and help you remember it (DSL):

  • D – Building Defense-in-Depth
  • S – Reducing Single Points of Failure
  • L – Reducing the Weakest Links

There are many Security Architecture frameworks; the main ones on the CISSP exam include the Zachman Framework, which is designed to gather input for design & requirements before building anything. It incorporates the needs, goals, and concerns of key players including asset owners, users, programmers & designers, management, etc.

The next is the Sherwood Applied Business Security Architecture (SABSA), which is used for building enterprise-level information security. Its phases include:

  1. Contextual – Business Needs
  2. Conceptual – Security Needs
  3. Logical – Security Policies
  4. Physical – Security Procedures
  5. Component – Security Products & Tools
  6. Operational – Risk Management Planning & Response (covers all phases)

The Open Group Architecture Framework (TOGAF) is an alternative that provides building blocks covering:

  • Business architecture
  • Application architecture
  • Data architecture
  • Technology architecture (e.g. Network Design, etc.)

Information Technology Infrastructure Library (ITIL) provides guidelines for achieving quality Information Technology services including:

  • Improving IT efficiency
  • Reducing IT related Risk
  • Enhancing IT Security

And last but not least the Capability Maturity Model Integration (CMMI) which provides a roadmap for showing progress for process improvement (Levels 1 – 5):

  1. Initial – Processes are unpredictable, poorly defined, and reactive
  2. Repeatable – Processes are established for projects, but are often reactive
  3. Defined – Processes are established for organization and based on standards, more proactive
  4. Managed – Processes are measured and controlled
  5. Optimizing – Focus on continuous process improvement