SSCP Dump Free


SSCP Dump Free – 50 Practice Questions to Sharpen Your Exam Readiness.

Looking for a reliable way to prepare for your SSCP certification? Our SSCP Dump Free includes 50 exam-style practice questions designed to reflect real test scenarios—helping you study smarter and pass with confidence.

Using an SSCP dump free set of questions can give you an edge in your exam prep by helping you:

  • Understand the format and types of questions you’ll face
  • Pinpoint weak areas and focus your study efforts
  • Boost your confidence with realistic question practice

Below, you will find 50 free questions from our SSCP Dump Free collection. These cover key topics and are structured to simulate the difficulty level of the real exam, making them a valuable tool for review or final prep.

Question 1

One purpose of a security awareness program is to modify:

A. employee’s attitudes and behaviors towards enterprise’s security posture

B. management’s approach towards enterprise’s security posture

C. attitudes of employees with sensitive data

D. corporate attitudes about safeguarding data

 


Suggested Answer: A. The purpose of security awareness training is to modify employees' behavior and attitude toward the enterprise's security posture.

Community Answer: A

Security-awareness training is performed to modify employees' behavior and attitude toward security. This can best be achieved through a formalized process of security-awareness training.
It is used to increase the overall awareness of security throughout the company. It is targeted at every single employee, not only one group of users.
Unfortunately, you cannot apply a patch to a human being; the only thing you can do is educate employees and make them more aware of security issues and threats. Never underestimate human stupidity.
Reference(s) used for this question:
TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation. also see:
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 130). McGraw-Hill. Kindle Edition.

Question 2

Which of the following is implemented through scripts or smart agents that replay the user's multiple log-ins against authentication servers to verify the user's identity and permit access to system services?

A. Single Sign-On

B. Dynamic Sign-On

C. Smart cards

D. Kerberos

 


Suggested Answer: A

SSO can be implemented by using scripts that replay the user's multiple log-ins against authentication servers to verify the user's identity and to permit access to system services.
Single Sign-On is the best answer in this case because it would include Kerberos.
When you have two good answers within the four choices presented, you must select the BEST one. The higher-level choice, or the choice that would include the other, is usually the best.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 40.
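
To picture the legacy scripted approach described above, here is a minimal Python sketch of a script that replays one stored credential against several back-end services on the user's behalf. The service names and the authenticate() stub are hypothetical, not any real SSO product's API.

```python
# Minimal sketch of legacy "scripted" SSO: a stored credential is replayed
# against several back-end services on the user's behalf. Service names and
# the authenticate() stub are hypothetical, for illustration only.

BACKENDS = {
    "mail":  {"alice": "S3cret!"},
    "files": {"alice": "S3cret!"},
    "wiki":  {"alice": "S3cret!"},
}

def authenticate(service: str, user: str, password: str) -> bool:
    """Stand-in for each service's real login API."""
    return BACKENDS.get(service, {}).get(user) == password

def scripted_sso(user: str, password: str) -> dict:
    """Replay the single stored credential against every service."""
    return {svc: authenticate(svc, user, password) for svc in BACKENDS}

if __name__ == "__main__":
    print(scripted_sso("alice", "S3cret!"))   # {'mail': True, 'files': True, 'wiki': True}
```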

Question 3

Which of the following was developed by the National Computer Security Center (NCSC) for the US Department of Defense?

A. TCSEC

B. ITSEC

C. DIACAP

D. NIACAP

 


Suggested Answer: A. The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications.

Community Answer: A

Initially issued in 1983 by the National Computer Security Center (NCSC), an arm of the National Security Agency, and then updated in 1985, TCSEC was eventually replaced by the Common Criteria international standard, originally published in 2005.
References:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, pages 197-199.
Wikipedia –
http://en.wikipedia.org/wiki/TCSEC

Question 4

Which of the following remote access authentication systems is the most robust?

A. TACACS+

B. RADIUS

C. PAP

D. TACACS

 


Suggested Answer: A

Community Answer: A

TACACS+ is a proprietary Cisco enhancement to TACACS and is more robust than RADIUS. PAP is not a remote access authentication system but a remote node security protocol.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3:
Telecommunications and Network Security (page 122).

Question 5

Which of the following is addressed by Kerberos?

A. Confidentiality and Integrity

B. Authentication and Availability

C. Validation and Integrity

D. Auditability and Integrity

 


Suggested Answer: A

Community Answer: A

Kerberos addresses the confidentiality and integrity of information.
It also addresses primarily authentication but does not directly address availability.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 42. and https://www.ietf.org/rfc/rfc4120.txt and http://learn-networking.com/network-security/how-kerberos-authentication-works

Question 6

In the days before CIDR (Classless Inter-Domain Routing), networks were commonly organized by classes. Which of the following would have been true of a Class B network?

A. The first bit of the IP address would be set to zero.

B. The first bit of the IP address would be set to one and the second bit set to zero.

C. The first two bits of the IP address would be set to one, and the third bit set to zero.

D. The first three bits of the IP address would be set to one.

 


Suggested Answer: B

Each Class B network address has a 16-bit network prefix, with the two highest order bits set to 1-0.
The following answers are incorrect:
The first bit of the IP address would be set to zero. Is incorrect because, this would be a Class A network address.
The first two bits of the IP address would be set to one, and the third bit set to zero. Is incorrect because, this would be a Class C network address.
The first three bits of the IP address would be set to one. Is incorrect because, this is a distractor. Class D & E have the first three bits set to 1. Class D the 4th bit is 0 and for Class E the 4th bit to 1.
Classless Inter-Domain Routing (CIDR)
The high-order bits for each class are shown below.
For Class A, the addresses are 0.0.0.0 – 127.255.255.255
The lowest Class A address is represented in binary as 00000000.00000000.00000000.00000000
For Class B networks, the addresses are 128.0.0.0 – 191.255.255.255.
The lowest Class B address is represented in binary as 10000000.00000000.00000000.00000000
For Class C, the addresses are 192.0.0.0 – 223.255.255.255
The lowest Class C address is represented in binary as 11000000.00000000.00000000.00000000
For Class D, the addresses are 224.0.0.0 – 239.255.255.255 (Multicast)
The lowest Class D address is represented in binary as 11100000.00000000.00000000.00000000
For Class E, the addresses are 240.0.0.0 – 255.255.255.255 (Reserved for future usage)
The lowest Class E address is represented in binary as 11110000.00000000.00000000.00000000
Classful IP Address Format (reference image not included)
References:
3Com http://www.3com.com/other/pdfs/infra/corpinfo/en_US/501302.pdf

AIOv3 Telecommunications and Networking Security (page 438)
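
As a quick way to check the bit patterns above, the following Python sketch derives the legacy address class from the high-order bits of the first octet.

```python
# Minimal sketch: derive the legacy address class from the high-order bits
# of the first octet, matching the bit patterns described above.

def address_class(ip: str) -> str:
    first_octet = int(ip.split(".")[0])
    bits = f"{first_octet:08b}"
    if bits.startswith("0"):
        return "A"          # 0.0.0.0   - 127.255.255.255
    if bits.startswith("10"):
        return "B"          # 128.0.0.0 - 191.255.255.255
    if bits.startswith("110"):
        return "C"          # 192.0.0.0 - 223.255.255.255
    if bits.startswith("1110"):
        return "D"          # 224.0.0.0 - 239.255.255.255 (multicast)
    return "E"              # 240.0.0.0 - 255.255.255.255 (reserved)

assert address_class("10.1.2.3") == "A"
assert address_class("172.16.0.1") == "B"   # first bits are 1-0, i.e. Class B
assert address_class("192.168.1.1") == "C"
```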

Question 7

While using IPsec, the ESP and AH protocols both provide integrity services. However, when using AH, special attention needs to be paid if one of the peers uses NAT for address translation. Which of the items below would affect the use of AH and its Integrity Check Value (ICV) the most?

A. Key session exchange

B. Packet Header Source or Destination address

C. VPN cryptographic key size

D. Cryptographic algorithm used

 


Suggested Answer: B

Community Answer: B

It may seem odd to have two different protocols that provide overlapping functionality.
AH provides authentication and integrity, and ESP can provide those two functions and confidentiality.
Why even bother with AH then?
In most cases, the reason has to do with whether the environment is using network address translation (NAT). IPSec will generate an integrity check value (ICV), which is really the same thing as a MAC value, over a portion of the packet. Remember that the sender and receiver generate their own values. In IPSec, it is called an ICV value. The receiver compares her ICV value with the one sent by the sender. If the values match, the receiver can be assured the packet has not been modified during transmission. If the values are different, the packet has been altered and the receiver discards the packet.
The AH protocol calculates this ICV over the data payload, transport, and network headers. If the packet then goes through a NAT device, the NAT device changes the IP address of the packet. That is its job. This means a portion of the data (network header) that was included to calculate the ICV value has now changed, and the receiver will generate an ICV value that is different from the one sent with the packet, which means the packet will be discarded automatically.
The ESP protocol follows similar steps, except it does not include the network header portion when calculating its ICV value. When the NAT device changes the IP address, it will not affect the receiver's ICV value because it does not include the network header when calculating the ICV.
Here is a tutorial on IPSEC from the Shon Harris Blog:
The Internet Protocol Security (IPSec) protocol suite provides a method of setting up a secure channel for protected data exchange between two devices. The devices that share this secure channel can be two servers, two routers, a workstation and a server, or two gateways between different networks. IPSec is a widely accepted standard for providing network layer protection. It can be more flexible and less expensive than end-to end and link encryption methods.
IPSec has strong encryption and authentication methods, and although it can be used to enable tunneled communication between two computers, it is usually employed to establish virtual private networks (VPNs) among networks across the Internet.
IPSec is not a strict protocol that dictates the type of algorithm, keys, and authentication method to use. Rather, it is an open, modular framework that provides a lot of flexibility for companies when they choose to use this type of technology. IPSec uses two basic security protocols: Authentication Header (AH) and
Encapsulating Security Payload (ESP). AH is the authenticating protocol, and ESP is an authenticating and encrypting protocol that uses cryptographic mechanisms to provide source authentication, confidentiality, and message integrity.
IPSec can work in one of two modes: transport mode, in which the payload of the message is protected, and tunnel mode, in which the payload and the routing and header information are protected. ESP in transport mode encrypts the actual message information so it cannot be sniffed and uncovered by an unauthorized entity. Tunnel mode provides a higher level of protection by also protecting the header and trailer data an attacker may find useful. Figure 8-26 shows the high- level view of the steps of setting up an IPSec connection.
Each device will have at least one security association (SA) for each VPN it uses. The SA, which is critical to the IPSec architecture, is a record of the configurations the device needs to support an IPSec connection. When two devices complete their handshaking process, which means they have agreed upon a long list of parameters they will use to communicate, these data must be recorded and stored somewhere, which is in the SA.
The SA can contain the authentication and encryption keys, the agreed-upon algorithms, the key lifetime, and the source IP address. When a device receives a packet via the IPSec protocol, it is the SA that tells the device what to do with the packet. So if device B receives a packet from device C via IPSec, device B will look to the corresponding SA to tell it how to decrypt the packet, how to properly authenticate the source of the packet, which key to use, and how to reply to the message if necessary.
SAs are directional, so a device will have one SA for outbound traffic and a different SA for inbound traffic for each individual communication channel. If a device is connecting to three devices, it will have at least six SAs, one for each inbound and outbound connection per remote device. So how can a device keep all of these
SAs organized and ensure that the right SA is invoked for the right connection? With the mighty security parameter index (SPI), that's how. Each device has an
SPI that keeps track of the different SAs and tells the device which one is appropriate to invoke for the different packets it receives. The SPI value is in the header of an IPSec packet, and the device reads this value to tell it which SA to consult.
IPSec can authenticate the sending devices of the packet by using MAC (covered in the earlier section, “The One-Way Hash”). The ESP protocol can provide authentication, integrity, and confidentiality if the devices are configured for this type of functionality.
So if a company just needs to make sure it knows the source of the sender and must be assured of the integrity of the packets, it would choose to use AH. If the company would like to use these services and also have confidentiality, it would use the ESP protocol because it provides encryption functionality. In most cases, the reason ESP is employed is because the company must set up a secure VPN connection.
Because IPSec is a framework, it does not dictate which hashing and encryption algorithms are to be used or how keys are to be exchanged between devices.
Key management can be handled manually or automated by a key management protocol. The de facto standard for IPSec is to use Internet Key Exchange (IKE), which is a combination of the ISAKMP and OAKLEY protocols. The Internet Security Association and Key Management Protocol (ISAKMP) is a key exchange architecture that is independent of the type of keying mechanisms used. Basically, ISAKMP provides the framework of what can be negotiated to set up an IPSec connection (algorithms, protocols, modes, keys). The OAKLEY protocol is the one that carries out the negotiation process. You can think of ISAKMP as providing the playing field (the infrastructure) and OAKLEY as the guy running up and down the playing field (carrying out the steps of the negotiation).
IPSec is very complex with all of its components and possible configurations. This complexity is what provides for a great degree of flexibility, because a company has many different configuration choices to achieve just the right level of protection. If this is all new to you and still confusing, please review one or more of the following references to help fill in the gray areas.
The following answers are incorrect:
The other options are distractors.
The following reference(s) were/was used to create this question:
Shon Harris, CISSP All-in-One Exam Guide, fifth edition, page 759, and https://neodean.wordpress.com/tag/security-protocol/
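
The NAT interaction described above can be illustrated with a small Python sketch (not real IPsec): an HMAC stands in for the ICV. The AH-style check covers the IP header, so a NAT rewrite of the source address breaks it, while the ESP-style check over the payload alone is unaffected.

```python
# Illustrative sketch (not real IPsec): an HMAC stands in for the ICV.
# The AH-style check covers the IP header, so a NAT rewrite of the source
# address invalidates it; the ESP-style check over the payload alone survives.
import hmac, hashlib

KEY = b"shared-secret"

def icv(data: bytes) -> bytes:
    return hmac.new(KEY, data, hashlib.sha256).digest()

header  = b"src=10.0.0.5;dst=203.0.113.9"
payload = b"application data"

ah_icv  = icv(header + payload)   # AH: header + payload are covered
esp_icv = icv(payload)            # ESP: payload only

# NAT rewrites the source address in transit.
nat_header = header.replace(b"10.0.0.5", b"198.51.100.1")

print(hmac.compare_digest(icv(nat_header + payload), ah_icv))   # False -> AH packet dropped
print(hmac.compare_digest(icv(payload), esp_icv))               # True  -> ESP check unaffected
```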

Question 8

Unshielded Twisted Pair cabling is a:

A. four-pair wire medium that is used in a variety of networks.

B. three-pair wire medium that is used in a variety of networks.

C. two-pair wire medium that is used in a variety of networks.

D. one-pair wire medium that is used in a variety of networks.

 


Suggested Answer: A

Community Answer: C

Unshielded Twisted Pair cabling is a four-pair wire medium that is used in a variety of networks.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 101.

Question 9

Which of the following is NOT a technique used to perform a penetration test?

A. traffic padding

B. scanning and probing

C. war dialing

D. sniffing

 


Suggested Answer: A

Community Answer: D

Traffic padding is a countermeasure to traffic analysis.
Even if perfect cryptographic routines are used, the attacker can gain knowledge of the amount of traffic that was generated. The attacker might not know what
Alice and Bob were talking about, but can know that they were talking and how much they talked. In certain circumstances this can be very bad. Consider, for example, a military organising a secret attack against another nation: merely knowing that there is a lot of secret activity going on may be enough to alert the other nation.
As another example, when encrypting Voice Over IP streams that use variable bit rate encoding, the number of bits per unit of time is not obscured, and this can be exploited to guess spoken phrases.
Padding messages is a way to make it harder to do traffic analysis. Normally, a number of random bits are appended to the end of the message, with an indication at the end of how much random data was added. The amount of padding should have a minimum value of 0, a maximum value of N, and an even distribution between the two extremes. Note that increasing the minimum of 0 does not help; only increasing N helps, though that also means a lower percentage of the channel will be used to transmit real data. Also note that, since the cryptographic routine is assumed to be uncrackable (otherwise the padding length itself is crackable), it does not help to put the padding anywhere else, e.g. at the beginning, in the middle, or in a sporadic manner.
The other answers are all techniques used to do Penetration Testing.
References:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, pages 233, 238. and https://secure.wikimedia.org/wikipedia/en/wiki/Padding_%28cryptography%29#Traffic_analysis
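
To make the padding countermeasure concrete, here is a minimal Python sketch that appends a random amount of filler (0 to N bytes) plus a one-byte length field, so the transmitted length no longer reveals the true message length. It is illustrative only, not a hardened padding scheme.

```python
# Minimal sketch of message padding against traffic analysis: append a random
# amount of filler (0..MAX_PAD bytes) plus a one-byte length field, so the
# ciphertext length no longer reveals the true message length.
import os

MAX_PAD = 255

def pad(message: bytes) -> bytes:
    n = os.urandom(1)[0] % (MAX_PAD + 1)     # random pad length, 0..MAX_PAD
    return message + os.urandom(n) + bytes([n])

def unpad(padded: bytes) -> bytes:
    n = padded[-1]
    return padded[:-(n + 1)]

msg = b"attack at dawn"
assert unpad(pad(msg)) == msg
```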

Question 10

Which of the following is NOT true concerning Application Control?

A. It limits end users' use of applications in such a way that only particular screens are visible.

B. Only specific records can be requested through the application controls

C. Particular usage of the application can be recorded for audit purposes

D. It is non-transparent to the endpoint applications so changes are needed to the applications and databases involved

 


Suggested Answer: D

Source: TIPTON, Harold F. & KRAUSE, MICKI, Information Security Management Handbook, 4th Edition, Volume 2, Auerbach.

Question 11

Which of the following is an example of an active attack?

A. Traffic analysis

B. Scanning

C. Eavesdropping

D. Wiretapping

 


Suggested Answer: B

Community Answer: B

Scanning is definitively a very active attack. The attacker will make use of a scanner to perform the attack; the scanner will send a very large quantity of packets to the target in order to elicit responses that allow the attacker to find information about the operating system, vulnerabilities, misconfigurations, and more. The packets being sent are sometimes attempting to identify whether a known vulnerability exists on the remote hosts.
A passive attack is usually done in the footprinting phase of an attack. While doing your passive reconnaissance you never send a single packet to the destination target. You gather information from public databases such as DNS servers, public information through search engines, financial information from finance web sites, and technical information from mailing list archives or job postings, for example.
An attack can be active or passive.
An “active attack” attempts to alter system resources or affect their operation.
A “passive attack” attempts to learn or make use of information from the system but does not affect system resources. (E.g., see: wiretapping.)
The following are all incorrect answers because they are all passive attacks:
Traffic Analysis – Is the process of intercepting and examining messages in order to deduce information from patterns in communication. It can be performed even when the messages are encrypted and cannot be decrypted. In general, the greater the number of messages observed, or even intercepted and stored, the more can be inferred from the traffic. Traffic analysis can be performed in the context of military intelligence or counter-intelligence, and is a concern in computer security.
Eavesdropping – Eavesdropping is another security risk posed to networks. Because of the way some networks are built, anything that gets sent out is broadcast to everyone. Under normal circumstances, only the computer that the data was meant for will process that information. However, hackers can set up programs on their computers called “sniffers” that capture all data being broadcast over the network. By carefully examining the data, hackers can often reconstruct real data that was never meant for them. Some of the most damaging things that get sniffed include passwords and credit card information.
In the cryptographic context, Eavesdropping and sniffing data as it passes over a network are considered passive attacks because the attacker is not affecting the protocol, algorithm, key, message, or any parts of the encryption system. Passive attacks are hard to detect, so in most cases methods are put in place to try to prevent them rather than to detect and stop them. Altering messages, modifying system files, and masquerading as another individual are acts that are considered active attacks because the attacker is actually doing something instead of sitting back and gathering data. Passive attacks are usually used to gain information prior to carrying out an active attack.”
Wiretapping – Wiretapping refers to listening in on electronic communications on telephones, computers, and other devices. Many governments use it as a law enforcement tool, and it is also used in fields like corporate espionage to gain access to privileged information. Depending on where in the world one is, wiretapping may be tightly controlled with laws that are designed to protect privacy rights, or it may be a widely accepted practice with little or no protections for citizens. Several advocacy organizations have been established to help civilians understand these laws in their areas, and to fight illegal wiretapping.
Reference(s) used for this question:
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, 6th Edition, Cryptography, Page 865 and http://en.wikipedia.org/wiki/Attack_%28computing%29 and http://www.wisegeek.com/what-is-wiretapping.htm and https://pangea.stanford.edu/computing/resources/network/security/risks.php and http://en.wikipedia.org/wiki/Traffic_analysis

Question 12

A timely review of system access audit records would be an example of which of the basic security functions?

A. avoidance.

B. deterrence.

C. prevention.

D. detection.

 


Suggested Answer: D

By reviewing system logs you can detect events that have occurred.
The following answers are incorrect:
Avoidance. This is incorrect; avoidance is a distractor. By reviewing system logs you have not avoided anything.
Deterrence. This is incorrect because system logs are a history of past events. You cannot deter something that has already occurred.
Prevention. This is incorrect because system logs are a history of past events. You cannot prevent something that has already occurred.
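
As a small illustration of detection through audit-record review, the sketch below counts failed log-ins per account in a list of access records and flags repeat offenders. The record format is made up for the example.

```python
# Minimal sketch of detection through audit-record review: count failed
# log-ins per account in a list of access records and flag repeat offenders.
# The record format here is hypothetical.
from collections import Counter

records = [
    "2024-05-01T09:00:01 alice LOGIN_SUCCESS",
    "2024-05-01T09:00:05 bob LOGIN_FAILURE",
    "2024-05-01T09:00:09 bob LOGIN_FAILURE",
    "2024-05-01T09:00:14 bob LOGIN_FAILURE",
]

failures = Counter(line.split()[1] for line in records if line.endswith("LOGIN_FAILURE"))
suspicious = [user for user, count in failures.items() if count >= 3]
print(suspicious)   # ['bob'] -- the review *detects* events that already occurred
```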

Question 13

Which access control model is also called Non Discretionary Access Control (NDAC)?

A. Lattice based access control

B. Mandatory access control

C. Role-based access control

D. Label-based access control

 


Suggested Answer: C

Community Answer: B

RBAC is sometimes also called non-discretionary access control (NDAC) (as Ferraiolo says "to distinguish it from the policy-based specifics of MAC"). Another model that fits within the NDAC category is Rule-Based Access Control (RuBAC or RBAC). Most of the CISSP books use the same acronym for both models, but NIST tends to use a lowercase "u" between the R and B to differentiate the two models.
You can certainly mimic MAC using RBAC but true MAC makes use of Labels which contains the sensitivity of the objects and the categories they belong to. No labels means MAC is not being used.
One of the most fundamental data access control decisions an organization must make is the amount of control it will give system and data owners to specify the level of access users of that data will have. In every organization there is a balancing point between the access controls enforced by organization and system policy and the ability for information owners to determine who can have access based on specific business requirements. The process of translating that balance into a workable access control model can be defined by three general access frameworks:
Discretionary access control –
Mandatory access control –
Nondiscretionary access control –
A role-based access control (RBAC) model bases the access control authorizations on the roles (or functions) that the user is assigned within an organization.
The determination of what roles have access to a resource can be governed by the owner of the data, as with DACs, or applied based on policy, as with MACs.
Access control decisions are based on job function, previously defined and governed by policy, and each role (job function) will have its own access capabilities.
Objects associated with a role will inherit privileges assigned to that role. This is also true for groups of users, allowing administrators to simplify access control strategies by assigning users to groups and groups to roles.
There are several approaches to RBAC. As with many system controls, there are variations on how they can be applied within a computer system.
There are four basic RBAC architectures:
1. Non-RBAC: Non-RBAC is simply a user-granted access to data or an application by traditional mapping, such as with ACLs. There are no formal “roles” associated with the mappings, other than any identified by the particular user.
2. Limited RBAC: Limited RBAC is achieved when users are mapped to roles within a single application rather than through an organization-wide role structure.
Users in a limited RBAC system are also able to access non-RBAC-based applications or data. For example, a user may be assigned to multiple roles within several applications and, in addition, have direct access to another application or system independent of his or her assigned role. The key attribute of limited
RBAC is that the role for that user is defined within an application and not necessarily based on the user's organizational job function.
3. Hybrid RBAC: Hybrid RBAC introduces the use of a role that is applied to multiple applications or systems based on a user's specific role within the organization. That role is then applied to applications or systems that subscribe to the organization's role-based model. However, as the term "hybrid" suggests, there are instances where the subject may also be assigned to roles defined solely within specific applications, complementing (or, perhaps, contradicting) the larger, more encompassing organizational role used by other systems.
4. Full RBAC: Full RBAC systems are controlled by roles defined by the organization's policy and access control infrastructure and then applied to applications and systems across the enterprise. The applications, systems, and associated data apply permissions based on that enterprise definition, and not one defined by a specific application or system.
Be careful not to try to make MAC and DAC opposites of each other — they are two different access control strategies with RBAC being a third strategy that was defined later to address some of the limitations of MAC and DAC.
The other answers are not correct because:
Mandatory access control is incorrect because though it is by definition not discretionary, it is not called “non-discretionary access control.” MAC makes use of label to indicate the sensitivity of the object and it also makes use of categories to implement the need to know.
Label-based access control is incorrect because this is not a name for a type of access control but simply a bogus detractor.
Lattice based access control is not adequate either. A lattice is a series of levels and a subject will be granted an upper and lower bound within the series of levels. These levels could be sensitivity levels or they could be confidentiality levels or they could be integrity levels.
Reference(s) used for this question:
All in One, third edition, page 165.
Ferraiolo, D., Kuhn, D. & Chandramouli, R. (2003). Role-Based Access Control, p. 18.
Ferraiolo, D., Kuhn, D. (1992). Role-Based Access Controls. http://csrc.nist.gov/rbac/Role_Based_Access_Control-1992.html
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Access Control ((ISC)2 Press) (Kindle Locations 1557-1584). Auerbach
Publications. Kindle Edition.
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Access Control ((ISC)2 Press) (Kindle Locations 1474-1477). Auerbach
Publications. Kindle Edition.
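
A minimal Python sketch of the role-based model described above: permissions attach to roles defined by policy, users are mapped to roles, and the access decision never consults an individual user's discretion. The role and permission names are illustrative.

```python
# Minimal sketch of RBAC: permissions are attached to roles defined by policy,
# users are mapped to roles, and the access decision never consults an
# individual user's discretion. Role and permission names are illustrative.

ROLE_PERMISSIONS = {
    "nurse":     {"read:chart"},
    "physician": {"read:chart", "write:prescription"},
}

USER_ROLES = {
    "dana": {"nurse"},
    "lee":  {"physician"},
}

def is_authorized(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_authorized("lee", "write:prescription")
assert not is_authorized("dana", "write:prescription")
```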

Question 14

A central authority determines what subjects can have access to certain objects based on the organizational security policy is called:

A. Mandatory Access Control

B. Discretionary Access Control

C. Non-Discretionary Access Control

D. Rule-based Access control

 


Suggested Answer: C

Community Answer: A

A central authority determines what subjects can have access to certain objects based on the organizational security policy.
The key focal point of this question is the ‘central authority’ that determines access rights.
Cecilia, one of the quiz users, has sent me feedback informing me that NIST defines MAC as follows: "MAC Policy means that Access Control Policy Decisions are made by a CENTRAL AUTHORITY," which seems to indicate there could be two good answers to this question.
However, if you read the NISTIR document mentioned in the references below, it is also mentioned that MAC is the most-mentioned NDAC policy. So MAC is a form of NDAC policy.
Within the same document it is also mentioned: “In general, all access control policies other than DAC are grouped in the category of non- discretionary access control (NDAC). As the name implies, policies in this category have rules that are not established at the discretion of the user. Non-discretionary policies establish controls that cannot be changed by users, but only through administrative action.”
Under NDAC you have two choices:
Rule-Based Access Control and Role-Based Access Control.
MAC is implemented using RULES, which makes it fall under Rule-Based Access Control (RuBAC), which is a form of NDAC; it is therefore a subset of NDAC.
This question is representative of what you can expect on the real exam, where more than one choice seems to be right. However, you have to look closely at whether one of the choices is higher level or whether one choice falls under another. In this case NDAC is the better choice because MAC falls under NDAC through the use of Rule-Based Access Control.
The following are incorrect answers:
MANDATORY ACCESS CONTROL –
In Mandatory Access Control, the labels of the object and the clearance of the subject determine access rights, not a central authority. Although a central authority (better known as the Data Owner) assigns the label to the object, the system determines access rights automatically by comparing the object label with the subject clearance. The subject clearance MUST dominate (be equal to or higher than) the classification of the object being accessed.
The need for a MAC mechanism arises when the security policy of a system dictates that:
1. Protection decisions must not be decided by the object owner.
2. The system must enforce the protection decisions (i.e., the system enforces the security policy over the wishes or intentions of the object owner).
Usually a labeling mechanism and a set of interfaces are used to determine access based on the MAC policy; for example, a user who is running a process at the
Secret classification should not be allowed to read a file with a label of Top Secret. This is known as the “simple security rule,” or “no read up.”
Conversely, a user who is running a process with a label of Secret should not be allowed to write to a file with a label of Confidential. This rule is called the "*-property" (pronounced "star property") or "no write down." The *-property is required to maintain system security in an automated environment.
DISCRETIONARY ACCESS CONTROL –
In Discretionary Access Control the rights are determined by many different entities, each of the persons who have created files and they are the owner of that file, not one central authority.
DAC leaves a certain amount of access control to the discretion of the object’s owner or anyone else who is authorized to control the object’s access. For example, it is generally used to limit a user’s access to a file; it is the owner of the file who controls other users’ accesses to the file. Only those users specified by the owner may have some combination of read, write, execute, and other permissions to the file.
DAC policy tends to be very flexible and is widely used in the commercial and government sectors. However, DAC is known to be inherently weak for two reasons:
First, granting read access is transitive; for example, when Ann grants Bob read access to a file, nothing stops Bob from copying the contents of Ann's file to an object that Bob controls. Bob may now grant any other user access to the copy of Ann's file without Ann's knowledge.
Second, DAC policy is vulnerable to Trojan horse attacks. Because programs inherit the identity of the invoking user, Bob may, for example, write a program for
Ann that, on the surface, performs some useful function, while at the same time destroying the contents of Ann's files. When investigating the problem, the audit files would indicate that Ann destroyed her own files. Thus, formally, the drawbacks of DAC are as follows:
Discretionary Access Control (DAC) Information can be copied from one object to another; therefore, there is no real assurance on the flow of information in a system.
No restrictions apply to the usage of information when the user has received it.
The privileges for accessing objects are decided by the owner of the object, rather than through a system-wide policy that reflects the organization's security requirements.
ACLs and owner/group/other access control mechanisms are by far the most common mechanism for implementing DAC policies. Other mechanisms, even though not designed with DAC in mind, may have the capabilities to implement a DAC policy.
RULE BASED ACCESS CONTROL –
In Rule-based Access Control a central authority could in fact determine what subjects can have access when assigning the rules for access. However, the rules actually determine the access and so this is not the most correct answer.
RuBAC (as opposed to RBAC, role-based access control) allows users to access systems and information based on predetermined and configured rules. It is important to note that there is no commonly understood definition or formally defined standard for rule-based access control as there is for DAC, MAC, and RBAC.
“Rule-based access” is a generic term applied to systems that allow some form of organization-defined rules, and therefore rule-based access control encompasses a broad range of systems. RuBAC may in fact be combined with other models, particularly RBAC or DAC. A RuBAC system intercepts every access request and compares the rules with the rights of the user to make an access decision. Most of the rule-based access control relies on a security label system, which dynamically composes a set of rules defined by a security policy. Security labels are attached to all objects, including files, directories, and devices.
Sometimes roles are assigned to subjects (based on their attributes) as well. RuBAC meets the business needs as well as the technical needs of controlling service access. It allows business rules to be applied to access control; for example, customers who have overdue balances may be denied service access. As a mechanism for MAC, rules of RuBAC cannot be changed by users. The rules can be established by any attributes of a system related to the users, such as domain, host, protocol, network, or IP addresses. For example, suppose that a user wants to access an object in another network on the other side of a router.
The router employs RuBAC with the rule composed by the network addresses, domain, and protocol to decide whether or not the user can be granted access. If employees change their roles within the organization, their existing authentication credentials remain in effect and do not need to be reconfigured. Using rules in conjunction with roles adds greater flexibility because rules can be applied to people as well as to devices. Rule-based access control can be combined with role-based access control, such that the role of a user is one of the attributes in rule setting. Some provisions of access control systems have rule-based policy engines in addition to a role-based policy engine and certain implemented dynamic policies [Des03]. For example, suppose that two of the primary types of software users are product engineers and quality engineers. Both groups usually have access to the same data, but they have different roles to perform in relation to the data and the application's function. In addition, individuals within each group have different job responsibilities that may be identified using several types of attributes such as developing programs and testing areas. Thus, the access decisions can be made in real time by a scripted policy that regulates the access between the groups of product engineers and quality engineers, and each individual within these groups. Rules can either replace or complement role-based access control. However, the creation of rules and security policies is also a complex process, so each organization will need to strike the appropriate balance.
References used for this question:
http://csrc.nist.gov/publications/nistir/7316/NISTIR-7316.pdf and
AIO v3 p162-167 and OIG (2007) p.186-191
also
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 33.

Question 15

At which layer of ISO/OSI does the fiber optics work?

A. Network layer

B. Transport layer

C. Data link layer

D. Physical layer

 


Suggested Answer: D. The Physical layer is responsible for the transmission of data through the physical medium, which includes such things as cables.

Community Answer: D

Fiber optics is a cabling mechanism that works at the Physical layer of the OSI model.
All of the other answers are incorrect.
The following reference(s) were/was used to create this question:
Shon Harris all in one – Chapter 7 (Cabling)

Question 16

Which of the following statements pertaining to software testing approaches is correct?

A. A bottom-up approach allows interface errors to be detected earlier.

B. A top-down approach allows errors in critical modules to be detected earlier.

C. The test plan and results should be retained as part of the system’s permanent documentation.

D. Black box testing is predicated on a close examination of procedural detail.

 


Suggested Answer: C

Community Answer: B

A bottom-up approach to testing begins testing of atomic units, such as programs or modules, and works upwards until a complete system testing has taken place.
It allows errors in critical modules to be found early. A top-down approach allows for early detection of interface errors and raises confidence in the system, as programmers and users actually see a working system. White box testing is predicated on a close examination of procedural detail. Black box testing examines some aspect of the system with little regard for the internal logical structure of the software.
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, Chapter 6: Business Application System
Development, Acquisition, Implementation and Maintenance (page 300).
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
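
To illustrate the top-down approach with stubs, here is a minimal Python sketch in which the top-level component is tested first while the lower-level pricing module is simulated by a stub. The function names are hypothetical.

```python
# Minimal sketch of top-down integration testing: the top-level component is
# exercised first, with the lower-level component replaced by a stub until it
# is ready. Function names are illustrative.

def price_lookup_stub(item_id: str) -> float:
    """Stub standing in for the not-yet-integrated pricing module."""
    return 10.0                      # canned response

def checkout_total(items: list, price_lookup=price_lookup_stub) -> float:
    """Top-level component under test."""
    return sum(price_lookup(i) for i in items)

# Top-down test: the high-level logic is verified against the stub.
assert checkout_total(["a", "b", "c"]) == 30.0
```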

Question 17

What are the components of an object's sensitivity label?

A. A Classification Set and a single Compartment.

B. A single classification and a single compartment.

C. A Classification Set and user credentials.

D. A single classification and a Compartment Set.

 


Suggested Answer: D

A single classification and a Compartment Set are the two components of a sensitivity label.
The following are incorrect:
A Classification Set and a single Compartment. Is incorrect because the nomenclature "Classification Set" is incorrect; there is only one classification, and it is not a "single compartment" but a Compartment Set.
A single classification and a single compartment. Is incorrect because, while there is only one classification, it is not a "single compartment" but a Compartment Set.
A Classification Set and user credentials. Is incorrect because the nomenclature "Classification Set" is incorrect; there is only one classification, and it is not "user credentials" but a Compartment Set. The user would have their own sensitivity label.

Question 18

Which of the following usually provides reliable, real-time information without consuming network or host resources?

A. network-based IDS

B. host-based IDS

C. application-based IDS

D. firewall-based IDS

 


Suggested Answer: A

A network-based IDS usually provides reliable, real-time information without consuming network or host resources.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 48.

Question 19

Where parties do not have a shared secret and large quantities of sensitive information must be passed, the most efficient means of transferring information is to use Hybrid Encryption Methods. What does this mean?

A. Use of public key encryption to secure a secret key, and message encryption using the secret key.

B. Use of the recipient’s public key for encryption and decryption based on the recipient’s private key.

C. Use of software encryption assisted by a hardware encryption accelerator.

D. Use of elliptic curve encryption.

 


Suggested Answer: A

Community Answer: A

Public key encryption uses an asymmetric algorithm, while the use of a secret key is a symmetric algorithm. In a hybrid method, the asymmetric algorithm protects only the secret (session) key, and that secret key encrypts the bulk of the message.
The following answers are incorrect:
Use of the recipient's public key for encryption and decryption based on the recipient's private key. Is incorrect; this describes a purely asymmetric algorithm, not a hybrid method.
Use of software encryption assisted by a hardware encryption accelerator. Is incorrect; it is a distractor.
Use of elliptic curve encryption. Is incorrect; this would be a purely asymmetric algorithm.
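
A minimal sketch of the hybrid method, assuming the third-party Python `cryptography` package is installed: an AES session key encrypts the bulk message, and the recipient's RSA public key protects only that small session key.

```python
# Minimal sketch of hybrid encryption, assuming the third-party `cryptography`
# package is installed: an AES session key protects the bulk message, and the
# recipient's RSA public key protects only that small session key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (generated here for the demo).
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

# Sender side: symmetric encryption of the message, asymmetric wrap of the key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"large sensitive payload", None)
wrapped_key = recipient_public.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# Recipient side: unwrap the session key with the private key, then decrypt.
recovered_key = recipient_private.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"large sensitive payload"
```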

Question 20

What is one disadvantage of content-dependent protection of information?

A. It increases processing overhead.

B. It requires additional password entry.

C. It exposes the system to data locking.

D. It limits the user’s individual address space.

 


Suggested Answer: A

Community Answer: A

Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.

Question 21

Which of the following is NOT a transaction redundancy implementation?

A. on-site mirroring

B. Electronic Vaulting

C. Remote Journaling

D. Database Shadowing

 


Suggested Answer: A

Community Answer: B

Three concepts are used to create a level of fault tolerance and redundancy in transaction processing.
They are electronic vaulting, remote journaling, and database shadowing, which provide redundancy at the transaction level.
Electronic vaulting is accomplished by backing up system data over a network. The backup location is usually at a separate geographical location known as the vault site. Vaulting can be used as a mirror or a backup mechanism using the standard incremental or differential backup cycle. Changes to the host system are sent to the vault server in real-time when the backup method is implemented as a mirror. If vaulting updates are recorded in real-time, then it will be necessary to perform regular backups at the off-site location to provide recovery services due to inadvertent or malicious alterations to user or system data.
Journaling or Remote Journaling is another technique used by database management systems to provide redundancy for their transactions. When a transaction is completed, the database management system duplicates the journal entry at a remote location. The journal provides sufficient detail for the transaction to be replayed on the remote system. This provides for database recovery in the event that the database becomes corrupted or unavailable.
There are also additional redundancy options available within application and database software platforms. For example, database shadowing may be used where a database management system updates records in multiple locations. This technique updates an entire copy of the database at a remote location.
Reference used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 20403-20407). Auerbach
Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 20375-20377). Auerbach
Publications. Kindle Edition.
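
To make remote journaling concrete, here is a minimal Python sketch in which each completed transaction is appended to a local journal and duplicated to a remote journal that can later be replayed. The data structures are illustrative, not a real DBMS feature.

```python
# Minimal sketch of remote journaling: each completed transaction is appended
# to a local journal and duplicated to a remote journal, which can later be
# replayed to rebuild the database. Structures are illustrative.

local_journal, remote_journal = [], []

def commit(txn: dict) -> None:
    local_journal.append(txn)
    remote_journal.append(dict(txn))      # duplicate the entry off-site

def replay(journal: list) -> dict:
    """Rebuild account balances from journal entries."""
    balances = {}
    for txn in journal:
        balances[txn["account"]] = balances.get(txn["account"], 0) + txn["amount"]
    return balances

commit({"account": "A-100", "amount": 250})
commit({"account": "A-100", "amount": -40})
assert replay(remote_journal) == {"A-100": 210}   # recoverable from the remote copy
```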

Question 22

What is the main focus of the Bell-LaPadula security model?

A. Accountability

B. Integrity

C. Confidentiality

D. Availability

 


Suggested Answer: C

Community Answer: C

The Bell-LaPadula model is a formal model dealing with confidentiality.
The Bell-LaPadula Model (abbreviated BLP) is a state machine model used for enforcing access control in government and military applications. It was developed by David Elliott Bell and Leonard J. LaPadula, subsequent to strong guidance from Roger R. Schell, to formalize the U.S. Department of Defense (DoD) multilevel security (MLS) policy. The model is a formal state transition model of computer security policy that describes a set of access control rules which use security labels on objects and clearances for subjects. Security labels range from the most sensitive (e.g., "Top Secret") down to the least sensitive (e.g., "Unclassified" or "Public").
The Bell-LaPadula model focuses on data confidentiality and controlled access to classified information, in contrast to the Biba Integrity Model, which describes rules for the protection of data integrity. In this formal model, the entities in an information system are divided into subjects and objects.
The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell-LaPadula model is built on the concept of a state machine with a set of allowable states in a computer network system. The transition from one state to another state is defined by transition functions.
A system state is defined to be “secure” if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level) to determine if the subject is authorized for the specific access mode.
The clearance/classification scheme is expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and one discretionary access control (DAC) rule with three security properties:
The Simple Security Property – a subject at a given security level may not read an object at a higher security level (no read-up).
The *-property (read "star"-property) – a subject at a given security level must not write to any object at a lower security level (no write-down). The *-property is also known as the Confinement property.
The Discretionary Security Property – use of an access matrix to specify the discretionary access control.
The following are incorrect answers:
Accountability is incorrect. Accountability requires that actions be traceable to the user that performed them and is not addressed by the Bell-LaPadula model.
Integrity is incorrect. Integrity is addressed in the Biba model rather than Bell-LaPadula.
Availability is incorrect. Availability is concerned with assuring that data/services are available to authorized users as specified in service level objectives and is not addressed by the Bell-LaPadula model.
References:
CBK, pp. 325-326
AIO3, pp. 279-284
AIOv4 Security Architecture and Design (pages 333 – 336)
AIOv5 Security Architecture and Design (pages 336 – 338)
Wikipedia at https://en.wikipedia.org/wiki/Bell-La_Padula_model
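
The two mandatory access control rules above can be expressed in a few lines. The following Python sketch uses a simple ordered set of sensitivity levels (compartments omitted for brevity) and is illustrative only.

```python
# Minimal sketch of the two Bell-LaPadula MAC rules over a simple ordered
# lattice of sensitivity levels (compartments omitted for brevity).

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance: str, object_label: str) -> bool:
    """*-property: no write down."""
    return LEVELS[subject_clearance] <= LEVELS[object_label]

assert not can_read("Secret", "Top Secret")    # read up denied
assert can_read("Secret", "Confidential")
assert not can_write("Secret", "Confidential") # write down denied
assert can_write("Secret", "Top Secret")
```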

Question 23

What is called the percentage of valid subjects that are falsely rejected by a Biometric Authentication system?

A. False Rejection Rate (FRR) or Type I Error

B. False Acceptance Rate (FAR) or Type II Error

C. Crossover Error Rate (CER)

D. True Rejection Rate (TRR) or Type III Error

 


Suggested Answer: A

Community Answer: A

The percentage of valid subjects that are falsely rejected is called the False Rejection Rate (FRR) or Type I Error.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 38.

Question 24

What does the simple security (ss) property mean in the Bell-LaPadula model?

A. No read up

B. No write down

C. No read down

D. No write up

 


Suggested Answer: A

Community Answer: A

The ss (simple security) property of the Bell-LaPadula access control model states that reading of information by a subject at a lower sensitivity level from an object at a higher sensitivity level is not permitted (no read up).
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 5:
Security Architectures and Models (page 202).

Question 25

Which of the following biometric devices offers the LOWEST CER?

A. Keystroke dynamics

B. Voice verification

C. Iris scan

D. Fingerprint

 


Suggested Answer: C

From most effective (lowest CER) to least effective (highest CER) are:
Iris scan, fingerprint, voice verification, keystroke dynamics.
Reference : Shon Harris Aio v3 , Chapter-4 : Access Control , Page : 131
Also see: http://www.sans.org/reading_room/whitepapers/authentication/biometric-selection-body-parts-online_139
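
To see how FRR, FAR, and the crossover error rate relate, the sketch below sweeps a match-score threshold and finds the point where the two error rates are closest. The score lists are made-up sample data.

```python
# Minimal sketch of how FRR, FAR, and the crossover error rate (CER) relate:
# sweep a match-score threshold and find where the two error rates are closest.
# The score lists are made-up sample data.

genuine  = [0.91, 0.85, 0.78, 0.66, 0.95, 0.88]   # scores from valid subjects
impostor = [0.30, 0.45, 0.62, 0.20, 0.55, 0.70]   # scores from impostors

def frr(threshold):   # valid subjects falsely rejected (Type I)
    return sum(s < threshold for s in genuine) / len(genuine)

def far(threshold):   # impostors falsely accepted (Type II)
    return sum(s >= threshold for s in impostor) / len(impostor)

thresholds = [t / 100 for t in range(0, 101)]
cer_threshold = min(thresholds, key=lambda t: abs(frr(t) - far(t)))
print(cer_threshold, frr(cer_threshold), far(cer_threshold))
```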

Question 26

Whose role is it to assign classification level to information?

A. Security Administrator

B. User

C. Owner

D. Auditor

 


Suggested Answer: C

Community Answer: C

The Data/Information Owner is ultimately responsible for the protection of the data. It is the Data/Information Owner that decides upon the classifications of that data they are responsible for.
The data owner decides upon the classification of the data he is responsible for and alters that classification if the business need arises.
The following answers are incorrect:
Security Administrator. Is incorrect because this individual is responsible for ensuring that the access rights granted are correct and support the policies and directives that the Data/Information Owner defines.
User. Is incorrect because the user uses/accesses the data according to how the Data/Information Owner defined their access.
Auditor. Is incorrect because the Auditor is responsible for ensuring that the access levels are appropriate. The Auditor would verify that the Owner classified the data properly.
References:
CISSP All In One Third Edition, Shon Harris, Page 121

Question 27

What can be defined as a list of subjects along with their access rights that are authorized to access a specific object?

A. A capability table

B. An access control list

C. An access control matrix

D. A role-based matrix

 


Suggested Answer: B

Community Answer: C

“It [ACL] specifies a list of users [subjects] who are allowed access to each object” CBK, p. 188
A capability table is incorrect. "Capability tables are used to track, manage and apply controls based on the object and rights, or capabilities of a subject. For example, a table identifies the object, specifies access rights allowed for a subject, and permits access based on the user's possession of a capability (or ticket) for the object." CBK, pp. 191-192. The distinction that makes this an incorrect choice is that access is based on possession of a capability by the subject.
To put it another way, as noted in AIO3 on p. 169, "A capability table is different from an ACL because the subject is bound to the capability table, whereas the object is bound to the ACL."
An access control matrix is incorrect. The access control matrix is a way of describing the rules for an access control strategy. The matrix lists the users, groups and roles down the left side and the resources and functions across the top. The cells of the matrix can either indicate that access is allowed or indicate the type of access. CBK pp 317 – 318.
AIO3, p. 169 describes it as a table of subjects and objects specifying the access rights a certain subject possesses pertaining to specific objects.
In either case, the matrix is a way of analyzing the access control needed by a population of subjects to a population of objects. This access control can be applied using rules, ACL’s, capability tables, etc.
A role-based matrix is incorrect. Again, a matrix of roles vs objects could be used as a tool for thinking about the access control to be applied to a set of objects.
The results of the analysis could then be implemented using RBAC.
References:
CBK, Domain 2: Access Control.
AIO3, Chapter 4: Access Control
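
To make the distinction above concrete, here is a minimal Python sketch (hypothetical subjects, objects, and rights; plain dictionaries stand in for real data structures): the ACL is attached to each object and lists authorized subjects, whereas the capability table is attached to each subject and lists the objects it may access.

# ACL view: each OBJECT carries the list of subjects and their rights.
acl = {
    "payroll.db": {"alice": {"read"}, "bob": {"read", "write"}},
    "audit.log":  {"carol": {"read"}},
}

# Capability view: each SUBJECT carries the objects and rights it holds.
capabilities = {
    "alice": {"payroll.db": {"read"}},
    "bob":   {"payroll.db": {"read", "write"}},
    "carol": {"audit.log":  {"read"}},
}

def allowed_by_acl(subject, obj, right):
    return right in acl.get(obj, {}).get(subject, set())

def allowed_by_capability(subject, obj, right):
    return right in capabilities.get(subject, {}).get(obj, set())

print(allowed_by_acl("bob", "payroll.db", "write"))          # True
print(allowed_by_capability("carol", "payroll.db", "read"))  # False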

Question 28

Which of the following elements of telecommunications is not used in assuring confidentiality?

A. Network security protocols

B. Network authentication services

C. Data encryption services

D. Passwords

 


Suggested Answer: D

Passwords are one of multiple ways to authenticate (prove who you claim to be) an identity, which allows confidentiality controls to be enforced so that the identity can only access the information for which it is authorized. It is the authentication that assists in assuring confidentiality, not the passwords themselves.
“Network security protocols” is incorrect. Network security protocols are quite useful in assuring confidentiality in network communications.
“Network authentication services” is incorrect. Confidentiality is concerned with allowing only authorized users to access information. An important part of determining authorization is authenticating an identity and this service is supplied by network authentication services.
“Data encryption services” is incorrect. Data encryption services are quite useful in protecting the confidentiality of information.
Reference(s) used for this question:
Official ISC2 Guide to the CISSP CBK, pp. 407 – 520
AIO 3rd Edition, pp. 415 – 580

Question 29

Which of the following is true of two-factor authentication?

A. It uses the RSA public-key signature based on integers with large prime factors.

B. It requires two measurements of hand geometry.

C. It does not use single sign-on technology.

D. It relies on two independent proofs of identity.

 


Suggested Answer: D

Community Answer: D

Two-factor authentication relies on two independent proofs of identity, such as something the user has (e.g. a token card) and something the user knows (a password). Two-factor authentication may be used with single sign-on.
The following answers are incorrect: It requires two measurements of hand geometry. Measuring hand geometry twice does not yield two independent proofs.
It uses the RSA public-key signature based on integers with large prime factors. RSA encryption uses integers with exactly two prime factors, but the term “two-factor authentication” is not used in that context.
It does not use single sign-on technology. This is a detractor.
The following reference(s) were/was used to create this question:
Shon Harris AIO v.3 p.129 –
ISC2 OIG, 2007 p. 126
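
As an illustration of combining "something you know" with "something you have", the sketch below implements an RFC 6238-style time-based one-time password check using only the Python standard library; the shared secret, drift window, and calling code are hypothetical values chosen for the example, not part of any particular product.

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style OTP derived from a shared secret (the 'something you have' factor)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_two_factor(password_ok: bool, secret_b32: str, submitted_otp: str) -> bool:
    """Both independent proofs must pass: the password check and the OTP check."""
    now = int(time.time())
    otp_ok = any(hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted_otp)
                 for drift in (-1, 0, 1))   # tolerate small clock drift
    return password_ok and otp_ok

secret = base64.b32encode(b"example-shared-key").decode()   # hypothetical enrolment secret
print(verify_two_factor(True, secret, totp(secret, int(time.time()))))  # True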

Question 30

Which of the following is the act of performing tests and evaluations to test a system's security level to see if it complies with the design specifications and security requirements?

A. Validation

B. Verification

C. Assessment

D. Accuracy

 


Suggested Answer: B

Community Answer: C

Verification vs. Validation:
Verification determines if the product accurately represents and meets the specifications. A product can be developed that does not match the original specifications. This step ensures that the specifications are properly met.
Validation determines if the product provides the necessary solution for the intended real-world problem. In large projects, it is easy to lose sight of the overall goal. This exercise ensures that the main goal of the project is met.
From DITSCAP:
6.3.2. Phase 2, Verification. The Verification phase shall include activities to verify compliance of the system with previously agreed security requirements. For each life-cycle development activity, DoD Directive 5000.1 (reference (i)), there is a corresponding set of security activities, enclosure 3, that shall verify compliance with the security requirements and evaluate vulnerabilities.
6.3.3. Phase 3, Validation. The Validation phase shall include activities to evaluate the fully integrated system to validate system operation in a specified computing environment with an acceptable level of residual risk. Validation shall culminate in an approval to operate.
You must also be familiar with Verification and Validation for the purpose of the exam. A simple definition of Verification would be whether or not the developers followed the design specifications along with the security requirements. A simple definition of Validation would be whether or not the final product meets the end user's needs and can be used for a specific purpose.
Wikipedia has an informal description that is currently written as: Validation can be expressed by the query “Are you building the right thing?” and Verification by “Are you building it right?”
NOTE:
DITSCAP was replaced by DIACAP some time ago (2007). While DITSCAP had defined both a verification and a validation phase, the DIACAP only has a validation phase. It may not make a difference in the answer for the exam; however, DIACAP is the cornerstone policy of DOD C&A and IA efforts today. Be familiar with both terms just in case all of a sudden the exam becomes updated with the new term.
Reference(s) used for this question:
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 1106). McGraw-Hill. Kindle Edition. http://iase.disa.mil/ditscap/DITSCAP.html https://en.wikipedia.org/wiki/Verification_and_validation

Question 31

Which of the following security modes of operation involves the highest risk?

A. Compartmented Security Mode

B. Multilevel Security Mode

C. System-High Security Mode

D. Dedicated Security Mode

 


Suggested Answer: B

In multilevel mode, two or more classification levels of data exist, some people are not cleared for all the data on the system.
Risk is higher because sensitive data could be made available to someone not validated as being capable of maintaining secrecy of that data (i.e., not cleared for it).
In other security modes, all users have the necessary clearance for all data on the system.
Source: LaROSA, Jeanette (domain leader), Application and System Development Security CISSP Open Study Guide, version 3.0, January 2002.

Question 32

A potential problem related to the physical installation of the Iris Scanner in regards to the usage of the iris pattern within a biometric system is:

A. concern that the laser beam may cause eye damage

B. the iris pattern changes as a person grows older.

C. there is a relatively high rate of false accepts.

D. the optical unit must be positioned so that the sun does not shine into the aperture.

 


Suggested Answer: D

Because the optical unit utilizes a camera and infrared light to create the images, sunlight can interfere with the aperture, so the unit must not be positioned in direct light of any type. Although the subject does not need to have direct contact with the optical reader, direct light reaching the reader can still impact the image capture.
Iris recognition is a form of biometrics that is based on the uniqueness of a subject’s iris. A camera-like device records the patterns of the iris, creating what is known as the Iriscode.
It is the unique patterns of the iris that allow it to be one of the most accurate forms of biometric identification of an individual. Unlike other types of biometrics, the iris rarely changes over time. Fingerprints can change over time due to scarring and manual labor, voice patterns can change due to a variety of causes, and hand geometry can change as well. But barring surgery or an accident it is not usual for an iris to change. The subject has a high-resolution image taken of their iris and this is then converted to the Iriscode. The current standard for the Iriscode was developed by John Daugman. When the subject attempts to be authenticated, an infrared light is used to capture the iris image and this image is then compared to the Iriscode. If there is a match the subject’s identity is confirmed. The subject does not need to have direct contact with the optical reader, so it is a less invasive means of authentication than retinal scanning would be.
Reference(s) used for this question:
AIO, 3rd edition, Access Control, p 134.
AIO, 4th edition, Access Control, p 182.
Wikipedia – http://en.wikipedia.org/wiki/Iris_recognition
The following answers are incorrect:
concern that the laser beam may cause eye damage. The optical readers do not use lasers, so eye damage from a laser beam is not an issue.
the iris pattern changes as a person grows older. The question asked about the physical installation of the scanner, so this was not the best answer. If the question had been about long-term problems then it could have been the best choice. Recent research has shown that irises actually do change over time: http://www.nature.com/news/ageing-eyes-hinder-biometric-scans-1.10722
there is a relatively high rate of false accepts. Since the advent of the Iriscode there is a very low rate of false accepts; in fact the algorithm used has never had a false match. This all depends on the quality of the equipment used, but because of the uniqueness of the iris, even when comparing identical twins, iris patterns are unique.

Question 33

Which of the following is not one of the three goals of Integrity addressed by the Clark-Wilson model?

A. Prevention of the modification of information by unauthorized users.

B. Prevention of the unauthorized or unintentional modification of information by authorized users.

C. Preservation of the internal and external consistency.

D. Prevention of the modification of information by authorized users.

 


Suggested Answer: A

Community Answer: D

There is no need to prevent modification by authorized users. They are authorized and allowed to make the changes. On top of this, it is also NOT one of the goals of Integrity within Clark-Wilson.
As it turns out, the Biba model addresses only the first of the three integrity goals which is Prevention of the modification of information by unauthorized users.
Clark-Wilson addresses all three goals of integrity.
The Clark-Wilson model improves on Biba by focusing on integrity at the transaction level and addressing three major goals of integrity in a commercial environment. In addition to preventing changes by unauthorized subjects, Clark and Wilson realized that high-integrity systems would also have to prevent undesirable changes by authorized subjects and to ensure that the system continued to behave consistently. It also recognized that it would need to ensure that there is constant mediation between every subject and every object if such integrity was going to be maintained.
Integrity is addressed through the following three goals:
1. Prevention of the modification of information by unauthorized users.
2. Prevention of the unauthorized or unintentional modification of information by authorized users.
3. Preservation of the internal and external consistency.
The following reference(s) were used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 17689-17694). Auerbach
Publications. Kindle Edition.
and
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 31.

Question 34

What is called the verification that the user's claimed identity is valid and is usually implemented through a user password at log-on time?

A. Authentication

B. Identification

C. Integrity

D. Confidentiality

 


Suggested Answer: A

Community Answer: A

Authentication is verification that the user’s claimed identity is valid and is usually implemented through a user password at log-on time.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 36.
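
A minimal sketch of the distinction (hypothetical user store; PBKDF2 from the Python standard library stands in for whatever hashing a real system uses): identification is the user claiming an identity, and authentication is verifying that claim with the password supplied at log-on time.

import hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical user store populated at enrolment time.
salt = os.urandom(16)
users = {"alice": (salt, hash_password("correct horse battery staple", salt))}

def log_on(claimed_identity: str, password: str) -> bool:
    """Identification: look up the claimed identity. Authentication: verify the password."""
    record = users.get(claimed_identity)
    if record is None:
        return False                      # unknown identity
    stored_salt, stored_hash = record
    return hmac.compare_digest(hash_password(password, stored_salt), stored_hash)

print(log_on("alice", "correct horse battery staple"))  # True
print(log_on("alice", "wrong password"))                # False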

Question 35

How are memory cards and smart cards different?

A. Memory cards normally hold more memory than smart cards

B. Smart cards provide a two-factor authentication whereas memory cards don’t

C. Memory cards have no processing power

D. Only smart cards can be used for ATM cards

 


Suggested Answer: C

The main difference between memory cards and smart cards is their capacity to process information. A memory card holds information but cannot process information. A smart card holds information and has the necessary hardware and software to actually process that information.
A memory card holds a user's authentication information, so that this user needs only type in a user ID or PIN and present the memory card to the system. If the entered information and the stored information match and are approved by an authentication service, the user is successfully authenticated.
A common example of a memory card is a swipe card used to provide entry to a building. The user enters a PIN and swipes the memory card through a card reader. If this is the correct combination, the reader flashes green and the individual can open the door and enter the building.
Memory cards can also be used with computers, but they require a reader to process the information. The reader adds cost to the process, especially when one is needed for every computer. Additionally, the overhead of PIN and card generation adds additional overhead and complexity to the whole authentication process.
However, a memory card provides a more secure authentication method than using only a password because the attacker would need to obtain the card and know the correct PIN.
Administrators and management need to weigh the costs and benefits of a memory card implementation as well as the security needs of the organization to determine if it is the right authentication mechanism for their environment.
One of the most prevalent weaknesses of memory cards is that data stored on the card are not protected. Unencrypted data on the card (or stored on the magnetic strip) can be extracted or copied. Unlike a smart card, where security controls and logic are embedded in the integrated circuit, memory cards do not employ an inherent mechanism to protect the data from exposure.
Very little trust can be associated with confidentiality and integrity of information on the memory cards.
The following answers are incorrect:
“Smart cards provide two-factor authentication whereas memory cards don’t” is incorrect. This is not necessarily true. A memory card can be combined with a PIN or password to offer two-factor authentication, where something you have and something you know are used as the factors.
“Memory cards normally hold more memory than smart cards” is incorrect. While a memory card may or may not have more memory than a smart card, this is certainly not the best answer to the question.
“Only smart cards can be used for ATM cards” is incorrect. This depends on the decisions made by the particular institution and is not the best answer to the question.
Reference(s) used for this question:
Shon Harris, CISSP All In One, 6th edition , Access Control, Page 199 and also for people using the Kindle edition of the book you can look at Locations 4647-
4650.
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Access Control ((ISC)2 Press) (Kindle Locations 2124-2139). Auerbach
Publications. Kindle Edition.
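
The difference in processing power can be sketched as follows (purely illustrative Python classes, not a real card API): a memory card simply releases its stored data for the reader to check, while a smart card performs the cryptographic work on the card itself and never exposes its secret.

import hashlib, hmac, os

class MemoryCard:
    """Only stores data; the reader or host must do any checking itself."""
    def __init__(self, stored_pin_hash: bytes):
        self.stored_pin_hash = stored_pin_hash   # unprotected data on the card
    def read(self) -> bytes:
        return self.stored_pin_hash              # anyone with a reader can extract this

class SmartCard:
    """Has its own processor: the secret key never leaves the card."""
    def __init__(self, secret_key: bytes):
        self._secret_key = secret_key
    def sign_challenge(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret_key, challenge, hashlib.sha256).digest()

# Hypothetical challenge-response verification for the smart card.
key = os.urandom(32)
card = SmartCard(key)
challenge = os.urandom(16)
expected = hmac.new(key, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(card.sign_challenge(challenge), expected))  # True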

Question 36

Which of the following is most appropriate to notify an internal user that session monitoring is being conducted?

A. Logon Banners

B. Wall poster

C. Employee Handbook

D. Written agreement

 


Suggested Answer: D

Community Answer: D

This is a tricky question; the keyword in the question is internal users.
There are two possible answers based on how the question is presented, this question could either apply to internal users or ANY anonymous/external users.
Internal users should always have a written agreement first, then logon banners serve as a constant reminder.
Banners at log-on time should be used to notify external users of any monitoring that is being conducted. A good banner gives you better legal standing and makes it obvious that the user was warned about who should access the system and who is authorized or unauthorized; an unauthorized user is then fully aware that he is trespassing. For anonymous/external users, such as those logging into a web site, FTP server or even a mail server, the only notification mechanism is a logon banner.
References used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 50. and
Shon Harris, CISSP All-in-one, 5th edition, pg 873

Question 37

In the context of network enumeration by an outside attacker and possible Distributed Denial of Service (DDoS) attacks, which of the following firewall rules is not appropriate to protect an organization's internal network?

A. Allow echo reply outbound

B. Allow echo request outbound

C. Drop echo request inbound

D. Allow echo reply inbound

 


Suggested Answer: A

Community Answer: D

Echo replies outbound should be dropped, not allowed. There is no reason for any internet users to send ICMP echo requests to your internal hosts from the internet. If they wish to find out if a service is available, they can use a browser to connect to your web server or simply send an email if they wish to test your mail service.
Echo replies outbound could be used as part of the SMURF amplification attack, where someone sends ICMP echo requests to gateways' broadcast addresses in order to amplify the request by the number of hosts sitting behind the gateway.
By allowing inbound echo requests and outbound echo replies, it also becomes easier for attackers to learn about the internal network by performing a simple ping sweep. ICMP can also be used to find out which host has been up and running the longest, which would indicate which patches are missing on the host if a critical patch required a reboot.
ICMP can also be used for DDoS attacks, so you should strictly limit what type of ICMP traffic is allowed to flow through your firewall.
On top of all this, tools such as LOKI can be used as a client-server application to transfer files back and forth between the internet and some of your internal hosts. LOKI is a client/server program published in the online publication Phrack. This program is a working proof of concept to demonstrate that data can be transmitted somewhat secretly across a network by hiding it in traffic that normally does not contain payloads. The example code can tunnel the equivalent of a
Unix RCMD/RSH session in either ICMP echo request (ping) packets or UDP traffic to the DNS port. This is used as a back door into a Unix system after root access has been compromised. Presence of LOKI on a system is evidence that the system has been compromised in the past.
The outbound echo request and inbound echo reply allow internal users to verify connectivity with external hosts.
The following answers are incorrect:
Allow echo request outbound The outbound echo request and inbound echo reply allow internal users to verify connectivity with external hosts.
Drop echo request inbound There is no need for anyone on the internet to attempt pinging your internal hosts.
Allow echo reply inbound The outbound echo request and inbound echo reply allow internal users to verify connectivity with external hosts.
Reference(s) used for this question:
http://www.phrack.org/issues.html?issue=49&id=6
STREBE, Matthew and PERKINS, Charles, Firewalls 24seven, Sybex 2000, Chapter 10: The Perfect Firewall.
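
A minimal sketch of such a rule base (a hypothetical rule format evaluated first-match with a default deny, not the syntax of any particular firewall product): echo requests are allowed out and echo replies allowed back in, while inbound echo requests and outbound echo replies are dropped.

# Each rule: (direction, icmp_type, action). First match wins; anything else is dropped.
rules = [
    ("outbound", "echo-request", "allow"),   # internal users may ping external hosts
    ("inbound",  "echo-reply",   "allow"),   # ...and receive the replies
    ("inbound",  "echo-request", "drop"),    # no pinging of internal hosts from the internet
    ("outbound", "echo-reply",   "drop"),    # no replies that aid SMURF-style amplification
]

def filter_icmp(direction: str, icmp_type: str) -> str:
    for rule_dir, rule_type, action in rules:
        if (rule_dir, rule_type) == (direction, icmp_type):
            return action
    return "drop"   # default deny for any other ICMP traffic

print(filter_icmp("outbound", "echo-request"))  # allow
print(filter_icmp("inbound",  "echo-request"))  # drop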

Question 38

Which of the following statements about the differences between PPTP and L2TP is NOT true?

A. PPTP can run only on top of IP networks.

B. PPTP is an encryption protocol and L2TP is not.

C. L2TP works well with all firewalls and network devices that perform NAT.

D. L2TP supports AAA servers

 


Suggested Answer: C

Community Answer: C

L2TP is affected by packet header modification and cannot cope with firewalls and network devices that perform NAT.
“PPTP can run only on top of IP networks.” is correct as PPTP encapsulates datagrams into an IP packet, allowing PPTP to route many network protocols across an IP network.
“PPTP is an encryption protocol and L2TP is not.” is correct. When using PPTP, the PPP payload is encrypted with Microsoft Point-to-Point Encryption (MPPE) using MSCHAP or EAP-TLS.
“L2TP supports AAA servers” is correct as L2TP supports TACACS+ and RADIUS.
NOTE:
L2TP does work over NAT. It is possible to use a tunneled mode that wraps every packet into a UDP packet; port 4500 is used for this purpose. However, this is not true of PPTP, nor is it true that L2TP works well with all firewalls and NAT devices.
References:
All in One Third Edition page 545
Official Guide to the CISSP Exam page 124-126

Question 39

External consistency ensures that the data stored in the database is:

A. inconsistent with the real world.

B. remains consistent when sent from one system to another.

C. consistent with the logical world.

D. consistent with the real world.

 


Suggested Answer: D

External consistency ensures that the data stored in the database is consistent with the real world.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, page 33.

Question 40

Which of the following control pairings include: organizational policies and procedures, pre-employment background checks, strict hiring practices, employment agreements, employee termination procedures, vacation scheduling, labeling of sensitive materials, increased supervision, security awareness training, behavior awareness, and sign-up procedures to obtain access to information systems and networks?

A. Preventive/Administrative Pairing

B. Preventive/Technical Pairing

C. Preventive/Physical Pairing

D. Detective/Administrative Pairing

 


Suggested Answer: A

Community Answer: A

Preventive/Administrative Pairing: these mechanisms include organizational policies and procedures, pre-employment background checks, strict hiring practices, employment agreements, friendly and unfriendly employee termination procedures, vacation scheduling, labeling of sensitive materials, increased supervision, security awareness training, behavior awareness, and sign-up procedures to obtain access to information systems and networks.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 34.

Question 41

Smart cards are an example of which type of control?

A. Detective control

B. Administrative control

C. Technical control

D. Physical control

 


Suggested Answer: C

Community Answer: C

Logical or technical controls involve the restriction of access to systems and the protection of information. Smart cards and encryption are examples of these types of control.
Controls are put into place to reduce the risk an organization faces, and they come in three main flavors: administrative, technical, and physical. Administrative controls are commonly referred to as “soft controls” because they are more management-oriented. Examples of administrative controls are security documentation, risk management, personnel security, and training. Technical controls (also called logical controls) are software or hardware components, as in firewalls, IDS, encryption, identification and authentication mechanisms. And physical controls are items put into place to protect facility, personnel, and resources.
Examples of physical controls are security guards, locks, fencing, and lighting.
Many types of technical controls enable a user to access a system and the resources within that system. A technical control may be a username and password combination, a Kerberos implementation, biometrics, public key infrastructure (PKI), RADIUS, TACACS +, or authentication using a smart card through a reader connected to a system. These technologies verify the user is who he says he is by using different types of authentication methods. Once a user is properly authenticated, he can be authorized and allowed access to network resources.
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (p. 245). McGraw-Hill. Kindle Edition. and
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 2: Access control systems (page 32).

Question 42

In Rule-Based Access Control (RuBAC), access is determined by rules. Such rules would fit within what category of access control?

A. Discretionary Access Control (DAC)

B. Mandatory Access control (MAC)

C. Non-Discretionary Access Control (NDAC)

D. Lattice-based Access Control

 


Suggested Answer: C

Community Answer: C

Rule-based access control is a type of non-discretionary access control because this access is determined by rules and the subject does not decide what those rules will be, the rules are uniformly applied to ALL of the users or subjects.
In general, all access control policies other than DAC are grouped in the category of non-discretionary access control (NDAC). As the name implies, policies in this category have rules that are not established at the discretion of the user. Non-discretionary policies establish controls that cannot be changed by users, but only through administrative action.
Both Role Based Access Control (RBAC) and Rule Based Access Control (RuBAC) fall within Non Discretionary Access Control (NDAC). If it is not DAC or MAC then it is most likely NDAC.
IT IS NOT ALWAYS BLACK OR WHITE –
The different access control models are not totally exclusive of each other. MAC makes use of rules to be implemented. However, with MAC you have requirements above and beyond simple access rules: the subject must get formal approval from management, the subject must have the proper security clearance, and objects must have labels/sensitivity levels attached to them. If all of this is in place then you have
MAC.
BELOW YOU HAVE A DESCRIPTION OF THE DIFFERENT CATEGORIES:
MAC = Mandatory Access Control –
Under a mandatory access control environment, the system or security administrator will define what permissions subjects have on objects. The administrator does not dictate users' access but simply configures the proper level of access as dictated by the Data Owner.
The MAC system will look at the Security Clearance of the subject and compare it with the object sensitivity level or classification level. This is what is called the dominance relationship.
The subject must DOMINATE the object sensitivity level. Which means that the subject must have a security clearance equal or higher than the object he is attempting to access.
MAC also introduces the concept of labels. Every object will have a label attached to it indicating the classification of the object as well as categories that are used to impose the need-to-know (NTK) principle. Even though a user has a security clearance of Secret, it does not mean he would be able to access any Secret document within the system. He would be allowed to access only Secret documents for which he has a need to know and formal approval, and only objects where the user belongs to one of the categories attached to the object.
If there is no clearance and no labels then IT IS NOT Mandatory Access Control.
Many of the other models can mimic MAC but none of them have labels and a dominance relationship so they are NOT in the MAC category.
NISTR-7316 Says:
Usually a labeling mechanism and a set of interfaces are used to determine access based on the MAC policy; for example, a user who is running a process at the
Secret classification should not be allowed to read a file with a label of Top Secret. This is known as the “simple security rule,” or “no read up.” Conversely, a user who is running a process with a label of Secret should not be allowed to write to a file with a label of Confidential. This rule is called the “*-property” (pronounced
“star property”) or “no write down.” The *-property is required to maintain system security in an automated environment. A variation on this rule called the “strict *- property” requires that information can be written at, but not above, the subject’s clearance level. Multilevel security models such as the Bell-La Padula
Confidentiality and Biba Integrity models are used to formally specify this kind of MAC policy.
DAC = Discretionary Access Control
DAC is also known as: Identity Based access control system.
The owner of an object is define as the person who created the object. As such the owner has the discretion to grant access to other users on the network.
Access will be granted based solely on the identity of those users.
Such a system is good for a low level of security. One of the major problems is the fact that a user who has access to someone else's file can further share the file with other users without the knowledge or permission of the owner of the file. Very quickly this could become the wild wild west, as there is no control on the dissemination of the information.
RBAC = Role Based Access Control
RBAC is a form of Non-Discretionary access control.
Role Based access control usually maps directly with the different types of jobs performed by employees within a company.
For example there might be 5 security administrators within your company. Instead of creating each of their profiles one by one, you would simply create a role and assign the administrators to the role. Once an administrator has been assigned to a role, he will IMPLICITLY inherit the permissions of that role.
RBAC is a great tool for environments where there is a large rotation of employees on a daily basis, such as a very large help desk for example.
RBAC or RuBAC = Rule Based Access Control
RuBAC is a form of Non-Discretionary access control.
A good example of a Rule Based access control device would be a Firewall. A single set of rules is imposed to all users attempting to connect through the firewall.
NOTE FROM CLEMENT:
Lot of people tend to confuse MAC and Rule Based Access Control.
Mandatory Access Control must make use of LABELS. If there is only rules and no label, it cannot be Mandatory Access Control. This is why they call it Non
Discretionary Access control (NDAC).
There are even books out there that are WRONG on this subject. Books are sometimes opinionated and not strictly based on facts.
In MAC subjects must have clearance to access sensitive objects. Objects have labels that contain the classification to indicate the sensitivity of the object and the label also has categories to enforce the need to know.
Today the best example of rule based access control would be a firewall. All rules are imposed globally to any user attempting to connect through the device.
This is NOT the case with MAC.
I strongly recommend you read carefully the following document:
NISTIR-7316 at http://csrc.nist.gov/publications/nistir/7316/NISTIR-7316.pdf
It is one of the best Access Control Study document to prepare for the exam. Usually I tell people not to worry about the hundreds of NIST documents and other reference. This document is an exception. Take some time to read it.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 33. and
NISTIR-7316 at http://csrc.nist.gov/publications/nistir/7316/NISTIR-7316.pdf and
Conrad, Eric; Misenar, Seth; Feldman, Joshua (2012-09-01). CISSP Study Guide (Kindle Locations 651-652). Elsevier Science (reference). Kindle Edition.
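
A minimal sketch of the two ideas described above, side by side (hypothetical rules, labels, and categories): RuBAC applies one global rule set to every subject, much like a firewall, while MAC compares the subject's clearance and categories against the object's label before anything else is considered.

import ipaddress

# Rule-based access control: one global rule list applied uniformly to all subjects.
firewall_rules = [("10.0.0.0/8", 22, "allow"), ("0.0.0.0/0", 22, "deny")]

def rubac_permits(src_ip: str, dst_port: int) -> bool:
    for network, port, action in firewall_rules:          # first match wins
        if dst_port == port and ipaddress.ip_address(src_ip) in ipaddress.ip_network(network):
            return action == "allow"
    return False                                          # default deny

# Mandatory access control: labels plus a dominance check and need to know.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def mac_permits(subject_clearance, subject_categories, object_label, object_categories):
    """Subject must dominate the object: equal or higher level AND hold all of its categories."""
    return (LEVELS[subject_clearance] >= LEVELS[object_label]
            and set(object_categories) <= set(subject_categories))

print(rubac_permits("10.1.2.3", 22))                                  # True: matches the allow rule
print(mac_permits("Secret", {"CRYPTO"}, "Confidential", {"CRYPTO"}))  # True: clearance dominates
print(mac_permits("Secret", {"CRYPTO"}, "Secret", {"NUCLEAR"}))       # False: fails need to know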

Question 43

Secure Shell (SSH-2) supports authentication, compression, confidentiality, and integrity. SSH is commonly used as a secure alternative to all of the following protocols except:

A. telnet

B. rlogin

C. RSH

D. HTTPS

 


Suggested Answer: D

HTTPS is used for secure web transactions and is not commonly replaced by SSH.
Users often want to log on to a remote computer. Unfortunately, most early implementations to meet that need were designed for a trusted network. Protocols/programs such as TELNET, RSH, and rlogin transmit unencrypted over the network, which allows traffic to be easily intercepted. Secure Shell (SSH) was designed as an alternative to the above insecure protocols and allows users to securely access resources on remote computers over an encrypted tunnel. SSH's services include remote log-on, file transfer, and command execution. It also supports port forwarding, which redirects other protocols through an encrypted SSH tunnel. Many users protect less secure traffic of protocols, such as X Windows and VNC (virtual network computing), by forwarding them through an SSH tunnel.
The SSH tunnel protects the integrity of communication, preventing session hijacking and other man-in-the-middle attacks. Another advantage of SSH over its predecessors is that it supports strong authentication. There are several alternatives for SSH clients to authenticate to a SSH server, including passwords and digital certificates. Keep in mind that authenticating with a password is still a significant improvement over the other protocols because the password is transmitted encrypted.
The following were wrong answers:
telnet is an incorrect choice. SSH is commonly used as a more secure alternative to telnet. In fact, Telnet should no longer be used today. rlogin is an incorrect choice. SSH is commonly used as a more secure alternative to rlogin.
RSH is an incorrect choice. SSH is commonly used as a more secure alternative to RSH.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 7077-7088). Auerbach
Publications. Kindle Edition.
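
As an illustration of SSH replacing RSH and rlogin for remote command execution, here is a minimal sketch assuming the third-party paramiko package and a hypothetical host, account, and key file; authentication and the command output all travel inside the encrypted SSH channel.

import paramiko  # third-party SSH-2 implementation (assumed to be installed)

def run_remote_command(host: str, user: str, key_file: str, command: str) -> str:
    """Execute a command over an encrypted SSH session instead of RSH or rlogin."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
    client.connect(hostname=host, username=user, key_filename=key_file)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

# Hypothetical usage:
# print(run_remote_command("server.example.com", "admin", "/home/admin/.ssh/id_ed25519", "uname -a"))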

Question 44

Which TCSEC level is labeled Controlled Access Protection?

A. C1

B. C2

C. C3

D. B1

 


Suggested Answer: B

Community Answer: D

C2 is labeled Controlled Access Protection.
The TCSEC defines four divisions: D, C, B and A where division A has the highest security.
Each division represents a significant difference in the trust an individual or organization can place on the evaluated system. Additionally divisions C, B and A are broken into a series of hierarchical subdivisions called classes: C1, C2, B1, B2, B3 and A1.
Each division and class expands or modifies as indicated the requirements of the immediately prior division or class.
D: Minimal protection
Reserved for those systems that have been evaluated but that fail to meet the requirements for a higher division.
C: Discretionary protection
C1: Discretionary Security Protection
Identification and authentication
Separation of users and data
Discretionary Access Control (DAC) capable of enforcing access limitations on an individual basis
Required system documentation and user manuals
C2: Controlled Access Protection
More finely grained DAC
Individual accountability through login procedures
Audit trails
Object reuse
Resource isolation
B: Mandatory protection
B1: Labeled Security Protection
Informal statement of the security policy model
Data sensitivity labels
Mandatory Access Control (MAC) over selected subjects and objects
Label exportation capabilities
All discovered flaws must be removed or otherwise mitigated
Design specifications and verification
B2: Structured Protection
Security policy model clearly defined and formally documented
DAC and MAC enforcement extended to all subjects and objects
Covert storage channels are analyzed for occurrence and bandwidth
Carefully structured into protection-critical and non-protection-critical elements
Design and implementation enable more comprehensive testing and review
Authentication mechanisms are strengthened
Trusted facility management is provided with administrator and operator segregation
Strict configuration management controls are imposed
B3: Security Domains
Satisfies reference monitor requirements
Structured to exclude code not essential to security policy enforcement
Significant system engineering directed toward minimizing complexity
Security administrator role defined
Audit of security-relevant events
Automated imminent intrusion detection, notification, and response
Trusted system recovery procedures
Covert timing channels are analyzed for occurrence and bandwidth
An example of such a system is the XTS-300, a precursor to the XTS-400
A: Verified protection
A1: Verified Design
Functionally identical to B3
Formal design and verification techniques including a formal top-level specification
Formal management and distribution procedures
An example of such a system is Honeywell's Secure Communications Processor SCOMP, a precursor to the XTS-400
Beyond A1:
System Architecture demonstrates that the requirements of self-protection and completeness for reference monitors have been implemented in the Trusted Computing Base (TCB).
Security Testing automatically generates test cases from the formal top-level specification or formal lower-level specifications.
Formal Specification and Verification is where the TCB is verified down to the source code level, using formal verification methods where feasible.
Trusted Design Environment is where the TCB is designed in a trusted facility with only trusted (cleared) personnel.
The following are incorrect answers:
C1 is Discretionary Security Protection.
C3 does not exist; it is only a detractor.
B1 is called Labeled Security Protection.
Reference(s) used for this question:
HARE, Chris, Security management Practices CISSP Open Study Guide, version 1.0, april 1999. and
AIOv4 Security Architecture and Design (pages 357 – 361)
AIOv5 Security Architecture and Design (pages 358 – 362)

Question 45

How would an IP spoofing attack be best classified?

A. Session hijacking attack

B. Passive attack

C. Fragmentation attack

D. Sniffing attack

 


Suggested Answer: A

Community Answer: A

IP spoofing is used to convince a system that it is communicating with a known entity, which gives an intruder access. IP spoofing is a common component of session hijacking attacks.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3:
Telecommunications and Network Security (page 77).

Question 46

Which of the following is less likely to be included in the change control sub-phase of the maintenance phase of a software product?

A. Estimating the cost of the changes requested

B. Recreating and analyzing the problem

C. Determining the interface that is presented to the user

D. Establishing the priorities of requests

 


Suggested Answer: D

Community Answer: A

The change control sub-phase includes recreating and analyzing the problem, determining the interface that is presented to the user, and establishing the priorities of requests.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 7:
Applications and Systems Development (page 252).

Question 47

Which of the following binds a subject name to a public key value?

A. A public-key certificate

B. A public key infrastructure

C. A secret key infrastructure

D. A private key certificate

 


Suggested Answer: A

Community Answer: A

Remember the term Public-Key Certificate is synonymous with Digital Certificate or Identity certificate.
The certificate itself provides the binding but it is the certificate authority who will go through the Certificate Practice Statements (CPS) actually validating the bindings and vouch for the identity of the owner of the key within the certificate.
As explained in Wikipedia:
In cryptography, a public key certificate (also known as a digital certificate or identity certificate) is an electronic document which uses a digital signature to bind together a public key with identity information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual.
In a typical public key infrastructure (PKI) scheme, the signature will be that of a certificate authority (CA). In a web of trust scheme such as PGP or GPG, the signature is of either the user (a self-signed certificate) or other users (“endorsements”) obtained by getting people to sign each other's keys. In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.
RFC 2828 defines the certification authority (CA) as:
An entity that issues digital certificates (especially X.509 certificates) and vouches for the binding between the data items in a certificate.
An authority trusted by one or more users to create and assign certificates. Optionally, the certification authority may create the user’s keys.
X509 Certificate users depend on the validity of information provided by a certificate. Thus, a CA should be someone that certificate users trust, and usually holds an official position created and granted power by a government, a corporation, or some other organization. A CA is responsible for managing the life cycle of certificates and, depending on the type of certificate and the CPS that applies, may be responsible for the life cycle of key pairs associated with the certificates
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000. and http://en.wikipedia.org/wiki/Public_key_certificate
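
To see the binding in practice, the sketch below (assuming a recent version of the third-party cryptography package and a hypothetical certificate file) parses a PEM-encoded X.509 certificate and prints the subject name alongside its public key; the CA's signature over these fields is what vouches for the binding.

from cryptography import x509
from cryptography.hazmat.primitives import serialization

def show_binding(pem_bytes: bytes) -> None:
    """Print the subject name and public key that an X.509 certificate binds together."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    print("Subject:    ", cert.subject.rfc4514_string())
    print("Issuer (CA):", cert.issuer.rfc4514_string())
    print("Public key:\n", cert.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo).decode())

# Hypothetical usage with a certificate stored on disk:
# with open("server.crt", "rb") as f:
#     show_binding(f.read())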

Question 48

Which of the following is NOT a countermeasure to traffic analysis?

A. Padding messages.

B. Eavesdropping.

C. Sending noise.

D. Faraday Cage

 


Suggested Answer: B

Community Answer: B

Eavesdropping is not a countermeasure; it is a type of attack in which you collect traffic and attempt to see what is being sent between entities communicating with each other.
The following answers are incorrect:
Padding messages. Is incorrect because it is considered a countermeasure: messages are padded to a uniform size so that message lengths no longer reveal useful patterns to an observer.
Sending noise. Is incorrect because it is considered a countermeasure: non-informational (decoy) data elements are transmitted to disguise real traffic patterns.
Faraday Cage. Is incorrect because it is a tool used to prevent emanation of electromagnetic waves. It is a very effective tool to prevent traffic analysis of electromagnetic emanations.
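
A minimal sketch of the padding countermeasure (hypothetical uniform block size; a real system would encrypt the padded blob before sending it): every message is brought up to a fixed length so that message size no longer leaks information to a traffic analyst.

import os

BLOCK = 256  # hypothetical uniform message size in bytes

def pad_message(message: bytes, block: int = BLOCK) -> bytes:
    """Length-prefix the message and fill with random bytes up to a fixed size."""
    if len(message) > block - 4:
        raise ValueError("message too long for one padded block")
    header = len(message).to_bytes(4, "big")
    filler = os.urandom(block - 4 - len(message))
    return header + message + filler        # this uniform-size blob is what gets encrypted and sent

def unpad_message(padded: bytes) -> bytes:
    length = int.from_bytes(padded[:4], "big")
    return padded[4:4 + length]

msg = b"wire transfer approved"
assert unpad_message(pad_message(msg)) == msg
print(len(pad_message(msg)))   # always 256, regardless of the real message length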

Question 49

Which of the following statements pertaining to access control is false?

A. Users should only access data on a need-to-know basis.

B. If access is not explicitly denied, it should be implicitly allowed.

C. Access rights should be granted based on the level of trust a company has on a subject.

D. Roles can be an efficient way to assign rights to a type of user who performs certain tasks.

 


Suggested Answer: B

Community Answer: B

Access control mechanisms should default to no access to provide the necessary level of security and ensure that no security holes go unnoticed. If access is not explicitly allowed, it should be implicitly denied.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, Chapter 4: Access Control (page 143).
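
A minimal sketch of the default-deny principle described in the explanation above (hypothetical subjects, objects, and rights): access is granted only when an explicit entry allows it, and everything else falls through to denial.

# Explicitly granted rights only; anything absent is implicitly denied.
grants = {
    ("alice", "hr_records"): {"read"},
    ("bob",   "hr_records"): {"read", "write"},
}

def is_allowed(subject: str, obj: str, right: str) -> bool:
    """Default deny: no matching grant means no access."""
    return right in grants.get((subject, obj), set())

print(is_allowed("bob", "hr_records", "write"))   # True: explicitly granted
print(is_allowed("eve", "hr_records", "read"))    # False: implicitly denied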

Question 50

Which one of the following statements about the advantages and disadvantages of network-based intrusion detection systems is true?

A. Network-based IDSs are not vulnerable to attacks.

B. Network-based IDSs are well suited for modern switch-based networks.

C. Most network-based IDSs can automatically indicate whether or not an attack was successful.

D. The deployment of network-based IDSs has little impact upon an existing network.

 


Suggested Answer: D

Network-based IDSs are usually passive devices that listen on a network wire without interfering with the normal operation of a network. Thus, it is usually easy to retrofit a network to include network-based IDSs with minimal effort.
Network-based IDSs are not vulnerable to attacks is not true. Even though network-based IDSs can be made very secure against attack and even made invisible to many attackers, they still have to read the packets, and sometimes a well-crafted packet might exploit or kill the capture engine.
Network-based IDSs are well suited for modern switch-based networks is not true, as most switches do not provide universal monitoring ports and this limits the monitoring range of a network-based IDS sensor to a single host. Even when switches provide such monitoring ports, often the single port cannot mirror all traffic traversing the switch.
Most network-based IDSs can automatically indicate whether or not an attack was successful is not true, as most network-based IDSs cannot tell whether or not an attack was successful; they can only discern that an attack was initiated. This means that after a network-based IDS detects an attack, administrators must manually investigate each attacked host to determine whether it was indeed penetrated.
Reference:
NIST special publication 800-31 Intrusion Detection System pages 15-16
Official guide to the CISSP CBK. Pages 196 to 197

Access Full SSCP Dump Free

Looking for even more practice questions? Click here to access the complete SSCP Dump Free collection, offering hundreds of questions across all exam objectives.

We regularly update our content to ensure accuracy and relevance—so be sure to check back for new material.

Begin your certification journey today with our SSCP dump free questions — and get one step closer to exam success!
