
IAPP AIGP Artificial Intelligence Governance Professional Exam Practice Test

Page: 1 / 13
Total 132 questions

Artificial Intelligence Governance Professional Questions and Answers

Question 1

What is the primary purpose of conducting ethical red-teaming on an AI system?

Options:

A.

To improve the model's accuracy.

B.

To simulate model risk scenarios.

C.

To identify security vulnerabilities.

D.

To ensure compliance with applicable law.

Question 2

Retraining an LLM can be necessary for all of the following reasons EXCEPT?

Options:

A.

To minimize degradation in prediction accuracy due to changes in data.

B.

To adjust the model's hyperparameters for a specific use case.

C.

To account for new interpretations of the same data.

D.

To ensure interpretability of the model's predictions.

Question 3

According to the November 2023 White House Executive Order, which of the following best describes the guidance given to governmental agencies on the use of generative AI as a workplace tool?

Options:

A.

Limit access to specific uses of generative AI.

B.

Impose a general ban on the use of generative AI.

C.

Limit access of generative AI to engineers and developers.

D.

Impose a ban on the use of generative AI in agencies that protect national security.

Question 4

Which of the following is the least relevant consideration in assessing whether users should be given the right to opt out from an AI system?

Options:

A.

Feasibility.

B.

Risk to users.

C.

Industry practice.

D.

Cost of alternative mechanisms.

Question 5

The planning phase of the AI life cycle articulates all of the following EXCEPT the?

Options:

A.

Objective of the model.

B.

Approach to governance.

C.

Choice of the architecture.

D.

Context in which the model will operate.

Question 6

CASE STUDY

Please use the following to answer the next question:

A local police department in the United States procured an AI system to monitor and analyze social media feeds, online marketplaces and other sources of public information to detect evidence of illegal activities (e.g., sale of drugs or stolen goods). The AI system works by surveilling the public sites in order to identify individuals who are likely to have committed a crime. It cross-references the individuals against data maintained by law enforcement and then assigns a percentage score of the likelihood of criminal activity based on certain factors like previous criminal history, location, time, race and gender.

The police department retained a third-party consultant to assist in the procurement process, specifically to evaluate two finalists. Each of the vendors provided information about their system's accuracy rates, the diversity of their training data and how their system works. The consultant determined that the first vendor’s system has a higher accuracy rate and, based on this information, recommended this vendor to the police department.

The police department chose the first vendor and implemented its AI system. As part of the implementation, the department and consultant created a usage policy for the system, which includes training police officers on how the system works and how to incorporate it into their investigation process.

The police department has now been using the AI system for a year. An internal review has found that every time the system scored a likelihood of criminal activity at or above 90%, the police investigation subsequently confirmed that the individual had, in fact, committed a crime. Based on these results, the police department wants to forgo investigations for cases where the AI system gives a score of at least 90% and proceed directly with an arrest.

During the procurement process, what is the most likely reason that the third-party consultant asked each vendor for information about the diversity of their datasets?

Options:

A.

To comply with applicable law.

B.

To assess the fairness of the AI system.

C.

To evaluate the reliability of the AI system.

D.

To determine the explainability of the AI system.

Question 7

You are part of your organization’s ML engineering team and notice that the accuracy of a model that was recently deployed into production is deteriorating.

What is the best first step to address this?

Options:

A.

Replace the model with a previous version.

B.

Conduct champion/challenger testing.

C.

Perform an audit of the model.

D.

Run red-teaming exercises.
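
For context on option B (champion/challenger testing): a candidate model is scored alongside the current production model on the same recent data before any replacement decision is made. The sketch below is illustrative only; the dataset, estimators and metric are assumptions, not anything prescribed by the question.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset standing in for recent production traffic.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_recent, y_train, y_recent = train_test_split(X, y, test_size=0.3, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)         # current production model
challenger = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # candidate replacement

# Score both models on the same recent data; promote the challenger only
# if it consistently outperforms the champion.
print("champion  :", accuracy_score(y_recent, champion.predict(X_recent)))
print("challenger:", accuracy_score(y_recent, challenger.predict(X_recent)))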

Question 8

During the planning and design phases of the AI development life cycle, bias can be reduced by all of the following EXCEPT?

Options:

A.

Stakeholder involvement.

B.

Feature selection.

C.

Human oversight.

D.

Data collection.

Question 9

CASE STUDY

Please use the following to answer the next question:

A mid-size US healthcare network has decided to develop an AI solution to detect a type of cancer that is most likely to arise in adults. Specifically, the healthcare network intends to create a recognition algorithm that will perform an initial review of all imaging and then route records to a radiologist for secondary review pursuant to agreed-upon criteria (e.g., a confidence score below a threshold).

To date, the healthcare network has taken the following steps: defined its AI ethical principles; conducted discovery to identify the intended uses and success criteria for the system; established an AI governance committee; assembled a broad, cross-functional team with clear roles and responsibilities; and created policies and procedures to document standards, workflows, timelines and risk thresholds during the project.

The healthcare network intends to retain a cloud provider to host the solution and a consulting firm to help develop the algorithm using the healthcare network's existing data and de-identified data that is licensed from a large US clinical research partner.

Which of the following steps can best mitigate the possibility of discrimination prior to training and testing the AI solution?

Options:

A.

Procure more data from clinical research partners.

B.

Engage a third party to perform an audit.

C.

Perform an impact assessment.

D.

Create a bias bounty program.

Question 10

CASE STUDY

Please use the following to answer the next question:

A local police department in the United States procured an AI system to monitor and analyze social media feeds, online marketplaces and other sources of public information to detect evidence of illegal activities (e.g., sale of drugs or stolen goods). The AI system works by surveilling the public sites in order to identify individuals who are likely to have committed a crime. It cross-references the individuals against data maintained by law enforcement and then assigns a percentage score of the likelihood of criminal activity based on certain factors like previous criminal history, location, time, race and gender.

The police department retained a third-party consultant to assist in the procurement process, specifically to evaluate two finalists. Each of the vendors provided information about their system's accuracy rates, the diversity of their training data and how their system works. The consultant determined that the first vendor’s system has a higher accuracy rate and, based on this information, recommended this vendor to the police department.

The police department chose the first vendor and implemented its AI system. As part of the implementation, the department and consultant created a usage policy for the system, which includes training police officers on how the system works and how to incorporate it into their investigation process.

The police department has now been using the AI system for a year. An internal review has found that every time the system scored a likelihood of criminal activity at or above 90%, the police investigation subsequently confirmed that the individual had, in fact, committed a crime. Based on these results, the police department wants to forgo investigations for cases where the AI system gives a score of at least 90% and proceed directly with an arrest.

When notifying an accused perpetrator, what additional information should a police officer provide about the use of the Al system?

Options:

A.

Information about the accuracy of the AI system.

B.

Information about how the accused can oppose the charges.

C.

Information about the composition of the training data of the system.

D.

Information about how the individual was identified by the AI system.

Question 11

Which of the following use cases would be best served by a non-AI solution?

Options:

A.

A non-profit wants to develop a social media presence.

B.

An e-commerce provider wants to make personalized recommendations.

C.

A business analyst wants to forecast future cost overruns and underruns.

D.

A customer service agency wants to automate answers to common questions.

Question 12

A company has trained an ML model primarily using synthetic data, and now intends to use live personal data to test the model.

Which of the following is NOT a best practice to apply during the testing?

Options:

A.

The test data should be representative of the expected operational data.

B.

Testing should minimize human involvement to the extent practicable.

C.

The test data should be anonymized to the extent practicable.

D.

Testing should be performed specific to the intended uses.

Question 13

You are the chief privacy officer of a medical research company that would like to collect and use sensitive data about cancer patients, such as their names, addresses, race and ethnic origin, medical histories, insurance claims, pharmaceutical prescriptions, eating and drinking habits and physical activity.

The company will use this sensitive data to build an AI algorithm that will spot common attributes that will help predict if seemingly healthy people are more likely to get cancer. However, the company is unable to obtain consent from enough patients to collect the minimum data needed to train its model.

Which of the following solutions would most efficiently balance privacy concerns with the lack of available data during the testing phase?

Options:

A.

Deploy the current model and recalibrate it over time with more data.

B.

Extend the model to multi-modal ingestion with text and images.

C.

Utilize synthetic data to offset the lack of patient data.

D.

Refocus the algorithm to patients without cancer.
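
For context on option C: synthetic data stands in for scarce real records by sampling from distributions fitted to the data that is available. The sketch below is a minimal illustration; the features, values and per-feature Gaussian generator are assumptions chosen only to show the idea.

import numpy as np

rng = np.random.default_rng(0)

# A small, hypothetical sample of real patient measurements
# (e.g., systolic blood pressure and BMI); values are invented.
real = rng.normal(loc=[120.0, 27.5], scale=[15.0, 4.0], size=(40, 2))

# Fit simple per-feature statistics to the scarce real sample and draw
# synthetic records from them instead of collecting more patient data.
mu, sigma = real.mean(axis=0), real.std(axis=0)
synthetic = rng.normal(loc=mu, scale=sigma, size=(1000, 2))

print("real mean:     ", mu)
print("synthetic mean:", synthetic.mean(axis=0))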

Question 14

What is the technique to remove the effects of improperly used data from an ML system?

Options:

A.

Data cleansing.

B.

Model inversion.

C.

Data de-duplication.

D.

Model disgorgement.

Question 15

The White House Executive Order from November 2023 requires companies that develop dual-use foundation models to provide reports to the federal government about all of the following EXCEPT?

Options:

A.

Any current training or development of dual-use foundation models.

B.

The results of red-team testing of each dual-use foundation model.

C.

Any environmental impact study for each dual-use foundation model.

D.

The physical and cybersecurity protection measures of their dual-use foundation models.

Question 16

All of the following are reasons to deploy a challenger AI model in addition to a champion AI model EXCEPT to?

Options:

A.

Provide a framework to consider alternatives to the champion model.

B.

Automate real-time monitoring of the champion model.

C.

Perform testing on the champion model.

D.

Retrain the champion model.

Question 17

You are a privacy program manager at a large e-commerce company that uses an AI tool to deliver personalized product recommendations based on visitors' personal information that has been collected from the company website, the chatbot and public data the company has scraped from social media.

A user submits a data access request under an applicable U.S. state privacy law, specifically seeking a copy of their personal data, including information used to create their profile for product recommendations.

What is the most challenging aspect of managing this request?

Options:

A.

Some of the visitor's data is synthetic data that the company does not have to provide to the data subject.

B.

The data subject's data is structured data that can be searched, compiled and reviewed only by an automated tool.

C.

The data subject is not entitled to receive a copy of their data because some of it was scraped from public sources.

D.

Some of the data subject's data is unstructured data and you cannot untangle it from the other data, including information about other individuals.

Question 18

An artist has been using an AI tool to create digital art and would like to ensure that it has copyright protection in the United States.

Which of the following is most likely to enable the artist to receive copyright protection?

Options:

A.

Ensure the tool was trained using publicly available content.

B.

Obtain a representation from the AI provider on how the tool works.

C.

Provide a log of the prompts the artist used to generate the images.

D.

Update the images in a creative way to demonstrate that the work is the artist's own.

Question 19

You are an engineer who developed an AI-based ad recommendation tool.

Which of the following should be monitored to evaluate the tool’s effectiveness?

Options:

A.

Output data, to assess the delta between predicted and actual ad clicks.

B.

Algorithmic patterns, to show the model has a high degree of accuracy.

C.

Input data, to ensure the ads are reaching the target audience.

D.

GPU performance, to evaluate the tool's robustness.
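
For context on option A: evaluating effectiveness means comparing what the model predicted with what actually happened. The sketch below illustrates such a check; the predicted click-through values and observed clicks are invented for illustration only.

import numpy as np

# Hypothetical model outputs and observed outcomes for a batch of ads.
predicted_ctr = np.array([0.12, 0.30, 0.05, 0.22])  # predicted click probabilities
actual_clicks = np.array([0, 1, 0, 0])              # observed clicks (1) / no clicks (0)

# Mean absolute delta between prediction and outcome; tracking this over
# time shows whether the tool's recommendations remain effective.
delta = np.abs(predicted_ctr - actual_clicks).mean()
print(f"mean prediction/outcome delta: {delta:.3f}")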

Question 20

All of the following are included within the scope of post-deployment AI maintenance EXCEPT?

Options:

A.

Ensuring that all model components are subject to a control framework.

B.

Dedicating experts to continually monitor the model output.

C.

Evaluating the need for an audit under certain standards.

D.

Defining thresholds to conduct new impact assessments.

Question 21

Which of the following most encourages accountability over AI systems?

Options:

A.

Determining the business objective and success criteria for the AI project.

B.

Performing due diligence on third-party AI training and testing data.

C.

Defining the roles and responsibilities of AI stakeholders.

D.

Understanding AI legal and regulatory requirements.

Question 22

CASE STUDY

Please use the following to answer the next question:

Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.

In particular, GVC has learned that the teachers they employ used open source large language models (“LLMs”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.

GVC has started to investigate these practices and develop a process to monitor any use of generative AI, including by teachers and students, going forward.

What is the best reason for GVC to offer students the choice to utilize generative AI in limited, defined circumstances?

Options:

A.

To enable students to learn how to manage their time.

B.

To enable students to learn about performing research.

C.

To enable students to learn about practical applications of AI.

D.

To enable students to learn how to use AI as a supportive educational tool.

Question 23

Scenario:

A U.S.-based AI governance professional is evaluating resources from the National Institute of Standards and Technology (NIST) to guide the organization’s AI risk assessment strategy. They are particularly interested in programs focused on assessing AI-specific impacts.

The main purpose of NIST’s Assessing Risks and Impacts of AI (ARIA) program is to:

Options:

A.

Provide a suite of resources to manage risks

B.

Pilot new standards for AI red-teaming

C.

Promote interoperability across AI systems

D.

Offer a regulatory sandbox for risk reporting

Question 24

If it is possible to provide a rationale for a specific output of an AI system, that system can best be described as?

Options:

A.

Accountable.

B.

Transparent.

C.

Explainable.

D.

Reliable.

Question 25

Random forest algorithms are what type of machine learning model?

Options:

A.

Symbolic.

B.

Generative.

C.

Discriminative.

D.

Natural language processing.
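
For context: a random forest learns a decision boundary between classes, estimating P(label | features) rather than modelling how the data itself is generated, which is why it is grouped with discriminative models. The sketch below is illustrative; the dataset and parameters are assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labelled dataset; the forest learns to separate the classes.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The model outputs class labels and class probabilities for new inputs,
# i.e. it estimates P(label | features) rather than generating data.
print(clf.predict(X[:5]))
print(clf.predict_proba(X[:5]))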

Question 26

A US company has developed an AI system, CrimeBuster 9619, that collects information about incarcerated individuals to help parole boards predict whether someone is likely to commit another crime if released from prison.

When considering expanding to the EU market, this type of technology would?

Options:

A.

Require the company to register the tool with the EU database.

B.

Be subject to approval by the relevant EU authority.

C.

Require a detailed conformity assessment.

D.

Be banned under the EU AI Act.

Question 27

CASE STUDY

Please use the following to answer the next question:

ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.

ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general-purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data—including applications, policies, and claims—and proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.

ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women's loan applications due primarily to women historically receiving lower salaries than men.

Which of the following is the most important reason to train the underwriters on the model prior to deployment?

Options:

A.

To provide a reminder of the right to appeal.

B.

To solicit ongoing feedback on model performance.

C.

To apply their own judgment to the initial assessment.

D.

To ensure they provide transparency to applicants about the model.

Question 28

Scenario:

A global organization wants to align with international frameworks on AI governance. They are reviewing guidance from the OECD on how to incorporate broader governance tools into their AI program.

Codes of conduct and collective agreements are what type of assessment tools as defined by the Organization for Economic Cooperation and Development (OECD)?

Options:

A.

Educational

B.

Procedural

C.

Technical

D.

Analytic

Question 29

All of the following are common optimization techniques in deep learning to determine weights that represent the strength of the connection between artificial neurons EXCEPT?

Options:

A.

Gradient descent, which initially sets weights to arbitrary values and then changes them at each step.

B.

Momentum, which improves the convergence speed and stability of neural network training.

C.

Autoregression, which analyzes and makes predictions about time-series data.

D.

Backpropagation, which starts from the last layer and works backwards.
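
For context on options A, B and D: gradient descent starts from arbitrary weights and adjusts them at each step, momentum smooths those updates by accumulating past gradients, and backpropagation is how the gradients are obtained in a multi-layer network. Below is a minimal NumPy sketch of gradient descent with momentum on a toy quadratic loss; the loss, learning rate and momentum coefficient are assumptions for illustration.

import numpy as np

def loss_grad(w):
    # Gradient of the toy loss L(w) = ||w||^2 / 2.
    return w

w = np.array([5.0, -3.0])   # weights start at arbitrary values
velocity = np.zeros_like(w)
lr, beta = 0.1, 0.9         # learning rate and momentum coefficient (assumed)

for _ in range(100):
    g = loss_grad(w)
    velocity = beta * velocity - lr * g  # momentum accumulates past gradients
    w = w + velocity                     # weights are adjusted at each step

print("final weights:", w)  # approaches the minimum at the origin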

Question 30

Which of the following is a foundational characteristic of effective AI governance?

Options:

A.

Engagement of a cross-functional team

B.

Reliance on tested vendor management processes

C.

Thorough reviews of a company’s public filings with experts

D.

Uniform policies and procedures across developer, deployer and user roles

Question 31

What is the primary purpose of an AI impact assessment?

Options:

A.

To determine whether a conformity assessment is needed

B.

To escalate the findings to the appropriate owner(s)

C.

To identify and measure the benefits of an AI system

D.

To anticipate and manage the potential risks and harms of an AI system

Question 32

According to the EU AI Act, providers of what kind of machine learning systems will be required to register with an EU oversight agency before placing their systems in the EU market?

Options:

A.

AI systems that are harmful based on a legal risk-utility calculation.

B.

AI systems that are "strong" general intelligence.

C.

AI systems trained on sensitive personal data.

D.

AI systems that are high-risk.

Question 33

Under the Canadian Artificial Intelligence and Data Act, when must the Minister of Innovation, Science and Industry be notified about a high-impact AI system?

Options:

A.

When use of the system causes or is likely to cause material harm.

B.

When the algorithmic impact assessment has been completed.

C.

Upon release of a new version of the system.

D.

Upon initial deployment of the system.

Question 34

Scenario:

An organization wants to leverage its existing compliance structures to identify AI-specific risks as part of an ongoing data governance audit.

Which of the following compliance-related controls within an organization is most easily adapted to identify AI risks?

Options:

A.

Privacy training

B.

Penetration testing

C.

Transfer risk assessments

D.

Privacy impact assessments

Question 35

All of the following are penalties and enforcements outlined in the EU AI Act EXCEPT?

Options:

A.

Fines for SMEs and startups will be proportionally capped.

B.

Rules on General Purpose AI will apply after 6 months as a specific provision.

C.

The AI Pact will act as a transitional bridge until the Regulations are fully enacted.

D.

Fines for violations of banned AI applications will be €35 million or 7% of global annual turnover (whichever is higher).

Question 36

Scenario:

A distributor operating in the EU is responsible for selling imported high-risk AI systems to businesses. The distributor wants to ensure they fulfill all applicable obligations under the EU AI Act.

All of the following are obligations of a distributor of high-risk AI systems under the EU AI Act EXCEPT?

Options:

A.

Corrective actions

B.

Verification of CE marking

C.

Registration in EU Database

D.

Communication with national authorities

Question 37

An AI system that maintains its level of performance within defined acceptable limits despite real world or adversarial conditions would be described as?

Options:

A.

Robust.

B.

Reliable.

C.

Resilient.

D.

Reinforced.

Question 38

Machine learning is best described as a type of algorithm by which?

Options:

A.

Systems can mimic human intelligence with the goal of replacing humans.

B.

Systems can automatically improve from experience through predictive patterns.

C.

Statistical inferences are drawn from a sample with the goal of predicting human intelligence.

D.

Previously unknown properties are discovered in data and used to predict and make improvements in the data.
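
For context on option B: "improving from experience" can be seen by training the same learner on progressively more examples and watching held-out accuracy rise. The sketch below is illustrative only; the dataset, model choice and sample sizes are assumptions, and the exact numbers will vary.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset; accuracy figures depend entirely on the data chosen.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The same learner, trained on progressively more examples ("experience"),
# generally predicts better on held-out data.
for n in (50, 500, 4000):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, "examples ->", round(accuracy_score(y_test, model.predict(X_test)), 3))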

Question 39

CASE STUDY

A premier payroll services company that employs thousands of people globally is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company's product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.

It has become time consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.

To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. They have been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, as long as it would achieve its goals and comply with all applicable laws.

The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company deploy technology solutions into the organization’s operations in a responsible, cost-effective manner.

The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. Their concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.

The organization continues planning the adoption of an AI tool to support hiring, but is concerned about potential bias in content generated by AI systems and how that could affect public perception.

Which of the following measures should the company adopt to best mitigate its risk of reputational harm from using the AI tool?

Options:

A.

Test the AI tool pre- and post-deployment

B.

Ensure the vendor provides indemnification for the AI tool

C.

Require the procurement and deployment teams to agree upon the AI tool

D.

Continue to require the company’s hiring personnel to manually screen all applicants
