Leading and Managing AI – Sample Paper 2
This sample paper explores the practical challenges of leading and managing AI initiatives in modern organizations. It focuses on how executives, managers, and team leaders can align AI projects with strategy, manage risks, and build responsible governance. You can adapt the structure for coursework, internal training, or policy development. The paper typically covers leadership roles, stakeholder communication, ethical considerations, and change management, helping readers move from experimentation to sustainable, value‑driven AI adoption.
Use this sample as a guide to frame your own arguments, case studies, and recommendations. Emphasize clear decision rights, measurable outcomes, and cross‑functional collaboration between business, data, and IT teams.

A typical outline for Leading and Managing AI – Sample Paper 2 might include: an introduction to AI in organizational strategy, a review of leadership frameworks, and analysis of governance models. Subsequent sections can examine talent and skills, data and model lifecycle management, and ethical, legal, and social implications. Conclude with actionable recommendations, implementation roadmaps, and metrics for evaluating AI impact.
When writing, balance theory with real‑world examples, such as AI in customer service, operations, or decision support. Critically assess both benefits and limitations, and highlight how leaders can foster a culture of experimentation, transparency, and continuous learning around AI.

🔵 DOMAIN 1 – Identify Business Needs & Solutions
Q1
Your organization wants to "implement AI to improve operations." During initial discussions, stakeholders cannot clearly define what problem they want solved.
What should you do FIRST?
A. Select an AI pattern to explore
B. Develop a proof of concept
C. Facilitate structured problem framing and define measurable outcomes
D. Begin collecting historical data
Answer: C
Explanation:
AI initiatives must begin with a clearly articulated business problem tied to measurable outcomes (KPIs or ROI). Without this clarity, downstream phases such as data collection and model development risk misalignment and waste.
Why others are incorrect:
- A: AI pattern selection follows problem definition.
- B: A PoC without problem clarity increases risk.
- D: Data collection must align to defined objectives.
Q2
During AI Go/No-Go assessment, leadership is excited about the "cool factor" of deploying AI.
Which factor should carry the MOST weight?
A. Competitive marketing buzz
B. Technical sophistication
C. Clear business value and feasibility
D. Vendor partnerships
Answer: C
Explanation:
AI projects must be grounded in value realization and feasibility. The "cool factor" is not a strategic driver.
Q3
A project team proposes an AI solution for a process currently governed by deterministic rules that are stable and easy to automate.
What is the most appropriate recommendation?
A. Proceed with AI for scalability
B. Recommend traditional automation instead
C. Use reinforcement learning
D. Implement generative AI
Answer: B
Explanation:
AI is best suited for probabilistic, uncertain, or complex pattern-based problems. Deterministic rule-based processes are better served by automation.
Q4
Which of the following BEST describes a valid AI business objective?
A. Improve model F1-score to 0.92
B. Reduce fraud losses by 18% within 6 months
C. Train a neural network with 3 hidden layers
D. Increase training speed by 30%
Answer: B
Explanation:
Business objectives must reflect measurable business impact, not technical metrics.
Q5
An executive asks for an AI solution before confirming data availability.
What major risk is being introduced?
A. Overfitting
B. Scope creep
C. Data feasibility failure
D. Hyperparameter misalignment
Answer: C
Explanation:
AI feasibility depends heavily on available, usable data. Moving ahead without confirming data readiness threatens project viability.
Q6
During business evaluation, you determine the solution will impact customers across multiple countries.
What must be considered early?
A. Cloud storage cost
B. International AI regulations and data laws
C. Model architecture
D. Feature selection strategy
Answer: B
Explanation:
Regulatory and compliance risks must be assessed during early feasibility stages.
Q7
Your team is debating which AI pattern to use before defining ROI.
What is the correct sequence?
A. Choose algorithm → Define ROI
B. Select vendor → Define objective
C. Define objective → Determine ROI → Select AI pattern
D. Define AI pattern → Build prototype
Answer: C
Explanation:
Business objective drives ROI expectations, which then informs pattern and algorithm choice.
Q8
Which is the BEST example of predictive analytics?
A. Clustering customers by demographics
B. Predicting churn probability
C. Summarizing last quarter's sales
D. Data visualization dashboard
Answer: B
Explanation:
Predictive analytics uses past behavior to forecast future outcomes.
Q9
Your organization wants to expand AI internationally within a year. Which governance action is most appropriate?
A. Focus only on local laws
B. Ignore future expansion until launch
C. Align with both current and anticipated regulatory environments
D. Outsource compliance
Answer: C
Explanation:
Strategic expansion requires proactive regulatory planning.
Q10
If business alignment is weak but technical enthusiasm is high, the MOST likely outcome is:
A. Fast deployment
B. Poor adoption and ROI
C. Higher accuracy
D. Lower infrastructure cost
Answer: B
Explanation:
Adoption drives value realization. Misalignment reduces ROI.
Q11
Which scenario BEST justifies AI adoption?
A. Process is repetitive and rule-based
B. Problem involves probabilistic predictions with uncertainty
C. Data volume is small and static
D. Human judgment is not required
Answer: B
Q12
The AI Go/No-Go decision chart is primarily used to:
A. Select hyperparameters
B. Evaluate business and technical feasibility
C. Design deployment strategy
D. Tune algorithms
Answer: B
Q13
What is a common cause of AI project failure?
A. Overly complex architecture
B. Lack of GPU capacity
C. Undefined success criteria
D. Excessive data volume
Answer: C
Q14
If ROI cannot be quantified but strategic importance is high, you should:
A. Cancel the project
B. Build immediately
C. Define leading indicators of value
D. Ignore ROI
Answer: C
Q15
Which stakeholder is MOST critical during Phase I?
A. DevOps engineer
B. Data owner
C. UX designer
D. Legal advisor only
Answer: B
Q16
Shadow AI initiatives typically arise due to:
A. Strong governance
B. Centralized AI strategy
C. Lack of organizational AI framework
D. Excess compliance
Answer: C
Q17
Which is a valid AI feasibility concern?
A. Brand logo placement
B. Data privacy risk
C. Office location
D. Training duration preference
Answer: B
Q18
A PoC should primarily validate:
A. Final deployment pipeline
B. Marketing pitch
C. Feasibility and business potential
D. Documentation structure
Answer: C
Q19
Which metric should leadership care MOST about?
A. Accuracy
B. Recall
C. Revenue uplift
D. Epoch count
Answer: C
Q20
When should Trustworthy AI considerations begin?
A. After deployment
B. During model testing
C. At every phase
D. Only during compliance audit
Answer: C
🔵 DOMAIN 2 – Identify Data Needs
Q21
Before initiating model development, you are asked to validate whether the available data supports the defined business objective.
What is your FIRST step?
A. Select the algorithm
B. Conduct data readiness and gap assessment
C. Begin feature engineering
D. Deploy a proof of concept
Answer: B
Explanation:
Data readiness assessment evaluates availability, completeness, quality, compliance, and alignment with the objective. Model development must not begin before confirming data sufficiency.
Q22
A dataset contains 35% missing values in a critical feature.
What is the MOST appropriate action?
A. Ignore the feature
B. Immediately discard the dataset
C. Assess impact and determine imputation or alternative sourcing
D. Train the model and evaluate performance later
Answer: C
Explanation:
Missing data requires evaluation. Imputation, alternative sourcing, or scope adjustment may be necessary.
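The assessment step above can be sketched in a few lines. This is a minimal illustration, not a prescribed method; the sample values and the tolerance threshold are assumptions for demonstration only.

```python
# Sketch: quantify missingness in a feature, then mean-impute if tolerable.
# The sample values and the 0.4 tolerance are illustrative assumptions.
values = [12.0, None, 7.5, None, 9.0, 11.0, None, 8.5, 10.0, None]

missing_rate = sum(v is None for v in values) / len(values)

if missing_rate <= 0.4:  # assumed tolerance; real projects set this per feature
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    imputed = [mean if v is None else v for v in values]
else:
    imputed = None  # escalate: alternative sourcing or scope adjustment
```

The point is that the decision (impute vs. re-source vs. re-scope) is made from measured impact, not assumed.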
Q23
Which of the following BEST describes data lineage?
A. Tracking hyperparameter evolution
B. Monitoring model drift
C. Tracing data origin and transformations
D. Identifying bias in algorithms
Answer: C
Explanation:
Data lineage ensures traceability for compliance, governance, and debugging.
Q24
A team proposes collecting "as much data as possible" without defining relevance.
What risk does this introduce?
A. Overfitting
B. Scope creep in data engineering
C. Inefficient storage only
D. Better accuracy
Answer: B
Explanation:
Excessive irrelevant data increases complexity, cost, and noise without guaranteeing value.
Q25
Which issue MOST directly affects model fairness?
A. High GPU utilization
B. Imbalanced representation across demographics
C. Large dataset size
D. Cloud vendor lock-in
Answer: B
Explanation:
Demographic imbalance leads to biased outcomes.
Q26
A dataset used for marketing is being reused for credit scoring.
Which risk must be assessed?
A. Model drift
B. Purpose limitation compliance risk
C. Feature scaling
D. Hyperparameter misalignment
Answer: B
Explanation:
Data collected for one purpose cannot automatically be reused for another without compliance validation.
Q27
When splitting data into training, validation, and testing sets, the BEST practice is:
A. Alphabetical split
B. Randomized sampling
C. Time-based arbitrary split
D. Use same data across all sets
Answer: B
Explanation:
Random sampling helps ensure representative distribution and unbiased evaluation.
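A randomized 70/15/15 split can be sketched as follows; the record IDs, ratios, and seed are illustrative assumptions, and libraries such as scikit-learn offer equivalent utilities.

```python
import random

# Sketch: randomized 70/15/15 split into train/validation/test sets.
records = list(range(100))

rng = random.Random(42)  # fixed seed for reproducibility
rng.shuffle(records)

n = len(records)
train = records[: int(n * 0.70)]
validation = records[int(n * 0.70) : int(n * 0.85)]
test = records[int(n * 0.85) :]
```

Shuffling before slicing is what prevents ordering effects (alphabetical or arbitrary time-based splits) from biasing evaluation.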
Q28
Which metric BEST identifies dataset completeness?
A. F1 score
B. % of missing values
C. GPU throughput
D. Inference latency
Answer: B
Q29
If input data distribution changes seasonally, this is known as:
A. Concept drift
B. Model collapse
C. Data drift
D. Overfitting
Answer: C
Explanation:
Data drift refers to changes in input distribution over time.
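A crude drift flag can compare a live window against the training baseline; the windows and the 2-sigma threshold here are illustrative assumptions, and production systems typically use proper statistical tests.

```python
import statistics

# Sketch: flag drift when the live window's mean moves far from the
# training baseline (e.g., a seasonal shift in input values).
training_window = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
live_window = [13.0, 12.7, 13.4, 12.9, 13.1, 12.8, 13.2, 13.0]

baseline_mean = statistics.mean(training_window)
baseline_sd = statistics.stdev(training_window)

drifted = abs(statistics.mean(live_window) - baseline_mean) > 2 * baseline_sd
```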
Q30
Which document defines feature descriptions, types, and acceptable ranges?
A. Model card
B. Risk register
C. Data dictionary
D. Sprint backlog
Answer: C
Q31
A labeling team introduces inconsistent tagging standards.
What impact is MOST likely?
A. Increased model explainability
B. Reduced training stability
C. Faster convergence
D. Improved fairness
Answer: B
Q32
Which approach best balances data quantity and quality needs?
A. "More is always better"
B. "Less is always better"
C. Goldilocks principle (sufficient and relevant)
D. Random data selection
Answer: C
Q33
If data resides across multiple silos, what risk arises?
A. Increased fairness
B. Incomplete insight and integration challenges
C. Faster processing
D. Higher interpretability
Answer: B
Q34
A chatbot project involves accessing user PII.
Which governance areas must be addressed? (Choose the BEST set)
A. Data sharing, privacy, security, quality, business risks
B. Only privacy
C. Only encryption
D. Only storage capacity
Answer: A
Q35
Outliers in training data may cause:
A. Better generalization
B. Skewed model behavior
C. Reduced bias
D. Faster training
Answer: B
Q36
If dataset labeling cost exceeds projected ROI, you should:
A. Continue regardless
B. Expand scope
C. Reassess business feasibility
D. Increase model complexity
Answer: C
Q37
Which is MOST critical before approving model training?
A. Cloud subscription
B. Legal clearance and compliance validation
C. UI prototype
D. Marketing strategy
Answer: B
Q38
Which team member is MOST critical in building scalable data pipelines?
A. UX designer
B. Data engineer
C. HR manager
D. Marketing analyst
Answer: B
Q39
Synthetic data generation is MOST appropriate when:
A. Real data is abundant
B. Real-world data is scarce or privacy-restricted
C. Accuracy already high
D. Deployment complete
Answer: B
Q40
If dataset size is too large for current iteration, the BEST approach is:
A. Delete random records
B. Upgrade GPU
C. Perform feature selection and attribute pruning
D. Ignore excess data
Answer: C
Explanation:
Reducing dimensionality and selecting relevant features improves efficiency without compromising value.
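One simple form of attribute pruning drops near-constant features, since they carry no signal; the feature values and the variance threshold below are illustrative assumptions.

```python
import statistics

# Sketch: prune attributes whose variance is at or below a threshold.
features = {
    "age": [25, 40, 33, 58, 47],
    "region_code": [1, 1, 1, 1, 1],      # constant: carries no signal
    "balance": [1200.0, 300.0, 950.0, 80.0, 2100.0],
}

threshold = 0.0
kept = {
    name: values
    for name, values in features.items()
    if statistics.pvariance(values) > threshold
}
```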
🟢 DOMAIN 3 – Operationalize AI Solution
Q41
Your team is preparing to deploy a fraud detection model into production. Which activity should be completed BEFORE go-live?
A. Increase model complexity
B. Conduct production readiness review including rollback plan
C. Expand training dataset
D. Remove monitoring requirements
Answer: B
Explanation:
Production readiness includes performance validation, scalability testing, compliance verification, and rollback planning. Without rollback, operational risk increases.
Q42
After deployment, stakeholders report degraded performance. What should be investigated FIRST?
A. Office network bandwidth
B. Data drift and input distribution changes
C. UI redesign
D. Marketing strategy
Answer: B
Explanation:
Post-deployment degradation often stems from data drift. Input patterns may have shifted from training distribution.
Q43
Which MLOps practice ensures the ability to revert to a previous stable model?
A. Feature engineering
B. Model versioning and registry
C. Hyperparameter tuning
D. Batch inference
Answer: B
Q44
A model supports real-time credit approval decisions.
Which operational architecture is MOST appropriate?
A. Weekly batch processing
B. Offline analytics dashboard
C. Low-latency real-time inference service
D. Manual review queue
Answer: C
Q45
A manufacturing AI solution must generate predictions every Sunday night for weekly planning.
Which deployment method fits BEST?
A. Real-time inference
B. Stream processing
C. Batch prediction
D. Manual scoring
Answer: C
Q46
Which metric reflects operational performance rather than model performance?
A. Accuracy
B. Precision
C. Latency and uptime
D. Recall
Answer: C
Explanation:
Operational KPIs measure reliability and service performance, not prediction quality.
Q47
What is the PRIMARY objective of continuous monitoring in production AI systems?
A. Increase training speed
B. Detect performance degradation and compliance issues
C. Reduce cloud cost
D. Improve feature engineering
Answer: B
Q48
A team deploys Version 2 of a model and overwrites Version 1. Later, rollback is required but impossible.
What governance failure occurred?
A. No feature selection
B. No retraining
C. No version control
D. No hyperparameter logging
Answer: C
Q49
Shadow deployment (canary release) is primarily used to:
A. Replace production entirely
B. Test model with limited user segment before full rollout
C. Remove old data
D. Eliminate monitoring
Answer: B
Q50
Infrastructure as Code (IaC) helps operationalization by:
A. Reducing bias
B. Standardizing deployment environments
C. Increasing accuracy
D. Eliminating governance
Answer: B
Q51
A model is deployed without defined SLA (Service Level Agreement).
What risk is introduced?
A. Overfitting
B. Undefined performance expectations
C. Increased fairness
D. Faster drift detection
Answer: B
Q52
If a model must function without internet connectivity (e.g., mobile app in remote area), what is required?
A. Cloud-only API
B. Hybrid API model
C. Edge deployment
D. Centralized inference
Answer: C
Q53
CI/CD in AI extends beyond software pipelines by including:
A. Manual review
B. Continuous model training and validation
C. Static deployment only
D. Reduced testing
Answer: B
Q54
Automated retraining pipelines are MOST useful when:
A. Business goals static
B. Data continuously evolves
C. Model rarely used
D. Dataset small
Answer: B
Q55
Which document supports operational transparency?
A. Sprint backlog
B. Model card
C. Gantt chart
D. Office manual
Answer: B
Q56
Which is a key post-deployment governance activity?
A. Initial feature engineering
B. Continuous compliance monitoring
C. Problem framing
D. Vendor selection
Answer: B
Q57
A data pipeline failure causes incorrect predictions.
This highlights weakness in:
A. Feature importance
B. Operational resilience
C. Business objective alignment
D. ROI planning
Answer: B
Q58
Observability in AI systems includes:
A. Only logging
B. Logging, tracing, metrics monitoring
C. Accuracy calculation only
D. Hyperparameter tuning
Answer: B
Q59
Model drift differs from data drift because:
A. They are identical
B. Model drift refers to change in model performance due to concept changes
C. Data drift is post-deployment only
D. Model drift is about infrastructure
Answer: B
Q60
The PRIMARY responsibility of the AI project manager during operationalization is to:
A. Write algorithm code
B. Ensure governance, monitoring, and business alignment
C. Tune hyperparameters
D. Select activation functions
Answer: B
🟣 DOMAIN 4 – Manage AI Model Development & Evaluation
Q61
A model performs extremely well on training data but poorly on unseen validation data.
What issue is MOST likely occurring?
A. Underfitting
B. Overfitting
C. Data encryption failure
D. Poor deployment
Answer: B
Explanation:
Overfitting occurs when a model memorizes training data patterns but fails to generalize to new data.
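The extreme case of this failure can be shown with a lookup-table "model" that memorizes its training pairs perfectly but cannot answer anything unseen; the toy examples are illustrative assumptions.

```python
# Sketch: pure memorization scores 100% on training data and
# covers none of the unseen inputs -- overfitting taken to its limit.
train = {(1, 2): "fraud", (3, 4): "ok", (5, 6): "fraud"}
unseen = [(1, 3), (2, 2), (5, 5)]

model = dict(train)  # "training" here is pure memorization

train_acc = sum(model.get(x) == y for x, y in train.items()) / len(train)
covered = sum(x in model for x in unseen) / len(unseen)
```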
Q62
Which dataset is primarily used for hyperparameter tuning?
A. Training set
B. Validation set
C. Test set
D. Production set
Answer: B
Q63
Which metric is MOST appropriate for evaluating performance on imbalanced datasets?
A. Accuracy
B. Precision-Recall or F1 Score
C. Epoch count
D. Training time
Answer: B
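The pitfall behind this question can be demonstrated numerically: on a 95/5 imbalanced set, a model that always predicts the majority class scores 95% accuracy yet zero recall and F1 on the minority class. The labels below are illustrative assumptions (1 = minority/positive class).

```python
# Sketch: accuracy vs. F1 on an imbalanced dataset.
y_true = [1] * 5 + [0] * 95
y_pred = [0] * 100  # majority-class baseline predictor

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```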
Q64
What is the purpose of cross-validation?
A. Reduce compute cost
B. Improve model robustness and reduce variance
C. Increase dataset size
D. Eliminate bias completely
Answer: B
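The mechanics of k-fold cross-validation can be sketched by generating the train/validation index splits by hand; the index count and k are illustrative assumptions, and libraries such as scikit-learn provide equivalents.

```python
# Sketch: k-fold index splits. Each record serves as validation exactly once,
# so the performance estimate averages over k held-out folds.
def k_fold_indices(n, k):
    folds = [list(range(i, n, k)) for i in range(k)]
    for held_out in range(k):
        val = folds[held_out]
        train = [i for f, fold in enumerate(folds) if f != held_out for i in fold]
        yield train, val

splits = list(k_fold_indices(10, 5))
```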
Q65
Data leakage occurs when:
A. Too many features are used
B. Future or unavailable information is used during training
C. Dataset is small
D. Validation set too large
Answer: B
Q66
Which technique helps reduce overfitting in complex models?
A. Increasing hidden layers
B. Removing validation
C. Regularization
D. Increasing batch size only
Answer: C
Q67
A confusion matrix is used to evaluate:
A. Regression models only
B. Infrastructure scaling
C. Classification model performance
D. Deployment latency
Answer: C
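The four cells of a binary confusion matrix can be computed directly from label pairs; the labels below are illustrative assumptions (1 = positive class).

```python
# Sketch: 2x2 confusion matrix counts from true vs. predicted labels.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
```

Precision, recall, and F1 all derive from these four counts, which is why the confusion matrix is the starting point for classification evaluation.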
Q68
Bias-variance tradeoff refers to:
A. Cost vs accuracy
B. Balance between underfitting and overfitting
C. Training vs deployment
D. Data vs infrastructure
Answer: B
Q69
Which is an example of supervised learning?
A. K-means clustering
B. PCA
C. Binary spam detection
D. Association rules
Answer: C
Q70
Which approach allows reuse of prior trained knowledge for a new but related task?
A. Reinforcement learning
B. Transfer learning
C. Ensemble averaging
D. Clustering
Answer: B
Q71
Model explainability tools (e.g., SHAP, LIME) are MOST useful when:
A. Training speed is slow
B. Decisions impact individuals or are regulated
C. GPU cost high
D. Dataset small
Answer: B
Q72
Which evaluation artifact documents model purpose, limitations, and performance?
A. Data dictionary
B. Model card
C. Risk log
D. Sprint review
Answer: B
Q73
Concept drift occurs when:
A. Infrastructure changes
B. Relationship between input and output variables changes over time
C. Dataset increases
D. Feature scaling changes
Answer: B
Q74
If model performance drops after several months due to changing consumer behavior, what should be implemented?
A. Ignore fluctuation
B. Automated retraining pipeline
C. Remove monitoring
D. Reduce feature set
Answer: B
Q75
Which algorithm is MOST suitable for binary classification with small datasets and a need for simplicity?
A. Deep neural networks
B. Naive Bayes
C. Generative diffusion model
D. Reinforcement learning
Answer: B
Q76
A dataset has very high dimensionality causing sparse feature representation.
What is the BEST solution?
A. Add more features
B. Perform dimensionality reduction (e.g., PCA)
C. Increase batch size
D. Increase learning rate
Answer: B
Q77
Which ensemble technique improves performance by combining multiple weak learners?
A. Feature scaling
B. Bagging/Boosting
C. Cross-validation
D. Regression
Answer: B
Q78
Before approving a model for deployment, which condition must be met?
A. Training accuracy above 95%
B. Validation performance aligned with business acceptance criteria
C. Maximum parameter count
D. Most complex architecture
Answer: B
Q79
An anomaly detection project requiring clustering into natural groups should use:
A. Linear regression
B. K-means clustering
C. Naive Bayes
D. Logistic regression
Answer: B
Q80
The AI project manager's key responsibility during model evaluation is to:
A. Code algorithms
B. Ensure evaluation aligns with business, ethical, and regulatory requirements
C. Increase GPU resources
D. Design frontend interface
Answer: B
🟠 DOMAIN 5 – Support Responsible & Trustworthy AI
Q81
A loan approval model disproportionately rejects applicants from a specific demographic despite similar financial profiles.
What is MOST likely occurring?
A. Overfitting
B. Algorithmic bias
C. Infrastructure scaling issue
D. Model drift
Answer: B
Explanation:
Disparate outcomes across protected groups with similar inputs indicate bias in training data or model behavior.
Q82
Which fairness metric evaluates whether outcomes are equally distributed across groups?
A. Accuracy
B. Demographic parity
C. Latency
D. F1 score
Answer: B
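Demographic parity can be checked by comparing positive-outcome rates across groups; the decision records and the disparity tolerance below are illustrative assumptions.

```python
from collections import defaultdict

# Sketch: demographic parity gap = difference between the highest and
# lowest positive-outcome rates across groups (1 = approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    positives[group] += approved

rates = {g: positives[g] / totals[g] for g in totals}
parity_gap = max(rates.values()) - min(rates.values())
fair = parity_gap <= 0.1  # assumed tolerance; real thresholds are policy choices
```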
Q83
Transparency in AI systems primarily requires:
A. Publishing proprietary source code
B. Clear documentation of model purpose, inputs, and limitations
C. Increasing dataset size
D. Full automation without oversight
Answer: B
Q84
Privacy-by-design means:
A. Adding encryption after deployment
B. Embedding privacy considerations throughout development lifecycle
C. Limiting model complexity
D. Deleting logs
Answer: B
Q85
When AI decisions significantly impact individuals (e.g., hiring), what is MOST appropriate?
A. Fully automated decisions
B. Human-in-the-loop oversight
C. No documentation
D. Reduced monitoring
Answer: B
Q86
Which document improves accountability and traceability of AI decisions?
A. Sprint backlog
B. Model card and audit logs
C. UI documentation
D. Training notebook
Answer: B
Q87
A system produces highly accurate predictions but cannot explain its reasoning in a regulated industry.
What key risk exists?
A. Performance instability
B. Compliance and explainability failure
C. Data drift
D. Storage limitation
Answer: B
Q88
Adversarial attacks in AI systems attempt to:
A. Improve accuracy
B. Manipulate inputs to cause incorrect predictions
C. Increase fairness
D. Optimize hyperparameters
Answer: B
Q89
Governance frameworks should define:
A. Only algorithm selection
B. Roles, responsibilities, oversight mechanisms
C. GPU procurement
D. UI testing
Answer: B
Q90
A chatbot collects personally identifiable information (PII).
Which measure is MOST critical?
A. Increasing storage capacity
B. Data anonymization or minimization
C. Hyperparameter tuning
D. Model compression
Answer: B
Q91
Which is an example of Responsible AI?
A. Maximizing profit regardless of impact
B. Ensuring AI system is built for socially beneficial purpose
C. Ignoring unintended consequences
D. Removing monitoring
Answer: B
Q92
Which activity supports AI audit readiness?
A. Deleting training data
B. Maintaining documentation of datasets, evaluation metrics, and deployment decisions
C. Reducing model complexity
D. Increasing GPU resources
Answer: B
Q93
Model transparency helps stakeholders by:
A. Increasing training speed
B. Understanding limitations and appropriate usage
C. Eliminating bias entirely
D. Removing governance
Answer: B
Q94
Which regulatory risk should multinational organizations consider?
A. Only local data laws
B. Cross-border data protection compliance
C. Only internal policy
D. Training speed requirements
Answer: B
Q95
Bias mitigation strategies include:
A. Increasing learning rate
B. Diversifying training data and applying fairness constraints
C. Reducing features
D. Removing validation
Answer: B
Q96
Which is an example of Explainable AI (XAI)?
A. Larger neural network
B. SHAP value analysis showing feature contributions
C. Faster deployment
D. Higher batch size
Answer: B
Q97
The "Uncanny Valley" effect impacts:
A. Model accuracy
B. User trust and adoption
C. Data quality
D. Training cost
Answer: B
Q98
If employees resist using an augmented intelligence tool, what should have been done earlier?
A. Remove training
B. Conduct change management and stakeholder engagement
C. Increase automation
D. Ignore feedback
Answer: B
Q99
Trustworthy AI considerations should be addressed:
A. Only during deployment
B. Only during compliance review
C. At every phase of the AI lifecycle
D. Only during model training
Answer: C
Q100
The AI project manager's ultimate responsibility regarding Responsible AI is to:
A. Focus solely on accuracy
B. Ensure fairness, transparency, accountability, and compliance are integrated throughout the lifecycle
C. Delegate ethics entirely to legal
D. Ignore regulatory complexity
Answer: B
🔥 20 High-Difficulty Scenario-Based Questions
Q1
Your organization wants to deploy an AI-powered loan approval system across three countries. The business case projects strong ROI. However, during data review, you discover that the training dataset is heavily skewed toward applicants from only one region.
What is your BEST course of action?
A. Deploy the model and monitor bias later
B. Adjust hyperparameters to reduce bias
C. Pause deployment and address dataset imbalance
D. Add fairness disclaimer in documentation
Answer: C
Explanation:
Deploying a biased dataset risks regulatory violations and reputational damage. Dataset correction is required before operationalization.
Why others are wrong:
- A: Reactive bias mitigation is too late.
- B: Hyperparameters cannot fix representation bias.
- D: Documentation alone does not mitigate harm.
Q2
Your executive sponsor insists on launching a chatbot within 4 weeks to demonstrate innovation, but data readiness assessment shows insufficient labeled training data.
What should you do?
A. Launch a minimal version with random responses
B. Purchase or leverage pre-trained models while validating suitability
C. Reduce chatbot scope without informing sponsor
D. Delay until 100% perfect dataset exists
Answer: B
Explanation:
Pre-trained models can accelerate timelines while maintaining feasibility. Governance validation must still occur.
Q3
After deployment, customer behavior shifts significantly due to seasonal changes. Model performance declines.
What was MOST likely missing from your operational design?
A. Model documentation
B. Automated retraining pipeline
C. Feature engineering
D. Deployment checklist
Answer: B
Q4
During Phase I, stakeholders cannot agree whether the objective is revenue growth or cost reduction.
What is the BEST facilitation strategy?
A. Let leadership decide without analysis
B. Start model development while they debate
C. Conduct structured problem framing workshop with measurable KPIs
D. Select the technically easier objective
Answer: C
Q5
A predictive maintenance system requires continuous machine monitoring with millisecond response time.
Which deployment model is MOST appropriate?
A. Weekly batch scoring
B. Manual review
C. Real-time edge or streaming inference
D. Offline dashboard reporting
Answer: C
Q6
A model shows 94% accuracy but fails to improve customer retention.
What is the MOST likely root cause?
A. Overfitting
B. Misalignment between technical metric and business objective
C. Data leakage
D. Underfitting
Answer: B
Q7
Your organization plans global expansion next year but currently operates only domestically. Which compliance approach is BEST?
A. Follow current local law only
B. Ignore future regulatory considerations
C. Align with both current and anticipated regulatory environments
D. Outsource all legal responsibility
Answer: C
Q8
A team wants to reuse a data pipeline built for an image classification project in a text-based NLP project.
What is your response?
A. Approve reuse immediately
B. Reject reuse entirely
C. Evaluate pipeline compatibility with new AI pattern and model needs
D. Replace entire team
Answer: C
Q9
A model denies job applicants the ability to contest automated decisions.
Which Trustworthy AI principle is being violated?
A. Performance optimization
B. Explainability and contestability
C. Scalability
D. Automation maturity
Answer: B
Q10
Senior leadership wants to "buy AI from a vendor" rather than build internal capabilities.
What is your BEST response?
A. Agree immediately
B. Explain AI requires data, governance, and internal alignment beyond vendor procurement
C. Reject vendor discussion
D. Delay project
Answer: B
Q11
A dataset collected for marketing personalization is reused for insurance risk scoring without customer consent.
What is the PRIMARY risk?
A. Model drift
B. Purpose limitation violation
C. Feature imbalance
D. Overfitting
Answer: B
Q12
Your team faces extreme labeling costs that exceed projected ROI.
What is the BEST action?
A. Continue regardless
B. Increase model complexity
C. Reassess feasibility and business value
D. Ignore labeling quality
Answer: C
Q13
An HR AI system is accurate but employees resist using it because they find it "creepy."
What should have been addressed earlier?
A. Hyperparameters
B. Change management and user trust strategy
C. Training dataset size
D. Infrastructure design
Answer: B
Q14
A reinforcement learning approach is proposed for warehouse route optimization.
Why is this appropriate?
A. It handles supervised classification
B. It optimizes sequential decision-making through reward feedback
C. It clusters products
D. It reduces bias automatically
Answer: B
Q15
A model was deployed without clear SLA definitions.
What operational risk arises?
A. Better accuracy
B. Undefined performance expectations
C. Reduced compliance
D. Faster training
Answer: B
Q16
Your validation dataset accidentally included future outcome information.
What issue has occurred?
A. Overfitting
B. Data leakage
C. Bias
D. Drift
Answer: B
Q17
Your AI solution requires 100% accuracy to be viable.
What should you recommend?
A. Proceed normally
B. Clarify feasibility because AI rarely guarantees 100% certainty
C. Increase neural network depth
D. Use generative AI
Answer: B
Q18
A multinational AI system lacks defined human accountability roles.
Which governance gap exists?
A. Hyperparameter misalignment
B. Absence of chain of accountability
C. Data sparsity
D. Infrastructure inefficiency
Answer: B
Q19
Your model performs well technically but conflicts with organizational ethics guidelines.
What is your responsibility?
A. Prioritize performance
B. Escalate and align with Responsible AI governance
C. Deploy quietly
D. Remove documentation
Answer: B
Q20
An AI initiative is technically feasible but lacks executive sponsorship and stakeholder buy-in.
What is the MOST likely outcome?
A. Immediate success
B. High ROI
C. Low adoption and eventual failure
D. Faster deployment
Answer: C
