Leading and Managing AI - Sample Paper 1
Leading and Managing AI
This sample paper explores the core principles of leading and managing artificial intelligence initiatives in modern organizations. It introduces key concepts such as AI strategy, governance, ethics, and change management, helping leaders understand how to align AI projects with business goals. The content emphasizes cross‑functional collaboration between technical and non‑technical teams, risk management, and responsible data use. It is suitable for students, managers, and professionals who need a structured overview of how to plan, implement, and oversee AI solutions in a practical, results‑oriented way.

Within this sample, you will find example frameworks for AI project lifecycles, guidance on stakeholder engagement, and discussion prompts that encourage critical thinking about AI’s impact on people and processes. It also highlights leadership skills required to foster innovation while maintaining transparency and trust, including communication, ethical decision‑making, and continuous learning. Use this paper as a reference for assignments, workshops, or internal training sessions focused on AI leadership, and adapt the ideas to your specific industry, organizational culture, and maturity level in digital transformation.

🔵 DOMAIN 1: Identify Business Needs & Solutions (31 Questions)
Q1
An organization wants to implement AI to "improve customer satisfaction." What should the project manager do first?
A. Select an ML algorithm
B. Define measurable business objectives
C. Collect historical customer data
D. Build a prototype
Answer: B
Explanation: AI initiatives must begin with clearly defined business objectives and success metrics.
Q2
Which document best captures alignment between an AI initiative and strategic objectives?
A. Data Dictionary
B. Model Card
C. Business Case
D. Training Log
Answer: C
Explanation: The business case defines ROI, alignment, feasibility, and value justification.
Q3
A stakeholder proposes AI without clear ROI. What is the best action?
A. Approve pilot immediately
B. Reject proposal
C. Conduct feasibility and value assessment
D. Hire data scientist
Answer: C
Explanation: Feasibility and value validation must precede execution.
Q4
Which is the MOST important success criterion in AI projects?
A. Model complexity
B. Accuracy aligned with business threshold
C. Number of features
D. Training speed
Answer: B
Explanation: Accuracy must meet business-defined thresholds, not just technical metrics.
Q5
An AI fraud detection system must reduce fraud losses by 15%. This is an example of:
A. Technical KPI
B. Business Outcome Metric
C. Data Constraint
D. Model Drift Indicator
Answer: B
Explanation: This directly reflects measurable business impact.
Q6
Which approach is most suitable for high-uncertainty AI initiatives?
A. Predictive Waterfall
B. Pure Agile
C. Hybrid Iterative
D. Linear SDLC
Answer: C
Explanation: AI projects involve experimentation; hybrid models manage uncertainty better.
Q7
A company wants AI "because competitors are using it." This indicates:
A. Strategic alignment
B. Technology push bias
C. Clear ROI
D. Strong governance
Answer: B
Explanation: Technology adoption without defined business value reflects technology push bias.
Q8
During problem framing, the MOST critical question is:
A. Which cloud platform to use?
B. What business problem are we solving?
C. Which vendor to select?
D. How many GPUs required?
Answer: B
Q9
AI feasibility analysis primarily evaluates:
A. Employee satisfaction
B. Data availability and technical viability
C. Office infrastructure
D. Vendor contracts
Answer: B
Q10
Which technique helps quantify AI initiative uncertainty?
A. Gantt chart
B. Quantitative risk analysis
C. SWOT analysis
D. Scrum board
Answer: B
Q11
An AI churn model shows 95% accuracy but does not increase retention. The issue is:
A. Overfitting
B. Misaligned success metric
C. Insufficient training data
D. Poor GPU capacity
Answer: B
Q12
Which stakeholder must be involved early in AI projects?
A. Data owner
B. HR manager
C. Facilities manager
D. Receptionist
Answer: A
Q13
AI solutioning begins AFTER:
A. Model deployment
B. Business problem definition
C. Feature engineering
D. Algorithm tuning
Answer: B
Q14
Which is a leading indicator for AI project success?
A. Training loss
B. Stakeholder alignment
C. Model size
D. Compute cost
Answer: B
Q15
What is the purpose of defining AI use case boundaries?
A. Reduce GPU cost
B. Prevent scope creep
C. Increase dataset size
D. Speed up training
Answer: B
Q16
An AI solution is technically feasible but violates regulatory norms. The PM should:
A. Continue development
B. Escalate compliance concern
C. Reduce accuracy
D. Ignore regulation
Answer: B
Q17
Which factor MOST affects AI ROI?
A. Model architecture
B. Business adoption
C. Programming language
D. Notebook tool
Answer: B
Q18
A PoC is primarily used to:
A. Replace production system
B. Validate feasibility and value
C. Reduce staff
D. Audit compliance
Answer: B
Q19
Success criteria should be:
A. Technical
B. Vague
C. Measurable and time-bound
D. Experimental only
Answer: C
Q20
Which technique helps prioritize AI use cases?
A. Confusion matrix
B. Value vs Complexity matrix
C. Feature scaling
D. Hyperparameter tuning
Answer: B
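For illustration, a value-vs-complexity matrix can be sketched in code. The quadrant labels and the cut-off of 3 below are illustrative assumptions, not part of any standard framework:

```python
def prioritize(use_cases):
    """Rank AI use cases on a value-vs-complexity matrix.

    Each use case is (name, value 1-5, complexity 1-5).
    Quadrant labels and the cut-off of 3 are illustrative choices.
    """
    def quadrant(value, complexity):
        if value >= 3 and complexity < 3:
            return "quick win"       # high value, easy to deliver
        if value >= 3:
            return "strategic bet"   # high value, but hard
        if complexity < 3:
            return "fill-in"         # easy, but low value
        return "avoid"               # hard and low value

    # Highest value first; break ties in favor of lower complexity
    ranked = sorted(use_cases, key=lambda u: (-u[1], u[2]))
    return [(name, quadrant(v, c)) for name, v, c in ranked]
```

Quick wins are typically tackled first to build organizational confidence before committing to strategic bets.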
Q21
AI initiative funding approval depends MOST on:
A. Number of engineers
B. Strategic value and ROI
C. Model size
D. Data storage cost
Answer: B
Q22
Shadow AI initiatives usually occur due to:
A. Strong governance
B. Clear PMO oversight
C. Lack of centralized AI strategy
D. High compliance
Answer: C
Q23
Problem reframing is required when:
A. Model accuracy high
B. Business outcome unmet
C. Data size large
D. Cloud cost low
Answer: B
Q24
Which artifact defines scope and authority in an AI project?
A. Model Report
B. Project Charter
C. Data Sheet
D. Sprint Backlog
Answer: B
Q25
Which is NOT a business KPI?
A. Revenue uplift
B. Customer retention
C. Model F1-score
D. Cost reduction
Answer: C
Q26
AI feasibility must evaluate:
A. Ethical risk
B. Data readiness
C. Technical viability
D. All of the above
Answer: D
Q27
An AI chatbot reduces call volume by 20%. This reflects:
A. Operational metric
B. Business impact
C. Feature success
D. Technical constraint
Answer: B
Q28
Early stakeholder mapping helps:
A. Increase model size
B. Manage resistance
C. Speed training
D. Increase hyperparameters
Answer: B
Q29
AI project governance begins at:
A. Deployment
B. Model testing
C. Ideation stage
D. Monitoring stage
Answer: C
Q30
Business problem must be framed as:
A. Algorithm type
B. Predictive objective
C. Compute resource
D. Vendor comparison
Answer: B
Q31
Which is the MOST common reason AI projects fail?
A. Poor coding
B. Weak GPUs
C. Lack of business alignment
D. High data volume
Answer: C
🔵 DOMAIN 2: Identify Data Needs (40 Questions)
(Weightage ~26%)
Q32
The first step in identifying data needs for an AI initiative is to:
A. Select the algorithm
B. Define target variable and features required
C. Clean the dataset
D. Deploy a model
Answer: B
Explanation: Data identification starts with understanding what needs to be predicted and which variables influence it.
Q33
Data discovery primarily helps to:
A. Improve model speed
B. Identify available internal and external datasets
C. Increase GPU allocation
D. Automate feature engineering
Answer: B
Q34
Which is a leading indicator of poor data readiness?
A. Large dataset
B. Missing values and inconsistent formats
C. High storage capacity
D. Cloud hosting
Answer: B
Q35
A dataset contains duplicate records. This is a:
A. Governance issue
B. Data quality issue
C. Model drift issue
D. Business alignment issue
Answer: B
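As a minimal sketch (pure Python, with hypothetical field names), duplicates can be detected by keying each record on the fields that define its identity:

```python
def deduplicate(records, key_fields):
    """Keep the first occurrence of each record, keyed on key_fields."""
    seen, unique = set(), []
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```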
Q36
Data lineage ensures:
A. Faster training
B. Traceability of data origin and transformations
C. Higher model accuracy
D. Lower cloud cost
Answer: B
Q37
Unstructured data includes:
A. Relational tables
B. CSV files
C. Text, images, audio
D. SQL database
Answer: C
Q38
Which metric measures class imbalance impact?
A. Accuracy
B. Precision/Recall
C. Storage utilization
D. Throughput
Answer: B
Q39
Data governance defines:
A. Coding standards
B. Data ownership, quality, and compliance policies
C. GPU allocation
D. Algorithm selection
Answer: B
Q40
Which question is MOST important during data sourcing?
A. Is the data legally usable?
B. Is the dataset large?
C. Is the cloud provider premium?
D. Is the storage encrypted?
Answer: A
Q41
Data bias occurs when:
A. Dataset too small
B. Dataset over-represents certain groups
C. Training time increases
D. Cloud cost increases
Answer: B
Q42
Feature engineering is primarily concerned with:
A. Cleaning office data
B. Transforming raw data into meaningful inputs
C. Increasing server uptime
D. Selecting vendor
Answer: B
Q43
If 40% of values are missing in a critical feature, the PM should first:
A. Delete dataset
B. Evaluate business impact and imputation feasibility
C. Train anyway
D. Ignore feature
Answer: B
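One way to support that evaluation is to quantify the gap and preview a simple imputation. The median-imputation sketch below is one illustrative option, not the only remedy:

```python
def assess_feature(values, missing=None):
    """Return the missing fraction and a median-imputed copy of a feature."""
    present = sorted(v for v in values if v is not missing)
    frac_missing = 1 - len(present) / len(values)
    mid = len(present) // 2
    if len(present) % 2:
        median = present[mid]
    else:
        median = (present[mid - 1] + present[mid]) / 2
    imputed = [v if v is not missing else median for v in values]
    return frac_missing, imputed
```

If the missing fraction is high and the feature drives the prediction, imputation may distort results and the business impact of dropping or re-collecting the feature should be weighed first.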
Q44
Data readiness assessment evaluates:
A. Accuracy only
B. Completeness, consistency, availability
C. GPU configuration
D. Stakeholder alignment
Answer: B
Q45
Which is an external data source?
A. CRM database
B. ERP logs
C. Government open datasets
D. HR attendance records
Answer: C
Q46
PII data requires:
A. Extra storage
B. Compliance with privacy regulations
C. Feature scaling
D. Hyperparameter tuning
Answer: B
Q47
Which document describes dataset structure?
A. Data Dictionary
B. Sprint backlog
C. Risk register
D. Model card
Answer: A
Q48
Data drift refers to:
A. Hardware failure
B. Change in input data distribution over time
C. Algorithm update
D. Compliance breach
Answer: B
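A crude drift check compares a live feature's mean against the training baseline. The two-standard-deviation threshold below is an arbitrary illustrative cut-off; production systems use richer tests such as PSI or Kolmogorov-Smirnov:

```python
def mean_drift(baseline, current, n_sigma=2.0):
    """Flag drift when the current mean moves more than n_sigma
    baseline standard deviations away from the baseline mean."""
    n = len(baseline)
    mu = sum(baseline) / n
    sd = (sum((x - mu) ** 2 for x in baseline) / n) ** 0.5
    shift = abs(sum(current) / len(current) - mu)
    return shift > n_sigma * sd
```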
Q49
Sampling bias impacts:
A. Governance policy
B. Model fairness and accuracy
C. Compute usage
D. Network bandwidth
Answer: B
Q50
The MOST reliable data source is:
A. Social media rumor
B. Verified enterprise system of record
C. Competitor blog
D. Manual spreadsheet
Answer: B
Q51
Metadata describes:
A. Model training loss
B. Data about data
C. GPU usage
D. Feature weights
Answer: B
Q52
If data resides in silos, the risk is:
A. Faster training
B. Incomplete insights
C. Better governance
D. Lower storage cost
Answer: B
Q53
Which improves model reliability MOST?
A. More GPUs
B. High-quality labeled data
C. Larger cloud account
D. Agile ceremonies
Answer: B
Q54
Labeling errors primarily affect:
A. Infrastructure
B. Model performance
C. Stakeholder mapping
D. Governance policy
Answer: B
Q55
Data anonymization is required to:
A. Increase speed
B. Protect privacy
C. Improve recall
D. Reduce model size
Answer: B
Q56
A dataset updated daily may require:
A. One-time training
B. Continuous monitoring
C. Manual cleaning only
D. No governance
Answer: B
Q57
Data versioning ensures:
A. Faster GPU processing
B. Reproducibility
C. Increased features
D. Vendor lock-in
Answer: B
Q58
Which is MOST critical before model training?
A. Data validation
B. Dashboard creation
C. Cloud branding
D. Logo design
Answer: A
Q59
Structured data is BEST stored in:
A. Relational databases
B. Audio file
C. Video file
D. Image repository
Answer: A
Q60
Which risk arises from third-party data providers?
A. Model drift
B. Licensing and compliance risk
C. Feature scaling
D. Training instability
Answer: B
Q61
Data profiling helps identify:
A. Algorithm bias
B. Statistical characteristics of dataset
C. GPU capacity
D. Organizational resistance
Answer: B
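A minimal profiling pass over a numeric column might report the basic statistics below; real profilers add distributions, cardinality, and pattern checks on top of this:

```python
def profile(column):
    """Basic profile of a numeric column (None marks a missing value)."""
    present = [v for v in column if v is not None]
    return {
        "count": len(column),
        "missing": len(column) - len(present),
        "min": min(present),
        "max": max(present),
        "mean": sum(present) / len(present),
    }
```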
Q62
Outliers in data may:
A. Improve fairness
B. Distort model performance
C. Reduce compliance risk
D. Improve governance
Answer: B
Q63
Data security controls include:
A. Encryption and access control
B. Hyperparameter tuning
C. Model retraining
D. Cloud pricing
Answer: A
Q64
The MOST common AI data challenge is:
A. Overfitting
B. Poor data quality
C. High bandwidth
D. Too many vendors
Answer: B
Q65
Which improves fairness in datasets?
A. Increasing model size
B. Balanced representation across groups
C. More GPUs
D. Faster deployment
Answer: B
Q66
Before approving model development, PM should confirm:
A. Training hardware purchased
B. Data is legally compliant and ready
C. Code repository exists
D. Dashboard designed
Answer: B
Q67
A dataset collected for marketing is reused for credit scoring. This raises:
A. Storage risk
B. Purpose limitation compliance risk
C. Training instability
D. Feature importance issue
Answer: B
Q68
Which metric checks completeness?
A. % missing values
B. GPU temperature
C. Training speed
D. Latency
Answer: A
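Completeness can be computed per field across a set of records; the record layout below is hypothetical:

```python
def pct_missing(records):
    """% of missing (None) values per field across a list of record dicts."""
    fields = records[0].keys()
    n = len(records)
    return {f: 100.0 * sum(r[f] is None for r in records) / n for f in fields}
```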
Q69
If data labeling cost exceeds business value, PM should:
A. Continue anyway
B. Re-evaluate feasibility
C. Increase scope
D. Ignore ROI
Answer: B
Q70
High variance in dataset can cause:
A. Underfitting
B. Unstable model predictions
C. Governance success
D. Compliance readiness
Answer: B
Q71
The PRIMARY objective of data validation is to:
A. Improve UI
B. Ensure correctness and reliability of data
C. Reduce cloud bill
D. Increase features
Answer: B
🟢 DOMAIN 3: Operationalize AI Solution (24 Questions)
(Weightage ~17%)
Q72
Before deploying an AI model into production, the MOST important validation step is:
A. Increase training epochs
B. Perform production readiness review
C. Add more features
D. Upgrade GPU
Answer: B
Explanation: Deployment requires validation across performance, scalability, compliance, and operational readiness.
Q73
Model deployment planning should include:
A. Only accuracy metrics
B. Infrastructure, rollback strategy, monitoring plan
C. Marketing strategy
D. Office setup
Answer: B
Q74
Which environment is used for final validation before production?
A. Development
B. Sandbox
C. Staging/Pre-production
D. Research notebook
Answer: C
Q75
Model monitoring primarily tracks:
A. Office productivity
B. Performance degradation and drift
C. Developer attendance
D. Licensing cost
Answer: B
Q76
If model accuracy drops after deployment, the FIRST action should be:
A. Delete the model
B. Investigate data drift and input changes
C. Change cloud provider
D. Increase UI budget
Answer: B
Q77
Rollback planning ensures:
A. Faster feature engineering
B. Safe reversion to previous stable version
C. Higher model complexity
D. Reduced labeling cost
Answer: B
Q78
Which metric indicates operational performance?
A. Training loss
B. Latency and response time
C. Epoch count
D. Feature count
Answer: B
Q79
In AI projects, the abbreviation CI/CD stands for:
A. Continuous Infrastructure
B. Continuous Intelligence
C. Continuous Integration / Continuous Deployment
D. Continuous Iteration Design
Answer: C
Q80
MLOps primarily focuses on:
A. Algorithm invention
B. Managing ML lifecycle in production
C. Financial auditing
D. Marketing campaigns
Answer: B
Q81
A canary release (limited-traffic rollout) helps:
A. Increase dataset size
B. Test model with limited users before full rollout
C. Reduce governance
D. Skip validation
Answer: B
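Canary routing is often implemented by hashing a stable user identifier, so each user consistently lands in the same arm across requests. The salt and percentage below are illustrative assumptions:

```python
import hashlib

def route(user_id, canary_pct=5, salt="model-v2-canary"):
    """Deterministically send ~canary_pct% of users to the candidate model."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return "candidate" if bucket < canary_pct else "stable"
```

Changing the salt reshuffles users into new buckets, which is useful when starting a fresh rollout.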
Q82
Model registry is used for:
A. Payroll
B. Tracking model versions and metadata
C. Data cleaning
D. Stakeholder approval
Answer: B
Q83
Which is a key operational risk?
A. Hyperparameter tuning
B. System integration failure
C. Feature scaling
D. Training accuracy
Answer: B
Q84
SLA in AI deployment defines:
A. Dataset size
B. Service performance commitments
C. Algorithm type
D. Feature importance
Answer: B
Q85
A production AI system must include:
A. Monitoring dashboard
B. Marketing pitch
C. Research paper
D. Academic citation
Answer: A
Q86
Which team collaborates closely during deployment?
A. HR
B. DevOps/IT operations
C. Sales only
D. Finance only
Answer: B
Q87
Automated retraining pipelines help address:
A. Stakeholder misalignment
B. Model drift
C. Office downtime
D. Licensing risk
Answer: B
Q88
Data pipeline failures can cause:
A. Increased fairness
B. Incorrect predictions
C. Better governance
D. Higher ROI
Answer: B
Q89
A/B testing during deployment helps evaluate:
A. Model fairness only
B. Comparative performance of models
C. GPU power
D. Developer efficiency
Answer: B
Q90
Observability in AI systems includes:
A. Logging, tracing, monitoring
B. Feature reduction
C. Manual reporting
D. Training speed
Answer: A
Q91
Which document supports operational governance?
A. Model card and deployment checklist
B. Gantt chart
C. Budget sheet only
D. Attendance log
Answer: A
Q92
If real-time inference is required, priority should be given to:
A. Batch processing
B. Low-latency architecture
C. Offline reporting
D. Manual review
Answer: B
Q93
Infrastructure as Code (IaC) helps:
A. Reduce model bias
B. Standardize deployment environments
C. Increase accuracy
D. Replace stakeholders
Answer: B
Q94
Which is a post-deployment governance activity?
A. Algorithm selection
B. Continuous compliance monitoring
C. Initial feasibility study
D. Data discovery
Answer: B
Q95
Operational KPIs differ from model KPIs because they measure:
A. Only accuracy
B. Business service reliability and performance
C. Feature weights
D. Epochs
Answer: B
🟣 DOMAIN 4: Manage AI Model Development & Evaluation (25 Questions)
(Weightage ~16%)
Q96
The PRIMARY objective of model training is to:
A. Reduce cloud cost
B. Learn patterns from historical data
C. Increase dataset size
D. Deploy to production
Answer: B
Explanation: Training enables the model to learn relationships between input and output variables.
Q97
Overfitting occurs when a model:
A. Performs well on new data
B. Memorizes training data but fails on unseen data
C. Has low variance
D. Is too simple
Answer: B
Q98
Underfitting indicates that the model:
A. Is too complex
B. Cannot capture underlying patterns
C. Has data leakage
D. Is over-trained
Answer: B
Q99
Which dataset is used to tune hyperparameters?
A. Training dataset
B. Validation dataset
C. Production dataset
D. Archived dataset
Answer: B
Q100
Which metric is MOST appropriate for imbalanced classification?
A. Accuracy
B. Precision-Recall or F1 Score
C. Training speed
D. Epoch count
Answer: B
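Precision, recall, and F1 can be computed directly from the confusion counts; a minimal sketch:

```python
def f1_report(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

On a 95%-negative dataset, a model that always predicts "negative" scores 95% accuracy yet has F1 = 0 for the positive class, which is why F1 is preferred for imbalanced problems.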
Q101
Cross-validation helps to:
A. Increase bias
B. Improve robustness and reduce overfitting
C. Deploy faster
D. Eliminate drift
Answer: B
Q102
Confusion matrix is used to evaluate:
A. Feature importance
B. Classification performance
C. Cloud infrastructure
D. Data storage
Answer: B
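A confusion matrix is simply a tally of actual versus predicted labels; a minimal version:

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows are actual classes, columns are predicted classes."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for actual, predicted in zip(y_true, y_pred):
        matrix[index[actual]][index[predicted]] += 1
    return matrix
```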
Q103
Hyperparameters differ from model parameters because they are:
A. Learned automatically
B. Set before training
C. Derived from data
D. Deployment metrics
Answer: B
Q104
Feature selection helps to:
A. Increase model size
B. Improve interpretability and reduce overfitting
C. Increase training time
D. Replace validation
Answer: B
Q105
Data leakage occurs when:
A. Dataset too large
B. Future information is used during training
C. Model is deployed
D. Cloud cost increases
Answer: B
Q106
Bias-variance tradeoff refers to:
A. Storage vs compute
B. Balance between underfitting and overfitting
C. Accuracy vs latency
D. Compliance vs governance
Answer: B
Q107
Which is an example of supervised learning?
A. Clustering
B. Reinforcement reward loop
C. Classification with labeled data
D. Dimensionality reduction
Answer: C
Q108
Model explainability tools (e.g., SHAP, LIME) help to:
A. Increase GPU speed
B. Interpret prediction decisions
C. Reduce storage
D. Train faster
Answer: B
Q109
ROC-AUC measures:
A. Cloud performance
B. Model's ability to distinguish between classes
C. Storage usage
D. Labeling quality
Answer: B
Q110
Which technique helps prevent overfitting?
A. Increasing model complexity
B. Regularization
C. Removing validation
D. Ignoring noise
Answer: B
Q111
Ensemble methods improve performance by:
A. Using single weak model
B. Combining multiple models
C. Increasing storage
D. Reducing governance
Answer: B
Q112
Model validation ensures:
A. Compliance only
B. Performance meets defined acceptance criteria
C. Faster GPU processing
D. Business alignment
Answer: B
Q113
Which is MOST critical before final approval?
A. High training accuracy
B. Validation performance meeting business threshold
C. Large dataset
D. Advanced architecture
Answer: B
Q114
Concept drift refers to:
A. Infrastructure change
B. Change in relationship between input and output over time
C. Dataset duplication
D. Governance breach
Answer: B
Q115
Model retraining strategy should be based on:
A. Calendar only
B. Performance monitoring triggers
C. Developer availability
D. Budget cycle
Answer: B
Q116
Explainable AI is especially critical in:
A. Gaming apps
B. Regulated industries (finance, healthcare)
C. Social media posts
D. Entertainment
Answer: B
Q117
A model approved for production must have:
A. Documented evaluation results
B. Only high complexity
C. Maximum parameters
D. No monitoring plan
Answer: A
Q118
Which technique reduces dimensionality?
A. Gradient descent
B. Principal Component Analysis (PCA)
C. Cross-validation
D. Ensemble stacking
Answer: B
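For intuition, PCA on two-dimensional data reduces to finding the eigenvalues of the 2x2 covariance matrix; this hand-rolled sketch avoids any library:

```python
import math

def pca_variances_2d(points):
    """Variances along the two principal components of 2-D points,
    i.e. the eigenvalues of the covariance matrix, largest first."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via the quadratic formula
    trace, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(trace * trace / 4 - det, 0.0))
    return [trace / 2 + disc, trace / 2 - disc]
```

For points lying exactly on a line, the second eigenvalue is zero: one component captures all the variance, so the second dimension can be dropped without losing information.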
Q119
Model comparison should primarily consider:
A. Developer preference
B. Business objective alignment and performance metrics
C. Cloud vendor
D. Training duration only
Answer: B
Q120
The FINAL responsibility of the AI project manager in model evaluation is to:
A. Code the algorithm
B. Ensure model meets business, ethical, and performance requirements
C. Increase GPU count
D. Design UI
Answer: B
🟠 DOMAIN 5: Support Responsible & Trustworthy AI Efforts (20 Questions)
(Weightage ~15%)
Q121
Responsible AI primarily ensures that AI systems are:
A. Fast and scalable
B. Ethical, fair, transparent, and compliant
C. Highly complex
D. Cost-efficient only
Answer: B
Explanation: Responsible AI focuses on fairness, accountability, transparency, and compliance.
Q122
Algorithmic bias occurs when:
A. Model accuracy is low
B. Certain groups are unfairly disadvantaged by predictions
C. Dataset is large
D. GPU fails
Answer: B
Q123
Which principle requires AI decisions to be understandable?
A. Scalability
B. Explainability
C. Automation
D. Optimization
Answer: B
Q124
In regulated industries, AI decisions must be:
A. Fully automated without oversight
B. Auditable and documented
C. Hidden for IP protection
D. Randomized
Answer: B
Q125
A credit scoring model rejects applications disproportionately from a minority group. This indicates:
A. Overfitting
B. Fairness issue
C. Data redundancy
D. Infrastructure error
Answer: B
Q126
The MOST effective way to mitigate bias is:
A. Increase GPU power
B. Diversify and balance training data
C. Increase model complexity
D. Ignore protected attributes
Answer: B
Q127
Transparency in AI systems requires:
A. Publishing source code publicly
B. Clear documentation of model purpose and limitations
C. Increasing dataset size
D. Faster retraining
Answer: B
Q128
AI governance framework defines:
A. Training dataset size
B. Roles, responsibilities, oversight mechanisms
C. Hyperparameters
D. Coding language
Answer: B
Q129
Which regulation focuses on data protection in the EU?
A. HIPAA
B. GDPR
C. SOX
D. Basel III
Answer: B
Q130
Human-in-the-loop control is important when:
A. Risk level is low
B. Decisions have high societal impact
C. Model accuracy is 100%
D. Dataset is small
Answer: B
Q131
Model cards are used to:
A. Improve training speed
B. Document model purpose, performance, and limitations
C. Reduce bias automatically
D. Increase GPU utilization
Answer: B
Q132
Which is a key accountability mechanism?
A. Anonymous deployment
B. Clear ownership and audit trails
C. Hidden model logic
D. Random monitoring
Answer: B
Q133
Privacy-by-design means:
A. Adding privacy after deployment
B. Embedding privacy controls from the beginning
C. Ignoring compliance
D. Deleting logs
Answer: B
Q134
Which is a transparency risk?
A. Explainable outputs
B. Black-box decision making without documentation
C. Model monitoring
D. Audit logging
Answer: B
Q135
Adversarial attacks attempt to:
A. Improve accuracy
B. Manipulate model predictions
C. Increase fairness
D. Reduce bias
Answer: B
Q136
AI audit readiness requires:
A. No documentation
B. Documented processes, datasets, validation results
C. Faster GPUs
D. Vendor preference
Answer: B
Q137
Ethical AI frameworks emphasize:
A. Profit only
B. Fairness, accountability, transparency, safety
C. Model complexity
D. Cloud efficiency
Answer: B
Q138
Explainability is especially critical when:
A. Using unsupervised clustering only
B. Decisions affect individual rights
C. Model is internal
D. Dataset small
Answer: B
Q139
Which is a key fairness metric?
A. Training speed
B. Demographic parity
C. GPU usage
D. Cloud cost
Answer: B
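Demographic parity compares positive-prediction (selection) rates across groups. The "four-fifths" reading of the ratio below is a common rule of thumb, not a legal threshold:

```python
def demographic_parity(predictions, groups, positive=1):
    """Selection rate per group plus the parity ratio (min rate / max rate)."""
    rates = {}
    for group in set(groups):
        member_preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(p == positive for p in member_preds) / len(member_preds)
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio
```

A ratio well below 0.8 is often treated as a signal to investigate the model and its training data for bias.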
Q140
The AI project manager's responsibility in Responsible AI is to:
A. Ignore ethics
B. Ensure governance, compliance, and fairness controls are integrated
C. Focus only on accuracy
D. Delegate all oversight to developers
Answer: B
