An organization ingests performance metrics from an external server and merges them with its internal logs. A data analyst wants to confirm that the two collections align. Which approach helps verify this alignment?
Cross-validation is the right approach: it compares data drawn from more than one source, revealing unmatched or conflicting records. The other options fall short. Data profiling focuses on structural details such as field formats but does not confirm that entries match across sources. Spot-checking a random sample can miss discrepancies outside the small subset examined. Reviewing data against reasonable expectations catches implausible values but does not ensure alignment between two merged sources.
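A minimal sketch of this kind of cross-validation, assuming both collections can be keyed by a shared record ID (the record IDs and metric values below are hypothetical):

```python
def cross_validate(source_a, source_b):
    """Compare two collections keyed by record ID and report discrepancies."""
    keys_a, keys_b = set(source_a), set(source_b)
    return {
        "only_in_a": sorted(keys_a - keys_b),          # records missing from B
        "only_in_b": sorted(keys_b - keys_a),          # records missing from A
        "value_mismatches": sorted(                    # shared keys, unequal values
            k for k in keys_a & keys_b if source_a[k] != source_b[k]
        ),
    }

# Hypothetical data: latency metrics from the external server vs. internal logs.
external = {"req-001": 120, "req-002": 95, "req-004": 88}
internal = {"req-001": 120, "req-002": 97, "req-003": 101}

report = cross_validate(external, internal)
# report["only_in_a"]        -> ["req-004"]
# report["only_in_b"]        -> ["req-003"]
# report["value_mismatches"] -> ["req-002"]
```

Any non-empty entry in the report flags a misalignment worth investigating, which is exactly the signal a sample check or range review could miss.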