In my opinion there is no single answer to the question of whether a computer may match party master data on its own, or whether a human must always do or confirm the match due to the risk of errors.
One approach I often use when matching party master data is to have the computer divide the material to be inspected into three pots:
A: The positive automated matches. Ideally you take samples for manual inspection as continuous quality assurance, counting and evaluating any false positives.
C: The negative automated matches. Quality assurance samples may be used for continuous improvement, counting and evaluating any false negatives – but this is harder.
B: The dubious part, selected for manual inspection. The results may feed probabilistic learning, thereby reducing the B pot over time.
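The three-pot split above can be sketched as a simple triage on a similarity score. This is a minimal illustration, assuming a score in the range 0 to 1 and two hypothetical, configurable thresholds – the names `triage`, `upper` and `lower` are mine, not from any particular matching product:

```python
def triage(score, upper=0.9, lower=0.6):
    """Assign a record pair to pot A, B or C by similarity score.

    upper and lower are configurable thresholds (illustrative values):
    scores at or above upper go to pot A, scores below lower go to
    pot C, and everything in between lands in the dubious pot B.
    """
    if score >= upper:
        return "A"  # positive automated match
    if score < lower:
        return "C"  # negative automated match
    return "B"      # dubious: route to manual inspection

# Example: three candidate pairs with different similarity scores
pots = [triage(s) for s in (0.95, 0.75, 0.40)]
print(pots)  # ['A', 'B', 'C']
```

Raising `upper` and lowering `lower` shrinks the automated pots and grows pot B, trading manual effort for fewer automated errors.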
With large data volumes, human interaction is very costly and time-consuming, which forces you to rely more on computerised results.
Also, the purpose of the match may guide the need for human interaction. In a multi-purpose environment this may force you to take a real-world approach.
In any case, the ability to configure whether a given similarity between records ends up in pot A, B or C is a must.
The post called “When computer says maybe” describes how such functionality for manual inspection may look.