5 Questions to Ask Before You Automate Anything

1. What problem are we actually solving?

Automation initiatives often begin with a tool, not a problem.

A team buys workflow software. Another deploys AI agents. A third adds RPA bots. The justification sounds familiar: “We need efficiency.”

Efficiency of what, exactly?

Before automating, leaders must state the problem in operational terms:

  • What is slow?
  • What is error-prone?
  • What is expensive at current volume?

If the answer is vague, automation will magnify confusion.

Example:
A finance team automates invoice processing to “save time.” After launch, disputes increase because upstream data was inconsistent. The real problem was data quality, not processing speed.

Rule of thumb:
If the problem cannot be expressed as a measurable constraint (time, cost, error rate), automation is premature.

Pros of clarity

  • Targeted scope
  • Real ROI measurement

Cons if skipped

  • Tool sprawl
  • Automation that “works” but delivers no business value

2. Is the process stable and well-defined?

Automation assumes repeatability. Humans tolerate ambiguity. Machines do not.

According to research from McKinsey & Company, a large share of failed automation programs stems from automating processes that were still evolving or poorly documented.

Ask:

  • Has this process changed in the last 90 days?
  • Are steps executed the same way across teams?
  • Can one person document it end-to-end without exceptions?

If the answer is no, automation will lock in temporary behavior.

Example:
Customer onboarding differs between regions. Automation encodes one version, forcing workarounds elsewhere. Support tickets rise. Trust drops.

Minimum standard before automation

  • One clear owner
  • One documented flow
  • One agreed definition of “done”

3. Where do exceptions and edge cases occur?

Every real process has exceptions. Mature automation plans for them explicitly.

Ignoring edge cases does not remove them. It pushes them downstream, where fixes are more expensive.

Studies summarized by Harvard Business Review show that unhandled exceptions are a major cause of automation rework and employee resistance.

Leaders should ask:

  • What percentage of cases do not follow the “happy path”?
  • Which exceptions are frequent but low impact?
  • Which are rare but high risk?

Example:
An HR automation handles 95% of leave requests perfectly. The remaining 5% include medical or regulatory cases. Without clear exception handling, trust collapses.

Good automation design

  • Automate the common path
  • Route exceptions deliberately
  • Log, review, and learn from them
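The three design points above can be sketched in a few lines. The routing rules here are invented for illustration (the `AUTOMATABLE` set, the amount cutoff, and the leave-request categories are assumptions, not a prescription); the point is the shape: automate the common path, send everything else to a human queue, and keep a log.

```python
from dataclasses import dataclass

@dataclass
class Request:
    id: str
    amount: float
    category: str  # e.g. "standard", "medical", "regulatory"

# Hypothetical rule: only standard, low-value cases are automatable.
AUTOMATABLE = {"standard"}

exception_log: list[Request] = []

def route(request: Request) -> str:
    """Automate the common path; route exceptions deliberately."""
    if request.category in AUTOMATABLE and request.amount < 10_000:
        return "auto_approve"
    exception_log.append(request)  # log exceptions for review and learning
    return "human_review"
```

Reviewing `exception_log` periodically tells you which exceptions are frequent enough to design into the automation next, and which should stay with humans.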

4. What decisions still require human judgment?

Not all decisions should be automated.

Some require:

  • Context
  • Ethical reasoning
  • Regulatory interpretation
  • Accountability

This is where human-in-the-loop design matters.

According to guidance from NIST, systems that combine automation with human oversight are more resilient in high-impact domains.

Ask:

  • Where could a wrong decision cause irreversible damage?
  • Who is accountable if the system is wrong?
  • At what threshold should a human intervene?

Example:
An automated credit decision flags a borderline case. A human review step prevents a reputational and compliance issue.
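The borderline-case pattern above can be sketched as a simple threshold rule. This is a minimal illustration, not a real credit model: the score bands and names (`AUTO_APPROVE_SCORE`, `AUTO_DECLINE_SCORE`) are hypothetical, and in practice the thresholds themselves are the governance decision.

```python
# Hypothetical thresholds; where these bands sit is a governance
# decision, not a technical one.
AUTO_APPROVE_SCORE = 0.85
AUTO_DECLINE_SCORE = 0.30

def credit_decision(score: float) -> str:
    """Automate the clear-cut cases; escalate the borderline band to a human."""
    if score >= AUTO_APPROVE_SCORE:
        return "approve"
    if score < AUTO_DECLINE_SCORE:
        return "decline"
    # Borderline: a person reviews, and accountability stays with a person.
    return "human_review"
```

The cost of this design is the "slightly slower throughput" noted below: only the middle band waits on a reviewer, while the bulk of traffic still flows automatically.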

Pros of human-in-the-loop

  • Risk control
  • Better learning signals

Cons

  • Slightly slower throughput

For CEOs and Directors, this is not a technical decision. It is a governance decision.

5. How will we measure success after automation?

Many teams declare success at launch. That is a mistake.

Automation success is not deployment. It is sustained improvement.

Metrics must be defined before automation begins:

  • Cycle time reduction (%)
  • Error rate change
  • Cost per transaction
  • Human hours freed (and redeployed)

Research from MIT Sloan Management Review emphasizes that organizations often overestimate automation ROI by failing to measure post-deployment outcomes.

Example:
A sales ops automation reduces manual entry by 20%. However, lead quality drops, increasing downstream sales effort. Net ROI is negative.

Good measurement practice

  • Baseline first
  • Measure weekly
  • Review monthly
  • Kill or adjust fast
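A net-ROI check like the sales-ops example can be made concrete with a small formula. The function and its inputs are illustrative assumptions, not a standard model: it nets freed hours against new downstream effort and tooling cost, which is exactly how "manual entry down 20%" can still come out negative.

```python
def net_roi(hours_saved: float, hours_added_downstream: float,
            hourly_cost: float, monthly_tool_cost: float) -> float:
    """Net monthly ROI: value of freed hours, minus new downstream
    work created and the cost of the tooling itself."""
    return (hours_saved - hours_added_downstream) * hourly_cost - monthly_tool_cost

# Illustrative numbers: 80 hours of entry saved, but 100 extra hours
# of downstream sales effort chasing lower-quality leads.
print(net_roi(80, 100, hourly_cost=50, monthly_tool_cost=500))   # negative
print(net_roi(100, 20, hourly_cost=50, monthly_tool_cost=500))   # positive
```

Running this against a pre-automation baseline, weekly, is what turns "we launched" into "we know whether it worked."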

Comparison Table: Automate Now vs Fix First

Dimension          | Automate Now | Fix First
Process clarity    | Low          | High
Exception handling | Ad hoc       | Designed
Risk exposure      | Hidden       | Managed
ROI visibility     | Assumed      | Measured
Long-term cost     | High         | Controlled

What to Do Next

  1. Require written answers to all five questions before approving automation.
  2. Pilot with clear metrics and a kill switch.
  3. Review outcomes at 30, 60, and 90 days.

Automation is not a strategy. It is an amplifier.

FAQ

Is automation always a cost saver?

No. Poorly designed automation can increase rework, maintenance, and exception handling costs.

What is human-in-the-loop automation?

It is automation designed with defined checkpoints where humans review or override decisions.

How many processes should we automate at once?

Start with one stable, high-volume process. Scale only after measured success.

Should AI be treated differently from traditional automation?

AI adds uncertainty and drift. It requires stronger monitoring and governance.

What is automation debt?

The long-term cost of maintaining brittle or poorly designed automation that no longer fits reality.
