When AI Asks for Help

We conducted a mixed-methods study with 120 participants to evaluate how conversational AI can request human help without undermining trust, yielding design insights for user engagement with AI systems.

Assumptions

1. AI products often need human input — to resolve uncertainty, validate a decision, or add context.
2. Current conversational AI systems such as ChatGPT already ask for human assistance (e.g., “Which response do you think works better?”).

The Problem

AI products often need human input, but a bot that constantly asks for help risks disengaging its users. The challenge: how can AI ask for help in a way that maximizes assistance without tanking trust or competence perception?

The Solution: Deservingness Cues

When an AI asks for help, it’s not just a technical moment — it’s a social interaction. Research on human–human cooperation shows people help more when the asker seems to deserve it.

We wanted to test: does the same dynamic apply to AI?

The hypothesis:
1. If the AI admits struggle (“Needed”), users may jump in — but see it as less competent.
2. If the AI shows effort (“Earned”), users may respect it more.
3. If the AI frames users as key resources (“Resource”), it might balance competence with collaboration.
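To make the three framings concrete, here’s a minimal sketch of how each cue could be rendered as a help-request template. The DeservingnessCue names and the exact wording are illustrative assumptions, not the prompts used in the study.

```python
from enum import Enum

class DeservingnessCue(Enum):
    NEEDED = "needed"      # admit struggle up front
    EARNED = "earned"      # show effort before asking
    RESOURCE = "resource"  # frame the user as a key resource

# Hypothetical help-request templates for each cue; wording is
# illustrative, not the exact phrasing used in the study.
HELP_REQUEST_TEMPLATES = {
    DeservingnessCue.NEEDED: (
        "I'm struggling with this one and may get it wrong. "
        "Could you take a look at {item}?"
    ),
    DeservingnessCue.EARNED: (
        "I worked through {steps} steps and narrowed it down to two options. "
        "Could you confirm which answer to {item} is correct?"
    ),
    DeservingnessCue.RESOURCE: (
        "Your judgment on questions like this is exactly what I need. "
        "Which option for {item} would you pick?"
    ),
}

def build_help_request(cue: DeservingnessCue, item: str, steps: int = 3) -> str:
    """Render a help request for the given cue."""
    return HELP_REQUEST_TEMPLATES[cue].format(item=item, steps=steps)

print(build_help_request(DeservingnessCue.EARNED, "question 4"))
```

The point is the framing, not the copy: Needed admits struggle, Earned shows work before asking, and Resource centers the user’s judgment.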

Market & User Fit

1. Users: Knowledge workers using AI copilots in ambiguous or high-stakes workflows (e.g., customer support agents, analysts, PMs).
2. Market: AI adoption hinges on trust and usability. Products that balance accuracy with graceful human handoffs are more likely to stick and expand.

Shortcomings & Trade-offs

1. Overusing “Needed” cues risks eroding perceived competence.
2. “Earned” and “Resource” framings require concise explanations — too much detail slows users down.
3. The study was controlled; in-the-wild dynamics (noise, time pressure) may shift outcomes.

Study Method

Participants: 120 U.S. adults.

Task: Solving LSAT logical reasoning questions with conversational AI.

Conditions: AI either did not ask for help, or asked using Earned Deservingness, Needed Deservingness, or Resource Deservingness cues.

Measures: Number of times participants helped, trust, appropriateness of requests, emotions, and perceptions of intelligence, friendliness, and willingness to collaborate again.
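For readers who want to replicate the setup, here’s a generic sketch of balanced between-subjects assignment to the four conditions. The writeup doesn’t describe the study’s actual tooling, so the function name and the shuffle-then-deal approach are assumptions.

```python
import random

CONDITIONS = ["no_ask", "earned", "needed", "resource"]

def assign_conditions(participant_ids: list[str], seed: int = 42) -> dict[str, str]:
    """Balanced between-subjects assignment: shuffle participants,
    then deal them round-robin into the four conditions."""
    rng = random.Random(seed)
    ids = participant_ids[:]
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

# 120 participants -> 30 per condition
assignment = assign_conditions([f"P{i:03d}" for i in range(120)])
```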

Study Insights

1. Helping Behavior:
AI that asked for help received significantly more assistance than AI that didn’t.

Of the cues, Needed Deservingness was the strongest driver of human help. Earned Deservingness and Resource Deservingness did not significantly increase actual helping behavior.

2. Perceptions of AI:

- Friendliness: AI that asked for help was rated as friendlier than AI that didn’t.

- Trust, intelligence, and confidence:
Earned Deservingness and Resource Deservingness improved perceptions of AI intelligence, trust, appropriateness, and willingness to give it another chance.
Needed Deservingness, while effective in eliciting help, reduced perceptions of AI competence and appropriateness.

3. How Humans Interpret Deservingness:
Participants read longer response times as Earned Deservingness, error rates as Needed Deservingness, and detailed reasoning as Resource Deservingness.

However, perceptions were also shaped by:

- The act of asking for help itself (sometimes seen as effort, sometimes as neediness).
- Quality and correctness of AI’s answers.
- Ability to incorporate user input to improve answers.
- Prior experiences and beliefs about AI.

Proposed Solution

1. When short-term speed/coverage matters:
Use Needed framing. Expect more replies, but throttle frequency to avoid “needy bot” syndrome (see the sketch after this list).

2. When building long-term trust:
Use Earned framing. Show effort (step breakdowns, rationale), then ask for validation.

3. When targeting power users:
Use Resource framing. Signal competence and invite optimization (ranking, tweaking, prioritization).
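Putting the three recommendations together, here’s a minimal policy sketch: pick a framing from context and rate-limit Needed asks so the bot doesn’t come across as needy. The selection rules and the five-minute cooldown are illustrative assumptions, not values validated by the study.

```python
import time

class HelpRequestPolicy:
    """Pick a deservingness framing and throttle 'Needed' requests.

    The cue-selection rules and the cooldown value are illustrative
    assumptions, not parameters validated by the study.
    """

    def __init__(self, needed_cooldown_s: float = 300.0):
        self.needed_cooldown_s = needed_cooldown_s
        self._last_needed_at = float("-inf")

    def choose_cue(self, *, time_critical: bool, power_user: bool) -> str | None:
        if time_critical:
            # Needed framing elicits the most help, but overuse erodes
            # perceived competence, so enforce a cooldown between asks.
            now = time.monotonic()
            if now - self._last_needed_at < self.needed_cooldown_s:
                return None  # skip the ask rather than seem needy
            self._last_needed_at = now
            return "needed"
        if power_user:
            return "resource"  # signal competence, invite optimization
        return "earned"  # default: show effort, then ask for validation
```

For example, choose_cue(time_critical=True, power_user=False) returns “needed” at most once per cooldown window; within the window the ask is skipped entirely rather than repeated.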