<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://critictracking.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://critictracking.com/" rel="alternate" type="text/html" /><updated>2026-04-20T14:35:50+00:00</updated><id>https://critictracking.com/feed.xml</id><title type="html">Critic</title><subtitle>Actionable bug reports for mobile teams. In-app feedback with automatic device telemetry. One line of code. $20/month per app.</subtitle><entry><title type="html">How to Invite Team Members to Your Critic Organization and Assign Roles</title><link href="https://critictracking.com/blog/how-to-invite-team-members-to-your-critic-organization-and-assign-roles/" rel="alternate" type="text/html" title="How to Invite Team Members to Your Critic Organization and Assign Roles" /><published>2026-03-31T13:00:00+00:00</published><updated>2026-03-31T13:00:00+00:00</updated><id>https://critictracking.com/blog/how-to-invite-team-members-to-your-critic-organization-and-assign-roles</id><content type="html" xml:base="https://critictracking.com/blog/how-to-invite-team-members-to-your-critic-organization-and-assign-roles/"><![CDATA[<p>This guide covers how to invite teammates to your Critic organization, choose their role (Owner or Member), and what happens after the invitation is sent.</p>

<h2 id="prerequisites">Prerequisites</h2>

<ul>
  <li>You must be an <strong>Owner</strong> of the organization. Only Owners can send invitations.</li>
  <li>You need the invitee’s email address.</li>
  <li>The organization must already exist (created during signup or via the dashboard).</li>
</ul>

<h2 id="steps">Steps</h2>

<ol>
  <li>Log in to Critic at <strong>critic.inventiv.io</strong>.</li>
  <li>Navigate to your <strong>Organization</strong> from the dashboard.</li>
  <li>Open the <strong>Invite a Member</strong> form within the organization.</li>
  <li>Enter the invitee’s email in the <strong>Email</strong> field.</li>
  <li>Select a <strong>Role</strong> from the dropdown:
    <ul>
      <li><strong>Member</strong> (default): Can view the organization and its bug reports and add comments. Cannot invite others or change organization settings.</li>
      <li><strong>Owner</strong>: Can manage billing, edit organization settings, invite and manage other members, and delete the organization.</li>
    </ul>
  </li>
  <li>Click <strong>Send Invitation</strong>.</li>
  <li>A confirmation appears: <em>“An invitation has been sent to [email].”</em></li>
</ol>
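<p>In code terms, the two roles boil down to a capability set. A minimal sketch (the capability names below are illustrative, not Critic's actual data model):</p>

```python
# Hypothetical sketch of Critic's two-role model. Capability names are
# illustrative assumptions, not Critic's actual API.
ROLE_CAPABILITIES = {
    "member": {"view_reports", "add_comments"},
    "owner": {"view_reports", "add_comments", "invite_members",
              "manage_billing", "edit_settings", "delete_organization"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_CAPABILITIES.get(role.lower(), set())

print(can("owner", "invite_members"))   # True
print(can("member", "invite_members"))  # False
```

<p>Under this model, a Member never sees the <strong>Invite a Member</strong> form at all: the action simply isn't in their capability set.</p>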

<p>Email addresses are case-insensitive. <code class="language-plaintext highlighter-rouge">Jane@example.com</code> and <code class="language-plaintext highlighter-rouge">jane@example.com</code> are treated as the same address.</p>

<h2 id="what-the-invitee-receives">What the Invitee Receives</h2>

<p>The invitation flow depends on whether the invitee already has a Critic account.</p>

<p><strong>If the invitee already has a Critic account:</strong></p>

<ol>
  <li>They receive an email with the subject <em>“You’ve been invited to join [Organization Name].”</em></li>
  <li>The email contains an <strong>Accept Invitation</strong> link.</li>
  <li>Clicking the link shows the organization name and assigned role: <em>“You have been invited to join [Organization Name] as a [role].”</em></li>
  <li>They click <strong>Accept Invitation</strong>.</li>
  <li>The dashboard confirms: <em>“You have joined [Organization Name].”</em></li>
</ol>

<p><strong>If the invitee has no Critic account:</strong></p>

<ol>
  <li>They receive an email inviting them to create a Critic account.</li>
  <li>Clicking the link opens a registration form (name, password).</li>
  <li>Completing registration automatically accepts the organization invitation.</li>
  <li>They land in the dashboard as a member of the organization with the assigned role.</li>
</ol>

<h2 id="troubleshooting">Troubleshooting</h2>

<p><strong>Error:</strong> <em>“This person has already been invited to the organization.”</em>
<strong>Cause:</strong> A pending invitation already exists for this email address.
<strong>Fix:</strong> Wait for the invitee to accept the existing invitation. If they didn’t receive it, ask them to check spam. The invitation remains active until accepted.</p>

<p><strong>Error:</strong> <em>“This person is already a member of the organization.”</em>
<strong>Cause:</strong> The email address belongs to someone who has already accepted an invitation.
<strong>Fix:</strong> The person already has access. Check the organization’s member list to confirm their current role.</p>

<p><strong>Problem:</strong> The <strong>Invite a Member</strong> option is missing or inaccessible.
<strong>Cause:</strong> You are a <strong>Member</strong>, and only <strong>Owners</strong> can send invitations.
<strong>Fix:</strong> Ask an existing Owner to either invite the person directly or change your role to Owner.</p>

<p><strong>Error:</strong> <em>“You must be signed in with the email address the invitation was sent to.”</em>
<strong>Cause:</strong> The invitee clicked the acceptance link while logged into a different Critic account.
<strong>Fix:</strong> Sign out, then sign back in with the email address that received the invitation before clicking the link again.</p>

<h2 id="organization-scoped-access-for-agencies">Organization-Scoped Access for Agencies</h2>

<p>Invitations are scoped to a single organization. Inviting someone to Organization A gives them access only to Organization A. Agencies can safely invite client stakeholders to their specific organization without exposing other clients’ bug reports or data.</p>]]></content><author><name>dave_lane</name></author><category term="Critic" /><summary type="html"><![CDATA[Invite teammates to your Critic organization, assign Owner or Member roles, and troubleshoot common invitation errors.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://critictracking.com/assets/images/posts/2026-03-31-how-to-invite-team-members-to-your-critic-organization-and-assign-roles.webp" /><media:content medium="image" url="https://critictracking.com/assets/images/posts/2026-03-31-how-to-invite-team-members-to-your-critic-organization-and-assign-roles.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Free In-App Bug Reporting SDK Evaluation Scorecard: Compare Critic, Shake, Gleap, Bugsee &amp;amp; Wiredash</title><link href="https://critictracking.com/blog/free-in-app-bug-reporting-sdk-evaluation-scorecard-compare-critic-shake-gleap-bugsee-wiredash/" rel="alternate" type="text/html" title="Free In-App Bug Reporting SDK Evaluation Scorecard: Compare Critic, Shake, Gleap, Bugsee &amp;amp; Wiredash" /><published>2026-03-30T13:00:00+00:00</published><updated>2026-03-30T13:00:00+00:00</updated><id>https://critictracking.com/blog/free-in-app-bug-reporting-sdk-evaluation-scorecard-compare-critic-shake-gleap-bugsee-wiredash</id><content type="html" xml:base="https://critictracking.com/blog/free-in-app-bug-reporting-sdk-evaluation-scorecard-compare-critic-shake-gleap-bugsee-wiredash/"><![CDATA[<p>You’ve read the listicles. You’ve seen the feature matrices. You still have no idea which bug reporting SDK to pick, because none of those articles gave you a framework for <em>your</em> priorities. 
This is a weighted evaluation scorecard with eight dimensions, binary pass/fail checks, and adjustable weights, designed for small mobile teams of 1 to 15 engineers. It includes a pre-filled example comparing five tools and a 30-minute evaluation sprint guide so you can score any SDK during a free trial.</p>

<h2 id="whats-in-the-scorecard">What’s in the Scorecard</h2>

<p>The scorecard evaluates bug reporting SDKs across eight dimensions, each with specific, testable checks (no subjective “ease of use” ratings):</p>

<ol>
  <li><strong>Platform coverage:</strong> Does the SDK support every platform you ship on today <em>and</em> plan to ship on next?</li>
  <li><strong>Automatic device telemetry depth:</strong> Battery, memory, disk, network, OS, CPU captured without extra code?</li>
  <li><strong>Setup time:</strong> Can you go from zero to first test report in under 30 minutes?</li>
  <li><strong>SDK footprint:</strong> How much weight does the SDK add to your app binary?</li>
  <li><strong>Pricing model transparency:</strong> Is the price published, predictable, and within budget without a sales call?</li>
  <li><strong>Custom metadata flexibility:</strong> Can you attach arbitrary data (user IDs, feature flags, JSON) to every report?</li>
  <li><strong>API completeness:</strong> Can you do everything via API that you can do in the dashboard?</li>
  <li><strong>Permission granularity:</strong> Can you isolate access per app/client, rather than only per org?</li>
</ol>

<p>Why binary checks instead of subjective scales? “Ease of use: 7/10” is meaningless across reviewers. “SDK captures battery level automatically without additional code: yes/no” is testable and reproducible by anyone running a trial.</p>

<h2 id="the-scorecard">The Scorecard</h2>

<p>Score each check 0 (fail) or 1 (pass). Multiply each dimension's total passes by its weight, then sum the weighted subtotals across all dimensions for the final score.</p>

<table>
  <thead>
    <tr>
      <th>Dimension</th>
      <th>Wt</th>
      <th>Check</th>
      <th>Tool A</th>
      <th>Tool B</th>
      <th>Tool C</th>
      <th>Tool D</th>
      <th>Tool E</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Setup time</strong></td>
      <td>3</td>
      <td>One-line SDK initialization (no UI code required)</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>First test report submitted in under 30 min</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Default feedback UI works out of the box</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>Pricing transparency</strong></td>
      <td>3</td>
      <td>Price published on website without “contact sales”</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Monthly cost calculable for your number of apps</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>No DAU/MAU-based variable pricing</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>Telemetry depth</strong></td>
      <td>3</td>
      <td>Captures battery level automatically</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Captures memory metrics automatically</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Captures disk space automatically</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Captures network status automatically</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Captures console logs (50+ lines) automatically</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>Custom metadata</strong></td>
      <td>2</td>
      <td>Accepts arbitrary JSON on every report</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Custom fields visible in dashboard</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>Platform coverage</strong></td>
      <td>2</td>
      <td>Supports iOS</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Supports Android</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Supports Flutter</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Supports JavaScript/Web</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>API completeness</strong></td>
      <td>2</td>
      <td>Full REST API available</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Can submit reports via API (not just SDK)</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>API docs publicly accessible</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>Permission granularity</strong></td>
      <td>1</td>
      <td>Per-app access control (beyond org-level)</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Role-based permissions per project</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>Advanced features</strong></td>
      <td>1</td>
      <td>Session replay</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Crash reporting</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Native PM integrations (Jira, Linear, Slack)</td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
      <td> </td>
    </tr>
  </tbody>
</table>

<h2 id="pre-filled-example-critic-vs-gleap-vs-shake-vs-bugsee-vs-wiredash">Pre-Filled Example: Critic vs. Gleap vs. Shake vs. Bugsee vs. Wiredash</h2>

<p>Here’s the scorecard completed with verified data for five tools. Pricing reflects published rates as of March 2026.</p>

<table>
  <thead>
    <tr>
      <th>Dimension</th>
      <th>Wt</th>
      <th>Check</th>
      <th>Critic</th>
      <th>Gleap</th>
      <th>Shake</th>
      <th>Bugsee</th>
      <th>Wiredash</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Setup time</strong></td>
      <td>3</td>
      <td>One-line initialization</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>First report under 30 min</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Default UI out of the box</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
    </tr>
    <tr>
      <td><strong>Pricing transparency</strong></td>
      <td>3</td>
      <td>Price published, no “contact sales”</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
      <td>✅</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Cost calculable for your apps</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
      <td>✅</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>No DAU/MAU variable pricing</td>
      <td>✅</td>
      <td>❌</td>
      <td>❌</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td><strong>Telemetry depth</strong></td>
      <td>3</td>
      <td>Battery level</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Memory metrics</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Disk space</td>
      <td>✅</td>
      <td>❌</td>
      <td>❌</td>
      <td>❌</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Network status</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Console logs (50+ lines)</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td><strong>Custom metadata</strong></td>
      <td>2</td>
      <td>Arbitrary JSON per report</td>
      <td>✅</td>
      <td>❌</td>
      <td>❌</td>
      <td>❌</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Custom fields in dashboard</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
    </tr>
    <tr>
      <td><strong>Platform coverage</strong></td>
      <td>2</td>
      <td>iOS</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Android</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Flutter</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
      <td>✅</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>JavaScript/Web</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
      <td>❌</td>
      <td>❌</td>
    </tr>
    <tr>
      <td><strong>API completeness</strong></td>
      <td>2</td>
      <td>Full REST API</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Submit reports via API</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Public API docs</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td><strong>Permission granularity</strong></td>
      <td>1</td>
      <td>Per-app access control</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Role-based per project</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
      <td>❌</td>
    </tr>
    <tr>
      <td><strong>Advanced features</strong></td>
      <td>1</td>
      <td>Session replay</td>
      <td>❌</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Crash reporting</td>
      <td>❌</td>
      <td>❌</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td>Native PM integrations</td>
      <td>❌</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
  </tbody>
</table>

<h3 id="weighted-scores">Weighted Scores</h3>

<table>
  <thead>
    <tr>
      <th>Tool</th>
      <th>Setup (×3)</th>
      <th>Pricing (×3)</th>
      <th>Telemetry (×3)</th>
      <th>Metadata (×2)</th>
      <th>Platform (×2)</th>
      <th>API (×2)</th>
      <th>Permissions (×1)</th>
      <th>Advanced (×1)</th>
      <th><strong>Total</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Critic</strong></td>
      <td>9</td>
      <td>9</td>
      <td>15</td>
      <td>4</td>
      <td>8</td>
      <td>6</td>
      <td>2</td>
      <td>0</td>
      <td><strong>53</strong></td>
    </tr>
    <tr>
      <td><strong>Gleap</strong></td>
      <td>9</td>
      <td>6</td>
      <td>12</td>
      <td>2</td>
      <td>8</td>
      <td>6</td>
      <td>2</td>
      <td>2</td>
      <td><strong>47</strong></td>
    </tr>
    <tr>
      <td><strong>Shake</strong></td>
      <td>9</td>
      <td>6</td>
      <td>12</td>
      <td>2</td>
      <td>6</td>
      <td>6</td>
      <td>2</td>
      <td>3</td>
      <td><strong>46</strong></td>
    </tr>
    <tr>
      <td><strong>Bugsee</strong></td>
      <td>9</td>
      <td>3</td>
      <td>12</td>
      <td>2</td>
      <td>4</td>
      <td>6</td>
      <td>0</td>
      <td>3</td>
      <td><strong>39</strong></td>
    </tr>
    <tr>
      <td><strong>Wiredash</strong></td>
      <td>9</td>
      <td>6</td>
      <td>0</td>
      <td>2</td>
      <td>2</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td><strong>19</strong></td>
    </tr>
  </tbody>
</table>
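<p>The arithmetic behind these totals is simple enough to sketch. Reproducing Critic's row with the default weights:</p>

```python
# Sketch of the scorecard arithmetic: each check scores 0 or 1, a
# dimension subtotal is passes multiplied by the weight, and the final
# score is the sum of subtotals. Weights and Critic's pass/fail results
# are copied from the tables in this post; swap in your own data.
WEIGHTS = {"setup": 3, "pricing": 3, "telemetry": 3, "metadata": 2,
           "platform": 2, "api": 2, "permissions": 1, "advanced": 1}

def weighted_score(checks):
    """checks maps dimension name -> list of 0/1 results for its checks."""
    return sum(WEIGHTS[dim] * sum(results) for dim, results in checks.items())

critic_checks = {
    "setup": [1, 1, 1], "pricing": [1, 1, 1], "telemetry": [1, 1, 1, 1, 1],
    "metadata": [1, 1], "platform": [1, 1, 1, 1], "api": [1, 1, 1],
    "permissions": [1, 1], "advanced": [0, 0, 0],
}
print(weighted_score(critic_checks))  # 53, matching the table above
```

<p>Adjusting for your own priorities is one dictionary edit: change the values in <code class="language-plaintext highlighter-rouge">WEIGHTS</code> and rerun the totals.</p>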

<p><strong>Pricing context:</strong> <a href="https://critictracking.com/">Critic</a> costs $20/month per app with no seat limits. Gleap’s Team plan runs $149/month ($119/month annual) for unlimited members and projects, with per-AI-response and per-email charges on top. Shake’s Premium plan is $200/month for up to 5 apps and 25 seats, with a 10,000-install cap per app across all tiers. Bugsee’s per-tier pricing is unclear from its website. Wiredash is Flutter-only with a free tier.</p>

<p>Critic’s weighted score is highest <em>for this weight configuration</em> because the weights reflect small-team priorities. If you weight session replay and crash reporting higher (enterprise priorities), Gleap or Bugsee pull ahead. The scorecard adapts to your priorities.</p>

<h2 id="how-to-run-a-30-minute-evaluation-sprint">How to Run a 30-Minute Evaluation Sprint</h2>

<p>You can score the three highest-weighted dimensions (setup time, pricing, and telemetry) hands-on during a single trial session:</p>

<ol>
  <li><strong>Minutes 0–5:</strong> Sign up for a free trial. Clock how long until you have an API key or SDK token. No credit card required? Check the pricing transparency box.</li>
  <li><strong>Minutes 5–15:</strong> Add the SDK to a test project. Initialize. Submit one test report via shake gesture or API call. Did it work on the first try without building custom UI? Score setup time checks.</li>
  <li><strong>Minutes 15–20:</strong> Open the dashboard. Inspect your test report. Which telemetry fields populated automatically: battery? memory? disk? network? logs? Score each telemetry check.</li>
  <li><strong>Minutes 20–25:</strong> Attach custom metadata to a second report: <code class="language-plaintext highlighter-rouge">{"user_id": "test", "feature_flag": "dark_mode"}</code>. Check if it appears in the dashboard. Score metadata checks.</li>
  <li><strong>Minutes 25–30:</strong> Visit the pricing page. Can you calculate your exact monthly cost for your number of apps without contacting sales? Score pricing transparency.</li>
</ol>
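<p>If you take the API route in steps 2 and 4, the request body you assemble looks roughly like this. This is a sketch with assumed field names, not Critic's documented contract; substitute whatever fields your tool's API docs actually specify:</p>

```python
import json

# Illustrative only: the field names below ("product_access_token",
# "description", "metadata") are assumptions for this sketch, not a
# documented API. Check your tool's API reference for the real contract.
def build_report_payload(token: str, description: str, metadata: dict) -> str:
    payload = {
        "product_access_token": token,   # hypothetical field name
        "description": description,
        "metadata": metadata,            # arbitrary JSON, as in step 4
    }
    return json.dumps(payload)

body = build_report_payload(
    "YOUR_PRODUCT_ACCESS_TOKEN",
    "Test report from evaluation sprint",
    {"user_id": "test", "feature_flag": "dark_mode"},
)
```

<p>POST that body to the tool's report endpoint during minutes 5–15, then check in minutes 15–25 whether both the report and the metadata keys show up in the dashboard.</p>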

<p>Verify API completeness and permission granularity from the docs; hands-on testing covers the highest-weighted dimensions first.</p>

<h2 id="how-to-adjust-weights-for-your-team">How to Adjust Weights for Your Team</h2>

<p>The default weights assume a bootstrapped team of 1 to 15 engineers shipping their first or second mobile app. If that doesn't describe your team, adjust the weights:</p>

<table>
  <thead>
    <tr>
      <th>Team Profile</th>
      <th>Weight Adjustments</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Agency (10+ client apps)</strong></td>
      <td>Permission granularity → 3. Per-app pricing matters for client passthrough billing.</td>
    </tr>
    <tr>
      <td><strong>Flutter-only team</strong></td>
      <td>Add a “Flutter-native experience” dimension at weight 3. Wiredash’s score jumps; tools without Flutter-specific features drop.</td>
    </tr>
    <tr>
      <td><strong>Team with existing Crashlytics/Sentry</strong></td>
      <td>Advanced features weight → 0. You already have crash reporting, so there’s no reason to penalize tools like Critic that skip it.</td>
    </tr>
  </tbody>
</table>

<h2 id="why-these-weights">Why These Weights?</h2>

<p>The weights reflect what actually determines whether a small team adopts and keeps a bug reporting tool.</p>

<p><strong>Setup time (weight 3):</strong> <a href="https://aqua-cloud.io/bug-reporting-mobile-apps-best-practices/">In-app bug reporting SDKs significantly reduce resolution time</a> compared to manual reporting, but only if they get integrated. For a team of three, a tool that takes days to set up never gets set up. One-line initialization is the difference between adoption and abandonment.</p>

<p><strong>Pricing transparency (weight 3):</strong> Luciq (formerly Instabug) moved to opaque DAU-based pricing. Shake caps every tier at 10,000 app installs with add-ons beyond that. Solo developers can’t call sales for a quote. Published, predictable pricing is a trust signal.</p>

<p><strong>Telemetry depth (weight 3):</strong> The entire point of an SDK over email is automatic context. If the tool fails to capture battery, memory, disk, and network without extra code, you’re still asking users “what device are you on?” Apps with easy in-app feedback see <a href="https://aqua-cloud.io/bug-reporting-mobile-apps-best-practices/">dramatically higher response rates</a> than those relying on external channels, but only when the context arrives automatically.</p>

<p><strong>Advanced features (weight 1):</strong> Session replay and AI triage are valuable for larger teams. For a small team, the core feedback loop (shake, describe, capture, deliver) handles the vast majority of bug reporting needs.</p>

<hr />

<p>Run the 30-minute sprint on your shortlist and let the scores surface the trade-offs that matter for <em>your</em> team. <a href="https://critictracking.com/getting-started/">Critic’s getting started guide</a> walks you from signup to first report in under five minutes.</p>]]></content><author><name>dave_lane</name></author><category term="Critic" /><summary type="html"><![CDATA[A weighted scorecard with 8 dimensions and binary pass/fail checks to compare in-app bug reporting SDKs, calibrated for small mobile teams.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://critictracking.com/assets/images/posts/2026-03-30-free-in-app-bug-reporting-sdk-evaluation-scorecard-compare-critic-shake-gleap-bugsee-wiredash.webp" /><media:content medium="image" url="https://critictracking.com/assets/images/posts/2026-03-30-free-in-app-bug-reporting-sdk-evaluation-scorecard-compare-critic-shake-gleap-bugsee-wiredash.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How to Add Custom Metadata to Mobile Bug Reports: User IDs, Feature Flags, and Session Data</title><link href="https://critictracking.com/blog/how-to-add-custom-metadata-to-mobile-bug-reports-user-ids-feature-flags-and-session-data/" rel="alternate" type="text/html" title="How to Add Custom Metadata to Mobile Bug Reports: User IDs, Feature Flags, and Session Data" /><published>2026-03-27T13:00:00+00:00</published><updated>2026-03-27T13:00:00+00:00</updated><id>https://critictracking.com/blog/how-to-add-custom-metadata-to-mobile-bug-reports-user-ids-feature-flags-and-session-data</id><content type="html" xml:base="https://critictracking.com/blog/how-to-add-custom-metadata-to-mobile-bug-reports-user-ids-feature-flags-and-session-data/"><![CDATA[<p>This guide shows you how to attach arbitrary JSON metadata (user IDs, feature flags, subscription tiers, session data) to every in-app bug report on Android, iOS, Flutter, and via REST API. 
By the end, every bug report your users submit will arrive with the business context needed to reproduce and prioritize issues, captured automatically alongside device telemetry. You’ll need a <a href="https://critictracking.com/">Critic</a> account (30-day free trial, no credit card required) and your app’s product access token to follow along.</p>

<h2 id="why-device-telemetry-alone-falls-short">Why Device Telemetry Alone Falls Short</h2>

<p>Two users report “checkout crashes on submit.” Both are on a Pixel 8, Android 14, 4 GB free RAM, connected via WiFi. The device telemetry is identical. But User A is on your free tier with a coupon code applied. User B is on the enterprise plan paying with a saved credit card. The crash only triggers when a coupon discount is calculated against the free tier’s pricing logic.</p>

<p>Without custom metadata, you’re staring at two identical device snapshots and no lead. With metadata (<code class="language-plaintext highlighter-rouge">"subscription_tier": "free"</code>, <code class="language-plaintext highlighter-rouge">"coupon_applied": true</code>) the pattern jumps out from the first two reports.</p>
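<p>Once metadata arrives on every report, surfacing that pattern is mechanical. A sketch (the report shape is illustrative) that counts which metadata values every affected user shares:</p>

```python
from collections import Counter

# Sketch: given a batch of reports for the same crash, count metadata
# key/value pairs to surface what the affected users have in common.
# Report shape and field names are illustrative, not Critic's schema.
def common_metadata(reports):
    counts = Counter()
    for report in reports:
        for key, value in report.get("metadata", {}).items():
            counts[(key, value)] += 1
    # Pairs shared by every report are the strongest reproduction leads.
    return [pair for pair, n in counts.items() if n == len(reports)]

reports = [
    {"metadata": {"subscription_tier": "free", "coupon_applied": True,
                  "user_id": "usr_a3f9b2"}},
    {"metadata": {"subscription_tier": "free", "coupon_applied": True,
                  "user_id": "usr_77c1d0"}},
]
print(common_metadata(reports))
# [('subscription_tier', 'free'), ('coupon_applied', True)]
```

<p>The distinct <code class="language-plaintext highlighter-rouge">user_id</code> values drop out, and the shared free-tier-plus-coupon combination is exactly the reproduction lead from the checkout example.</p>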

<p>This scenario plays out constantly. A <a href="https://devops.com/survey-fixing-bugs-stealing-time-from-development/">Rollbar survey of 950+ developers</a> found that 38% spend up to a quarter of their working hours fixing bugs, and 26% spend up to half. Data from <a href="https://coralogix.com/blog/this-is-what-your-developers-are-doing-75-of-the-time-and-this-is-the-cost-you-pay/">Coralogix</a> puts it more starkly: developers spend roughly 75% of their time debugging, about 1,500 hours per year. The bottleneck is rarely writing the fix. It’s <em>reproducing the problem</em>. And reproduction depends on context.</p>

<p>Every bug report carries three layers of context:</p>

<ol>
  <li><strong>Device telemetry</strong> (battery, memory, OS, network). Answers: <em>what hardware and software environment?</em></li>
  <li><strong>Console logs</strong> (the last 500 logcat entries on Android, stderr/stdout on iOS). Answers: <em>what happened technically?</em></li>
  <li><strong>Custom metadata</strong> (user ID, feature flags, session state, business data). Answers: <em>who is this user and what app state were they in?</em></li>
</ol>

<p>Most in-app feedback tools stop at layers one and two. Necessary, but insufficient. Layer three, the metadata you define, is where reproduction actually happens. It captures app-specific state that no automated system can infer.</p>
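<p>Side by side, the three layers might look like this in a single report (a sketch with illustrative field names; layers one and two are captured by the SDK, and only layer three is code you write):</p>

```python
# Sketch of the three context layers in one report payload.
# Field names are illustrative assumptions, not Critic's actual schema.
report = {
    "telemetry": {                       # layer 1: captured automatically
        "battery_pct": 42,
        "free_ram_mb": 4096,
        "os": "Android 14",
        "network": "wifi",
    },
    "logs": [                            # layer 2: captured automatically
        "D/Checkout: submit tapped",
        "E/Billing: NPE in applyCoupon",
    ],
    "metadata": {                        # layer 3: the JSON you attach
        "user_id": "usr_a3f9b2",
        "subscription_tier": "free",
        "coupon_applied": True,
    },
}
print(sorted(report))  # ['logs', 'metadata', 'telemetry']
```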

<p>The closest existing tutorial on custom metadata in bug reports comes from Marker.io, and it covers only their web-focused JavaScript SDK. No comparable mobile-focused guide covers iOS, Android, or Flutter. This guide fills that gap.</p>

<h2 id="prerequisites">Prerequisites</h2>

<p>Before starting, make sure you have:</p>

<ul>
  <li><strong>A Critic account</strong>: sign up at <a href="https://critic.inventiv.io/">critic.inventiv.io</a> (30-day free trial, no credit card required)</li>
  <li><strong>An organization and app</strong> created in the Critic dashboard</li>
  <li><strong>Your app’s product access token</strong>: found in your app’s settings in the dashboard</li>
  <li><strong>One of the following:</strong> an Android app (Java/Kotlin, minSdk 21+), an iOS app (Swift 5+, iOS 12+), a Flutter app (Dart 2.17+), or any HTTP client for REST API integration</li>
  <li><strong>The Critic SDK installed</strong> for your platform:
    <ul>
      <li><strong>Android:</strong> <code class="language-plaintext highlighter-rouge">implementation 'io.inventiv.critic.android:critic-android:1.0.4'</code> in your <code class="language-plaintext highlighter-rouge">build.gradle</code></li>
      <li><strong>iOS:</strong> <code class="language-plaintext highlighter-rouge">pod 'Critic', '~&gt; 0.1.5'</code> in your <code class="language-plaintext highlighter-rouge">Podfile</code>, then run <code class="language-plaintext highlighter-rouge">pod install</code></li>
      <li><strong>Flutter:</strong> <code class="language-plaintext highlighter-rouge">inventiv_critic_flutter: ^0.4.0</code> in your <code class="language-plaintext highlighter-rouge">pubspec.yaml</code></li>
      <li><strong>JavaScript/Web:</strong> <a href="https://github.com/twinsunllc/inventiv-critic-js"><code class="language-plaintext highlighter-rouge">inventiv-critic</code></a> via GitHub</li>
    </ul>
  </li>
  <li><strong>Critic initialized</strong> with one line of code; see the <a href="https://critictracking.com/getting-started/">Getting Started guide</a> for platform-specific setup</li>
</ul>

<p>If you haven’t integrated Critic yet, initialization is one line of code. The Android SDK is <a href="https://github.com/twinsunllc/inventiv-critic-android">roughly 1,600 lines of Java</a> with minimal dependencies, so it won’t bloat your build.</p>

<h2 id="designing-your-metadata-schema">Designing Your Metadata Schema</h2>

<p>Before writing any code, decide what metadata to attach. The guiding principle: <strong>include everything that affects app behavior; exclude everything that identifies a person unnecessarily.</strong></p>

<p>Custom metadata in Critic is arbitrary JSON. You’re free to use any key-value structure your debugging workflow needs. Here are three ready-to-use schemas you can adapt.</p>

<h3 id="e-commerce-app-metadata">E-Commerce App Metadata</h3>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"user_id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"usr_a3f9b2"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"subscription_tier"</span><span class="p">:</span><span class="w"> </span><span class="s2">"free"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"cart_item_count"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
  </span><span class="nl">"payment_method"</span><span class="p">:</span><span class="w"> </span><span class="s2">"apple_pay"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"coupon_applied"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
  </span><span class="nl">"coupon_code"</span><span class="p">:</span><span class="w"> </span><span class="s2">"SAVE20"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"locale"</span><span class="p">:</span><span class="w"> </span><span class="s2">"en-US"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"feature_flags"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"new_checkout_flow"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
    </span><span class="nl">"dynamic_pricing"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<p>Every field earns its place. <code class="language-plaintext highlighter-rouge">coupon_applied</code> combined with <code class="language-plaintext highlighter-rouge">subscription_tier</code> would have resolved the opening scenario from two reports instead of two days. <code class="language-plaintext highlighter-rouge">payment_method</code> surfaces bugs specific to Apple Pay or Google Pay integrations. <code class="language-plaintext highlighter-rouge">cart_item_count</code> reveals whether the crash only happens with large carts (a pagination or memory issue invisible in device telemetry). <code class="language-plaintext highlighter-rouge">feature_flags</code> tells you instantly whether the user was on the new checkout flow or the legacy one.</p>

<h3 id="saas--b2b-app-metadata">SaaS / B2B App Metadata</h3>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"user_id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"usr_7d2e1f"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"org_id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"org_4a8c3b"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"role"</span><span class="p">:</span><span class="w"> </span><span class="s2">"admin"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"team_size"</span><span class="p">:</span><span class="w"> </span><span class="mi">12</span><span class="p">,</span><span class="w">
  </span><span class="nl">"subscription_plan"</span><span class="p">:</span><span class="w"> </span><span class="s2">"pro"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"feature_flags"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"beta_dashboard"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
    </span><span class="nl">"legacy_api"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="p">,</span><span class="w">
    </span><span class="nl">"ai_assist"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"active_view"</span><span class="p">:</span><span class="w"> </span><span class="s2">"reports/monthly"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"data_volume"</span><span class="p">:</span><span class="w"> </span><span class="s2">"10k_records"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">role</code> and <code class="language-plaintext highlighter-rouge">team_size</code> help reproduce permission-related bugs that only affect admins on large teams. <code class="language-plaintext highlighter-rouge">data_volume</code> surfaces performance issues that vanish during dev testing with 50 records but explode with 10,000. <code class="language-plaintext highlighter-rouge">active_view</code> tells you exactly which screen the user was on when they shook the device.</p>

<h3 id="multi-tenant--agency-app-metadata">Multi-Tenant / Agency App Metadata</h3>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"tenant_id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"client_acme"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"user_id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"usr_9f3a7c"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"tenant_plan"</span><span class="p">:</span><span class="w"> </span><span class="s2">"enterprise"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"white_label"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
  </span><span class="nl">"custom_domain"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
  </span><span class="nl">"api_version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"v3"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"feature_flags"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"custom_branding"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
    </span><span class="nl">"sso_enabled"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<p>For agencies managing multiple client apps, this schema maps directly to Critic’s product-based permissions. Each client’s app gets isolated reports with tenant-specific metadata, so when Client Acme reports a branding bug, you immediately see it’s a white-label configuration issue on their custom domain.</p>

<h3 id="pii-considerations">PII Considerations</h3>

<p><strong>Exclude:</strong> full names, email addresses, passwords, payment card numbers, health data, precise geolocation, or any data subject to GDPR/CCPA that you don’t need for debugging.</p>

<p><strong>Safe pattern:</strong> use opaque user IDs like <code class="language-plaintext highlighter-rouge">usr_a3f9b2</code> that your backend can resolve when needed, rather than embedding PII directly in metadata. This keeps your bug reporting pipeline free of regulated data while preserving your ability to look up the user.</p>

<p>This aligns with the <a href="https://improvado.io/blog/what-is-personally-identifiable-information-pii">GDPR data minimization principle</a>: collect only what you need for the stated purpose. Here, the stated purpose is reproducing and fixing bugs.</p>
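<p>The opaque-ID pattern is straightforward to implement. Below is a minimal sketch in plain Java (no Critic dependency) that derives a stable, non-reversible identifier from an internal user key via SHA-256; the <code>usr_</code> prefix and 12-hex-character length mirror the example IDs above, and the salt constant and class name are hypothetical choices, not Critic requirements:</p>

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class OpaqueId {

    // Derive a stable, non-reversible identifier from an internal user key.
    // The same input always yields the same ID, so reports from one user stay
    // correlatable without any PII entering the bug reporting pipeline.
    static String opaqueUserId(String internalId) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(
                ("critic-salt:" + internalId).getBytes(StandardCharsets.UTF_8));
            StringBuilder id = new StringBuilder("usr_");
            for (int i = 0; i < 6; i++) {
                id.append(String.format("%02x", hash[i] & 0xff));
            }
            return id.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available on the JVM
        }
    }

    public static void main(String[] args) {
        // Deterministic: calling twice with the same key prints the same ID.
        System.out.println(opaqueUserId("jane.doe@example.com"));
        System.out.println(opaqueUserId("jane.doe@example.com"));
    }
}
```

<p>Your backend can maintain the reverse mapping (or recompute the hash over its user table) when support actually needs to contact the reporter.</p>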

<h2 id="step-1-attach-static-metadata-at-initialization-android">Step 1: Attach Static Metadata at Initialization (Android)</h2>

<p>Start with metadata that applies to every report from this app install (environment, build configuration, and app variant):</p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// In your Application class onCreate()</span>
<span class="nc">Critic</span><span class="o">.</span><span class="na">initialize</span><span class="o">(</span><span class="k">this</span><span class="o">,</span> <span class="s">"YOUR_PRODUCT_ACCESS_TOKEN"</span><span class="o">);</span>

<span class="nc">JsonObject</span> <span class="n">metadata</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">JsonObject</span><span class="o">();</span>
<span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"app_environment"</span><span class="o">,</span> <span class="s">"production"</span><span class="o">);</span>
<span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"build_type"</span><span class="o">,</span> <span class="nc">BuildConfig</span><span class="o">.</span><span class="na">BUILD_TYPE</span><span class="o">);</span>
<span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"app_variant"</span><span class="o">,</span> <span class="nc">BuildConfig</span><span class="o">.</span><span class="na">FLAVOR</span><span class="o">);</span>
<span class="nc">Critic</span><span class="o">.</span><span class="na">setProductMetadata</span><span class="o">(</span><span class="n">metadata</span><span class="o">);</span>
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">setProductMetadata</code> accepts a <code class="language-plaintext highlighter-rouge">JsonObject</code>, not a fixed schema. Add any key-value pairs your team needs. This static metadata provides a baseline: you’ll always know whether a bug occurred in production or staging, in a debug or release build.</p>

<h2 id="step-2-update-metadata-dynamically-as-app-state-changes-android">Step 2: Update Metadata Dynamically as App State Changes (Android)</h2>

<p>Static metadata is a start, but the real debugging power comes from metadata that reflects the <em>current</em> app state. Update it at every significant state transition:</p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// After user logs in</span>
<span class="kd">public</span> <span class="kt">void</span> <span class="nf">onUserAuthenticated</span><span class="o">(</span><span class="nc">User</span> <span class="n">user</span><span class="o">)</span> <span class="o">{</span>
    <span class="nc">JsonObject</span> <span class="n">metadata</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">JsonObject</span><span class="o">();</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"user_id"</span><span class="o">,</span> <span class="n">user</span><span class="o">.</span><span class="na">getId</span><span class="o">());</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"subscription_tier"</span><span class="o">,</span> <span class="n">user</span><span class="o">.</span><span class="na">getTier</span><span class="o">());</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"role"</span><span class="o">,</span> <span class="n">user</span><span class="o">.</span><span class="na">getRole</span><span class="o">());</span>

    <span class="nc">JsonObject</span> <span class="n">flags</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">JsonObject</span><span class="o">();</span>
    <span class="n">flags</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"new_checkout_flow"</span><span class="o">,</span> <span class="nc">FeatureFlags</span><span class="o">.</span><span class="na">isEnabled</span><span class="o">(</span><span class="s">"new_checkout"</span><span class="o">));</span>
    <span class="n">flags</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"dark_mode"</span><span class="o">,</span> <span class="nc">FeatureFlags</span><span class="o">.</span><span class="na">isEnabled</span><span class="o">(</span><span class="s">"dark_mode"</span><span class="o">));</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="s">"feature_flags"</span><span class="o">,</span> <span class="n">flags</span><span class="o">);</span>

    <span class="nc">Critic</span><span class="o">.</span><span class="na">setProductMetadata</span><span class="o">(</span><span class="n">metadata</span><span class="o">);</span>
<span class="o">}</span>

<span class="c1">// When user enters checkout</span>
<span class="kd">public</span> <span class="kt">void</span> <span class="nf">onCheckoutStarted</span><span class="o">(</span><span class="nc">Cart</span> <span class="n">cart</span><span class="o">)</span> <span class="o">{</span>
    <span class="nc">JsonObject</span> <span class="n">metadata</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">JsonObject</span><span class="o">();</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"user_id"</span><span class="o">,</span> <span class="n">currentUser</span><span class="o">.</span><span class="na">getId</span><span class="o">());</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"cart_item_count"</span><span class="o">,</span> <span class="n">cart</span><span class="o">.</span><span class="na">getItemCount</span><span class="o">());</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"coupon_applied"</span><span class="o">,</span> <span class="n">cart</span><span class="o">.</span><span class="na">hasCoupon</span><span class="o">());</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"payment_method"</span><span class="o">,</span> <span class="n">cart</span><span class="o">.</span><span class="na">getPaymentMethod</span><span class="o">());</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"active_flow"</span><span class="o">,</span> <span class="s">"checkout"</span><span class="o">);</span>

    <span class="nc">JsonObject</span> <span class="n">flags</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">JsonObject</span><span class="o">();</span>
    <span class="n">flags</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"new_checkout_flow"</span><span class="o">,</span> <span class="nc">FeatureFlags</span><span class="o">.</span><span class="na">isEnabled</span><span class="o">(</span><span class="s">"new_checkout"</span><span class="o">));</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="s">"feature_flags"</span><span class="o">,</span> <span class="n">flags</span><span class="o">);</span>

    <span class="nc">Critic</span><span class="o">.</span><span class="na">setProductMetadata</span><span class="o">(</span><span class="n">metadata</span><span class="o">);</span>
<span class="o">}</span>
</code></pre></div></div>

<p><strong>Each call to <code class="language-plaintext highlighter-rouge">setProductMetadata</code> replaces the previous metadata entirely.</strong> The report captures whatever was set last, so keep it current. Think of metadata updates like snapshots: each state transition overwrites the previous one so the report always reflects where the user was when they shook the device.</p>

<p>A lightweight helper method keeps this manageable:</p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">private</span> <span class="kt">void</span> <span class="nf">updateCriticMetadata</span><span class="o">()</span> <span class="o">{</span>
    <span class="nc">JsonObject</span> <span class="n">metadata</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">JsonObject</span><span class="o">();</span>
    <span class="k">if</span> <span class="o">(</span><span class="n">currentUser</span> <span class="o">!=</span> <span class="kc">null</span><span class="o">)</span> <span class="o">{</span>
        <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"user_id"</span><span class="o">,</span> <span class="n">currentUser</span><span class="o">.</span><span class="na">getId</span><span class="o">());</span>
        <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"subscription_tier"</span><span class="o">,</span> <span class="n">currentUser</span><span class="o">.</span><span class="na">getTier</span><span class="o">());</span>
    <span class="o">}</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"active_screen"</span><span class="o">,</span> <span class="n">getCurrentScreenName</span><span class="o">());</span>
    <span class="n">metadata</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="s">"feature_flags"</span><span class="o">,</span> <span class="n">getActiveFlags</span><span class="o">());</span>
    <span class="nc">Critic</span><span class="o">.</span><span class="na">setProductMetadata</span><span class="o">(</span><span class="n">metadata</span><span class="o">);</span>
<span class="o">}</span>
</code></pre></div></div>

<p>Call <code class="language-plaintext highlighter-rouge">updateCriticMetadata()</code> from your authentication callbacks, navigation router, and feature flag observer. Every report will reflect current state rather than stale initialization data.</p>
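<p>If rebuilding the full object at every call site gets unwieldy, one option is a small merge-style wrapper: keep one long-lived map, merge each partial update into it, and push the complete snapshot on every change. The sketch below uses plain Java collections, with a pluggable sink standing in for <code>Critic.setProductMetadata</code> (convert the map to a Gson <code>JsonObject</code> at that boundary in a real app); the class and method names are hypothetical, not part of the Critic SDK:</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Merge-style metadata store. Because each setProductMetadata call replaces
// the previous metadata wholesale, this wrapper accumulates partial updates
// into one long-lived map and re-sends the full snapshot on every change.
public class MetadataStore {
    private final Map<String, Object> state = new LinkedHashMap<>();
    private final Consumer<Map<String, Object>> sink;

    public MetadataStore(Consumer<Map<String, Object>> sink) {
        this.sink = sink;
    }

    // Merge a partial update, then push the complete snapshot to the sink.
    public synchronized void update(Map<String, ?> partial) {
        state.putAll(partial);
        sink.accept(new LinkedHashMap<>(state)); // defensive copy
    }

    // Drop a key (e.g. on logout), then push the updated snapshot.
    public synchronized void clear(String key) {
        state.remove(key);
        sink.accept(new LinkedHashMap<>(state));
    }

    public static void main(String[] args) {
        MetadataStore store = new MetadataStore(snapshot -> System.out.println(snapshot));
        store.update(Map.of("user_id", "usr_a3f9b2", "subscription_tier", "free"));
        store.update(Map.of("cart_item_count", 3)); // user_id survives this update
    }
}
```

<p>On login, merge in the user fields; when checkout starts, merge in the cart fields. Keys set earlier survive later partial updates, which a bare <code>setProductMetadata</code> call at each site would discard.</p>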

<h2 id="step-3-ios-implementation-swift">Step 3: iOS Implementation (Swift)</h2>

<p>The iOS SDK uses a dictionary assigned to the <code class="language-plaintext highlighter-rouge">productMetadata</code> property on the shared Critic instance:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Initialization</span>
<span class="kt">Critic</span><span class="o">.</span><span class="nf">instance</span><span class="p">()</span><span class="o">.</span><span class="nf">start</span><span class="p">(</span><span class="s">"YOUR_PRODUCT_ACCESS_TOKEN"</span><span class="p">)</span>

<span class="c1">// Set metadata after user authentication</span>
<span class="kd">func</span> <span class="nf">onUserAuthenticated</span><span class="p">(</span><span class="n">_</span> <span class="nv">user</span><span class="p">:</span> <span class="kt">User</span><span class="p">)</span> <span class="p">{</span>
    <span class="kt">Critic</span><span class="o">.</span><span class="nf">instance</span><span class="p">()</span><span class="o">.</span><span class="n">productMetadata</span> <span class="o">=</span> <span class="p">[</span>
        <span class="s">"user_id"</span><span class="p">:</span> <span class="n">user</span><span class="o">.</span><span class="n">id</span><span class="p">,</span>
        <span class="s">"subscription_tier"</span><span class="p">:</span> <span class="n">user</span><span class="o">.</span><span class="n">tier</span><span class="p">,</span>
        <span class="s">"role"</span><span class="p">:</span> <span class="n">user</span><span class="o">.</span><span class="n">role</span><span class="p">,</span>
        <span class="s">"feature_flags"</span><span class="p">:</span> <span class="p">[</span>
            <span class="s">"new_checkout_flow"</span><span class="p">:</span> <span class="kt">FeatureFlags</span><span class="o">.</span><span class="nf">isEnabled</span><span class="p">(</span><span class="o">.</span><span class="n">newCheckout</span><span class="p">),</span>
            <span class="s">"dark_mode"</span><span class="p">:</span> <span class="kt">FeatureFlags</span><span class="o">.</span><span class="nf">isEnabled</span><span class="p">(</span><span class="o">.</span><span class="n">darkMode</span><span class="p">)</span>
        <span class="p">],</span>
        <span class="s">"active_view"</span><span class="p">:</span> <span class="s">"home"</span>
    <span class="p">]</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Update metadata on <code class="language-plaintext highlighter-rouge">viewDidAppear</code> for screen-specific context, or register <code class="language-plaintext highlighter-rouge">NotificationCenter</code> observers for state changes:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">NotificationCenter</span><span class="o">.</span><span class="k">default</span><span class="o">.</span><span class="nf">addObserver</span><span class="p">(</span>
    <span class="nv">forName</span><span class="p">:</span> <span class="o">.</span><span class="n">userDidLogin</span><span class="p">,</span>
    <span class="nv">object</span><span class="p">:</span> <span class="kc">nil</span><span class="p">,</span>
    <span class="nv">queue</span><span class="p">:</span> <span class="o">.</span><span class="n">main</span>
<span class="p">)</span> <span class="p">{</span> <span class="n">_</span> <span class="k">in</span>
    <span class="k">self</span><span class="o">.</span><span class="nf">updateCriticMetadata</span><span class="p">()</span>
<span class="p">}</span>
</code></pre></div></div>

<p>The same principles from the Android section apply: set metadata as early as possible, update at every significant state change, and keep payloads focused on identifiers and states rather than full data objects.</p>

<p>For programmatic report submission on iOS, use <code class="language-plaintext highlighter-rouge">NVCReportCreator</code> (the iOS equivalent of Android’s <code class="language-plaintext highlighter-rouge">BugReportCreator</code>) to build reports with custom descriptions, metadata, and file attachments.</p>

<h2 id="step-4-flutter-implementation-dart">Step 4: Flutter Implementation (Dart)</h2>

<p>The Flutter SDK (<code class="language-plaintext highlighter-rouge">inventiv_critic_flutter</code>) provides report creation and submission:</p>

<div class="language-dart highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Initialization</span>
<span class="kt">String</span> <span class="n">key</span> <span class="o">=</span> <span class="s">'YOUR_PRODUCT_ACCESS_TOKEN'</span><span class="p">;</span>
<span class="n">Critic</span><span class="p">()</span><span class="o">.</span><span class="na">initialize</span><span class="p">(</span><span class="n">key</span><span class="p">);</span>

<span class="c1">// Create and submit a report with metadata</span>
<span class="kd">var</span> <span class="n">report</span> <span class="o">=</span> <span class="n">BugReport</span><span class="o">.</span><span class="na">create</span><span class="p">(</span>
  <span class="nl">description:</span> <span class="s">'Checkout crashes on submit'</span><span class="p">,</span>
  <span class="nl">stepsToReproduce:</span> <span class="s">'1. Add item to cart</span><span class="se">\n</span><span class="s">2. Apply coupon</span><span class="se">\n</span><span class="s">3. Tap submit'</span><span class="p">,</span>
<span class="p">);</span>

<span class="c1">// Add file attachments if needed</span>
<span class="n">report</span><span class="o">.</span><span class="na">attachments</span> <span class="o">=</span> <span class="p">&lt;</span><span class="n">Attachment</span><span class="p">&gt;[</span>
  <span class="n">Attachment</span><span class="p">(</span><span class="nl">name:</span> <span class="s">'screenshot.png'</span><span class="p">,</span> <span class="nl">path:</span> <span class="n">screenshotPath</span><span class="p">),</span>
<span class="p">];</span>

<span class="k">await</span> <span class="nf">Critic</span><span class="p">()</span><span class="o">.</span><span class="na">submitReport</span><span class="p">(</span><span class="n">report</span><span class="p">);</span>
</code></pre></div></div>

<p>For attaching arbitrary JSON metadata to every report in a Flutter app, use the REST API v2 (covered in Step 5). The REST API gives you full metadata control from any platform, including Flutter, and integrates cleanly with Dart’s <code class="language-plaintext highlighter-rouge">http</code> package:</p>

<div class="language-dart highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="s">'dart:convert'</span><span class="o">;</span>
<span class="kn">import</span> <span class="s">'package:http/http.dart'</span> <span class="k">as</span> <span class="n">http</span><span class="o">;</span>

<span class="n">Future</span><span class="p">&lt;</span><span class="kt">void</span><span class="p">&gt;</span> <span class="n">submitReportWithMetadata</span><span class="p">({</span>
  <span class="kd">required</span> <span class="kt">String</span> <span class="n">description</span><span class="p">,</span>
  <span class="kd">required</span> <span class="kt">Map</span><span class="p">&lt;</span><span class="kt">String</span><span class="p">,</span> <span class="kd">dynamic</span><span class="p">&gt;</span> <span class="n">metadata</span><span class="p">,</span>
<span class="p">})</span> <span class="kd">async</span> <span class="p">{</span>
  <span class="kd">var</span> <span class="n">request</span> <span class="o">=</span> <span class="n">http</span><span class="o">.</span><span class="na">MultipartRequest</span><span class="p">(</span>
    <span class="s">'POST'</span><span class="p">,</span>
    <span class="kt">Uri</span><span class="o">.</span><span class="na">parse</span><span class="p">(</span><span class="s">'https://critic.inventiv.io/api/v2/bug_reports'</span><span class="p">),</span>
  <span class="p">);</span>
  <span class="n">request</span><span class="o">.</span><span class="na">headers</span><span class="p">[</span><span class="s">'Authorization'</span><span class="p">]</span> <span class="o">=</span> <span class="s">'Bearer YOUR_PRODUCT_ACCESS_TOKEN'</span><span class="p">;</span>
  <span class="n">request</span><span class="o">.</span><span class="na">fields</span><span class="p">[</span><span class="s">'description'</span><span class="p">]</span> <span class="o">=</span> <span class="n">description</span><span class="p">;</span>
  <span class="n">request</span><span class="o">.</span><span class="na">fields</span><span class="p">[</span><span class="s">'metadata'</span><span class="p">]</span> <span class="o">=</span> <span class="n">jsonEncode</span><span class="p">(</span><span class="n">metadata</span><span class="p">);</span>
  
  <span class="k">await</span> <span class="n">request</span><span class="o">.</span><span class="na">send</span><span class="p">();</span>
<span class="p">}</span>

<span class="c1">// Usage</span>
<span class="k">await</span> <span class="nf">submitReportWithMetadata</span><span class="p">(</span>
  <span class="nl">description:</span> <span class="s">'Checkout crashes on submit'</span><span class="p">,</span>
  <span class="nl">metadata:</span> <span class="p">{</span>
    <span class="s">'user_id'</span><span class="o">:</span> <span class="n">currentUser</span><span class="o">.</span><span class="na">id</span><span class="p">,</span>
    <span class="s">'subscription_tier'</span><span class="o">:</span> <span class="n">currentUser</span><span class="o">.</span><span class="na">tier</span><span class="p">,</span>
    <span class="s">'feature_flags'</span><span class="o">:</span> <span class="p">{</span>
      <span class="s">'new_checkout_flow'</span><span class="o">:</span> <span class="n">featureFlags</span><span class="p">[</span><span class="s">'new_checkout'</span><span class="p">],</span>
      <span class="s">'dark_mode'</span><span class="o">:</span> <span class="n">featureFlags</span><span class="p">[</span><span class="s">'dark_mode'</span><span class="p">],</span>
    <span class="p">},</span>
    <span class="s">'locale'</span><span class="o">:</span> <span class="n">PlatformDispatcher</span><span class="o">.</span><span class="na">instance</span><span class="o">.</span><span class="na">locale</span><span class="o">.</span><span class="na">toString</span><span class="p">(),</span>
    <span class="s">'active_route'</span><span class="o">:</span> <span class="n">currentRouteName</span><span class="p">,</span>
  <span class="p">},</span>
<span class="p">);</span>
</code></pre></div></div>

<p>This approach gives Flutter developers the same arbitrary JSON metadata capability available on Android and iOS, with one implementation that works across both platforms.</p>

<h2 id="step-5-rest-api-v2">Step 5: REST API v2</h2>

<p>For platforms without native SDKs (desktop apps, smart TVs, CLI tools, or CI/CD pipelines), the REST API gives you full metadata capability over HTTP:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl <span class="nt">-X</span> POST https://critic.inventiv.io/api/v2/bug_reports <span class="se">\</span>
  <span class="nt">-H</span> <span class="s2">"Authorization: Bearer YOUR_PRODUCT_ACCESS_TOKEN"</span> <span class="se">\</span>
  <span class="nt">-F</span> <span class="s2">"description=Checkout crashes on submit"</span> <span class="se">\</span>
  <span class="nt">-F</span> <span class="s1">'metadata={"user_id":"usr_a3f9b2","subscription_tier":"free","coupon_applied":true,"feature_flags":{"new_checkout_flow":true}}'</span>
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">metadata</code> field accepts a JSON string in the multipart form body. You can attach files alongside metadata: screenshots, log exports, or any file your debugging workflow needs.</p>
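<p>As a sketch of the client side of that request (the <code>build_metadata_field</code> helper and any field names beyond those in the cURL example are illustrative, not part of the Critic API), serializing with a JSON library rather than concatenating strings keeps the <code>metadata</code> field valid:</p>

```python
import json

def build_metadata_field(user_id, tier, feature_flags, **extra):
    """Serialize report context into the JSON string expected by the
    multipart `metadata` form field. json.dumps handles quoting and
    escaping, so embedded quotes in values can't corrupt the payload."""
    payload = {
        "user_id": user_id,
        "subscription_tier": tier,
        "feature_flags": feature_flags,
        **extra,
    }
    return json.dumps(payload)

# Mirrors the cURL example above.
field = build_metadata_field(
    "usr_a3f9b2", "free", {"new_checkout_flow": True}, coupon_applied=True
)
# The string round-trips as valid JSON, so the server can parse it.
assert json.loads(field)["feature_flags"]["new_checkout_flow"] is True
```

<p>The resulting string is what you would pass as the <code>metadata</code> form value in your HTTP client of choice.</p>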

<p><strong>Use cases beyond mobile:</strong></p>

<ul>
  <li><strong>CI/CD pipelines</strong> submitting automated test failure reports with build metadata (commit SHA, branch, test suite, environment)</li>
  <li><strong>Custom feedback UIs</strong>: a “Report issue with this order” button that pre-populates the order ID, items, and payment method as metadata</li>
  <li><strong>QA automation</strong> submitting structured reports programmatically during regression testing</li>
</ul>
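<p>For the CI/CD case, the build metadata usually already exists as environment variables. A minimal sketch, assuming GitHub Actions variable names (substitute your CI system's equivalents; <code>TEST_SUITE</code> is a hypothetical custom variable):</p>

```python
import json
import os

def ci_build_metadata(env=os.environ):
    """Collect build context from CI environment variables so automated
    failure reports carry commit, branch, and suite information."""
    return {
        "commit_sha": env.get("GITHUB_SHA", "unknown"),
        "branch": env.get("GITHUB_REF_NAME", "unknown"),
        "test_suite": env.get("TEST_SUITE", "regression"),
        "environment": "ci",
    }

# Serialized, this dict becomes the `metadata` form field of the report.
meta_json = json.dumps(
    ci_build_metadata({"GITHUB_SHA": "8f2a1b0", "GITHUB_REF_NAME": "main"})
)
```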

<p>The API exposes all functionality available in the web portal. Anything you can do in the dashboard, you can automate via the API.</p>

<h2 id="step-6-submit-a-test-report">Step 6: Submit a Test Report</h2>

<p>Before moving to production, submit a test report to verify metadata flows end-to-end.</p>

<p>On Android, use <code class="language-plaintext highlighter-rouge">BugReportCreator</code> for programmatic submission:</p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nc">JsonObject</span> <span class="n">metadata</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">JsonObject</span><span class="o">();</span>
<span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"user_id"</span><span class="o">,</span> <span class="s">"usr_test_123"</span><span class="o">);</span>
<span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"subscription_tier"</span><span class="o">,</span> <span class="s">"enterprise"</span><span class="o">);</span>
<span class="n">metadata</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"test_scenario"</span><span class="o">,</span> <span class="s">"checkout_with_coupon"</span><span class="o">);</span>

<span class="nc">JsonObject</span> <span class="n">flags</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">JsonObject</span><span class="o">();</span>
<span class="n">flags</span><span class="o">.</span><span class="na">addProperty</span><span class="o">(</span><span class="s">"new_checkout_flow"</span><span class="o">,</span> <span class="kc">true</span><span class="o">);</span>
<span class="n">metadata</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="s">"feature_flags"</span><span class="o">,</span> <span class="n">flags</span><span class="o">);</span>

<span class="nc">BugReport</span> <span class="n">report</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">BugReportCreator</span><span class="o">()</span>
    <span class="o">.</span><span class="na">description</span><span class="o">(</span><span class="s">"Test report: verifying metadata appears in dashboard"</span><span class="o">)</span>
    <span class="o">.</span><span class="na">metadata</span><span class="o">(</span><span class="n">metadata</span><span class="o">)</span>
    <span class="o">.</span><span class="na">create</span><span class="o">(</span><span class="n">context</span><span class="o">);</span>
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">create()</code> method returns a <code class="language-plaintext highlighter-rouge">BugReport</code> object on success or throws a <code class="language-plaintext highlighter-rouge">ReportCreationException</code> with details on what went wrong.</p>

<p><strong>When to use which approach:</strong></p>

<table>
  <thead>
    <tr>
      <th>Approach</th>
      <th>Best for</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">setProductMetadata</code> (Android) / <code class="language-plaintext highlighter-rouge">productMetadata</code> (iOS)</td>
      <td>Enriching the default shake-to-report flow. Set it once, update on state changes; every user-initiated report includes your metadata automatically.</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">BugReportCreator</code> (Android) / <code class="language-plaintext highlighter-rouge">NVCReportCreator</code> (iOS)</td>
      <td>Programmatic submissions. Custom feedback UIs, automated test reports, or any scenario where you control the entire submission flow.</td>
    </tr>
    <tr>
      <td>REST API v2</td>
      <td>Cross-platform consistency, CI/CD integration, platforms without native SDKs, or Flutter apps needing metadata support.</td>
    </tr>
  </tbody>
</table>

<p>Submit a test report, open the dashboard, and see your custom JSON displayed right alongside the automatic device telemetry: battery, memory, disk, network, OS, and console logs. The full picture, in one place.</p>

<h2 id="verification-confirming-metadata-in-the-critic-dashboard">Verification: Confirming Metadata in the Critic Dashboard</h2>

<p>After submitting your test report:</p>

<ol>
  <li>Log into <a href="https://critic.inventiv.io/">critic.inventiv.io</a></li>
  <li>Navigate to your app, then <strong>Bug Reports</strong></li>
  <li>Open the most recent report</li>
  <li>Scroll to the <strong>Metadata</strong> section; your JSON should appear exactly as submitted</li>
  <li>Verify: all keys present, values correct, nested objects (like <code class="language-plaintext highlighter-rouge">feature_flags</code>) displayed as structured JSON</li>
</ol>

<p>The dashboard displays the report in a structured layout: the user’s description at the top, device telemetry (battery level, memory stats, disk space, network type, OS version) in a detailed panel, and your custom metadata rendered as formatted JSON below. Together, these give you the hardware context <em>and</em> the business context in a single view.</p>

<p><strong>API verification</strong> for automated workflows:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl <span class="nt">-X</span> GET https://critic.inventiv.io/api/v2/bug_reports/<span class="o">{</span>report_id<span class="o">}</span> <span class="se">\</span>
  <span class="nt">-H</span> <span class="s2">"Authorization: Bearer YOUR_PRODUCT_ACCESS_TOKEN"</span>
</code></pre></div></div>

<p>Check the response JSON for the <code class="language-plaintext highlighter-rouge">metadata</code> field. This is useful for building automated tests that verify your metadata pipeline: submit a report with known metadata, then programmatically confirm it arrived intact.</p>
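<p>The comparison step of such a pipeline test can be a small helper. This sketch assumes only what the article states (the response JSON contains a <code>metadata</code> field); whether that field arrives as a JSON string or an already-parsed object may vary, so the helper accepts both:</p>

```python
import json

def metadata_intact(submitted: dict, retrieved) -> bool:
    """Deep-compare the metadata you submitted against the `metadata`
    field of the fetched report. Accepts either a JSON string or an
    already-decoded object for the retrieved side."""
    if isinstance(retrieved, str):
        retrieved = json.loads(retrieved)
    return submitted == retrieved

sent = {"user_id": "usr_test_123", "feature_flags": {"new_checkout_flow": True}}
assert metadata_intact(sent, json.dumps(sent))                 # string form
assert not metadata_intact(sent, {"user_id": "usr_test_123"})  # lossy pipeline
```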

<h2 id="troubleshooting-common-issues">Troubleshooting Common Issues</h2>

<h3 id="metadata-fields-missing-from-reports">Metadata Fields Missing from Reports</h3>

<p><strong>Cause:</strong> <code class="language-plaintext highlighter-rouge">setProductMetadata</code> (Android) or <code class="language-plaintext highlighter-rouge">productMetadata</code> (iOS) was set <em>after</em> the user submitted the report, or was never set in the current session.</p>

<p><strong>Fix:</strong> Set metadata as early as possible: immediately after Critic initialization for static values, and again after user authentication for user-specific data. Metadata must be set <em>before</em> the shake-to-report trigger fires. If reports arrive without metadata, add logging around your metadata calls to confirm they execute before submission.</p>

<h3 id="malformed-json-errors">Malformed JSON Errors</h3>

<p><strong>Cause:</strong> On Android, building JSON via string concatenation instead of <code class="language-plaintext highlighter-rouge">JsonObject</code>. On REST API, improperly escaped quotes in the multipart form.</p>

<p><strong>Fix:</strong> Always use the platform’s JSON builder: <code class="language-plaintext highlighter-rouge">JsonObject</code> on Android, native dictionaries on iOS, <code class="language-plaintext highlighter-rouge">jsonEncode()</code> in Dart, or <code class="language-plaintext highlighter-rouge">JSON.stringify()</code> in JavaScript. Test your payload with a JSON validator before sending. For cURL, use single quotes around the metadata value: <code class="language-plaintext highlighter-rouge">-F 'metadata={"key":"value"}'</code>.</p>
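<p>A quick illustration of why concatenation fails where a JSON builder succeeds (Python shown; the same applies to <code>JsonObject</code>, dictionaries, and <code>JSON.stringify()</code>):</p>

```python
import json

# Hand-concatenated JSON breaks as soon as a value contains a quote:
description = 'Crash on tapping "Pay now"'
broken = '{"description": "' + description + '"}'  # not valid JSON

try:
    json.loads(broken)
    parsed_ok = True
except json.JSONDecodeError:
    parsed_ok = False
assert not parsed_ok

# A JSON builder escapes the embedded quotes for you:
safe = json.dumps({"description": description})
assert json.loads(safe)["description"] == description
```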

<h3 id="metadata-payload-too-large">Metadata Payload Too Large</h3>

<p><strong>Cause:</strong> Attaching large arrays (full cart contents with image URLs) or deeply nested objects.</p>

<p><strong>Fix:</strong> Keep metadata focused on identifiers and aggregate state rather than full data objects. Send <code class="language-plaintext highlighter-rouge">"cart_item_count": 3</code> and <code class="language-plaintext highlighter-rouge">"cart_total": 49.99</code> instead of the entire cart array with product details. If you need granular data, include IDs that your backend can resolve: <code class="language-plaintext highlighter-rouge">"order_id": "ord_8f2a1b"</code> instead of the full order object.</p>
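<p>A sketch of that slimming step (the <code>MAX_METADATA_BYTES</code> budget is an assumed local guard, not a documented Critic limit; <code>slim_metadata</code> is a hypothetical helper):</p>

```python
import json

MAX_METADATA_BYTES = 4096  # assumed local budget, not a documented limit

def slim_metadata(cart_items, order_id):
    """Send identifiers and aggregates instead of full objects:
    counts and totals here, with an order ID the backend can resolve."""
    meta = {
        "order_id": order_id,
        "cart_item_count": len(cart_items),
        "cart_total": round(sum(i["price"] for i in cart_items), 2),
    }
    assert len(json.dumps(meta).encode()) <= MAX_METADATA_BYTES
    return meta

cart = [
    {"sku": "A1", "price": 19.99},
    {"sku": "B2", "price": 19.99},
    {"sku": "C3", "price": 10.01},
]
meta = slim_metadata(cart, "ord_8f2a1b")  # 3 items, 49.99 total
```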

<h3 id="stale-metadata-on-reports">Stale Metadata on Reports</h3>

<p><strong>Cause:</strong> Metadata set once at app launch but never updated when user state changes (login, navigation, feature flag toggles).</p>

<p><strong>Fix:</strong> Call your metadata update method at every significant state transition. Create a centralized <code class="language-plaintext highlighter-rouge">updateCriticMetadata()</code> helper that your auth system, navigation router, and feature flag manager each invoke. This single function guarantees reports always reflect current state.</p>
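<p>The shape of that helper, sketched in Python (the SDK call itself is platform-specific, so it is injected here as a <code>push_to_sdk</code> stub standing in for <code>setProductMetadata</code> / <code>productMetadata</code>):</p>

```python
import json

class CriticMetadataUpdater:
    """Single choke point for metadata updates: auth, routing, and
    feature-flag code all call update(), and the latest merged state
    is what the next report will carry."""

    def __init__(self, push_to_sdk):
        self._state = {}
        self._push = push_to_sdk  # platform SDK call, injected

    def update(self, **changes):
        self._state.update(changes)
        self._push(json.dumps(self._state))

captured = []
updater = CriticMetadataUpdater(captured.append)
updater.update(user_id="usr_a3f9b2", subscription_tier="free")  # on login
updater.update(active_route="/checkout")                        # on navigation
assert json.loads(captured[-1])["active_route"] == "/checkout"
```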

<h3 id="metadata-missing-via-rest-api">Metadata Missing via REST API</h3>

<p><strong>Cause:</strong> The <code class="language-plaintext highlighter-rouge">metadata</code> field must be a valid JSON <em>string</em> in the multipart form, not a raw object or unquoted text.</p>

<p><strong>Fix:</strong> Wrap your JSON in quotes and ensure it’s properly escaped. Verify your token is valid first; if authentication fails, the issue is the token, not metadata formatting. Test with a minimal payload before adding complexity.</p>

<h2 id="correlating-bugs-with-feature-flags">Correlating Bugs with Feature Flags</h2>

<p>Include active feature flags in every report’s metadata:</p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"feature_flags"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"new_checkout_flow"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
    </span><span class="nl">"dynamic_pricing"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="p">,</span><span class="w">
    </span><span class="nl">"beta_search"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<p>When 8 out of 10 bug reports about checkout show <code class="language-plaintext highlighter-rouge">"new_checkout_flow": true</code>, the correlation is visible without AI triage, without a dedicated analytics pipeline, and without an enterprise contract. You open the dashboard, scan the metadata across recent reports, and spot the flag causing problems.</p>

<p>Enterprise tools take a different approach. Datadog’s Feature Flag Tracking requires integrating their RUM SDK with a supported flag service (LaunchDarkly, Split, Statsig), configuring per-service tracking, and paying for their RUM product. That works for teams with enterprise budgets. Critic achieves the same bug-to-flag correlation through arbitrary JSON metadata that you’re already attaching.</p>

<p>You’re trading a dedicated “Feature Flag Tracking” UI for flexibility. Attach the data, and the dashboard becomes your correlation tool.</p>

<p>For teams that roll out features behind flags (which is most teams shipping frequently) this single metadata pattern can cut the time between “something broke” and “this flag broke it” from days to minutes.</p>
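<p>If you fetch recent reports via the API, that manual scan can be automated with a few lines. A sketch, assuming each report's decoded <code>metadata</code> contains the <code>feature_flags</code> object shown above:</p>

```python
def flag_correlation(reports, flag):
    """Count how many reports have the given feature flag enabled in
    their metadata -- the dashboard scan described above, automated."""
    on = sum(
        1
        for r in reports
        if r.get("metadata", {}).get("feature_flags", {}).get(flag)
    )
    return on, len(reports)

# The article's scenario: 8 of 10 checkout reports have the flag on.
reports = (
    [{"metadata": {"feature_flags": {"new_checkout_flow": True}}}] * 8
    + [{"metadata": {"feature_flags": {"new_checkout_flow": False}}}] * 2
)
assert flag_correlation(reports, "new_checkout_flow") == (8, 10)
```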

<h2 id="next-steps">Next Steps</h2>

<ul>
  <li><strong>Add file attachments alongside metadata.</strong> Screenshots, log exports, or custom files that complement the JSON context. Both <code class="language-plaintext highlighter-rouge">BugReportCreator</code> on Android and the REST API support multiple file attachments per report.</li>
  <li><strong>Build a custom feedback UI</strong> using the <a href="https://critictracking.com/getting-started/">REST API v2</a>. A “Report issue with this order” button that pre-populates order metadata gives users a targeted way to report without the generic shake prompt.</li>
  <li><strong>Invite your team</strong> to the Critic dashboard. Comments and email notifications turn metadata-rich reports into a collaborative triage workflow.</li>
  <li><strong>Expand to additional apps.</strong> Critic’s per-app pricing ($20/month each) and product-based permissions make it straightforward to add apps or isolate client projects, particularly valuable for agencies managing multiple codebases.</li>
</ul>

<p>Your bug reports now carry three layers of context: device telemetry, console logs, and your custom business metadata. The next bug your users report will arrive ready to reproduce.</p>

<p><a href="https://critic.inventiv.io/">Start your free 30-day trial →</a></p>]]></content><author><name>dave_lane</name></author><category term="Critic" /><summary type="html"><![CDATA[Attach arbitrary JSON metadata to in-app bug reports on Android, iOS, Flutter, and REST API so every report includes business context.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://critictracking.com/assets/images/posts/2026-03-27-how-to-add-custom-metadata-to-mobile-bug-reports-user-ids-feature-flags-and-session-data.webp" /><media:content medium="image" url="https://critictracking.com/assets/images/posts/2026-03-27-how-to-add-custom-metadata-to-mobile-bug-reports-user-ids-feature-flags-and-session-data.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">You Set Up In-App Bug Reporting But Nobody’s Submitting Reports. Here’s How to Fix It.</title><link href="https://critictracking.com/blog/you-set-up-in-app-bug-reporting-but-nobody-s-submitting-reports-here-s-how-to-fix-it/" rel="alternate" type="text/html" title="You Set Up In-App Bug Reporting But Nobody’s Submitting Reports. Here’s How to Fix It." /><published>2026-03-26T13:00:00+00:00</published><updated>2026-03-26T13:00:00+00:00</updated><id>https://critictracking.com/blog/you-set-up-in-app-bug-reporting-but-nobody-s-submitting-reports-here-s-how-to-fix-it</id><content type="html" xml:base="https://critictracking.com/blog/you-set-up-in-app-bug-reporting-but-nobody-s-submitting-reports-here-s-how-to-fix-it/"><![CDATA[<p>You added a bug reporting SDK to your app. You deployed it. You waited. Your dashboard is empty.</p>

<p>The instinct is to blame the tool: maybe the API token is wrong, maybe the SDK failed to initialize. But in most cases, the SDK is working fine. The real problem splits into two categories: <strong>discoverability and trust failures</strong> (most common) and <strong>technical misconfiguration</strong> (less common but faster to diagnose). This guide covers both, ranked by probability, so you can work top-to-bottom and stop as soon as you find your issue.</p>

<h2 id="symptoms-how-to-confirm-you-have-this-problem">Symptoms: How to Confirm You Have This Problem</h2>

<p>Before you start troubleshooting, confirm what “no reports” actually means:</p>

<ul>
  <li><strong>Zero reports in the dashboard</strong> despite confirmed active users. Check your analytics (DAU, session count, active installs). If users are opening the app, the feedback pipeline should be producing <em>something</em>.</li>
  <li><strong>SDK initialized successfully but no reports created.</strong> The SDK is talking to the server, registering installs, but users aren’t completing the submission flow. This points to a discoverability or friction problem, not a technical one.</li>
  <li><strong>You can submit reports, but real users can’t (or won’t).</strong> If your QA team has submitted test reports but nothing arrives from actual users or beta testers, the technical pipeline works. The human pipeline doesn’t.</li>
</ul>

<h3 id="quick-diagnostic-steps">Quick Diagnostic Steps</h3>

<p><strong>Step 1: Verify the SDK is initialized and your API token is valid.</strong> With <a href="https://critictracking.com/">Critic</a>, hit <code class="language-plaintext highlighter-rouge">POST /api/v2/ping</code> with your product access token. If it returns success and installs appear in your dashboard, initialization is confirmed. Your tool’s equivalent health check endpoint serves the same purpose.</p>

<p><strong>Step 2: Submit a test report from a physical device</strong>, not a simulator. The shake gesture <a href="https://www.ibm.com/support/pages/shake-gesture-ios-simulator-does-not-open-bug-report">fails to trigger reliably in the iOS Simulator</a>, a known issue documented by IBM. If the test report arrives in your dashboard with full device telemetry, your technical pipeline is confirmed working.</p>

<p>If both checks pass, your problem is almost certainly in the first three causes below. If either check fails, skip ahead to Causes 4–7.</p>

<h2 id="common-causes-most-likely-to-least-likely">Common Causes (Most Likely to Least Likely)</h2>

<p>Causes are ranked by how often they appear in practice. Start from the top; most readers will find their answer in the first two sections.</p>

<h3 id="cause-1-your-users-have-no-idea-shake-to-report-exists">Cause 1: Your Users Have No Idea Shake-to-Report Exists</h3>

<p>This is the most common cause by a wide margin, and it’s the one most developers overlook because <em>they</em> know the feature is there.</p>

<p>Gesture discoverability is a well-documented UX problem. As <a href="https://www.smashingmagazine.com/2016/10/in-app-gestures-and-mobile-app-user-experience/">Smashing Magazine</a> established in their analysis of mobile gestures: “gestures have a lower discoverability; they are always hidden and people need to be able to identify these options.” The <a href="https://www.interaction-design.org/literature/topics/gesture-interaction">Interaction Design Foundation</a> reinforces this: “Hidden or undocumented gestures, even simple ones, can often go unused or come to light much later for users.” Without a visual hint that shaking triggers something, users will never discover it on their own.</p>

<p>The LinkedIn case makes this concrete. When LinkedIn built and <a href="https://www.linkedin.com/blog/engineering/archive/introducing-and-open-sourcing-shaky-android-shake-for-feedback-">open-sourced their “Shaky” library</a> (shake-to-send-feedback for Android) they generated over 5,000 internal bug reports from employees in a single year. But these were employees who were <em>explicitly told</em> about the feature as part of the company’s dogfooding process. The library was purpose-built for internal use where discoverability was handled by announcement, not by UI design.</p>

<p>The <a href="https://news.ycombinator.com/item?id=44225352">Hacker News thread “Most users won’t report bugs unless you make it stupidly easy”</a> confirmed the pattern from the user side, with hundreds of comments. The consensus: friction is the number-one barrier to bug reporting. As one commenter put it, reporting bugs is work, and if submission feels like a black hole, users won’t bother. Users who are unaware the feature exists face infinite friction.</p>

<p>There’s an emotional dimension too. Shaking a phone happens naturally when users are frustrated, but only if they <em>know</em> it triggers something. Otherwise, they shake in frustration, then open the App Store and leave a one-star review.</p>

<p><strong>How to fix it:</strong></p>

<ol>
  <li>
    <p><strong>Add a one-time onboarding tooltip</strong> on first app launch: “Found a bug? Shake your device to report it.” Keep it short; <a href="https://messagegears.com/resources/blog/how-to-use-tooltips-on-mobile-to-enhance-the-app-experience/">MessageGears’ research on mobile tooltips</a> recommends no more than three lines of text. Show once, dismiss on tap, never show again. <a href="https://refiner.io/blog/in-app-survey-response-rates/">Refiner’s 2025 analysis of 1,382 in-app surveys</a> found that center-screen modal prompts achieve a 42.6% response rate; a well-placed tooltip gets seen.</p>
  </li>
  <li>
    <p><strong>Add a visible feedback button</strong> as a complement to shake. A small floating action button or a “Report a Bug” item in your settings or help menu gives every user a discoverable entry point. Shake is convenient for power users who already know about it; a button is discoverable for everyone else.</p>
  </li>
  <li>
    <p><strong>Mention it in release notes and beta invite emails.</strong> One sentence: “New: shake your device to report bugs directly from the app.” <a href="https://firebase.google.com/docs/app-distribution/collect-feedback-from-testers">Firebase’s documentation on collecting tester feedback</a> emphasizes providing clear instructions for how testers should submit feedback. Don’t assume they’ll figure it out.</p>
  </li>
  <li>
    <p><strong>Critic-specific:</strong> Critic’s shake-to-report works out of the box with zero UI code; the built-in form appears automatically on shake. But “works out of the box” and “is discoverable out of the box” are different things. Add the tooltip or button yourself using your app’s UI framework, then let Critic handle the reporting flow, device telemetry capture, and log collection.</p>
  </li>
</ol>

<h3 id="cause-2-reports-go-into-a-void">Cause 2: Reports Go Into a Void</h3>

<p>Even users who discover the reporting feature will stop using it if they believe nobody reads their reports.</p>

<p>The <a href="https://news.ycombinator.com/item?id=44225352">HN bug reporting thread</a> surfaced this pattern explicitly. Users described submitting detailed reports only to receive silence, or worse, automated closures from stale-issue bots months later. The consensus was clear: companies that acknowledge reports and fix reported bugs motivate more detailed future reporting. Companies that don’t kill the feedback pipeline from the user end.</p>

<p>A separate <a href="https://news.ycombinator.com/item?id=21427996">Hacker News discussion</a> told a striking story: a bug in an internal tool had persisted for <em>years</em>. When it finally came up in a meeting, 100% of the client-facing team knew about it from personal experience, and 0% of the development team had ever heard of it. Users hadn’t reported it because they assumed developers already knew, didn’t think they’d be heard, or feared appearing incompetent. The feedback pipeline existed. The response loop didn’t.</p>

<p>For beta testing programs specifically, <a href="https://moldstud.com/articles/p-complete-guide-to-beta-testing-strategies-for-mobile-apps-answering-developers-faqs">Moldstud’s research on beta testing strategies</a> found that 68% of testers prefer real-time communication about their reports. When testers submit feedback and hear nothing back, they conclude the exercise is performative.</p>

<p><strong>How to fix it:</strong></p>

<ol>
  <li>
    <p><strong>Enable email notifications immediately.</strong> Every tool has this setting. Critic sends automatic email notifications for new bug reports and comments; make sure they’re turned on and going to a monitored inbox, not a team mailing list that nobody reads.</p>
  </li>
  <li>
    <p><strong>Respond to every report within 24 hours</strong> with a comment in the dashboard, even if it’s just “Thanks; we’re looking into this.” The bar is acknowledgment, not resolution. Users who see their report acknowledged are far more likely to submit again.</p>
  </li>
  <li>
    <p><strong>Notify users when their bug is fixed.</strong> If you have user contact info (via custom metadata like an email address or user ID injected into reports) send a brief message: “The issue you reported on [date] has been fixed in version X.Y.Z.” This transforms one-time reporters into repeat contributors.</p>
  </li>
  <li>
    <p><strong>For beta programs: send a weekly digest</strong> to testers showing what was fixed based on their feedback. <a href="https://featureupvote.com/blog/managing-feedback-from-beta-testers/">Feature Upvote’s research on managing beta tester feedback</a> recommends thanking prolific testers by name in changelogs or newsletters; making their contribution visible keeps them engaged.</p>
  </li>
</ol>

<h3 id="cause-3-too-much-friction-in-the-submission-flow">Cause 3: Too Much Friction in the Submission Flow</h3>

<p>If your feedback form asks for a title, category, priority level, description, steps to reproduce, and expected vs. actual behavior, your users will close it. Every required field is a reason to abandon.</p>

<p>The minimum viable feedback is a single sentence. <a href="https://featureupvote.com/blog/managing-feedback-from-beta-testers/">Feature Upvote’s beta testing research</a> confirms this: keep the minimum feedback requirement to a single sentence. Everything beyond that should be captured automatically by the SDK, not demanded from the user.</p>

<p>The friction problem compounds on mobile. Unlike web feedback where a user can switch tabs to grab a URL or screenshot, mobile users must leave the app entirely to gather supporting information. <a href="https://embrace.io/blog/you-shouldnt-expect-users-to-deliver-detailed-bug-reports/">Embrace’s engineering blog</a> puts it bluntly: “You shouldn’t expect users to deliver detailed bug reports.” Users lack both the technical expertise and the motivation to document bugs comprehensively. The tool must capture context automatically.</p>

<p>The data backs this up. <a href="https://aqua-cloud.io/bug-reporting-mobile-apps-best-practices/">Aqua Cloud’s research on mobile bug reporting</a> found that apps with in-app feedback see up to 750% higher response rates compared to traditional support channels, primarily because they remove friction at the moment of frustration. <a href="https://refiner.io/blog/in-app-survey-response-rates/">Refiner’s 2025 analysis</a> puts hard numbers on the platform gap: mobile in-app prompts achieve a 36.14% response rate versus 26.48% for web, confirming that meeting users where they already are dramatically increases participation.</p>

<p><strong>How to fix it:</strong></p>

<ol>
  <li>
    <p><strong>Make the description field the only required field.</strong> Everything else (device info, logs, screenshots, app version) should be captured automatically. Critic differentiates here: every report automatically includes battery level, memory metrics, disk space, network connectivity, OS version, CPU usage, and 500 lines of console logs without the user doing anything beyond typing their description.</p>
  </li>
  <li>
    <p><strong>Attach user identity programmatically instead of requiring authentication to submit.</strong> Critic’s arbitrary JSON metadata lets you inject user IDs, session tokens, feature flags, or any other context at SDK initialization, rather than asking users to log in before they can report a bug.</p>
  </li>
  <li>
    <p><strong>Pre-capture the screenshot.</strong> When the feedback form opens, the user should see a screenshot already attached. Critic’s native SDKs include built-in screenshot capture utilities that handle this automatically. If users have to take a screenshot manually, switch to the form, and attach it, most will abandon the process.</p>
  </li>
</ol>

<h3 id="cause-4-sdk-excluded-from-production-builds">Cause 4: SDK Excluded from Production Builds</h3>

<p>This is the most common <em>technical</em> cause, and it’s particularly insidious because everything works perfectly on the developer’s device.</p>

<p><strong>How it happens:</strong> The feedback SDK dependency is scoped to debug-only configuration. On Android, this means <code class="language-plaintext highlighter-rouge">debugImplementation</code> instead of <code class="language-plaintext highlighter-rouge">implementation</code> in your Gradle file. On iOS, the pod is conditionally included only for debug configurations in the Podfile. On Flutter, the package ends up under <code class="language-plaintext highlighter-rouge">dev_dependencies:</code> instead of <code class="language-plaintext highlighter-rouge">dependencies:</code> in <code class="language-plaintext highlighter-rouge">pubspec.yaml</code>. The SDK compiles into your development builds, you test it, it works, and it’s completely absent from the production APK or IPA your users download.</p>

<p><strong>The ProGuard/R8 trap (Android):</strong> Even if the dependency is correctly scoped to all build variants, ProGuard or R8 code shrinking can strip or obfuscate SDK classes in release builds. If the SDK uses reflection (and many do for JSON parsing and annotation processing) R8 may rename or remove classes it considers unused, causing silent failures. No crash, no error log, just no feedback form. This is a <a href="https://www.guardsquare.com/manual/troubleshooting/troubleshooting">well-documented pattern</a> across third-party Android SDKs: everything works in debug where R8 is off by default, and silently breaks in release.</p>

<p><strong>How to fix it:</strong></p>

<ol>
  <li>
    <p><strong>Android:</strong> Confirm the dependency uses <code class="language-plaintext highlighter-rouge">implementation</code> (not <code class="language-plaintext highlighter-rouge">debugImplementation</code>) in your <code class="language-plaintext highlighter-rouge">build.gradle</code>. Add ProGuard/R8 keep rules for the SDK’s packages (e.g., <code class="language-plaintext highlighter-rouge">-keep class io.inventiv.critic.** { *; }</code> for Critic).</p>
  </li>
  <li>
    <p><strong>iOS:</strong> Check your Podfile for conditional configuration blocks that might exclude the Critic pod from release builds.</p>
  </li>
  <li>
    <p><strong>Flutter:</strong> Open <code class="language-plaintext highlighter-rouge">pubspec.yaml</code> and verify <code class="language-plaintext highlighter-rouge">inventiv_critic_flutter</code> is under <code class="language-plaintext highlighter-rouge">dependencies:</code>, not <code class="language-plaintext highlighter-rouge">dev_dependencies:</code>.</p>
  </li>
  <li>
    <p><strong>Verification:</strong> Install the release/production build on a physical device and attempt to trigger the feedback form. If it fails to appear, the SDK isn’t in the build.</p>
  </li>
</ol>

<h3 id="cause-5-incorrect-api-token-or-initialization-error">Cause 5: Incorrect API Token or Initialization Error</h3>

<p>Many SDKs fail silently when the API token is invalid: no crash, no error dialog, just no feedback form. The SDK initializes, detects the bad token on the first API call, and quietly disables itself. You won’t see a stack trace because the SDK handled the error gracefully (too gracefully).</p>

<p>Common variations:</p>

<ul>
  <li><strong>Environment mismatch:</strong> Using a staging API token in a production build, or a production token in development.</li>
  <li><strong>Copy-paste errors:</strong> A trailing space or newline character in the token string, invisible in your IDE.</li>
  <li><strong>Initialization order:</strong> Some SDKs must be initialized before other frameworks, especially on iOS, where multiple libraries may swizzle the same system hooks. Initializing your feedback SDK early in the app lifecycle avoids these conflicts.</li>
</ul>
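<p>The copy-paste pitfall is cheap to guard against in code. A minimal plain-Kotlin sketch (the helper names are ours, not part of any SDK) that catches stray whitespace before the token ever reaches an initializer:</p>

```kotlin
// Detect the invisible copy-paste errors described above: a token
// that "looks right" in the IDE but carries stray whitespace.
fun hasHiddenWhitespace(token: String): Boolean =
    token != token.trim()

// Normalize before passing the token to the SDK initializer.
fun sanitizeToken(raw: String): String {
    require(raw.isNotBlank()) { "API token is empty" }
    return raw.trim()
}

fun main() {
    println(hasHiddenWhitespace("abc123"))    // false
    println(hasHiddenWhitespace("abc123\n"))  // true
    println(sanitizeToken("  abc123 "))       // abc123
}
```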

<p><strong>How to fix it:</strong></p>

<ol>
  <li>
    <p><strong>Verify the API token directly</strong> by calling Critic’s <code class="language-plaintext highlighter-rouge">POST /api/v2/ping</code> endpoint with your token. If it returns an error, the token is wrong or expired.</p>
  </li>
  <li>
    <p><strong>Check initialization order.</strong> Ensure the bug reporting SDK is initialized early in the app lifecycle: in <code class="language-plaintext highlighter-rouge">Application.onCreate()</code> (Android), <code class="language-plaintext highlighter-rouge">application(_:didFinishLaunchingWithOptions:)</code> (iOS), or <code class="language-plaintext highlighter-rouge">main()</code> (Flutter), before other framework initializations.</p>
  </li>
  <li>
    <p><strong>Audit environment-specific tokens.</strong> If you use different API tokens per environment, confirm the production build is using the production token. A build configuration mismatch here will silently break all user-facing feedback.</p>
  </li>
</ol>
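<p>Step 1 can be scripted. The sketch below only builds the verification request; the endpoint path comes from the article, but the bearer-style <code>Authorization</code> header is our assumption — confirm the real authentication scheme in Critic’s API docs before relying on it:</p>

```kotlin
// Describe the token-verification call from step 1. Only the
// endpoint path is taken from the article; the auth header shown
// here is an assumption, not Critic's documented scheme.
data class PingRequest(
    val method: String,
    val url: String,
    val headers: Map<String, String>,
)

fun buildPingRequest(token: String): PingRequest = PingRequest(
    method = "POST",
    url = "https://critic.inventiv.io/api/v2/ping",
    headers = mapOf("Authorization" to "Bearer ${token.trim()}"),
)

fun main() {
    val req = buildPingRequest("my-api-token")
    println("${req.method} ${req.url}")
    // Send with any HTTP client; a non-2xx response means the token
    // is wrong, expired, or scoped to a different environment.
}
```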

<h3 id="cause-6-shake-detection-disabled-or-unreliable">Cause 6: Shake Detection Disabled or Unreliable</h3>

<p>A developer disabled shake detection for a specific screen (a game scene with motion controls, a map with tilt gestures, a fitness feature using the accelerometer) and forgot to re-enable it. Or a configuration flag was set to <code class="language-plaintext highlighter-rouge">false</code> during debugging and never toggled back.</p>

<p><strong>The device sensitivity problem:</strong> Even when shake detection is enabled, devices respond differently. Accelerometer sensitivity varies between manufacturers and models; some phones require a significantly more vigorous shake than the developer’s primary test device. Testing only on a flagship phone in the office misses the long tail of budget and mid-range devices your users actually carry.</p>

<p><strong>How to fix it:</strong></p>

<ol>
  <li>
    <p><strong>Grep your codebase</strong> for any programmatic disable of shake detection. Search for <code class="language-plaintext highlighter-rouge">isAllowShake</code>, <code class="language-plaintext highlighter-rouge">setShakeEnabled</code>, <code class="language-plaintext highlighter-rouge">enableShake(false)</code>, or your SDK’s equivalent configuration flag.</p>
  </li>
  <li>
    <p><strong>Test on multiple physical devices</strong>, not just your primary development phone. The shake threshold varies meaningfully between manufacturers and models.</p>
  </li>
  <li>
    <p><strong>Provide a fallback trigger.</strong> A visible button or menu item ensures that even when shake detection fails or feels unreliable on a particular device, users still have a path to submit feedback.</p>
  </li>
</ol>
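<p>The device-sensitivity problem is visible in the math most shake detectors use: compare the acceleration magnitude, minus gravity, against a fixed threshold. A pure-Kotlin sketch (the threshold value is illustrative, not Critic’s; real detectors also debounce over time):</p>

```kotlin
import kotlin.math.sqrt

const val GRAVITY = 9.81 // m/s^2

// A shake detector typically measures how far the acceleration
// magnitude deviates from resting gravity. A threshold tuned on a
// flagship phone may never fire on budget hardware whose
// accelerometer reports weaker peaks.
fun isShake(x: Double, y: Double, z: Double, threshold: Double = 12.0): Boolean {
    val magnitude = sqrt(x * x + y * y + z * z)
    return magnitude - GRAVITY > threshold
}

fun main() {
    println(isShake(0.0, 9.81, 0.0))  // false: phone at rest
    println(isShake(18.0, 22.0, 5.0)) // true: vigorous shake
}
```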

<h3 id="cause-7-missing-network-permissions-android">Cause 7: Missing Network Permissions (Android)</h3>

<p>The <code class="language-plaintext highlighter-rouge">INTERNET</code> permission is missing from <code class="language-plaintext highlighter-rouge">AndroidManifest.xml</code>, or the permission declaration has a case-sensitivity error. Android silently denies the network request: the feedback form appears, the user writes their report, taps submit… and nothing happens. No error message, no retry prompt. The report vanishes.</p>

<p><strong>The debug-vs-release divergence:</strong> In some cross-platform frameworks, internet permissions are automatically added for debug builds but omitted from release builds. Everything works in development. Everything fails silently in production.</p>

<p><strong>How to fix it:</strong></p>

<ol>
  <li>
    <p>Verify <code class="language-plaintext highlighter-rouge">&lt;uses-permission android:name="android.permission.INTERNET"/&gt;</code> is present in your <code class="language-plaintext highlighter-rouge">AndroidManifest.xml</code> with correct casing.</p>
  </li>
  <li>
    <p><strong>Test the full submission flow</strong> (not just SDK initialization) on a release build on a physical device. Submit a report and verify it appears in the dashboard.</p>
  </li>
</ol>
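<p>For reference, the declaration sits as a direct child of the <code>&lt;manifest&gt;</code> element, outside <code>&lt;application&gt;</code>:</p>

```xml
<!-- AndroidManifest.xml: required for any network request.
     The permission name is case-sensitive; a lowercase variant
     like android.permission.internet is silently ignored. -->
<uses-permission android:name="android.permission.INTERNET" />
```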

<h2 id="step-by-step-resolution-quick-reference-checklist">Step-by-Step Resolution (Quick-Reference Checklist)</h2>

<p>If you want the fast version, work through this list in order. Each step maps to the detailed cause above:</p>

<ol>
  <li><strong>Submit a test report from a physical device</strong> to confirm the entire pipeline works end-to-end <em>(Causes 5, 6, 7)</em></li>
  <li><strong>Verify the ping endpoint / install registration</strong> to confirm the SDK is initialized and the token is valid <em>(Cause 5)</em></li>
  <li><strong>Install the production build and try to trigger the feedback form</strong> to confirm the SDK is present in release builds <em>(Cause 4)</em></li>
  <li><strong>Search your codebase for shake disable flags</strong> to confirm shake isn’t programmatically disabled <em>(Cause 6)</em></li>
  <li><strong>Ask three beta testers: “Do you know you can shake your phone to report a bug?”</strong> to confirm discoverability <em>(Cause 1)</em></li>
  <li><strong>Check your dashboard for unanswered reports</strong> to confirm you aren’t losing repeat reporters to silence <em>(Cause 2)</em></li>
  <li><strong>Count the required fields in your feedback form.</strong> If it’s more than one (description), you have a friction problem <em>(Cause 3)</em></li>
</ol>

<h2 id="if-the-problem-persists">If the Problem Persists</h2>

<p>When none of the seven causes above explain your empty dashboard:</p>

<ul>
  <li><strong>Check SDK-specific issue trackers.</strong> GitHub issues for <a href="https://github.com/twinsunllc/inventiv-critic-android">Critic’s Android SDK</a>, <a href="https://github.com/twinsunllc/critic_flutter">Critic’s Flutter SDK</a>, or whichever tool you’re using may document known bugs or device-specific incompatibilities.</li>
  <li><strong>Inspect network traffic.</strong> Use Charles Proxy or Android Studio’s Network Inspector to confirm the bug report HTTP request is actually being sent and what response the server returns. A 401 means bad token. A timeout means network issue. A 200 with no dashboard entry means a server-side processing problem.</li>
  <li><strong>What to include in a support request:</strong> SDK version, platform and OS version, build type (debug or release), API token validation result (ping response), device model, and whether the feedback form appears at all vs. appears but submission fails silently.</li>
  <li><strong>Critic-specific:</strong> Check the <a href="https://critic.inventiv.io/api-docs">API docs</a> for endpoint troubleshooting, or reach out through the web dashboard.</li>
</ul>

<h2 id="prevention-ensuring-high-submission-rates-from-day-one">Prevention: Ensuring High Submission Rates from Day One</h2>

<p>This checklist ensures your feedback pipeline works from the moment you ship. Don’t wait for an empty dashboard before you start diagnosing.</p>

<ol>
  <li>
    <p><strong>Verify SDK initialization on every build variant.</strong> Add a CI check that builds release and confirms the feedback SDK dependency is included; not just that it compiles, but that the SDK classes are present in the final artifact.</p>
  </li>
  <li>
    <p><strong>Test the full submission flow on a physical device before every release.</strong> Submit a report and verify it arrives in the dashboard with device telemetry attached.</p>
  </li>
  <li>
    <p><strong>Add a one-time onboarding tooltip</strong> teaching users about shake-to-report on first app launch. One sentence, dismiss on tap, never show again.</p>
  </li>
  <li>
    <p><strong>Add a visible fallback trigger</strong> (a settings menu item, a help screen option, or a floating button) in addition to shake. Discoverability beats elegance.</p>
  </li>
  <li>
    <p><strong>Enable email notifications for new reports</strong> so your team responds within 24 hours. An unmonitored dashboard is the same as no dashboard.</p>
  </li>
  <li>
    <p><strong>Brief your beta testers explicitly.</strong> In your beta invite email, include one sentence: “Found a bug? Shake your phone to report it instantly; we’ll get device info and logs automatically.”</p>
  </li>
  <li>
    <p><strong>Attach custom metadata automatically.</strong> Inject user IDs, session data, and app state via the SDK so you can proactively follow up with testers who haven’t submitted anything. Critic’s arbitrary JSON metadata accepts any key-value pairs you need: user email, subscription tier, feature flags, A/B test variant.</p>
  </li>
  <li>
    <p><strong>Send a weekly update to beta testers</strong> showing what you’ve fixed based on their feedback. <a href="https://moldstud.com/articles/p-complete-guide-to-beta-testing-strategies-for-mobile-apps-answering-developers-faqs">Moldstud’s research</a> found that 68% of testers prefer real-time communication; a weekly digest is the minimum viable feedback loop.</p>
  </li>
  <li>
    <p><strong>Use tools that minimize user effort.</strong> Critic’s automatic capture of battery, memory, disk, network, OS, 500 lines of console logs, and screenshots means the user’s only job is typing one sentence. The less you ask of users, the more you’ll hear from them.</p>
  </li>
  <li>
    <p><strong>Close the loop.</strong> Every report acknowledged. Every fix communicated. Users who feel heard become your most reliable testers.</p>
  </li>
</ol>
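<p>Item 7 is a one-time setup in code. A hedged sketch of the metadata payload: the field names below are examples of app-specific context, not a required schema, since Critic accepts arbitrary JSON key-value pairs:</p>

```kotlin
// Build the arbitrary metadata attached to each report. Every key
// here is an example; attach whatever your app knows at the moment
// of frustration (user ID, tier, flags, A/B variant, session data).
fun buildReportMetadata(
    userId: String,
    subscriptionTier: String,
    abTestVariant: String,
    featureFlags: List<String>,
): Map<String, Any> = mapOf(
    "user_id" to userId,
    "subscription_tier" to subscriptionTier,
    "ab_test_variant" to abTestVariant,
    "feature_flags" to featureFlags,
)

fun main() {
    val meta = buildReportMetadata("u_42", "pro", "checkout_v2", listOf("new_cart"))
    println(meta["subscription_tier"]) // pro
}
```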

<p>The pattern across all ten items is the same: the feedback pipeline has two ends. Most developers optimize the technical end (SDK initialization, API tokens, build configuration) and neglect the human end. An empty dashboard almost always means the human end needs work. The tools that win are the ones that make both ends effortless: one line of code for the developer, one shake for the user, and automatic device context that neither of them had to think about.</p>

<p><a href="https://critic.inventiv.io/users/sign_up">Start a free 30-day trial of Critic</a>: one line of code, automatic device telemetry, and your first actionable bug report in minutes. No credit card required.</p>]]></content><author><name>dave_lane</name></author><category term="Critic" /><summary type="html"><![CDATA[Most empty bug-reporting dashboards stem from discoverability and trust failures, not broken SDKs. A ranked troubleshooting guide.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://critictracking.com/assets/images/posts/2026-03-26-you-set-up-in-app-bug-reporting-but-nobody-s-submitting-reports-here-s-how-to-fix-it.webp" /><media:content medium="image" url="https://critictracking.com/assets/images/posts/2026-03-26-you-set-up-in-app-bug-reporting-but-nobody-s-submitting-reports-here-s-how-to-fix-it.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Apple’s AI Review Summaries Put Your Worst Bugs on Display</title><link href="https://critictracking.com/blog/apples-ai-review-summaries-put-your-worst-bugs-on-display/" rel="alternate" type="text/html" title="Apple’s AI Review Summaries Put Your Worst Bugs on Display" /><published>2026-03-25T13:00:00+00:00</published><updated>2026-03-27T15:37:48+00:00</updated><id>https://critictracking.com/blog/apples-ai-review-summaries-put-your-worst-bugs-on-display</id><content type="html" xml:base="https://critictracking.com/blog/apples-ai-review-summaries-put-your-worst-bugs-on-display/"><![CDATA[<p>You shipped the fix three days ago. The crash that hit a handful of users during checkout? Gone. Patched, tested, released.</p>

<p>But new users visiting your App Store page today still see it: <em>“Users frequently report crashes during checkout and loss of saved data.”</em> The AI-generated summary hasn’t caught up. The three users who left those reviews never updated them. Your fix is invisible to every potential download.</p>

<p>Since iOS 18.4 launched in March 2025, Apple’s AI-generated review summaries distill recurring complaints into a prominent paragraph on every product page, <a href="https://www.macrumors.com/2025/03/06/ios-18-4-ai-review-summaries-app-store/">refreshed at least weekly</a> and displayed above individual reviews. Google Play rolled out the same feature in late 2025 under a <a href="https://android.gadgethacks.com/news/google-play-store-ai-review-summaries-roll-out-now/">“Users are saying”</a> heading. A handful of bug reports from frustrated users now shapes the first thing every potential downloader reads, on both storefronts.</p>

<p>The old playbook (fix the bug, reply to the review, move on) has a structural flaw. This article breaks down why it fails under AI summarization, why common workarounds fall short, and what actually reduces negative review volume at the source.</p>

<h2 id="how-apples-ai-review-summaries-work">How Apple’s AI Review Summaries Work</h2>

<p>The system is a multi-stage LLM pipeline built to surface the themes users care about most.</p>

<p>According to <a href="https://machinelearning.apple.com/research/app-store-review">Apple’s machine learning research paper</a>, the system first filters out reviews containing spam, profanity, or fraud signals. The remaining reviews then pass through four LLM-powered stages:</p>

<ol>
  <li><strong>Insight Extraction:</strong> LoRA-fine-tuned models distill each review into atomic “insights” (standardized, single-aspect statements with normalized phrasing and sentiment). A review saying “keeps crashing when I try to check out, really frustrating, also the UI is ugly” becomes two separate insights: one about crashes, one about UI design.</li>
  <li><strong>Dynamic Topic Modeling:</strong> Groups similar insights into themes, deduplicates, and identifies prominent topics. The system explicitly distinguishes between “App Experience” topics (features, performance, crashes, design) and “Out-of-App Experience” topics (like food quality in a delivery app). App Experience topics are prioritized.</li>
  <li><strong>Topic and Insight Selection:</strong> Selects topics by popularity and aligns them with the app’s overall rating distribution, choosing representative insights for each selected topic.</li>
  <li><strong>Summary Generation:</strong> A fine-tuned model using Direct Preference Optimization (DPO) produces a 100 to 300 character summary paragraph from the selected insights, evaluated by thousands of human raters for helpfulness, composition, and safety.</li>
</ol>

<p>This pipeline amplifies bug complaints for a specific reason: crashes, performance issues, and broken features fall squarely into the “App Experience” category, which the system weights most heavily. When multiple reviews mention the same crash, the topic modeling clusters them into a single prominent theme. The selection algorithm then surfaces it because it is both popular <em>and</em> in the prioritized category.</p>

<p>The summaries <a href="https://www.macrumors.com/2025/03/06/ios-18-4-ai-review-summaries-app-store/">refresh at least once a week</a>. A bug complaint posted on Monday shapes the summary every visitor sees for at least seven days. If the reviews that mention the bug are never updated or diluted by enough new positive reviews, that complaint can dominate the summary for weeks or months.</p>

<h3 id="a-cross-platform-problem">A Cross-Platform Problem</h3>

<p>Google Play rolled out identical AI review summaries in <a href="https://android.gadgethacks.com/news/google-play-store-ai-review-summaries-roll-out-now/">Play Store v48.5</a> in October 2025. Under a “Users are saying” heading, the system condenses positive and negative feedback into a single conversational paragraph, with interactive chips for specific topics like “performance” and “user interface.” If you ship on both iOS and Android, bug complaints are algorithmically amplified on both storefronts simultaneously.</p>

<h2 id="why-fix-it-and-move-on-falls-short">Why “Fix It and Move On” Falls Short</h2>

<p>The traditional response to a negative review (ship a patch, reply to the reviewer, hope they revise their rating) made sense when individual reviews scrolled off the page. Under AI summarization, the math changes.</p>

<h3 id="the-stale-complaint-loop">The Stale Complaint Loop</h3>

<p>Users almost never update their reviews after a bug is fixed. As <a href="https://www.apptweak.com/en/aso-blog/app-store-reviews">AppTweak’s 2026 review guide</a> notes, “many users are unaware of this capability and rarely return to update their original app reviews.” The review that says “crashes every time I open settings” stays at one star even after you ship the patch. The AI summary keeps ingesting it every weekly refresh.</p>

<p>This creates a self-reinforcing cycle:</p>

<ol>
  <li>Users hit a bug and leave negative reviews</li>
  <li>You ship the fix</li>
  <li>Reviewers leave their reviews untouched; they have moved on emotionally, or uninstalled the app entirely</li>
  <li>The AI summary keeps surfacing the original complaints</li>
  <li>New potential users see the complaint and some skip the download</li>
  <li>Fewer new users means fewer new positive reviews to dilute the negative signal</li>
  <li>The summary persists, looping back to step 4</li>
</ol>

<p>The dilution math is punishing. On Google Play, <a href="https://appfollow.io/blog/ratings-and-reviews-what-affects-your-conversion-rate">offsetting a single negative review requires at least 10 positive ratings</a>. For a small app with low review velocity (maybe 5 to 10 new reviews per month), a cluster of three bug-related one-star reviews can take months to statistically overwhelm. The AI summary ignores that the bug was fixed in 48 hours. It only cares about the review corpus.</p>
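<p>The arithmetic is easy to make concrete. Using the ten-positives-per-negative ratio above, a back-of-envelope model (not a prediction of either store’s summary algorithm):</p>

```kotlin
import kotlin.math.ceil

// Back-of-envelope dilution model from the figures above:
// each negative review takes roughly 10 positive ratings to offset.
fun monthsToDilute(
    negativeReviews: Int,
    positivesPerMonth: Int,
    offsetsPerNegative: Int = 10,
): Int {
    val positivesNeeded = negativeReviews * offsetsPerNegative
    return ceil(positivesNeeded.toDouble() / positivesPerMonth).toInt()
}

fun main() {
    // Three bug-driven one-star reviews, ~7 new positives a month:
    println(monthsToDilute(3, 7)) // 5 -- about five months
}
```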

<h3 id="the-conversion-impact">The Conversion Impact</h3>

<p><a href="https://appfollow.io/blog/ratings-and-reviews-what-affects-your-conversion-rate">79% of users check an app’s rating before downloading</a>. The AI summary is now the first review content they see; a snapshot that shapes first impressions before anyone scrolls to an individual review.</p>

<p>Apps with ratings below 3 stars lose nearly every potential download. Improving from 1 or 2 stars to 4 or 5 stars can yield <a href="https://appfollow.io/blog/ratings-and-reviews-what-affects-your-conversion-rate">six to seven times more downloads</a>. As one ASO analysis firm put it, if the AI summary <a href="https://www.theasoproject.com/blog/app-store-ai-review-summaries/">“latches onto critical comments,”</a> staying on top of app quality is “non-negotiable.”</p>

<h3 id="the-speed-asymmetry">The Speed Asymmetry</h3>

<p>It takes days to accumulate a cluster of negative reviews about a bug. It takes weeks or months of positive reviews to dilute them out of the summary. The window between “bug reported in reviews” and “summary drops the bug mention” stretches far longer than the time to ship the fix. It includes the fix <em>plus</em> the time to accumulate enough new positive reviews to shift the AI’s topic modeling. For small apps, this asymmetry is brutal.</p>

<h2 id="why-the-common-workarounds-fail">Why the Common Workarounds Fail</h2>

<p>If you have already been thinking about solutions, you have probably considered these. Each one misses the structural problem.</p>

<h3 id="review-gating-violates-apples-guidelines">Review-Gating Violates Apple’s Guidelines</h3>

<p>Review-gating (routing happy users to the App Store review prompt and unhappy users to a private feedback form) sounds logical. It is also prohibited.</p>

<p>Apple requires that <a href="https://developer.apple.com/documentation/storekit/skstorereviewcontroller"><code class="language-plaintext highlighter-rouge">SKStoreReviewController</code></a> is the only approved method for requesting reviews, limited to three prompts per user per year. Section 1.1.7 of Apple’s <a href="https://developer.apple.com/app-store/review/guidelines/">App Store Review Guidelines</a> prohibits conditioning functionality on reviews or selectively funneling positive sentiment. Google has similar restrictions. Getting caught risks app removal; a worse outcome than the negative reviews you were trying to prevent.</p>

<h3 id="review-response-tools-are-reactive">Review Response Tools Are Reactive</h3>

<p>Tools that help you reply to negative reviews quickly and professionally are useful hygiene. But by the time you reply, the AI summary has already incorporated the complaint. Your reply leaves the review’s star rating and text unchanged. The AI summary analyzes <em>user reviews</em> and ignores developer responses. Your thoughtful reply explaining the fix lives below the fold while the summary leads with the complaint above it.</p>

<h3 id="post-fix-outreach-has-diminishing-returns">Post-Fix Outreach Has Diminishing Returns</h3>

<p>Replying to a negative review with “We fixed this in v2.3; please consider updating your review!” feels proactive. In practice, users who wrote a frustrated one-star review three weeks ago have emotionally moved on. Many have uninstalled the app. They are unlikely to monitor their App Store reviews for your response. You are fighting the Stale Complaint Loop one review at a time, and losing.</p>

<h3 id="prompted-positive-reviews-miss-the-root-cause">Prompted Positive Reviews Miss the Root Cause</h3>

<p>Using <code class="language-plaintext highlighter-rouge">SKStoreReviewController</code> more aggressively to generate positive reviews that dilute negative ones is limited by Apple’s three-prompts-per-year cap. The system decides whether to actually display the prompt. You have minimal control over timing, making it hard to counter a burst of negative reviews. And at a deeper level, you are treating a symptom: the bug that caused the complaints still exists in production until someone reports it with enough context to actually reproduce.</p>

<h2 id="the-real-problem-is-friction">The Real Problem Is Friction</h2>

<p>Negative reviews are a symptom. The real issue: frustrated users have no lower-friction path to tell you about the bug.</p>

<p>A mobile user who just hit a crash has limited options:</p>

<ul>
  <li><strong>Email support:</strong> Leave the app, open email, compose a message, describe what happened from memory, maybe attach a screenshot. High effort.</li>
  <li><strong>Visit a support page:</strong> Leave the app, open a browser, find the support URL, fill out a form. High effort.</li>
  <li><strong>Leave an App Store review:</strong> Tap the rating prompt (if it appears), write a sentence, submit. Lower effort, and public.</li>
  <li><strong>Do nothing:</strong> Lowest effort. Uninstall silently.</li>
</ul>

<p>Mobile users face higher friction to report bugs than web users. A web user can open a support widget without leaving the page. A mobile user must exit the app, switch contexts, and describe something they can no longer see.</p>

<p>Three outcomes follow, all bad:</p>

<p><strong>Silent churn.</strong> According to a QualiTest Group survey conducted with Google Consumer Surveys, 51% of users would leave after experiencing just one or a few bugs in a single day. A separate study found that users retry a buggy app <a href="https://www.alphabin.co/blog/mobile-app-testing-uninstall-rates">only three times before uninstalling</a>. Most leave without a word. You learn about the bug from a rating dip weeks later.</p>

<p><strong>Terse negative review.</strong> The minority who do speak up leave a one-star review saying “crashes constantly,” with zero device info, zero steps to reproduce, nothing actionable. This becomes AI-summarization fuel.</p>

<p><strong>Actionable bug report.</strong> Almost nonexistent without tooling. According to a Software Reliability report by Undo, 91% of developers report unresolved bugs in their backlog due to irreproducibility.</p>

<p>The App Store review form is, perversely, the <em>lowest-friction feedback mechanism</em> available to most mobile app users. Without an easier option inside the app, you are funneling frustration toward the one place it does the most damage.</p>

<p>As a <a href="https://news.ycombinator.com/item?id=44225352">Hacker News thread with 44K+ engagement</a> put it: “Most users won’t report bugs unless you make it stupidly easy.” The friction barrier comes down to mechanics, not willingness.</p>

<p>There is a behavioral dimension too. Shaking happens naturally when users are frustrated; it is a gesture that matches their emotional state. Capturing feedback at the moment of frustration, through a physical gesture the user is already inclined to make, removes the cognitive overhead of deciding <em>how</em> to report.</p>

<p>In-app surveys average 30%+ completion rates compared to 5 to 10% for post-session email surveys (Zonka Feedback). That is a 3 to 6x improvement, driven entirely by reducing friction and capturing feedback while the user is still inside the experience.</p>

<h2 id="intercept-frustration-inside-the-app">Intercept Frustration Inside the App</h2>

<p>Give users a feedback path that is easier than the App Store, while automatically capturing the device context developers need to fix the bug fast.</p>

<p>Five principles separate approaches that work from those that fall flat:</p>

<p><strong>Lower the friction below the App Store.</strong> If submitting feedback inside the app requires fewer steps than writing a review, most users will take the easier path. The feedback mechanism must require zero setup from the user: no leaving the app, no composing an email, no manually attaching screenshots.</p>

<p><strong>Capture device context automatically.</strong> Users rarely volunteer their OS version, memory state, or network conditions. The tool must capture this silently: battery, memory, disk, network, OS, device model, app version, and ideally console logs. This is what makes the difference between “it crashed” and a reproducible bug report.</p>

<p><strong>Close the loop fast enough to beat the summary refresh.</strong> With weekly AI summary refreshes, you have a seven-day window. If a bug report arrives with full device telemetry on Monday and the fix ships by Thursday, you have addressed the issue before the next summary cycle. Without device context, reproduction alone can take longer than a week.</p>

<p><strong>Complement crash reporting.</strong> Automated crash reporters like Crashlytics and Sentry catch <em>what broke</em>. User-initiated feedback captures <em>what users experienced</em>: UX bugs, confusing flows, performance issues, feature gaps that never trigger a crash but absolutely trigger one-star reviews. Both signals are needed.</p>

<p><strong>Work out of the box.</strong> If the feedback tool requires building custom UI, most small teams will deprioritize it. The default experience must be complete: install, initialize, done. A built-in shake-to-report form is the baseline.</p>

<p>A <a href="https://www.sciencedirect.com/science/article/pii/S0167811624000922">study analyzing over one million reviews across 460 apps</a>, published in the <em>Journal of Interactive Marketing</em>, found that the rewards for responding to user feedback and the penalties for ignoring it are substantial. Incorporating user feedback into product development measurably improves ratings over time. The question is whether that feedback arrives as a private, actionable report or a public, context-free one-star review.</p>

<h2 id="how-critic-implements-this-approach">How Critic Implements This Approach</h2>

<p><a href="https://critictracking.com/">Critic</a> is an in-app feedback platform built for small mobile teams that need actionable bug reports without enterprise complexity or pricing. It maps directly to the five principles above.</p>

<p><strong>Shake-to-report, zero configuration.</strong> User shakes their device. A feedback form appears. The user types one sentence. A complete report is submitted: no leaving the app, no composing an email, no manually attaching anything. The built-in UI works out of the box with zero UI code. The friction is lower than opening the App Store.</p>

<p><strong>Automatic device telemetry on every report.</strong> Every report captures battery status, memory metrics (active, free, inactive, total, wired), disk space, network connectivity (WiFi, cellular, carrier), OS version, CPU usage, device hardware, and app version. The user does nothing beyond describing the issue. On Android, the last 500 logcat entries are attached automatically. On iOS, stderr and stdout are captured. The developer gets a reproducible bug report without asking the user a single question.</p>

<p><strong>Custom metadata for app-specific context.</strong> Critic accepts arbitrary JSON metadata on every report: user ID, feature flags, A/B test variant, subscription tier, session data, order ID. Whatever your app knows at the moment of frustration gets attached to the report. This goes beyond standard telemetry to capture the specific context <em>your</em> app needs for reproduction.</p>

<p><strong>Closing the loop within the summary window.</strong> One-line SDK integration means feedback collection starts in minutes. Automatic device telemetry means reproduction happens in hours. Bug reported Monday, reproduced Monday afternoon, fix shipped Wednesday. That beats the next weekly summary refresh. Compare this to a one-star review that says “it crashed” with zero device info, where reproduction alone can consume the entire seven-day summary cycle.</p>

<p><strong>Complementary to Crashlytics and Sentry.</strong> Critic captures user-initiated feedback that automated crash reporters miss entirely. Users know about bugs that never crash the app: the confusing flow, the button that fails to respond, the data that loads incorrectly. A team running Crashlytics (free) alongside Critic ($20/month) has a complete feedback pipeline for under $25/month.</p>

<p><strong>Multi-platform, one dashboard.</strong> SDKs for iOS, Android, Flutter, and JavaScript. One line of code to initialize on each platform. All reports flow into a single web dashboard with commenting, team invitations, role-based access, and email notifications.</p>

<p><strong>Pricing that works for small teams.</strong> $20/month per app. Unlimited seats on the Standard plan. Full feature access during the 30-day free trial. No credit card required to start. Critic is built for teams that need the core feedback loop (shake-to-report, device context, logs, screenshots, metadata, and a management dashboard) without paying for enterprise features they will never use.</p>

<p><strong>What a report looks like in practice:</strong> A developer opens the Critic dashboard and sees a report titled “App freezes on checkout.” Below the user’s description are rows of automatically captured data: iPhone 14, iOS 17.4, 12% battery, 1.2 GB free memory, cellular connection on T-Mobile, app version 2.3.1. Below that, 500 lines of console logs showing the exact sequence of events leading to the freeze. Screenshots are attached with automatic MIME type detection. Reproduction starts immediately.</p>

<h2 id="results-you-can-expect">Results You Can Expect</h2>

<p>In-app feedback will still miss some users who go straight to the App Store. But it shifts the ratio, and under AI summarization, the ratio determines whether the summary leads with your bugs or your strengths.</p>

<p><strong>Higher feedback volume through private channels.</strong> In-app surveys average 30%+ completion rates compared to 5 to 10% for external channels like post-session email surveys (Zonka Feedback). More reports submitted privately means fewer reports submitted publicly as App Store reviews.</p>

<p><strong>Faster reproduction and resolution.</strong> With full device telemetry, “cannot reproduce” becomes rare. The three-hour investigation triggered by “it crashed” becomes a twenty-minute fix informed by exact device state, memory pressure, network conditions, and 500 lines of logs. Modern in-app bug reporting SDKs reduce resolution time by up to 40% compared to manual reporting methods (Aqua Cloud).</p>

<p><strong>Breaking the stale complaint loop.</strong> Faster fixes plus fewer bug-driven public reviews means the AI summary shifts toward positive themes sooner. If bugs are caught and fixed via private in-app feedback before users resort to the App Store, the negative review volume that feeds the AI summary drops at the source. You prevent the negative signal from being created, rather than trying to dilute it with positive reviews after the fact.</p>

<p><strong>Developer time reclaimed.</strong> Eliminating the “what device are you on?” back-and-forth saves an estimated 2 to 5 hours per week for a small team handling regular bug reports. Every report arrives complete. The conversation goes from “Can you send a screenshot? What OS are you running? Were you on WiFi?” to “I see the issue, fix incoming.”</p>

<p><strong>Frustration intercepted before it goes public.</strong> When users can shake their phone and submit a report in thirty seconds (while the bug is fresh, without leaving the app), they get a resolution path that is faster and more satisfying than navigating to the App Store. One <a href="https://news.ycombinator.com/item?id=9854248">Hacker News discussion</a> reached the same conclusion: in-app feedback results in fewer negative reviews. Give frustrated users a private voice before they reach for the public one.</p>

<p>A realistic expectation: if 70 to 80% of frustrated users who would have left a one-star review instead submit in-app feedback, that is 70 to 80% fewer bug complaints for the AI to summarize. The summary still reflects your app’s reality, but the reality improves because you are fixing bugs faster with better context and catching frustration before it becomes permanent public record.</p>

<hr />

<p>Apple’s AI review summaries turned a handful of bug complaints into a persistent, prominent headline on your product page. Google Play followed suit. The old playbook (fix and move on) fails when the AI keeps surfacing stale complaints that reviewers never update.</p>

<p>Better review management misses the point; the fix is upstream. Give users a path that is easier than the App Store, with automatic device context that lets you fix the bug before the next weekly summary refresh.</p>

<p><a href="https://critictracking.com/">Critic</a> adds in-app feedback with full device telemetry to your iOS, Android, or Flutter app in one line of code. $20/month per app. 30-day free trial, no credit card required. <a href="https://critictracking.com/">Start catching frustration before it goes public</a>.</p>]]></content><author><name></name></author><category term="posts" /><summary type="html"><![CDATA[Apple's AI review summaries amplify bug complaints for weeks, even after you ship the fix. In-app feedback catches frustration before it goes public.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://critictracking.com/assets/images/apples-ai-review-summaries-put-your-worst-bugs-on-display.webp" /><media:content medium="image" url="https://critictracking.com/assets/images/apples-ai-review-summaries-put-your-worst-bugs-on-display.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How We Built a 1,600-Line Android SDK That Captures Battery, Memory, Disk, and Network Data on Every Bug Report</title><link href="https://critictracking.com/blog/how-we-built-a-1600-line-android-sdk-that-captures-battery-memory-disk-and-network-data-on-every-bug-report/" rel="alternate" type="text/html" title="How We Built a 1,600-Line Android SDK That Captures Battery, Memory, Disk, and Network Data on Every Bug Report" /><published>2026-03-24T13:00:00+00:00</published><updated>2026-03-27T15:37:48+00:00</updated><id>https://critictracking.com/blog/how-we-built-a-1600-line-android-sdk-that-captures-battery-memory-disk-and-network-data-on-every-bug-report</id><content type="html" xml:base="https://critictracking.com/blog/how-we-built-a-1600-line-android-sdk-that-captures-battery-memory-disk-and-network-data-on-every-bug-report/"><![CDATA[<p>A user emails your support inbox: “the app crashed.” No device model. No OS version. No logs. 
You spend forty minutes bouncing between five test devices trying to reproduce something that may only happen at 8% battery on a Galaxy A14 with 200MB of free storage. This is mobile bug reporting by default, and it’s why we built Critic’s Android SDK to capture battery status, memory metrics, disk space, network connectivity, OS version, and 500 lines of logcat automatically with every user-submitted report.</p>

<p>This article walks through the architecture decisions, specific Android APIs, and trade-offs behind a feedback SDK that ships at roughly 1,600 lines of Java. If you’re evaluating whether to build device-context capture yourself or adopt an existing tool, this is the technical analysis that will save you from learning the hard way.</p>

<h2 id="context-and-constraints">Context and Constraints</h2>

<p>When we designed the SDK, we started with three non-negotiable requirements:</p>

<p><strong>1. Minimal footprint.</strong> The average Android app already integrates 15–17 SDKs for analytics, payments, advertising, and engagement. Google’s own analysis of Play Store data shows that <a href="https://medium.com/googleplaydev/shrinking-apks-growing-installs-5d3fcba23ce2">every 6MB increase in APK size reduces install conversion rates by 1%</a>. In emerging markets like India, the impact is steeper: a 10MB reduction correlates with a 2.5% conversion rate increase. Our SDK couldn’t be the one that tipped an app past a download threshold.</p>

<p><strong>2. No background monitoring.</strong> Continuous telemetry agents (the kind used by APM tools and session replay platforms) run persistent services, hold wake locks, and drain battery. We needed device context at the moment a user reports a problem, not a 24/7 stream of metrics the developer didn’t ask for. This constraint shaped every architectural decision.</p>

<p><strong>3. One-line initialization.</strong> The integration path had to be a single method call in the Application class or main Activity. No XML configuration files, no multi-step setup wizards, no mandatory permissions beyond what the host app already declares.</p>

<p>These constraints ruled out approaches like OpenTelemetry’s mobile instrumentation (designed for always-on collection with batch exports) and heavier SDKs that bundle session replay or crash monitoring alongside feedback capture.</p>

<h2 id="architecture-point-in-time-telemetry-vs-continuous-monitoring">Architecture: Point-in-Time Telemetry vs. Continuous Monitoring</h2>

<p>Critic’s SDK uses what we call <strong>point-in-time capture</strong>: device telemetry is collected at the moment a user initiates a bug report, not continuously in the background. This is the single most important architectural decision in the entire codebase, and it has cascading implications for size, battery impact, and complexity.</p>

<h3 id="why-point-in-time-capture-works-for-bug-reports">Why Point-in-Time Capture Works for Bug Reports</h3>

<p>When a user shakes their phone to report a bug, they’re describing something they just experienced. The device state at that moment (battery level, available memory, network type, free disk space) is the state that matters for reproduction. Capturing this snapshot requires a burst of synchronous API calls that complete in single-digit milliseconds. No background threads polling system metrics. No disk buffers accumulating telemetry. No wake locks preventing the CPU from sleeping.</p>

<p>The trade-off is real: we lose what happened <em>before</em> the report. Tools like Bugsee address this with 60-second rolling video buffers, and session replay platforms reconstruct the entire user session. But those capabilities come at a cost: heavier SDKs, higher memory consumption, and battery drain that users notice. For a feedback SDK focused on <em>user-initiated</em> bug reports (not automated crash capture), point-in-time telemetry delivers the vast majority of reproduction value at a fraction of the resource cost.</p>

<h3 id="what-the-sdk-captures-and-the-exact-apis-behind-it">What the SDK Captures (and the Exact APIs Behind It)</h3>

<p>Here’s what arrives in the dashboard when a user submits a report, and the specific Android APIs that produce each data point:</p>

<p><strong>Battery Status</strong> via <code class="language-plaintext highlighter-rouge">BroadcastReceiver</code> registered for <code class="language-plaintext highlighter-rouge">Intent.ACTION_BATTERY_CHANGED</code></p>

<p>The SDK registers a broadcast receiver during initialization. Because <code class="language-plaintext highlighter-rouge">ACTION_BATTERY_CHANGED</code> is a sticky intent, calling <code class="language-plaintext highlighter-rouge">registerReceiver(null, intentFilter)</code> returns the current battery state without waiting for a broadcast event. The returned Intent provides:</p>

<ul>
  <li>Battery level (percentage calculated from <code class="language-plaintext highlighter-rouge">EXTRA_LEVEL</code> / <code class="language-plaintext highlighter-rouge">EXTRA_SCALE</code>)</li>
  <li>Charging status (<code class="language-plaintext highlighter-rouge">EXTRA_STATUS</code>: charging, discharging, full, not charging)</li>
  <li>Charge source (<code class="language-plaintext highlighter-rouge">EXTRA_PLUGGED</code>: USB or AC)</li>
  <li>Battery health (<code class="language-plaintext highlighter-rouge">EXTRA_HEALTH</code>: good, overheat, dead, cold, over voltage, unspecified failure)</li>
</ul>

<p>This is critical for reproduction. A bug that only manifests when the OS throttles CPU under low-battery conditions is invisible without this data.</p>
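<p>The percentage arithmetic itself is plain Java. A minimal sketch, with the two ints standing in for the <code class="language-plaintext highlighter-rouge">EXTRA_LEVEL</code> and <code class="language-plaintext highlighter-rouge">EXTRA_SCALE</code> values read from the sticky intent (<code class="language-plaintext highlighter-rouge">BatteryMath</code> is an illustrative name, not a class in the SDK):</p>

```java
// Illustrative sketch of the battery-percentage math described above.
// The ints stand in for EXTRA_LEVEL / EXTRA_SCALE from the sticky intent;
// BatteryMath is a hypothetical name, not SDK code.
public class BatteryMath {
    static int percent(int level, int scale) {
        if (level < 0 || scale <= 0) return -1; // extra was missing from the intent
        return Math.round(100f * level / scale);
    }

    public static void main(String[] args) {
        System.out.println(percent(47, 100)); // most devices report scale == 100
        System.out.println(percent(12, 25));  // some OEMs use a different scale
    }
}
```

<p>Dividing <code class="language-plaintext highlighter-rouge">EXTRA_LEVEL</code> by <code class="language-plaintext highlighter-rouge">EXTRA_SCALE</code> rather than assuming a 0–100 range is what keeps the reading correct across OEMs.</p>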

<p><strong>Memory Metrics</strong> via <code class="language-plaintext highlighter-rouge">ActivityManager.MemoryInfo</code></p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nc">ActivityManager</span> <span class="n">activityManager</span> <span class="o">=</span> <span class="o">(</span><span class="nc">ActivityManager</span><span class="o">)</span> <span class="n">context</span><span class="o">.</span><span class="na">getSystemService</span><span class="o">(</span><span class="nc">Context</span><span class="o">.</span><span class="na">ACTIVITY_SERVICE</span><span class="o">);</span>
<span class="nc">ActivityManager</span><span class="o">.</span><span class="na">MemoryInfo</span> <span class="n">memoryInfo</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">ActivityManager</span><span class="o">.</span><span class="na">MemoryInfo</span><span class="o">();</span>
<span class="n">activityManager</span><span class="o">.</span><span class="na">getMemoryInfo</span><span class="o">(</span><span class="n">memoryInfo</span><span class="o">);</span>
</code></pre></div></div>

<p>This yields <code class="language-plaintext highlighter-rouge">availMem</code> (bytes of available system RAM), <code class="language-plaintext highlighter-rouge">totalMem</code> (total device RAM, API 16+), and the <code class="language-plaintext highlighter-rouge">lowMemory</code> boolean indicating whether the system considers itself in a low-memory state. The <code class="language-plaintext highlighter-rouge">lowMemory</code> flag is particularly valuable: it tells you whether the OS was actively killing background processes when the user hit the bug, a condition that causes timing-dependent failures developers can rarely reproduce on their 12GB development phones.</p>

<p><strong>Disk Space</strong> via <code class="language-plaintext highlighter-rouge">Environment.getExternalStorageDirectory()</code> + <code class="language-plaintext highlighter-rouge">File</code> methods</p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nc">File</span> <span class="n">storage</span> <span class="o">=</span> <span class="nc">Environment</span><span class="o">.</span><span class="na">getExternalStorageDirectory</span><span class="o">();</span>
<span class="kt">long</span> <span class="n">freeSpace</span> <span class="o">=</span> <span class="n">storage</span><span class="o">.</span><span class="na">getFreeSpace</span><span class="o">();</span>
<span class="kt">long</span> <span class="n">totalSpace</span> <span class="o">=</span> <span class="n">storage</span><span class="o">.</span><span class="na">getTotalSpace</span><span class="o">();</span>
<span class="kt">long</span> <span class="n">usableSpace</span> <span class="o">=</span> <span class="n">storage</span><span class="o">.</span><span class="na">getUsableSpace</span><span class="o">();</span>
</code></pre></div></div>

<p>Rather than using <code class="language-plaintext highlighter-rouge">StatFs</code> (which requires careful partition path handling), the SDK calls <code class="language-plaintext highlighter-rouge">getFreeSpace()</code>, <code class="language-plaintext highlighter-rouge">getTotalSpace()</code>, and <code class="language-plaintext highlighter-rouge">getUsableSpace()</code> on the external storage directory. This captures the storage conditions that matter when a bug stems from failed writes, incomplete downloads, or database operations hitting limits.</p>
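<p>The raw byte counts are what the SDK serializes; rendering them for display is a one-liner. A hedged sketch (the <code class="language-plaintext highlighter-rouge">"%.1f GB"</code> rendering is an assumption about presentation, not the SDK's wire format):</p>

```java
import java.util.Locale;

// Illustrative formatting of the raw byte counts returned by getFreeSpace() et al.
// The "%.1f GB" rendering is an assumption about display, not the SDK's wire format.
public class DiskFormat {
    static String gigabytes(long bytes) {
        return String.format(Locale.US, "%.1f GB", bytes / 1_000_000_000.0);
    }

    public static void main(String[] args) {
        System.out.println(gigabytes(3_200_000_000L)); // prints "3.2 GB"
    }
}
```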

<p>A transparency note: <code class="language-plaintext highlighter-rouge">Environment.getExternalStorageDirectory()</code> is deprecated as of API 29 in favor of scoped storage. The SDK still functions correctly (the method returns a valid path), but this is on our list for modernization.</p>

<p><strong>Network Connectivity</strong> via <code class="language-plaintext highlighter-rouge">ConnectivityManager</code> + <code class="language-plaintext highlighter-rouge">NetworkInfo</code></p>

<p>The SDK queries <code class="language-plaintext highlighter-rouge">ConnectivityManager.getActiveNetworkInfo()</code> to determine whether the device is connected via Wi-Fi or cellular, reporting two booleans: <code class="language-plaintext highlighter-rouge">network_wifi_connected</code> and <code class="language-plaintext highlighter-rouge">network_cell_connected</code>. It checks for the <code class="language-plaintext highlighter-rouge">ACCESS_NETWORK_STATE</code> permission before querying and also captures the carrier name via <code class="language-plaintext highlighter-rouge">TelephonyManager.getNetworkOperatorName()</code>.</p>

<p>Another transparency note: the SDK currently uses the older <code class="language-plaintext highlighter-rouge">NetworkInfo</code> API, deprecated as of API 29 in favor of <code class="language-plaintext highlighter-rouge">NetworkCapabilities</code>. The older API still works but can’t distinguish between “connected to a network” and “has validated internet access.” Migrating to <code class="language-plaintext highlighter-rouge">NetworkCapabilities</code> would provide this distinction, and it is on our list for a future release.</p>
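<p>For reference, the <code class="language-plaintext highlighter-rouge">NetworkCapabilities</code>-based check looks roughly like this. This is a sketch of the planned migration (not shipped SDK code), where <code class="language-plaintext highlighter-rouge">context</code> is the host app's <code class="language-plaintext highlighter-rouge">Context</code>:</p>

```java
// Sketch of the NetworkCapabilities-based check (getActiveNetwork requires API 23+).
// Not shipped SDK code; "context" is the host app's Context.
ConnectivityManager cm =
        (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkCapabilities caps = cm.getNetworkCapabilities(cm.getActiveNetwork());

boolean wifiConnected = caps != null
        && caps.hasTransport(NetworkCapabilities.TRANSPORT_WIFI);
boolean cellConnected = caps != null
        && caps.hasTransport(NetworkCapabilities.TRANSPORT_CELLULAR);
// The distinction NetworkInfo can't make: connected vs. validated internet access.
boolean hasValidatedInternet = caps != null
        && caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_VALIDATED);
```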

<p><strong>Device Hardware and OS</strong> via <code class="language-plaintext highlighter-rouge">android.os.Build.*</code></p>

<p>Standard fields: <code class="language-plaintext highlighter-rouge">Build.MANUFACTURER</code>, <code class="language-plaintext highlighter-rouge">Build.MODEL</code>, <code class="language-plaintext highlighter-rouge">Build.VERSION.RELEASE</code> (OS version string), <code class="language-plaintext highlighter-rouge">Build.VERSION.SDK_INT</code> (API level). Also captures the app’s version name and version code from <code class="language-plaintext highlighter-rouge">PackageInfo</code>. Every report includes these automatically, which eliminates the most common support question in mobile development: “What device are you on?”</p>

<p><strong>A note on CPU usage:</strong> Some of Critic’s marketing materials reference CPU metrics. The SDK does not capture CPU usage; we inspected the <code class="language-plaintext highlighter-rouge">getDeviceStatusJson()</code> method line by line to confirm this. Battery state and memory pressure are the system-level indicators that actually correlate with reproducible bugs. CPU percentage at a single point in time, without process-level attribution, provides minimal diagnostic value. We chose not to ship a metric just to pad a feature list, and we’ve corrected the marketing materials to match.</p>

<h2 id="the-dependency-decision-pragmatism-over-purity">The Dependency Decision: Pragmatism Over Purity</h2>

<p>Industry guidance on SDK development consistently recommends zero third-party dependencies. Luciq (formerly Instabug) published this as an explicit engineering principle: “no third-party code; if we need something, we create it ourselves.” Auth0’s SDK design guidelines similarly advocate minimizing external dependencies. The reasoning is sound: every dependency you bundle is a dependency you impose on every app that integrates your SDK. Version conflicts with the host app’s own dependencies cause build failures, and transitive dependencies multiply the surface area.</p>

<p>We chose a different path. Critic’s Android SDK depends on five libraries:</p>

<table>
  <thead>
    <tr>
      <th>Dependency</th>
      <th>Version</th>
      <th>Purpose</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Retrofit</strong></td>
      <td>2.3.0</td>
      <td>HTTP client and API interface definition</td>
    </tr>
    <tr>
      <td><strong>Retrofit Gson Converter</strong></td>
      <td>2.3.0</td>
      <td>JSON serialization (brings in Gson transitively)</td>
    </tr>
    <tr>
      <td><strong>Seismic</strong></td>
      <td>1.0.2 (Square)</td>
      <td>Shake gesture detection</td>
    </tr>
    <tr>
      <td><strong>AppCompat</strong></td>
      <td>26.1.0</td>
      <td>Backward-compatible Activity base class</td>
    </tr>
    <tr>
      <td><strong>ConstraintLayout</strong></td>
      <td>1.0.2</td>
      <td>Feedback form layout</td>
    </tr>
  </tbody>
</table>

<p>Retrofit (which brings in OkHttp transitively) eliminates hundreds of lines of manual HTTP connection management, URL building, multipart body construction, and response parsing. Gson handles all JSON serialization for device status payloads, metadata objects, and API responses. Together, they replace roughly 600–800 lines of networking boilerplate we would otherwise have to write, test, and maintain. At a total SDK size of ~1,600 lines (including model classes and layout resources), those 800 lines would have nearly doubled the codebase.</p>

<h3 id="the-trade-off-we-accepted">The Trade-Off We Accepted</h3>

<p>Using Retrofit means a host app pinned to an incompatible Retrofit version has to resolve the conflict at build time. In practice, this is rarely a problem: Retrofit 2.x has been stable for years, and most apps that use Retrofit are on a compatible version. But it’s a real constraint, and developers evaluating the SDK should know about it.</p>

<p>The alternative (a zero-dependency HTTP layer using <code class="language-plaintext highlighter-rouge">HttpURLConnection</code>) would have meant writing our own multipart body encoder, our own JSON parser, our own retry logic, and our own header interceptor chain. For a bootstrapped team maintaining SDKs across four platforms (Android, iOS, Flutter, JavaScript), that’s maintenance cost we chose to avoid.</p>

<h3 id="a-note-on-seismic">A Note on Seismic</h3>

<p>Square’s Seismic library is <a href="https://github.com/square/seismic">deprecated</a> with no planned successor. Square’s recommendation is to fork the repo or copy the single source file. Seismic is a compact implementation: it checks whether more than 75% of accelerometer samples in the past 0.5 seconds indicate acceleration, with configurable sensitivity levels (LIGHT, MEDIUM, HARD). The library is small enough to vendor directly, and its deprecation means we will likely fold the shake detection logic into the SDK itself in a future release rather than continuing to depend on an unmaintained artifact.</p>

<h2 id="shake-detection-from-accelerometer-data-to-bug-report">Shake Detection: From Accelerometer Data to Bug Report</h2>

<p>The user-facing flow is simple: shake the phone, confirm “Do you want to send us feedback?” in a dialog, type a description, and hit submit. Under the hood, this involves lifecycle-aware sensor management.</p>

<h3 id="lifecycle-aware-listener-registration">Lifecycle-Aware Listener Registration</h3>

<p>The SDK registers an <code class="language-plaintext highlighter-rouge">Application.ActivityLifecycleCallbacks</code> listener during initialization. When an Activity enters the foreground (<code class="language-plaintext highlighter-rouge">onActivityResumed</code>), the SDK starts Seismic’s <code class="language-plaintext highlighter-rouge">ShakeDetector</code> via <code class="language-plaintext highlighter-rouge">SensorManager</code>. When the Activity goes to the background (<code class="language-plaintext highlighter-rouge">onActivityPaused</code>), it stops the detector.</p>

<p>This is essential for battery efficiency. A registered accelerometer listener keeps the sensor and its event-delivery pipeline active for as long as it stays registered. An SDK that registers a sensor listener in <code class="language-plaintext highlighter-rouge">onCreate</code> and never unregisters it will drain battery even when the app is in the background, a mistake that earns the host app one-star reviews for battery consumption.</p>
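<p>The pattern can be sketched as follows; class and constructor names are assumed for illustration, not taken from the SDK, and <code class="language-plaintext highlighter-rouge">ShakeDetector</code> is Seismic's public API:</p>

```java
// Sketch of lifecycle-aware sensor management; names are assumed, not SDK source.
public class ShakeLifecycle implements Application.ActivityLifecycleCallbacks {
    private final ShakeDetector shakeDetector;   // Square's Seismic
    private final SensorManager sensorManager;

    ShakeLifecycle(ShakeDetector detector, SensorManager manager) {
        this.shakeDetector = detector;
        this.sensorManager = manager;
    }

    @Override public void onActivityResumed(Activity activity) {
        shakeDetector.start(sensorManager);  // accelerometer active only in foreground
    }

    @Override public void onActivityPaused(Activity activity) {
        shakeDetector.stop();                // release the sensor immediately
    }

    // Remaining callbacks are no-ops.
    @Override public void onActivityCreated(Activity a, Bundle b) {}
    @Override public void onActivityStarted(Activity a) {}
    @Override public void onActivityStopped(Activity a) {}
    @Override public void onActivitySaveInstanceState(Activity a, Bundle b) {}
    @Override public void onActivityDestroyed(Activity a) {}
}
```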

<h3 id="the-shake-to-dialog-pipeline">The Shake-to-Dialog Pipeline</h3>

<p>When Seismic detects a shake, the SDK’s inner <code class="language-plaintext highlighter-rouge">Shakes</code> class (implementing <code class="language-plaintext highlighter-rouge">ShakeDetector.Listener</code>) fires:</p>

<ol>
  <li>Checks a boolean flag to prevent duplicate dialogs (rapid shakes can fire multiple events)</li>
  <li>Displays an <code class="language-plaintext highlighter-rouge">AlertDialog</code> asking the user to confirm they want to send feedback</li>
  <li>On confirmation, launches <code class="language-plaintext highlighter-rouge">FeedbackReportActivity</code>: a standalone Activity with a description text field, progress spinner, and submit button</li>
  <li>On submit, an <code class="language-plaintext highlighter-rouge">AsyncTask</code> collects device telemetry via <code class="language-plaintext highlighter-rouge">getDeviceStatusJson()</code>, captures logcat, and sends the multipart request on a background thread</li>
</ol>

<p>The built-in UI is deliberately minimal: a text field and a submit button. Developers who want a custom feedback experience can skip the shake-to-report flow entirely and use <code class="language-plaintext highlighter-rouge">BugReportCreator</code> directly, attaching their own UI, their own metadata, and their own file attachments.</p>

<h2 id="logcat-capture-why-500-lines-and-how-it-works">Logcat Capture: Why 500 Lines, and How It Works</h2>

<p>When a report is submitted, the SDK’s <code class="language-plaintext highlighter-rouge">Logs.java</code> utility (~35 lines of code) captures the last 500 logcat entries:</p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nc">Process</span> <span class="n">process</span> <span class="o">=</span> <span class="nc">Runtime</span><span class="o">.</span><span class="na">getRuntime</span><span class="o">().</span><span class="na">exec</span><span class="o">(</span><span class="k">new</span> <span class="nc">String</span><span class="o">[]{</span>
    <span class="s">"logcat"</span><span class="o">,</span> <span class="s">"--pid="</span> <span class="o">+</span> <span class="n">android</span><span class="o">.</span><span class="na">os</span><span class="o">.</span><span class="na">Process</span><span class="o">.</span><span class="na">myPid</span><span class="o">(),</span> <span class="s">"-t"</span><span class="o">,</span> <span class="s">"500"</span><span class="o">,</span> <span class="s">"-v"</span><span class="o">,</span> <span class="s">"threadtime"</span>
<span class="o">});</span>
</code></pre></div></div>

<p>Three details matter here:</p>

<p><strong>Process filtering (<code class="language-plaintext highlighter-rouge">--pid</code>).</strong> The <code class="language-plaintext highlighter-rouge">--pid</code> flag restricts output to the current app’s process ID. Combined with Android 4.1+’s restriction that apps can only read their own logs, this ensures the SDK captures only the host app’s log entries, not system-wide activity from other apps.</p>

<p><strong>The <code class="language-plaintext highlighter-rouge">-t 500</code> flag.</strong> This requests the 500 most recent entries and exits immediately, unlike logcat’s default streaming mode, which never returns. The output is read line-by-line with a <code class="language-plaintext highlighter-rouge">BufferedReader</code>, written to a temporary <code class="language-plaintext highlighter-rouge">logcat.txt</code> file in the app’s external files directory, and attached to the report as a multipart upload.</p>
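<p>The read loop itself is ordinary <code class="language-plaintext highlighter-rouge">Process</code> plumbing. A self-contained sketch of the pattern (generic: it works for any process, and on-device the process would be the <code class="language-plaintext highlighter-rouge">logcat</code> invocation shown above):</p>

```java
import java.io.*;

// Drain a process's stdout line-by-line into a file, as described above.
// Generic sketch: on-device, the Process is the logcat invocation.
public class ProcessDrain {
    static File drainToFile(Process process, File out) throws IOException {
        try (BufferedReader reader = new BufferedReader(
                     new InputStreamReader(process.getInputStream()));
             PrintWriter writer = new PrintWriter(new FileWriter(out))) {
            String line;
            while ((line = reader.readLine()) != null) {
                writer.println(line);
            }
        }
        return out;
    }
}
```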

<p><strong>The <code class="language-plaintext highlighter-rouge">threadtime</code> format.</strong> Each line includes the date, time, PID, TID, log level, and tag, giving developers the exact chronological sequence of events leading up to the report.</p>
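<p>Because the fields are whitespace-delimited, a minimal parser (illustrative only, not SDK code) recovers them directly:</p>

```java
// Split one threadtime-formatted logcat line into its six fields:
// date, time, PID, TID, level, and "tag: message". Illustrative, not SDK code.
public class ThreadtimeLine {
    static String[] fields(String line) {
        return line.trim().split("\\s+", 6);
    }

    public static void main(String[] args) {
        String line = "03-24 13:42:01.123  4321  4399 E CheckoutFlow: payment token expired";
        String[] f = fields(line);
        System.out.println(f[4] + " " + f[5]); // log level, then tag and message
    }
}
```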

<h3 id="why-500-lines">Why 500 Lines</h3>

<p>This number balances diagnostic value against payload size:</p>

<ul>
  <li><strong>100 lines</strong> is too few. On a busy app with verbose logging, 100 lines might cover the last 2–3 seconds, which isn’t enough to trace the event sequence leading to a bug.</li>
  <li><strong>All available logcat</strong> is too much. The buffer can hold thousands of entries. Shipping 50KB+ of log data per report increases upload time on slow networks, inflates storage costs, and buries relevant entries in noise.</li>
  <li><strong>500 lines</strong> typically covers 30–90 seconds of application activity, depending on log verbosity. That’s enough to see API calls, state transitions, and error messages leading up to the issue.</li>
</ul>

<h3 id="privacy-considerations">Privacy Considerations</h3>

<p>Developers should be aware that their own logs may contain sensitive data: authentication tokens, user identifiers, PII from API responses, or internal URLs. The SDK captures whatever the app has written to logcat. If your app logs request bodies or user data, those entries will appear in bug reports. The mitigation is straightforward: avoid logging sensitive data in production builds. But this responsibility falls on the host app developer, not the SDK.</p>

<h2 id="multipart-report-submission-what-gets-sent">Multipart Report Submission: What Gets Sent</h2>

<p>Every bug report is submitted as a single multipart HTTP POST via Retrofit to <code class="language-plaintext highlighter-rouge">/api/v2/bug_reports</code>. The <code class="language-plaintext highlighter-rouge">BugReportCreator</code> class assembles these parts:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>api_token               → Product access token (authentication)
app_install[id]         → Persistent device identifier (UUID, stored in SharedPreferences)
bug_report[description] → User-entered text
bug_report[metadata]    → Arbitrary JSON object (developer-defined)
device_status           → JSON payload: battery, memory, disk, network, OS, device info
bug_report[attachments][] → File attachments (logcat .log file + any developer-added files)
</code></pre></div></div>

<p>The metadata field accepts any valid JSON object: user IDs, feature flags, A/B test variants, order IDs, session identifiers, star ratings. Unlike rigid custom field systems with predefined schemas, Critic’s metadata is schemaless. Whatever JSON you attach at report time arrives in the dashboard exactly as sent.</p>
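<p>In Retrofit 2 terms, the endpoint above maps to a multipart interface along these lines. This is a sketch reconstructed from the field list, not the SDK's actual source; the interface and method names are assumptions:</p>

```java
import java.util.List;
import okhttp3.MultipartBody;
import okhttp3.RequestBody;
import okhttp3.ResponseBody;
import retrofit2.Call;
import retrofit2.http.*;

// Sketch of the multipart endpoint, reconstructed from the field list above.
// Interface and method names are assumptions, not the SDK's actual source.
public interface CriticApi {
    @Multipart
    @POST("/api/v2/bug_reports")
    Call<ResponseBody> createBugReport(
            @Part("api_token") RequestBody apiToken,
            @Part("app_install[id]") RequestBody appInstallId,
            @Part("bug_report[description]") RequestBody description,
            @Part("bug_report[metadata]") RequestBody metadata,
            @Part("device_status") RequestBody deviceStatus,
            @Part List<MultipartBody.Part> attachments);
}
```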

<p>File attachments use MIME type detection via <code class="language-plaintext highlighter-rouge">MimeTypeMap.getSingleton().getMimeTypeFromExtension()</code>, with a hardcoded fallback to <code class="language-plaintext highlighter-rouge">text/plain</code> for <code class="language-plaintext highlighter-rouge">.log</code> files and <code class="language-plaintext highlighter-rouge">*/*</code> for unknown types.</p>
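<p>Off-device, <code class="language-plaintext highlighter-rouge">MimeTypeMap</code> isn’t available, but the fallback logic is easy to sketch in plain Java. Only the <code class="language-plaintext highlighter-rouge">.log</code> and unknown-type defaults below come from the SDK’s described behavior:</p>

```java
import java.util.Locale;

// Fallback MIME resolution. Only the ".log" -> "text/plain" and "*/*" defaults
// reflect the SDK's described behavior; on-device, MimeTypeMap is consulted first.
public class MimeFallback {
    static String mimeFor(String filename) {
        int dot = filename.lastIndexOf('.');
        String ext = dot >= 0 ? filename.substring(dot + 1).toLowerCase(Locale.US) : "";
        if (ext.equals("log")) return "text/plain"; // hardcoded fallback for logcat files
        return "*/*";                               // unknown type
    }

    public static void main(String[] args) {
        System.out.println(mimeFor("logcat.log"));    // prints "text/plain"
        System.out.println(mimeFor("screenshot.bin")); // prints "*/*"
    }
}
```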

<p>Before the first report, the SDK sends a <code class="language-plaintext highlighter-rouge">POST /api/v2/ping</code> request to register the device installation and validate the API token. This ping returns an <code class="language-plaintext highlighter-rouge">app_install_id</code> that’s used for all subsequent report submissions.</p>

<h3 id="what-we-chose-not-to-build">What We Chose Not to Build</h3>

<p>The SDK omits:</p>

<ul>
  <li><strong>Offline queuing.</strong> If the device has no connectivity when the user submits, the submission fails. There is no disk-based queue that retries when connectivity returns.</li>
  <li><strong>Automatic retry.</strong> A failed HTTP request is a failed HTTP request.</li>
  <li><strong>Background upload.</strong> Reports are submitted via <code class="language-plaintext highlighter-rouge">AsyncTask</code> on a background thread, but not via <code class="language-plaintext highlighter-rouge">WorkManager</code>; they don’t survive process death.</li>
</ul>

<p>These are deliberate omissions. Offline queuing requires disk persistence, encryption of stored reports (they contain user-entered text and device data), retry scheduling, and conflict resolution if the app is updated between queue and send. That’s significant infrastructure: essential for analytics pipelines and crash reporters, but overkill for a user-initiated feedback SDK where the user is actively in the app and can retry.</p>

<p>The honest trade-off: if your users frequently submit bug reports in subway tunnels or airplane mode, Critic will lose those reports. In practice, users submit feedback when they’re actively using the app, which almost always means they have connectivity.</p>

<h2 id="performance-what-the-sdk-costs-your-app">Performance: What the SDK Costs Your App</h2>

<p><strong>APK size impact:</strong> The SDK adds approximately 100–150KB to the final APK after ProGuard/R8 shrinking. For context, a single high-resolution image asset often exceeds 500KB. Full observability SDKs can add 5–15MB.</p>

<p><strong>Runtime memory:</strong> Negligible at idle. The SDK allocates memory only during shake detection (sensor event processing) and report submission (logcat buffer read, multipart body construction). No persistent in-memory caches, no event queues, no session state objects.</p>

<p><strong>Battery impact:</strong> Zero measurable impact beyond the accelerometer listener active while the app is in the foreground. No background services, no wake locks, no periodic network calls. The listener unregisters whenever the Activity pauses.</p>

<p><strong>Network:</strong> One initial ping request at initialization to validate the API token. One multipart POST per report submission. No heartbeats, no telemetry uploads, no analytics callbacks.</p>

<h2 id="trade-offs-and-alternatives-considered">Trade-Offs and Alternatives Considered</h2>

<h3 id="build-your-own-vs-sdk">Build-Your-Own vs. SDK</h3>

<p>The most common alternative to adopting a feedback SDK is building the feature in-house. Here’s what that actually involves:</p>

<table>
  <thead>
    <tr>
      <th>Component</th>
      <th>Effort</th>
      <th>Ongoing Maintenance</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Shake detection with lifecycle management</td>
      <td>2–3 days</td>
      <td>Sensor API changes, new device quirks</td>
    </tr>
    <tr>
      <td>Battery/memory/disk/network capture</td>
      <td>1–2 days</td>
      <td>API deprecations (we’re already tracking three)</td>
    </tr>
    <tr>
      <td>Logcat capture with process filtering</td>
      <td>1–2 days</td>
      <td>Permission model changes across Android versions</td>
    </tr>
    <tr>
      <td>Feedback UI (form, progress, error states)</td>
      <td>2–3 days</td>
      <td>Material Design updates, screen size support</td>
    </tr>
    <tr>
      <td>Multipart API endpoint + file uploads</td>
      <td>2–3 days</td>
      <td>Server-side maintenance, storage, authentication</td>
    </tr>
    <tr>
      <td>Web dashboard for viewing reports</td>
      <td>5–10 days</td>
      <td>Ongoing feature development</td>
    </tr>
    <tr>
      <td><strong>Total</strong></td>
      <td><strong>13–23 developer-days</strong></td>
      <td><strong>Ongoing across every Android release</strong></td>
    </tr>
  </tbody>
</table>

<p>At typical senior mobile engineer rates, that’s $20,000–$37,000 in initial development, plus ongoing maintenance every time Google deprecates an API, changes a permission model, or introduces a new storage framework. Critic costs <a href="https://critictracking.com/">$20/month</a>.</p>
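<p>The dollar figures follow from straightforward arithmetic. Assuming a fully loaded senior mobile rate of about $1,600 per developer-day (our estimate, not a quoted market figure):</p>

```java
// Back-of-envelope build-vs-buy arithmetic. The daily rate is an assumption.
public class BuildCostSketch {
    static int cost(int developerDays, int dailyRateUsd) {
        return developerDays * dailyRateUsd;
    }

    public static void main(String[] args) {
        int rate = 1600; // assumed fully loaded senior mobile rate, USD/day
        System.out.println(cost(13, rate)); // 20800
        System.out.println(cost(23, rate)); // 36800
    }
}
```

<p>That lands at roughly $20,800–$36,800, consistent with the range above, and it excludes the ongoing maintenance column entirely.</p>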

<h3 id="continuous-monitoring-vs-point-in-time">Continuous Monitoring vs. Point-in-Time</h3>

<p>We considered and rejected always-on telemetry collection. The resources required (a persistent background service, a circular buffer for metrics, periodic disk flushes, wake locks for reliable delivery) are appropriate for APM tools. They’re overkill for a feedback SDK whose purpose is capturing context when a <em>user decides to report something</em>.</p>

<p>Point-in-time capture provides the vast majority of diagnostic value at a fraction of the resource cost. What you lose is pre-report session context, the exact reproduction timeline, and performance waterfall data, all of which are genuinely valuable for debugging complex state-dependent issues. If you need that, pair Critic with a crash reporter like Firebase Crashlytics (free) and you cover both bases for under $25/month total.</p>

<h3 id="technical-debt-were-tracking">Technical Debt We’re Tracking</h3>

<p>We believe in being transparent about the SDK’s rough edges:</p>

<ul>
  <li><strong><code class="language-plaintext highlighter-rouge">AsyncTask</code></strong> is deprecated as of API 30. A migration to Kotlin coroutines or <code class="language-plaintext highlighter-rouge">java.util.concurrent</code> executors is planned.</li>
  <li><strong><code class="language-plaintext highlighter-rouge">Environment.getExternalStorageDirectory()</code></strong> is deprecated as of API 29. Moving to <code class="language-plaintext highlighter-rouge">Context.getExternalFilesDir()</code> or scoped storage APIs.</li>
  <li><strong><code class="language-plaintext highlighter-rouge">NetworkInfo</code></strong> is deprecated as of API 29. Moving to <code class="language-plaintext highlighter-rouge">NetworkCapabilities</code> for richer connectivity data.</li>
  <li><strong><code class="language-plaintext highlighter-rouge">View.getDrawingCache()</code></strong> in the Screenshots utility is deprecated. Moving to <code class="language-plaintext highlighter-rouge">PixelCopy</code> API.</li>
</ul>
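<p>For the <code>AsyncTask</code> item, the planned replacement can be sketched with a plain <code>java.util.concurrent</code> executor. This is illustrative only: the class and method names are ours, and the actual submission logic is elided.</p>

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative AsyncTask replacement: a single background thread mirrors
// AsyncTask's default serial executor. Names are ours, not the SDK's.
public class ReportSubmitter {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Submit the report off the main thread; callers can observe the result
    // (here a simulated HTTP status) via the returned Future.
    public Future<Integer> submit(String reportBody) {
        return executor.submit(() -> {
            // The real SDK would perform the multipart POST here.
            return 201;
        });
    }

    public void shutdown() {
        executor.shutdown();
    }
}
```

<p>Kotlin coroutines would serve equally well; the point is that the serial, off-main-thread behavior of <code>AsyncTask</code> is trivial to preserve without the deprecated class.</p>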

<p>None of these deprecations break functionality today; Android maintains backward compatibility. But modernizing them improves reliability on newer devices and prepares for the day Google removes the legacy APIs.</p>

<h2 id="lessons-learned">Lessons Learned</h2>

<p><strong>Marketing claims should match source code.</strong> Our early materials mentioned CPU usage capture. The SDK doesn’t capture CPU usage, and we’ve corrected the record. Trust erodes fast when developers read the docs, inspect the source, and find discrepancies. The SDK is <a href="https://github.com/twinsunllc/inventiv-critic-android">open source on GitHub</a>; anyone can verify every claim in this article.</p>

<p><strong>Deprecated dependencies need a plan.</strong> Square’s Seismic is deprecated with no replacement. The library is small enough (~150 lines) that vendoring is straightforward, but we should have done this proactively rather than waiting for the deprecation notice. If you depend on a small, focused library, have a plan for the day they stop maintaining it.</p>

<p><strong>The built-in UI should be skippable.</strong> Our most sophisticated users never see the shake-to-report dialog. They build custom feedback flows using <code class="language-plaintext highlighter-rouge">BugReportCreator</code> directly, attaching metadata and files programmatically. The default UI exists for the 80% case: developers who want feedback collection working in five minutes. But the API that powers it must be clean enough to use standalone.</p>

<p><strong>500 lines of logcat is a starting point.</strong> Some apps need more; some need less. A configurable log depth parameter is on our roadmap. But shipping a sensible default and iterating based on real usage data beats building configuration options nobody has asked for yet.</p>
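<p>The bounded-tail behavior behind that 500-line default is simple to sketch in plain Java. The <code>maxLines</code> parameter here is the configurable depth we have in mind, not a shipping API:</p>

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Bounded log tail: keep only the most recent maxLines entries, the idea
// behind the 500-line logcat capture. The maxLines parameter is hypothetical.
public class LogTail {
    static List<String> tail(List<String> lines, int maxLines) {
        Deque<String> buffer = new ArrayDeque<>(maxLines);
        for (String line : lines) {
            if (buffer.size() == maxLines) {
                buffer.removeFirst(); // evict the oldest entry
            }
            buffer.addLast(line);
        }
        return new ArrayList<>(buffer);
    }
}
```

<p>Making the cap configurable is a one-parameter change; the hard part is choosing a default, and 500 has held up well.</p>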

<p><strong>Point-in-time capture is the right default for feedback tools.</strong> After years of bug reports across Critic’s user base, we’ve seen almost no cases where the device state at report time differed from the state when the bug occurred. Users report bugs in the moment; they rarely wait until the next day when their battery is charged and their network has changed. The snapshot is reliable.</p>

<h2 id="the-source-is-open">The Source Is Open</h2>

<p>Every technical claim in this article can be verified against the <a href="https://github.com/twinsunllc/inventiv-critic-android">SDK source code on GitHub</a>. The library directory contains the complete implementation: battery capture, memory queries, disk checks, network detection, shake handling, logcat collection, multipart submission. Roughly 1,600 lines of Java across 15 source files and layout resources, MIT-licensed, shipping since January 2018.</p>

<p>If you’re building your own device-context capture, the source serves as a reference implementation. If you’d rather not build it yourself, <a href="https://critictracking.com/">Critic</a> captures all of this automatically with a single line of initialization code; $20/month per app, with a 30-day free trial, no credit card required.</p>

<p>The bug report your users meant to send (with full device telemetry, 500 lines of logs, and arbitrary metadata) is one line of code away.</p>]]></content><author><name></name></author><category term="posts" /><summary type="html"><![CDATA[Inside Critic's Android SDK: the APIs, dependency decisions, and trade-offs behind capturing device telemetry in ~1,600 lines of Java.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://critictracking.com/assets/images/how-we-built-a-1600-line-android-sdk-that-captures-battery-memory-disk-and-network-data-on-every-bug-report.webp" /><media:content medium="image" url="https://critictracking.com/assets/images/how-we-built-a-1600-line-android-sdk-that-captures-battery-memory-disk-and-network-data-on-every-bug-report.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Bug You Can’t Reproduce: Why Mobile Crashes Disappear on Your Device</title><link href="https://critictracking.com/blog/the-bug-you-cant-reproduce-why-mobile-crashes-disappear-on-your-device/" rel="alternate" type="text/html" title="The Bug You Can’t Reproduce: Why Mobile Crashes Disappear on Your Device" /><published>2026-03-23T19:47:43+00:00</published><updated>2026-03-27T15:37:48+00:00</updated><id>https://critictracking.com/blog/the-bug-you-cant-reproduce-why-mobile-crashes-disappear-on-your-device</id><content type="html" xml:base="https://critictracking.com/blog/the-bug-you-cant-reproduce-why-mobile-crashes-disappear-on-your-device/"><![CDATA[<p>In February 2026, a developer published a post-mortem on Medium that every mobile engineer will recognize. He’d built an image-heavy feature that ran flawlessly on his flagship phone. Then the crash reports started rolling in. Users said the app was dying, but he couldn’t make it happen. Not once. Weeks later, he traced the root cause: the app was decoding full-resolution photos synchronously on the main thread. 
On his phone with 12GB of RAM, the operation was invisible. On budget Tecno, Itel, and Redmi phones with 2GB of RAM (the phones his actual users carried), it was an instant crash. The post-mortem counted five separate issues that only became lethal in combination on constrained hardware.</p>

<p>His debugging took <em>weeks</em>. And the only reason it dragged on that long is that no user who reported the crash ever mentioned their device model, OS version, or available memory. Why would they? They just knew the app didn’t work.</p>

<p>This scenario plays out constantly. A CodeScene survey found that development teams <a href="https://jellyfish.co/library/developer-productivity/pain-points/">waste 23–42% of their time on technical debt and maintenance</a>, and unreproducible bugs are among the most expensive items in that category. The bug your user reported is reproducible. It’s just unreproducible <em>on your device</em>. The rest of this article covers why this keeps happening, why the usual workarounds fail, and what actually fixes it.</p>

<h2 id="why-your-test-device-lies-to-you">Why Your Test Device Lies to You</h2>

<p>The gap between your development environment and your users’ environments is a combinatorial explosion that no small team can test their way out of.</p>

<h3 id="the-device-fragmentation-math-is-brutal-for-small-teams">The device fragmentation math is brutal for small teams</h3>

<p>There are <a href="https://www.pcloudy.com/blogs/why-bugs-fail-on-devices/">over 24,000 distinct Android device variants</a> in active use across more than 1,300 manufacturers. Samsung alone accounts for roughly 40% of those variants. A team of three engineers testing on five devices covers less than 0.02% of the Android hardware landscape.</p>

<p>OS fragmentation compounds the problem. Android’s latest two major versions typically run on less than 40% of active devices, meaning the majority of your users are on older versions with different behaviors, permissions models, and API support. iOS fragmentation is smaller but real: the RAM difference between an iPhone SE and an iPhone 15 Pro Max is large enough to surface memory-related bugs on one device that never appear on the other.</p>

<p>A team of 1–5 engineers testing on 3–5 devices is facing a math problem that manual testing cannot solve.</p>

<h3 id="oem-customization-creates-invisible-incompatibilities">OEM customization creates invisible incompatibilities</h3>

<p>Android manufacturers modify the OS before shipping it. A <a href="https://arxiv.org/html/2408.01810v1">2024 academic study analyzing 197 device-specific compatibility issues across 94 GitHub repositories</a> found that 72% were “functionality break” issues: standard Android behaviors that fail because a manufacturer changed something under the hood.</p>

<p>The most affected features were camera and UI (the ones users interact with most), accounting for 73% of functionality break issues. The fixes are rarely simple: addressing these bugs involves calling additional APIs (36% of cases), using device-specific parameters (24%), or substituting the problematic API call entirely (15%). These bugs pass every automated test and every emulator run, then crash on real hardware in a user’s hand.</p>

<h3 id="your-flagship-phone-masks-bad-code">Your flagship phone masks bad code</h3>

<p>The February 2026 Medium post-mortem illustrates a pattern that repeats constantly in mobile development: modern flagship phones are so powerful they hide dangerous patterns. Synchronous operations on the main thread, aggressive memory allocation, uncompressed asset loading; all invisible on a phone with 12GB of RAM and a recent processor. But <a href="https://www.pcloudy.com/blogs/why-bugs-fail-on-devices/">21% of Google Play apps contain device-conditional code workarounds</a> written specifically to handle manufacturer quirks. If your app lacks those workarounds, the bugs are there. You just can’t see them from your test device.</p>

<p>Engineers <a href="https://www.pcloudy.com/blogs/why-bugs-fail-on-devices/">lose 3–5 hours weekly</a> to fragmentation-related troubleshooting alone. That’s a senior developer’s entire Friday afternoon, every week, spent chasing bugs that exist on devices they don’t own.</p>

<h2 id="why-users-wont-give-you-the-context-you-need">Why Users Won’t Give You the Context You Need</h2>

<p>The mechanics of mobile bug reporting make detailed reports structurally impossible for most people.</p>

<h3 id="the-mobile-reporting-friction-chain">The mobile reporting friction chain</h3>

<p>When a web user encounters a bug, they can open a support widget without leaving the page. A mobile user has to exit the app, open email or a support chat, describe a problem they can no longer see on screen, and (theoretically) manually look up their device model, OS version, and available RAM. Every additional step loses a percentage of reporters.</p>

<p>The typical bug report that actually arrives looks like this:</p>

<blockquote>
  <p>“The app crashed.”</p>
</blockquote>

<p>No device model. No OS version. No steps to reproduce. No logs.</p>

<p>Embrace’s engineering blog puts it directly: “You shouldn’t expect users to deliver detailed bug reports.” The expectation itself is flawed. Users know <em>something</em> broke. Asking them to also document the technical environment is asking for something that will never happen at scale.</p>

<h3 id="the-silent-majority-never-reports-at-all">The silent majority never reports at all</h3>

<p>Most users who hit a bug stay quiet. Research from thinkJar, <a href="https://mixpanel.com/blog/understanding-churn/">cited by Mixpanel</a>, found that 25 out of 26 customers churn silently without ever submitting a complaint. They just leave.</p>

<p>A <a href="https://news.ycombinator.com/item?id=44225352">Hacker News thread with significant engagement</a> captured the developer community consensus: “Most users won’t report bugs unless you make it stupidly easy.” The barrier is friction, not willingness. A <a href="https://news.ycombinator.com/item?id=21427996">separate HN thread</a> documented the downstream consequence: a bug that existed for <em>years</em>, known to 100% of the client team, unknown to the entire development team. Nobody reported it because the reporting path was too cumbersome.</p>

<h3 id="the-follow-up-question-death-spiral">The follow-up question death spiral</h3>

<p>A developer receives “the app crashed” and replies: “What device are you on? What OS version? What were you doing when it happened?”</p>

<p>The user has moved on. Response rate is near zero. The bug stays open with a “cannot reproduce” label until someone quietly closes it. Industry research consistently shows that in-app feedback mechanisms achieve response rates two to four times higher than email-based channels because they eliminate the friction that kills the feedback loop.</p>

<h2 id="why-the-usual-workarounds-fall-short">Why the Usual Workarounds Fall Short</h2>

<p>When faced with unreproducible bugs, developers reach for familiar tools. Each one solves a different problem than the one at hand.</p>

<h3 id="emulators-and-simulators-miss-real-world-conditions">Emulators and simulators miss real-world conditions</h3>

<p>Emulators are excellent for layout testing and basic functional flows. They are also not real devices. They can’t simulate real memory pressure under load, thermal throttling, OEM customizations to the Android framework, or hardware driver behavior. Camera behavior, GPS accuracy, Bluetooth pairing, fingerprint recognition: all require physical hardware.</p>

<p>Analysis across mobile teams found that <a href="https://www.pcloudy.com/blogs/why-bugs-fail-on-devices/">emulators miss 34% of device-specific bugs</a>. These are some of the most impactful user-facing issues, precisely because they involve the hardware interactions that users depend on most.</p>

<p>An emulator can’t reproduce the crash from the February 2026 post-mortem. That crash required a specific combination of low RAM, a particular OEM’s memory management behavior, and real-world usage patterns that no emulator profile captures.</p>

<h3 id="crash-reporters-catch-crashes-not-the-other-bugs">Crash reporters catch crashes, not the other bugs</h3>

<p>Crash reporters like Crashlytics, Sentry, and Bugsnag are excellent at capturing stack traces when the app process terminates unexpectedly. But they miss the <em>majority</em> of issues users experience.</p>

<p>A <a href="https://github.com/getsentry/sentry/discussions/54956">community discussion on Sentry’s own GitHub</a> made this explicit. Developers described the gap: “broken link, typo, or a user is not sure why a button is disabled.” Real problems that never throw an exception. A user who says “the checkout button did nothing” has experienced a legitimate bug. No crash reporter will ever see it.</p>

<p>UX confusion, performance slowdowns, broken flows, visual glitches, and “it just doesn’t work” reports: these are the bugs that drive one-star reviews. They exist in a blind spot that automated crash reporting can’t reach.</p>

<h3 id="better-bug-report-templates-miss-the-structural-problem">Better bug report templates miss the structural problem</h3>

<p>Asking users to fill in device model, OS version, steps to reproduce, and expected vs. actual behavior assumes a level of technical knowledge and patience that most users lack. Even technically sophisticated beta testers frequently skip fields or provide incomplete data. The template fails because this information must be captured by the system, not the person.</p>

<h3 id="cloud-device-labs-cant-tell-you-which-device-to-test">Cloud device labs can’t tell you which device to test</h3>

<p>Services like BrowserStack and Sauce Labs are valuable for proactive testing, but they’re reactive to your assumptions: you can only test on devices you think to test on. If the user’s report lacks the device configuration that triggered the bug (and it will), you’re guessing which of 24,000+ variants to try. For small teams, adding a $39–$199+/month testing service still leaves the fundamental information gap wide open.</p>

<h2 id="the-fix-make-the-tool-capture-device-context-not-the-user">The Fix: Make the Tool Capture Device Context, Not the User</h2>

<p>Every user-initiated report should arrive with the full device environment attached automatically, with zero effort from the user beyond describing what went wrong.</p>

<h3 id="what-automatic-context-capture-means-in-practice">What automatic context capture means in practice</h3>

<p>At the moment a user submits a report, the SDK silently collects the device manufacturer and model, OS version and build number, available RAM and memory pressure, battery level and charging state, free disk space, network type and carrier, app version, and the last several hundred lines of console logs.</p>

<p>The user writes one sentence: “the image won’t load.” The developer receives that sentence <em>plus</em> a complete environment snapshot. No follow-up questions. No guessing. The exact conditions that produced the bug, documented at the moment it happened.</p>

<p>In-app bug reporting SDKs <a href="https://aqua-cloud.io/bug-reporting-mobile-apps-best-practices/">reduce resolution time by up to 40%</a> compared to manual reporting methods, according to Aqua Cloud’s analysis. The time savings come entirely from eliminating the information-gathering phase: the emails, the follow-up questions, the “what device are you on?” loop that usually ends in silence.</p>

<h3 id="shake-to-report-matches-the-gesture-to-the-frustration">Shake-to-report matches the gesture to the frustration</h3>

<p>The lowest-friction reporting mechanism is also the most intuitive: the user shakes their phone when something goes wrong. The gesture matches the emotional state. A lightweight form appears, the user types a sentence, and the SDK handles the rest.</p>

<p>This is the “stupidly easy” reporting the Hacker News community called for. No app switching, no email composing, no describing a problem from memory. The bug gets reported in the same moment and context where it happened.</p>
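<p>Shake detection itself boils down to comparing total acceleration against gravity. A sketch of the threshold check, with an assumed 2.0g threshold (an illustrative value, not Critic’s documented one):</p>

```java
// Shake detection as a gravity-relative threshold check. Accelerometer axes
// are in m/s^2; the 2.0g threshold used in tests is an assumed value.
public class ShakeDetector {
    static final double GRAVITY = 9.80665; // standard gravity, m/s^2

    static boolean isShake(double x, double y, double z, double thresholdG) {
        double gForce = Math.sqrt(x * x + y * y + z * z) / GRAVITY;
        return gForce > thresholdG; // a device at rest reads roughly 1.0g
    }
}
```

<p>On Android the <code>x</code>, <code>y</code>, <code>z</code> values would come from accelerometer sensor events; the math is the same either way.</p>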

<h3 id="custom-metadata-closes-the-remaining-gap">Custom metadata closes the remaining gap</h3>

<p>Automatic telemetry captures the device environment. But some bugs depend on app-specific state that no generic SDK can anticipate: the user’s subscription tier, which A/B test variant they’re seeing, their cart contents, a feature flag enabled for 10% of users.</p>

<p>Arbitrary JSON metadata lets developers attach any app-specific context to every report. User IDs, feature flags, session data, order IDs, star ratings, whatever the app knows at the moment of the report. This turns a bug report into a complete debugging snapshot: device state + app state + user description.</p>
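<p>In code, attaching that context is little more than building a small JSON object. A dependency-free sketch with hypothetical keys (the field names are examples of app state a team might attach, not fields Critic requires):</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Serialize flat string metadata to a JSON object. Keys are illustrative
// examples of app-specific state, not required fields.
public class ReportMetadata {
    static String toJson(Map<String, String> meta) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : meta.entrySet()) {
            if (!first) sb.append(",");
            sb.append("\"").append(e.getKey()).append("\":\"")
              .append(e.getValue()).append("\"");
            first = false;
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, String> meta = new LinkedHashMap<>();
        meta.put("user_id", "u_12345");
        meta.put("plan", "pro");
        meta.put("ab_variant", "checkout_v2");
        System.out.println(toJson(meta));
    }
}
```

<p>In a real app you would use a proper JSON library with escaping; the point is only that the payload is whatever the app already knows at report time.</p>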

<h2 id="what-this-looks-like-in-practice-critic-at-20month">What This Looks Like in Practice: Critic at $20/Month</h2>

<p>Here’s the difference automatic context capture makes, using a comparison modeled on real-world reports:</p>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>Email Bug Report</th>
      <th>Report with Automatic Telemetry</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>User says</strong></td>
      <td>“The app crashed”</td>
      <td>“The app crashed”</td>
    </tr>
    <tr>
      <td><strong>Device model</strong></td>
      <td>Unknown</td>
      <td>Tecno Spark 10</td>
    </tr>
    <tr>
      <td><strong>OS version</strong></td>
      <td>Unknown</td>
      <td>Android 12, Build SP1A</td>
    </tr>
    <tr>
      <td><strong>Available RAM</strong></td>
      <td>Unknown</td>
      <td>512MB free / 2GB total</td>
    </tr>
    <tr>
      <td><strong>Network</strong></td>
      <td>Unknown</td>
      <td>Cellular, 3G</td>
    </tr>
    <tr>
      <td><strong>Battery</strong></td>
      <td>Unknown</td>
      <td>23%, not charging</td>
    </tr>
    <tr>
      <td><strong>Disk space</strong></td>
      <td>Unknown</td>
      <td>1.2GB free / 32GB</td>
    </tr>
    <tr>
      <td><strong>Console logs</strong></td>
      <td>None</td>
      <td>Last 500 logcat entries</td>
    </tr>
    <tr>
      <td><strong>App version</strong></td>
      <td>“The latest, I think”</td>
      <td>2.4.1 (build 847)</td>
    </tr>
    <tr>
      <td><strong>Time to reproduce</strong></td>
      <td>Hours to days (if ever)</td>
      <td>Minutes</td>
    </tr>
  </tbody>
</table>

<p>The left column is what that developer from the Medium post-mortem had for weeks. The right column is what would have pointed him to low-RAM budget phones on day one.</p>

<p><a href="https://critictracking.com/">Critic</a> is the in-app feedback tool that produces the right column. One line of code initializes the SDK. Shake-to-report works out of the box with a built-in UI; no configuration, no custom views. Every report captures battery status, memory metrics, disk space, network connectivity, OS version, device hardware info, and up to 500 lines of console logs automatically.</p>

<p>The SDK is lightweight by design: approximately 1,600 lines of Java on Android, minimal dependencies, no background monitoring or passive data collection. It captures context only when a user initiates a report.</p>

<p>For app-specific context, developers can attach arbitrary JSON metadata to every report: user IDs, feature flags, session state, subscription tier, anything the app knows. A full <a href="https://critictracking.com/getting-started/">REST API</a> exposes everything the web dashboard does, so teams can build custom feedback UIs or push reports into any project management tool.</p>

<p>SDKs cover iOS, Android, Flutter, and JavaScript. One dashboard for all platforms. $20/month per app, no seat limits, no feature-gating.</p>

<h3 id="what-critic-replaces-and-what-it-doesnt">What Critic replaces (and what it doesn’t)</h3>

<p>Critic is a user-initiated feedback tool, not a crash reporter. It complements Crashlytics or Sentry by capturing the reports that crash tools structurally miss: UX bugs, “this flow is confusing” feedback, “the button did nothing” reports that never throw an exception.</p>

<p>It’s also deliberately not an enterprise observability platform. No session replay, no AI-powered triage, no performance monitoring. As competitors have pivoted toward enterprise AI observability with opaque pricing and sprawling feature sets, Critic has stayed focused on the core feedback loop (user shakes phone, describes problem, device context arrives automatically) at a price indie developers and small teams can actually pay.</p>

<p>The $20/month <a href="https://critictracking.com/">Critic</a> + free Firebase Crashlytics combination gives a small team a complete feedback pipeline, covering both crash reporting <em>and</em> user-initiated reports with full device context, for under $25/month total.</p>

<h2 id="from-cant-reproduce-to-fixed">From “Can’t Reproduce” to Fixed</h2>

<p>The downstream effects of automatic context capture add up fast.</p>

<p><strong>More reports, more visibility.</strong> When reporting takes one shake and a sentence instead of an email composed from memory, more users report. That increased volume means more visibility into real-world issues, including the device-specific bugs that only surface on hardware your team doesn’t own.</p>

<p><strong>Faster fixes.</strong> In-app SDKs <a href="https://aqua-cloud.io/bug-reporting-mobile-apps-best-practices/">reduce resolution time by up to 40%</a> compared to manual reporting, according to Aqua Cloud. That’s the difference between a three-hour investigation and a twenty-minute fix. Multiply that across every bug report in a sprint, and the time savings are substantial.</p>

<p><strong>Fewer dead-end tickets.</strong> Fewer “cannot reproduce” resolutions means users see their bugs actually get fixed. This builds trust and keeps them reporting instead of silently churning (or worse, heading to the App Store).</p>

<p><strong>Reviews intercepted before they go public.</strong> Feedback captured inside the app stays inside the app. Since Apple began rolling out <a href="https://www.macrumors.com/2025/03/06/ios-18-4-ai-review-summaries-app-store/">AI-generated review summaries on App Store product pages</a>, a single unresolved bug can echo far beyond the original reviewer. Giving users a frictionless way to report problems in-app reduces the likelihood that frustration becomes a permanent public review.</p>

<p><strong>Bugs on budget devices get fixed.</strong> Automatic telemetry ensures that users running a Tecno Spark 10 with 2GB of RAM have their environment documented, rather than lost in a two-word email that nobody can act on. Those users deserve working software too, and now their bug reports arrive with the same rich context as everyone else’s.</p>

<hr />

<p>The bug is reproducible. It’s just unreproducible without context.</p>

<p>Your users have the bugs. They’ll even tell you about them, if you make it easy enough. But they will never tell you their device model, OS version, and available RAM. The tool has to do that.</p>

<p><a href="https://critictracking.com/">Critic</a> does it for one line of code and $20/month per app. Start a free 30-day trial (no credit card required). Your first report with full device telemetry arrives in minutes.</p>]]></content><author><name></name></author><category term="posts" /><summary type="html"><![CDATA[Your user's crash is reproducible, just not on your phone. Learn why mobile bugs vanish during testing and how automatic device telemetry solves it.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://critictracking.com/assets/images/the-bug-you-cant-reproduce-why-mobile-crashes-disappear-on-your-device.webp" /><media:content medium="image" url="https://critictracking.com/assets/images/the-bug-you-cant-reproduce-why-mobile-crashes-disappear-on-your-device.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>