I’ve used (and written) https://mockaton.com for this. It has a browser extension, which downloads all API responses in your flow.
Then you can run Mockaton with those mocks. You’ll have to manually anonymize sensitive parts, though.
Also, you can compile your frontend(s) and copy their assets, so you can deploy a standalone demo server. See the last section of: https://mockaton.com/motivation
Mocks don’t have to be fully static; it supports function mocks, which are HTTP handlers.
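For illustration, assuming a function mock is just a Node-style request handler (check Mockaton’s docs for its exact file-naming and export conventions), a dynamic response could look like:

```javascript
// A function mock as a plain HTTP handler: same route as a static mock,
// but the payload is computed per request instead of read from a JSON file.
// The (req, res) signature is Node's; Mockaton's exact convention may differ.
function userListMock(req, res) {
  res.setHeader('Content-Type', 'application/json')
  res.end(JSON.stringify({
    users: [{ id: 1, name: 'Demo User' }],
    generatedAt: new Date().toISOString(), // dynamic per request
  }))
}
```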
For demoing, the dashboard has a feature for bulk-selecting mocks by a comment tag.
Thanks for the Mockaton suggestion! I like the API mocking approach - that handles the backend data cleanly.
The challenge I kept running into was the frontend side during live screen shares. Even with mocked APIs, I'd have credentials visible in browser tabs, notifications popping up with client names, or sidebar elements showing sensitive info.
Did you find Mockaton solved the full screen-share exposure problem, or did you combine it with other approaches?
I’d need more details, but here are a few guesses:
1. If the frontend is directly fetching from a third-party API, maybe you could add an env var with the base URL so it points to the mock server.
2. If it’s a third-party auth service
2a. If the auth service sets a cookie with a JWT, you could inject that cookie with Mockaton like this: https://github.com/ericfortis/mockaton/blob/354d97d6ea42088b...
2b. If it doesn't set a cookie (some SSO providers set it in `sessionStorage`), and assuming it’s a React app with an <AuthProvider>, you might need to refactor the entry component (<App/>) so you can bypass it. e.g.:
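A sketch of that conditional wrapping, together with the fallback hook described next. The SKIP_AUTH flag and <AuthProvider>/<MyApp/> names are from the comment; the SDK import names and the mockedAuth shape are assumptions:

```jsx
import React from 'react'
import { createRoot } from 'react-dom/client'
// Hypothetical third-party SDK exports; your SSO library's names will differ.
import { AuthProvider, AuthContext } from 'vendor-sso-sdk'
import { MyApp } from './MyApp'

const SKIP_AUTH = process.env.SKIP_AUTH // env var, baked in at build time

createRoot(document.getElementById('root')).render(
  SKIP_AUTH
    ? <MyApp/>
    : <AuthProvider><MyApp/></AuthProvider>
)

// Custom hook: reads the vendor context directly and falls back to a
// mocked object when no AuthProvider is mounted (i.e., SKIP_AUTH mode).
const mockedAuth = { isAuthenticated: true, user: { name: 'Demo User' } }

export function useAuth() {
  return React.useContext(AuthContext) ?? mockedAuth
}
```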
Then, instead of using the third-party hook directly (e.g., useAuth), create a custom hook that falls back to a mocked object when there's no AuthContext.

Surely a real product with fake data is the answer. Fake data doesn’t necessarily mean “silly” data, e.g. names like Larry Llama or myfaketestwebsite.com. There’s no reason why your real product running on sandboxed data should hurt your credibility.
I thought this too initially - "just make the fake data look professional."
Where it broke down for me: investors with technical backgrounds would ask edge case questions ("show me how this handles 10K records" or "what does error handling look like with real load?"). The fake environment couldn't simulate that complexity authentically.
The other issue was muscle memory. When I'm demoing something I use daily, I'm fast and fluent. In a fake environment, I'd hesitate or click wrong because it's not my real workflow. Investors noticed.
Have you found ways around those issues?
At my last job they had an entire section of the product - hidden behind feature flags and enabled only in development - whose sole purpose was to generate dummy content for the different sections of the product. When I left, they were extending it with AI integration to generate more realistic data. In prod, the overall product contained massive amounts of PII, so we needed a way to generate realistic-looking dummy data at scale to stress test it in dev.
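That flag-gated generator pattern might be sketched like this (all names here are illustrative, not from the actual product):

```javascript
// Dummy-content generator compiled into the app but gated behind a
// development-only feature flag, as described above.
function generateDummyClients(n) {
  return Array.from({ length: n }, (_, i) => ({
    id: i + 1,
    name: `Client ${i + 1}`,               // swap in faker-style names
    email: `client${i + 1}@example.test`,  // .test TLD never resolves
  }))
}

function seedDemoData(n, dummyToolsFlag) {
  if (!dummyToolsFlag) throw new Error('dummy-data tools are disabled')
  return generateDummyClients(n)
}
```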
This is interesting.
How much overhead did that add to your development workflow? I'm curious if building and maintaining that parallel demo infrastructure became its own project, or if it stayed lightweight.
Also, did you use this for investor demos specifically, or more for development/QA?