Hi HN,
I have a confession: I’m not a developer.

Like many people here, I’ve been blown away by Cursor, Bolt, and Codex. But as a non-technical person, I quickly hit a wall. It wasn't that the AI couldn't code—it was that I didn't know how to describe what I wanted.

I would give a 1-sentence prompt, get a broken app, and then get stuck in a "bug-fixing loop" because I hadn't defined the logic, the database schema, or the edge cases properly. I had the vision, but I lacked the "Technical Grammar" to communicate it.
I built https://ideaforge.chat to solve my own problem.
It acts as the "Technical Co-founder" or "Product Manager" I didn't have. Instead of me struggling to write a prompt, the tool interviews me. It asks the questions I didn't know I should be asking (e.g., "How should we handle session persistence?" or "What's the data relationship between X and Y?").
How it works for me:
I chat with IdeaForge about my "napkin sketch" idea.
It grills me on the details until the logic is watertight.
It generates a structured Markdown specification (trimmed example below).
I paste that spec into Cursor/Codex.
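To make that concrete, here is a trimmed, hypothetical excerpt of the kind of spec step 3 produces; the project, headings, and details are made up for illustration and aren't the exact template:

    # Project: Habit Tracker MVP

    ## Core Entities
    - User (id, email, created_at)
    - Habit (id, user_id, name, weekly_target)
    - CheckIn (id, habit_id, created_at)

    ## Key Flows
    1. User signs up with a magic-link email (no passwords).
    2. User creates a habit and picks a weekly target.
    3. A check-in is recorded at most once per habit per day; duplicates are rejected.

    ## Edge Cases
    - Check-ins are stored in UTC and displayed in the user's local timezone.
    - Deleting a habit soft-deletes its check-ins so history can be restored.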
For the first time, I’m actually building tools that work on the first try. I’m sharing this today because I think there are many other "dreamers" who are just one clear specification away from their first functional MVP.
I’d love to get the perspective of the experienced engineers here: Does the output look like something you’d actually want to receive as a dev spec?
Thanks for letting me share!
I’m excited! Almost everyone here is looking forward to big future payouts from gigs cleaning up emergency slop outages that take out a critical line of business.
I use a custom GPT to interview me about requirements and then produce a spec (https://chatgpt.com/g/g-69724a25850c8191a6f16a519b5ae055-sof...).
I use another GPT to turn that spec into a development plan for Codex that I include in AGENTS.md. (https://chatgpt.com/g/g-698a6ee58aec8191ba1e3b520b13b5e7-dev...)
I'm curious what advantages this product offers vs. using a prompt?
Great question, @ravroid. Custom GPTs are fantastic for general workflows, but IdeaForge is built for a more focused 'one-shot' success rate. Here’s why I think it’s worth a try over a generic prompt/GPT:
Guided Extraction vs. Open Chat: Custom GPTs can sometimes drift or get 'lazy.' IdeaForge uses a specific Socratic interview logic designed to pull out the 'unknown unknowns' (like edge cases and data relationships) that users often forget to prompt.
Optimized for Codex Context: The output isn't just a summary; it’s a structured specification specifically formatted to minimize hallucinations when pasted into Cursor or Codex.
Zero Context Switching: Instead of jumping between an 'interview GPT' and a 'dev plan GPT,' IdeaForge handles the entire pipeline in one specialized UI, ensuring the logic remains consistent from idea to spec.
Lower Barrier: No ChatGPT Plus subscription required for your end-users to get high-quality technical specs.
I'd love for you to run one of your existing ideas through IdeaForge and let me know if the resulting MD is more 'executable' than your current workflow.
I'm probably not the target audience, but it's an intriguing idea for a product.
However, you have no privacy policy or about page. I don't think I'd want to use a remote tool without one; otherwise, how do I know you're not going to run away with my idea?
Thank you for the reality check! You’re absolutely right. As a solo creator, I initially focused on the core logic, but trust is the most important feature. I’ve just added a formal Privacy Policy and Terms.
I thought this response was a piss-take, but it's actually OP.
I have nothing against AI-coded projects, but please do the bare minimum of filtering when interacting with people.
It feels weird to be talking directly to an LLM.
Great idea, and some existing tools already cover it somewhat, e.g. GSD or spec-kit.
https://github.com/gsd-build/get-shit-done
https://github.com/github/spec-kit
Thanks for pointing those out! GSD and spec-kit are great tools. IdeaForge tries to focus more on the pre-development interview phase—helping non-technical founders or solo devs clarify the logic before they even look at a repo.
I started to do this as well. Instead of prompting "build x", I now prompt: "What questions do you need to ask me in order to have all the information you need to build x?" It usually writes a long list of questions which, once answered, make the thing a lot clearer for the AI to work on.

If you don't do this, LLMs tend to make a lot of assumptions on their own without telling you.

It works for pretty much anything. Like medical advice: "What questions do you need to ask to diagnose my headache?" Or practical help: "What questions do you need to ask to help me hang this mirror on the wall?"
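In code terms, that two-step pattern is roughly the following. This is a rough sketch that assumes the official openai Python client; the model name, prompts, and example idea are illustrative, not anything from the thread:

    # Sketch of the "ask me what you need to know first" pattern.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        # Single chat-completion call; the model name is an assumption.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    idea = "a tool to track shared household chores"

    # Step 1: ask the model which questions it needs answered before building.
    questions = ask(
        f"What questions do you need to ask me in order to have all the "
        f"information you need to build {idea}? Only list the questions."
    )
    print(questions)

    # Step 2: answer those questions yourself, then hand everything back.
    answers = input("Your answers: ")
    spec = ask(
        f"Write a build specification for {idea}.\n\n"
        f"Questions you asked:\n{questions}\n\nMy answers:\n{answers}"
    )
    print(spec)

The point is simply that nothing gets generated until the model has surfaced its own questions and had them answered.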
Spot on. The 'Ask me what questions you need' prompt is the secret sauce. IdeaForge essentially automates and refines that process so users don't have to figure out what questions to ask the AI. It's about reducing the cognitive load for the user.
I gave up at step 5. It's taking too long.
Sorry to hear you gave up! That’s really helpful feedback. The interview is thorough because I wanted to ensure a 'one-shot' code generation, but I clearly need to work on the UX and maybe a 'Fast Track' mode. I'll be looking into how to make the steps feel less like a chore. Thanks for being honest!