How I Built a Complex AI Agent With No Code (And What I Learnt the Hard Way)
- Chris Burgess
- Nov 24
- 9 min read
Updated: Nov 25
How a simple idea to build an AI agent with no code became a twenty-plus node agent and my crash course in LLMs, debugging, and token economics

I started with a clear goal. I wanted to interpret and summarise scattered data. Since scattered data is a problem many AI startups are trying to solve, using AI felt like the right solution. However, I am not technical, so I set out to build an AI agent with no code.
I like to think of myself as comfortable with ambiguity because working in product requires it, but I also value simplicity when I am building things. I lightly explored different tools such as Replit, Lovable and Vercel, but I could quickly see how MindStudio could help me reach my goal.
It uses a node system, which means you build the agent by visually connecting small blocks together. Each block has a single job, and you tell it what that job is using normal language rather than code. That structure makes it easy to see how everything fits together and how data moves from one step to the next.
It felt like the right tool for a product manager who understands the logic of systems but does not want to spend their time debugging issues. What I did not fully appreciate was how much complexity sits underneath that simple surface. More than sixty hours later, with more than twenty-seven nodes created and a need to start vibe coding, it became clear that this was anything but simple.
Here is what I learnt.
Lesson 1: Document as you go
I found that because I was building something that relied on logic and constant debugging, I needed to document as I went. It proved to be the simplest habit that saved me the most time. Every time I added a block, changed a setting, tweaked a prompt or reran the workflow, I updated a Google Doc that mirrored the structure of the agent. I used titles and headers that matched the block names in MindStudio and collapsed sections so I could jump around quickly.
Because I broke the agent into small, single-purpose tasks, I grouped those tasks into sections based on their function, which I reflected in the structure of my master document. One group handled cleaning and normalising data, another discovered themes, another assigned rows, and another generated the final outputs. That separation of responsibilities turned out to be essential. When something broke later in the flow, it was usually because an earlier block had not been explicit enough.
As the workflow grew, that document became ever more essential. What started as three CSVs, one Generate Text block, and one Generate Asset block turned into a flow of 21 blocks, at one stage ballooning to more than 30. Each time a block became overloaded with instructions, I had to split it in two. Then three. Then four. That pattern repeated throughout the build. I was learning how not to overwhelm the agent, and how to make sure I could troubleshoot issues when they appeared.
If I had not been documenting those changes in real time, I would never have been able to track what had changed, what had broken or why something that worked yesterday had stopped working today. It is one of the simplest lessons, but easily the one you appreciate most when things get complicated. Document while you build, not after.
Lesson 2: Treat AI like a person, not an omnipotent machine
Working closely with LLMs during this build reinforced something I heard a lot but only truly understood once I experienced it: LLMs are unreliable in very human ways. If you leave any room for interpretation, they will improvise. They do not tell you how they’ve improvised, which makes the debugging process a puzzle you did not ask for. They will drop fields, rename things, reorder outputs, or switch formats entirely, even when the instructions feel obvious.
That surfaced early during theme discovery. I asked the model to identify patterns in customer notes. It understood the intent, but without a strict structure it returned data that lacked any substance. It looked correct at a glance, but when I actually read through it, it really wasn’t useful.
The overarching lesson for me was that you have to be as explicit as possible about the output you’re expecting; you really have to spell it out. It is no different from working with a product team: in both contexts, stating what is out of scope is as important as stating what is in scope.
Treat AI like a person with no context, because unless you define everything explicitly, it will improvise in ways that cost you time and money. LLMs don’t guess well; they fill gaps badly.
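To make that concrete, this is roughly the shape of the output contract I learnt to spell out for theme discovery. The field names here are illustrative examples for this post, not the exact ones from my agent, but the principle is the same: define every field, its purpose, and its limits.

```javascript
// Hypothetical example of an explicit output contract for theme discovery.
// Field names are illustrative, not the exact ones my agent uses.
const expectedThemeOutput = [
  {
    theme_id: "T1",                                   // stable ID so later blocks can reference it
    name: "Slow onboarding",                          // short label, five words or fewer
    summary: "Customers report setup taking weeks.",  // one sentence grounded in the notes
    evidence_row_ids: [12, 48],                       // rows that support the theme; no invented rows
  },
];

console.log(JSON.stringify(expectedThemeOutput, null, 2));
```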
Lesson 3: Data costs you actual money
There was a point where the agent kept breaking in ways that were both unpredictable and expensive. Nothing in the flow pointed to why. After a lot of debugging, I finally realised I was trying to push around six hundred rows through a model with a four-thousand-token context limit. That limit was quietly working against me.
I am a bit embarrassed to admit this, but until then “tokens” were something I knew existed without ever really thinking about how they worked. It was only when the workflow collapsed that I understood what they meant in practice.
What caught me out was working out how my data translated into tokens. It was not one token per row; it depended entirely on how much text lived inside each row. A short note barely registered, while a detailed customer insight consumed far more. I did not want to limit users to a fixed number of rows, but if I did, I needed to explain the implications clearly. And to do that, I had to understand tokens in a practical way.
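The mental model I ended up with looked something like the sketch below. It uses the common rule of thumb of roughly four characters per token; real tokenisers vary, so this is an estimate, not an exact count.

```javascript
// Rough sketch of how I started estimating token usage per row.
// "About four characters per token" is a rule of thumb, not an exact figure.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const shortNote = "Login fails on mobile";
const detailedInsight =
  "Customer spent three weeks in onboarding because the import tool kept " +
  "rejecting their CSV headers, and support could not explain why.";

console.log(estimateTokens(shortNote));       // roughly 6 tokens
console.log(estimateTokens(detailedInsight)); // roughly 34 tokens

// Multiply that across six hundred rows and a small context window disappears fast.
```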
Using Claude 3.5 Sonnet throttled the amount of data I could send through the model, so I explored other options. Upgrading to Claude 3.7 Sonnet gave me a much larger 128k window, and the issue disappeared immediately. But it was expensive. Each full run cost more than a dollar, which made no sense for a test dataset of just six hundred rows across three CSVs. If this was going to scale, it had to do so sustainably.
Lesson 4: Choose the right solution for the right problem
This raised a more important question. If a task does not require interpretation, why was I using AI at all? I was trying to normalise the CSVs so the data was consistent and easier for the next block in the flow to work with. Because MindStudio sits in the no-code space, it naturally guides you toward using AI blocks. That meant I was leaning on a model for work that did not genuinely need one.
A software engineer friend suggested using Python. Once he said it, it made complete sense, but because I had been thinking entirely in prompts up to that point, coding had not entered my mind.
I tried moving the heavy lifting into Python. This meant creating a new flow and adding a few new blocks. Inside MindStudio it never ran reliably. I kept getting errors that looked like caching issues, where the platform reused an old result instead of the updated version, even after updating the configuration that should have cleared the issue. Each run still cost money, so I could not keep guessing. ChatGPT eventually suggested that the issue might be related to the way Python was being executed inside the platform and recommended trying JavaScript instead. I rebuilt the block in JavaScript and everything ran smoothly on the first attempt.
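To give a sense of what that block did, here is a minimal sketch of the kind of header normalisation I moved out of the LLM and into code. The header names and mappings are illustrative examples rather than my exact ones, and the real block plugs into MindStudio’s variables instead of running standalone.

```javascript
// A minimal sketch of deterministic header normalisation done in code rather than
// with an LLM. Header names and mappings are illustrative, not my exact ones.
const HEADER_MAP = {
  "cust name": "customer_name",
  "customer": "customer_name",
  "note": "notes",
  "comments": "notes",
};

function normaliseHeader(header) {
  const cleaned = header.trim().toLowerCase();
  return HEADER_MAP[cleaned] || cleaned.replace(/\s+/g, "_");
}

function normaliseRows(rows) {
  // rows: array of objects parsed from a CSV, e.g. [{ "Cust Name": "Acme", "Note": "..." }]
  return rows.map((row) =>
    Object.fromEntries(
      Object.entries(row).map(([key, value]) => [normaliseHeader(key), String(value).trim()])
    )
  );
}

console.log(normaliseRows([{ "Cust Name": " Acme ", "Note": "Slow onboarding" }]));
// -> [ { customer_name: 'Acme', notes: 'Slow onboarding' } ]
```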
At the same time, I tested a range of models: Claude Sonnet compared with Haiku, Gemini Flash compared with Gemini Pro, and GPT-5 compared with GPT-5.1. I also looked into mixing providers. In the end I kept everything within one provider, which kept the data handling simpler and easier to reason about. The comparison exercise reinforced the same idea from a different angle: the best choice is usually the simplest tool that does the job without introducing new instability.
I learnt first hand that LLMs should be used for interpretation rather than for simple scanning or data cleanup. It is very easy to default to AI because it is available, but that does not make it the right tool. Choosing the right model and the right approach early in the build prevents a lot of rework.
Once I understood where AI added real value and where it did not, low code became the natural next step.
Lesson 5: Low-code was a step up, but also a natural progression
Shifting from no-code to low-code opened up a whole new set of learnings. I had never really stopped to think about what “low-code” meant in practice, but now I get it.
For a long stretch, no-code was enough. It let me test ideas quickly, move blocks around, and see the workflow as a living system. And because I reached the limits of no-code gradually, I developed a sense of why things were breaking, and what needed to be tightened, constrained, or made explicit.
But eventually I was asking an LLM to do heavy lifting simply because it was already in the flow. That made everything expensive and inefficient, with the added frustration of slightly different outputs on each run.
I worked with Claude to vibe-code my way through it. The problem was that whenever I asked for a change, Claude would enthusiastically rewrite parts of the file I hadn’t asked it to touch. That meant another wasted run and more money spent. Getting AI to QA the code that AI wrote felt like the police policing the police. At first I was lazy and just accepted the output, but once the cost stopped making sense, I started pointing out the issues explicitly. It felt counterintuitive, but it eventually saved me time and frustration.
Low-code became far easier once I understood what no-code couldn’t do. It didn’t replace the foundation of what I had built; it naturally optimised it. And after a few iterations, I found myself able to read the code well enough to spot obvious problems without asking an LLM every time. That shift made the whole workflow more efficient and predictable.
Low-code only made sense once I hit the ceiling of no-code, and by then it felt like a natural progression, not a different discipline.
How the final workflow came together
After enough iterations, the workflow finally settled into something that made structural sense. In the end, it looked more like a proper data pipeline than anything “no code”:
| Task | Code Required? |
|---|---|
| Extract CSVs | ✅ (JavaScript) |
| Normalise headers | ✅ (JavaScript) |
| Combine files | ⚠️ Partial |
| Discover themes | ❌ |
| Link signal IDs | ❌ |
| Assign rows | ❌ |
| Group context | ❌ |
| Compute stats & evidence | ❌ |
| Add trends & confidence | ❌ |
| Generate insights | ❌ |
| Produce final CSVs | ⚠️ Partial |
| Render report | ✅ (HTML + JavaScript) |
Every step depended on the previous one being clean, which meant I could not bundle multiple tasks into a single large block. If I did, the LLM would improvise, collapse the structure, or switch formats entirely. Splitting the workflow into smaller blocks made the system predictable, but it also introduced a real cost. Each block triggered its own LLM call, which meant a new round of token usage. Instead of paying once for a single long prompt, I was paying repeatedly across the entire chain. And whenever something broke upstream, I had to rerun the full workflow from start to finish, multiplying the token spend again.
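To make the “code required” rows in the table concrete, here is a stripped-down sketch of what the final render step looks like conceptually: plain JavaScript assembling the structured insight rows into an HTML report. The field names are illustrative, and the real block runs inside MindStudio rather than standalone.

```javascript
// A stripped-down, illustrative sketch of the final render step: turning structured
// insight rows into an HTML report. Field names are examples, not my agent's exact ones.
function renderReport(insights) {
  const rows = insights
    .map(
      (i) =>
        `<tr><td>${i.theme}</td><td>${i.trend}</td><td>${i.confidence}</td><td>${i.summary}</td></tr>`
    )
    .join("\n      ");

  return `<!DOCTYPE html>
<html>
  <body>
    <h1>Insight Report</h1>
    <table>
      <tr><th>Theme</th><th>Trend</th><th>Confidence</th><th>Summary</th></tr>
      ${rows}
    </table>
  </body>
</html>`;
}

console.log(
  renderReport([
    { theme: "Slow onboarding", trend: "rising", confidence: "high", summary: "Setup takes weeks." },
  ])
);
```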
Final reflections for anyone considering building an AI agent with no code
In the end, I built something I am genuinely proud of because I really learnt what I was doing as I went along. You can upload more than one CSV. It discovers themes, assigns every row, produces clear insights, calculates trends and confidence, and gives you two download buttons and a well-formatted report.
It took longer than expected: more than 60 hours from start to finish. I had to debug a LOT. I moved from no-code to Python to JavaScript and rethought how I used LLMs. But I eventually learnt to work with them by constraining them properly.
If you are a product manager, a founder, or anyone curious about AI tools, here is the honest summary:
What you need
- A logical and patient approach, because you will refine the same prompts many times.
- Clear communication, especially the ability to anticipate misinterpretations early.
- An LLM to help you write code, along with the willingness to scrutinise that code yourself.
What you do not need
- Formal coding expertise. It helps, but it does not stop you from building something real.
- Experience working with LLMs. You learn how they behave by building with them.
- Familiarity with data processing tools. You pick that up naturally once you see where the system breaks.
I came into this as a non‑coder and left feeling like I had built a proper data workflow with multi‑step reasoning, state management, and consistent outputs. Thanks to careful prompt design, I didn’t have to hand‑write the code myself, but troubleshooting was slow and I could not afford to be lazy.
If you are considering trying it, I would say this. Bring patience. Bring curiosity. And treat the thing you are building with the same discipline you would bring to any product. The tool rewards that mindset.
If you are leading a startup or scaling a product team and want to build AI agents without code or design workflows that behave reliably, let’s talk. I help founders and product leaders shape clarity out of complex systems. You can also take my one-minute diagnostic to see where you may want to focus your product operations.
