and no, I’m not talking about Vibe Coding…

The LLM as the Assistant

  • today, LLMs are typically being used as support tools for developers
  • the focus is on a Developer driving the build, with an AI (LLM) assisting by suggesting structure, functionality and so forth
  • results are mixed and heavily dependent on which LLM is used, with “the winner” constantly changing
  • developer centricity is the key… today
  • AI (LLM) is just another ‘tool’

LLM as the Prototype

  • LLMs are increasingly supplanting Low-Code/No-Code approaches in rapid prototyping
  • Multi-step flows allow LLMs to generate code and then self-validate before producing the final output
  • an “80%” prototype is produced
  • the Developer then moves into the role of QA/Finisher, fixing the output to comply with standards and the deployment environment
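The generate-and-self-validate loop described above can be sketched as follows. `call_llm` is a hypothetical stand-in for whatever model client is in use; the canned replies just keep the sketch runnable offline:

```python
# Sketch of a generate -> self-review -> fix loop. call_llm is a
# hypothetical placeholder; swap in a real chat-completion client.

def call_llm(prompt: str) -> str:
    # Canned replies so the sketch runs offline; a real client would
    # send the prompt to a model and return its text response.
    if prompt.startswith("Review"):
        return "No issues found."
    return "def quiz(): ..."

def prototype(request: str, max_rounds: int = 3) -> str:
    code = call_llm(f"Write code for: {request}")
    for _ in range(max_rounds):
        review = call_llm(f"Review this code for defects:\n{code}")
        if "no issues" in review.lower():
            break  # the model judges its own draft acceptable
        code = call_llm(f"Fix these issues:\n{review}\n\nCode:\n{code}")
    return code  # the "80%" draft; a Developer finishes the rest
```

In practice the Developer takes this draft as the starting point for the QA/Finisher role.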

Enter Larger Contexts

  • 1M-token contexts are rapidly becoming the default
  • most current code interactions split roughly 1:10, with each token of input generating ~10 tokens of output*
  • the larger context allows more rules, guidance, templates and considerations to be added to a prompt
  • this can include coding standards, deployment targets and more
  • but “LLM AppDevs” don’t need to know all the standards; agentic workflows apply them
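A quick back-of-envelope calculation shows how much room that leaves for guidance. Both numbers here are the assumptions from the bullets above (a 1M-token window, a 1:10 input-to-output split), not properties of any particular model:

```python
# Rough budget arithmetic using the figures above. Both constants are
# assumptions from this post, not properties of any specific model.

CONTEXT_WINDOW = 1_000_000  # tokens, per the "1M token" assumption
OUTPUT_RATIO = 10           # ~10 output tokens per input token

def budget(user_prompt_tokens: int) -> dict:
    expected_output = user_prompt_tokens * OUTPUT_RATIO
    # Whatever remains can hold standards, templates and guidance.
    guidance_room = CONTEXT_WINDOW - user_prompt_tokens - expected_output
    return {"expected_output": expected_output,
            "guidance_room": guidance_room}
```

For example, a 500-token request reserves 5,000 tokens for the expected output and still leaves 994,500 tokens for rules, templates and standards.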

A Possible Future - Structure

  • A single prompt interface for user interaction: “Build a quiz application to ask 20 multiple choice questions about this document (attachment)”
  • The flow…
    1. save the original prompt in a new session
    2. ask an LLM what type of app could answer the request. “You are an expert software engineer. Analyze the following request and answer only what type of application would be the simplest to answer this prompt (user prompt)”
    3. ask an LLM for standards compliance. “You are an expert software engineer. Attached is a set of policies and standards. Analyze them and return a JSON-format set of considerations for (code recommendation) to build (user prompt)”
    4. compose the “build” prompt federating all of the above. “You are an expert software engineer. Write code to meet the following user requirement and write it within the following standards. (user prompt) (code recommendation) (standards)”
    5. validate the output using an MCP server. Submit the output code to see if it builds and, if it doesn’t, what the errors are.
    6. if there are errors, iterate, asking the model - with the full context - to fix them
    7. once it’s ready, return just the code to the user
  • This comprises multiple calls to multiple models and leverages MCP servers for support
  • Time taken is completely dependent on the infrastructure available, but…
    • requirements are in natural language
    • output is specific to the requirements
    • iterations are available in near-real-time
    • the requirements generator sees the output directly during the idea development lifecycle
  • Key elements revolve around
    • using agents to process standards (e.g. reading existing code bases to determine specific practices around apps)
    • using MCP servers to “test deploy” and validate prior to response
    • using internal iterations to ensure validity of response
    • relying upon context to provide libraries and versions
    • targeting the “80% rule” of applications and processes
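The seven-step flow can be sketched end to end. `call_llm` and `mcp_build` are hypothetical stand-ins for a model client and an MCP build/validate server, with canned replies so the sketch runs offline:

```python
# Minimal sketch of the seven-step flow. call_llm and mcp_build are
# hypothetical placeholders for a model client and an MCP build server.

def call_llm(prompt: str) -> str:
    # Canned replies for illustration; a real client calls a model.
    if "what type of application" in prompt:
        return "single-page web app"
    if "policies and standards" in prompt:
        return '{"language": "python", "style": "pep8"}'
    return "print('quiz app')"

def mcp_build(code: str) -> list[str]:
    # A real MCP server would test-build the code and return errors.
    return []  # empty list means the build succeeded

def build_app(user_prompt: str, standards_doc: str,
              max_fix_rounds: int = 3) -> str:
    session = {"original_prompt": user_prompt}          # step 1: persist
    app_type = call_llm(                                # step 2: app type
        "You are an expert software engineer. Analyze the following "
        "request and answer only what type of application would be the "
        f"simplest to answer this prompt: {user_prompt}")
    standards = call_llm(                               # step 3: standards
        "You are an expert software engineer. Attached is a set of "
        f"policies and standards:\n{standards_doc}\nReturn a JSON set of "
        f"considerations for a {app_type} to build: {user_prompt}")
    code = call_llm(                                    # step 4: build
        "You are an expert software engineer. Write code to meet the "
        "following user requirement within the following standards.\n"
        f"{user_prompt}\n{app_type}\n{standards}")
    for _ in range(max_fix_rounds):
        errors = mcp_build(code)                        # step 5: validate
        if not errors:
            break
        code = call_llm(                                # step 6: fix
            f"Fix these build errors:\n{errors}\nCode:\n{code}")
    return code                                         # step 7: return
```

Each step is a separate call, so different steps can be routed to different models, and the fix loop only runs when the MCP validation reports errors.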

Not Quite a Conclusion

  • LLM AppDev is the idea of moving away from Developer-centric work, where AI is a tool, toward the human acting as “Quality Assurance” instead
  • larger contexts open the ability to add more “first rules” into the development lifecycle
  • validation steps provided by LLMs (e.g. code checks) can be added along the way

More to come, maybe even a coded example