Some thoughts on the current state of LLM-assisted coding…
It’s good for kickstarting small projects, or as an intelligent autocomplete tool, but it doesn’t shine as much on complex projects with vague documentation, where tribal knowledge is predominant and there is no straightforward solution. An AI, even in agentic mode, cannot ask your colleague why something is implemented this way and not another. For project-specific bugs, workarounds, and technical debt, you’re unlikely to get much help from an LLM. The context window is still too small to step the game up, leaving you in an infinite loop of prompting for a solution while your LLM guesses and hallucinates.

There are workarounds and hacks, though: extending the context window and tossing your codebase between LLMs can get you more guided, targeted assistance, but it can also exponentially increase the hallucination factor. Cheers!
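As a rough illustration of one such workaround, here is a minimal sketch of naively batching a codebase into chunks that fit a model’s context budget, so each batch can be fed to an LLM in turn. The 4-characters-per-token heuristic and the budget figure are assumptions for the example, not any particular model’s real limits:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text and code.
    # Real tokenizers differ, so treat this as an upper-level estimate only.
    return max(1, len(text) // 4)

def chunk_files(files: dict[str, str], token_budget: int) -> list[list[str]]:
    """Greedily group file names into batches whose combined estimated
    token count stays within token_budget."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for name, source in files.items():
        cost = estimate_tokens(source)
        # Start a new batch when adding this file would blow the budget.
        if current and used + cost > token_budget:
            batches.append(current)
            current, used = [], 0
        current.append(name)
        used += cost
    if current:
        batches.append(current)
    return batches

# Hypothetical codebase: sizes chosen so the split is visible.
files = {
    "app.py": "x" * 4000,     # ~1000 estimated tokens
    "utils.py": "y" * 2000,   # ~500 estimated tokens
    "models.py": "z" * 6000,  # ~1500 estimated tokens
}
print(chunk_files(files, token_budget=1600))
# → [['app.py', 'utils.py'], ['models.py']]
```

Each batch would then be sent as a separate prompt, which is exactly where the hallucination factor creeps in: the model never sees the batches side by side.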