This is an interesting point in software engineering history. On one side, we have the “AI skeptics”. These are engineers who reject the utility of AI. They dismiss tools like ChatGPT as “glorified autocomplete”, scoff at the phrase “prompt engineer”, and see every SaaS built on these APIs as a “GPT-Wrapper”. These engineers are everywhere, from the workplace to online communities like Reddit and TikTok.
I, on the other hand, fall on the other side of the divide. I embrace AI and Large Language Models in everything I do. For me, they’re indispensable for generating code, crafting testing boilerplate, and enhancing my software’s functionality.
Take my project, NexusTrade, for instance. It stands out by offering an AI-Powered Copilot named Aurora, which guides users through developing sophisticated trading strategies.
And yet, without fail, anytime I show off Aurora, the AI Trading Copilot, I’m met with the same criticism.
“You’re building a crappy GPT-Wrapper!”
I mean, seriously? My platform can perform backtests, run genetic optimizations, and deploy trading strategies to the cloud, and it’s a GPT-Wrapper? The entire notion is ridiculous. If my app is a GPT-Wrapper, then Google is just a “Database Wrapper”.