“I just cannot get this AI tool to give me what I am asking for. It does not seem to understand my ask in the first place.” This is a familiar tantrum we hear often, both at home and at the workplace. At college faculty desks, the discussions have shifted from “no holiday for Holi this time” to “can you approve my request to use Claude instead of ChatGPT?” Then comes the reasoning, and the sales pitch for Claude.
If your productivity has not increased tenfold over the last two years, you should probably take a reality check (or an industry check). Large language models make us sound more articulate and surface ideas an individual might otherwise have missed. Many of the emails I received from colleagues in the first twenty years of my corporate life were rife with glaring errors. Those errors have vanished. A good thing. And when several models are used well, the final output is greater than the sum of the outputs of the individual LLMs.
How many prompts do the employees of an organisation submit in a day, a week, or a month? How many of those generate content meant for prospects, customers, and the external world? How many different Generative AI tools are used to produce it? Different LLMs have different strengths and weaknesses; not all of them interpret and generate content in the same way. Does the content from four teams in the organisation sound as if it came from four different companies? If yes, that is a red flag.
My friend says one must not fear technology but try to tame it. One way to tame LLMs is to ensure that the organisation’s vision, mission, and values are included in every prompt an employee submits. With that context, the LLM will align its output with the organisation’s vision, mission, and values. This way, prompts written by the same employee over time, and by employees across the organisation, have a common standard to follow.
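The idea above can be sketched in code: a shared organisational preamble that is prepended to every employee's prompt before it reaches any LLM. This is a minimal, hypothetical sketch; the names (`ORG_CONTEXT`, `build_prompt`) and the placeholder vision, mission, and values text are illustrative assumptions, not any real company's material or any provider's API.

```python
# A minimal sketch: wrap every employee prompt with a shared organisational
# context block so all LLM output follows one standard. Names and the
# placeholder text below are illustrative, not a real organisation's.

ORG_CONTEXT = """\
You are writing on behalf of Acme Corp.
Vision: be the most trusted name in our industry.
Mission: serve customers with clarity and honesty.
Values: clarity, honesty, customer-first tone.
Keep the brand voice consistent with the above in everything you generate."""

def build_prompt(user_prompt: str) -> str:
    """Prepend the organisation-wide context to an individual prompt."""
    return f"{ORG_CONTEXT}\n\n---\n\n{user_prompt}"

if __name__ == "__main__":
    print(build_prompt("Draft a product update email for our customers."))
```

In practice the preamble would live in one central place (a system prompt, a template, or a gateway in front of the LLM provider) so that no employee has to remember to paste it in.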
Introduce a standard Human-in-the-Loop (HITL) review of LLM responses; that will reduce the chances of misalignment. Having done all of that, if a marketing or branding crisis still occurs, make swift corrective decisions to contain the damage. I would not be surprised if one relies on Generative AI tools for the corrective action as well.
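A HITL gate can be as simple as a queue between the model and the outside world: drafts wait for a human reviewer, and only approved drafts are released. The sketch below is a hypothetical illustration of that workflow; the class and method names (`Draft`, `ReviewQueue`, `releasable`) are assumptions, not a real library.

```python
# A minimal sketch of a Human-in-the-Loop (HITL) gate: LLM-generated drafts
# queue up for human review, and only approved drafts may be released.
# All names here are illustrative, not part of any real API.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer_note: str = ""

class ReviewQueue:
    def __init__(self) -> None:
        self._pending: list[Draft] = []

    def submit(self, text: str) -> Draft:
        """Queue an LLM-generated draft for human review."""
        draft = Draft(text)
        self._pending.append(draft)
        return draft

    def approve(self, draft: Draft, note: str = "") -> None:
        """A human reviewer signs off on the draft."""
        draft.approved = True
        draft.reviewer_note = note
        self._pending.remove(draft)

    def releasable(self, draft: Draft) -> bool:
        # Only human-approved content may leave the organisation.
        return draft.approved
```

The point of the design is that `releasable` checks a human decision, not a model score: nothing reaches a prospect or customer without a person having looked at it.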
True, I have a soft corner for LLMs. But if a crisis happens, it is likely that we have not designed the best prompt, have not instructed the LLM enough, and have probably left HITL out of the process. The solution to road accidents is not a ban on driving; it is to introduce sufficient guardrails and controls that promote disciplined driving.
Effective prompting is not just a technical skill; it reflects clarity of purpose and disciplined communication. Every interaction with an LLM extends how we think, plan, and express organisational intent. When employees treat prompts as strategic tools rather than mere text boxes, the result is far more coherent and better aligned with the brand’s voice. The art of prompting is, in essence, the art of direction: the ability to guide intelligence to serve a vision.
LLMs, therefore, are mirrors of our mindset. The better we define our goals, standards, and context, the more accurately these models reflect them. What you prompt is what you reap. In the long run, organisations that respect this principle will not only tame technology but also thrive alongside it.
Best prompting practices must live not only in the prompt window of an LLM provider but also in the everyday programs we write, the applications we develop, and the emails we send.
Disclaimer
Views expressed above are the author’s own.